title | content | commands | url
---|---|---|---|
4.242. pyparted | 4.242. pyparted 4.242.1. RHBA-2011:1641 - pyparted bug fix update An updated pyparted package that fixes a bug is now available for Red Hat Enterprise Linux 6. The pyparted package contains Python bindings for the libparted library. It is primarily used by the Red Hat Enterprise Linux installation software. Bug Fix BZ# 725558 Due to a missing flag in the GPT (GUID Partition Table) disklabel, the anaconda installer terminated with a traceback during the installation of Red Hat Enterprise Linux 6.2. With this update, support for the PARTITION_LEGACY_BOOT flag has been added to the pyparted package. Users of pyparted are advised to upgrade to this updated package, which fixes this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/pyparted |
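The advisory above is about the pyparted Python bindings gaining the PARTITION_LEGACY_BOOT constant. As a minimal sketch of how that flag is driven through the library (assuming a host with the updated pyparted package, root privileges, and a hypothetical GPT-labelled disk at /dev/sdb; none of these values come from the advisory):

```shell
# Minimal sketch, not part of the advisory: exercise the new flag binding.
python <<'EOF'
import parted

device = parted.getDevice("/dev/sdb")      # hypothetical GPT-labelled disk
disk = parted.newDisk(device)              # read the existing disklabel
part = disk.partitions[0]

# PARTITION_LEGACY_BOOT is the flag whose missing binding is described in BZ#725558;
# it is only exposed by pyparted builds that contain this update.
if part.isFlagAvailable(parted.PARTITION_LEGACY_BOOT):
    part.setFlag(parted.PARTITION_LEGACY_BOOT)
    disk.commit()                          # write the change back to the disk
EOF
```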
Chapter 83. tsigkey | Chapter 83. tsigkey This chapter describes the commands under the tsigkey command. 83.1. tsigkey create Create new tsigkey Usage: Table 83.1. Optional Arguments Value Summary -h, --help Show this help message and exit --name NAME Tsigkey name --algorithm ALGORITHM Tsigkey algorithm --secret SECRET Tsigkey secret --scope SCOPE Tsigkey scope --resource-id RESOURCE_ID Tsigkey resource_id --all-projects Show results from all projects. default: false --edit-managed Edit resources marked as managed. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 83.2. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 83.3. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 83.4. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 83.5. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 83.2. tsigkey delete Delete tsigkey Usage: Table 83.6. Positional Arguments Value Summary id Tsigkey id Table 83.7. Optional Arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --edit-managed Edit resources marked as managed. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None 83.3. tsigkey list List tsigkeys Usage: Table 83.8. Optional Arguments Value Summary -h, --help Show this help message and exit --name NAME Tsigkey name --algorithm ALGORITHM Tsigkey algorithm --scope SCOPE Tsigkey scope --all-projects Show results from all projects. default: false --edit-managed Edit resources marked as managed. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 83.9. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 83.10. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 83.11. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 83.12. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 83.4. tsigkey set Set tsigkey properties Usage: Table 83.13. Positional Arguments Value Summary id Tsigkey id Table 83.14.
Optional Arguments Value Summary -h, --help Show this help message and exit --name NAME Tsigkey name --algorithm ALGORITHM Tsigkey algorithm --secret SECRET Tsigkey secret --scope SCOPE Tsigkey scope --all-projects Show results from all projects. default: false --edit-managed Edit resources marked as managed. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 83.15. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 83.16. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 83.17. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 83.18. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 83.5. tsigkey show Show tsigkey details Usage: Table 83.19. Positional Arguments Value Summary id Tsigkey id Table 83.20. Optional Arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --edit-managed Edit resources marked as managed. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 83.21. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 83.22. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 83.23. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 83.24. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack tsigkey create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] --name NAME --algorithm ALGORITHM --secret SECRET --scope SCOPE --resource-id RESOURCE_ID [--all-projects] [--edit-managed] [--sudo-project-id SUDO_PROJECT_ID]",
"openstack tsigkey delete [-h] [--all-projects] [--edit-managed] [--sudo-project-id SUDO_PROJECT_ID] id",
"openstack tsigkey list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--name NAME] [--algorithm ALGORITHM] [--scope SCOPE] [--all-projects] [--edit-managed] [--sudo-project-id SUDO_PROJECT_ID]",
"openstack tsigkey set [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--name NAME] [--algorithm ALGORITHM] [--secret SECRET] [--scope SCOPE] [--all-projects] [--edit-managed] [--sudo-project-id SUDO_PROJECT_ID] id",
"openstack tsigkey show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--all-projects] [--edit-managed] [--sudo-project-id SUDO_PROJECT_ID] id"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/command_line_interface_reference/tsigkey |
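The usage strings above list the accepted options; a concrete invocation can make the required create arguments easier to read. All values below (key name, algorithm, secret, scope, and pool ID) are illustrative placeholders, not values taken from the reference:

```shell
# Create a TSIG key scoped to a pool (placeholder values throughout).
openstack tsigkey create \
  --name example-key \
  --algorithm hmac-sha256 \
  --secret "<base64_secret>" \
  --scope POOL \
  --resource-id <pool_id>

# List keys as JSON, then show one by ID.
openstack tsigkey list -f json
openstack tsigkey show <tsigkey_id>
```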
Chapter 8. Memory | Chapter 8. Memory 8.1. Introduction This chapter covers memory optimization options for virtualized environments. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_tuning_and_optimization_guide/chap-virtualization_tuning_optimization_guide-memory |
Chapter 8. File Systems | Chapter 8. File Systems XFS runtime statistics are available per file system in the /sys/fs/ directory The existing XFS global statistics directory has been moved from the /proc/fs/xfs/ directory to the /sys/fs/xfs/ directory, while a symbolic link at /proc/fs/xfs/stat maintains compatibility with earlier versions. New subdirectories are created and maintained for per-file-system statistics in /sys/fs/xfs/, for example /sys/fs/xfs/sdb7/stats and /sys/fs/xfs/sdb8/stats. Previously, XFS runtime statistics were available only per server. Now, XFS runtime statistics are available per device. (BZ#1205640) XFS supported file-system size has been increased Previously, the supported file-system size for XFS was 100 TB. With this update, the supported file-system size for XFS has been increased to 300 TB. (BZ#1273090) The use_hostname_for_mounts autofs option is now available A new autofs option has been implemented to override the use of an IP address when mounting a host name that has multiple associated addresses. If strict round-robin DNS is needed, the use_hostname_for_mounts option bypasses the usual availability and proximity check, and the host name is used in mount requests regardless of whether it resolves to multiple IP addresses. (BZ#1248798) | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.8_release_notes/new_features_file_systems |
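The statistics change is easiest to see from a shell on a system running the updated kernel. The device name sdb7 is just the example used above, and the exact file name inside each stats directory is an assumption based on the layout described in the note:

```shell
# Old location is kept as a compatibility symlink.
ls -l /proc/fs/xfs/stat

# New sysfs hierarchy: global counters plus one subdirectory per mounted
# XFS device, e.g. /sys/fs/xfs/sdb7/stats for the file system on sdb7.
ls /sys/fs/xfs/
cat /sys/fs/xfs/stats/stats         # global counters (assumed file name)
cat /sys/fs/xfs/sdb7/stats/stats    # per-file-system counters (assumed file name)
```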
Installing on AWS | Installing on AWS OpenShift Container Platform 4.12 Installing OpenShift Container Platform on Amazon Web Services Red Hat OpenShift Documentation Team | [
"platform: aws: region: us-gov-west-1 serviceEndpoints: - name: ec2 url: https://ec2.us-gov-west-1.amazonaws.com - name: elasticloadbalancing url: https://elasticloadbalancing.us-gov-west-1.amazonaws.com - name: route53 url: https://route53.us-gov.amazonaws.com 1 - name: tagging url: https://tagging.us-gov-west-1.amazonaws.com 2",
"compute: - hyperthreading: Enabled name: worker platform: aws: iamRole: ExampleRole",
"controlPlane: hyperthreading: Enabled name: master platform: aws: iamRole: ExampleRole",
"openshift-install create install-config --dir <installation_directory>",
"apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled",
"openshift-install create manifests --dir <installation_directory>",
"openshift-install version",
"release image quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64",
"oc adm release extract quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 --credentials-requests --cloud=aws",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component-credentials-request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: \"*\"",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component-credentials-request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: \"*\" secretRef: name: <component-secret> namespace: <component-namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component-secret> namespace: <component-namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key>",
"grep \"release.openshift.io/feature-set\" *",
"0000_30_capi-operator_00_credentials-request.yaml: release.openshift.io/feature-set: TechPreviewNoUpgrade",
"openshift-install create cluster --dir <installation_directory>",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"apiVersion: v1 baseDomain: example.com compute: - hyperthreading: Enabled name: worker platform: aws: amiID: ami-06c4d345f7c207239 1 type: m5.4xlarge replicas: 3 metadata: name: test-cluster platform: aws: region: us-east-2 2 sshKey: ssh-ed25519 AAAA pullSecret: '{\"auths\": ...}'",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create install-config --dir <installation_directory> 1",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"aws ec2 describe-instance-type-offerings --filters Name=instance-type,Values=c7g.xlarge",
"apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-west-2a - us-west-2b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-west-2c replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-west-2 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 amiID: ami-96c6f8f7 16 serviceEndpoints: 17 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com fips: false 18 sshKey: ssh-ed25519 AAAA... 19 pullSecret: '{\"auths\": ...}' 20",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create install-config --dir <installation_directory> 1",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"aws ec2 describe-instance-type-offerings --filters Name=instance-type,Values=c7g.xlarge",
"apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-west-2a - us-west-2b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-west-2c replicas: 3 metadata: name: test-cluster 12 networking: 13 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 14 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-west-2 15 propagateUserTags: true 16 userTags: adminContact: jdoe costCenter: 7536 amiID: ami-96c6f8f7 17 serviceEndpoints: 18 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com fips: false 19 sshKey: ssh-ed25519 AAAA... 20 pullSecret: '{\"auths\": ...}' 21",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}",
"kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s",
"./openshift-install create manifests --dir <installation_directory> 1",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {}",
"./openshift-install create manifests --dir <installation_directory> 1",
"touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1",
"ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml",
"cluster-ingress-default-ingresscontroller.yaml",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External providerParameters: type: AWS aws: type: NLB type: LoadBalancerService",
"./openshift-install create manifests --dir <installation_directory>",
"cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: EOF",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: hybridOverlayConfig: hybridClusterNetwork: 1 - cidr: 10.132.0.0/14 hostPrefix: 23 hybridOverlayVXLANPort: 9898 2",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"./openshift-install create install-config --dir <installation_directory> 1",
"pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'",
"additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----",
"subnets: - subnet-1 - subnet-2 - subnet-3",
"imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"aws ec2 describe-instance-type-offerings --filters Name=instance-type,Values=c7g.xlarge",
"apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-west-2a - us-west-2b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-west-2c replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-west-2 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-96c6f8f7 17 serviceEndpoints: 18 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 22 additionalTrustBundle: | 23 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- imageContentSources: 24 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create install-config --dir <installation_directory> 1",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"aws ec2 describe-instance-type-offerings --filters Name=instance-type,Values=c7g.xlarge",
"apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-west-2a - us-west-2b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-west-2c replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-west-2 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-96c6f8f7 17 serviceEndpoints: 18 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21 pullSecret: '{\"auths\": ...}' 22",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"mkdir <installation_directory>",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"aws ec2 describe-instance-type-offerings --filters Name=instance-type,Values=c7g.xlarge",
"apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-west-2a - us-west-2b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-west-2c replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-west-2 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-96c6f8f7 17 serviceEndpoints: 18 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21 publish: Internal 22 pullSecret: '{\"auths\": ...}' 23",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"apiVersion: v1 baseDomain: example.com compute: - hyperthreading: Enabled name: worker platform: aws: amiID: ami-06c4d345f7c207239 1 type: m5.4xlarge replicas: 3 metadata: name: test-cluster platform: aws: region: us-east-2 2 sshKey: ssh-ed25519 AAAA pullSecret: '{\"auths\": ...}'",
"tar -xvf openshift-install-linux.tar.gz",
"mkdir <installation_directory>",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"aws ec2 describe-instance-type-offerings --filters Name=instance-type,Values=c7g.xlarge",
"apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-gov-west-1a - us-gov-west-1b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-gov-west-1c replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-gov-west-1 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-96c6f8f7 17 serviceEndpoints: 18 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21 publish: Internal 22 pullSecret: '{\"auths\": ...}' 23",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None",
"export AWS_PROFILE=<aws_profile> 1",
"export AWS_DEFAULT_REGION=<aws_region> 1",
"export RHCOS_VERSION=<version> 1",
"export VMIMPORT_BUCKET_NAME=<s3_bucket_name>",
"cat <<EOF > containers.json { \"Description\": \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\", \"Format\": \"vmdk\", \"UserBucket\": { \"S3Bucket\": \"USD{VMIMPORT_BUCKET_NAME}\", \"S3Key\": \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64.vmdk\" } } EOF",
"aws ec2 import-snapshot --region USD{AWS_DEFAULT_REGION} --description \"<description>\" \\ 1 --disk-container \"file://<file_path>/containers.json\" 2",
"watch -n 5 aws ec2 describe-import-snapshot-tasks --region USD{AWS_DEFAULT_REGION}",
"{ \"ImportSnapshotTasks\": [ { \"Description\": \"rhcos-4.7.0-x86_64-aws.x86_64\", \"ImportTaskId\": \"import-snap-fh6i8uil\", \"SnapshotTaskDetail\": { \"Description\": \"rhcos-4.7.0-x86_64-aws.x86_64\", \"DiskImageSize\": 819056640.0, \"Format\": \"VMDK\", \"SnapshotId\": \"snap-06331325870076318\", \"Status\": \"completed\", \"UserBucket\": { \"S3Bucket\": \"external-images\", \"S3Key\": \"rhcos-4.7.0-x86_64-aws.x86_64.vmdk\" } } } ] }",
"aws ec2 register-image --region USD{AWS_DEFAULT_REGION} --architecture x86_64 \\ 1 --description \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\" \\ 2 --ena-support --name \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\" \\ 3 --virtualization-type hvm --root-device-name '/dev/xvda' --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId=<snapshot_ID>}' 4",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"mkdir <installation_directory>",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"aws ec2 describe-instance-type-offerings --filters Name=instance-type,Values=c7g.xlarge",
"apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-iso-east-1a - us-iso-east-1b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-iso-east-1a - us-iso-east-1b replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-iso-east-1 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-96c6f8f7 17 18 serviceEndpoints: 19 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 20 fips: false 21 sshKey: ssh-ed25519 AAAA... 22 publish: Internal 23 pullSecret: '{\"auths\": ...}' 24 additionalTrustBundle: | 25 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"export AWS_PROFILE=<aws_profile> 1",
"export AWS_DEFAULT_REGION=<aws_region> 1",
"export RHCOS_VERSION=<version> 1",
"export VMIMPORT_BUCKET_NAME=<s3_bucket_name>",
"cat <<EOF > containers.json { \"Description\": \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\", \"Format\": \"vmdk\", \"UserBucket\": { \"S3Bucket\": \"USD{VMIMPORT_BUCKET_NAME}\", \"S3Key\": \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64.vmdk\" } } EOF",
"aws ec2 import-snapshot --region USD{AWS_DEFAULT_REGION} --description \"<description>\" \\ 1 --disk-container \"file://<file_path>/containers.json\" 2",
"watch -n 5 aws ec2 describe-import-snapshot-tasks --region USD{AWS_DEFAULT_REGION}",
"{ \"ImportSnapshotTasks\": [ { \"Description\": \"rhcos-4.7.0-x86_64-aws.x86_64\", \"ImportTaskId\": \"import-snap-fh6i8uil\", \"SnapshotTaskDetail\": { \"Description\": \"rhcos-4.7.0-x86_64-aws.x86_64\", \"DiskImageSize\": 819056640.0, \"Format\": \"VMDK\", \"SnapshotId\": \"snap-06331325870076318\", \"Status\": \"completed\", \"UserBucket\": { \"S3Bucket\": \"external-images\", \"S3Key\": \"rhcos-4.7.0-x86_64-aws.x86_64.vmdk\" } } } ] }",
"aws ec2 register-image --region USD{AWS_DEFAULT_REGION} --architecture x86_64 \\ 1 --description \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\" \\ 2 --ena-support --name \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\" \\ 3 --virtualization-type hvm --root-device-name '/dev/xvda' --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId=<snapshot_ID>}' 4",
"tar -xvf openshift-install-linux.tar.gz",
"mkdir <installation_directory>",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - cn-north-1a - cn-north-1b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - cn-north-1a replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: cn-north-1 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-96c6f8f7 17 18 serviceEndpoints: 19 - name: ec2 url: https://vpce-id.ec2.cn-north-1.vpce.amazonaws.com.cn hostedZone: Z3URY6TWQ91KVV 20 fips: false 21 sshKey: ssh-ed25519 AAAA... 22 publish: Internal 23 pullSecret: '{\"auths\": ...}' 24",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None",
"tar -xvf openshift-install-linux.tar.gz",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"mkdir USDHOME/clusterconfig",
"openshift-install create manifests --dir USDHOME/clusterconfig",
"? SSH Public Key INFO Credentials loaded from the \"myprofile\" profile in file \"/home/myuser/.aws/credentials\" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift",
"ls USDHOME/clusterconfig/openshift/",
"99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml",
"variant: openshift version: 4.12.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_id> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign",
"./openshift-install create install-config --dir <installation_directory> 1",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml",
"rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml",
"rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {}",
"oc adm release extract quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 --credentials-requests --cloud=<platform_name>",
"0000_30_capi-operator_00_credentials-request.yaml: release.openshift.io/feature-set: TechPreviewNoUpgrade",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"jq -r .infraID <installation_directory>/metadata.json 1",
"openshift-vw9j6 1",
"[ { \"ParameterKey\": \"VpcCidr\", 1 \"ParameterValue\": \"10.0.0.0/16\" 2 }, { \"ParameterKey\": \"AvailabilityZoneCount\", 3 \"ParameterValue\": \"1\" 4 }, { \"ParameterKey\": \"SubnetBits\", 5 \"ParameterValue\": \"12\" 6 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice VPC with 1-3 AZs Parameters: VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String AvailabilityZoneCount: ConstraintDescription: \"The number of availability zones. (Min: 1, Max: 3)\" MinValue: 1 MaxValue: 3 Default: 1 Description: \"How many AZs to create VPC subnets for. (Min: 1, Max: 3)\" Type: Number SubnetBits: ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27. MinValue: 5 MaxValue: 13 Default: 12 Description: \"Size of each subnet to create within the availability zones. (Min: 5 = /27, Max: 13 = /19)\" Type: Number Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Network Configuration\" Parameters: - VpcCidr - SubnetBits - Label: default: \"Availability Zones\" Parameters: - AvailabilityZoneCount ParameterLabels: AvailabilityZoneCount: default: \"Availability Zone Count\" VpcCidr: default: \"VPC CIDR\" SubnetBits: default: \"Bits Per Subnet\" Conditions: DoAz3: !Equals [3, !Ref AvailabilityZoneCount] DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3] Resources: VPC: Type: \"AWS::EC2::VPC\" Properties: EnableDnsSupport: \"true\" EnableDnsHostnames: \"true\" CidrBlock: !Ref VpcCidr PublicSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" InternetGateway: Type: \"AWS::EC2::InternetGateway\" GatewayToInternet: Type: \"AWS::EC2::VPCGatewayAttachment\" Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway PublicRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PublicRoute: Type: \"AWS::EC2::Route\" DependsOn: GatewayToInternet Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway PublicSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz2 Properties: SubnetId: !Ref PublicSubnet2 RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation3: Condition: DoAz3 Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet3 RouteTableId: !Ref PublicRouteTable PrivateSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTable NAT: DependsOn: - 
GatewayToInternet Type: \"AWS::EC2::NatGateway\" Properties: AllocationId: \"Fn::GetAtt\": - EIP - AllocationId SubnetId: !Ref PublicSubnet EIP: Type: \"AWS::EC2::EIP\" Properties: Domain: vpc Route: Type: \"AWS::EC2::Route\" Properties: RouteTableId: Ref: PrivateRouteTable DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT PrivateSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable2: Type: \"AWS::EC2::RouteTable\" Condition: DoAz2 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz2 Properties: SubnetId: !Ref PrivateSubnet2 RouteTableId: !Ref PrivateRouteTable2 NAT2: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz2 Properties: AllocationId: \"Fn::GetAtt\": - EIP2 - AllocationId SubnetId: !Ref PublicSubnet2 EIP2: Type: \"AWS::EC2::EIP\" Condition: DoAz2 Properties: Domain: vpc Route2: Type: \"AWS::EC2::Route\" Condition: DoAz2 Properties: RouteTableId: Ref: PrivateRouteTable2 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT2 PrivateSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable3: Type: \"AWS::EC2::RouteTable\" Condition: DoAz3 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation3: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz3 Properties: SubnetId: !Ref PrivateSubnet3 RouteTableId: !Ref PrivateRouteTable3 NAT3: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz3 Properties: AllocationId: \"Fn::GetAtt\": - EIP3 - AllocationId SubnetId: !Ref PublicSubnet3 EIP3: Type: \"AWS::EC2::EIP\" Condition: DoAz3 Properties: Domain: vpc Route3: Type: \"AWS::EC2::Route\" Condition: DoAz3 Properties: RouteTableId: Ref: PrivateRouteTable3 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT3 S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable - !Ref PrivateRouteTable - !If [DoAz2, !Ref PrivateRouteTable2, !Ref \"AWS::NoValue\"] - !If [DoAz3, !Ref PrivateRouteTable3, !Ref \"AWS::NoValue\"] ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VPC Outputs: VpcId: Description: ID of the new VPC. Value: !Ref VPC PublicSubnetIds: Description: Subnet IDs of the public subnets. Value: !Join [ \",\", [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PublicSubnet3, !Ref \"AWS::NoValue\"]] ] PrivateSubnetIds: Description: Subnet IDs of the private subnets. Value: !Join [ \",\", [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PrivateSubnet3, !Ref \"AWS::NoValue\"]] ]",
"aws route53 list-hosted-zones-by-name --dns-name <route53_domain> 1",
"mycluster.example.com. False 100 HOSTEDZONES 65F8F38E-2268-B835-E15C-AB55336FCBFA /hostedzone/Z21IXYZABCZ2A4 mycluster.example.com. 10",
"[ { \"ParameterKey\": \"ClusterName\", 1 \"ParameterValue\": \"mycluster\" 2 }, { \"ParameterKey\": \"InfrastructureName\", 3 \"ParameterValue\": \"mycluster-<random_string>\" 4 }, { \"ParameterKey\": \"HostedZoneId\", 5 \"ParameterValue\": \"<random_string>\" 6 }, { \"ParameterKey\": \"HostedZoneName\", 7 \"ParameterValue\": \"example.com\" 8 }, { \"ParameterKey\": \"PublicSubnets\", 9 \"ParameterValue\": \"subnet-<random_string>\" 10 }, { \"ParameterKey\": \"PrivateSubnets\", 11 \"ParameterValue\": \"subnet-<random_string>\" 12 }, { \"ParameterKey\": \"VpcId\", 13 \"ParameterValue\": \"vpc-<random_string>\" 14 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-dns/cd3e5de0-2fd4-11eb-5cf0-12be5c33a183",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Network Elements (Route53 & LBs) Parameters: ClusterName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Cluster name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, representative cluster name to use for host names and other identifying names. Type: String InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String HostedZoneId: Description: The Route53 public zone ID to register the targets with, such as Z21IXYZABCZ2A4. Type: String HostedZoneName: Description: The Route53 zone to register the targets with, such as example.com. Omit the trailing period. Type: String Default: \"example.com\" PublicSubnets: Description: The internet-facing subnets. Type: List<AWS::EC2::Subnet::Id> PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - ClusterName - InfrastructureName - Label: default: \"Network Configuration\" Parameters: - VpcId - PublicSubnets - PrivateSubnets - Label: default: \"DNS\" Parameters: - HostedZoneName - HostedZoneId ParameterLabels: ClusterName: default: \"Cluster Name\" InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" PublicSubnets: default: \"Public Subnets\" PrivateSubnets: default: \"Private Subnets\" HostedZoneName: default: \"Public Hosted Zone Name\" HostedZoneId: default: \"Public Hosted Zone ID\" Resources: ExtApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join [\"-\", [!Ref InfrastructureName, \"ext\"]] IpAddressType: ipv4 Subnets: !Ref PublicSubnets Type: network IntApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join [\"-\", [!Ref InfrastructureName, \"int\"]] Scheme: internal IpAddressType: ipv4 Subnets: !Ref PrivateSubnets Type: network IntDns: Type: \"AWS::Route53::HostedZone\" Properties: HostedZoneConfig: Comment: \"Managed by CloudFormation\" Name: !Join [\".\", [!Ref ClusterName, !Ref HostedZoneName]] HostedZoneTags: - Key: Name Value: !Join [\"-\", [!Ref InfrastructureName, \"int\"]] - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"owned\" VPCs: - VPCId: !Ref VpcId VPCRegion: !Ref \"AWS::Region\" ExternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref HostedZoneId RecordSets: - Name: !Join [ \".\", [\"api\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt ExtApiElb.CanonicalHostedZoneID DNSName: !GetAtt ExtApiElb.DNSName InternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref IntDns RecordSets: - Name: !Join [ \".\", [\"api\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName - Name: !Join [ \".\", 
[\"api-int\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName ExternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: ExternalApiTargetGroup LoadBalancerArn: Ref: ExtApiElb Port: 6443 Protocol: TCP ExternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/readyz\" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 InternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalApiTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 6443 Protocol: TCP InternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/readyz\" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 InternalServiceInternalListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalServiceTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 22623 Protocol: TCP InternalServiceTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/healthz\" HealthCheckPort: 22623 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 22623 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 RegisterTargetLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join [\"-\", [!Ref InfrastructureName, \"nlb\", \"lambda\", \"role\"]] AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"lambda.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"master\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref InternalApiTargetGroup - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref InternalServiceTargetGroup - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref ExternalApiTargetGroup RegisterNlbIpTargets: Type: \"AWS::Lambda::Function\" Properties: Handler: \"index.handler\" Role: Fn::GetAtt: - \"RegisterTargetLambdaIamRole\" - \"Arn\" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): elb = boto3.client('elbv2') if event['RequestType'] == 'Delete': elb.deregister_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) elif event['RequestType'] == 'Create': elb.register_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) responseData = {} 
cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['TargetArn']+event['ResourceProperties']['TargetIp']) Runtime: \"python3.8\" Timeout: 120 RegisterSubnetTagsLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join [\"-\", [!Ref InfrastructureName, \"subnet-tags-lambda-role\"]] AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"lambda.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"subnet-tagging-policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: [ \"ec2:DeleteTags\", \"ec2:CreateTags\" ] Resource: \"arn:aws:ec2:*:*:subnet/*\" - Effect: \"Allow\" Action: [ \"ec2:DescribeSubnets\", \"ec2:DescribeTags\" ] Resource: \"*\" RegisterSubnetTags: Type: \"AWS::Lambda::Function\" Properties: Handler: \"index.handler\" Role: Fn::GetAtt: - \"RegisterSubnetTagsLambdaIamRole\" - \"Arn\" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): ec2_client = boto3.client('ec2') if event['RequestType'] == 'Delete': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.delete_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName']}]); elif event['RequestType'] == 'Create': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.create_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName'], 'Value': 'shared'}]); responseData = {} cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['InfrastructureName']+event['ResourceProperties']['Subnets'][0]) Runtime: \"python3.8\" Timeout: 120 RegisterPublicSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PublicSubnets RegisterPrivateSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PrivateSubnets Outputs: PrivateHostedZoneId: Description: Hosted zone ID for the private DNS, which is required for private records. Value: !Ref IntDns ExternalApiLoadBalancerName: Description: Full name of the external API load balancer. Value: !GetAtt ExtApiElb.LoadBalancerFullName InternalApiLoadBalancerName: Description: Full name of the internal API load balancer. Value: !GetAtt IntApiElb.LoadBalancerFullName ApiServerDnsName: Description: Full hostname of the API server, which is required for the Ignition config files. Value: !Join [\".\", [\"api-int\", !Ref ClusterName, !Ref HostedZoneName]] RegisterNlbIpTargetsLambda: Description: Lambda ARN useful to help register or deregister IP targets for these load balancers. Value: !GetAtt RegisterNlbIpTargets.Arn ExternalApiTargetGroupArn: Description: ARN of the external API target group. Value: !Ref ExternalApiTargetGroup InternalApiTargetGroupArn: Description: ARN of the internal API target group. Value: !Ref InternalApiTargetGroup InternalServiceTargetGroupArn: Description: ARN of the internal service target group. Value: !Ref InternalServiceTargetGroup",
"Type: CNAME TTL: 10 ResourceRecords: - !GetAtt IntApiElb.DNSName",
"[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"VpcCidr\", 3 \"ParameterValue\": \"10.0.0.0/16\" 4 }, { \"ParameterKey\": \"PrivateSubnets\", 5 \"ParameterValue\": \"subnet-<random_string>\" 6 }, { \"ParameterKey\": \"VpcId\", 7 \"ParameterValue\": \"vpc-<random_string>\" 8 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-sec/03bd4210-2ed7-11eb-6d7a-13fc0b61e9db",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Security Elements (Security Groups & IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Network Configuration\" Parameters: - VpcId - VpcCidr - PrivateSubnets ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" VpcCidr: default: \"VPC CIDR\" PrivateSubnets: default: \"Private Subnets\" Resources: MasterSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Master Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr - IpProtocol: tcp ToPort: 6443 FromPort: 6443 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22623 ToPort: 22623 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId WorkerSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Worker Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId MasterIngressEtcd: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: etcd FromPort: 2379 ToPort: 2380 IpProtocol: tcp MasterIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressWorkerVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressWorkerGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp 
MasterIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressWorkerIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp MasterIngressWorkerIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressWorkerIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressWorkerInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressWorkerInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressWorkerIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId 
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIngressWorkerIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressMasterVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressMasterGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressMasterIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressMasterIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressMasterIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressMasterInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId 
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressMasterInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes secure kubelet port FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal Kubernetes communication FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressMasterIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressMasterIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" Action: - \"sts:AssumeRole\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"master\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: - \"ec2:AttachVolume\" - \"ec2:AuthorizeSecurityGroupIngress\" - \"ec2:CreateSecurityGroup\" - \"ec2:CreateTags\" - \"ec2:CreateVolume\" - \"ec2:DeleteSecurityGroup\" - \"ec2:DeleteVolume\" - \"ec2:Describe*\" - \"ec2:DetachVolume\" - \"ec2:ModifyInstanceAttribute\" - \"ec2:ModifyVolume\" - \"ec2:RevokeSecurityGroupIngress\" - \"elasticloadbalancing:AddTags\" - \"elasticloadbalancing:AttachLoadBalancerToSubnets\" - \"elasticloadbalancing:ApplySecurityGroupsToLoadBalancer\" - \"elasticloadbalancing:CreateListener\" - \"elasticloadbalancing:CreateLoadBalancer\" - \"elasticloadbalancing:CreateLoadBalancerPolicy\" - \"elasticloadbalancing:CreateLoadBalancerListeners\" - \"elasticloadbalancing:CreateTargetGroup\" - \"elasticloadbalancing:ConfigureHealthCheck\" - \"elasticloadbalancing:DeleteListener\" - \"elasticloadbalancing:DeleteLoadBalancer\" - \"elasticloadbalancing:DeleteLoadBalancerListeners\" - \"elasticloadbalancing:DeleteTargetGroup\" - \"elasticloadbalancing:DeregisterInstancesFromLoadBalancer\" - \"elasticloadbalancing:DeregisterTargets\" - \"elasticloadbalancing:Describe*\" - 
\"elasticloadbalancing:DetachLoadBalancerFromSubnets\" - \"elasticloadbalancing:ModifyListener\" - \"elasticloadbalancing:ModifyLoadBalancerAttributes\" - \"elasticloadbalancing:ModifyTargetGroup\" - \"elasticloadbalancing:ModifyTargetGroupAttributes\" - \"elasticloadbalancing:RegisterInstancesWithLoadBalancer\" - \"elasticloadbalancing:RegisterTargets\" - \"elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer\" - \"elasticloadbalancing:SetLoadBalancerPoliciesOfListener\" - \"kms:DescribeKey\" Resource: \"*\" MasterInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Roles: - Ref: \"MasterIamRole\" WorkerIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" Action: - \"sts:AssumeRole\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"worker\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: - \"ec2:DescribeInstances\" - \"ec2:DescribeRegions\" Resource: \"*\" WorkerInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Roles: - Ref: \"WorkerIamRole\" Outputs: MasterSecurityGroupId: Description: Master Security Group ID Value: !GetAtt MasterSecurityGroup.GroupId WorkerSecurityGroupId: Description: Worker Security Group ID Value: !GetAtt WorkerSecurityGroup.GroupId MasterInstanceProfile: Description: Master IAM Instance Profile Value: !Ref MasterInstanceProfile WorkerInstanceProfile: Description: Worker IAM Instance Profile Value: !Ref WorkerInstanceProfile",
"openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.images.aws.regions[\"us-west-1\"].image'",
"ami-0d3e625f84626bbda",
"openshift-install coreos print-stream-json | jq -r '.architectures.aarch64.images.aws.regions[\"us-west-1\"].image'",
"ami-0af1d3b7fa5be2131",
"export AWS_PROFILE=<aws_profile> 1",
"export AWS_DEFAULT_REGION=<aws_region> 1",
"export RHCOS_VERSION=<version> 1",
"export VMIMPORT_BUCKET_NAME=<s3_bucket_name>",
"cat <<EOF > containers.json { \"Description\": \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\", \"Format\": \"vmdk\", \"UserBucket\": { \"S3Bucket\": \"USD{VMIMPORT_BUCKET_NAME}\", \"S3Key\": \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64.vmdk\" } } EOF",
"aws ec2 import-snapshot --region USD{AWS_DEFAULT_REGION} --description \"<description>\" \\ 1 --disk-container \"file://<file_path>/containers.json\" 2",
"watch -n 5 aws ec2 describe-import-snapshot-tasks --region USD{AWS_DEFAULT_REGION}",
"{ \"ImportSnapshotTasks\": [ { \"Description\": \"rhcos-4.7.0-x86_64-aws.x86_64\", \"ImportTaskId\": \"import-snap-fh6i8uil\", \"SnapshotTaskDetail\": { \"Description\": \"rhcos-4.7.0-x86_64-aws.x86_64\", \"DiskImageSize\": 819056640.0, \"Format\": \"VMDK\", \"SnapshotId\": \"snap-06331325870076318\", \"Status\": \"completed\", \"UserBucket\": { \"S3Bucket\": \"external-images\", \"S3Key\": \"rhcos-4.7.0-x86_64-aws.x86_64.vmdk\" } } } ] }",
"aws ec2 register-image --region USD{AWS_DEFAULT_REGION} --architecture x86_64 \\ 1 --description \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\" \\ 2 --ena-support --name \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\" \\ 3 --virtualization-type hvm --root-device-name '/dev/xvda' --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId=<snapshot_ID>}' 4",
"aws s3 mb s3://<cluster-name>-infra 1",
"aws s3 cp <installation_directory>/bootstrap.ign s3://<cluster-name>-infra/bootstrap.ign 1",
"aws s3 ls s3://<cluster-name>-infra/",
"2019-04-03 16:15:16 314878 bootstrap.ign",
"[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"AllowedBootstrapSshCidr\", 5 \"ParameterValue\": \"0.0.0.0/0\" 6 }, { \"ParameterKey\": \"PublicSubnet\", 7 \"ParameterValue\": \"subnet-<random_string>\" 8 }, { \"ParameterKey\": \"MasterSecurityGroupId\", 9 \"ParameterValue\": \"sg-<random_string>\" 10 }, { \"ParameterKey\": \"VpcId\", 11 \"ParameterValue\": \"vpc-<random_string>\" 12 }, { \"ParameterKey\": \"BootstrapIgnitionLocation\", 13 \"ParameterValue\": \"s3://<bucket_name>/bootstrap.ign\" 14 }, { \"ParameterKey\": \"AutoRegisterELB\", 15 \"ParameterValue\": \"yes\" 16 }, { \"ParameterKey\": \"RegisterNlbIpTargetsLambdaArn\", 17 \"ParameterValue\": \"arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>\" 18 }, { \"ParameterKey\": \"ExternalApiTargetGroupArn\", 19 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>\" 20 }, { \"ParameterKey\": \"InternalApiTargetGroupArn\", 21 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 22 }, { \"ParameterKey\": \"InternalServiceTargetGroupArn\", 23 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 24 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-bootstrap/12944486-2add-11eb-9dee-12dace8e3a83",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Bootstrap (EC2 Instance, Security Groups and IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AllowedBootstrapSshCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/([0-9]|1[0-9]|2[0-9]|3[0-2]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/0-32. Default: 0.0.0.0/0 Description: CIDR block to allow SSH access to the bootstrap node. Type: String PublicSubnet: Description: The public subnet to launch the bootstrap node into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID for registering temporary rules. Type: AWS::EC2::SecurityGroup::Id VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id BootstrapIgnitionLocation: Default: s3://my-s3-bucket/bootstrap.ign Description: Ignition config file location. Type: String AutoRegisterELB: Default: \"yes\" AllowedValues: - \"yes\" - \"no\" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. 
Type: String BootstrapInstanceType: Description: Instance type for the bootstrap EC2 instance Default: \"i3.large\" Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - RhcosAmi - BootstrapIgnitionLocation - MasterSecurityGroupId - Label: default: \"Network Configuration\" Parameters: - VpcId - AllowedBootstrapSshCidr - PublicSubnet - Label: default: \"Load Balancer Automation\" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" AllowedBootstrapSshCidr: default: \"Allowed SSH Source\" PublicSubnet: default: \"Public Subnet\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" BootstrapIgnitionLocation: default: \"Bootstrap Ignition Source\" MasterSecurityGroupId: default: \"Master Security Group ID\" AutoRegisterELB: default: \"Use Provided ELB Automation\" Conditions: DoRegistration: !Equals [\"yes\", !Ref AutoRegisterELB] Resources: BootstrapIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"bootstrap\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: \"ec2:Describe*\" Resource: \"*\" - Effect: \"Allow\" Action: \"ec2:AttachVolume\" Resource: \"*\" - Effect: \"Allow\" Action: \"ec2:DetachVolume\" Resource: \"*\" - Effect: \"Allow\" Action: \"s3:GetObject\" Resource: \"*\" BootstrapInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Path: \"/\" Roles: - Ref: \"BootstrapIamRole\" BootstrapSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Bootstrap Security Group SecurityGroupIngress: - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref AllowedBootstrapSshCidr - IpProtocol: tcp ToPort: 19531 FromPort: 19531 CidrIp: 0.0.0.0/0 VpcId: !Ref VpcId BootstrapInstance: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi IamInstanceProfile: !Ref BootstrapInstanceProfile InstanceType: !Ref BootstrapInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"true\" DeviceIndex: \"0\" GroupSet: - !Ref \"BootstrapSecurityGroup\" - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"PublicSubnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"replace\":{\"source\":\"USD{S3Loc}\"}},\"version\":\"3.1.0\"}}' - { S3Loc: !Ref BootstrapIgnitionLocation } RegisterBootstrapApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp Outputs: BootstrapInstanceId: Description: Bootstrap Instance ID. 
Value: !Ref BootstrapInstance BootstrapPublicIp: Description: The bootstrap node public IP address. Value: !GetAtt BootstrapInstance.PublicIp BootstrapPrivateIp: Description: The bootstrap node private IP address. Value: !GetAtt BootstrapInstance.PrivateIp",
"[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"AutoRegisterDNS\", 5 \"ParameterValue\": \"yes\" 6 }, { \"ParameterKey\": \"PrivateHostedZoneId\", 7 \"ParameterValue\": \"<random_string>\" 8 }, { \"ParameterKey\": \"PrivateHostedZoneName\", 9 \"ParameterValue\": \"mycluster.example.com\" 10 }, { \"ParameterKey\": \"Master0Subnet\", 11 \"ParameterValue\": \"subnet-<random_string>\" 12 }, { \"ParameterKey\": \"Master1Subnet\", 13 \"ParameterValue\": \"subnet-<random_string>\" 14 }, { \"ParameterKey\": \"Master2Subnet\", 15 \"ParameterValue\": \"subnet-<random_string>\" 16 }, { \"ParameterKey\": \"MasterSecurityGroupId\", 17 \"ParameterValue\": \"sg-<random_string>\" 18 }, { \"ParameterKey\": \"IgnitionLocation\", 19 \"ParameterValue\": \"https://api-int.<cluster_name>.<domain_name>:22623/config/master\" 20 }, { \"ParameterKey\": \"CertificateAuthorities\", 21 \"ParameterValue\": \"data:text/plain;charset=utf-8;base64,ABC...xYz==\" 22 }, { \"ParameterKey\": \"MasterInstanceProfileName\", 23 \"ParameterValue\": \"<roles_stack>-MasterInstanceProfile-<random_string>\" 24 }, { \"ParameterKey\": \"MasterInstanceType\", 25 \"ParameterValue\": \"\" 26 }, { \"ParameterKey\": \"AutoRegisterELB\", 27 \"ParameterValue\": \"yes\" 28 }, { \"ParameterKey\": \"RegisterNlbIpTargetsLambdaArn\", 29 \"ParameterValue\": \"arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>\" 30 }, { \"ParameterKey\": \"ExternalApiTargetGroupArn\", 31 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>\" 32 }, { \"ParameterKey\": \"InternalApiTargetGroupArn\", 33 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 34 }, { \"ParameterKey\": \"InternalServiceTargetGroupArn\", 35 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 36 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-control-plane/21c7e2b0-2ee2-11eb-c6f6-0aa34627df4b",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 master instances) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AutoRegisterDNS: Default: \"\" Description: unused Type: String PrivateHostedZoneId: Default: \"\" Description: unused Type: String PrivateHostedZoneName: Default: \"\" Description: unused Type: String Master0Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master1Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master2Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID to associate with master nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/master Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String MasterInstanceProfileName: Description: IAM profile to associate with master nodes. Type: String MasterInstanceType: Default: m5.xlarge Type: String AutoRegisterELB: Default: \"yes\" AllowedValues: - \"yes\" - \"no\" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. 
Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - MasterInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - MasterSecurityGroupId - MasterInstanceProfileName - Label: default: \"Network Configuration\" Parameters: - VpcId - AllowedBootstrapSshCidr - Master0Subnet - Master1Subnet - Master2Subnet - Label: default: \"Load Balancer Automation\" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" Master0Subnet: default: \"Master-0 Subnet\" Master1Subnet: default: \"Master-1 Subnet\" Master2Subnet: default: \"Master-2 Subnet\" MasterInstanceType: default: \"Master Instance Type\" MasterInstanceProfileName: default: \"Master Instance Profile Name\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" BootstrapIgnitionLocation: default: \"Master Ignition Source\" CertificateAuthorities: default: \"Ignition CA String\" MasterSecurityGroupId: default: \"Master Security Group ID\" AutoRegisterELB: default: \"Use Provided ELB Automation\" Conditions: DoRegistration: !Equals [\"yes\", !Ref AutoRegisterELB] Resources: Master0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master0Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster0: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp Master1: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master1Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join 
[\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster1: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp Master2: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master2Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster2: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp Outputs: PrivateIPs: Description: The control-plane node private IP addresses. Value: !Join [ \",\", [!GetAtt Master0.PrivateIp, !GetAtt Master1.PrivateIp, !GetAtt Master2.PrivateIp] ]",
"[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"Subnet\", 5 \"ParameterValue\": \"subnet-<random_string>\" 6 }, { \"ParameterKey\": \"WorkerSecurityGroupId\", 7 \"ParameterValue\": \"sg-<random_string>\" 8 }, { \"ParameterKey\": \"IgnitionLocation\", 9 \"ParameterValue\": \"https://api-int.<cluster_name>.<domain_name>:22623/config/worker\" 10 }, { \"ParameterKey\": \"CertificateAuthorities\", 11 \"ParameterValue\": \"\" 12 }, { \"ParameterKey\": \"WorkerInstanceProfileName\", 13 \"ParameterValue\": \"\" 14 }, { \"ParameterKey\": \"WorkerInstanceType\", 15 \"ParameterValue\": \"\" 16 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml \\ 2 --parameters file://<parameters>.json 3",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-worker-1/729ee301-1c2a-11eb-348f-sd9888c65b59",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 worker instance) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id WorkerSecurityGroupId: Description: The master security group ID to associate with master nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/worker Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String WorkerInstanceProfileName: Description: IAM profile to associate with master nodes. Type: String WorkerInstanceType: Default: m5.large Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - WorkerInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - WorkerSecurityGroupId - WorkerInstanceProfileName - Label: default: \"Network Configuration\" Parameters: - Subnet ParameterLabels: Subnet: default: \"Subnet\" InfrastructureName: default: \"Infrastructure Name\" WorkerInstanceType: default: \"Worker Instance Type\" WorkerInstanceProfileName: default: \"Worker Instance Profile Name\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" IgnitionLocation: default: \"Worker Ignition Source\" CertificateAuthorities: default: \"Ignition CA String\" WorkerSecurityGroupId: default: \"Worker Security Group ID\" Resources: Worker0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref WorkerInstanceProfileName InstanceType: !Ref WorkerInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"WorkerSecurityGroupId\" SubnetId: !Ref \"Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" Outputs: PrivateIP: Description: The compute node private IP address. Value: !GetAtt Worker0.PrivateIp",
"./openshift-install wait-for bootstrap-complete --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Waiting up to 20m0s for the Kubernetes API at https://api.mycluster.example.com:6443 INFO API v1.25.0 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources INFO Time elapsed: 1s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.25.0 master-1 Ready master 63m v1.25.0 master-2 Ready master 64m v1.25.0",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.25.0 master-1 Ready master 73m v1.25.0 master-2 Ready master 74m v1.25.0 worker-0 Ready worker 11m v1.25.0 worker-1 Ready worker 11m v1.25.0",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.12.0 True False False 19m baremetal 4.12.0 True False False 37m cloud-credential 4.12.0 True False False 40m cluster-autoscaler 4.12.0 True False False 37m config-operator 4.12.0 True False False 38m console 4.12.0 True False False 26m csi-snapshot-controller 4.12.0 True False False 37m dns 4.12.0 True False False 37m etcd 4.12.0 True False False 36m image-registry 4.12.0 True False False 31m ingress 4.12.0 True False False 30m insights 4.12.0 True False False 31m kube-apiserver 4.12.0 True False False 26m kube-controller-manager 4.12.0 True False False 36m kube-scheduler 4.12.0 True False False 36m kube-storage-version-migrator 4.12.0 True False False 37m machine-api 4.12.0 True False False 29m machine-approver 4.12.0 True False False 37m machine-config 4.12.0 True False False 36m marketplace 4.12.0 True False False 37m monitoring 4.12.0 True False False 29m network 4.12.0 True False False 38m node-tuning 4.12.0 True False False 37m openshift-apiserver 4.12.0 True False False 32m openshift-controller-manager 4.12.0 True False False 30m openshift-samples 4.12.0 True False False 32m operator-lifecycle-manager 4.12.0 True False False 37m operator-lifecycle-manager-catalog 4.12.0 True False False 37m operator-lifecycle-manager-packageserver 4.12.0 True False False 32m service-ca 4.12.0 True False False 38m storage 4.12.0 True False False 37m",
"oc edit configs.imageregistry.operator.openshift.io/cluster",
"storage: s3: bucket: <bucket-name> region: <region-name>",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"aws cloudformation delete-stack --stack-name <name> 1",
"oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes",
"oauth-openshift.apps.<cluster_name>.<domain_name> console-openshift-console.apps.<cluster_name>.<domain_name> downloads-openshift-console.apps.<cluster_name>.<domain_name> alertmanager-main-openshift-monitoring.apps.<cluster_name>.<domain_name> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<domain_name>",
"oc -n openshift-ingress get service router-default",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.62.215 ab3...28.us-east-2.elb.amazonaws.com 80:31499/TCP,443:30693/TCP 5m",
"aws elb describe-load-balancers | jq -r '.LoadBalancerDescriptions[] | select(.DNSName == \"<external_ip>\").CanonicalHostedZoneNameID' 1",
"Z3AADJGX6KTTL2",
"aws route53 list-hosted-zones-by-name --dns-name \"<domain_name>\" \\ 1 --query 'HostedZones[? Config.PrivateZone != `true` && Name == `<domain_name>.`].Id' 2 --output text",
"/hostedzone/Z3URY6TWQ91KVV",
"aws route53 change-resource-record-sets --hosted-zone-id \"<private_hosted_zone_id>\" --change-batch '{ 1 > \"Changes\": [ > { > \"Action\": \"CREATE\", > \"ResourceRecordSet\": { > \"Name\": \"\\\\052.apps.<cluster_domain>\", 2 > \"Type\": \"A\", > \"AliasTarget\":{ > \"HostedZoneId\": \"<hosted_zone_id>\", 3 > \"DNSName\": \"<external_ip>.\", 4 > \"EvaluateTargetHealth\": false > } > } > } > ] > }'",
"aws route53 change-resource-record-sets --hosted-zone-id \"<public_hosted_zone_id>\"\" --change-batch '{ 1 > \"Changes\": [ > { > \"Action\": \"CREATE\", > \"ResourceRecordSet\": { > \"Name\": \"\\\\052.apps.<cluster_domain>\", 2 > \"Type\": \"A\", > \"AliasTarget\":{ > \"HostedZoneId\": \"<hosted_zone_id>\", 3 > \"DNSName\": \"<external_ip>.\", 4 > \"EvaluateTargetHealth\": false > } > } > } > ] > }'",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 40m0s for the cluster at https://api.mycluster.example.com:6443 to initialize INFO Waiting up to 10m0s for the openshift-console route to be created INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 1s",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None",
"export CLUSTER_REGION=\"<region_name>\" 1",
"aws --region USD{CLUSTER_REGION} ec2 describe-availability-zones --query 'AvailabilityZones[].[{ZoneName: ZoneName, GroupName: GroupName, Status: OptInStatus}]' --filters Name=zone-type,Values=local-zone --all-availability-zones",
"export ZONE_GROUP_NAME=\"<value_of_GroupName>\" 1",
"aws ec2 modify-availability-zone-group --group-name \"USD{ZONE_GROUP_NAME}\" --opt-in-status opted-in",
"apiVersion: v1 baseDomain: example.com compute: - hyperthreading: Enabled name: worker platform: aws: amiID: ami-06c4d345f7c207239 1 type: m5.4xlarge replicas: 3 metadata: name: test-cluster platform: aws: region: us-east-2 2 sshKey: ssh-ed25519 AAAA pullSecret: '{\"auths\": ...}'",
"[ { \"ParameterKey\": \"ClusterName\", 1 \"ParameterValue\": \"mycluster\" 2 }, { \"ParameterKey\": \"VpcCidr\", 3 \"ParameterValue\": \"10.0.0.0/16\" 4 }, { \"ParameterKey\": \"AvailabilityZoneCount\", 5 \"ParameterValue\": \"3\" 6 }, { \"ParameterKey\": \"SubnetBits\", 7 \"ParameterValue\": \"12\" 8 } ]",
"aws cloudformation create-stack --stack-name <name> \\ 1 --template-body file://<template>.yaml \\ 2 --parameters file://<parameters>.json 3",
"arn:aws:cloudformation:us-east-1:123456789012:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice VPC with 1-3 AZs Parameters: ClusterName: Type: String Description: ClusterName used to prefix resource names VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String AvailabilityZoneCount: ConstraintDescription: \"The number of availability zones. (Min: 1, Max: 3)\" MinValue: 1 MaxValue: 3 Default: 1 Description: \"How many AZs to create VPC subnets for. (Min: 1, Max: 3)\" Type: Number SubnetBits: ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27. MinValue: 5 MaxValue: 13 Default: 12 Description: \"Size of each subnet to create within the availability zones. (Min: 5 = /27, Max: 13 = /19)\" Type: Number Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Network Configuration\" Parameters: - VpcCidr - SubnetBits - Label: default: \"Availability Zones\" Parameters: - AvailabilityZoneCount ParameterLabels: ClusterName: default: \"\" AvailabilityZoneCount: default: \"Availability Zone Count\" VpcCidr: default: \"VPC CIDR\" SubnetBits: default: \"Bits Per Subnet\" Conditions: DoAz3: !Equals [3, !Ref AvailabilityZoneCount] DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3] Resources: VPC: Type: \"AWS::EC2::VPC\" Properties: EnableDnsSupport: \"true\" EnableDnsHostnames: \"true\" CidrBlock: !Ref VpcCidr Tags: - Key: Name Value: !Join [ \"\", [ !Ref ClusterName, \"-vpc\" ] ] - Key: !Join [ \"\", [ \"kubernetes.io/cluster/unmanaged\" ] ] Value: \"shared\" PublicSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" Tags: - Key: Name Value: !Join [ \"\", [ !Ref ClusterName, \"-public-1\" ] ] PublicSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" Tags: - Key: Name Value: !Join [ \"\", [ !Ref ClusterName, \"-public-2\" ] ] PublicSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" Tags: - Key: Name Value: !Join [ \"\", [ !Ref ClusterName, \"-public-3\" ] ] InternetGateway: Type: \"AWS::EC2::InternetGateway\" Properties: Tags: - Key: Name Value: !Join [ \"\", [ !Ref ClusterName, \"-igw\" ] ] GatewayToInternet: Type: \"AWS::EC2::VPCGatewayAttachment\" Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway PublicRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC Tags: - Key: Name Value: !Join [ \"\", [ !Ref ClusterName, \"-rtb-public\" ] ] PublicRoute: Type: \"AWS::EC2::Route\" DependsOn: GatewayToInternet Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway PublicSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet2 RouteTableId: !Ref PublicRouteTable 
PublicSubnetRouteTableAssociation3: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet3 RouteTableId: !Ref PublicRouteTable PrivateSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" Tags: - Key: Name Value: !Join [ \"\", [ !Ref ClusterName, \"-private-1\" ] ] PrivateRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC Tags: - Key: Name Value: !Join [ \"\", [ !Ref ClusterName, \"-rtb-private-1\" ] ] PrivateSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTable NAT: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Properties: AllocationId: \"Fn::GetAtt\": - EIP - AllocationId SubnetId: !Ref PublicSubnet Tags: - Key: Name Value: !Join [ \"\", [ !Ref ClusterName, \"-natgw-private-1\" ] ] EIP: Type: \"AWS::EC2::EIP\" Properties: Domain: vpc Route: Type: \"AWS::EC2::Route\" Properties: RouteTableId: Ref: PrivateRouteTable DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT PrivateSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" Tags: - Key: Name Value: !Join [ \"\", [ !Ref ClusterName, \"-private-2\" ] ] PrivateRouteTable2: Type: \"AWS::EC2::RouteTable\" Condition: DoAz2 Properties: VpcId: !Ref VPC Tags: - Key: Name Value: !Join [ \"\", [ !Ref ClusterName, \"-rtb-private-2\" ] ] PrivateSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz2 Properties: SubnetId: !Ref PrivateSubnet2 RouteTableId: !Ref PrivateRouteTable2 NAT2: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz2 Properties: AllocationId: \"Fn::GetAtt\": - EIP2 - AllocationId SubnetId: !Ref PublicSubnet2 Tags: - Key: Name Value: !Join [ \"\", [ !Ref ClusterName, \"-natgw-private-2\" ] ] EIP2: Type: \"AWS::EC2::EIP\" Condition: DoAz2 Properties: Domain: vpc Tags: - Key: Name Value: !Join [ \"\", [ !Ref ClusterName, \"-eip-private-2\" ] ] Route2: Type: \"AWS::EC2::Route\" Condition: DoAz2 Properties: RouteTableId: Ref: PrivateRouteTable2 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT2 PrivateSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" Tags: - Key: Name Value: !Join [ \"\", [ !Ref ClusterName, \"-private-3\" ] ] PrivateRouteTable3: Type: \"AWS::EC2::RouteTable\" Condition: DoAz3 Properties: VpcId: !Ref VPC Tags: - Key: Name Value: !Join [ \"\", [ !Ref ClusterName, \"-rtb-private-3\" ] ] PrivateSubnetRouteTableAssociation3: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz3 Properties: SubnetId: !Ref PrivateSubnet3 RouteTableId: !Ref PrivateRouteTable3 NAT3: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz3 Properties: AllocationId: \"Fn::GetAtt\": - EIP3 - AllocationId SubnetId: !Ref PublicSubnet3 Tags: - Key: Name Value: !Join [ \"\", [ !Ref ClusterName, \"-natgw-private-3\" ] ] EIP3: Type: \"AWS::EC2::EIP\" Condition: DoAz3 Properties: Domain: vpc Tags: - Key: Name Value: !Join [ \"\", [ !Ref ClusterName, \"-eip-private-3\" ] ] Route3: Type: \"AWS::EC2::Route\" Condition: DoAz3 Properties: 
RouteTableId: Ref: PrivateRouteTable3 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT3 S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable - !Ref PrivateRouteTable - !If [DoAz2, !Ref PrivateRouteTable2, !Ref \"AWS::NoValue\"] - !If [DoAz3, !Ref PrivateRouteTable3, !Ref \"AWS::NoValue\"] ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VPC Outputs: VpcId: Description: ID of the new VPC. Value: !Ref VPC PublicSubnetIds: Description: Subnet IDs of the public subnets. Value: !Join [ \",\", [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PublicSubnet3, !Ref \"AWS::NoValue\"]] ] PrivateSubnetIds: Description: Subnet IDs of the private subnets. Value: !Join [ \",\", [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PrivateSubnet3, !Ref \"AWS::NoValue\"]] ] PublicRouteTableId: Description: Public Route table ID Value: !Ref PublicRouteTable PrivateRouteTableId: Description: Private Route table ID Value: !Ref PrivateRouteTable",
"[ { \"ParameterKey\": \"ClusterName\", 1 \"ParameterValue\": \"mycluster\" 2 }, { \"ParameterKey\": \"VpcId\", 3 \"ParameterValue\": \"vpc-<random_string>\" 4 }, { \"ParameterKey\": \"PublicRouteTableId\", 5 \"ParameterValue\": \"<vpc_rtb_pub>\" 6 }, { \"ParameterKey\": \"LocalZoneName\", 7 \"ParameterValue\": \"<cluster_region_name>-<location_identifier>-<zone_identifier>\" 8 }, { \"ParameterKey\": \"LocalZoneNameShort\", 9 \"ParameterValue\": \"<lz_zone_shortname>\" 10 }, { \"ParameterKey\": \"PublicSubnetCidr\", 11 \"ParameterValue\": \"10.0.128.0/20\" 12 } ]",
"aws cloudformation create-stack --stack-name <subnet_stack_name> \\ 1 --template-body file://<template>.yaml \\ 2 --parameters file://<parameters>.json 3",
"arn:aws:cloudformation:us-east-1:123456789012:stack/cluster-lz-nyc1/dbedae40-2fd3-11eb-820e-12a48460849f",
"aws cloudformation describe-stacks --stack-name <subnet_stack_name>",
"CloudFormation template used to create Local Zone subnets and dependencies AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice VPC with 1-3 AZs Parameters: ClusterName: Description: ClusterName used to prefix resource names Type: String VpcId: Description: VPC Id Type: String LocalZoneName: Description: Local Zone Name (Example us-east-1-bos-1) Type: String LocalZoneNameShort: Description: Short name for Local Zone used on tag Name (Example bos1) Type: String PublicRouteTableId: Description: Public Route Table ID to associate the Local Zone subnet Type: String PublicSubnetCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.128.0/20 Description: CIDR block for Public Subnet Type: String Resources: PublicSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VpcId CidrBlock: !Ref PublicSubnetCidr AvailabilityZone: !Ref LocalZoneName Tags: - Key: Name Value: !Join - \"\" - [ !Ref ClusterName, \"-public-\", !Ref LocalZoneNameShort, \"-1\" ] - Key: kubernetes.io/cluster/unmanaged Value: \"true\" PublicSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTableId Outputs: PublicSubnetIds: Description: Subnet IDs of the public subnets. Value: !Join [ \"\", [!Ref PublicSubnet] ]",
"tar -xvf openshift-install-linux.tar.gz",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"./openshift-install create install-config --dir <installation_directory> 1",
"platform: aws: subnets: 1 - publicSubnetId-1 - publicSubnetId-2 - publicSubnetId-3 - privateSubnetId-1 - privateSubnetId-2 - privateSubnetId-3",
"./openshift-install create manifests --dir <installation_directory> 1",
"cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: mtu: 1200 EOF",
"cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: mtu: 1250 EOF",
"export LZ_ZONE_NAME=\"<local_zone_name>\" 1",
"aws ec2 describe-instance-type-offerings --location-type availability-zone --filters Name=location,Values=USD{LZ_ZONE_NAME} --region <region> 1",
"export INSTANCE_TYPE=\"<instance_type>\" 1",
"export AMI_ID=USD(grep ami <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-0.yaml | tail -n1 | awk '{printUSD2}')",
"export SUBNET_ID=USD(aws cloudformation describe-stacks --stack-name \"<subnet_stack_name>\" \\ 1 | jq -r '.Stacks[0].Outputs[0].OutputValue')",
"export CLUSTER_ID=\"USD(awk '/infrastructureName: / {print USD2}' <installation_directory>/manifests/cluster-infrastructure-02-config.yml)\"",
"cat <<EOF > <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-nyc1.yaml apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: USD{CLUSTER_ID} name: USD{CLUSTER_ID}-edge-USD{LZ_ZONE_NAME} namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: USD{CLUSTER_ID} machine.openshift.io/cluster-api-machineset: USD{CLUSTER_ID}-edge-USD{LZ_ZONE_NAME} template: metadata: labels: machine.openshift.io/cluster-api-cluster: USD{CLUSTER_ID} machine.openshift.io/cluster-api-machine-role: edge machine.openshift.io/cluster-api-machine-type: edge machine.openshift.io/cluster-api-machineset: USD{CLUSTER_ID}-edge-USD{LZ_ZONE_NAME} spec: metadata: labels: machine.openshift.com/zone-type: local-zone machine.openshift.com/zone-group: USD{ZONE_GROUP_NAME} node-role.kubernetes.io/edge: \"\" taints: - key: node-role.kubernetes.io/edge effect: NoSchedule providerSpec: value: ami: id: USD{AMI_ID} apiVersion: machine.openshift.io/v1beta1 blockDevices: - ebs: volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: USD{CLUSTER_ID}-worker-profile instanceType: USD{INSTANCE_TYPE} kind: AWSMachineProviderConfig placement: availabilityZone: USD{LZ_ZONE_NAME} region: USD{CLUSTER_REGION} securityGroups: - filters: - name: tag:Name values: - USD{CLUSTER_ID}-worker-sg subnet: id: USD{SUBNET_ID} publicIp: true tags: - name: kubernetes.io/cluster/USD{CLUSTER_ID} value: owned userDataSecret: name: worker-user-data EOF",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"mkdir USDHOME/clusterconfig",
"openshift-install create manifests --dir USDHOME/clusterconfig",
"? SSH Public Key INFO Credentials loaded from the \"myprofile\" profile in file \"/home/myuser/.aws/credentials\" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift",
"ls USDHOME/clusterconfig/openshift/",
"99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml",
"variant: openshift version: 4.12.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_id> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign",
"./openshift-install create install-config --dir <installation_directory> 1",
"pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'",
"additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----",
"imageContentSources: - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"publish: Internal",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml",
"rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml",
"rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {}",
"oc adm release extract quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 --credentials-requests --cloud=<platform_name>",
"0000_30_capi-operator_00_credentials-request.yaml: release.openshift.io/feature-set: TechPreviewNoUpgrade",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"jq -r .infraID <installation_directory>/metadata.json 1",
"openshift-vw9j6 1",
"[ { \"ParameterKey\": \"VpcCidr\", 1 \"ParameterValue\": \"10.0.0.0/16\" 2 }, { \"ParameterKey\": \"AvailabilityZoneCount\", 3 \"ParameterValue\": \"1\" 4 }, { \"ParameterKey\": \"SubnetBits\", 5 \"ParameterValue\": \"12\" 6 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice VPC with 1-3 AZs Parameters: VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String AvailabilityZoneCount: ConstraintDescription: \"The number of availability zones. (Min: 1, Max: 3)\" MinValue: 1 MaxValue: 3 Default: 1 Description: \"How many AZs to create VPC subnets for. (Min: 1, Max: 3)\" Type: Number SubnetBits: ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27. MinValue: 5 MaxValue: 13 Default: 12 Description: \"Size of each subnet to create within the availability zones. (Min: 5 = /27, Max: 13 = /19)\" Type: Number Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Network Configuration\" Parameters: - VpcCidr - SubnetBits - Label: default: \"Availability Zones\" Parameters: - AvailabilityZoneCount ParameterLabels: AvailabilityZoneCount: default: \"Availability Zone Count\" VpcCidr: default: \"VPC CIDR\" SubnetBits: default: \"Bits Per Subnet\" Conditions: DoAz3: !Equals [3, !Ref AvailabilityZoneCount] DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3] Resources: VPC: Type: \"AWS::EC2::VPC\" Properties: EnableDnsSupport: \"true\" EnableDnsHostnames: \"true\" CidrBlock: !Ref VpcCidr PublicSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" InternetGateway: Type: \"AWS::EC2::InternetGateway\" GatewayToInternet: Type: \"AWS::EC2::VPCGatewayAttachment\" Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway PublicRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PublicRoute: Type: \"AWS::EC2::Route\" DependsOn: GatewayToInternet Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway PublicSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz2 Properties: SubnetId: !Ref PublicSubnet2 RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation3: Condition: DoAz3 Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet3 RouteTableId: !Ref PublicRouteTable PrivateSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTable NAT: DependsOn: - 
GatewayToInternet Type: \"AWS::EC2::NatGateway\" Properties: AllocationId: \"Fn::GetAtt\": - EIP - AllocationId SubnetId: !Ref PublicSubnet EIP: Type: \"AWS::EC2::EIP\" Properties: Domain: vpc Route: Type: \"AWS::EC2::Route\" Properties: RouteTableId: Ref: PrivateRouteTable DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT PrivateSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable2: Type: \"AWS::EC2::RouteTable\" Condition: DoAz2 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz2 Properties: SubnetId: !Ref PrivateSubnet2 RouteTableId: !Ref PrivateRouteTable2 NAT2: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz2 Properties: AllocationId: \"Fn::GetAtt\": - EIP2 - AllocationId SubnetId: !Ref PublicSubnet2 EIP2: Type: \"AWS::EC2::EIP\" Condition: DoAz2 Properties: Domain: vpc Route2: Type: \"AWS::EC2::Route\" Condition: DoAz2 Properties: RouteTableId: Ref: PrivateRouteTable2 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT2 PrivateSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable3: Type: \"AWS::EC2::RouteTable\" Condition: DoAz3 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation3: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz3 Properties: SubnetId: !Ref PrivateSubnet3 RouteTableId: !Ref PrivateRouteTable3 NAT3: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz3 Properties: AllocationId: \"Fn::GetAtt\": - EIP3 - AllocationId SubnetId: !Ref PublicSubnet3 EIP3: Type: \"AWS::EC2::EIP\" Condition: DoAz3 Properties: Domain: vpc Route3: Type: \"AWS::EC2::Route\" Condition: DoAz3 Properties: RouteTableId: Ref: PrivateRouteTable3 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT3 S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable - !Ref PrivateRouteTable - !If [DoAz2, !Ref PrivateRouteTable2, !Ref \"AWS::NoValue\"] - !If [DoAz3, !Ref PrivateRouteTable3, !Ref \"AWS::NoValue\"] ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VPC Outputs: VpcId: Description: ID of the new VPC. Value: !Ref VPC PublicSubnetIds: Description: Subnet IDs of the public subnets. Value: !Join [ \",\", [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PublicSubnet3, !Ref \"AWS::NoValue\"]] ] PrivateSubnetIds: Description: Subnet IDs of the private subnets. Value: !Join [ \",\", [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PrivateSubnet3, !Ref \"AWS::NoValue\"]] ]",
"aws route53 list-hosted-zones-by-name --dns-name <route53_domain> 1",
"mycluster.example.com. False 100 HOSTEDZONES 65F8F38E-2268-B835-E15C-AB55336FCBFA /hostedzone/Z21IXYZABCZ2A4 mycluster.example.com. 10",
"[ { \"ParameterKey\": \"ClusterName\", 1 \"ParameterValue\": \"mycluster\" 2 }, { \"ParameterKey\": \"InfrastructureName\", 3 \"ParameterValue\": \"mycluster-<random_string>\" 4 }, { \"ParameterKey\": \"HostedZoneId\", 5 \"ParameterValue\": \"<random_string>\" 6 }, { \"ParameterKey\": \"HostedZoneName\", 7 \"ParameterValue\": \"example.com\" 8 }, { \"ParameterKey\": \"PublicSubnets\", 9 \"ParameterValue\": \"subnet-<random_string>\" 10 }, { \"ParameterKey\": \"PrivateSubnets\", 11 \"ParameterValue\": \"subnet-<random_string>\" 12 }, { \"ParameterKey\": \"VpcId\", 13 \"ParameterValue\": \"vpc-<random_string>\" 14 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-dns/cd3e5de0-2fd4-11eb-5cf0-12be5c33a183",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Network Elements (Route53 & LBs) Parameters: ClusterName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Cluster name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, representative cluster name to use for host names and other identifying names. Type: String InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String HostedZoneId: Description: The Route53 public zone ID to register the targets with, such as Z21IXYZABCZ2A4. Type: String HostedZoneName: Description: The Route53 zone to register the targets with, such as example.com. Omit the trailing period. Type: String Default: \"example.com\" PublicSubnets: Description: The internet-facing subnets. Type: List<AWS::EC2::Subnet::Id> PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - ClusterName - InfrastructureName - Label: default: \"Network Configuration\" Parameters: - VpcId - PublicSubnets - PrivateSubnets - Label: default: \"DNS\" Parameters: - HostedZoneName - HostedZoneId ParameterLabels: ClusterName: default: \"Cluster Name\" InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" PublicSubnets: default: \"Public Subnets\" PrivateSubnets: default: \"Private Subnets\" HostedZoneName: default: \"Public Hosted Zone Name\" HostedZoneId: default: \"Public Hosted Zone ID\" Resources: ExtApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join [\"-\", [!Ref InfrastructureName, \"ext\"]] IpAddressType: ipv4 Subnets: !Ref PublicSubnets Type: network IntApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join [\"-\", [!Ref InfrastructureName, \"int\"]] Scheme: internal IpAddressType: ipv4 Subnets: !Ref PrivateSubnets Type: network IntDns: Type: \"AWS::Route53::HostedZone\" Properties: HostedZoneConfig: Comment: \"Managed by CloudFormation\" Name: !Join [\".\", [!Ref ClusterName, !Ref HostedZoneName]] HostedZoneTags: - Key: Name Value: !Join [\"-\", [!Ref InfrastructureName, \"int\"]] - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"owned\" VPCs: - VPCId: !Ref VpcId VPCRegion: !Ref \"AWS::Region\" ExternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref HostedZoneId RecordSets: - Name: !Join [ \".\", [\"api\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt ExtApiElb.CanonicalHostedZoneID DNSName: !GetAtt ExtApiElb.DNSName InternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref IntDns RecordSets: - Name: !Join [ \".\", [\"api\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName - Name: !Join [ \".\", 
[\"api-int\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName ExternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: ExternalApiTargetGroup LoadBalancerArn: Ref: ExtApiElb Port: 6443 Protocol: TCP ExternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/readyz\" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 InternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalApiTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 6443 Protocol: TCP InternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/readyz\" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 InternalServiceInternalListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalServiceTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 22623 Protocol: TCP InternalServiceTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/healthz\" HealthCheckPort: 22623 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 22623 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 RegisterTargetLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join [\"-\", [!Ref InfrastructureName, \"nlb\", \"lambda\", \"role\"]] AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"lambda.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"master\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref InternalApiTargetGroup - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref InternalServiceTargetGroup - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref ExternalApiTargetGroup RegisterNlbIpTargets: Type: \"AWS::Lambda::Function\" Properties: Handler: \"index.handler\" Role: Fn::GetAtt: - \"RegisterTargetLambdaIamRole\" - \"Arn\" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): elb = boto3.client('elbv2') if event['RequestType'] == 'Delete': elb.deregister_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) elif event['RequestType'] == 'Create': elb.register_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) responseData = {} 
cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['TargetArn']+event['ResourceProperties']['TargetIp']) Runtime: \"python3.8\" Timeout: 120 RegisterSubnetTagsLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join [\"-\", [!Ref InfrastructureName, \"subnet-tags-lambda-role\"]] AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"lambda.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"subnet-tagging-policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: [ \"ec2:DeleteTags\", \"ec2:CreateTags\" ] Resource: \"arn:aws:ec2:*:*:subnet/*\" - Effect: \"Allow\" Action: [ \"ec2:DescribeSubnets\", \"ec2:DescribeTags\" ] Resource: \"*\" RegisterSubnetTags: Type: \"AWS::Lambda::Function\" Properties: Handler: \"index.handler\" Role: Fn::GetAtt: - \"RegisterSubnetTagsLambdaIamRole\" - \"Arn\" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): ec2_client = boto3.client('ec2') if event['RequestType'] == 'Delete': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.delete_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName']}]); elif event['RequestType'] == 'Create': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.create_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName'], 'Value': 'shared'}]); responseData = {} cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['InfrastructureName']+event['ResourceProperties']['Subnets'][0]) Runtime: \"python3.8\" Timeout: 120 RegisterPublicSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PublicSubnets RegisterPrivateSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PrivateSubnets Outputs: PrivateHostedZoneId: Description: Hosted zone ID for the private DNS, which is required for private records. Value: !Ref IntDns ExternalApiLoadBalancerName: Description: Full name of the external API load balancer. Value: !GetAtt ExtApiElb.LoadBalancerFullName InternalApiLoadBalancerName: Description: Full name of the internal API load balancer. Value: !GetAtt IntApiElb.LoadBalancerFullName ApiServerDnsName: Description: Full hostname of the API server, which is required for the Ignition config files. Value: !Join [\".\", [\"api-int\", !Ref ClusterName, !Ref HostedZoneName]] RegisterNlbIpTargetsLambda: Description: Lambda ARN useful to help register or deregister IP targets for these load balancers. Value: !GetAtt RegisterNlbIpTargets.Arn ExternalApiTargetGroupArn: Description: ARN of the external API target group. Value: !Ref ExternalApiTargetGroup InternalApiTargetGroupArn: Description: ARN of the internal API target group. Value: !Ref InternalApiTargetGroup InternalServiceTargetGroupArn: Description: ARN of the internal service target group. Value: !Ref InternalServiceTargetGroup",
"Type: CNAME TTL: 10 ResourceRecords: - !GetAtt IntApiElb.DNSName",
"[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"VpcCidr\", 3 \"ParameterValue\": \"10.0.0.0/16\" 4 }, { \"ParameterKey\": \"PrivateSubnets\", 5 \"ParameterValue\": \"subnet-<random_string>\" 6 }, { \"ParameterKey\": \"VpcId\", 7 \"ParameterValue\": \"vpc-<random_string>\" 8 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-sec/03bd4210-2ed7-11eb-6d7a-13fc0b61e9db",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Security Elements (Security Groups & IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Network Configuration\" Parameters: - VpcId - VpcCidr - PrivateSubnets ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" VpcCidr: default: \"VPC CIDR\" PrivateSubnets: default: \"Private Subnets\" Resources: MasterSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Master Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr - IpProtocol: tcp ToPort: 6443 FromPort: 6443 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22623 ToPort: 22623 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId WorkerSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Worker Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId MasterIngressEtcd: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: etcd FromPort: 2379 ToPort: 2380 IpProtocol: tcp MasterIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressWorkerVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressWorkerGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp 
MasterIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressWorkerIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp MasterIngressWorkerIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressWorkerIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressWorkerInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressWorkerInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressWorkerIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId 
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIngressWorkerIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressMasterVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressMasterGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressMasterIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressMasterIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressMasterIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressMasterInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId 
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressMasterInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes secure kubelet port FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal Kubernetes communication FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressMasterIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressMasterIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" Action: - \"sts:AssumeRole\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"master\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: - \"ec2:AttachVolume\" - \"ec2:AuthorizeSecurityGroupIngress\" - \"ec2:CreateSecurityGroup\" - \"ec2:CreateTags\" - \"ec2:CreateVolume\" - \"ec2:DeleteSecurityGroup\" - \"ec2:DeleteVolume\" - \"ec2:Describe*\" - \"ec2:DetachVolume\" - \"ec2:ModifyInstanceAttribute\" - \"ec2:ModifyVolume\" - \"ec2:RevokeSecurityGroupIngress\" - \"elasticloadbalancing:AddTags\" - \"elasticloadbalancing:AttachLoadBalancerToSubnets\" - \"elasticloadbalancing:ApplySecurityGroupsToLoadBalancer\" - \"elasticloadbalancing:CreateListener\" - \"elasticloadbalancing:CreateLoadBalancer\" - \"elasticloadbalancing:CreateLoadBalancerPolicy\" - \"elasticloadbalancing:CreateLoadBalancerListeners\" - \"elasticloadbalancing:CreateTargetGroup\" - \"elasticloadbalancing:ConfigureHealthCheck\" - \"elasticloadbalancing:DeleteListener\" - \"elasticloadbalancing:DeleteLoadBalancer\" - \"elasticloadbalancing:DeleteLoadBalancerListeners\" - \"elasticloadbalancing:DeleteTargetGroup\" - \"elasticloadbalancing:DeregisterInstancesFromLoadBalancer\" - \"elasticloadbalancing:DeregisterTargets\" - \"elasticloadbalancing:Describe*\" - 
\"elasticloadbalancing:DetachLoadBalancerFromSubnets\" - \"elasticloadbalancing:ModifyListener\" - \"elasticloadbalancing:ModifyLoadBalancerAttributes\" - \"elasticloadbalancing:ModifyTargetGroup\" - \"elasticloadbalancing:ModifyTargetGroupAttributes\" - \"elasticloadbalancing:RegisterInstancesWithLoadBalancer\" - \"elasticloadbalancing:RegisterTargets\" - \"elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer\" - \"elasticloadbalancing:SetLoadBalancerPoliciesOfListener\" - \"kms:DescribeKey\" Resource: \"*\" MasterInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Roles: - Ref: \"MasterIamRole\" WorkerIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" Action: - \"sts:AssumeRole\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"worker\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: - \"ec2:DescribeInstances\" - \"ec2:DescribeRegions\" Resource: \"*\" WorkerInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Roles: - Ref: \"WorkerIamRole\" Outputs: MasterSecurityGroupId: Description: Master Security Group ID Value: !GetAtt MasterSecurityGroup.GroupId WorkerSecurityGroupId: Description: Worker Security Group ID Value: !GetAtt WorkerSecurityGroup.GroupId MasterInstanceProfile: Description: Master IAM Instance Profile Value: !Ref MasterInstanceProfile WorkerInstanceProfile: Description: Worker IAM Instance Profile Value: !Ref WorkerInstanceProfile",
"openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.images.aws.regions[\"us-west-1\"].image'",
"ami-0d3e625f84626bbda",
"openshift-install coreos print-stream-json | jq -r '.architectures.aarch64.images.aws.regions[\"us-west-1\"].image'",
"ami-0af1d3b7fa5be2131",
"aws s3 mb s3://<cluster-name>-infra 1",
"aws s3 cp <installation_directory>/bootstrap.ign s3://<cluster-name>-infra/bootstrap.ign 1",
"aws s3 ls s3://<cluster-name>-infra/",
"2019-04-03 16:15:16 314878 bootstrap.ign",
"[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"AllowedBootstrapSshCidr\", 5 \"ParameterValue\": \"0.0.0.0/0\" 6 }, { \"ParameterKey\": \"PublicSubnet\", 7 \"ParameterValue\": \"subnet-<random_string>\" 8 }, { \"ParameterKey\": \"MasterSecurityGroupId\", 9 \"ParameterValue\": \"sg-<random_string>\" 10 }, { \"ParameterKey\": \"VpcId\", 11 \"ParameterValue\": \"vpc-<random_string>\" 12 }, { \"ParameterKey\": \"BootstrapIgnitionLocation\", 13 \"ParameterValue\": \"s3://<bucket_name>/bootstrap.ign\" 14 }, { \"ParameterKey\": \"AutoRegisterELB\", 15 \"ParameterValue\": \"yes\" 16 }, { \"ParameterKey\": \"RegisterNlbIpTargetsLambdaArn\", 17 \"ParameterValue\": \"arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>\" 18 }, { \"ParameterKey\": \"ExternalApiTargetGroupArn\", 19 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>\" 20 }, { \"ParameterKey\": \"InternalApiTargetGroupArn\", 21 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 22 }, { \"ParameterKey\": \"InternalServiceTargetGroupArn\", 23 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 24 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-bootstrap/12944486-2add-11eb-9dee-12dace8e3a83",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Bootstrap (EC2 Instance, Security Groups and IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AllowedBootstrapSshCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/([0-9]|1[0-9]|2[0-9]|3[0-2]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/0-32. Default: 0.0.0.0/0 Description: CIDR block to allow SSH access to the bootstrap node. Type: String PublicSubnet: Description: The public subnet to launch the bootstrap node into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID for registering temporary rules. Type: AWS::EC2::SecurityGroup::Id VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id BootstrapIgnitionLocation: Default: s3://my-s3-bucket/bootstrap.ign Description: Ignition config file location. Type: String AutoRegisterELB: Default: \"yes\" AllowedValues: - \"yes\" - \"no\" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. 
Type: String BootstrapInstanceType: Description: Instance type for the bootstrap EC2 instance Default: \"i3.large\" Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - RhcosAmi - BootstrapIgnitionLocation - MasterSecurityGroupId - Label: default: \"Network Configuration\" Parameters: - VpcId - AllowedBootstrapSshCidr - PublicSubnet - Label: default: \"Load Balancer Automation\" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" AllowedBootstrapSshCidr: default: \"Allowed SSH Source\" PublicSubnet: default: \"Public Subnet\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" BootstrapIgnitionLocation: default: \"Bootstrap Ignition Source\" MasterSecurityGroupId: default: \"Master Security Group ID\" AutoRegisterELB: default: \"Use Provided ELB Automation\" Conditions: DoRegistration: !Equals [\"yes\", !Ref AutoRegisterELB] Resources: BootstrapIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"bootstrap\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: \"ec2:Describe*\" Resource: \"*\" - Effect: \"Allow\" Action: \"ec2:AttachVolume\" Resource: \"*\" - Effect: \"Allow\" Action: \"ec2:DetachVolume\" Resource: \"*\" - Effect: \"Allow\" Action: \"s3:GetObject\" Resource: \"*\" BootstrapInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Path: \"/\" Roles: - Ref: \"BootstrapIamRole\" BootstrapSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Bootstrap Security Group SecurityGroupIngress: - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref AllowedBootstrapSshCidr - IpProtocol: tcp ToPort: 19531 FromPort: 19531 CidrIp: 0.0.0.0/0 VpcId: !Ref VpcId BootstrapInstance: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi IamInstanceProfile: !Ref BootstrapInstanceProfile InstanceType: !Ref BootstrapInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"true\" DeviceIndex: \"0\" GroupSet: - !Ref \"BootstrapSecurityGroup\" - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"PublicSubnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"replace\":{\"source\":\"USD{S3Loc}\"}},\"version\":\"3.1.0\"}}' - { S3Loc: !Ref BootstrapIgnitionLocation } RegisterBootstrapApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp Outputs: BootstrapInstanceId: Description: Bootstrap Instance ID. 
Value: !Ref BootstrapInstance BootstrapPublicIp: Description: The bootstrap node public IP address. Value: !GetAtt BootstrapInstance.PublicIp BootstrapPrivateIp: Description: The bootstrap node private IP address. Value: !GetAtt BootstrapInstance.PrivateIp",
"[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"AutoRegisterDNS\", 5 \"ParameterValue\": \"yes\" 6 }, { \"ParameterKey\": \"PrivateHostedZoneId\", 7 \"ParameterValue\": \"<random_string>\" 8 }, { \"ParameterKey\": \"PrivateHostedZoneName\", 9 \"ParameterValue\": \"mycluster.example.com\" 10 }, { \"ParameterKey\": \"Master0Subnet\", 11 \"ParameterValue\": \"subnet-<random_string>\" 12 }, { \"ParameterKey\": \"Master1Subnet\", 13 \"ParameterValue\": \"subnet-<random_string>\" 14 }, { \"ParameterKey\": \"Master2Subnet\", 15 \"ParameterValue\": \"subnet-<random_string>\" 16 }, { \"ParameterKey\": \"MasterSecurityGroupId\", 17 \"ParameterValue\": \"sg-<random_string>\" 18 }, { \"ParameterKey\": \"IgnitionLocation\", 19 \"ParameterValue\": \"https://api-int.<cluster_name>.<domain_name>:22623/config/master\" 20 }, { \"ParameterKey\": \"CertificateAuthorities\", 21 \"ParameterValue\": \"data:text/plain;charset=utf-8;base64,ABC...xYz==\" 22 }, { \"ParameterKey\": \"MasterInstanceProfileName\", 23 \"ParameterValue\": \"<roles_stack>-MasterInstanceProfile-<random_string>\" 24 }, { \"ParameterKey\": \"MasterInstanceType\", 25 \"ParameterValue\": \"\" 26 }, { \"ParameterKey\": \"AutoRegisterELB\", 27 \"ParameterValue\": \"yes\" 28 }, { \"ParameterKey\": \"RegisterNlbIpTargetsLambdaArn\", 29 \"ParameterValue\": \"arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>\" 30 }, { \"ParameterKey\": \"ExternalApiTargetGroupArn\", 31 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>\" 32 }, { \"ParameterKey\": \"InternalApiTargetGroupArn\", 33 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 34 }, { \"ParameterKey\": \"InternalServiceTargetGroupArn\", 35 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 36 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-control-plane/21c7e2b0-2ee2-11eb-c6f6-0aa34627df4b",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 master instances) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AutoRegisterDNS: Default: \"\" Description: unused Type: String PrivateHostedZoneId: Default: \"\" Description: unused Type: String PrivateHostedZoneName: Default: \"\" Description: unused Type: String Master0Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master1Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master2Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID to associate with master nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/master Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String MasterInstanceProfileName: Description: IAM profile to associate with master nodes. Type: String MasterInstanceType: Default: m5.xlarge Type: String AutoRegisterELB: Default: \"yes\" AllowedValues: - \"yes\" - \"no\" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. 
Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - MasterInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - MasterSecurityGroupId - MasterInstanceProfileName - Label: default: \"Network Configuration\" Parameters: - VpcId - AllowedBootstrapSshCidr - Master0Subnet - Master1Subnet - Master2Subnet - Label: default: \"Load Balancer Automation\" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" Master0Subnet: default: \"Master-0 Subnet\" Master1Subnet: default: \"Master-1 Subnet\" Master2Subnet: default: \"Master-2 Subnet\" MasterInstanceType: default: \"Master Instance Type\" MasterInstanceProfileName: default: \"Master Instance Profile Name\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" BootstrapIgnitionLocation: default: \"Master Ignition Source\" CertificateAuthorities: default: \"Ignition CA String\" MasterSecurityGroupId: default: \"Master Security Group ID\" AutoRegisterELB: default: \"Use Provided ELB Automation\" Conditions: DoRegistration: !Equals [\"yes\", !Ref AutoRegisterELB] Resources: Master0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master0Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster0: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp Master1: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master1Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join 
[\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster1: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp Master2: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master2Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster2: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp Outputs: PrivateIPs: Description: The control-plane node private IP addresses. Value: !Join [ \",\", [!GetAtt Master0.PrivateIp, !GetAtt Master1.PrivateIp, !GetAtt Master2.PrivateIp] ]",
"[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"Subnet\", 5 \"ParameterValue\": \"subnet-<random_string>\" 6 }, { \"ParameterKey\": \"WorkerSecurityGroupId\", 7 \"ParameterValue\": \"sg-<random_string>\" 8 }, { \"ParameterKey\": \"IgnitionLocation\", 9 \"ParameterValue\": \"https://api-int.<cluster_name>.<domain_name>:22623/config/worker\" 10 }, { \"ParameterKey\": \"CertificateAuthorities\", 11 \"ParameterValue\": \"\" 12 }, { \"ParameterKey\": \"WorkerInstanceProfileName\", 13 \"ParameterValue\": \"\" 14 }, { \"ParameterKey\": \"WorkerInstanceType\", 15 \"ParameterValue\": \"\" 16 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml \\ 2 --parameters file://<parameters>.json 3",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-worker-1/729ee301-1c2a-11eb-348f-sd9888c65b59",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 worker instance) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id WorkerSecurityGroupId: Description: The master security group ID to associate with master nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/worker Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String WorkerInstanceProfileName: Description: IAM profile to associate with master nodes. Type: String WorkerInstanceType: Default: m5.large Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - WorkerInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - WorkerSecurityGroupId - WorkerInstanceProfileName - Label: default: \"Network Configuration\" Parameters: - Subnet ParameterLabels: Subnet: default: \"Subnet\" InfrastructureName: default: \"Infrastructure Name\" WorkerInstanceType: default: \"Worker Instance Type\" WorkerInstanceProfileName: default: \"Worker Instance Profile Name\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" IgnitionLocation: default: \"Worker Ignition Source\" CertificateAuthorities: default: \"Ignition CA String\" WorkerSecurityGroupId: default: \"Worker Security Group ID\" Resources: Worker0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref WorkerInstanceProfileName InstanceType: !Ref WorkerInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"WorkerSecurityGroupId\" SubnetId: !Ref \"Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" Outputs: PrivateIP: Description: The compute node private IP address. Value: !GetAtt Worker0.PrivateIp",
"./openshift-install wait-for bootstrap-complete --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Waiting up to 20m0s for the Kubernetes API at https://api.mycluster.example.com:6443 INFO API v1.25.0 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources INFO Time elapsed: 1s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.25.0 master-1 Ready master 63m v1.25.0 master-2 Ready master 64m v1.25.0",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.25.0 master-1 Ready master 73m v1.25.0 master-2 Ready master 74m v1.25.0 worker-0 Ready worker 11m v1.25.0 worker-1 Ready worker 11m v1.25.0",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.12.0 True False False 19m baremetal 4.12.0 True False False 37m cloud-credential 4.12.0 True False False 40m cluster-autoscaler 4.12.0 True False False 37m config-operator 4.12.0 True False False 38m console 4.12.0 True False False 26m csi-snapshot-controller 4.12.0 True False False 37m dns 4.12.0 True False False 37m etcd 4.12.0 True False False 36m image-registry 4.12.0 True False False 31m ingress 4.12.0 True False False 30m insights 4.12.0 True False False 31m kube-apiserver 4.12.0 True False False 26m kube-controller-manager 4.12.0 True False False 36m kube-scheduler 4.12.0 True False False 36m kube-storage-version-migrator 4.12.0 True False False 37m machine-api 4.12.0 True False False 29m machine-approver 4.12.0 True False False 37m machine-config 4.12.0 True False False 36m marketplace 4.12.0 True False False 37m monitoring 4.12.0 True False False 29m network 4.12.0 True False False 38m node-tuning 4.12.0 True False False 37m openshift-apiserver 4.12.0 True False False 32m openshift-controller-manager 4.12.0 True False False 30m openshift-samples 4.12.0 True False False 32m operator-lifecycle-manager 4.12.0 True False False 37m operator-lifecycle-manager-catalog 4.12.0 True False False 37m operator-lifecycle-manager-packageserver 4.12.0 True False False 32m service-ca 4.12.0 True False False 38m storage 4.12.0 True False False 37m",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc edit configs.imageregistry.operator.openshift.io/cluster",
"storage: s3: bucket: <bucket-name> region: <region-name>",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"aws cloudformation delete-stack --stack-name <name> 1",
"oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes",
"oauth-openshift.apps.<cluster_name>.<domain_name> console-openshift-console.apps.<cluster_name>.<domain_name> downloads-openshift-console.apps.<cluster_name>.<domain_name> alertmanager-main-openshift-monitoring.apps.<cluster_name>.<domain_name> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<domain_name>",
"oc -n openshift-ingress get service router-default",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.62.215 ab3...28.us-east-2.elb.amazonaws.com 80:31499/TCP,443:30693/TCP 5m",
"aws elb describe-load-balancers | jq -r '.LoadBalancerDescriptions[] | select(.DNSName == \"<external_ip>\").CanonicalHostedZoneNameID' 1",
"Z3AADJGX6KTTL2",
"aws route53 list-hosted-zones-by-name --dns-name \"<domain_name>\" \\ 1 --query 'HostedZones[? Config.PrivateZone != `true` && Name == `<domain_name>.`].Id' 2 --output text",
"/hostedzone/Z3URY6TWQ91KVV",
"aws route53 change-resource-record-sets --hosted-zone-id \"<private_hosted_zone_id>\" --change-batch '{ 1 > \"Changes\": [ > { > \"Action\": \"CREATE\", > \"ResourceRecordSet\": { > \"Name\": \"\\\\052.apps.<cluster_domain>\", 2 > \"Type\": \"A\", > \"AliasTarget\":{ > \"HostedZoneId\": \"<hosted_zone_id>\", 3 > \"DNSName\": \"<external_ip>.\", 4 > \"EvaluateTargetHealth\": false > } > } > } > ] > }'",
"aws route53 change-resource-record-sets --hosted-zone-id \"<public_hosted_zone_id>\"\" --change-batch '{ 1 > \"Changes\": [ > { > \"Action\": \"CREATE\", > \"ResourceRecordSet\": { > \"Name\": \"\\\\052.apps.<cluster_domain>\", 2 > \"Type\": \"A\", > \"AliasTarget\":{ > \"HostedZoneId\": \"<hosted_zone_id>\", 3 > \"DNSName\": \"<external_ip>.\", 4 > \"EvaluateTargetHealth\": false > } > } > } > ] > }'",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 40m0s for the cluster at https://api.mycluster.example.com:6443 to initialize INFO Waiting up to 10m0s for the openshift-console route to be created INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 1s",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"aws outposts get-outpost-instance-types --outpost-id <outpost_id> 1",
"./openshift-install create install-config --dir <installation_directory> 1",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"aws ec2 describe-instance-type-offerings --filters Name=instance-type,Values=c7g.xlarge",
"apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: {} replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: aws: type: m5.large 8 zones: - us-east-1a 9 rootVolume: type: gp2 10 size: 120 replicas: 3 metadata: name: test-cluster 11 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 12 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-west-2 13 propagateUserTags: true 14 userTags: adminContact: jdoe costCenter: 7536 subnets: 15 - subnet-1 - subnet-2 - subnet-3 sshKey: ssh-ed25519 AAAA... 16 pullSecret: '{\"auths\": ...}' 17",
"cp install-config.yaml install-config.yaml.backup",
"openshift-install create manifests --dir <installation_-_directory>",
"INFO Consuming Install Config from target directory INFO Manifests created in: <installation_directory>/manifests and <installation_directory>/openshift",
"tree . ├── manifests │ ├── cluster-config.yaml │ ├── cluster-dns-02-config.yml │ ├── cluster-infrastructure-02-config.yml │ ├── cluster-ingress-02-config.yml │ ├── cluster-network-01-crd.yml │ ├── cluster-network-02-config.yml │ ├── cluster-proxy-01-config.yaml │ ├── cluster-scheduler-02-config.yml │ ├── cvo-overrides.yaml │ ├── kube-cloud-config.yaml │ ├── kube-system-configmap-root-ca.yaml │ ├── machine-config-server-tls-secret.yaml │ └── openshift-config-secret-pull-secret.yaml └── openshift ├── 99_cloud-creds-secret.yaml ├── 99_kubeadmin-password-secret.yaml ├── 99_openshift-cluster-api_master-machines-0.yaml ├── 99_openshift-cluster-api_master-machines-1.yaml ├── 99_openshift-cluster-api_master-machines-2.yaml ├── 99_openshift-cluster-api_master-user-data-secret.yaml ├── 99_openshift-cluster-api_worker-machineset-0.yaml ├── 99_openshift-cluster-api_worker-user-data-secret.yaml ├── 99_openshift-machineconfig_99-master-ssh.yaml ├── 99_openshift-machineconfig_99-worker-ssh.yaml ├── 99_role-cloud-creds-secret-reader.yaml └── openshift-install-manifests.yaml",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: mtu: 1250",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: mtu: 1200",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None",
"oc annotate --overwrite storageclass gp3-csi storageclass.kubernetes.io/is-default-class=false oc annotate --overwrite storageclass gp2-csi storageclass.kubernetes.io/is-default-class=true",
"./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2",
"ccoctl aws delete --name=<name> \\ 1 --region=<aws_region> 2",
"2021/04/08 17:50:41 Identity Provider object .well-known/openid-configuration deleted from the bucket <name>-oidc 2021/04/08 17:50:42 Identity Provider object keys.json deleted from the bucket <name>-oidc 2021/04/08 17:50:43 Identity Provider bucket <name>-oidc deleted 2021/04/08 17:51:05 Policy <name>-openshift-cloud-credential-operator-cloud-credential-o associated with IAM Role <name>-openshift-cloud-credential-operator-cloud-credential-o deleted 2021/04/08 17:51:05 IAM Role <name>-openshift-cloud-credential-operator-cloud-credential-o deleted 2021/04/08 17:51:07 Policy <name>-openshift-cluster-csi-drivers-ebs-cloud-credentials associated with IAM Role <name>-openshift-cluster-csi-drivers-ebs-cloud-credentials deleted 2021/04/08 17:51:07 IAM Role <name>-openshift-cluster-csi-drivers-ebs-cloud-credentials deleted 2021/04/08 17:51:08 Policy <name>-openshift-image-registry-installer-cloud-credentials associated with IAM Role <name>-openshift-image-registry-installer-cloud-credentials deleted 2021/04/08 17:51:08 IAM Role <name>-openshift-image-registry-installer-cloud-credentials deleted 2021/04/08 17:51:09 Policy <name>-openshift-ingress-operator-cloud-credentials associated with IAM Role <name>-openshift-ingress-operator-cloud-credentials deleted 2021/04/08 17:51:10 IAM Role <name>-openshift-ingress-operator-cloud-credentials deleted 2021/04/08 17:51:11 Policy <name>-openshift-machine-api-aws-cloud-credentials associated with IAM Role <name>-openshift-machine-api-aws-cloud-credentials deleted 2021/04/08 17:51:11 IAM Role <name>-openshift-machine-api-aws-cloud-credentials deleted 2021/04/08 17:51:39 Identity Provider with ARN arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com deleted",
"./openshift-install destroy cluster --dir <installation_directory> \\ 1 --log-level=debug 2",
"aws cloudformation delete-stack --stack-name <local_zone_stack_name>",
"aws cloudformation delete-stack --stack-name <vpc_stack_name>",
"aws cloudformation describe-stacks --stack-name <local_zone_stack_name>",
"aws cloudformation describe-stacks --stack-name <vpc_stack_name>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html-single/installing_on_aws/index |
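The create-stack commands above return as soon as the request is submitted, so the CREATE_COMPLETE status has to be confirmed separately. A minimal sketch of scripting that check with the standard AWS CLI waiter is shown below; the stack name cluster-bootstrap is taken from the example output above and should be replaced with your own stack names:

# Block until CloudFormation reports the stack as CREATE_COMPLETE (or fails).
aws cloudformation wait stack-create-complete --stack-name cluster-bootstrap

# Print the stack outputs (for example BootstrapInstanceId and BootstrapPublicIp)
# that the subsequent installation steps reference.
aws cloudformation describe-stacks --stack-name cluster-bootstrap --query 'Stacks[0].Outputs' --output table

The same two commands can be reused for the control plane and worker stacks by substituting the corresponding stack names.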
C.4. Failure Recovery and Independent Subtrees | C.4. Failure Recovery and Independent Subtrees In most enterprise environments, the normal course of action for failure recovery of a service is to restart the entire service if any component in the service fails. For example, in Example C.6, "Service foo Normal Failure Recovery" , if any of the scripts defined in this service fail, the normal course of action is to restart (or relocate or disable, according to the service recovery policy) the service. However, in some circumstances certain parts of a service may be considered non-critical; it may be necessary to restart only part of the service in place before attempting normal recovery. To accomplish that, you can use the __independent_subtree attribute. For example, in Example C.7, "Service foo Failure Recovery with __independent_subtree Attribute" , the __independent_subtree attribute is used to accomplish the following actions: If script:script_one fails, restart script:script_one, script:script_two, and script:script_three. If script:script_two fails, restart just script:script_two. If script:script_three fails, restart script:script_one, script:script_two, and script:script_three. If script:script_four fails, restart the whole service. Example C.6. Service foo Normal Failure Recovery Example C.7. Service foo Failure Recovery with __independent_subtree Attribute In some circumstances, if a component of a service fails you may want to disable only that component without disabling the entire service, to avoid affecting other services that use other components of that service. As of the Red Hat Enterprise Linux 6.1 release, you can accomplish that by using the __independent_subtree="2" attribute, which designates the independent subtree as non-critical. Note You may only use the non-critical flag on singly-referenced resources. The non-critical flag works with all resources at all levels of the resource tree, but should not be used at the top level when defining services or virtual machines. As of the Red Hat Enterprise Linux 6.1 release, you can set maximum restart and restart expirations on a per-node basis in the resource tree for independent subtrees. To set these thresholds, you can use the following attributes: __max_restarts configures the maximum number of tolerated restarts prior to giving up. __restart_expire_time configures the amount of time, in seconds, after which a restart is no longer attempted. | [
"<service name=\"foo\"> <script name=\"script_one\" ...> <script name=\"script_two\" .../> </script> <script name=\"script_three\" .../> </service>",
"<service name=\"foo\"> <script name=\"script_one\" __independent_subtree=\"1\" ...> <script name=\"script_two\" __independent_subtree=\"1\" .../> <script name=\"script_three\" .../> </script> <script name=\"script_four\" .../> </service>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-clust-rsc-failure-rec-CA |
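The non-critical flag and the per-node restart thresholds described in this section can be combined; the fragment below is an illustrative sketch only (the script names and threshold values are placeholders, not recommended settings), extending Example C.7:

<service name="foo">
    <script name="script_one" __independent_subtree="1" ...>
        <script name="script_two" __independent_subtree="2" __max_restarts="3" __restart_expire_time="300" .../>
        <script name="script_three" .../>
    </script>
    <script name="script_four" .../>
</service>

With this configuration, script_two is treated as non-critical: roughly, up to three in-place restarts are tolerated within a 300-second window, and if it keeps failing only that component is disabled instead of the whole service.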
5.3. The IdM Command-Line Utilities | 5.3. The IdM Command-Line Utilities The basic command-line script for IdM is named ipa . The ipa script is a parent script for a number of subcommands. These subcommands are then used to manage IdM. For example, the ipa user-add command adds a new user: Command-line management has certain benefits over management in UI; for example, the command-line utilities allow management tasks to be automated and performed repeatedly in a consistent way without manual intervention. Additionally, while most management operations are available both from the command line and in the web UI, some tasks can only be performed from the command line. Note This section only provides a general overview of the ipa subcommands. More information is available in the other sections dedicated to specific areas of managing IdM. For example, for information about managing user entries using the ipa subcommands, see Chapter 11, Managing User Accounts . 5.3.1. Getting Help for ipa Commands The ipa script can display help about a particular set of subcommands: a topic . To display the list of available topics, use the ipa help topics command: To display help for a particular topic, use the ipa help topic_name command. For example, to display information about the automember topic: The ipa script can also display a list of available ipa commands. To do this, use the ipa help commands command: For detailed help on the individual ipa commands, add the --help option to a command. For example: For more information about the ipa utility, see the ipa (1) man page. 5.3.2. Setting a List of Values IdM stores entry attributes in lists. For example: Any update to a list of attributes overwrites the list. For example, an attempt to add a single attribute by only specifying this attribute replaces the whole previously-defined list with the single new attribute. Therefore, when changing a list of attributes, you must specify the whole updated list. IdM supports the following methods of supplying a list of attributes: Using the same command-line argument multiple times within the same command invocation. For example: Enclosing the list in curly braces, which allows the shell to do the expansion. For example: 5.3.3. Using Special Characters When passing command-line arguments in ipa commands that include special characters, such as angle brackets (< and >), ampersand (&), asterisk (*), or vertical bar (|), you must escape these characters by using a backslash (\). For example, to escape an asterisk (*): Commands containing unescaped special characters do not work as expected because the shell cannot properly parse such characters. 5.3.4. Searching IdM Entries Listing IdM Entries Use the ipa *-find commands to search for a particular type of IdM entries. For example: To list all users: To list user groups whose specified attributes contain keyword : To configure the attributes IdM searches for users and user groups, see Section 13.5, "Setting Search Attributes for Users and User Groups" . When searching user groups, you can also limit the search results to groups that contain a particular user: You can also search for groups that do not contain a particular user: Showing Details for a Particular Entry Use the ipa *-show command to display details about a particular IdM entry. For example: 5.3.4.1. Adjusting the Search Size and Time Limit Some search results, such as viewing lists of users, can return a very large number of entries. 
By tuning these search operations, you can improve overall server performance when running the ipa *-find commands, such as ipa user-find , and when displaying corresponding lists in the web UI. The search size limit: Defines the maximum number of entries returned for a request sent to the server from a client, the IdM command-line tools, or the IdM web UI. Default value: 100 entries. The search time limit: Defines the maximum time that the server waits for searches to run. Once the search reaches this limit, the server stops the search and returns the entries that it discovered in that time. Default value: 2 seconds. If you set the values to -1 , IdM will not apply any limits when searching. Important Setting search size or time limits too high can negatively affect server performance. Web UI: Adjusting the Search Size and Time Limit To adjust the limits globally for all queries: Select IPA Server Configuration . Set the required values in the Search Options area. Click Save at the top of the page. Command Line: Adjusting the Search Size and Time Limit To adjust the limits globally for all queries, use the ipa config-mod command and add the --searchrecordslimit and --searchtimelimit options. For example: From the command line, you can also adjust the limits only for a specific query. To do this, add the --sizelimit or --timelimit options to the command. For example: Important Note that adjusting the size or time limits using the ipa config-mod command with the --searchrecordslimit or the --searchtimelimit options affects the number of entries returned by ipa commands, such as ipa user-find . In addition to these limits, the settings configured at the Directory Server level are also taken into account and may impose stricter limits. For more information on Directory Server limits, see the Red Hat Directory Server Administration Guide . | [
"ipa user-add user_name",
"ipa help topics automember Auto Membership Rule. automount Automount caacl Manage CA ACL rules.",
"ipa help automember Auto Membership Rule. Bring clarity to the membership of hosts and users by configuring inclusive or exclusive regex patterns, you can automatically assign a new entries into a group or hostgroup based upon attribute information. EXAMPLES: Add the initial group or hostgroup: ipa hostgroup-add --desc=\"Web Servers\" webservers ipa group-add --desc=\"Developers\" devel",
"ipa help commands automember-add Add an automember rule. automember-add-condition Add conditions to an automember rule.",
"ipa automember-add --help Usage: ipa [global-options] automember-add AUTOMEMBER-RULE [options] Add an automember rule. Options: -h, --help show this help message and exit --desc=STR A description of this auto member rule",
"ipaUserSearchFields: uid,givenname,sn,telephonenumber,ou,title",
"ipa permission-add --permissions=read --permissions=write --permissions=delete",
"ipa permission-add --permissions={read,write,delete}",
"ipa certprofile-show certificate_profile --out= exported\\*profile.cfg",
"ipa user-find --------------- 4 users matched ---------------",
"ipa group-find keyword ---------------- 2 groups matched ----------------",
"ipa group-find --user= user_name",
"ipa group-find --no-user= user_name",
"ipa host-show server.example.com Host name: server.example.com Principal name: host/[email protected]",
"ipa config-mod --searchrecordslimit=500 --searchtimelimit=5",
"ipa user-find --sizelimit=200 --timelimit=120"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/managing-idm-cli |
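As a small illustration of how the search commands described above can be combined in a script, the following sketch lists matching users with a raised per-query size limit and then checks group membership for a couple of hypothetical accounts (alice and bob); it assumes a valid admin Kerberos ticket has already been obtained:

# List users, raising the size limit for this query only.
ipa user-find --sizelimit=200

# For each (hypothetical) account, show the groups that do and do not contain it.
for user in alice bob; do
    echo "== groups containing ${user} =="
    ipa group-find --user="${user}"
    echo "== groups not containing ${user} =="
    ipa group-find --no-user="${user}"
done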
Chapter 17. System and Subscription Management | Chapter 17. System and Subscription Management cockpit rebased to version 154 The cockpit packages, which provide the Cockpit browser-based administration console, have been upgraded to version 154. This version provides a number of bug fixes and enhancements. Notable changes include: The Accounts page now enables the configuration of account locking and password expiry. Load graphs consistently ignore loopback traffic on all networks. Cockpit provides information about unmet conditions for systemd services. Newly created timers on the Services page are now started and enabled automatically. It is possible to dynamically resize the terminal window to use all available space. Various navigation and JavaScript errors with Internet Explorer have been fixed. Cockpit uses Self-Signed Certificate Generator (SSCG) to generate SSL certificates, if available. Loading SSH keys from arbitrary paths is now supported. Absent or invalid /etc/os-release files are now handled gracefully. Unprivileged users now cannot use the shutdown/reboot button on the System page. Note that certain cockpit packages are available in the Red Hat Enterprise Linux 7 Extras channel; see https://access.redhat.com/support/policy/updates/extras . (BZ# 1470780 , BZ#1425887, BZ# 1493756 ) Users of yum-utils now can perform actions prior to transactions A new yum-plugin-pre-transaction-actions plug-in has been added to the yum-utils collection. It allows users to perform actions before a transaction starts. The usage and configuration of the plug-in are almost identical to the existing yum-plugin-post-transaction-actions plug-in. (BZ#1470647) yum can disable creation of per-user cache as a non-root user New usercache option has been added to the yum.conf(5) configuration file of the yum utility. It allows the users to disable the creation of per-user cache when yum runs as a non-root user. The reason for this change is that in some cases users do not want to create and populate per-user cache, for example in cases where the space in the USDTMPDIR directory is consumed by the user cache data. (BZ# 1432319 ) yum-builddep now allows to define RPM macros The yum-builddep utility has been enhanced to allow you to define RPM macros for a .spec file parsing. This change has been made because, in some cases, RPM macros need to be defined in order for yum-builddep to successfully parse a .spec file. Similarly to the rpm utility, the yum-builddep tool now allows you to specify RPM macros with the --define option. (BZ#1437636) subscription-manager now displays the host name upon registration Until now, the user needed to search for the effective host name for a given system, which is determined by different Satellite settings. With this update, the subscription-manager utility displays the host name upon the registration of the system. (BZ# 1463325 ) A subscription-manager plugin now runs with yum-config-manager With this update, the subscription-manager plugin runs with the yum-config-manager utility. The yum-config-manager operations now trigger redhat.repo generation, allowing Red Hat Enterprise Linux containers to enable or disable repositories without first running yum commands. (BZ# 1329349 ) subscription-manager now protects all product certificates in /etc/pki/product-default/ Previously, the subscription-manager utility only protected those product certificates provided by the redhat-release package whose tag matched rhel-# . 
Consequently, product certificates such as RHEL-ALT or High Touch Beta were sometimes removed from the /etc/pki/product-default/ directory by the product-id yum plugin. With this update, subscription-manager has been modified to protect all certificates in /etc/pki/product-default/ against automatic removal. (BZ# 1526622 ) rhn-migrate-classic-to-rhsm now automatically enables the subscription-manager and product-id yum plugins With this update, the rhn-migrate-classic-to-rhsm utility automatically enables the yum plugins: subscription-manager and product-id . This update benefits users of Red Hat Enterprise Linux who previously used the rhn-client-tools utility to register their systems to Red Hat Network Classic or who still use it with Satellite 5 entitlement servers, and who have temporarily disabled the yum plugins. As a result, rhn-migrate-classic-to-rhsm allows an easy transition to using the newer subscription-manager tools for entitlements. Note that running rhn-migrate-classic-to-rhsm displays a warning message indicating how to change this default behavior if it is not desirable. (BZ# 1466453 ) subscription-manager now automatically enables the subscription-manager and product-id yum plugins With this update, the subscription-manager utility automatically enables the yum plugins: subscription-manager and product-id . This update benefits users of Red Hat Enterprise Linux who previously used the rhn-client-tools utility to register their systems to Red Hat Network Classic or who still use it with Satellite 5 entitlement servers, and who have temporarily disabled the yum plugins. As a result, it is easier for users to start using the newer subscription-manager tools for entitlements. Note that running subscription-manager displays a warning message indicating how to change this default behavior if it is not desirable. (BZ# 1319927 ) subscription-manager-cockpit replaces subscription functionality in cockpit-system This update introduces a new subscription-manager-cockpit RPM. The new subscription-manager-cockpit RPM provides a new dbus-based implementation and a few fixes to the same subscriptions functionality provided by cockpit-system . If both RPMs are installed, the implementation from subscription-manager-cockpit is used. (BZ# 1499977 ) virt-who logs where the host-guest mapping is sent The virt-who utility now uses the rhsm.log file to log the owner or account to which the host-guest mapping is sent. This helps with the proper configuration of virt-who . (BZ# 1408556 ) virt-who now provides configuration error information The virt-who utility now checks for common virt-who configuration errors and outputs log messages that specify the configuration items that caused these errors. As a result, it is easier for a user to correct virt-who configuration errors. (BZ# 1436617 ) reposync now by default skips packages whose location falls outside the destination directory Previously, the reposync command did not sanitize paths to packages specified in a remote repository, which was insecure. A security fix for CVE-2018-10897 has changed the default behavior of reposync to not store any packages outside the specified destination directory. To restore the original insecure behavior, use the new --allow-path-traversal option.
(BZ#1609302) | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.5_release_notes/new_features_system_and_subscription_management |
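As a brief illustration of two of the changes above, the snippet below shows how the new options might be used; the macro value, spec file name, and exact yum.conf placement are assumptions to adapt to your environment:

# The new 'usercache' option belongs in the [main] section of /etc/yum.conf;
# a value of 0 disables the per-user cache created when yum runs as a non-root user.
#   [main]
#   usercache=0

# Define an RPM macro while resolving build dependencies from a spec file,
# as now supported by yum-builddep.
yum-builddep --define 'dist .el7' mypackage.spec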
probe::linuxmib.ListenDrops | probe::linuxmib.ListenDrops Name probe::linuxmib.ListenDrops - Count of times connection requests were dropped Synopsis linuxmib.ListenDrops Values sk Pointer to the struct sock being acted on op Value to be added to the counter (default value of 1) Description The packet pointed to by skb is filtered by the function linuxmib_filter_key . If the packet passes the filter, it is counted in the global ListenDrops (equivalent to SNMP's MIB LINUX_MIB_LISTENDROPS) | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-linuxmib-listendrops
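A minimal SystemTap sketch that uses this probe is shown below; it assumes the systemtap package and matching kernel debuginfo are installed, and simply totals the dropped listen requests over a ten-second window:

stap -e 'global drops
probe linuxmib.ListenDrops { drops += op }
probe timer.s(10) { printf("ListenDrops in the last 10s: %d\n", drops); exit() }'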
Preface | Preface The Kernel Administration Guide describes working with the kernel and shows several practical tasks. Beginning with information on using kernel modules, the guide then covers interaction with the sysfs facility, manual upgrade of the kernel and using kpatch. The guide also introduces the crash dump mechanism and steps through the process of setting up and testing vmcore collection in the event of a kernel failure. The Kernel Administration Guide also covers selected use cases of managing the kernel and includes reference material about command line options, kernel tunables (also known as switches), and a brief discussion of kernel features. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/kernel_administration_guide/pr01
Chapter 1. Support policy for Eclipse Temurin | Chapter 1. Support policy for Eclipse Temurin Red Hat will support select major versions of Eclipse Temurin in its products. For consistency, these are the same versions that Oracle designates as long-term support (LTS) for the Oracle JDK. A major version of Eclipse Temurin will be supported for a minimum of six years from the time that version is first introduced. For more information, see the Eclipse Temurin Life Cycle and Support Policy . Note RHEL 6 reached the end of life in November 2020. Because of this, Eclipse Temurin does not support RHEL 6 as a supported configuration. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/eclipse_temurin_8.0.422_release_notes/openjdk8-temurin-support-policy |
Chapter 2. Using OpenID Connect to secure applications and services | Chapter 2. Using OpenID Connect to secure applications and services This section describes how you can secure applications and services with OpenID Connect using either Red Hat Single Sign-On adapters or generic OpenID Connect Relying Party libraries. 2.1. Java adapters Red Hat Single Sign-On comes with a range of different adapters for Java application. Selecting the correct adapter depends on the target platform. All Java adapters share a set of common configuration options described in the Java Adapters Config chapter. 2.1.1. Java adapter configuration Each Java adapter supported by Red Hat Single Sign-On can be configured by a simple JSON file. This is what one might look like: { "realm" : "demo", "resource" : "customer-portal", "realm-public-key" : "MIGfMA0GCSqGSIb3D...31LwIDAQAB", "auth-server-url" : "https://localhost:8443/auth", "ssl-required" : "external", "use-resource-role-mappings" : false, "enable-cors" : true, "cors-max-age" : 1000, "cors-allowed-methods" : "POST, PUT, DELETE, GET", "cors-exposed-headers" : "WWW-Authenticate, My-custom-exposed-Header", "bearer-only" : false, "enable-basic-auth" : false, "expose-token" : true, "verify-token-audience" : true, "credentials" : { "secret" : "234234-234234-234234" }, "connection-pool-size" : 20, "socket-timeout-millis" : 5000, "connection-timeout-millis" : 6000, "connection-ttl-millis" : 500, "disable-trust-manager" : false, "allow-any-hostname" : false, "truststore" : "path/to/truststore.jks", "truststore-password" : "geheim", "client-keystore" : "path/to/client-keystore.jks", "client-keystore-password" : "geheim", "client-key-password" : "geheim", "token-minimum-time-to-live" : 10, "min-time-between-jwks-requests" : 10, "public-key-cache-ttl" : 86400, "redirect-rewrite-rules" : { "^/wsmaster/api/(.*)USD" : "/api/USD1" } } You can use USD{... } enclosure for system property replacement. For example USD{jboss.server.config.dir} would be replaced by /path/to/Red Hat Single Sign-On . Replacement of environment variables is also supported via the env prefix, for example USD{env.MY_ENVIRONMENT_VARIABLE} . The initial config file can be obtained from the admin console. This can be done by opening the admin console, select Clients from the menu and clicking on the corresponding client. Once the page for the client is opened click on the Installation tab and select Keycloak OIDC JSON . Here is a description of each configuration option: realm Name of the realm. This is REQUIRED. resource The client-id of the application. Each application has a client-id that is used to identify the application. This is REQUIRED. realm-public-key PEM format of the realm public key. You can obtain this from the Admin Console. This is OPTIONAL and it's not recommended to set it. If not set, the adapter will download this from Red Hat Single Sign-On and it will always re-download it when needed (eg. Red Hat Single Sign-On rotates its keys). However if realm-public-key is set, then adapter will never download new keys from Red Hat Single Sign-On, so when Red Hat Single Sign-On rotate it's keys, adapter will break. auth-server-url The base URL of the Red Hat Single Sign-On server. All other Red Hat Single Sign-On pages and REST service endpoints are derived from this. It is usually of the form https://host:port/auth . This is REQUIRED. ssl-required Ensures that all communication to and from the Red Hat Single Sign-On server is over HTTPS. In production this should be set to all . 
This is OPTIONAL . The default value is external meaning that HTTPS is required by default for external requests. Valid values are 'all', 'external' and 'none'. confidential-port The confidential port used by the Red Hat Single Sign-On server for secure connections over SSL/TLS. This is OPTIONAL . The default value is 8443 . use-resource-role-mappings If set to true, the adapter will look inside the token for application level role mappings for the user. If false, it will look at the realm level for user role mappings. This is OPTIONAL . The default value is false . public-client If set to true, the adapter will not send credentials for the client to Red Hat Single Sign-On. This is OPTIONAL . The default value is false . enable-cors This enables CORS support. It will handle CORS preflight requests. It will also look into the access token to determine valid origins. This is OPTIONAL . The default value is false . cors-max-age If CORS is enabled, this sets the value of the Access-Control-Max-Age header. This is OPTIONAL . If not set, this header is not returned in CORS responses. cors-allowed-methods If CORS is enabled, this sets the value of the Access-Control-Allow-Methods header. This should be a comma-separated string. This is OPTIONAL . If not set, this header is not returned in CORS responses. cors-allowed-headers If CORS is enabled, this sets the value of the Access-Control-Allow-Headers header. This should be a comma-separated string. This is OPTIONAL . If not set, this header is not returned in CORS responses. cors-exposed-headers If CORS is enabled, this sets the value of the Access-Control-Expose-Headers header. This should be a comma-separated string. This is OPTIONAL . If not set, this header is not returned in CORS responses. bearer-only This should be set to true for services. If enabled the adapter will not attempt to authenticate users, but only verify bearer tokens. This is OPTIONAL . The default value is false . autodetect-bearer-only This should be set to true if your application serves both a web application and web services (for example SOAP or REST). It allows you to redirect unauthenticated users of the web application to the Red Hat Single Sign-On login page, but send an HTTP 401 status code to unauthenticated SOAP or REST clients instead as they would not understand a redirect to the login page. Red Hat Single Sign-On auto-detects SOAP or REST clients based on typical headers like X-Requested-With , SOAPAction or Accept . The default value is false . enable-basic-auth This tells the adapter to also support basic authentication. If this option is enabled, then secret must also be provided. This is OPTIONAL . The default value is false . expose-token If true , an authenticated browser client (via a JavaScript HTTP invocation) can obtain the signed access token via the URL root/k_query_bearer_token . This is OPTIONAL . The default value is false . credentials Specify the credentials of the application. This is an object notation where the key is the credential type and the value is the value of the credential type. Currently password and jwt is supported. This is REQUIRED only for clients with 'Confidential' access type. connection-pool-size This config option defines how many connections to the Red Hat Single Sign-On server should be pooled. This is OPTIONAL . The default value is 20 . socket-timeout-millis Timeout for socket waiting for data after establishing the connection in milliseconds. Maximum time of inactivity between two data packets. 
A timeout value of zero is interpreted as an infinite timeout. A negative value is interpreted as undefined (system default if applicable). The default value is -1 . This is OPTIONAL . connection-timeout-millis Timeout for establishing the connection with the remote host in milliseconds. A timeout value of zero is interpreted as an infinite timeout. A negative value is interpreted as undefined (system default if applicable). The default value is -1 . This is OPTIONAL . connection-ttl-millis Connection time-to-live for client in milliseconds. A value less than or equal to zero is interpreted as an infinite value. The default value is -1 . This is OPTIONAL . disable-trust-manager If the Red Hat Single Sign-On server requires HTTPS and this config option is set to true you do not have to specify a truststore. This setting should only be used during development and never in production as it will disable verification of SSL certificates. This is OPTIONAL . The default value is false . allow-any-hostname If the Red Hat Single Sign-On server requires HTTPS and this config option is set to true the Red Hat Single Sign-On server's certificate is validated via the truststore, but host name validation is not done. This setting should only be used during development and never in production as it will disable verification of SSL certificates. This seting may be useful in test environments This is OPTIONAL . The default value is false . proxy-url The URL for the HTTP proxy if one is used. truststore The value is the file path to a truststore file. If you prefix the path with classpath: , then the truststore will be obtained from the deployment's classpath instead. Used for outgoing HTTPS communications to the Red Hat Single Sign-On server. Client making HTTPS requests need a way to verify the host of the server they are talking to. This is what the trustore does. The keystore contains one or more trusted host certificates or certificate authorities. You can create this truststore by extracting the public certificate of the Red Hat Single Sign-On server's SSL keystore. This is REQUIRED unless ssl-required is none or disable-trust-manager is true . truststore-password Password for the truststore. This is REQUIRED if truststore is set and the truststore requires a password. client-keystore This is the file path to a keystore file. This keystore contains client certificate for two-way SSL when the adapter makes HTTPS requests to the Red Hat Single Sign-On server. This is OPTIONAL . client-keystore-password Password for the client keystore. This is REQUIRED if client-keystore is set. client-key-password Password for the client's key. This is REQUIRED if client-keystore is set. always-refresh-token If true , the adapter will refresh token in every request. Warning - when enabled this will result in a request to Red Hat Single Sign-On for every request to your application. register-node-at-startup If true , then adapter will send registration request to Red Hat Single Sign-On. It's false by default and useful only when application is clustered. See Application Clustering for details register-node-period Period for re-registration adapter to Red Hat Single Sign-On. Useful when application is clustered. See Application Clustering for details token-store Possible values are session and cookie . Default is session , which means that adapter stores account info in HTTP Session. Alternative cookie means storage of info in cookie. 
See Application Clustering for details token-cookie-path When using a cookie store, this option sets the path of the cookie used to store account info. If it's a relative path, then it is assumed that the application is running in a context root, and is interpreted relative to that context root. If it's an absolute path, then the absolute path is used to set the cookie path. Defaults to use paths relative to the context root. principal-attribute OpenID Connect ID Token attribute to populate the UserPrincipal name with. If token attribute is null, defaults to sub . Possible values are sub , preferred_username , email , name , nickname , given_name , family_name . turn-off-change-session-id-on-login The session id is changed by default on a successful login on some platforms to plug a security attack vector. Change this to true if you want to turn this off This is OPTIONAL . The default value is false . token-minimum-time-to-live Amount of time, in seconds, to preemptively refresh an active access token with the Red Hat Single Sign-On server before it expires. This is especially useful when the access token is sent to another REST client where it could expire before being evaluated. This value should never exceed the realm's access token lifespan. This is OPTIONAL . The default value is 0 seconds, so adapter will refresh access token just if it's expired. min-time-between-jwks-requests Amount of time, in seconds, specifying minimum interval between two requests to Red Hat Single Sign-On to retrieve new public keys. It is 10 seconds by default. Adapter will always try to download new public key when it recognize token with unknown kid . However it won't try it more than once per 10 seconds (by default). This is to avoid DoS when attacker sends lots of tokens with bad kid forcing adapter to send lots of requests to Red Hat Single Sign-On. public-key-cache-ttl Amount of time, in seconds, specifying maximum interval between two requests to Red Hat Single Sign-On to retrieve new public keys. It is 86400 seconds (1 day) by default. Adapter will always try to download new public key when it recognize token with unknown kid . If it recognize token with known kid , it will just use the public key downloaded previously. However at least once per this configured interval (1 day by default) will be new public key always downloaded even if the kid of token is already known. ignore-oauth-query-parameter Defaults to false , if set to true will turn off processing of the access_token query parameter for bearer token processing. Users will not be able to authenticate if they only pass in an access_token redirect-rewrite-rules If needed, specify the Redirect URI rewrite rule. This is an object notation where the key is the regular expression to which the Redirect URI is to be matched and the value is the replacement String. USD character can be used for backreferences in the replacement String. verify-token-audience If set to true , then during authentication with the bearer token, the adapter will verify whether the token contains this client name (resource) as an audience. The option is especially useful for services, which primarily serve requests authenticated by the bearer token. This is set to false by default, however for improved security, it is recommended to enable this. See Audience Support for more details about audience support. 2.1.2. JBoss EAP adapter You can install this adapter from a ZIP file or from an RPM. 
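Before moving on to installation, here is a quick recap of the options above: a minimal keycloak.json for a bearer-only REST service might look like the following sketch (the realm, resource, and URL values are illustrative and must match your own Red Hat Single Sign-On environment):
{
  "realm" : "demo",
  "resource" : "customer-service",
  "auth-server-url" : "https://localhost:8443/auth",
  "ssl-required" : "external",
  "bearer-only" : true,
  "verify-token-audience" : true,
  "token-minimum-time-to-live" : 10
}
Because the client is bearer-only, it only verifies incoming bearer tokens, so no credentials element is needed here; add one only if the client has the 'Confidential' access type.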
Installing JBoss EAP adapters from a ZIP file Installing JBoss EAP 7 Adapters from an RPM Installing JBoss EAP 6 Adapters from an RPM 2.1.3. Installing JBoss EAP adapters from a ZIP file To be able to secure WAR applications deployed on JBoss EAP, you must install and configure the Red Hat Single Sign-On adapter subsystem. You then have two options to secure your WARs. You can provide an adapter config file in your WAR and change the auth-method to KEYCLOAK within web.xml. Alternatively, you do not have to modify your WAR at all and you can secure it via the Red Hat Single Sign-On adapter subsystem configuration in the configuration file, such as standalone.xml . Both methods are described in this section. Adapters are available as a separate archive depending on what server version you are using. Procedure Install the adapter that applies to your application server from the Software Downloads site. Install on JBoss EAP 7: This ZIP archive contains JBoss Modules specific to the Red Hat Single Sign-On adapter. It also contains JBoss CLI scripts to configure the adapter subsystem. To configure the adapter subsystem, execute the appropriate command. Install on JBoss EAP 7.1 or newer if the server is not running. Note The offline script is not available for JBoss EAP 6.4. Install on JBoss EAP 7.1 or newer if the server is running. Note It is possible to use the legacy non-Elytron adapter on JBoss EAP 7.1 or newer as well, meaning you can use adapter-install-offline.cli. Note EAP supports OpenJDK 17 and Oracle JDK 17 since 7.4.CP7 and 7.4.CP8 respectively. Note that the new Java version makes the Elytron variant compulsory, so do not use the legacy adapter with JDK 17. Also, after running the adapter CLI file, execute the enable-elytron-se17.cli script provided by EAP. Both scripts are necessary to configure the Elytron adapter and remove the incompatible EAP subsystems. For more details, see this Security Configuration Changes article. Install on JBoss EAP 6.4 2.1.3.1. JBoss SSO JBoss EAP has built-in support for single sign-on for web applications deployed to the same JBoss EAP instance. This should not be enabled when using Red Hat Single Sign-On. 2.1.3.2. Securing a WAR This section describes how to secure a WAR directly by adding configuration and editing files within your WAR package. Procedure Create a keycloak.json adapter configuration file within the WEB-INF directory of your WAR. The format of this configuration file is described in the Java adapter configuration section. Set the auth-method to KEYCLOAK in web.xml . Use standard servlet security to specify role-based constraints on your URLs.
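For the step above that says to execute the appropriate command after unpacking the adapter ZIP, the CLI invocations typically take the following form. The Elytron script names are the ones shipped in the adapter ZIP's bin directory in recent versions; treat the exact file names as illustrative and check the unpacked archive for your version:
Server not running (JBoss EAP 7.1 or newer):
USD USDEAP_HOME/bin/jboss-cli.sh --file=USDEAP_HOME/bin/adapter-elytron-install-offline.cli
Server running (JBoss EAP 7.1 or newer):
USD USDEAP_HOME/bin/jboss-cli.sh -c --file=USDEAP_HOME/bin/adapter-elytron-install.cli
Legacy, non-Elytron variant (also used on JBoss EAP 6.4):
USD USDEAP_HOME/bin/jboss-cli.sh -c --file=USDEAP_HOME/bin/adapter-install.cli
When you run a script against a running server, you may need to reload the server afterwards for the new subsystem configuration to take effect. (The web.xml listing that follows belongs to the Securing a WAR procedure above, not to the installation step.)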
Here's an example: <web-app xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd" version="3.0"> <module-name>application</module-name> <security-constraint> <web-resource-collection> <web-resource-name>Admins</web-resource-name> <url-pattern>/admin/*</url-pattern> </web-resource-collection> <auth-constraint> <role-name>admin</role-name> </auth-constraint> <user-data-constraint> <transport-guarantee>CONFIDENTIAL</transport-guarantee> </user-data-constraint> </security-constraint> <security-constraint> <web-resource-collection> <web-resource-name>Customers</web-resource-name> <url-pattern>/customers/*</url-pattern> </web-resource-collection> <auth-constraint> <role-name>user</role-name> </auth-constraint> <user-data-constraint> <transport-guarantee>CONFIDENTIAL</transport-guarantee> </user-data-constraint> </security-constraint> <login-config> <auth-method>KEYCLOAK</auth-method> <realm-name>this is ignored currently</realm-name> </login-config> <security-role> <role-name>admin</role-name> </security-role> <security-role> <role-name>user</role-name> </security-role> </web-app> 2.1.3.3. Securing WARs via adapter subsystem You do not have to modify your WAR to secure it with Red Hat Single Sign-On. Instead you can externally secure it via the Red Hat Single Sign-On Adapter Subsystem. While you don't have to specify KEYCLOAK as an auth-method , you still have to define the security-constraints in web.xml . You do not, however, have to create a WEB-INF/keycloak.json file. The metadata is instead defined within server configuration ( standalone.xml ) in the Red Hat Single Sign-On subsystem definition. <extensions> <extension module="org.keycloak.keycloak-adapter-subsystem"/> </extensions> <profile> <subsystem xmlns="urn:jboss:domain:keycloak:1.1"> <secure-deployment name="WAR MODULE NAME.war"> <realm>demo</realm> <auth-server-url>http://localhost:8081/auth</auth-server-url> <ssl-required>external</ssl-required> <resource>customer-portal</resource> <credential name="secret">password</credential> </secure-deployment> </subsystem> </profile> The secure-deployment name attribute identifies the WAR you want to secure. Its value is the module-name defined in web.xml with .war appended. The rest of the configuration corresponds pretty much one to one with the keycloak.json configuration options defined in Java adapter configuration . The exception is the credential element. To make it easier for you, you can go to the Red Hat Single Sign-On Admin Console and go to the Client/Installation tab of the application this WAR is aligned with. It provides an example XML file you can cut and paste. If you have multiple deployments secured by the same realm you can share the realm configuration in a separate element. 
For example: <subsystem xmlns="urn:jboss:domain:keycloak:1.1"> <realm name="demo"> <auth-server-url>http://localhost:8080/auth</auth-server-url> <ssl-required>external</ssl-required> </realm> <secure-deployment name="customer-portal.war"> <realm>demo</realm> <resource>customer-portal</resource> <credential name="secret">password</credential> </secure-deployment> <secure-deployment name="product-portal.war"> <realm>demo</realm> <resource>product-portal</resource> <credential name="secret">password</credential> </secure-deployment> <secure-deployment name="database.war"> <realm>demo</realm> <resource>database-service</resource> <bearer-only>true</bearer-only> </secure-deployment> </subsystem> 2.1.3.4. Security domain The security context is propagated to the EJB tier automatically. 2.1.4. Installing JBoss EAP 7 adapters from an RPM Note With Red Hat Enterprise Linux 7, the term channel was replaced with the term repository. In these instructions only the term repository is used. Prerequisites You must subscribe to the JBoss EAP 7.4 repository before you can install the JBoss EAP 7 adapters from an RPM. Ensure that your Red Hat Enterprise Linux system is registered to your account using Red Hat Subscription Manager. For more information see the Red Hat Subscription Management documentation . If you are already subscribed to another JBoss EAP repository, you must unsubscribe from that repository first. For Red Hat Enterprise Linux 6, 7: Using Red Hat Subscription Manager, subscribe to the JBoss EAP 7.4 repository using the following command. Replace <RHEL_VERSION> with either 6 or 7 depending on your Red Hat Enterprise Linux version. USD sudo subscription-manager repos --enable=jb-eap-7-for-rhel-<RHEL_VERSION>-server-rpms For Red Hat Enterprise Linux 8: Using Red Hat Subscription Manager, subscribe to the JBoss EAP 7.4 repository using the following command: USD sudo subscription-manager repos --enable=jb-eap-7.4-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-baseos-rpms --enable=rhel-8-for-x86_64-appstream-rpms Procedure Install the JBoss EAP 7 adapters for OIDC based on your version of Red Hat Enterprise Linux. Install on Red Hat Enterprise Linux 6, 7: USD sudo yum install eap7-keycloak-adapter-sso7_6 Install on Red Hat Enterprise Linux 8: USD sudo dnf install eap7-keycloak-adapter-sso7_6 Note The default EAP_HOME path for the RPM installation is /opt/rh/eap7/root/usr/share/wildfly. Run the installation script for the OIDC module. USD USDEAP_HOME/bin/jboss-cli.sh -c --file=USDEAP_HOME/bin/adapter-install.cli Your installation is complete. 2.1.5. Installing JBoss EAP 6 adapters from an RPM Note With Red Hat Enterprise Linux 7, the term channel was replaced with the term repository. In these instructions only the term repository is used. You must subscribe to the JBoss EAP 6 repository before you can install the EAP 6 adapters from an RPM. Prerequisites Ensure that your Red Hat Enterprise Linux system is registered to your account using Red Hat Subscription Manager. For more information see the Red Hat Subscription Management documentation . If you are already subscribed to another JBoss EAP repository, you must unsubscribe from that repository first. Using Red Hat Subscription Manager, subscribe to the JBoss EAP 6 repository using the following command. Replace <RHEL_VERSION> with either 6 or 7 depending on your Red Hat Enterprise Linux version. 
USD sudo subscription-manager repos --enable=jb-eap-6-for-rhel-<RHEL_VERSION>-server-rpms Procedure Install the EAP 6 adapters for OIDC using the following command: USD sudo yum install keycloak-adapter-sso7_6-eap6 Note The default EAP_HOME path for the RPM installation is /opt/rh/eap6/root/usr/share/wildfly. Run the installation script for the OIDC module. USD USDEAP_HOME/bin/jboss-cli.sh -c --file=USDEAP_HOME/bin/adapter-install.cli Your installation is complete. 2.1.6. JBoss Fuse 6 adapter Red Hat Single Sign-On supports securing your web applications running inside JBoss Fuse 6 . Warning The only supported version of Fuse 6 is the latest release. If you use earlier versions of Fuse 6, it is possible that some functions will not work correctly. In particular, the Hawtio integration will not work with earlier versions of Fuse 6. Security for the following items is supported for Fuse: Classic WAR applications deployed on Fuse with Pax Web War Extender Servlets deployed on Fuse as OSGI services with Pax Web Whiteboard Extender Apache Camel Jetty endpoints running with the Camel Jetty component Apache CXF endpoints running on their own separate Jetty engine Apache CXF endpoints running on the default engine provided by the CXF servlet SSH and JMX admin access Hawtio administration console 2.1.6.1. Securing your web applications inside Fuse 6 You must first install the Red Hat Single Sign-On Karaf feature. you will need to perform the steps according to the type of application you want to secure. All referenced web applications require injecting the Red Hat Single Sign-On Jetty authenticator into the underlying Jetty server. The steps to achieve this depend on the application type. The details are described below. 2.1.6.2. Installing the Keycloak feature You must first install the keycloak feature in the JBoss Fuse environment. The keycloak feature includes the Fuse adapter and all third-party dependencies. You can install it either from the Maven repository or from an archive. 2.1.6.2.1. Installing from the Maven repository Prerequisites You must be online and have access to the Maven repository. For Red Hat Single Sign-On, configure a proper Maven repository, so you can install the artifacts. For more information see the JBoss Enterprise Maven repository page. Assuming the Maven repository is https://maven.repository.redhat.com/ga/ , add the following to the USDFUSE_HOME/etc/org.ops4j.pax.url.mvn.cfg file and add the repository to the list of supported repositories. For example: Procedure Start JBoss Fuse 6.3.0 Rollup 12 In the Karaf terminal type: You might also need to install the Jetty 9 feature: Ensure that the features were installed: 2.1.6.2.2. Installing from the ZIP bundle This installation option is useful if you are offline or do not want to use Maven to obtain the JAR files and other artifacts. Procedure Download the Red Hat Single Sign-On Fuse adapter ZIP archive from the Sotware Downloads site. Unzip it into the root directory of JBoss Fuse. The dependencies are then installed under the system directory. You can overwrite all existing jar files. Use this for JBoss Fuse 6.3.0 Rollup 12: Start Fuse and run these commands in the fuse/karaf terminal: Install the corresponding Jetty adapter. Since the artifacts are available directly in the JBoss Fuse system directory, you do not need to use the Maven repository. 2.1.6.3. 
Securing a Classic WAR application Procedure In the /WEB-INF/web.xml file, declare the necessary: security constraints in the <security-constraint> element login configuration in the <login-config> element security roles in the <security-role> element. For example: <?xml version="1.0" encoding="UTF-8"?> <web-app xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd" version="3.0"> <module-name>customer-portal</module-name> <welcome-file-list> <welcome-file>index.html</welcome-file> </welcome-file-list> <security-constraint> <web-resource-collection> <web-resource-name>Customers</web-resource-name> <url-pattern>/customers/*</url-pattern> </web-resource-collection> <auth-constraint> <role-name>user</role-name> </auth-constraint> </security-constraint> <login-config> <auth-method>BASIC</auth-method> <realm-name>does-not-matter</realm-name> </login-config> <security-role> <role-name>admin</role-name> </security-role> <security-role> <role-name>user</role-name> </security-role> </web-app> Add the jetty-web.xml file with the authenticator to the /WEB-INF/jetty-web.xml file. For example: <?xml version="1.0"?> <!DOCTYPE Configure PUBLIC "-//Mort Bay Consulting//DTD Configure//EN" "http://www.eclipse.org/jetty/configure_9_0.dtd"> <Configure class="org.eclipse.jetty.webapp.WebAppContext"> <Get name="securityHandler"> <Set name="authenticator"> <New class="org.keycloak.adapters.jetty.KeycloakJettyAuthenticator"> </New> </Set> </Get> </Configure> Within the /WEB-INF/ directory of your WAR, create a new file, keycloak.json. The format of this configuration file is described in the Java Adapters Config section. It is also possible to make this file available externally as described in Configuring the External Adapter . Ensure your WAR application imports org.keycloak.adapters.jetty and maybe some more packages in the META-INF/MANIFEST.MF file, under the Import-Package header. Using maven-bundle-plugin in your project properly generates OSGI headers in manifest. Note that "*" resolution for the package does not import the org.keycloak.adapters.jetty package, since it is not used by the application or the Blueprint or Spring descriptor, but is rather used in the jetty-web.xml file. The list of the packages to import might look like this: 2.1.6.3.1. Configuring the External Adapter If you do not want the keycloak.json adapter configuration file to be bundled inside your WAR application, but instead made available externally and loaded based on naming conventions, use this configuration method. To enable the functionality, add this section to your /WEB_INF/web.xml file: <context-param> <param-name>keycloak.config.resolver</param-name> <param-value>org.keycloak.adapters.osgi.PathBasedKeycloakConfigResolver</param-value> </context-param> That component uses keycloak.config or karaf.etc java properties to search for a base folder to locate the configuration. Then inside one of those folders it searches for a file called <your_web_context>-keycloak.json . So, for example, if your web application has context my-portal , then your adapter configuration is loaded from the USDFUSE_HOME/etc/my-portal-keycloak.json file. 2.1.6.4. Securing a servlet deployed as an OSGI Service You can use this method if you have a servlet class inside your OSGI bundled project that is not deployed as a classic WAR application. 
Fuse uses Pax Web Whiteboard Extender to deploy such servlets as web applications. Procedure Red Hat Single Sign-On provides org.keycloak.adapters.osgi.undertow.PaxWebIntegrationService , which allows injecting jetty-web.xml and configuring security constraints for your application. You need to declare such services in the OSGI-INF/blueprint/blueprint.xml file inside your application. Note that your servlet needs to depend on it. An example configuration: <?xml version="1.0" encoding="UTF-8"?> <blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.osgi.org/xmlns/blueprint/v1.0.0 http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd"> <!-- Using jetty bean just for the compatibility with other fuse services --> <bean id="servletConstraintMapping" class="org.eclipse.jetty.security.ConstraintMapping"> <property name="constraint"> <bean class="org.eclipse.jetty.util.security.Constraint"> <property name="name" value="cst1"/> <property name="roles"> <list> <value>user</value> </list> </property> <property name="authenticate" value="true"/> <property name="dataConstraint" value="0"/> </bean> </property> <property name="pathSpec" value="/product-portal/*"/> </bean> <bean id="keycloakPaxWebIntegration" class="org.keycloak.adapters.osgi.PaxWebIntegrationService" init-method="start" destroy-method="stop"> <property name="jettyWebXmlLocation" value="/WEB-INF/jetty-web.xml" /> <property name="bundleContext" ref="blueprintBundleContext" /> <property name="constraintMappings"> <list> <ref component-id="servletConstraintMapping" /> </list> </property> </bean> <bean id="productServlet" class="org.keycloak.example.ProductPortalServlet" depends-on="keycloakPaxWebIntegration"> </bean> <service ref="productServlet" interface="javax.servlet.Servlet"> <service-properties> <entry key="alias" value="/product-portal" /> <entry key="servlet-name" value="ProductServlet" /> <entry key="keycloak.config.file" value="/keycloak.json" /> </service-properties> </service> </blueprint> You might need to have the WEB-INF directory inside your project (even if your project is not a web application) and create the /WEB-INF/jetty-web.xml and /WEB-INF/keycloak.json files as in the Classic WAR application section. Note you don't need the web.xml file as the security-constraints are declared in the blueprint configuration file. The Import-Package in META-INF/MANIFEST.MF must contain at least these imports: 2.1.6.5. Securing an Apache Camel application You can secure Apache Camel endpoints implemented with the camel-jetty component by adding the securityHandler with KeycloakJettyAuthenticator and the proper security constraints injected. You can add the OSGI-INF/blueprint/blueprint.xml file to your Camel application with a similar configuration as below. The roles, security constraint mappings, and Red Hat Single Sign-On adapter configuration might differ slightly depending on your environment and needs. 
For example: <?xml version="1.0" encoding="UTF-8"?> <blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:camel="http://camel.apache.org/schema/blueprint" xsi:schemaLocation=" http://www.osgi.org/xmlns/blueprint/v1.0.0 http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd http://camel.apache.org/schema/blueprint http://camel.apache.org/schema/blueprint/camel-blueprint.xsd"> <bean id="kcAdapterConfig" class="org.keycloak.representations.adapters.config.AdapterConfig"> <property name="realm" value="demo"/> <property name="resource" value="admin-camel-endpoint"/> <property name="bearerOnly" value="true"/> <property name="authServerUrl" value="http://localhost:8080/auth" /> <property name="sslRequired" value="EXTERNAL"/> </bean> <bean id="keycloakAuthenticator" class="org.keycloak.adapters.jetty.KeycloakJettyAuthenticator"> <property name="adapterConfig" ref="kcAdapterConfig"/> </bean> <bean id="constraint" class="org.eclipse.jetty.util.security.Constraint"> <property name="name" value="Customers"/> <property name="roles"> <list> <value>admin</value> </list> </property> <property name="authenticate" value="true"/> <property name="dataConstraint" value="0"/> </bean> <bean id="constraintMapping" class="org.eclipse.jetty.security.ConstraintMapping"> <property name="constraint" ref="constraint"/> <property name="pathSpec" value="/*"/> </bean> <bean id="securityHandler" class="org.eclipse.jetty.security.ConstraintSecurityHandler"> <property name="authenticator" ref="keycloakAuthenticator" /> <property name="constraintMappings"> <list> <ref component-id="constraintMapping" /> </list> </property> <property name="authMethod" value="BASIC"/> <property name="realmName" value="does-not-matter"/> </bean> <bean id="sessionHandler" class="org.keycloak.adapters.jetty.spi.WrappingSessionHandler"> <property name="handler" ref="securityHandler" /> </bean> <bean id="helloProcessor" class="org.keycloak.example.CamelHelloProcessor" /> <camelContext id="blueprintContext" trace="false" xmlns="http://camel.apache.org/schema/blueprint"> <route id="httpBridge"> <from uri="jetty:http://0.0.0.0:8383/admin-camel-endpoint?handlers=sessionHandler&matchOnUriPrefix=true" /> <process ref="helloProcessor" /> <log message="The message from camel endpoint contains USD{body}"/> </route> </camelContext> </blueprint> The Import-Package in META-INF/MANIFEST.MF needs to contain these imports: 2.1.6.6. Camel RestDSL Camel RestDSL is a Camel feature used to define your REST endpoints in a fluent way. But you must still use specific implementation classes and provide instructions on how to integrate with Red Hat Single Sign-On. The way to configure the integration mechanism depends on the Camel component for which you configure your RestDSL-defined routes. The following example shows how to configure integration using the Jetty component, with references to some of the beans defined in Blueprint example. 
<bean id="securityHandlerRest" class="org.eclipse.jetty.security.ConstraintSecurityHandler"> <property name="authenticator" ref="keycloakAuthenticator" /> <property name="constraintMappings"> <list> <ref component-id="constraintMapping" /> </list> </property> <property name="authMethod" value="BASIC"/> <property name="realmName" value="does-not-matter"/> </bean> <bean id="sessionHandlerRest" class="org.keycloak.adapters.jetty.spi.WrappingSessionHandler"> <property name="handler" ref="securityHandlerRest" /> </bean> <camelContext id="blueprintContext" trace="false" xmlns="http://camel.apache.org/schema/blueprint"> <restConfiguration component="jetty" contextPath="/restdsl" port="8484"> <!--the link with Keycloak security handlers happens here--> <endpointProperty key="handlers" value="sessionHandlerRest"></endpointProperty> <endpointProperty key="matchOnUriPrefix" value="true"></endpointProperty> </restConfiguration> <rest path="/hello" > <description>Hello rest service</description> <get uri="/{id}" outType="java.lang.String"> <description>Just an helllo</description> <to uri="direct:justDirect" /> </get> </rest> <route id="justDirect"> <from uri="direct:justDirect"/> <process ref="helloProcessor" /> <log message="RestDSL correctly invoked USD{body}"/> <setBody> <constant>(__This second sentence is returned from a Camel RestDSL endpoint__)</constant> </setBody> </route> </camelContext> 2.1.6.7. Securing an Apache CXF endpoint on a separate Jetty engine Procedure To run your CXF endpoints secured by Red Hat Single Sign-On on separate Jetty engines, perform the following procedure. Add META-INF/spring/beans.xml to your application, and in it, declare httpj:engine-factory with Jetty SecurityHandler with injected KeycloakJettyAuthenticator . The configuration for a CFX JAX-WS application might resemble this one: <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:jaxws="http://cxf.apache.org/jaxws" xmlns:httpj="http://cxf.apache.org/transports/http-jetty/configuration" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://cxf.apache.org/jaxws http://cxf.apache.org/schemas/jaxws.xsd http://www.springframework.org/schema/osgi http://www.springframework.org/schema/osgi/spring-osgi.xsd http://cxf.apache.org/transports/http-jetty/configuration http://cxf.apache.org/schemas/configuration/http-jetty.xsd"> <import resource="classpath:META-INF/cxf/cxf.xml" /> <bean id="kcAdapterConfig" class="org.keycloak.representations.adapters.config.AdapterConfig"> <property name="realm" value="demo"/> <property name="resource" value="custom-cxf-endpoint"/> <property name="bearerOnly" value="true"/> <property name="authServerUrl" value="http://localhost:8080/auth" /> <property name="sslRequired" value="EXTERNAL"/> </bean> <bean id="keycloakAuthenticator" class="org.keycloak.adapters.jetty.KeycloakJettyAuthenticator"> <property name="adapterConfig"> <ref local="kcAdapterConfig" /> </property> </bean> <bean id="constraint" class="org.eclipse.jetty.util.security.Constraint"> <property name="name" value="Customers"/> <property name="roles"> <list> <value>user</value> </list> </property> <property name="authenticate" value="true"/> <property name="dataConstraint" value="0"/> </bean> <bean id="constraintMapping" class="org.eclipse.jetty.security.ConstraintMapping"> <property name="constraint" ref="constraint"/> <property name="pathSpec" 
value="/*"/> </bean> <bean id="securityHandler" class="org.eclipse.jetty.security.ConstraintSecurityHandler"> <property name="authenticator" ref="keycloakAuthenticator" /> <property name="constraintMappings"> <list> <ref local="constraintMapping" /> </list> </property> <property name="authMethod" value="BASIC"/> <property name="realmName" value="does-not-matter"/> </bean> <httpj:engine-factory bus="cxf" id="kc-cxf-endpoint"> <httpj:engine port="8282"> <httpj:handlers> <ref local="securityHandler" /> </httpj:handlers> <httpj:sessionSupport>true</httpj:sessionSupport> </httpj:engine> </httpj:engine-factory> <jaxws:endpoint implementor="org.keycloak.example.ws.ProductImpl" address="http://localhost:8282/ProductServiceCF" depends-on="kc-cxf-endpoint" /> </beans> For the CXF JAX-RS application, the only difference might be in the configuration of the endpoint dependent on engine-factory: <jaxrs:server serviceClass="org.keycloak.example.rs.CustomerService" address="http://localhost:8282/rest" depends-on="kc-cxf-endpoint"> <jaxrs:providers> <bean class="com.fasterxml.jackson.jaxrs.json.JacksonJsonProvider" /> </jaxrs:providers> </jaxrs:server> The Import-Package in META-INF/MANIFEST.MF must contain those imports: 2.1.6.8. Securing an Apache CXF endpoint on the default Jetty Engine Some services automatically come with deployed servlets on startup. One such service is the CXF servlet running in the http://localhost:8181/cxf context. Securing such endpoints can be complicated. One approach, which Red Hat Single Sign-On is currently using, is ServletReregistrationService, which undeploys a built-in servlet at startup, enabling you to redeploy it on a context secured by Red Hat Single Sign-On. The configuration file OSGI-INF/blueprint/blueprint.xml inside your application might resemble the one below. Note that it adds the JAX-RS customerservice endpoint, which is endpoint-specific to your application, but more importantly, secures the entire /cxf context. 
<?xml version="1.0" encoding="UTF-8"?> <blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:jaxrs="http://cxf.apache.org/blueprint/jaxrs" xsi:schemaLocation=" http://www.osgi.org/xmlns/blueprint/v1.0.0 http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd http://cxf.apache.org/blueprint/jaxrs http://cxf.apache.org/schemas/blueprint/jaxrs.xsd"> <!-- JAXRS Application --> <bean id="customerBean" class="org.keycloak.example.rs.CxfCustomerService" /> <jaxrs:server id="cxfJaxrsServer" address="/customerservice"> <jaxrs:providers> <bean class="com.fasterxml.jackson.jaxrs.json.JacksonJsonProvider" /> </jaxrs:providers> <jaxrs:serviceBeans> <ref component-id="customerBean" /> </jaxrs:serviceBeans> </jaxrs:server> <!-- Securing of whole /cxf context by unregister default cxf servlet from paxweb and re-register with applied security constraints --> <bean id="cxfConstraintMapping" class="org.eclipse.jetty.security.ConstraintMapping"> <property name="constraint"> <bean class="org.eclipse.jetty.util.security.Constraint"> <property name="name" value="cst1"/> <property name="roles"> <list> <value>user</value> </list> </property> <property name="authenticate" value="true"/> <property name="dataConstraint" value="0"/> </bean> </property> <property name="pathSpec" value="/cxf/*"/> </bean> <bean id="cxfKeycloakPaxWebIntegration" class="org.keycloak.adapters.osgi.PaxWebIntegrationService" init-method="start" destroy-method="stop"> <property name="bundleContext" ref="blueprintBundleContext" /> <property name="jettyWebXmlLocation" value="/WEB-INF/jetty-web.xml" /> <property name="constraintMappings"> <list> <ref component-id="cxfConstraintMapping" /> </list> </property> </bean> <bean id="defaultCxfReregistration" class="org.keycloak.adapters.osgi.ServletReregistrationService" depends-on="cxfKeycloakPaxWebIntegration" init-method="start" destroy-method="stop"> <property name="bundleContext" ref="blueprintBundleContext" /> <property name="managedServiceReference"> <reference interface="org.osgi.service.cm.ManagedService" filter="(service.pid=org.apache.cxf.osgi)" timeout="5000" /> </property> </bean> </blueprint> As a result, all other CXF services running on the default CXF HTTP destination are also secured. Similarly, when the application is undeployed, the entire /cxf context becomes unsecured as well. For this reason, using your own Jetty engine for your applications as described in Secure CXF Application on separate Jetty Engine then gives you more control over security for each individual application. The WEB-INF directory might need to be inside your project (even if your project is not a web application). You might also need to edit the /WEB-INF/jetty-web.xml and /WEB-INF/keycloak.json files in a similar way as in Classic WAR application . Note that you do not need the web.xml file as the security constraints are declared in the blueprint configuration file. The Import-Package in META-INF/MANIFEST.MF must contain these imports: 2.1.6.9. Securing Fuse Administration Services 2.1.6.9.1. Using SSH Authentication to Fuse Terminal Red Hat Single Sign-On mainly addresses use cases for authentication of web applications; however, if your other web services and applications are protected with Red Hat Single Sign-On, protecting non-web administration services such as SSH with Red Hat Single Sign-On credentials is a best pracrice. 
You can do this using the JAAS login module, which allows remote connection to Red Hat Single Sign-On and verifies credentials based on Resource Owner Password Credentials . To enable SSH authentication, perform the following procedure. Procedure In Red Hat Single Sign-On create a client (for example, ssh-jmx-admin-client ), which will be used for SSH authentication. This client needs to have Direct Access Grants Enabled selected to On . In the USDFUSE_HOME/etc/org.apache.karaf.shell.cfg file, update or specify this property: Add the USDFUSE_HOME/etc/keycloak-direct-access.json file with content similar to the following (based on your environment and Red Hat Single Sign-On client settings): { "realm": "demo", "resource": "ssh-jmx-admin-client", "ssl-required" : "external", "auth-server-url" : "http://localhost:8080/auth", "credentials": { "secret": "password" } } This file specifies the client application configuration, which is used by JAAS DirectAccessGrantsLoginModule from the keycloak JAAS realm for SSH authentication. Start Fuse and install the keycloak JAAS realm. The easiest way is to install the keycloak-jaas feature, which has the JAAS realm predefined. You can override the feature's predefined realm by using your own keycloak JAAS realm with higher ranking. For details see the JBoss Fuse documentation . Use these commands in the Fuse terminal: Log in using SSH as admin user by typing the following in the terminal: Log in with password password . Note On some later operating systems, you might also need to use the SSH command's -o option -o HostKeyAlgorithms=+ssh-dss because later SSH clients do not allow use of the ssh-dss algorithm, by default. However, by default, it is currently used in JBoss Fuse 6.3.0 Rollup 12. Note that the user needs to have realm role admin to perform all operations or another role to perform a subset of operations (for example, the viewer role that restricts the user to run only read-only Karaf commands). The available roles are configured in USDFUSE_HOME/etc/org.apache.karaf.shell.cfg or USDFUSE_HOME/etc/system.properties . 2.1.6.9.2. Using JMX authentication JMX authentication might be necessary if you want to use jconsole or another external tool to remotely connect to JMX through RMI. Otherwise it might be better to use hawt.io/jolokia, since the jolokia agent is installed in hawt.io by default. For more details see Hawtio Admin Console . Procedure In the USDFUSE_HOME/etc/org.apache.karaf.management.cfg file, change the jmxRealm property to: Install the keycloak-jaas feature and configure the USDFUSE_HOME/etc/keycloak-direct-access.json file as described in the SSH section above. In jconsole you can use a URL such as: and credentials: admin/password (based on the user with admin privileges according to your environment). 2.1.6.10. Securing the Hawtio Administration Console To secure the Hawtio Administration Console with Red Hat Single Sign-On, perform the following procedure. Procedure Add these properties to the USDFUSE_HOME/etc/system.properties file: Create a client in the Red Hat Single Sign-On Admin Console in your realm. For example, in the Red Hat Single Sign-On demo realm, create a client hawtio-client , specify public as the Access Type, and specify a redirect URI pointing to Hawtio: http://localhost:8181/hawtio/*. You must also have a corresponding Web Origin configured (in this case, http://localhost:8181). 
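For the first step of this procedure, which adds properties to the USDFUSE_HOME/etc/system.properties file, the Hawtio-related entries are the same ones shown later in this chapter for JBoss EAP. A sketch for Fuse, assuming the keycloak JAAS realm installed by the keycloak-jaas feature and using illustrative paths and values, might be:
hawtio.keycloakEnabled=true
hawtio.realm=keycloak
hawtio.roles=admin,viewer
hawtio.keycloakClientConfig=USD{karaf.etc}/keycloak-hawtio-client.json
hawtio.rolePrincipalClasses=org.keycloak.adapters.jaas.RolePrincipal,org.apache.karaf.jaas.boot.principal.RolePrincipal
Adjust the role list, role principal classes, and file locations to your environment; the client configuration file itself is created in the next step.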
Create the keycloak-hawtio-client.json file in the USDFUSE_HOME/etc directory using content similar to that shown in the example below. Change the realm , resource , and auth-server-url properties according to your Red Hat Single Sign-On environment. The resource property must point to the client created in the previous step. This file is used by the client (Hawtio JavaScript application) side. { "realm" : "demo", "resource" : "hawtio-client", "auth-server-url" : "http://localhost:8080/auth", "ssl-required" : "external", "public-client" : true } Create the keycloak-hawtio.json file in the USDFUSE_HOME/etc directory using content similar to that shown in the example below. Change the realm and auth-server-url properties according to your Red Hat Single Sign-On environment. This file is used by the adapters on the server (JAAS Login module) side. { "realm" : "demo", "resource" : "jaas", "bearer-only" : true, "auth-server-url" : "http://localhost:8080/auth", "ssl-required" : "external", "use-resource-role-mappings": false, "principal-attribute": "preferred_username" } Start JBoss Fuse 6.3.0 Rollup 12 and install the keycloak feature if you have not already done so. The commands in the Karaf terminal are similar to this example: Go to http://localhost:8181/hawtio and log in as a user from your Red Hat Single Sign-On realm. Note that the user needs to have the proper realm role to successfully authenticate to Hawtio. The available roles are configured in the USDFUSE_HOME/etc/system.properties file in hawtio.roles . 2.1.6.10.1. Securing Hawtio on JBoss EAP 6.4 Prerequisites Set up Red Hat Single Sign-On as described in Securing the Hawtio Administration Console . It is assumed that: you have a Red Hat Single Sign-On realm demo and client hawtio-client ; your Red Hat Single Sign-On is running on localhost:8080 ; and the JBoss EAP 6.4 server with deployed Hawtio will be running on localhost:8181 . The directory containing this server is referred to in these steps as USDEAP_HOME . Procedure Copy the hawtio-wildfly-1.4.0.redhat-630396.war archive to the USDEAP_HOME/standalone/configuration directory. For more details about deploying Hawtio see the Fuse Hawtio documentation . Copy the keycloak-hawtio.json and keycloak-hawtio-client.json files with the above content to the USDEAP_HOME/standalone/configuration directory. Install the Red Hat Single Sign-On adapter subsystem to your JBoss EAP 6.4 server as described in the JBoss adapter documentation . In the USDEAP_HOME/standalone/configuration/standalone.xml file, configure the system properties as in this example: <extensions> ...
</extensions> <system-properties> <property name="hawtio.authenticationEnabled" value="true" /> <property name="hawtio.realm" value="hawtio" /> <property name="hawtio.roles" value="admin,viewer" /> <property name="hawtio.rolePrincipalClasses" value="org.keycloak.adapters.jaas.RolePrincipal" /> <property name="hawtio.keycloakEnabled" value="true" /> <property name="hawtio.keycloakClientConfig" value="USD{jboss.server.config.dir}/keycloak-hawtio-client.json" /> <property name="hawtio.keycloakServerConfig" value="USD{jboss.server.config.dir}/keycloak-hawtio.json" /> </system-properties> Add the Hawtio realm to the same file in the security-domains section: <security-domain name="hawtio" cache-type="default"> <authentication> <login-module code="org.keycloak.adapters.jaas.BearerTokenLoginModule" flag="required"> <module-option name="keycloak-config-file" value="USD{hawtio.keycloakServerConfig}"/> </login-module> </authentication> </security-domain> Add the secure-deployment section hawtio to the adapter subsystem. This ensures that the Hawtio WAR is able to find the JAAS login module classes. <subsystem xmlns="urn:jboss:domain:keycloak:1.1"> <secure-deployment name="hawtio-wildfly-1.4.0.redhat-630396.war" /> </subsystem> Restart the JBoss EAP 6.4 server with Hawtio: cd USDEAP_HOME/bin ./standalone.sh -Djboss.socket.binding.port-offset=101 Access Hawtio at http://localhost:8181/hawtio . It is secured by Red Hat Single Sign-On. 2.1.7. JBoss Fuse 7 Adapter Red Hat Single Sign-On supports securing your web applications running inside JBoss Fuse 7 . JBoss Fuse 7 leverages Undertow adapter which is essentially the same as JBoss EAP 7 Adapter as JBoss Fuse 7.4.0 is bundled with the Undertow HTTP engine under the covers and Undertow is used for running various kinds of web applications. Warning The only supported version of Fuse 7 is the latest release. If you use earlier versions of Fuse 7, it is possible that some functions will not work correctly. In particular, integration will not work at all for versions of Fuse 7 lower than 7.12.0. Security for the following items is supported for Fuse: Classic WAR applications deployed on Fuse with Pax Web War Extender Servlets deployed on Fuse as OSGI services with Pax Web Whiteboard Extender and additionally servlets registered through org.osgi.service.http.HttpService#registerServlet() which is standard OSGi Enterprise HTTP Service Apache Camel Undertow endpoints running with the Camel Undertow component Apache CXF endpoints running on their own separate Undertow engine Apache CXF endpoints running on the default engine provided by the CXF servlet SSH and JMX admin access Hawtio administration console 2.1.7.1. Securing your web applications inside Fuse 7 You must first install the Red Hat Single Sign-On Karaf feature. you will need to perform the steps according to the type of application you want to secure. All referenced web applications require injecting the Red Hat Single Sign-On Undertow authentication mechanism into the underlying web server. The steps to achieve this depend on the application type. The details are described below. 2.1.7.2. Installing the Keycloak feature You must first install the keycloak-pax-http-undertow and keycloak-jaas features in the JBoss Fuse environment. The keycloak-pax-http-undertow feature includes the Fuse adapter and all third-party dependencies. The keycloak-jaas contains JAAS module used in realm for SSH and JMX authentication. You can install it either from the Maven repository or from an archive. 2.1.7.2.1. 
Installing from the Maven repository Prerequisites You must be online and have access to the Maven repository. For Red Hat Single Sign-On, configure a proper Maven repository, so you can install the artifacts. For more information see the JBoss Enterprise Maven repository page. Assuming the Maven repository is https://maven.repository.redhat.com/ga/ , add the following to the USDFUSE_HOME/etc/org.ops4j.pax.url.mvn.cfg file and add the repository to the list of supported repositories. For example: Procedure Start JBoss Fuse 7.4.0 In the Karaf terminal, type: You might also need to install the Undertow feature: Ensure that the features were installed: 2.1.7.2.2. Installing from the ZIP bundle This is useful if you are offline or do not want to use Maven to obtain the JAR files and other artifacts. Procedure Download the Red Hat Single Sign-On Fuse adapter ZIP archive from the Sotware Downloads site. Unzip it into the root directory of JBoss Fuse. The dependencies are then installed under the system directory. You can overwrite all existing jar files. Use this for JBoss Fuse 7.4.0: Start Fuse and run these commands in the fuse/karaf terminal: Install the corresponding Undertow adapter. Since the artifacts are available directly in the JBoss Fuse system directory, you do not need to use the Maven repository. 2.1.7.3. Securing a Classic WAR application Procedure In the /WEB-INF/web.xml file, declare the necessary: security constraints in the <security-constraint> element login configuration in the <login-config> element. Make sure that the <auth-method> is KEYCLOAK . security roles in the <security-role> element For example: <?xml version="1.0" encoding="UTF-8"?> <web-app xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd" version="3.0"> <module-name>customer-portal</module-name> <welcome-file-list> <welcome-file>index.html</welcome-file> </welcome-file-list> <security-constraint> <web-resource-collection> <web-resource-name>Customers</web-resource-name> <url-pattern>/customers/*</url-pattern> </web-resource-collection> <auth-constraint> <role-name>user</role-name> </auth-constraint> </security-constraint> <login-config> <auth-method>KEYCLOAK</auth-method> <realm-name>does-not-matter</realm-name> </login-config> <security-role> <role-name>admin</role-name> </security-role> <security-role> <role-name>user</role-name> </security-role> </web-app> Within the /WEB-INF/ directory of your WAR, create a new file, keycloak.json. The format of this configuration file is described in the Java Adapters Config section. It is also possible to make this file available externally as described in Configuring the External Adapter . For example: { "realm": "demo", "resource": "customer-portal", "auth-server-url": "http://localhost:8080/auth", "ssl-required" : "external", "credentials": { "secret": "password" } } Contrary to the Fuse 6 adapter, there are no special OSGi imports needed in MANIFEST.MF. 2.1.7.3.1. Configuration resolvers The keycloak.json adapter configuration file can be stored inside a bundle, which is default behaviour, or in a directory on a filesystem. To specify the actual source of the configuration file, set the keycloak.config.resolver deployment parameter to the desired configuration resolver class. 
For example, in a classic WAR application, set the keycloak.config.resolver context parameter in web.xml file like this: <context-param> <param-name>keycloak.config.resolver</param-name> <param-value>org.keycloak.adapters.osgi.PathBasedKeycloakConfigResolver</param-value> </context-param> The following resolvers are available for keycloak.config.resolver : org.keycloak.adapters.osgi.BundleBasedKeycloakConfigResolver This is the default resolver. The configuration file is expected inside the OSGi bundle that is being secured. By default, it loads file named WEB-INF/keycloak.json but this file name can be configured via configLocation property. org.keycloak.adapters.osgi.PathBasedKeycloakConfigResolver This resolver searches for a file called <your_web_context>-keycloak.json inside a folder that is specified by keycloak.config system property. If keycloak.config is not set, karaf.etc system property is used instead. For example, if your web application is deployed into context my-portal , then your adapter configuration would be loaded either from the USD{keycloak.config}/my-portal-keycloak.json file, or from USD{karaf.etc}/my-portal-keycloak.json . org.keycloak.adapters.osgi.HierarchicalPathBasedKeycloakConfigResolver This resolver is similar to PathBasedKeycloakConfigResolver above, where for given URI path, configuration locations are checked from most to least specific. For example, for /my/web-app/context URI, the following configuration locations are searched for existence until the first one exists: USD{karaf.etc}/my-web-app-context-keycloak.json USD{karaf.etc}/my-web-app-keycloak.json USD{karaf.etc}/my-keycloak.json USD{karaf.etc}/keycloak.json 2.1.7.4. Securing a servlet deployed as an OSGI Service You can use this method if you have a servlet class inside your OSGI bundled project that is not deployed as a classic WAR application. Fuse uses Pax Web Whiteboard Extender to deploy such servlets as web applications. Procedure Red Hat Single Sign-On provides org.keycloak.adapters.osgi.undertow.PaxWebIntegrationService , which allows configuring authentication method and security constraints for your application. You need to declare such services in the OSGI-INF/blueprint/blueprint.xml file inside your application. Note that your servlet needs to depend on it. 
An example configuration: <?xml version="1.0" encoding="UTF-8"?> <blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.osgi.org/xmlns/blueprint/v1.0.0 http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd"> <bean id="servletConstraintMapping" class="org.keycloak.adapters.osgi.PaxWebSecurityConstraintMapping"> <property name="roles"> <list> <value>user</value> </list> </property> <property name="authentication" value="true"/> <property name="url" value="/product-portal/*"/> </bean> <!-- This handles the integration and setting the login-config and security-constraints parameters --> <bean id="keycloakPaxWebIntegration" class="org.keycloak.adapters.osgi.undertow.PaxWebIntegrationService" init-method="start" destroy-method="stop"> <property name="bundleContext" ref="blueprintBundleContext" /> <property name="constraintMappings"> <list> <ref component-id="servletConstraintMapping" /> </list> </property> </bean> <bean id="productServlet" class="org.keycloak.example.ProductPortalServlet" depends-on="keycloakPaxWebIntegration" /> <service ref="productServlet" interface="javax.servlet.Servlet"> <service-properties> <entry key="alias" value="/product-portal" /> <entry key="servlet-name" value="ProductServlet" /> <entry key="keycloak.config.file" value="/keycloak.json" /> </service-properties> </service> </blueprint> You might need to have the WEB-INF directory inside your project (even if your project is not a web application) and create the /WEB-INF/keycloak.json file as described in the Classic WAR application section. Note you don't need the web.xml file as the security-constraints are declared in the blueprint configuration file. Contrary to the Fuse 6 adapter, there are no special OSGi imports needed in MANIFEST.MF. 2.1.7.5. Securing an Apache Camel application You can secure Apache Camel endpoints implemented with the camel-undertow component by injecting the proper security constraints via blueprint and updating the used component to undertow-keycloak . You have to add the OSGI-INF/blueprint/blueprint.xml file to your Camel application with a similar configuration as below. The roles, security constraint mappings, and adapter configuration might differ slightly depending on your environment and needs. Compared to the standard undertow component, undertow-keycloak component adds two new properties: configResolver is a resolver bean that supplies Red Hat Single Sign-On adapter configuration. Available resolvers are listed in Configuration Resolvers section. allowedRoles is a comma-separated list of roles. User accessing the service has to have at least one role to be permitted the access. 
For example: <?xml version="1.0" encoding="UTF-8"?> <blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:camel="http://camel.apache.org/schema/blueprint" xsi:schemaLocation=" http://www.osgi.org/xmlns/blueprint/v1.0.0 http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd http://camel.apache.org/schema/blueprint http://camel.apache.org/schema/blueprint/camel-blueprint-2.17.1.xsd"> <bean id="keycloakConfigResolver" class="org.keycloak.adapters.osgi.BundleBasedKeycloakConfigResolver" > <property name="bundleContext" ref="blueprintBundleContext" /> </bean> <bean id="helloProcessor" class="org.keycloak.example.CamelHelloProcessor" /> <camelContext id="blueprintContext" trace="false" xmlns="http://camel.apache.org/schema/blueprint"> <route id="httpBridge"> <from uri="undertow-keycloak:http://0.0.0.0:8383/admin-camel-endpoint?matchOnUriPrefix=true&configResolver=#keycloakConfigResolver&allowedRoles=admin" /> <process ref="helloProcessor" /> <log message="The message from camel endpoint contains USD{body}"/> </route> </camelContext> </blueprint> The Import-Package in META-INF/MANIFEST.MF needs to contain these imports: 2.1.7.6. Camel RestDSL Camel RestDSL is a Camel feature used to define your REST endpoints in a fluent way. But you must still use specific implementation classes and provide instructions on how to integrate with Red Hat Single Sign-On. The way to configure the integration mechanism depends on the Camel component for which you configure your RestDSL-defined routes. The following example shows how to configure integration using the undertow-keycloak component, with references to some of the beans defined in Blueprint example. <camelContext id="blueprintContext" trace="false" xmlns="http://camel.apache.org/schema/blueprint"> <!--the link with Keycloak security handlers happens by using undertow-keycloak component --> <restConfiguration apiComponent="undertow-keycloak" contextPath="/restdsl" port="8484"> <endpointProperty key="configResolver" value="#keycloakConfigResolver" /> <endpointProperty key="allowedRoles" value="admin,superadmin" /> </restConfiguration> <rest path="/hello" > <description>Hello rest service</description> <get uri="/{id}" outType="java.lang.String"> <description>Just a hello</description> <to uri="direct:justDirect" /> </get> </rest> <route id="justDirect"> <from uri="direct:justDirect"/> <process ref="helloProcessor" /> <log message="RestDSL correctly invoked USD{body}"/> <setBody> <constant>(__This second sentence is returned from a Camel RestDSL endpoint__)</constant> </setBody> </route> </camelContext> 2.1.7.7. Securing an Apache CXF endpoint on a separate Undertow Engine To run your CXF endpoints secured by Red Hat Single Sign-On on a separate Undertow engine, perform the following procedure. Procedure Add OSGI-INF/blueprint/blueprint.xml to your application, and in it, add the proper configuration resolver bean similarly to Camel configuration . In the httpu:engine-factory declare org.keycloak.adapters.osgi.undertow.CxfKeycloakAuthHandler handler using that camel configuration. The configuration for a CFX JAX-WS application might resemble this one: <?xml version="1.0" encoding="UTF-8"?> <blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:jaxws="http://cxf.apache.org/blueprint/jaxws" xmlns:cxf="http://cxf.apache.org/blueprint/core" xmlns:httpu="http://cxf.apache.org/transports/http-undertow/configuration". 
xsi:schemaLocation=" http://cxf.apache.org/transports/http-undertow/configuration http://cxf.apache.org/schemas/configuration/http-undertow.xsd http://cxf.apache.org/blueprint/core http://cxf.apache.org/schemas/blueprint/core.xsd http://cxf.apache.org/blueprint/jaxws http://cxf.apache.org/schemas/blueprint/jaxws.xsd"> <bean id="keycloakConfigResolver" class="org.keycloak.adapters.osgi.BundleBasedKeycloakConfigResolver" > <property name="bundleContext" ref="blueprintBundleContext" /> </bean> <httpu:engine-factory bus="cxf" id="kc-cxf-endpoint"> <httpu:engine port="8282"> <httpu:handlers> <bean class="org.keycloak.adapters.osgi.undertow.CxfKeycloakAuthHandler"> <property name="configResolver" ref="keycloakConfigResolver" /> </bean> </httpu:handlers> </httpu:engine> </httpu:engine-factory> <jaxws:endpoint implementor="org.keycloak.example.ws.ProductImpl" address="http://localhost:8282/ProductServiceCF" depends-on="kc-cxf-endpoint"/> </blueprint> For the CXF JAX-RS application, the only difference might be in the configuration of the endpoint dependent on engine-factory: <jaxrs:server serviceClass="org.keycloak.example.rs.CustomerService" address="http://localhost:8282/rest" depends-on="kc-cxf-endpoint"> <jaxrs:providers> <bean class="com.fasterxml.jackson.jaxrs.json.JacksonJsonProvider" /> </jaxrs:providers> </jaxrs:server> The Import-Package in META-INF/MANIFEST.MF must contain those imports: 2.1.7.8. Securing an Apache CXF endpoint on the default Undertow Engine Some services automatically come with deployed servlets on startup. One such service is the CXF servlet running in the http://localhost:8181/cxf context. Fuse's Pax Web supports altering existing contexts via configuration admin. This can be used to secure endpoints by Red Hat Single Sign-On. The configuration file OSGI-INF/blueprint/blueprint.xml inside your application might resemble the one below. Note that it adds the JAX-RS customerservice endpoint, which is endpoint-specific to your application. <?xml version="1.0" encoding="UTF-8"?> <blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:jaxrs="http://cxf.apache.org/blueprint/jaxrs" xsi:schemaLocation=" http://www.osgi.org/xmlns/blueprint/v1.0.0 http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd http://cxf.apache.org/blueprint/jaxrs http://cxf.apache.org/schemas/blueprint/jaxrs.xsd"> <!-- JAXRS Application --> <bean id="customerBean" class="org.keycloak.example.rs.CxfCustomerService" /> <jaxrs:server id="cxfJaxrsServer" address="/customerservice"> <jaxrs:providers> <bean class="com.fasterxml.jackson.jaxrs.json.JacksonJsonProvider" /> </jaxrs:providers> <jaxrs:serviceBeans> <ref component-id="customerBean" /> </jaxrs:serviceBeans> </jaxrs:server> </blueprint> Furthermore, you have to create USD{karaf.etc}/org.ops4j.pax.web.context- anyName .cfg file . It will be treated as factory PID configuration that is tracked by pax-web-runtime bundle. Such configuration may contain the following properties that correspond to some of the properties of standard web.xml : For full description of available properties in configuration admin file, please refer to Fuse documentation. The properties above have the following meaning: bundle.symbolicName and context.id Identification of the bundle and its deployment context within org.ops4j.pax.web.service.WebContainer . 
context.param.keycloak.config.resolver Provides the value of the keycloak.config.resolver context parameter to the bundle, just as in web.xml for classic WARs. Available resolvers are described in the Configuration Resolvers section. login.config.authMethod Authentication method. Must be KEYCLOAK . security. anyName .url and security. anyName .roles Values of the individual security constraint properties, just as they would be set in security-constraint/web-resource-collection/url-pattern and security-constraint/auth-constraint/role-name in web.xml , respectively. Roles are separated by a comma and optional surrounding whitespace. The anyName identifier can be arbitrary but must match for individual properties of the same security constraint. Note Some Fuse versions contain a bug that requires roles to be separated by ", " (comma and single space). Make sure you use precisely this notation for separating the roles. The Import-Package in META-INF/MANIFEST.MF must contain at least these imports:
2.1.7.9. Securing Fuse Administration Services
2.1.7.9.1. Using SSH Authentication to Fuse Terminal
Red Hat Single Sign-On mainly addresses use cases for authentication of web applications; however, if your other web services and applications are protected with Red Hat Single Sign-On, protecting non-web administration services such as SSH with Red Hat Single Sign-On credentials is a best practice. You can do this using the JAAS login module, which allows remote connection to Red Hat Single Sign-On and verifies credentials based on Resource Owner Password Credentials . To enable SSH authentication, perform the following procedure. Procedure In Red Hat Single Sign-On create a client (for example, ssh-jmx-admin-client ), which will be used for SSH authentication. This client needs to have Direct Access Grants Enabled set to On . In the USDFUSE_HOME/etc/org.apache.karaf.shell.cfg file, update or specify this property: Add the USDFUSE_HOME/etc/keycloak-direct-access.json file with content similar to the following (based on your environment and Red Hat Single Sign-On client settings):
{ "realm": "demo", "resource": "ssh-jmx-admin-client", "ssl-required" : "external", "auth-server-url" : "http://localhost:8080/auth", "credentials": { "secret": "password" } }
This file specifies the client application configuration, which is used by the JAAS DirectAccessGrantsLoginModule from the keycloak JAAS realm for SSH authentication. Start Fuse and install the keycloak JAAS realm. The easiest way is to install the keycloak-jaas feature, which has the JAAS realm predefined. You can override the feature's predefined realm by using your own keycloak JAAS realm with higher ranking. For details see the JBoss Fuse documentation . Use these commands in the Fuse terminal: Log in using SSH as the admin user by typing the following in the terminal: Log in with password password . Note On some newer operating systems, you might also need to pass the SSH option -o HostKeyAlgorithms=+ssh-dss , because newer SSH clients do not allow the ssh-dss algorithm by default, while JBoss Fuse 7.4.0 still uses it by default. Note that the user needs to have the realm role admin to perform all operations, or another role to perform a subset of operations (for example, the viewer role that restricts the user to run only read-only Karaf commands). The available roles are configured in USDFUSE_HOME/etc/org.apache.karaf.shell.cfg or USDFUSE_HOME/etc/system.properties .
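For reference, the pieces of the SSH procedure above that are referenced but not shown inline typically look as follows. This is only a sketch that assumes a default Fuse installation: the sshRealm property name and the SSH port 8101 are Fuse defaults and may differ in your environment.
# USDFUSE_HOME/etc/org.apache.karaf.shell.cfg -- point the SSH realm at the keycloak JAAS realm
sshRealm=keycloak
# Connect to the Fuse terminal over SSH (default Karaf SSH port is 8101)
ssh -p 8101 admin@localhost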
2.1.7.9.2. Using JMX authentication
JMX authentication might be necessary if you want to use jconsole or another external tool to remotely connect to JMX through RMI. Otherwise it might be better to use hawt.io/jolokia, since the jolokia agent is installed in hawt.io by default. For more details see Hawtio Admin Console . To use JMX authentication, perform the following procedure. Procedure In the USDFUSE_HOME/etc/org.apache.karaf.management.cfg file, change the jmxRealm property to: Install the keycloak-jaas feature and configure the USDFUSE_HOME/etc/keycloak-direct-access.json file as described in the SSH section above. In jconsole you can use a URL such as: and credentials: admin/password (based on the user with admin privileges according to your environment).
2.1.7.10. Securing the Hawtio Administration Console
To secure the Hawtio Administration Console with Red Hat Single Sign-On, perform the following procedure. Procedure Create a client in the Red Hat Single Sign-On Admin Console in your realm. For example, in the Red Hat Single Sign-On demo realm, create a client hawtio-client , specify public as the Access Type, and specify a redirect URI pointing to Hawtio: http://localhost:8181/hawtio/*. Configure the corresponding Web Origin (in this case, http://localhost:8181). Set up the client scope mapping to include the view-profile client role of the account client in the Scope tab of the hawtio-client client detail. Create the keycloak-hawtio-client.json file in the USDFUSE_HOME/etc directory using content similar to that shown in the example below. Change the realm , resource , and auth-server-url properties according to your Red Hat Single Sign-On environment. The resource property must point to the client created earlier in this procedure. This file is used by the client (Hawtio JavaScript application) side.
{ "realm" : "demo", "clientId" : "hawtio-client", "url" : "http://localhost:8080/auth", "ssl-required" : "external", "public-client" : true }
Create the keycloak-direct-access.json file in the USDFUSE_HOME/etc directory using content similar to that shown in the example below. Change the realm and url properties according to your Red Hat Single Sign-On environment. This file is used by the JavaScript client.
{ "realm" : "demo", "resource" : "ssh-jmx-admin-client", "auth-server-url" : "http://localhost:8080/auth", "ssl-required" : "external", "credentials": { "secret": "password" } }
Create the keycloak-hawtio.json file in the USDFUSE_HOME/etc directory using content similar to that shown in the example below. Change the realm and auth-server-url properties according to your Red Hat Single Sign-On environment. This file is used by the adapters on the server (JAAS Login module) side.
{ "realm" : "demo", "resource" : "jaas", "bearer-only" : true, "auth-server-url" : "http://localhost:8080/auth", "ssl-required" : "external", "use-resource-role-mappings": false, "principal-attribute": "preferred_username" }
Start JBoss Fuse 7.4.0, install the Keycloak feature . Then type in the Karaf terminal: Go to http://localhost:8181/hawtio and log in as a user from your Red Hat Single Sign-On realm. Note that the user needs to have the proper realm role to successfully authenticate to Hawtio. The available roles are configured in the USDFUSE_HOME/etc/system.properties file in hawtio.roles .
2.1.8. Spring Boot adapter
Note The Spring Boot Adapter is deprecated and will not be included in the 8.0 and higher versions of RH-SSO. This adapter will be maintained during the lifecycle of RH-SSO 7.x.
Users are urged to migrate to Spring Security to integrate their Spring Boot applications with RH-SSO.
2.1.8.1. Installing the Spring Boot adapter
To be able to secure Spring Boot apps you must add the Keycloak Spring Boot adapter JAR to your app. You then have to provide some extra configuration via normal Spring Boot configuration ( application.properties ). The Keycloak Spring Boot adapter takes advantage of Spring Boot's autoconfiguration, so all you need to do is add the Keycloak Spring Boot starter to your project. Procedure To add the starter to your project using Maven, add the following to your dependencies:
<dependency> <groupId>org.keycloak</groupId> <artifactId>keycloak-spring-boot-starter</artifactId> </dependency>
Add the Adapter BOM dependency:
<dependencyManagement> <dependencies> <dependency> <groupId>org.keycloak.bom</groupId> <artifactId>keycloak-adapter-bom</artifactId> <version>18.0.18.redhat-00001</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement>
Currently the following embedded containers are supported and do not require any extra dependencies if using the Starter: Tomcat Undertow Jetty
2.1.8.2. Configuring the Spring Boot Adapter
Use this procedure to configure your Spring Boot app to use Red Hat Single Sign-On. Procedure Instead of a keycloak.json file, you configure the realm for the Spring Boot adapter via the normal Spring Boot configuration. For example: You can disable the Keycloak Spring Boot Adapter (for example in tests) by setting keycloak.enabled = false . To configure a Policy Enforcer, unlike keycloak.json, use policy-enforcer-config instead of just policy-enforcer . Specify the Jakarta EE security config that would normally go in the web.xml . The Spring Boot Adapter will set the login-method to KEYCLOAK and configure the security-constraints at startup time. Here's an example configuration: Warning If you plan to deploy your Spring Application as a WAR then you should not use the Spring Boot Adapter; instead, use the dedicated adapter for the application server or servlet container you are using. Your Spring Boot application should also contain a web.xml file.
2.1.9. Java servlet filter adapter
If you are deploying your Java Servlet application on a platform where there is no Red Hat Single Sign-On adapter, you can opt to use the servlet filter adapter. This adapter works a bit differently than the other adapters. You do not define security constraints in web.xml. Instead you define a filter mapping using the Red Hat Single Sign-On servlet filter adapter to secure the URL patterns you want to secure. Warning Backchannel logout works a bit differently than in the standard adapters. Instead of invalidating the HTTP session it marks the session ID as logged out. There's no standard way to invalidate an HTTP session based on a session ID.
<web-app xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd" version="3.0"> <module-name>application</module-name> <filter> <filter-name>Keycloak Filter</filter-name> <filter-class>org.keycloak.adapters.servlet.KeycloakOIDCFilter</filter-class> </filter> <filter-mapping> <filter-name>Keycloak Filter</filter-name> <url-pattern>/keycloak/*</url-pattern> <url-pattern>/protected/*</url-pattern> </filter-mapping> </web-app>
In the snippet above there are two url-patterns.
/protected/* are the files we want protected, while the /keycloak/* url-pattern handles callbacks from the Red Hat Single Sign-On server. If you need to exclude some paths beneath the configured url-patterns you can use the Filter init-param keycloak.config.skipPattern to configure a regular expression that describes a path-pattern for which the Keycloak filter should immediately delegate to the filter-chain. By default no skipPattern is configured. Patterns are matched against the requestURI without the context-path . Given the context-path /myapp , a request for /myapp/index.html will be matched with /index.html against the skip pattern.
<init-param> <param-name>keycloak.config.skipPattern</param-name> <param-value>^/(path1|path2|path3).*</param-value> </init-param>
Note that you should configure your client in the Red Hat Single Sign-On Admin Console with an Admin URL that points to a secured section covered by the filter's url-pattern. The Red Hat Single Sign-On server will make callbacks to the Admin URL to do things like backchannel logout. So, the Admin URL in this example should be http[s]://hostname/{context-root}/keycloak . If you need to customize the session ID mapper, you can configure the fully qualified name of the class in the Filter init-param keycloak.config.idMapper. The session ID mapper is used to map user IDs and session IDs. By default org.keycloak.adapters.spi.InMemorySessionIdMapper is configured.
<init-param> <param-name>keycloak.config.idMapper</param-name> <param-value>org.keycloak.adapters.spi.InMemorySessionIdMapper</param-value> </init-param>
The Red Hat Single Sign-On filter has the same configuration parameters as the other adapters, except that you must define them as filter init params instead of context params. To use this filter, include this Maven artifact in your WAR poms:
<dependency> <groupId>org.keycloak</groupId> <artifactId>keycloak-servlet-filter-adapter</artifactId> <version>18.0.18.redhat-00001</version> </dependency>
2.1.10. Security Context
The KeycloakSecurityContext interface is available if you need to access the tokens directly. This could be useful if you want to retrieve additional details from the token (such as user profile information) or you want to invoke a RESTful service that is protected by Red Hat Single Sign-On. In servlet environments it is available in secured invocations as an attribute in HttpServletRequest: httpServletRequest.getAttribute(KeycloakSecurityContext.class.getName()); Or, it is available in unsecured requests in the HttpSession: httpServletRequest.getSession().getAttribute(KeycloakSecurityContext.class.getName());
2.1.11. Error handling
Red Hat Single Sign-On has some error handling facilities for servlet based client adapters. When an error is encountered in authentication, Red Hat Single Sign-On will call HttpServletResponse.sendError() . You can set up an error-page within your web.xml file to handle the error however you want. Red Hat Single Sign-On can throw 400, 401, 403, and 500 errors.
<error-page> <error-code>403</error-code> <location>/ErrorHandler</location> </error-page>
Red Hat Single Sign-On also sets an HttpServletRequest attribute that you can retrieve. The attribute name is org.keycloak.adapters.spi.AuthenticationError , and the attribute value should be cast to org.keycloak.adapters.OIDCAuthenticationError . For example:
import org.keycloak.adapters.OIDCAuthenticationError; import org.keycloak.adapters.OIDCAuthenticationError.Reason; ...
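// The adapter stores the failure as a request attribute when authentication fails, so an error page
// (such as the /ErrorHandler location above) can cast it to OIDCAuthenticationError and inspect the reason.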
OIDCAuthenticationError error = (OIDCAuthenticationError) httpServletRequest.getAttribute("org.keycloak.adapters.spi.AuthenticationError"); Reason reason = error.getReason(); System.out.println(reason.name());
2.1.12. Logout
You can log out of a web application in multiple ways. For Jakarta EE servlet containers, you can call HttpServletRequest.logout() . For other browser applications, you can redirect the browser to http://auth-server/auth/realms/{realm-name}/protocol/openid-connect/logout , which logs the user out if that user has an SSO session with that browser. The actual logout is done once the user confirms the logout. You can optionally include parameters such as id_token_hint , post_logout_redirect_uri , client_id and others as described in the OpenID Connect RP-Initiated Logout . As a result, that logout does not need to be explicitly confirmed by the user if you include the id_token_hint parameter. After logout, the user will be automatically redirected to the specified post_logout_redirect_uri as long as it is provided. Note that you need to include either the client_id or id_token_hint parameter when post_logout_redirect_uri is included. If you want to avoid logging out of an external identity provider as part of the logout process, you can supply the parameter initiating_idp , with the value being the identity (alias) of the identity provider in question. This parameter is useful when the logout endpoint is invoked as part of single logout initiated by the external identity provider. The initiating_idp parameter is supported by the Red Hat Single Sign-On logout endpoint in addition to the parameters described in the RP-Initiated Logout specification. When using the HttpServletRequest.logout() option the adapter executes a back-channel POST call against the Red Hat Single Sign-On server passing the refresh token. If the method is executed from an unprotected page (a page that does not check for a valid token) the refresh token can be unavailable and, in that case, the adapter skips the call. For this reason, using a protected page to execute HttpServletRequest.logout() is recommended so that current tokens are always taken into account and an interaction with the Red Hat Single Sign-On server is performed if needed.
2.1.13. Parameters forwarding
The Red Hat Single Sign-On initial authorization endpoint request has support for various parameters. Most of the parameters are described in the OIDC specification . Some parameters are added automatically by the adapter based on the adapter configuration. However, there are also a few parameters that can be added on a per-invocation basis. When you open the secured application URI, the particular parameter will be forwarded to the Red Hat Single Sign-On authorization endpoint. For example, if you request an offline token, then you can open the secured application URI with the scope parameter like: and the parameter scope=offline_access will be automatically forwarded to the Red Hat Single Sign-On authorization endpoint. The supported parameters are: scope - Use a space-delimited list of scopes. A space-delimited list typically references Client scopes defined on a particular client. Note that the scope openid will always be added to the list of scopes by the adapter. For example, if you enter the scope options address phone , then the request to Red Hat Single Sign-On will contain the scope parameter scope=openid address phone .
prompt - Red Hat Single Sign-On supports these settings: login - SSO will be ignored and the Red Hat Single Sign-On login page will be always shown, even if the user is already authenticated consent - Applicable only for the clients with Consent Required . If it is used, the Consent page will always be displayed, even if the user previously granted consent to this client. none - The login page will never be shown; instead the user will be redirected to the application, with an error if the user is not yet authenticated. This setting allows you to create a filter/interceptor on the application side and show a custom error page to the user. See more details in the specification. max_age - Used only if a user is already authenticated. Specifies maximum permitted time for the authentication to persist, measured from when the user authenticated. If user is authenticated longer than maxAge , the SSO is ignored and he must re-authenticate. login_hint - Used to pre-fill the username/email field on the login form. kc_idp_hint - Used to tell Red Hat Single Sign-On to skip showing login page and automatically redirect to specified identity provider instead. More info in the Identity Provider documentation . Most of the parameters are described in the OIDC specification . The only exception is parameter kc_idp_hint , which is specific to Red Hat Single Sign-On and contains the name of the identity provider to automatically use. For more information see the Identity Brokering section in the Server Administration Guide . Warning If you open the URL using the attached parameters, the adapter will not redirect you to Red Hat Single Sign-On if you are already authenticated in the application. For example, opening http://myappserver/mysecuredapp?prompt=login will not automatically redirect you to the Red Hat Single Sign-On login page if you are already authenticated to the application mysecuredapp . This behavior may be changed in the future. 2.1.14. Client authentication When a confidential OIDC client needs to send a backchannel request (for example, to exchange code for the token, or to refresh the token) it needs to authenticate against the Red Hat Single Sign-On server. By default, there are three ways to authenticate the client: client ID and client secret, client authentication with signed JWT, or client authentication with signed JWT using client secret. 2.1.14.1. Client ID and Client Secret This is the traditional method described in the OAuth2 specification. The client has a secret, which needs to be known to both the adapter (application) and the Red Hat Single Sign-On server. You can generate the secret for a particular client in the Red Hat Single Sign-On Admin Console, and then paste this secret into the keycloak.json file on the application side: "credentials": { "secret": "19666a4f-32dd-4049-b082-684c74115f28" } 2.1.14.2. Client authentication with Signed JWT This is based on the RFC7523 specification. It works this way: The client must have the private key and certificate. For Red Hat Single Sign-On this is available through the traditional keystore file, which is either available on the client application's classpath or somewhere on the file system. Once the client application is started, it allows to download its public key in JWKS format using a URL such as http://myhost.com/myapp/k_jwks, assuming that http://myhost.com/myapp is the base URL of your client application. This URL can be used by Red Hat Single Sign-On (see below). 
During authentication, the client generates a JWT token and signs it with its private key and sends it to Red Hat Single Sign-On in the particular backchannel request (for example, code-to-token request) in the client_assertion parameter. Red Hat Single Sign-On must have the public key or certificate of the client so that it can verify the signature on JWT. In Red Hat Single Sign-On you need to configure client credentials for your client. First you need to choose Signed JWT as the method of authenticating your client in the tab Credentials in the Admin Console. Then you can choose to either in the tab Keys : Configure the JWKS URL where Red Hat Single Sign-On can download the client's public keys. This can be a URL such as http://myhost.com/myapp/k_jwks (see details above). This option is the most flexible, since the client can rotate its keys anytime and Red Hat Single Sign-On then always downloads new keys when needed without needing to change the configuration. More accurately, Red Hat Single Sign-On downloads new keys when it sees the token signed by an unknown kid (Key ID). Upload the client's public key or certificate, either in PEM format, in JWK format, or from the keystore. With this option, the public key is hardcoded and must be changed when the client generates a new key pair. You can even generate your own keystore from the Red Hat Single Sign-On Admin Console if you don't have your own available. For more details on how to set up the Red Hat Single Sign-On Admin Console, see the Server Administration Guide . For set up on the adapter side you need to have something like this in your keycloak.json file: "credentials": { "jwt": { "client-keystore-file": "classpath:keystore-client.jks", "client-keystore-type": "JKS", "client-keystore-password": "storepass", "client-key-password": "keypass", "client-key-alias": "clientkey", "algorithm": "RS256", "token-expiration": 10 } } With this configuration, the keystore file keystore-client.jks must be available on classpath in your WAR. If you do not use the prefix classpath: you can point to any file on the file system where the client application is running. The algorithm field specifies the algorithm used for the Signed JWT and it defaults to RS256 . This field should be in sync with the key pair. For example, the RS256 algorithm needs a RSA key pair while the ES256 algorithm requires an EC key pair. Please refer to Cryptographic Algorithms for Digital Signatures and MACs for more information. 2.1.15. Multi Tenancy Multi Tenancy, in our context, means that a single target application (WAR) can be secured with multiple Red Hat Single Sign-On realms. The realms can be located on the same Red Hat Single Sign-On instance or on different instances. In practice, this means that the application needs to have multiple keycloak.json adapter configuration files. You could have multiple instances of your WAR with different adapter configuration files deployed to different context-paths. However, this may be inconvenient and you may also want to select the realm based on something else than context-path. Red Hat Single Sign-On makes it possible to have a custom config resolver so you can choose what adapter config is used for each request. To achieve this first you need to create an implementation of org.keycloak.adapters.KeycloakConfigResolver . 
For example:
package example; import java.io.InputStream; import org.keycloak.adapters.KeycloakConfigResolver; import org.keycloak.adapters.KeycloakDeployment; import org.keycloak.adapters.KeycloakDeploymentBuilder; import org.keycloak.adapters.OIDCHttpFacade; public class PathBasedKeycloakConfigResolver implements KeycloakConfigResolver { @Override public KeycloakDeployment resolve(OIDCHttpFacade.Request request) { // Pick the adapter configuration based on the request path. String path = request.getRelativePath(); if (path.startsWith("/alternative")) { InputStream is = getClass().getResourceAsStream("/tenant1-keycloak.json"); return KeycloakDeploymentBuilder.build(is); } else { InputStream is = getClass().getResourceAsStream("/default-keycloak.json"); return KeycloakDeploymentBuilder.build(is); } } }
You also need to configure which KeycloakConfigResolver implementation to use with the keycloak.config.resolver context-param in your web.xml :
<web-app> ... <context-param> <param-name>keycloak.config.resolver</param-name> <param-value>example.PathBasedKeycloakConfigResolver</param-value> </context-param> </web-app>
2.1.16. Application clustering
This section is related to supporting clustered applications deployed to JBoss EAP. There are a few options available depending on whether your application is: Stateless or stateful Distributable (replicated HTTP session) or non-distributable Relying on sticky sessions provided by the load balancer Hosted on the same domain as Red Hat Single Sign-On Dealing with clustering is not quite as simple as for a regular application, mainly because both the browser and the server-side application send requests to Red Hat Single Sign-On, so it is not just a matter of enabling sticky sessions on your load balancer.
2.1.16.1. Stateless token store
By default, the web application secured by Red Hat Single Sign-On uses the HTTP session to store the security context. This means that you either have to enable sticky sessions or replicate the HTTP session. As an alternative to storing the security context in the HTTP session the adapter can be configured to store this in a cookie instead. This is useful if you want to make your application stateless or if you don't want to store the security context in the HTTP session. To use the cookie store for saving the security context, edit your application's WEB-INF/keycloak.json and add: "token-store": "cookie" Note The default value for token-store is session , which stores the security context in the HTTP session. One limitation of using the cookie store is that the whole security context is passed in the cookie for every HTTP request. This may impact performance. Another small limitation is limited support for Single-Sign Out. It works without issues if you initiate servlet logout (HttpServletRequest.logout) from the application itself, as the adapter will delete the KEYCLOAK_ADAPTER_STATE cookie. However, back-channel logout initiated from a different application isn't propagated by Red Hat Single Sign-On to applications using the cookie store. Hence it's recommended to use a short value for the access token timeout (for example 1 minute). Note Some load balancers do not allow any configuration of the sticky session cookie name or contents, such as Amazon ALB. For these, it is recommended to set the shouldAttachRoute option to false .
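For illustration, a minimal WEB-INF/keycloak.json using the cookie store might look like the following sketch; the realm, resource, URL, and credential values are placeholders that you would replace with your own client settings:
{ "realm": "demo", "resource": "my-app", "auth-server-url": "http://localhost:8080/auth", "ssl-required": "external", "credentials": { "secret": "password" }, "token-store": "cookie" }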
2.1.16.2. Relative URI optimization
In deployment scenarios where Red Hat Single Sign-On and the application are hosted on the same domain (through a reverse proxy or load balancer) it can be convenient to use relative URI options in your client configuration. With relative URIs, the URI is resolved relative to the URL used to access Red Hat Single Sign-On. For example, if the URL to your application is https://acme.org/myapp and the URL to Red Hat Single Sign-On is https://acme.org/auth , then you can use the redirect-uri /myapp instead of https://acme.org/myapp .
2.1.16.3. Admin URL configuration
The Admin URL for a particular client can be configured in the Red Hat Single Sign-On Admin Console. It's used by the Red Hat Single Sign-On server to send backend requests to the application for various tasks, such as logging out users or pushing revocation policies. For example, backchannel logout works like this: The user sends a logout request from one application. The application sends a logout request to Red Hat Single Sign-On. The Red Hat Single Sign-On server invalidates the user session. The Red Hat Single Sign-On server then sends a backchannel request to the application with the admin URL that is associated with the session. When an application receives the logout request it invalidates the corresponding HTTP session. If the admin URL contains USD{application.session.host} it will be replaced with the URL to the node associated with the HTTP session.
2.1.16.4. Registration of application nodes
The previous section describes how Red Hat Single Sign-On can send a logout request to the node associated with a specific HTTP session. However, in some cases the admin may want to propagate admin tasks to all registered cluster nodes, not just one of them, for example to push a new not-before policy to the application or to log out all users from the application. In this case Red Hat Single Sign-On needs to be aware of all application cluster nodes, so it can send the event to all of them. To achieve this, an auto-discovery mechanism is supported: When a new application node joins the cluster, it sends a registration request to the Red Hat Single Sign-On server. The request may be re-sent to Red Hat Single Sign-On at configured periodic intervals. If the Red Hat Single Sign-On server doesn't receive a re-registration request within a specified timeout then it automatically unregisters the specific node. The node is also unregistered in Red Hat Single Sign-On when it sends an unregistration request, which is usually during node shutdown or application undeployment. This may not work properly for forced shutdown when undeployment listeners are not invoked, which results in the need for automatic unregistration. Sending startup registrations and periodic re-registration is disabled by default as it's only required for some clustered applications. To enable the feature edit the WEB-INF/keycloak.json file for your application and add: "register-node-at-startup": true, "register-node-period": 600, This means the adapter will send the registration request on startup and re-register every 10 minutes. In the Red Hat Single Sign-On Admin Console you can specify the maximum node re-registration timeout (it should be larger than register-node-period from the adapter configuration). You can also manually add and remove cluster nodes through the Admin Console, which is useful if you don't want to rely on the automatic registration feature or if you want to remove stale application nodes in the event you are not using the automatic unregistration feature.
2.1.16.5. Refresh token in each request
By default the application adapter will only refresh the access token when it's expired. However, you can also configure the adapter to refresh the token on every request.
This may have a performance impact as your application will send more requests to the Red Hat Single Sign-On server. To enable the feature edit the WEB-INF/keycloak.json file for your application and add: "always-refresh-token": true Note This may have a significant impact on performance. Only enable this feature if you can't rely on backchannel messages to propagate logout and not before policies. Another thing to consider is that by default access tokens has a short expiration so even if logout is not propagated the token will expire within minutes of the logout. 2.2. JavaScript adapter Red Hat Single Sign-On comes with a client-side JavaScript library that can be used to secure HTML5/JavaScript applications. The JavaScript adapter has built-in support for Cordova applications. A good practice is to include the JavaScript adapter in your application using a package manager like NPM or Yarn. The keycloak-js package is available on the following locations: NPM: https://www.npmjs.com/package/keycloak-js Yarn: https://yarnpkg.com/package/keycloak-js Alternatively, the library can be retrieved directly from the Red Hat Single Sign-On server at /auth/js/keycloak.js and is also distributed as a ZIP archive. One important thing to note about using client-side applications is that the client has to be a public client as there is no secure way to store client credentials in a client-side application. This makes it very important to make sure the redirect URIs you have configured for the client are correct and as specific as possible. To use the JavaScript adapter you must first create a client for your application in the Red Hat Single Sign-On Admin Console. Make sure public is selected for Access Type . You achieve this in Capability config by turning OFF client authentication toggle. You also need to configure Valid Redirect URIs and Web Origins . Be as specific as possible as failing to do so may result in a security vulnerability. Once the client is created click on the Action tab in the upper right corner and select Download adapter config . Select Keycloak OIDC JSON for Format Option then click Download . The downloaded keycloak.json file should be hosted on your web server at the same location as your HTML pages. Alternatively, you can skip the configuration file and manually configure the adapter. The following example shows how to initialize the JavaScript adapter: <html> <head> <script src="keycloak.js"></script> <script> function initKeycloak() { const options = {}; const keycloak = new Keycloak(); keycloak.init(options) .then(function(authenticated) { console.log('keycloak:' + (authenticated ? 'authenticated' : 'not authenticated')); }).catch(function(error) { for (const property in error) { console.error(`USD{property}: USD{error[property]}`); } }); } </script> </head> <body onload="initKeycloak()"> <!-- your page content goes here --> </body> </html> If the keycloak.json file is in a different location you can specify it: const keycloak = new Keycloak('http://localhost:8080/myapp/keycloak.json'); Alternatively, you can pass in a JavaScript object with the required configuration instead: const keycloak = new Keycloak({ url: 'http://keycloak-serverUSD/auth', realm: 'myrealm', clientId: 'myapp' }); By default to authenticate you need to call the login function. However, there are two options available to make the adapter automatically authenticate. You can pass login-required or check-sso to the init function. 
login-required will authenticate the client if the user is logged-in to Red Hat Single Sign-On or display the login page if not. check-sso will only authenticate the client if the user is already logged-in, if the user is not logged-in the browser will be redirected back to the application and remain unauthenticated. You can configure a silent check-sso option. With this feature enabled, your browser won't do a full redirect to the Red Hat Single Sign-On server and back to your application, but this action will be performed in a hidden iframe, so your application resources only need to be loaded and parsed once by the browser when the app is initialized and not again after the redirect back from Red Hat Single Sign-On to your app. This is particularly useful in case of SPAs (Single Page Applications). To enable the silent check-sso , you have to provide a silentCheckSsoRedirectUri attribute in the init method. This URI needs to be a valid endpoint in the application (and of course it must be configured as a valid redirect for the client in the Red Hat Single Sign-On Admin Console): keycloak.init({ onLoad: 'check-sso', silentCheckSsoRedirectUri: window.location.origin + '/silent-check-sso.html' }) The page at the silent check-sso redirect uri is loaded in the iframe after successfully checking your authentication state and retrieving the tokens from the Red Hat Single Sign-On server. It has no other task than sending the received tokens to the main application and should only look like this: <html> <body> <script> parent.postMessage(location.href, location.origin) </script> </body> </html> Please keep in mind that this page at the specified location must be provided by the application itself and is not part of the JavaScript adapter! Warning Silent check-sso functionality is limited in some modern browsers. Please see the Modern Browsers with Tracking Protection Section . To enable login-required set onLoad to login-required and pass to the init method: keycloak.init({ onLoad: 'login-required' }) After the user is authenticated the application can make requests to RESTful services secured by Red Hat Single Sign-On by including the bearer token in the Authorization header. For example: const loadData = function () { document.getElementById('username').innerText = keycloak.subject; const url = 'http://localhost:8080/restful-service'; const req = new XMLHttpRequest(); req.open('GET', url, true); req.setRequestHeader('Accept', 'application/json'); req.setRequestHeader('Authorization', 'Bearer ' + keycloak.token); req.onreadystatechange = function () { if (req.readyState == 4) { if (req.status == 200) { alert('Success'); } else if (req.status == 403) { alert('Forbidden'); } } } req.send(); }; One thing to keep in mind is that the access token by default has a short life expiration so you may need to refresh the access token prior to sending the request. You can do this by the updateToken method. The updateToken method returns a promise which makes it easy to invoke the service only if the token was successfully refreshed and display an error to the user if it wasn't. For example: keycloak.updateToken(30).then(function() { loadData(); }).catch(function() { alert('Failed to refresh token'); }); 2.2.1. Session Status iframe By default, the JavaScript adapter creates a hidden iframe that is used to detect if a Single-Sign Out has occurred. This does not require any network traffic, instead the status is retrieved by looking at a special status cookie. 
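For example, the frequency of this status check can be tuned when the adapter is initialized. This is only a sketch: checkLoginIframe and checkLoginIframeInterval are the init options described in the reference section below, and the interval value shown here is arbitrary.
keycloak.init({ checkLoginIframe: true, checkLoginIframeInterval: 10 });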
This feature can be disabled by setting checkLoginIframe: false in the options passed to the init method. You should not rely on looking at this cookie directly. Its format can change and it's also associated with the URL of the Red Hat Single Sign-On server, not your application. Warning Session Status iframe functionality is limited in some modern browsers. Please see Modern Browsers with Tracking Protection Section . 2.2.2. Implicit and hybrid flow By default, the JavaScript adapter uses the Authorization Code flow. With this flow the Red Hat Single Sign-On server returns an authorization code, not an authentication token, to the application. The JavaScript adapter exchanges the code for an access token and a refresh token after the browser is redirected back to the application. Red Hat Single Sign-On also supports the Implicit flow where an access token is sent immediately after successful authentication with Red Hat Single Sign-On. This may have better performance than standard flow, as there is no additional request to exchange the code for tokens, but it has implications when the access token expires. However, sending the access token in the URL fragment can be a security vulnerability. For example the token could be leaked through web server logs and or browser history. To enable implicit flow, you need to enable the Implicit Flow Enabled flag for the client in the Red Hat Single Sign-On Admin Console. You also need to pass the parameter flow with value implicit to init method: keycloak.init({ flow: 'implicit' }) One thing to note is that only an access token is provided and there is no refresh token. This means that once the access token has expired the application has to do the redirect to the Red Hat Single Sign-On again to obtain a new access token. Red Hat Single Sign-On also supports the Hybrid flow. This requires the client to have both the Standard Flow Enabled and Implicit Flow Enabled flags enabled in the admin console. The Red Hat Single Sign-On server will then send both the code and tokens to your application. The access token can be used immediately while the code can be exchanged for access and refresh tokens. Similar to the implicit flow, the hybrid flow is good for performance because the access token is available immediately. But, the token is still sent in the URL, and the security vulnerability mentioned earlier may still apply. One advantage in the Hybrid flow is that the refresh token is made available to the application. For the Hybrid flow, you need to pass the parameter flow with value hybrid to the init method: keycloak.init({ flow: 'hybrid' }) 2.2.3. Hybrid Apps with Cordova Keycloak support hybrid mobile apps developed with Apache Cordova . The JavaScript adapter has two modes for this: cordova and cordova-native : The default is cordova, which the adapter will automatically select if no adapter type has been configured and window.cordova is present. When logging in, it will open an InApp Browser that lets the user interact with Red Hat Single Sign-On and afterwards returns to the app by redirecting to http://localhost . Because of this, you must whitelist this URL as a valid redirect-uri in the client configuration section of the Admin Console. While this mode is easy to set up, it also has some disadvantages: The InApp-Browser is a browser embedded in the app and is not the phone's default browser. Therefore it will have different settings and stored credentials will not be available. 
The InApp-Browser might also be slower, especially when rendering more complex themes. There are security concerns to consider before using this mode: because the app has full control of the browser rendering the login page, it is possible for the app to gain access to the user's credentials, so do not allow its use in apps you do not trust. Use this example app to help you get started: https://github.com/keycloak/keycloak/tree/master/examples/cordova The alternative mode cordova-native takes a different approach. It opens the login page using the system's browser. After the user has authenticated, the browser redirects back into the app using a special URL. From there, the Red Hat Single Sign-On adapter can finish the login by reading the code or token from the URL. You can activate the native mode by passing the adapter type cordova-native to the init method:
keycloak.init({ adapter: 'cordova-native' })
This adapter requires two additional plugins: cordova-plugin-browsertab : allows the app to open webpages in the system's browser cordova-plugin-deeplinks : allows the browser to redirect back to your app by special URLs The technical details for linking to an app differ on each platform and special setup is needed. Please refer to the Android and iOS sections of the deeplinks plugin documentation for further instructions. There are different kinds of links for opening apps: custom schemes (for example, myapp://login or android-app://com.example.myapp/https/example.com/login ) and Universal Links (iOS) / Deep Links (Android) . While the former are easier to set up and tend to work more reliably, the latter offer extra security as they are unique and only the owner of a domain can register them. Custom-URLs are deprecated on iOS. We recommend that you use universal links, combined with a fallback site with a custom-url link on it for best reliability. Furthermore, we recommend the following steps to improve compatibility with the Keycloak Adapter: Universal Links on iOS seem to work more reliably with response-mode set to query To prevent Android from opening a new instance of your app on redirect, add the following snippet to config.xml :
<preference name="AndroidLaunchMode" value="singleTask" />
There is an example app that shows how to use the native mode: https://github.com/keycloak/keycloak/tree/master/examples/cordova-native
2.2.4. Custom Adapters
Sometimes it's necessary to run the JavaScript client in environments that are not supported by default (such as Capacitor). To make it possible to use the JavaScript client in these kinds of environments, you can pass a custom adapter. For example, a third-party library could provide such an adapter to make it possible to run the JavaScript client without issues:
import Keycloak from 'keycloak-js'; import KeycloakCapacitorAdapter from 'keycloak-capacitor-adapter'; const keycloak = new Keycloak(); keycloak.init({ adapter: KeycloakCapacitorAdapter, });
This specific package does not exist, but it gives a pretty good example of how such an adapter could be passed into the client. It is also possible to make your own adapter; to do so, you will have to implement the methods described in the KeycloakAdapter interface. For example, the following TypeScript code ensures that all the methods are properly implemented:
import Keycloak, { KeycloakAdapter } from 'keycloak-js'; // Implement the 'KeycloakAdapter' interface so that all required methods are guaranteed to be present.
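// Note (an assumption based on the keycloak-js type definitions at the time of writing): in addition to login,
// the interface typically declares logout, register, accountManagement, and redirectUri, so a complete
// custom adapter needs to provide an implementation for each of them.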
const MyCustomAdapter: KeycloakAdapter = { login(options) { // Write your own implementation here. } // The other methods go here... }; const keycloak = new Keycloak(); keycloak.init({ adapter: MyCustomAdapter, }); Naturally you can also do this without TypeScript by omitting the type information, but ensuring implementing the interface properly will then be left entirely up to you. 2.2.5. Earlier Browsers The JavaScript adapter depends on Base64 (window.btoa and window.atob), HTML5 History API and optionally the Promise API. If you need to support browsers that do not have these available (for example, IE9) you need to add polyfillers. Example polyfill libraries: Base64 - https://github.com/davidchambers/Base64.js HTML5 History - https://github.com/devote/HTML5-History-API Promise - https://github.com/stefanpenner/es6-promise 2.2.6. Modern Browsers with Tracking Protection In the latest versions of some browsers various cookies policies are applied to prevent tracking of the users by third-parties, like SameSite in Chrome or completely blocked third-party cookies. It is expected that those policies will become even more restrictive and adopted by other browsers over time, eventually leading to cookies in third-party contexts to be completely unsupported and blocked by the browsers. The adapter features affected by this might get deprecated in the future. Javascript adapter relies on third-party cookies for Session Status iframe, silent check-sso and partially also for regular (non-silent) check-sso . Those features have limited functionality or are completely disabled based on how the browser is restrictive regarding cookies. The adapter tries to detect this setting and reacts accordingly. 2.2.6.1. Browsers with "SameSite=Lax by Default" Policy All features are supported if SSL / TLS connection is configured on the Red Hat Single Sign-On side as well as on the application side. Affected is for example Chrome starting with version 84. 2.2.6.2. Browsers with Blocked Third-Party Cookies Session Status iframe is not supported and is automatically disabled if such browser behavior is detected by the JS adapter. This means the adapter cannot use session cookie for Single Sign-Out detection and have to rely purely on tokens. This implies that when user logs out in another window, the application using JavaScript adapter won't be logged out until it tries to refresh the Access Token. Therefore, it is recommended to set Access Token Lifespan to relatively short time, so that the logout is detected rather sooner than later. Please see Session and Token Timeouts . Silent check-sso is not supported and falls back to regular (non-silent) check-sso by default. This behaviour can be changed by setting silentCheckSsoFallback: false in the options passed to the init method. In this case, check-sso will be completely disabled if restrictive browser behavior is detected. Regular check-sso is affected as well. Since Session Status iframe is unsupported, an additional redirect to Red Hat Single Sign-On has to be made when the adapter is initialized to check user's login status. This is different from standard behavior when the iframe is used to tell whether the user is logged in, and the redirect is performed only when logged out. An affected browser is for example Safari starting with version 13.1. 2.2.7. JavaScript Adapter Reference 2.2.7.1. 
Constructor new Keycloak(); new Keycloak('http://localhost/keycloak.json'); new Keycloak({ url: 'http://localhost/auth', realm: 'myrealm', clientId: 'myApp' }); 2.2.7.2. Properties authenticated Is true if the user is authenticated, false otherwise. token The base64 encoded token that can be sent in the Authorization header in requests to services. tokenParsed The parsed token as a JavaScript object. subject The user id. idToken The base64 encoded ID token. idTokenParsed The parsed id token as a JavaScript object. realmAccess The realm roles associated with the token. resourceAccess The resource roles associated with the token. refreshToken The base64 encoded refresh token that can be used to retrieve a new token. refreshTokenParsed The parsed refresh token as a JavaScript object. timeSkew The estimated time difference between the browser time and the Red Hat Single Sign-On server in seconds. This value is just an estimation, but is accurate enough when determining if a token is expired or not. responseMode Response mode passed in init (default value is fragment). flow Flow passed in init. adapter Allows you to override the way that redirects and other browser-related functions will be handled by the library. Available options: "default" - the library uses the browser api for redirects (this is the default) "cordova" - the library will try to use the InAppBrowser cordova plugin to load keycloak login/registration pages (this is used automatically when the library is working in a cordova ecosystem) "cordova-native" - the library tries to open the login and registration page using the phone's system browser using the BrowserTabs cordova plugin. This requires extra setup for redirecting back to the app (see Section 2.2.3, "Hybrid Apps with Cordova" ). "custom" - allows you to implement a custom adapter (only for advanced use cases) responseType Response type sent to Red Hat Single Sign-On with login requests. This is determined based on the flow value used during initialization, but can be overridden by setting this value. 2.2.7.3. Methods init(options) Called to initialize the adapter. Options is an Object, where: useNonce - Adds a cryptographic nonce to verify that the authentication response matches the request (default is true ). onLoad - Specifies an action to do on load. Supported values are login-required or check-sso . silentCheckSsoRedirectUri - Set the redirect uri for silent authentication check if onLoad is set to 'check-sso'. silentCheckSsoFallback - Enables fall back to regular check-sso when silent check-sso is not supported by the browser (default is true ). token - Set an initial value for the token. refreshToken - Set an initial value for the refresh token. idToken - Set an initial value for the id token (only together with token or refreshToken). scope - Set the default scope parameter to the Red Hat Single Sign-On login endpoint. Use a space-delimited list of scopes. Those typically reference Client scopes defined on a particular client. Note that the scope openid will always be added to the list of scopes by the adapter. For example, if you enter the scope options address phone , then the request to Red Hat Single Sign-On will contain the scope parameter scope=openid address phone . Note that the default scope specified here is overwritten if the login() options specify scope explicitly. timeSkew - Set an initial value for skew between local time and Red Hat Single Sign-On server in seconds (only together with token or refreshToken). 
checkLoginIframe - Set to enable/disable monitoring login state (default is true ). checkLoginIframeInterval - Set the interval to check login state (default is 5 seconds). responseMode - Set the OpenID Connect response mode send to Red Hat Single Sign-On server at login request. Valid values are query or fragment . Default value is fragment , which means that after successful authentication will Red Hat Single Sign-On redirect to JavaScript application with OpenID Connect parameters added in URL fragment. This is generally safer and recommended over query . flow - Set the OpenID Connect flow. Valid values are standard , implicit or hybrid . enableLogging - Enables logging messages from Keycloak to the console (default is false ). pkceMethod - The method for Proof Key Code Exchange ( PKCE ) to use. Configuring this value enables the PKCE mechanism. Available options: "S256" - The SHA256 based PKCE method messageReceiveTimeout - Set a timeout in milliseconds for waiting for message responses from the Keycloak server. This is used, for example, when waiting for a message during 3rd party cookies check. The default value is 10000. Returns a promise that resolves when initialization completes. login(options) Redirects to login form. Options is an optional Object, where: redirectUri - Specifies the uri to redirect to after login. prompt - This parameter allows to slightly customize the login flow on the Red Hat Single Sign-On server side. For example enforce displaying the login screen in case of value login . See Parameters Forwarding Section for the details and all the possible values of the prompt parameter. maxAge - Used just if user is already authenticated. Specifies maximum time since the authentication of user happened. If user is already authenticated for longer time than maxAge , the SSO is ignored and he will need to re-authenticate again. loginHint - Used to pre-fill the username/email field on the login form. scope - Override the scope configured in init with a different value for this specific login. idpHint - Used to tell Red Hat Single Sign-On to skip showing the login page and automatically redirect to the specified identity provider instead. More info in the Identity Provider documentation . acr - Contains the information about acr claim, which will be sent inside claims parameter to the Red Hat Single Sign-On server. Typical usage is for step-up authentication. Example of use { values: ["silver", "gold"], essential: true } . See OpenID Connect specification and Step-up authentication documentation for more details. action - If value is register then user is redirected to registration page, if the value is UPDATE_PASSWORD then the user will be redirected to the reset password page (if not authenticated will send user to login page first and redirect after authenticated), otherwise to login page. locale - Sets the 'ui_locales' query param in compliance with section 3.1.2.1 of the OIDC 1.0 specification . cordovaOptions - Specifies the arguments that are passed to the Cordova in-app-browser (if applicable). Options hidden and location are not affected by these arguments. All available options are defined at https://cordova.apache.org/docs/en/latest/reference/cordova-plugin-inappbrowser/ . Example of use: { zoom: "no", hardwareback: "yes" } ; createLoginUrl(options) Returns the URL to login form. Options is an optional Object, which supports same options as the function login . logout(options) Redirects to logout. 
Options is an Object, where:
redirectUri - Specifies the uri to redirect to after logout.
createLogoutUrl(options) Returns the URL to log out the user. Options is an Object, where:
redirectUri - Specifies the uri to redirect to after logout.
register(options) Redirects to the registration form. Shortcut for login with option action = 'register'. Options are the same as for the login method, but 'action' is set to 'register'.
createRegisterUrl(options) Returns the URL to the registration page. Shortcut for createLoginUrl with option action = 'register'. Options are the same as for the createLoginUrl method, but 'action' is set to 'register'.
accountManagement() Redirects to the Account Management Console.
createAccountUrl(options) Returns the URL to the Account Management Console. Options is an Object, where:
redirectUri - Specifies the uri to redirect to when redirecting back to the application.
hasRealmRole(role) Returns true if the token has the given realm role.
hasResourceRole(role, resource) Returns true if the token has the given role for the resource (resource is optional; if not specified, clientId is used).
loadUserProfile() Loads the user's profile. Returns a promise that resolves with the profile. For example:
keycloak.loadUserProfile() .then(function(profile) { alert(JSON.stringify(profile, null, " ")) }).catch(function() { alert('Failed to load user profile'); });
isTokenExpired(minValidity) Returns true if the token has less than minValidity seconds left before it expires (minValidity is optional; if not specified, 0 is used).
updateToken(minValidity) If the token expires within minValidity seconds (minValidity is optional; if not specified, 5 is used) the token is refreshed. If -1 is passed as the minValidity, the token will be forcibly refreshed. If the session status iframe is enabled, the session status is also checked. Returns a promise that resolves with a boolean indicating whether or not the token has been refreshed. For example:
keycloak.updateToken(5) .then(function(refreshed) { if (refreshed) { alert('Token was successfully refreshed'); } else { alert('Token is still valid'); } }).catch(function() { alert('Failed to refresh the token, or the session has expired'); });
clearToken() Clears the authentication state, including tokens. This can be useful if the application has detected that the session has expired, for example when updating the token fails. Invoking this results in the onAuthLogout callback listener being invoked.
2.2.7.4. Callback Events
The adapter supports setting callback listeners for certain events. Keep in mind that these have to be set before the call to the init function. For example:
keycloak.onAuthSuccess = function() { alert('authenticated'); }
The available events are:
onReady(authenticated) - Called when the adapter is initialized.
onAuthSuccess - Called when a user is successfully authenticated.
onAuthError - Called if there was an error during authentication.
onAuthRefreshSuccess - Called when the token is refreshed.
onAuthRefreshError - Called if there was an error while trying to refresh the token.
onAuthLogout - Called if the user is logged out (will only be called if the session status iframe is enabled, or in Cordova mode).
onTokenExpired - Called when the access token is expired. If a refresh token is available, the token can be refreshed with updateToken; in cases where it is not (that is, with the implicit flow) you can redirect to the login screen to obtain a new access token.
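For example, updateToken and the onTokenExpired callback described above can be combined into a small helper that refreshes the token before each backend call. This is a minimal sketch; the securedFetch helper and the /api/orders URL are illustrative assumptions, not part of the adapter API:

keycloak.onTokenExpired = function() {
  // Try to refresh; if the refresh token is no longer valid, re-authenticate.
  keycloak.updateToken(30).catch(function() {
    keycloak.login();
  });
};

// Hypothetical helper: refresh the token if it expires within 5 seconds,
// then call a backend service with the token in the Authorization header.
function securedFetch(url) {
  return keycloak.updateToken(5).then(function() {
    return fetch(url, { headers: { 'Authorization': 'Bearer ' + keycloak.token } });
  });
}

securedFetch('/api/orders').then(function(response) {
  // handle the response
});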
2.3. Node.js adapter
Red Hat Single Sign-On provides a Node.js adapter built on top of Connect to protect server-side JavaScript apps - the goal was to be flexible enough to integrate with frameworks like Express.js . To use the Node.js adapter, first you must create a client for your application in the Red Hat Single Sign-On Admin Console. The adapter supports public, confidential, and bearer-only access types. Which one to choose depends on the use-case scenario. Once the client is created, click the Installation tab, select Red Hat Single Sign-On OIDC JSON for Format Option , and then click Download . The downloaded keycloak.json file should be at the root folder of your project.
2.3.1. Installation
Assuming you've already installed Node.js , create a folder for your application and use the npm init command to create a package.json for your application. Now add the Red Hat Single Sign-On connect adapter in the dependencies list:
"dependencies": { "keycloak-connect": "file:keycloak-connect-18.0.7.tgz" }
2.3.2. Usage
Instantiate a Keycloak class The Keycloak class provides a central point for configuration and integration with your application. The simplest creation involves no arguments. In the root directory of your project create a file called server.js and add the following code:
const session = require('express-session'); const Keycloak = require('keycloak-connect'); const memoryStore = new session.MemoryStore(); const keycloak = new Keycloak({ store: memoryStore });
Install the express-session dependency and add a command that starts the server.js script to the 'scripts' section of the package.json ; you can then run the server with npm (a sketch of these commands is shown at the end of this section). By default, this will locate a file named keycloak.json alongside the main executable of your application, in our case in the root folder, to initialize Keycloak-specific settings such as the public key, realm name and various URLs. A running Red Hat Single Sign-On deployment is necessary at this point so that you can access the Admin Console; see the documentation on how to deploy the server with Podman or Docker. Now we are ready to obtain the keycloak.json file from the Red Hat Single Sign-On Admin Console: in Clients (left sidebar), choose your client, open the Installation tab, select Keycloak OIDC JSON as the Format Option , and click Download . Place the downloaded file in the root folder of our project.
Instantiation with this method results in all of the reasonable defaults being used. As an alternative, it's also possible to provide a configuration object, rather than the keycloak.json file:
const kcConfig = { clientId: 'myclient', bearerOnly: true, serverUrl: 'http://localhost:8080/auth', realm: 'myrealm', realmPublicKey: 'MIIBIjANB...' }; const keycloak = new Keycloak({ store: memoryStore }, kcConfig);
Applications can also redirect users to their preferred identity provider by using:
const keycloak = new Keycloak({ store: memoryStore, idpHint: myIdP }, kcConfig);
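As a sketch of the shell commands referenced in this section (the start script name is an assumption and is not mandated by the adapter):

npm install express-session

// package.json
"scripts": {
  "start": "node server.js"
}

npm run start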
Configuring a web session store If you want to use web sessions to manage server-side state for authentication, you need to initialize the Keycloak(... ) with at least a store parameter, passing in the actual session store that express-session is using:
const session = require('express-session'); const memoryStore = new session.MemoryStore(); // Configure session app.use( session({ secret: 'mySecret', resave: false, saveUninitialized: true, store: memoryStore, }) ); const keycloak = new Keycloak({ store: memoryStore });
Passing a custom scope value By default, the scope value openid is passed as a query parameter to Red Hat Single Sign-On's login URL, but you can add an additional custom value:
const keycloak = new Keycloak({ scope: 'offline_access' });
2.3.3. Installing middleware
Once instantiated, install the middleware into your connect-capable app. In order to do so, first install Express (for example, npm install express ), then require Express in our project as outlined below:
const express = require('express'); const app = express();
and configure the Keycloak middleware in Express by adding the code below:
app.use( keycloak.middleware() );
Last but not least, let's set up our server to listen for HTTP requests on port 3000 by adding the following code to server.js :
app.listen(3000, function () { console.log('App listening on port 3000'); });
2.3.4. Configuration for proxies
If the application is running behind a proxy that terminates an SSL connection, Express must be configured per the express behind proxies guide. Using an incorrect proxy configuration can result in invalid redirect URIs being generated. Example configuration:
const app = express(); app.set( 'trust proxy', true ); app.use( keycloak.middleware() );
2.3.5. Checking authentication
To check that a user is authenticated before accessing a resource, simply use keycloak.checkSso() . It will only authenticate if the user is already logged-in. If the user is not logged-in, the browser will be redirected back to the originally-requested URL and remain unauthenticated:
app.get( '/check-sso', keycloak.checkSso(), checkSsoHandler );
2.3.6. Protecting resources
Simple authentication To enforce that a user must be authenticated before accessing a resource, simply use a no-argument version of keycloak.protect() :
app.get( '/complain', keycloak.protect(), complaintHandler );
Role-based authorization To secure a resource with an application role for the current app:
app.get( '/special', keycloak.protect('special'), specialHandler );
To secure a resource with an application role for a different app:
app.get( '/extra-special', keycloak.protect('other-app:special'), extraSpecialHandler );
To secure a resource with a realm role:
app.get( '/admin', keycloak.protect( 'realm:admin' ), adminHandler );
Resource-Based Authorization Resource-Based Authorization allows you to protect resources, and their specific methods/actions, based on a set of policies defined in Keycloak, thus externalizing authorization from your application. This is achieved by exposing a keycloak.enforcer method which you can use to protect resources:
app.get('/apis/me', keycloak.enforcer('user:profile'), userProfileHandler);
The keycloak.enforcer method operates in two modes, depending on the value of the response_mode configuration option:
app.get('/apis/me', keycloak.enforcer('user:profile', {response_mode: 'token'}), userProfileHandler);
If response_mode is set to token , permissions are obtained from the server on behalf of the subject represented by the bearer token that was sent to your application. In this case, a new access token is issued by Keycloak with the permissions granted by the server. If the server did not respond with a token with the expected permissions, the request is denied.
When using this mode, you should be able to obtain the token from the request as follows: app.get('/apis/me', keycloak.enforcer('user:profile', {response_mode: 'token'}), function (req, res) { const token = req.kauth.grant.access_token.content; const permissions = token.authorization ? token.authorization.permissions : undefined; // show user profile }); Prefer this mode when your application is using sessions and you want to cache decisions from the server, as well automatically handle refresh tokens. This mode is especially useful for applications acting as a client and resource server. If response_mode is set to permissions (default mode), the server only returns the list of granted permissions, without issuing a new access token. In addition to not issuing a new token, this method exposes the permissions granted by the server through the request as follows: app.get('/apis/me', keycloak.enforcer('user:profile', {response_mode: 'permissions'}), function (req, res) { const permissions = req.permissions; // show user profile }); Regardless of the response_mode in use, the keycloak.enforcer method will first try to check the permissions within the bearer token that was sent to your application. If the bearer token already carries the expected permissions, there is no need to interact with the server to obtain a decision. This is specially useful when your clients are capable of obtaining access tokens from the server with the expected permissions before accessing a protected resource, so they can use some capabilities provided by Keycloak Authorization Services such as incremental authorization and avoid additional requests to the server when keycloak.enforcer is enforcing access to the resource. By default, the policy enforcer will use the client_id defined to the application (for instance, via keycloak.json ) to reference a client in Keycloak that supports Keycloak Authorization Services. In this case, the client can not be public given that it is actually a resource server. If your application is acting as both a public client(frontend) and resource server(backend), you can use the following configuration to reference a different client in Keycloak with the policies that you want to enforce: keycloak.enforcer('user:profile', {resource_server_id: 'my-apiserver'}) It is recommended to use distinct clients in Keycloak to represent your frontend and backend. If the application you are protecting is enabled with Keycloak authorization services and you have defined client credentials in keycloak.json , you can push additional claims to the server and make them available to your policies in order to make decisions. For that, you can define a claims configuration option which expects a function that returns a JSON with the claims you want to push: app.get('/protected/resource', keycloak.enforcer(['resource:view', 'resource:write'], { claims: function(request) { return { "http.uri": ["/protected/resource"], "user.agent": // get user agent from request } } }), function (req, res) { // access granted For more details about how to configure Keycloak to protected your application resources, please take a look at the Authorization Services Guide . 
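As a small illustration of the permissions response mode described above, the helper below checks whether a granted permission for a given resource and scope is present. This is a sketch that assumes the granted permissions follow the UMA representation with rsname and scopes fields; the resource and scope names are placeholders:

// Hypothetical helper: returns true if the granted permissions include the
// given resource (and, optionally, scope).
function hasPermission(permissions, resourceName, scopeName) {
  return (permissions || []).some(function (permission) {
    return permission.rsname === resourceName &&
      (!scopeName || (permission.scopes || []).indexOf(scopeName) !== -1);
  });
}

app.get('/apis/resource', keycloak.enforcer(['resource:view'], {response_mode: 'permissions'}), function (req, res) {
  if (!hasPermission(req.permissions, 'resource', 'view')) {
    return res.status(403).json({ error: 'missing resource#view permission' });
  }
  res.json({ granted: req.permissions });
});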
Advanced authorization
To secure resources based on parts of the URL itself, assuming a role exists for each section:
function protectBySection(token, request) { return token.hasRole( request.params.section ); } app.get( '/:section/:page', keycloak.protect( protectBySection ), sectionHandler );
Advanced Login Configuration:
By default, all unauthorized requests will be redirected to the Red Hat Single Sign-On login page unless your client is bearer-only. However, a confidential or public client may host both browsable and API endpoints. To prevent redirects on unauthenticated API requests and instead return an HTTP 401, you can override the redirectToLogin function.
For example, this override checks if the URL contains /api/ and disables login redirects:
Keycloak.prototype.redirectToLogin = function(req) { const apiReqMatcher = /\/api\//i; return !apiReqMatcher.test(req.originalUrl || req.url); };
2.3.7. Additional URLs
Explicit user-triggered logout By default, the middleware catches calls to /logout to send the user through a Red Hat Single Sign-On-centric logout workflow. This can be changed by specifying a logout configuration parameter to the middleware() call:
app.use( keycloak.middleware( { logout: '/logoff' } ));
When the user-triggered logout is invoked, a query parameter redirect_url can be passed. This parameter is then used as the redirect URL of the OIDC logout endpoint and the user will be redirected to https://example.com/logged/out .
Red Hat Single Sign-On Admin Callbacks Also, the middleware supports callbacks from the Red Hat Single Sign-On console to log out a single session or all sessions. By default, these types of admin callbacks occur relative to the root URL of / but can be changed by providing an admin parameter to the middleware() call:
app.use( keycloak.middleware( { admin: '/callbacks' } ) );
2.3.8. Complete example
A complete example of using the Node.js adapter can be found in the Keycloak quickstarts for Node.js .
2.4. Other OpenID Connect libraries
Applications can be secured by the adapters supplied with Red Hat Single Sign-On, which are usually easier to use and provide better integration with Red Hat Single Sign-On. However, if an adapter is not available for your programming language, framework, or platform, you might opt to use a generic OpenID Connect Relying Party (RP) library instead. This chapter describes details specific to Red Hat Single Sign-On and does not contain specific protocol details. For more information see the OpenID Connect specifications and OAuth2 specification .
2.4.1. Endpoints
The most important endpoint to understand is the well-known configuration endpoint. It lists endpoints and other configuration options relevant to the OpenID Connect implementation in Red Hat Single Sign-On. The endpoint is /realms/{realm-name}/.well-known/openid-configuration . To obtain the full URL, add the base URL for Red Hat Single Sign-On and replace {realm-name} with the name of your realm. For example: http://localhost:8080/auth/realms/master/.well-known/openid-configuration
Some RP libraries retrieve all required endpoints from this endpoint, but for others you might need to list the endpoints individually.
2.4.1.1. Authorization endpoint
The authorization endpoint performs authentication of the end-user. This is done by redirecting the user agent to this endpoint. For more details see the Authorization Endpoint section in the OpenID Connect specification.
2.4.1.2. Token endpoint
The token endpoint is used to obtain tokens.
Tokens can either be obtained by exchanging an authorization code or by supplying credentials directly depending on what flow is used. The token endpoint is also used to obtain new access tokens when they expire. For more details see the Token Endpoint section in the OpenID Connect specification. 2.4.1.3. Userinfo endpoint The userinfo endpoint returns standard claims about the authenticated user, and is protected by a bearer token. For more details see the Userinfo Endpoint section in the OpenID Connect specification. 2.4.1.4. Logout endpoint The logout endpoint logs out the authenticated user. The user agent can be redirected to the endpoint, in which case the active user session is logged out. Afterward the user agent is redirected back to the application. The endpoint can also be invoked directly by the application. To invoke this endpoint directly the refresh token needs to be included as well as the credentials required to authenticate the client. 2.4.1.5. Certificate endpoint The certificate endpoint returns the public keys enabled by the realm, encoded as a JSON Web Key (JWK). Depending on the realm settings there can be one or more keys enabled for verifying tokens. For more information see the Server Administration Guide and the JSON Web Key specification . 2.4.1.6. Introspection endpoint The introspection endpoint is used to retrieve the active state of a token. In other words, you can use it to validate an access or refresh token. It can only be invoked by confidential clients. For more details on how to invoke on this endpoint, see OAuth 2.0 Token Introspection specification . 2.4.1.7. Dynamic Client Registration endpoint The dynamic client registration endpoint is used to dynamically register clients. For more details see the Client Registration chapter and the OpenID Connect Dynamic Client Registration specification . 2.4.1.8. Token Revocation endpoint The token revocation endpoint is used to revoke tokens. Both refresh tokens and access tokens are supported by this endpoint. For more details on how to invoke on this endpoint, see OAuth 2.0 Token Revocation specification . 2.4.1.9. Device Authorization endpoint The device authorization endpoint is used to obtain a device code and a user code. It can be invoked by confidential or public clients. For more details on how to invoke on this endpoint, see OAuth 2.0 Device Authorization Grant specification . 2.4.1.10. Backchannel Authentication endpoint The backchannel authentication endpoint is used to obtain an auth_req_id that identifies the authentication request made by the client. It can only be invoked by confidential clients. For more details on how to invoke on this endpoint, see OpenID Connect Client Initiated Backchannel Authentication Flow specification . Also please refer to other places of Red Hat Single Sign-On documentation like Client Initiated Backchannel Authentication Grant section of this guide and Client Initiated Backchannel Authentication Grant section of Server Administration Guide. 2.4.2. Validating access tokens If you need to manually validate access tokens issued by Red Hat Single Sign-On you can invoke the Introspection Endpoint . The downside to this approach is that you have to make a network invocation to the Red Hat Single Sign-On server. This can be slow and possibly overload the server if you have too many validation requests going on at the same time. Red Hat Single Sign-On issued access tokens are JSON Web Tokens (JWT) digitally signed and encoded using JSON Web Signature (JWS) . 
Because they are encoded in this way, you can locally validate access tokens using the public key of the issuing realm. You can either hard-code the realm's public key in your validation code, or look up and cache the public key using the certificate endpoint with the Key ID (KID) embedded within the JWS. Depending on what language you code in, there are a multitude of third-party libraries out there that can help you with JWS validation.
2.4.3. Flows
2.4.3.1. Authorization code
The Authorization Code flow redirects the user agent to Red Hat Single Sign-On. Once the user has successfully authenticated with Red Hat Single Sign-On, an Authorization Code is created and the user agent is redirected back to the application. The application then uses the authorization code along with its credentials to obtain an Access Token, Refresh Token and ID Token from Red Hat Single Sign-On.
The flow is targeted towards web applications, but is also recommended for native applications, including mobile applications, where it is possible to embed a user agent.
For more details refer to the Authorization Code Flow in the OpenID Connect specification.
2.4.3.2. Implicit
The Implicit flow works similarly to the Authorization Code flow, but instead of returning an Authorization Code, the Access Token and ID Token are returned. This reduces the need for the extra invocation to exchange the Authorization Code for an Access Token. However, it does not include a Refresh Token. This means you either need to permit Access Tokens with a long expiration, which is problematic because it is very hard to invalidate them, or perform a new redirect to obtain a new Access Token once the initial Access Token has expired. The Implicit flow is useful if the application only wants to authenticate the user and deals with logout itself.
There's also a Hybrid flow where both the Access Token and an Authorization Code are returned.
One thing to note is that both the Implicit flow and the Hybrid flow have potential security risks as the Access Token may be leaked through web server logs and browser history. This is somewhat mitigated by using a short expiration for Access Tokens.
For more details refer to the Implicit Flow in the OpenID Connect specification.
2.4.3.3. Resource Owner Password Credentials
Resource Owner Password Credentials, referred to as Direct Grant in Red Hat Single Sign-On, allows exchanging user credentials for tokens. It's not recommended to use this flow unless you absolutely need to. Examples where this could be useful are legacy applications and command-line interfaces.
There are a number of limitations of using this flow, including:
User credentials are exposed to the application
Applications need login pages
Application needs to be aware of the authentication scheme
Changes to the authentication flow require changes to the application
No support for identity brokering or social login
Flows are not supported (user self-registration, required actions, etc.)
For a client to be permitted to use the Resource Owner Password Credentials grant, the client has to have the Direct Access Grants Enabled option enabled.
This flow is not included in OpenID Connect, but is a part of the OAuth 2.0 specification. For more details refer to the Resource Owner Password Credentials Grant chapter in the OAuth 2.0 specification.
2.4.3.3.1. Example using CURL
The following example shows how to obtain an access token for a user in the realm master with username user and password password .
The example is using the confidential client myclient :
curl \
 -d "client_id=myclient" \
 -d "client_secret=40cc097b-2a57-4c17-b36a-8fdf3fc2d578" \
 -d "username=user" \
 -d "password=password" \
 -d "grant_type=password" \
 "http://localhost:8080/auth/realms/master/protocol/openid-connect/token"
2.4.3.4. Client credentials
Client Credentials is used when clients (applications and services) want to obtain access on behalf of themselves rather than on behalf of a user. This can for example be useful for background services that apply changes to the system in general rather than for a specific user.
Red Hat Single Sign-On provides support for clients to authenticate either with a secret or with public/private keys.
This flow is not included in OpenID Connect, but is a part of the OAuth 2.0 specification. For more details refer to the Client Credentials Grant chapter in the OAuth 2.0 specification.
2.4.3.5. Device Authorization Grant
Device Authorization Grant is used by clients running on internet-connected devices that have limited input capabilities or lack a suitable browser. The application requests a device code and a user code from Red Hat Single Sign-On. Red Hat Single Sign-On creates a device code and a user code and returns a response including both to the application. The application then provides the user with the user code and the verification URI. The user accesses the verification URI to be authenticated by using another browser. The application repeatedly polls Red Hat Single Sign-On until the user authorization is complete. The application then uses the device code along with its credentials to obtain an Access Token, Refresh Token and ID Token from Red Hat Single Sign-On.
For more details refer to the OAuth 2.0 Device Authorization Grant specification .
2.4.3.6. Client Initiated Backchannel Authentication Grant
Client Initiated Backchannel Authentication Grant is used by clients who want to initiate the authentication flow by communicating with the OpenID Provider directly, without a redirect through the user's browser as in OAuth 2.0's authorization code grant. The client requests an auth_req_id from Red Hat Single Sign-On that identifies the authentication request made by the client. Red Hat Single Sign-On creates the auth_req_id. After receiving this auth_req_id, the client repeatedly polls Red Hat Single Sign-On to obtain an Access Token, Refresh Token and ID Token in return for the auth_req_id until the user is authenticated. If the client uses ping mode, it does not need to repeatedly poll the token endpoint; instead, it can wait for the notification sent by Red Hat Single Sign-On to the specified Client Notification Endpoint. The Client Notification Endpoint can be configured in the Red Hat Single Sign-On Admin Console. The details of the contract for the Client Notification Endpoint are described in the CIBA specification.
For more details refer to the OpenID Connect Client Initiated Backchannel Authentication Flow specification . Also refer to other parts of the Red Hat Single Sign-On documentation, such as the Backchannel Authentication Endpoint section of this guide and the Client Initiated Backchannel Authentication Grant section of the Server Administration Guide. For details about FAPI CIBA compliance, refer to the FAPI section of this guide .
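As an illustration of the Client Credentials flow described above, a minimal sketch of the token request is shown below; the realm, client, and secret values are placeholders, and the global fetch API (Node.js 18+ or a browser) is assumed:

async function getServiceAccountToken() {
  const response = await fetch('http://localhost:8080/auth/realms/master/protocol/openid-connect/token', {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({
      grant_type: 'client_credentials',
      client_id: 'myclient',
      client_secret: 'myclientsecret'
    })
  });
  // The JSON response contains access_token, expires_in and related fields.
  return (await response.json()).access_token;
}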
2.4.4. Redirect URIs
When using the redirect-based flows, it is important to use valid redirect URIs for your clients. The redirect URIs should be as specific as possible. This especially applies to client-side applications (public clients). Failing to do so could result in:
Open redirects - this can allow attackers to create spoof links that look like they are coming from your domain
Unauthorized entry - when users are already authenticated with Red Hat Single Sign-On, an attacker can use a public client where redirect URIs have not been configured correctly to gain access by redirecting the user without the user's knowledge
In production, web applications should always use https for all redirect URIs. Do not allow redirects to http.
There are also a few special redirect URIs:
http://localhost This redirect URI is useful for native applications and allows the native application to create a web server on a random port that can be used to obtain the authorization code. This redirect URI allows any port.
urn:ietf:wg:oauth:2.0:oob If it is not possible to start a web server in the client (or a browser is not available), it is possible to use the special urn:ietf:wg:oauth:2.0:oob redirect URI. When this redirect URI is used, Red Hat Single Sign-On displays a page with the code in the title and in a box on the page. The application can either detect that the browser title has changed, or the user can copy/paste the code manually into the application. With this redirect URI it is also possible for a user to use a different device to obtain a code to paste back to the application.
2.5. Financial-grade API (FAPI) Support
Red Hat Single Sign-On makes it easier for administrators to make sure that their clients are compliant with these specifications:
Financial-grade API Security Profile 1.0 - Part 1: Baseline
Financial-grade API Security Profile 1.0 - Part 2: Advanced
Financial-grade API: Client Initiated Backchannel Authentication Profile (FAPI CIBA)
This compliance means that the Red Hat Single Sign-On server will verify the requirements for the authorization server, which are mentioned in the specifications. Red Hat Single Sign-On adapters do not have any specific support for the FAPI, hence the required validations on the client (application) side may still need to be done manually or through some other third-party solution.
2.5.1. FAPI client profiles
To make sure that your clients are FAPI compliant, you can configure Client Policies in your realm as described in the Server Administration Guide and link them to the global client profiles for FAPI support, which are automatically available in each realm. You can use either the fapi-1-baseline or the fapi-1-advanced profile, based on which FAPI profile you need your clients to conform to.
If you want to use Pushed Authorization Request (PAR) , it is recommended that your client use both the fapi-1-baseline profile and fapi-1-advanced for PAR requests. Specifically, the fapi-1-baseline profile contains the pkce-enforcer executor, which makes sure that clients use PKCE with the secure S256 algorithm. This is not required for FAPI Advanced clients unless they use PAR requests.
If you want to use CIBA in a FAPI-compliant way, make sure that your clients use both the fapi-1-advanced and fapi-ciba client profiles. You need to use the fapi-1-advanced profile, or another client profile containing the requested executors, because the fapi-ciba profile contains only CIBA-specific executors.
When enforcing the requirements of the FAPI CIBA specification, additional requirements apply, such as enforcement of confidential clients or certificate-bound access tokens.
2.5.2. Open Banking Brasil Financial-grade API Security Profile
Red Hat Single Sign-On is compliant with the Open Banking Brasil Financial-grade API Security Profile 1.0 Implementers Draft 2 . This profile is stricter in some requirements than the FAPI 1 Advanced specification, and hence it may be necessary to configure Client Policies more strictly to enforce some of the requirements. Especially:
If your client does not use PAR, make sure that it uses encrypted OIDC request objects. This can be achieved by using a client profile with the secure-request-object executor configured with Encryption Required enabled.
Make sure that for JWS, the client uses the PS256 algorithm. For JWE, the client should use RSA-OAEP with A256GCM . This may need to be set in all the Client Settings where these algorithms are applicable.
2.5.3. TLS considerations
As confidential information is being exchanged, all interactions shall be encrypted with TLS (HTTPS). Moreover, there are some requirements in the FAPI specification for the cipher suites and TLS protocol versions used. To match these requirements, you can consider configuring the allowed ciphers. This configuration can be done in the KEYCLOAK_HOME/standalone/configuration/standalone-*.xml file in the Elytron subsystem. For example, this element can be added under tls / server-ssl-contexts :
<server-ssl-context name="kcSSLContext" want-client-auth="true" protocols="TLSv1.2" \ key-manager="kcKeyManager" trust-manager="kcTrustManager" \ cipher-suite-filter="TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_DHE_RSA_WITH_AES_128_GCM_SHA256,TLS_DHE_RSA_WITH_AES_256_GCM_SHA384" />
The references to kcKeyManager and kcTrustManager refer to the corresponding Keystore and Truststore. See the documentation of the WildFly Elytron subsystem for more details and also other parts of Red Hat Single Sign-On documentation such as Network Setup Section or X.509 Authentication Section . | [
"{ \"realm\" : \"demo\", \"resource\" : \"customer-portal\", \"realm-public-key\" : \"MIGfMA0GCSqGSIb3D...31LwIDAQAB\", \"auth-server-url\" : \"https://localhost:8443/auth\", \"ssl-required\" : \"external\", \"use-resource-role-mappings\" : false, \"enable-cors\" : true, \"cors-max-age\" : 1000, \"cors-allowed-methods\" : \"POST, PUT, DELETE, GET\", \"cors-exposed-headers\" : \"WWW-Authenticate, My-custom-exposed-Header\", \"bearer-only\" : false, \"enable-basic-auth\" : false, \"expose-token\" : true, \"verify-token-audience\" : true, \"credentials\" : { \"secret\" : \"234234-234234-234234\" }, \"connection-pool-size\" : 20, \"socket-timeout-millis\" : 5000, \"connection-timeout-millis\" : 6000, \"connection-ttl-millis\" : 500, \"disable-trust-manager\" : false, \"allow-any-hostname\" : false, \"truststore\" : \"path/to/truststore.jks\", \"truststore-password\" : \"geheim\", \"client-keystore\" : \"path/to/client-keystore.jks\", \"client-keystore-password\" : \"geheim\", \"client-key-password\" : \"geheim\", \"token-minimum-time-to-live\" : 10, \"min-time-between-jwks-requests\" : 10, \"public-key-cache-ttl\" : 86400, \"redirect-rewrite-rules\" : { \"^/wsmaster/api/(.*)USD\" : \"/api/USD1\" } }",
"cd USDEAP_HOME unzip rh-sso-7.6.11-eap7-adapter.zip",
"./bin/jboss-cli.sh --file=bin/adapter-elytron-install-offline.cli",
"./bin/jboss-cli.sh -c --file=bin/adapter-elytron-install.cli",
"./bin/jboss-cli.sh -c --file=bin/adapter-install.cli",
"<web-app xmlns=\"http://java.sun.com/xml/ns/javaee\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd\" version=\"3.0\"> <module-name>application</module-name> <security-constraint> <web-resource-collection> <web-resource-name>Admins</web-resource-name> <url-pattern>/admin/*</url-pattern> </web-resource-collection> <auth-constraint> <role-name>admin</role-name> </auth-constraint> <user-data-constraint> <transport-guarantee>CONFIDENTIAL</transport-guarantee> </user-data-constraint> </security-constraint> <security-constraint> <web-resource-collection> <web-resource-name>Customers</web-resource-name> <url-pattern>/customers/*</url-pattern> </web-resource-collection> <auth-constraint> <role-name>user</role-name> </auth-constraint> <user-data-constraint> <transport-guarantee>CONFIDENTIAL</transport-guarantee> </user-data-constraint> </security-constraint> <login-config> <auth-method>KEYCLOAK</auth-method> <realm-name>this is ignored currently</realm-name> </login-config> <security-role> <role-name>admin</role-name> </security-role> <security-role> <role-name>user</role-name> </security-role> </web-app>",
"<extensions> <extension module=\"org.keycloak.keycloak-adapter-subsystem\"/> </extensions> <profile> <subsystem xmlns=\"urn:jboss:domain:keycloak:1.1\"> <secure-deployment name=\"WAR MODULE NAME.war\"> <realm>demo</realm> <auth-server-url>http://localhost:8081/auth</auth-server-url> <ssl-required>external</ssl-required> <resource>customer-portal</resource> <credential name=\"secret\">password</credential> </secure-deployment> </subsystem> </profile>",
"<subsystem xmlns=\"urn:jboss:domain:keycloak:1.1\"> <realm name=\"demo\"> <auth-server-url>http://localhost:8080/auth</auth-server-url> <ssl-required>external</ssl-required> </realm> <secure-deployment name=\"customer-portal.war\"> <realm>demo</realm> <resource>customer-portal</resource> <credential name=\"secret\">password</credential> </secure-deployment> <secure-deployment name=\"product-portal.war\"> <realm>demo</realm> <resource>product-portal</resource> <credential name=\"secret\">password</credential> </secure-deployment> <secure-deployment name=\"database.war\"> <realm>demo</realm> <resource>database-service</resource> <bearer-only>true</bearer-only> </secure-deployment> </subsystem>",
"sudo subscription-manager repos --enable=jb-eap-7-for-rhel-<RHEL_VERSION>-server-rpms",
"sudo subscription-manager repos --enable=jb-eap-7.4-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-baseos-rpms --enable=rhel-8-for-x86_64-appstream-rpms",
"sudo yum install eap7-keycloak-adapter-sso7_6",
"sudo dnf install eap7-keycloak-adapter-sso7_6",
"USDEAP_HOME/bin/jboss-cli.sh -c --file=USDEAP_HOME/bin/adapter-install.cli",
"sudo subscription-manager repos --enable=jb-eap-6-for-rhel-<RHEL_VERSION>-server-rpms",
"sudo yum install keycloak-adapter-sso7_6-eap6",
"USDEAP_HOME/bin/jboss-cli.sh -c --file=USDEAP_HOME/bin/adapter-install.cli",
"org.ops4j.pax.url.mvn.repositories= https://maven.repository.redhat.com/ga/@id=redhat.product.repo http://repo1.maven.org/maven2@id=maven.central.repo,",
"features:addurl mvn:org.keycloak/keycloak-osgi-features/18.0.18.redhat-00001/xml/features features:install keycloak",
"features:install keycloak-jetty9-adapter",
"features:list | grep keycloak",
"cd /path-to-fuse/jboss-fuse-6.3.0.redhat-254 unzip -q /path-to-adapter-zip/rh-sso-7.6.11-fuse-adapter.zip",
"features:addurl mvn:org.keycloak/keycloak-osgi-features/18.0.18.redhat-00001/xml/features features:install keycloak",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <web-app xmlns=\"http://java.sun.com/xml/ns/javaee\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd\" version=\"3.0\"> <module-name>customer-portal</module-name> <welcome-file-list> <welcome-file>index.html</welcome-file> </welcome-file-list> <security-constraint> <web-resource-collection> <web-resource-name>Customers</web-resource-name> <url-pattern>/customers/*</url-pattern> </web-resource-collection> <auth-constraint> <role-name>user</role-name> </auth-constraint> </security-constraint> <login-config> <auth-method>BASIC</auth-method> <realm-name>does-not-matter</realm-name> </login-config> <security-role> <role-name>admin</role-name> </security-role> <security-role> <role-name>user</role-name> </security-role> </web-app>",
"<?xml version=\"1.0\"?> <!DOCTYPE Configure PUBLIC \"-//Mort Bay Consulting//DTD Configure//EN\" \"http://www.eclipse.org/jetty/configure_9_0.dtd\"> <Configure class=\"org.eclipse.jetty.webapp.WebAppContext\"> <Get name=\"securityHandler\"> <Set name=\"authenticator\"> <New class=\"org.keycloak.adapters.jetty.KeycloakJettyAuthenticator\"> </New> </Set> </Get> </Configure>",
"org.keycloak.adapters.jetty;version=\"18.0.18.redhat-00001\", org.keycloak.adapters;version=\"18.0.18.redhat-00001\", org.keycloak.constants;version=\"18.0.18.redhat-00001\", org.keycloak.util;version=\"18.0.18.redhat-00001\", org.keycloak.*;version=\"18.0.18.redhat-00001\", *;resolution:=optional",
"<context-param> <param-name>keycloak.config.resolver</param-name> <param-value>org.keycloak.adapters.osgi.PathBasedKeycloakConfigResolver</param-value> </context-param>",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://www.osgi.org/xmlns/blueprint/v1.0.0 http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd\"> <!-- Using jetty bean just for the compatibility with other fuse services --> <bean id=\"servletConstraintMapping\" class=\"org.eclipse.jetty.security.ConstraintMapping\"> <property name=\"constraint\"> <bean class=\"org.eclipse.jetty.util.security.Constraint\"> <property name=\"name\" value=\"cst1\"/> <property name=\"roles\"> <list> <value>user</value> </list> </property> <property name=\"authenticate\" value=\"true\"/> <property name=\"dataConstraint\" value=\"0\"/> </bean> </property> <property name=\"pathSpec\" value=\"/product-portal/*\"/> </bean> <bean id=\"keycloakPaxWebIntegration\" class=\"org.keycloak.adapters.osgi.PaxWebIntegrationService\" init-method=\"start\" destroy-method=\"stop\"> <property name=\"jettyWebXmlLocation\" value=\"/WEB-INF/jetty-web.xml\" /> <property name=\"bundleContext\" ref=\"blueprintBundleContext\" /> <property name=\"constraintMappings\"> <list> <ref component-id=\"servletConstraintMapping\" /> </list> </property> </bean> <bean id=\"productServlet\" class=\"org.keycloak.example.ProductPortalServlet\" depends-on=\"keycloakPaxWebIntegration\"> </bean> <service ref=\"productServlet\" interface=\"javax.servlet.Servlet\"> <service-properties> <entry key=\"alias\" value=\"/product-portal\" /> <entry key=\"servlet-name\" value=\"ProductServlet\" /> <entry key=\"keycloak.config.file\" value=\"/keycloak.json\" /> </service-properties> </service> </blueprint>",
"org.keycloak.adapters.jetty;version=\"18.0.18.redhat-00001\", org.keycloak.adapters;version=\"18.0.18.redhat-00001\", org.keycloak.constants;version=\"18.0.18.redhat-00001\", org.keycloak.util;version=\"18.0.18.redhat-00001\", org.keycloak.*;version=\"18.0.18.redhat-00001\", *;resolution:=optional",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:camel=\"http://camel.apache.org/schema/blueprint\" xsi:schemaLocation=\" http://www.osgi.org/xmlns/blueprint/v1.0.0 http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd http://camel.apache.org/schema/blueprint http://camel.apache.org/schema/blueprint/camel-blueprint.xsd\"> <bean id=\"kcAdapterConfig\" class=\"org.keycloak.representations.adapters.config.AdapterConfig\"> <property name=\"realm\" value=\"demo\"/> <property name=\"resource\" value=\"admin-camel-endpoint\"/> <property name=\"bearerOnly\" value=\"true\"/> <property name=\"authServerUrl\" value=\"http://localhost:8080/auth\" /> <property name=\"sslRequired\" value=\"EXTERNAL\"/> </bean> <bean id=\"keycloakAuthenticator\" class=\"org.keycloak.adapters.jetty.KeycloakJettyAuthenticator\"> <property name=\"adapterConfig\" ref=\"kcAdapterConfig\"/> </bean> <bean id=\"constraint\" class=\"org.eclipse.jetty.util.security.Constraint\"> <property name=\"name\" value=\"Customers\"/> <property name=\"roles\"> <list> <value>admin</value> </list> </property> <property name=\"authenticate\" value=\"true\"/> <property name=\"dataConstraint\" value=\"0\"/> </bean> <bean id=\"constraintMapping\" class=\"org.eclipse.jetty.security.ConstraintMapping\"> <property name=\"constraint\" ref=\"constraint\"/> <property name=\"pathSpec\" value=\"/*\"/> </bean> <bean id=\"securityHandler\" class=\"org.eclipse.jetty.security.ConstraintSecurityHandler\"> <property name=\"authenticator\" ref=\"keycloakAuthenticator\" /> <property name=\"constraintMappings\"> <list> <ref component-id=\"constraintMapping\" /> </list> </property> <property name=\"authMethod\" value=\"BASIC\"/> <property name=\"realmName\" value=\"does-not-matter\"/> </bean> <bean id=\"sessionHandler\" class=\"org.keycloak.adapters.jetty.spi.WrappingSessionHandler\"> <property name=\"handler\" ref=\"securityHandler\" /> </bean> <bean id=\"helloProcessor\" class=\"org.keycloak.example.CamelHelloProcessor\" /> <camelContext id=\"blueprintContext\" trace=\"false\" xmlns=\"http://camel.apache.org/schema/blueprint\"> <route id=\"httpBridge\"> <from uri=\"jetty:http://0.0.0.0:8383/admin-camel-endpoint?handlers=sessionHandler&matchOnUriPrefix=true\" /> <process ref=\"helloProcessor\" /> <log message=\"The message from camel endpoint contains USD{body}\"/> </route> </camelContext> </blueprint>",
"javax.servlet;version=\"[3,4)\", javax.servlet.http;version=\"[3,4)\", org.apache.camel.*, org.apache.camel;version=\"[2.13,3)\", org.eclipse.jetty.security;version=\"[9,10)\", org.eclipse.jetty.server.nio;version=\"[9,10)\", org.eclipse.jetty.util.security;version=\"[9,10)\", org.keycloak.*;version=\"18.0.18.redhat-00001\", org.osgi.service.blueprint, org.osgi.service.blueprint.container, org.osgi.service.event,",
"<bean id=\"securityHandlerRest\" class=\"org.eclipse.jetty.security.ConstraintSecurityHandler\"> <property name=\"authenticator\" ref=\"keycloakAuthenticator\" /> <property name=\"constraintMappings\"> <list> <ref component-id=\"constraintMapping\" /> </list> </property> <property name=\"authMethod\" value=\"BASIC\"/> <property name=\"realmName\" value=\"does-not-matter\"/> </bean> <bean id=\"sessionHandlerRest\" class=\"org.keycloak.adapters.jetty.spi.WrappingSessionHandler\"> <property name=\"handler\" ref=\"securityHandlerRest\" /> </bean> <camelContext id=\"blueprintContext\" trace=\"false\" xmlns=\"http://camel.apache.org/schema/blueprint\"> <restConfiguration component=\"jetty\" contextPath=\"/restdsl\" port=\"8484\"> <!--the link with Keycloak security handlers happens here--> <endpointProperty key=\"handlers\" value=\"sessionHandlerRest\"></endpointProperty> <endpointProperty key=\"matchOnUriPrefix\" value=\"true\"></endpointProperty> </restConfiguration> <rest path=\"/hello\" > <description>Hello rest service</description> <get uri=\"/{id}\" outType=\"java.lang.String\"> <description>Just an helllo</description> <to uri=\"direct:justDirect\" /> </get> </rest> <route id=\"justDirect\"> <from uri=\"direct:justDirect\"/> <process ref=\"helloProcessor\" /> <log message=\"RestDSL correctly invoked USD{body}\"/> <setBody> <constant>(__This second sentence is returned from a Camel RestDSL endpoint__)</constant> </setBody> </route> </camelContext>",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <beans xmlns=\"http://www.springframework.org/schema/beans\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:jaxws=\"http://cxf.apache.org/jaxws\" xmlns:httpj=\"http://cxf.apache.org/transports/http-jetty/configuration\" xsi:schemaLocation=\" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://cxf.apache.org/jaxws http://cxf.apache.org/schemas/jaxws.xsd http://www.springframework.org/schema/osgi http://www.springframework.org/schema/osgi/spring-osgi.xsd http://cxf.apache.org/transports/http-jetty/configuration http://cxf.apache.org/schemas/configuration/http-jetty.xsd\"> <import resource=\"classpath:META-INF/cxf/cxf.xml\" /> <bean id=\"kcAdapterConfig\" class=\"org.keycloak.representations.adapters.config.AdapterConfig\"> <property name=\"realm\" value=\"demo\"/> <property name=\"resource\" value=\"custom-cxf-endpoint\"/> <property name=\"bearerOnly\" value=\"true\"/> <property name=\"authServerUrl\" value=\"http://localhost:8080/auth\" /> <property name=\"sslRequired\" value=\"EXTERNAL\"/> </bean> <bean id=\"keycloakAuthenticator\" class=\"org.keycloak.adapters.jetty.KeycloakJettyAuthenticator\"> <property name=\"adapterConfig\"> <ref local=\"kcAdapterConfig\" /> </property> </bean> <bean id=\"constraint\" class=\"org.eclipse.jetty.util.security.Constraint\"> <property name=\"name\" value=\"Customers\"/> <property name=\"roles\"> <list> <value>user</value> </list> </property> <property name=\"authenticate\" value=\"true\"/> <property name=\"dataConstraint\" value=\"0\"/> </bean> <bean id=\"constraintMapping\" class=\"org.eclipse.jetty.security.ConstraintMapping\"> <property name=\"constraint\" ref=\"constraint\"/> <property name=\"pathSpec\" value=\"/*\"/> </bean> <bean id=\"securityHandler\" class=\"org.eclipse.jetty.security.ConstraintSecurityHandler\"> <property name=\"authenticator\" ref=\"keycloakAuthenticator\" /> <property name=\"constraintMappings\"> <list> <ref local=\"constraintMapping\" /> </list> </property> <property name=\"authMethod\" value=\"BASIC\"/> <property name=\"realmName\" value=\"does-not-matter\"/> </bean> <httpj:engine-factory bus=\"cxf\" id=\"kc-cxf-endpoint\"> <httpj:engine port=\"8282\"> <httpj:handlers> <ref local=\"securityHandler\" /> </httpj:handlers> <httpj:sessionSupport>true</httpj:sessionSupport> </httpj:engine> </httpj:engine-factory> <jaxws:endpoint implementor=\"org.keycloak.example.ws.ProductImpl\" address=\"http://localhost:8282/ProductServiceCF\" depends-on=\"kc-cxf-endpoint\" /> </beans>",
"<jaxrs:server serviceClass=\"org.keycloak.example.rs.CustomerService\" address=\"http://localhost:8282/rest\" depends-on=\"kc-cxf-endpoint\"> <jaxrs:providers> <bean class=\"com.fasterxml.jackson.jaxrs.json.JacksonJsonProvider\" /> </jaxrs:providers> </jaxrs:server>",
"META-INF.cxf;version=\"[2.7,3.2)\", META-INF.cxf.osgi;version=\"[2.7,3.2)\";resolution:=optional, org.apache.cxf.bus;version=\"[2.7,3.2)\", org.apache.cxf.bus.spring;version=\"[2.7,3.2)\", org.apache.cxf.bus.resource;version=\"[2.7,3.2)\", org.apache.cxf.transport.http;version=\"[2.7,3.2)\", org.apache.cxf.*;version=\"[2.7,3.2)\", org.springframework.beans.factory.config, org.eclipse.jetty.security;version=\"[9,10)\", org.eclipse.jetty.util.security;version=\"[9,10)\", org.keycloak.*;version=\"18.0.18.redhat-00001\"",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:jaxrs=\"http://cxf.apache.org/blueprint/jaxrs\" xsi:schemaLocation=\" http://www.osgi.org/xmlns/blueprint/v1.0.0 http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd http://cxf.apache.org/blueprint/jaxrs http://cxf.apache.org/schemas/blueprint/jaxrs.xsd\"> <!-- JAXRS Application --> <bean id=\"customerBean\" class=\"org.keycloak.example.rs.CxfCustomerService\" /> <jaxrs:server id=\"cxfJaxrsServer\" address=\"/customerservice\"> <jaxrs:providers> <bean class=\"com.fasterxml.jackson.jaxrs.json.JacksonJsonProvider\" /> </jaxrs:providers> <jaxrs:serviceBeans> <ref component-id=\"customerBean\" /> </jaxrs:serviceBeans> </jaxrs:server> <!-- Securing of whole /cxf context by unregister default cxf servlet from paxweb and re-register with applied security constraints --> <bean id=\"cxfConstraintMapping\" class=\"org.eclipse.jetty.security.ConstraintMapping\"> <property name=\"constraint\"> <bean class=\"org.eclipse.jetty.util.security.Constraint\"> <property name=\"name\" value=\"cst1\"/> <property name=\"roles\"> <list> <value>user</value> </list> </property> <property name=\"authenticate\" value=\"true\"/> <property name=\"dataConstraint\" value=\"0\"/> </bean> </property> <property name=\"pathSpec\" value=\"/cxf/*\"/> </bean> <bean id=\"cxfKeycloakPaxWebIntegration\" class=\"org.keycloak.adapters.osgi.PaxWebIntegrationService\" init-method=\"start\" destroy-method=\"stop\"> <property name=\"bundleContext\" ref=\"blueprintBundleContext\" /> <property name=\"jettyWebXmlLocation\" value=\"/WEB-INF/jetty-web.xml\" /> <property name=\"constraintMappings\"> <list> <ref component-id=\"cxfConstraintMapping\" /> </list> </property> </bean> <bean id=\"defaultCxfReregistration\" class=\"org.keycloak.adapters.osgi.ServletReregistrationService\" depends-on=\"cxfKeycloakPaxWebIntegration\" init-method=\"start\" destroy-method=\"stop\"> <property name=\"bundleContext\" ref=\"blueprintBundleContext\" /> <property name=\"managedServiceReference\"> <reference interface=\"org.osgi.service.cm.ManagedService\" filter=\"(service.pid=org.apache.cxf.osgi)\" timeout=\"5000\" /> </property> </bean> </blueprint>",
"META-INF.cxf;version=\"[2.7,3.2)\", META-INF.cxf.osgi;version=\"[2.7,3.2)\";resolution:=optional, org.apache.cxf.transport.http;version=\"[2.7,3.2)\", org.apache.cxf.*;version=\"[2.7,3.2)\", com.fasterxml.jackson.jaxrs.json;version=\"[2.5,3)\", org.eclipse.jetty.security;version=\"[9,10)\", org.eclipse.jetty.util.security;version=\"[9,10)\", org.keycloak.*;version=\"18.0.18.redhat-00001\", org.keycloak.adapters.jetty;version=\"18.0.18.redhat-00001\", *;resolution:=optional",
"sshRealm=keycloak",
"{ \"realm\": \"demo\", \"resource\": \"ssh-jmx-admin-client\", \"ssl-required\" : \"external\", \"auth-server-url\" : \"http://localhost:8080/auth\", \"credentials\": { \"secret\": \"password\" } }",
"features:addurl mvn:org.keycloak/keycloak-osgi-features/18.0.18.redhat-00001/xml/features features:install keycloak-jaas",
"ssh -o PubkeyAuthentication=no -p 8101 admin@localhost",
"jmxRealm=keycloak",
"service:jmx:rmi://localhost:44444/jndi/rmi://localhost:1099/karaf-root",
"hawtio.keycloakEnabled=true hawtio.realm=keycloak hawtio.keycloakClientConfig=file://USD{karaf.base}/etc/keycloak-hawtio-client.json hawtio.rolePrincipalClasses=org.keycloak.adapters.jaas.RolePrincipal,org.apache.karaf.jaas.boot.principal.RolePrincipal",
"{ \"realm\" : \"demo\", \"resource\" : \"hawtio-client\", \"auth-server-url\" : \"http://localhost:8080/auth\", \"ssl-required\" : \"external\", \"public-client\" : true }",
"{ \"realm\" : \"demo\", \"resource\" : \"jaas\", \"bearer-only\" : true, \"auth-server-url\" : \"http://localhost:8080/auth\", \"ssl-required\" : \"external\", \"use-resource-role-mappings\": false, \"principal-attribute\": \"preferred_username\" }",
"features:addurl mvn:org.keycloak/keycloak-osgi-features/18.0.18.redhat-00001/xml/features features:install keycloak",
"<extensions> </extensions> <system-properties> <property name=\"hawtio.authenticationEnabled\" value=\"true\" /> <property name=\"hawtio.realm\" value=\"hawtio\" /> <property name=\"hawtio.roles\" value=\"admin,viewer\" /> <property name=\"hawtio.rolePrincipalClasses\" value=\"org.keycloak.adapters.jaas.RolePrincipal\" /> <property name=\"hawtio.keycloakEnabled\" value=\"true\" /> <property name=\"hawtio.keycloakClientConfig\" value=\"USD{jboss.server.config.dir}/keycloak-hawtio-client.json\" /> <property name=\"hawtio.keycloakServerConfig\" value=\"USD{jboss.server.config.dir}/keycloak-hawtio.json\" /> </system-properties>",
"<security-domain name=\"hawtio\" cache-type=\"default\"> <authentication> <login-module code=\"org.keycloak.adapters.jaas.BearerTokenLoginModule\" flag=\"required\"> <module-option name=\"keycloak-config-file\" value=\"USD{hawtio.keycloakServerConfig}\"/> </login-module> </authentication> </security-domain>",
"<subsystem xmlns=\"urn:jboss:domain:keycloak:1.1\"> <secure-deployment name=\"hawtio-wildfly-1.4.0.redhat-630396.war\" /> </subsystem>",
"cd USDEAP_HOME/bin ./standalone.sh -Djboss.socket.binding.port-offset=101",
"config:edit org.ops4j.pax.url.mvn config:property-append org.ops4j.pax.url.mvn.repositories ,https://maven.repository.redhat.com/ga/@id=redhat.product.repo config:update feature:repo-refresh",
"feature:repo-add mvn:org.keycloak/keycloak-osgi-features/18.0.18.redhat-00001/xml/features feature:install keycloak-pax-http-undertow keycloak-jaas",
"feature:install pax-web-http-undertow",
"feature:list | grep keycloak",
"cd /path-to-fuse/fuse-karaf-7.z unzip -q /path-to-adapter-zip/rh-sso-7.6.11-fuse-adapter.zip",
"feature:repo-add mvn:org.keycloak/keycloak-osgi-features/18.0.18.redhat-00001/xml/features feature:install keycloak-pax-http-undertow keycloak-jaas",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <web-app xmlns=\"http://java.sun.com/xml/ns/javaee\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd\" version=\"3.0\"> <module-name>customer-portal</module-name> <welcome-file-list> <welcome-file>index.html</welcome-file> </welcome-file-list> <security-constraint> <web-resource-collection> <web-resource-name>Customers</web-resource-name> <url-pattern>/customers/*</url-pattern> </web-resource-collection> <auth-constraint> <role-name>user</role-name> </auth-constraint> </security-constraint> <login-config> <auth-method>KEYCLOAK</auth-method> <realm-name>does-not-matter</realm-name> </login-config> <security-role> <role-name>admin</role-name> </security-role> <security-role> <role-name>user</role-name> </security-role> </web-app>",
"{ \"realm\": \"demo\", \"resource\": \"customer-portal\", \"auth-server-url\": \"http://localhost:8080/auth\", \"ssl-required\" : \"external\", \"credentials\": { \"secret\": \"password\" } }",
"<context-param> <param-name>keycloak.config.resolver</param-name> <param-value>org.keycloak.adapters.osgi.PathBasedKeycloakConfigResolver</param-value> </context-param>",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://www.osgi.org/xmlns/blueprint/v1.0.0 http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd\"> <bean id=\"servletConstraintMapping\" class=\"org.keycloak.adapters.osgi.PaxWebSecurityConstraintMapping\"> <property name=\"roles\"> <list> <value>user</value> </list> </property> <property name=\"authentication\" value=\"true\"/> <property name=\"url\" value=\"/product-portal/*\"/> </bean> <!-- This handles the integration and setting the login-config and security-constraints parameters --> <bean id=\"keycloakPaxWebIntegration\" class=\"org.keycloak.adapters.osgi.undertow.PaxWebIntegrationService\" init-method=\"start\" destroy-method=\"stop\"> <property name=\"bundleContext\" ref=\"blueprintBundleContext\" /> <property name=\"constraintMappings\"> <list> <ref component-id=\"servletConstraintMapping\" /> </list> </property> </bean> <bean id=\"productServlet\" class=\"org.keycloak.example.ProductPortalServlet\" depends-on=\"keycloakPaxWebIntegration\" /> <service ref=\"productServlet\" interface=\"javax.servlet.Servlet\"> <service-properties> <entry key=\"alias\" value=\"/product-portal\" /> <entry key=\"servlet-name\" value=\"ProductServlet\" /> <entry key=\"keycloak.config.file\" value=\"/keycloak.json\" /> </service-properties> </service> </blueprint>",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:camel=\"http://camel.apache.org/schema/blueprint\" xsi:schemaLocation=\" http://www.osgi.org/xmlns/blueprint/v1.0.0 http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd http://camel.apache.org/schema/blueprint http://camel.apache.org/schema/blueprint/camel-blueprint-2.17.1.xsd\"> <bean id=\"keycloakConfigResolver\" class=\"org.keycloak.adapters.osgi.BundleBasedKeycloakConfigResolver\" > <property name=\"bundleContext\" ref=\"blueprintBundleContext\" /> </bean> <bean id=\"helloProcessor\" class=\"org.keycloak.example.CamelHelloProcessor\" /> <camelContext id=\"blueprintContext\" trace=\"false\" xmlns=\"http://camel.apache.org/schema/blueprint\"> <route id=\"httpBridge\"> <from uri=\"undertow-keycloak:http://0.0.0.0:8383/admin-camel-endpoint?matchOnUriPrefix=true&configResolver=#keycloakConfigResolver&allowedRoles=admin\" /> <process ref=\"helloProcessor\" /> <log message=\"The message from camel endpoint contains USD{body}\"/> </route> </camelContext> </blueprint>",
"javax.servlet;version=\"[3,4)\", javax.servlet.http;version=\"[3,4)\", javax.net.ssl, org.apache.camel.*, org.apache.camel;version=\"[2.13,3)\", io.undertow.*, org.keycloak.*;version=\"18.0.18.redhat-00001\", org.osgi.service.blueprint, org.osgi.service.blueprint.container",
"<camelContext id=\"blueprintContext\" trace=\"false\" xmlns=\"http://camel.apache.org/schema/blueprint\"> <!--the link with Keycloak security handlers happens by using undertow-keycloak component --> <restConfiguration apiComponent=\"undertow-keycloak\" contextPath=\"/restdsl\" port=\"8484\"> <endpointProperty key=\"configResolver\" value=\"#keycloakConfigResolver\" /> <endpointProperty key=\"allowedRoles\" value=\"admin,superadmin\" /> </restConfiguration> <rest path=\"/hello\" > <description>Hello rest service</description> <get uri=\"/{id}\" outType=\"java.lang.String\"> <description>Just a hello</description> <to uri=\"direct:justDirect\" /> </get> </rest> <route id=\"justDirect\"> <from uri=\"direct:justDirect\"/> <process ref=\"helloProcessor\" /> <log message=\"RestDSL correctly invoked USD{body}\"/> <setBody> <constant>(__This second sentence is returned from a Camel RestDSL endpoint__)</constant> </setBody> </route> </camelContext>",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:jaxws=\"http://cxf.apache.org/blueprint/jaxws\" xmlns:cxf=\"http://cxf.apache.org/blueprint/core\" xmlns:httpu=\"http://cxf.apache.org/transports/http-undertow/configuration\". xsi:schemaLocation=\" http://cxf.apache.org/transports/http-undertow/configuration http://cxf.apache.org/schemas/configuration/http-undertow.xsd http://cxf.apache.org/blueprint/core http://cxf.apache.org/schemas/blueprint/core.xsd http://cxf.apache.org/blueprint/jaxws http://cxf.apache.org/schemas/blueprint/jaxws.xsd\"> <bean id=\"keycloakConfigResolver\" class=\"org.keycloak.adapters.osgi.BundleBasedKeycloakConfigResolver\" > <property name=\"bundleContext\" ref=\"blueprintBundleContext\" /> </bean> <httpu:engine-factory bus=\"cxf\" id=\"kc-cxf-endpoint\"> <httpu:engine port=\"8282\"> <httpu:handlers> <bean class=\"org.keycloak.adapters.osgi.undertow.CxfKeycloakAuthHandler\"> <property name=\"configResolver\" ref=\"keycloakConfigResolver\" /> </bean> </httpu:handlers> </httpu:engine> </httpu:engine-factory> <jaxws:endpoint implementor=\"org.keycloak.example.ws.ProductImpl\" address=\"http://localhost:8282/ProductServiceCF\" depends-on=\"kc-cxf-endpoint\"/> </blueprint>",
"<jaxrs:server serviceClass=\"org.keycloak.example.rs.CustomerService\" address=\"http://localhost:8282/rest\" depends-on=\"kc-cxf-endpoint\"> <jaxrs:providers> <bean class=\"com.fasterxml.jackson.jaxrs.json.JacksonJsonProvider\" /> </jaxrs:providers> </jaxrs:server>",
"META-INF.cxf;version=\"[2.7,3.3)\", META-INF.cxf.osgi;version=\"[2.7,3.3)\";resolution:=optional, org.apache.cxf.bus;version=\"[2.7,3.3)\", org.apache.cxf.bus.spring;version=\"[2.7,3.3)\", org.apache.cxf.bus.resource;version=\"[2.7,3.3)\", org.apache.cxf.transport.http;version=\"[2.7,3.3)\", org.apache.cxf.*;version=\"[2.7,3.3)\", org.springframework.beans.factory.config, org.keycloak.*;version=\"18.0.18.redhat-00001\"",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:jaxrs=\"http://cxf.apache.org/blueprint/jaxrs\" xsi:schemaLocation=\" http://www.osgi.org/xmlns/blueprint/v1.0.0 http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd http://cxf.apache.org/blueprint/jaxrs http://cxf.apache.org/schemas/blueprint/jaxrs.xsd\"> <!-- JAXRS Application --> <bean id=\"customerBean\" class=\"org.keycloak.example.rs.CxfCustomerService\" /> <jaxrs:server id=\"cxfJaxrsServer\" address=\"/customerservice\"> <jaxrs:providers> <bean class=\"com.fasterxml.jackson.jaxrs.json.JacksonJsonProvider\" /> </jaxrs:providers> <jaxrs:serviceBeans> <ref component-id=\"customerBean\" /> </jaxrs:serviceBeans> </jaxrs:server> </blueprint>",
"bundle.symbolicName = org.apache.cxf.cxf-rt-transports-http context.id = default context.param.keycloak.config.resolver = org.keycloak.adapters.osgi.HierarchicalPathBasedKeycloakConfigResolver login.config.authMethod = KEYCLOAK security.cxf.url = /cxf/customerservice/* security.cxf.roles = admin, user",
"javax.ws.rs;version=\"[2,3)\", META-INF.cxf;version=\"[2.7,3.3)\", META-INF.cxf.osgi;version=\"[2.7,3.3)\";resolution:=optional, org.apache.cxf.transport.http;version=\"[2.7,3.3)\", org.apache.cxf.*;version=\"[2.7,3.3)\", com.fasterxml.jackson.jaxrs.json;version=\"USD{jackson.version}\"",
"sshRealm=keycloak",
"{ \"realm\": \"demo\", \"resource\": \"ssh-jmx-admin-client\", \"ssl-required\" : \"external\", \"auth-server-url\" : \"http://localhost:8080/auth\", \"credentials\": { \"secret\": \"password\" } }",
"features:addurl mvn:org.keycloak/keycloak-osgi-features/18.0.18.redhat-00001/xml/features features:install keycloak-jaas",
"ssh -o PubkeyAuthentication=no -p 8101 admin@localhost",
"jmxRealm=keycloak",
"service:jmx:rmi://localhost:44444/jndi/rmi://localhost:1099/karaf-root",
"{ \"realm\" : \"demo\", \"clientId\" : \"hawtio-client\", \"url\" : \"http://localhost:8080/auth\", \"ssl-required\" : \"external\", \"public-client\" : true }",
"{ \"realm\" : \"demo\", \"resource\" : \"ssh-jmx-admin-client\", \"auth-server-url\" : \"http://localhost:8080/auth\", \"ssl-required\" : \"external\", \"credentials\": { \"secret\": \"password\" } }",
"{ \"realm\" : \"demo\", \"resource\" : \"jaas\", \"bearer-only\" : true, \"auth-server-url\" : \"http://localhost:8080/auth\", \"ssl-required\" : \"external\", \"use-resource-role-mappings\": false, \"principal-attribute\": \"preferred_username\" }",
"system:property -p hawtio.keycloakEnabled true system:property -p hawtio.realm keycloak system:property -p hawtio.keycloakClientConfig file://\\USD{karaf.base}/etc/keycloak-hawtio-client.json system:property -p hawtio.rolePrincipalClasses org.keycloak.adapters.jaas.RolePrincipal,org.apache.karaf.jaas.boot.principal.RolePrincipal restart io.hawt.hawtio-war",
"<dependency> <groupId>org.keycloak</groupId> <artifactId>keycloak-spring-boot-starter</artifactId> </dependency>",
"<dependencyManagement> <dependencies> <dependency> <groupId>org.keycloak.bom</groupId> <artifactId>keycloak-adapter-bom</artifactId> <version>18.0.18.redhat-00001</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement>",
"keycloak.realm = demorealm keycloak.auth-server-url = http://127.0.0.1:8080/auth keycloak.ssl-required = external keycloak.resource = demoapp keycloak.credentials.secret = 11111111-1111-1111-1111-111111111111 keycloak.use-resource-role-mappings = true",
"keycloak.securityConstraints[0].authRoles[0] = admin keycloak.securityConstraints[0].authRoles[1] = user keycloak.securityConstraints[0].securityCollections[0].name = insecure stuff keycloak.securityConstraints[0].securityCollections[0].patterns[0] = /insecure keycloak.securityConstraints[1].authRoles[0] = admin keycloak.securityConstraints[1].securityCollections[0].name = admin stuff keycloak.securityConstraints[1].securityCollections[0].patterns[0] = /admin",
"<web-app xmlns=\"http://java.sun.com/xml/ns/javaee\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd\" version=\"3.0\"> <module-name>application</module-name> <filter> <filter-name>Keycloak Filter</filter-name> <filter-class>org.keycloak.adapters.servlet.KeycloakOIDCFilter</filter-class> </filter> <filter-mapping> <filter-name>Keycloak Filter</filter-name> <url-pattern>/keycloak/*</url-pattern> <url-pattern>/protected/*</url-pattern> </filter-mapping> </web-app>",
"<init-param> <param-name>keycloak.config.skipPattern</param-name> <param-value>^/(path1|path2|path3).*</param-value> </init-param>",
"<init-param> <param-name>keycloak.config.idMapper</param-name> <param-value>org.keycloak.adapters.spi.InMemorySessionIdMapper</param-value> </init-param>",
"<dependency> <groupId>org.keycloak</groupId> <artifactId>keycloak-servlet-filter-adapter</artifactId> <version>18.0.18.redhat-00001</version> </dependency>",
"httpServletRequest .getAttribute(KeycloakSecurityContext.class.getName());",
"httpServletRequest.getSession() .getAttribute(KeycloakSecurityContext.class.getName());",
"<error-page> <error-code>403</error-code> <location>/ErrorHandler</location> </error-page>",
"import org.keycloak.adapters.OIDCAuthenticationError; import org.keycloak.adapters.OIDCAuthenticationError.Reason; OIDCAuthenticationError error = (OIDCAuthenticationError) httpServletRequest .getAttribute('org.keycloak.adapters.spi.AuthenticationError'); Reason reason = error.getReason(); System.out.println(reason.name());",
"http://myappserver/mysecuredapp?scope=offline_access",
"\"credentials\": { \"secret\": \"19666a4f-32dd-4049-b082-684c74115f28\" }",
"\"credentials\": { \"jwt\": { \"client-keystore-file\": \"classpath:keystore-client.jks\", \"client-keystore-type\": \"JKS\", \"client-keystore-password\": \"storepass\", \"client-key-password\": \"keypass\", \"client-key-alias\": \"clientkey\", \"algorithm\": \"RS256\", \"token-expiration\": 10 } }",
"package example; import org.keycloak.adapters.KeycloakConfigResolver; import org.keycloak.adapters.KeycloakDeployment; import org.keycloak.adapters.KeycloakDeploymentBuilder; public class PathBasedKeycloakConfigResolver implements KeycloakConfigResolver { @Override public KeycloakDeployment resolve(OIDCHttpFacade.Request request) { if (path.startsWith(\"alternative\")) { KeycloakDeployment deployment = cache.get(realm); if (null == deployment) { InputStream is = getClass().getResourceAsStream(\"/tenant1-keycloak.json\"); return KeycloakDeploymentBuilder.build(is); } } else { InputStream is = getClass().getResourceAsStream(\"/default-keycloak.json\"); return KeycloakDeploymentBuilder.build(is); } } }",
"<web-app> <context-param> <param-name>keycloak.config.resolver</param-name> <param-value>example.PathBasedKeycloakConfigResolver</param-value> </context-param> </web-app>",
"\"token-store\": \"cookie\"",
"\"register-node-at-startup\": true, \"register-node-period\": 600,",
"\"always-refresh-token\": true",
"<html> <head> <script src=\"keycloak.js\"></script> <script> function initKeycloak() { const options = {}; const keycloak = new Keycloak(); keycloak.init(options) .then(function(authenticated) { console.log('keycloak:' + (authenticated ? 'authenticated' : 'not authenticated')); }).catch(function(error) { for (const property in error) { console.error(`USD{property}: USD{error[property]}`); } }); } </script> </head> <body onload=\"initKeycloak()\"> <!-- your page content goes here --> </body> </html>",
"const keycloak = new Keycloak('http://localhost:8080/myapp/keycloak.json');",
"const keycloak = new Keycloak({ url: 'http://keycloak-serverUSD/auth', realm: 'myrealm', clientId: 'myapp' });",
"keycloak.init({ onLoad: 'check-sso', silentCheckSsoRedirectUri: window.location.origin + '/silent-check-sso.html' })",
"<html> <body> <script> parent.postMessage(location.href, location.origin) </script> </body> </html>",
"keycloak.init({ onLoad: 'login-required' })",
"const loadData = function () { document.getElementById('username').innerText = keycloak.subject; const url = 'http://localhost:8080/restful-service'; const req = new XMLHttpRequest(); req.open('GET', url, true); req.setRequestHeader('Accept', 'application/json'); req.setRequestHeader('Authorization', 'Bearer ' + keycloak.token); req.onreadystatechange = function () { if (req.readyState == 4) { if (req.status == 200) { alert('Success'); } else if (req.status == 403) { alert('Forbidden'); } } } req.send(); };",
"keycloak.updateToken(30).then(function() { loadData(); }).catch(function() { alert('Failed to refresh token'); });",
"keycloak.init({ flow: 'implicit' })",
"keycloak.init({ flow: 'hybrid' })",
"keycloak.init({ adapter: 'cordova-native' })",
"<preference name=\"AndroidLaunchMode\" value=\"singleTask\" />",
"import Keycloak from 'keycloak-js'; import KeycloakCapacitorAdapter from 'keycloak-capacitor-adapter'; const keycloak = new Keycloak(); keycloak.init({ adapter: KeycloakCapacitorAdapter, });",
"import Keycloak, { KeycloakAdapter } from 'keycloak-js'; // Implement the 'KeycloakAdapter' interface so that all required methods are guaranteed to be present. const MyCustomAdapter: KeycloakAdapter = { login(options) { // Write your own implementation here. } // The other methods go here }; const keycloak = new Keycloak(); keycloak.init({ adapter: MyCustomAdapter, });",
"new Keycloak(); new Keycloak('http://localhost/keycloak.json'); new Keycloak({ url: 'http://localhost/auth', realm: 'myrealm', clientId: 'myApp' });",
"keycloak.loadUserProfile() .then(function(profile) { alert(JSON.stringify(profile, null, \" \")) }).catch(function() { alert('Failed to load user profile'); });",
"keycloak.updateToken(5) .then(function(refreshed) { if (refreshed) { alert('Token was successfully refreshed'); } else { alert('Token is still valid'); } }).catch(function() { alert('Failed to refresh the token, or the session has expired'); });",
"keycloak.onAuthSuccess = function() { alert('authenticated'); }",
"mkdir myapp && cd myapp",
"\"dependencies\": { \"keycloak-connect\": \"file:keycloak-connect-18.0.7.tgz\" }",
"const session = require('express-session'); const Keycloak = require('keycloak-connect'); const memoryStore = new session.MemoryStore(); const keycloak = new Keycloak({ store: memoryStore });",
"npm install express-session",
"\"scripts\": { \"test\": \"echo \\\"Error: no test specified\\\" && exit 1\", \"start\" \"node server.js\" },",
"npm run start",
"const kcConfig = { clientId: 'myclient', bearerOnly: true, serverUrl: 'http://localhost:8080/auth', realm: 'myrealm', realmPublicKey: 'MIIBIjANB...' }; const keycloak = new Keycloak({ store: memoryStore }, kcConfig);",
"const keycloak = new Keycloak({ store: memoryStore, idpHint: myIdP }, kcConfig);",
"const session = require('express-session'); const memoryStore = new session.MemoryStore(); // Configure session app.use( session({ secret: 'mySecret', resave: false, saveUninitialized: true, store: memoryStore, }) ); const keycloak = new Keycloak({ store: memoryStore });",
"const keycloak = new Keycloak({ scope: 'offline_access' });",
"npm install express",
"const express = require('express'); const app = express();",
"app.use( keycloak.middleware() );",
"app.listen(3000, function () { console.log('App listening on port 3000'); });",
"const app = express(); app.set( 'trust proxy', true ); app.use( keycloak.middleware() );",
"app.get( '/check-sso', keycloak.checkSso(), checkSsoHandler );",
"app.get( '/complain', keycloak.protect(), complaintHandler );",
"app.get( '/special', keycloak.protect('special'), specialHandler );",
"app.get( '/extra-special', keycloak.protect('other-app:special'), extraSpecialHandler );",
"app.get( '/admin', keycloak.protect( 'realm:admin' ), adminHandler );",
"app.get('/apis/me', keycloak.enforcer('user:profile'), userProfileHandler);",
"app.get('/apis/me', keycloak.enforcer('user:profile', {response_mode: 'token'}), userProfileHandler);",
"app.get('/apis/me', keycloak.enforcer('user:profile', {response_mode: 'token'}), function (req, res) { const token = req.kauth.grant.access_token.content; const permissions = token.authorization ? token.authorization.permissions : undefined; // show user profile });",
"app.get('/apis/me', keycloak.enforcer('user:profile', {response_mode: 'permissions'}), function (req, res) { const permissions = req.permissions; // show user profile });",
"keycloak.enforcer('user:profile', {resource_server_id: 'my-apiserver'})",
"app.get('/protected/resource', keycloak.enforcer(['resource:view', 'resource:write'], { claims: function(request) { return { \"http.uri\": [\"/protected/resource\"], \"user.agent\": // get user agent from request } } }), function (req, res) { // access granted",
"function protectBySection(token, request) { return token.hasRole( request.params.section ); } app.get( '/:section/:page', keycloak.protect( protectBySection ), sectionHandler );",
"Keycloak.prototype.redirectToLogin = function(req) { const apiReqMatcher = /\\/api\\//i; return !apiReqMatcher.test(req.originalUrl || req.url); };",
"app.use( keycloak.middleware( { logout: '/logoff' } ));",
"https://example.com/logoff?redirect_url=https%3A%2F%2Fexample.com%3A3000%2Flogged%2Fout",
"app.use( keycloak.middleware( { admin: '/callbacks' } );",
"/realms/{realm-name}/.well-known/openid-configuration",
"/realms/{realm-name}/protocol/openid-connect/auth",
"/realms/{realm-name}/protocol/openid-connect/token",
"/realms/{realm-name}/protocol/openid-connect/userinfo",
"/realms/{realm-name}/protocol/openid-connect/logout",
"/realms/{realm-name}/protocol/openid-connect/certs",
"/realms/{realm-name}/protocol/openid-connect/token/introspect",
"/realms/{realm-name}/clients-registrations/openid-connect",
"/realms/{realm-name}/protocol/openid-connect/revoke",
"/realms/{realm-name}/protocol/openid-connect/auth/device",
"/realms/{realm-name}/protocol/openid-connect/ext/ciba/auth",
"curl -d \"client_id=myclient\" -d \"client_secret=40cc097b-2a57-4c17-b36a-8fdf3fc2d578\" -d \"username=user\" -d \"password=password\" -d \"grant_type=password\" \"http://localhost:8080/auth/realms/master/protocol/openid-connect/token\"",
"<server-ssl-context name=\"kcSSLContext\" want-client-auth=\"true\" protocols=\"TLSv1.2\" key-manager=\"kcKeyManager\" trust-manager=\"kcTrustManager\" cipher-suite-filter=\"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_DHE_RSA_WITH_AES_128_GCM_SHA256,TLS_DHE_RSA_WITH_AES_256_GCM_SHA384\" protocols=\"TLSv1.2\" />"
] | https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.6/html/securing_applications_and_services_guide/oidc |
1.3. LVM Architecture Overview | 1.3. LVM Architecture Overview Note LVM2 is backwards compatible with LVM1, with the exception of snapshot and cluster support. You can convert a volume group from LVM1 format to LVM2 format with the vgconvert command. For information on converting LVM metadata format, see the vgconvert (8) man page. The underlying physical storage unit of an LVM logical volume is a block device such as a partition or whole disk. This device is initialized as an LVM physical volume (PV). To create an LVM logical volume, the physical volumes are combined into a volume group (VG). This creates a pool of disk space out of which LVM logical volumes (LVs) can be allocated. This process is analogous to the way in which disks are divided into partitions. A logical volume is used by file systems and applications (such as databases). Figure 1.1, "LVM Logical Volume Components" shows the components of a simple LVM logical volume: Figure 1.1. LVM Logical Volume Components For detailed information on the components of an LVM logical volume, see Chapter 2, LVM Components . | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/logical_volume_manager_administration/lvm_definition |
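The overview above walks from physical volumes through a volume group to logical volumes. As a minimal sketch of that flow, the following commands create a volume group from two partitions and allocate a logical volume from it; the device names (/dev/sdb1, /dev/sdc1), the names vg_data and lv_data, the 10 GiB size, and the XFS file system are illustrative assumptions rather than values taken from this guide, and the commands must be run as root.
# Initialize two partitions as LVM physical volumes
pvcreate /dev/sdb1 /dev/sdc1
# Combine the physical volumes into a volume group, creating the pool of disk space
vgcreate vg_data /dev/sdb1 /dev/sdc1
# Allocate a 10 GiB logical volume from the pool
lvcreate -n lv_data -L 10G vg_data
# Put a file system on the logical volume and mount it for use by applications
mkfs.xfs /dev/vg_data/lv_data
mkdir -p /mnt/data
mount /dev/vg_data/lv_data /mnt/data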
4.2. Tuning Directory Server Resource Settings | 4.2. Tuning Directory Server Resource Settings You can configure several parameters to manage and improve the amount of resources Directory Server uses. 4.2.1. Updating Directory Server Resource Settings Using the Command Line To update the server resource settings using the command line: Update the performance settings: You can set the following parameters: nsslapd-threadnumber : Sets the number of worker threads. nsslapd-maxdescriptors : Sets the maximum number of file descriptors. nsslapd-timelimit : Sets the search time limit. nsslapd-sizelimit : Sets the search size limit. nsslapd-pagedsizelimit : Sets the paged search size limit. nsslapd-idletimeout : Sets the idle connection timeout. nsslapd-ioblocktimeout : Sets the input/output (I/O) block timeout. nsslapd-ndn-cache-enabled : Enables or disables the normalized DN cache. nsslapd-ndn-cache-max-size : Sets the normalized DN cache size, if nsslapd-ndn-cache-enabled is enabled. nsslapd-outbound-ldap-io-timeout : Sets the outbound I/O timeout. nsslapd-maxbersize : Sets the maximum Basic Encoding Rules (BER) size. nsslapd-maxsasliosize : Sets the maximum Simple Authentication and Security Layer (SASL) I/O size. nsslapd-listen-backlog-size : Sets the maximum number of sockets available to receive incoming connections. nsslapd-max-filter-nest-level : Sets the maximum nested filter level. nsslapd-ignore-virtual-attrs : Enables or disables virtual attribute lookups. nsslapd-connection-nocanon : Enables or disables reverse DNS lookups. nsslapd-enable-turbo-mode : Enables or disables the turbo mode feature. For further details about these parameters, see their descriptions in the Red Hat Directory Server Configuration, Command, and File Reference . Restart the Directory Server instance: 4.2.2. Updating Directory Server Resource Settings Using the Web Console To update the server resource settings using the Web Console: Open the Directory Server user interface in the web console. For details, see the Logging Into Directory Server Using the Web Console section in the Red Hat Directory Server Administration Guide . Select the instance. Open the Server Settings menu, and select Tuning & Limits . Update the settings. Optionally, click Show Advanced Settings to display all settings. To display a tooltip and the corresponding attribute name in the cn=config entry for a parameter, hover the mouse cursor over the setting. For further details, see the parameter description in the Red Hat Directory Server Configuration, Command, and File Reference . Click Save Configuration . Click the Actions button, and select Restart Instance . | [
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com config replace parameter_name = setting",
"dsctl instance_name restart"
] | https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/performance_tuning_guide/tuning-directory-server-resource-settings |
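As a concrete illustration of the command-line procedure above, the following sketch raises the worker thread count and the idle connection timeout, then restarts the instance so the changes take effect. The instance name, host name, and the values 16 and 300 are example assumptions only, not recommendations from this guide.
# Adjust two of the resource settings listed above
dsconf -D "cn=Directory Manager" ldap://server.example.com config replace nsslapd-threadnumber=16
dsconf -D "cn=Directory Manager" ldap://server.example.com config replace nsslapd-idletimeout=300
# Restart the instance to apply the new settings
dsctl instance_name restart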
Installing on any platform | Installing on any platform OpenShift Container Platform 4.15 Installing OpenShift Container Platform on any platform Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_on_any_platform/index |
Chapter 2. Configure Red Hat Identity Management | Chapter 2. Configure Red Hat Identity Management In this example, IdM is situated externally to the Red Hat OpenStack Platform director deployment and is the source of all user and group information. RH-SSO will be configured to use IdM as its User Federation, and will then perform LDAP searches against IdM to obtain user and group information. 2.1. Create the IdM Service Account for RH-SSO Although IdM allows anonymous binds, some information is withheld for security reasons. Some of this information withheld during anonymous binds is essential for RH-SSO user federation; consequently, RH-SSO will need to bind to the IdM LDAP server with enough privileges to successfully query the required information. As a result, you will need to create a dedicated service account for RH-SSO in IdM. IdM does not natively provide a command to do this, but you can use the ldapmodify command. For example: Note You can use the configure-federation script to perform the above step: 2.2. Create a test user You will also need a test user account in IdM. You can either use an existing user or create a new one; the examples in this guide use "John Doe" with a uid of jdoe . You can create the jdoe user in IdM: Assign a password to the user: 2.3. Create an IdM group for OpenStack Users Create the openstack-users group in IdM. Make sure the openstack-users group does not already exist: Add the openstack-users group to IdM: Add the test user to the openstack-users group: Verify that the openstack-users group exists and has the test user as a member: | [
"ldap_url=\"ldaps://USDFED_IPA_HOST\" dir_mgr_dn=\"cn=Directory Manager\" service_name=\"rhsso\" service_dn=\"uid=USDservice_name,cn=sysaccounts,cn=etc,USDFED_IPA_BASE_DN\" ldapmodify -H \"USDldap_url\" -x -D \"USDdir_mgr_dn\" -w \"USDFED_IPA_ADMIN_PASSWD\" <<EOF dn: USDservice_dn changetype: add objectclass: account objectclass: simplesecurityobject uid: USDservice_name userPassword: USDFED_IPA_RHSSO_SERVICE_PASSWD passwordExpirationTime: 20380119031407Z nsIdleTimeout: 0 EOF",
"./configure-federation create-ipa-service-account",
"ipa user-add --first John --last Doe --email [email protected] jdoe",
"ipa passwd jdoe",
"ipa group-show openstack-users ipa: ERROR: openstack-users: group not found",
"ipa group-add openstack-users",
"ipa group-add-member --users jdoe openstack-users",
"ipa group-show openstack-users Group name: openstack-users GID: 331400001 Member users: jdoe"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/federate_with_identity_service/set-up-idm |
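Before pointing RH-SSO at the new service account, it can be useful to confirm that the account can bind to the IdM LDAP server and read user entries. The following sketch reuses the environment variables from the ldapmodify example above and assumes the service account is named rhsso and that IdM stores users under cn=users,cn=accounts beneath the base DN; adjust the values to match your deployment.
# Bind as the rhsso service account and look up the test user
ldapsearch -H "ldaps://$FED_IPA_HOST" -x \
  -D "uid=rhsso,cn=sysaccounts,cn=etc,$FED_IPA_BASE_DN" \
  -w "$FED_IPA_RHSSO_SERVICE_PASSWD" \
  -b "cn=users,cn=accounts,$FED_IPA_BASE_DN" "(uid=jdoe)" uid memberOf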
Chapter 7. Booting hosts with the discovery image | Chapter 7. Booting hosts with the discovery image The Assisted Installer uses an initial image to run an agent that performs hardware and network validations before attempting to install OpenShift Container Platform. You can boot hosts with the discovery image using three methods: USB drive Redfish virtual media iPXE 7.1. Creating an ISO image on a USB drive You can install the Assisted Installer agent using a USB drive that contains the discovery ISO image. Starting the host with the USB drive prepares the host for the software installation. Procedure On the administration host, insert a USB drive into a USB port. Copy the ISO image to the USB drive, for example: # dd if=<path_to_iso> of=<path_to_usb> status=progress where: <path_to_iso> is the relative path to the downloaded discovery ISO file, for example, discovery.iso . <path_to_usb> is the location of the connected USB drive, for example, /dev/sdb . After the ISO is copied to the USB drive, you can use the USB drive to install the Assisted Installer agent on the cluster host. 7.2. Booting with a USB drive To register nodes with the Assisted Installer using a bootable USB drive, use the following procedure. Procedure Insert the RHCOS discovery ISO USB drive into the target host. Configure the boot drive order in the server firmware settings to boot from the attached discovery ISO, and then reboot the server. Wait for the host to boot up. For web console installations, on the administration host, return to the browser. Wait for the host to appear in the list of discovered hosts. For API installations, refresh the token, check the enabled host count, and gather the host IDs: USD source refresh-token USD curl -s -X GET "https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID" \ --header "Content-Type: application/json" \ -H "Authorization: Bearer USDAPI_TOKEN" \ | jq '.enabled_host_count' USD curl -s -X GET "https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID" \ --header "Content-Type: application/json" \ -H "Authorization: Bearer USDAPI_TOKEN" \ | jq '.host_networks[].host_ids' Example output [ "1062663e-7989-8b2d-7fbb-e6f4d5bb28e5" ] 7.3. Booting from an HTTP-hosted ISO image using the Redfish API You can provision hosts in your network using ISOs that you install using the Redfish Baseboard Management Controller (BMC) API. Prerequisites Download the installation Red Hat Enterprise Linux CoreOS (RHCOS) ISO. Procedure Copy the ISO file to an HTTP server accessible in your network. Boot the host from the hosted ISO file, for example: Call the redfish API to set the hosted ISO as the VirtualMedia boot media by running the following command: USD curl -k -u <bmc_username>:<bmc_password> \ -d '{"Image":"<hosted_iso_file>", "Inserted": true}' \ -H "Content-Type: application/json" \ -X POST <host_bmc_address>/redfish/v1/Managers/iDRAC.Embedded.1/VirtualMedia/CD/Actions/VirtualMedia.InsertMedia Where: <bmc_username>:<bmc_password> Is the username and password for the target host BMC. <hosted_iso_file> Is the URL for the hosted installation ISO, for example: https://example.com/rhcos-live-minimal.iso . The ISO must be accessible from the target host machine. <host_bmc_address> Is the BMC IP address of the target host machine. 
Set the host to boot from the VirtualMedia device by running the following command: USD curl -k -u <bmc_username>:<bmc_password> \ -X PATCH -H 'Content-Type: application/json' \ -d '{"Boot": {"BootSourceOverrideTarget": "Cd", "BootSourceOverrideMode": "UEFI", "BootSourceOverrideEnabled": "Once"}}' \ <host_bmc_address>/redfish/v1/Systems/System.Embedded.1 Reboot the host: USD curl -k -u <bmc_username>:<bmc_password> \ -d '{"ResetType": "ForceRestart"}' \ -H 'Content-type: application/json' \ -X POST <host_bmc_address>/redfish/v1/Systems/System.Embedded.1/Actions/ComputerSystem.Reset Optional: If the host is powered off, you can boot it using the {"ResetType": "On"} switch. Run the following command: USD curl -k -u <bmc_username>:<bmc_password> \ -d '{"ResetType": "On"}' -H 'Content-type: application/json' \ -X POST <host_bmc_address>/redfish/v1/Systems/System.Embedded.1/Actions/ComputerSystem.Reset 7.4. Booting hosts using iPXE The Assisted Installer provides an iPXE script including all of the artifacts needed to boot the discovery image for an infrastructure environment. Due to the limitations of the current HTTPS implementation of iPXE, the recommendation is to download and expose the needed artifacts in an HTTP server. Currently, even if iPXE supports HTTPS protocol, the supported algorithms are old and not recommended. The full list of supported ciphers is in https://ipxe.org/crypto . Prerequisites You have created an infrastructure environment by using the API or you have created a cluster by using the web console. You have your infrastructure environment ID exported in your shell as USDINFRA_ENV_ID . You have credentials to use when accessing the API and have exported a token as USDAPI_TOKEN in your shell. Note If you configure iPXE by using the web console, the USDINFRA_ENV_ID and USDAPI_TOKEN variables are preset. You have an HTTP server to host the images. 
Note IBM Power(R) only supports PXE, which has the following requirements: GRUB2 installed at /var/lib/tftpboot DHCP and TFTP for PXE Procedure Download the iPXE script directly from the web console, or get the iPXE script from the Assisted Installer by running the following command: USD curl \ --silent \ --header "Authorization: Bearer USDAPI_TOKEN" \ https://api.openshift.com/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID/downloads/files?file_name=ipxe-script > ipxe-script Example #!ipxe initrd --name initrd http://api.openshift.com/api/assisted-images/images/<infra_env_id>/pxe-initrd?arch=x86_64&image_token=<token_string>&version=4.10 kernel http://api.openshift.com/api/assisted-images/boot-artifacts/kernel?arch=x86_64&version=4.10 initrd=initrd coreos.live.rootfs_url=http://api.openshift.com/api/assisted-images/boot-artifacts/rootfs?arch=x86_64&version=4.10 random.trust_cpu=on rd.luks.options=discard ignition.firstboot ignition.platform.id=metal console=tty1 console=ttyS1,115200n8 coreos.inst.persistent-kargs="console=tty1 console=ttyS1,115200n8" boot Download the required artifacts by extracting URLs from the ipxe-script : Download the initial RAM disk by running the following command: USD awk '/^initrd /{print USDNF}' ipxe-script \ | xargs curl -o initrd.img -L Download the Linux kernel by running the following command: USD awk '/^kernel /{print USD2}' ipxe-script | xargs curl -o kernel -L Download the root filesystem by running the following command: USD grep ^kernel ipxe_script | xargs -n1 | grep ^coreos.live.rootfs_url | cut -d = -f 2,3,4 | xargs curl -o rootfs.img -L Change the URLs to the different artifacts in the ipxe-script to match your local HTTP server. For example: #!ipxe set webserver http://192.168.0.1 initrd --name initrd USDwebserver/initrd.img kernel USDwebserver/kernel initrd=initrd coreos.live.rootfs_url=USDwebserver/rootfs.img random.trust_cpu=on rd.luks.options=discard ignition.firstboot ignition.platform.id=metal console=tty1 console=ttyS1,115200n8 coreos.inst.persistent-kargs="console=tty1 console=ttyS1,115200n8" boot Optional: When installing with RHEL KVM on IBM Z(R) you must boot the host by specifying additional kernel arguments: random.trust_cpu=on rd.luks.options=discard ignition.firstboot ignition.platform.id=metal console=tty1 console=ttyS1,115200n8 coreos.inst.persistent-kargs="console=tty1 console=ttyS1,115200n8 Note When you install with iPXE on RHEL KVM, the VMs on the VM host might not start on the first boot. You must start them manually. Optional: When installing on IBM Power(R) you must download the initramfs , kernel , and root images as follows: Copy the initrd.img and kernel.img images to the /var/lib/tftpboot/rhcos PXE directory. Copy the rootfs.img to the /var/www/html/install HTTPD directory. Add the following entry to the /var/lib/tftpboot/boot/grub2/grub.cfg directory: if [ USD{net_default_mac} == fa:1d:67:35:13:20 ]; then default=0 fallback=1 timeout=1 menuentry "CoreOS (BIOS)" { echo "Loading kernel" linux "/rhcos/kernel.img" ip=dhcp rd.neednet=1 ignition.platform.id=metal ignition.firstboot coreos.live.rootfs_url=http://9.114.98.8:8000/install/rootfs.img echo "Loading initrd" initrd "/rhcos/initrd.img" } fi | [
"dd if=<path_to_iso> of=<path_to_usb> status=progress",
"source refresh-token",
"curl -s -X GET \"https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID\" --header \"Content-Type: application/json\" -H \"Authorization: Bearer USDAPI_TOKEN\" | jq '.enabled_host_count'",
"curl -s -X GET \"https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID\" --header \"Content-Type: application/json\" -H \"Authorization: Bearer USDAPI_TOKEN\" | jq '.host_networks[].host_ids'",
"[ \"1062663e-7989-8b2d-7fbb-e6f4d5bb28e5\" ]",
"curl -k -u <bmc_username>:<bmc_password> -d '{\"Image\":\"<hosted_iso_file>\", \"Inserted\": true}' -H \"Content-Type: application/json\" -X POST <host_bmc_address>/redfish/v1/Managers/iDRAC.Embedded.1/VirtualMedia/CD/Actions/VirtualMedia.InsertMedia",
"curl -k -u <bmc_username>:<bmc_password> -X PATCH -H 'Content-Type: application/json' -d '{\"Boot\": {\"BootSourceOverrideTarget\": \"Cd\", \"BootSourceOverrideMode\": \"UEFI\", \"BootSourceOverrideEnabled\": \"Once\"}}' <host_bmc_address>/redfish/v1/Systems/System.Embedded.1",
"curl -k -u <bmc_username>:<bmc_password> -d '{\"ResetType\": \"ForceRestart\"}' -H 'Content-type: application/json' -X POST <host_bmc_address>/redfish/v1/Systems/System.Embedded.1/Actions/ComputerSystem.Reset",
"curl -k -u <bmc_username>:<bmc_password> -d '{\"ResetType\": \"On\"}' -H 'Content-type: application/json' -X POST <host_bmc_address>/redfish/v1/Systems/System.Embedded.1/Actions/ComputerSystem.Reset",
"curl --silent --header \"Authorization: Bearer USDAPI_TOKEN\" https://api.openshift.com/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID/downloads/files?file_name=ipxe-script > ipxe-script",
"#!ipxe initrd --name initrd http://api.openshift.com/api/assisted-images/images/<infra_env_id>/pxe-initrd?arch=x86_64&image_token=<token_string>&version=4.10 kernel http://api.openshift.com/api/assisted-images/boot-artifacts/kernel?arch=x86_64&version=4.10 initrd=initrd coreos.live.rootfs_url=http://api.openshift.com/api/assisted-images/boot-artifacts/rootfs?arch=x86_64&version=4.10 random.trust_cpu=on rd.luks.options=discard ignition.firstboot ignition.platform.id=metal console=tty1 console=ttyS1,115200n8 coreos.inst.persistent-kargs=\"console=tty1 console=ttyS1,115200n8\" boot",
"awk '/^initrd /{print USDNF}' ipxe-script | xargs curl -o initrd.img -L",
"awk '/^kernel /{print USD2}' ipxe-script | xargs curl -o kernel -L",
"grep ^kernel ipxe_script | xargs -n1 | grep ^coreos.live.rootfs_url | cut -d = -f 2,3,4 | xargs curl -o rootfs.img -L",
"#!ipxe set webserver http://192.168.0.1 initrd --name initrd USDwebserver/initrd.img kernel USDwebserver/kernel initrd=initrd coreos.live.rootfs_url=USDwebserver/rootfs.img random.trust_cpu=on rd.luks.options=discard ignition.firstboot ignition.platform.id=metal console=tty1 console=ttyS1,115200n8 coreos.inst.persistent-kargs=\"console=tty1 console=ttyS1,115200n8\" boot",
"random.trust_cpu=on rd.luks.options=discard ignition.firstboot ignition.platform.id=metal console=tty1 console=ttyS1,115200n8 coreos.inst.persistent-kargs=\"console=tty1 console=ttyS1,115200n8",
"if [ USD{net_default_mac} == fa:1d:67:35:13:20 ]; then default=0 fallback=1 timeout=1 menuentry \"CoreOS (BIOS)\" { echo \"Loading kernel\" linux \"/rhcos/kernel.img\" ip=dhcp rd.neednet=1 ignition.platform.id=metal ignition.firstboot coreos.live.rootfs_url=http://9.114.98.8:8000/install/rootfs.img echo \"Loading initrd\" initrd \"/rhcos/initrd.img\" } fi"
] | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_openshift_container_platform_with_the_assisted_installer/assembly_booting-hosts-with-the-discovery-image |
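When booting from an HTTP-hosted ISO over Redfish, it can help to confirm that the BMC actually attached the image before rebooting the host. The sketch below queries the same virtual media resource used in the insertion call above; it assumes the Dell iDRAC resource path shown in this chapter and that jq is available for filtering the JSON response, so other BMC vendors may expose a different Managers path.
# Check whether the hosted ISO is currently attached as virtual media
curl -k -u <bmc_username>:<bmc_password> \
  <host_bmc_address>/redfish/v1/Managers/iDRAC.Embedded.1/VirtualMedia/CD \
  | jq '{Image: .Image, Inserted: .Inserted}'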
2.6.4.2. The /etc/xinetd.d/ Directory | 2.6.4.2. The /etc/xinetd.d/ Directory The /etc/xinetd.d/ directory contains the configuration files for each service managed by xinetd and the names of the files are correlated to the service. As with xinetd.conf , this directory is read only when the xinetd service is started. For any changes to take effect, the administrator must restart the xinetd service. The format of files in the /etc/xinetd.d/ directory uses the same conventions as /etc/xinetd.conf . The primary reason the configuration for each service is stored in a separate file is to make customization easier and less likely to affect other services. To gain an understanding of how these files are structured, consider the /etc/xinetd.d/krb5-telnet file: These lines control various aspects of the telnet service: service - Specifies the service name, usually one of those listed in the /etc/services file. flags - Sets any of a number of attributes for the connection. REUSE instructs xinetd to reuse the socket for a Telnet connection. Note The REUSE flag is deprecated. All services now implicitly use the REUSE flag. socket_type - Sets the network socket type to stream . wait - Specifies whether the service is single-threaded ( yes ) or multi-threaded ( no ). user - Specifies which user ID the process runs under. server - Specifies which binary executable to launch. log_on_failure - Specifies logging parameters for log_on_failure in addition to those already defined in xinetd.conf . disable - Specifies whether the service is disabled ( yes ) or enabled ( no ). Refer to the xinetd.conf man page for more information about these options and their usage. | [
"service telnet { flags = REUSE socket_type = stream wait = no user = root server = /usr/kerberos/sbin/telnetd log_on_failure += USERID disable = yes }"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-Security_Guide-xinetd_Configuration_Files-The_etcxinetd.d_Directory |
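Because xinetd reads its configuration only at startup, any change under /etc/xinetd.d/ must be followed by a restart of the service. The sketch below shows one way to enable the krb5-telnet service described above on a Red Hat Enterprise Linux 6 system and reload the configuration; treat it purely as an illustration, since Telnet transmits credentials in clear text.
# Flip "disable = yes" to "disable = no" for the xinetd-managed service
chkconfig krb5-telnet on
# Alternatively, edit /etc/xinetd.d/krb5-telnet by hand, then restart xinetd
service xinetd restart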
20.3. Using volume_key in a Larger Organization | 20.3. Using volume_key in a Larger Organization In a larger organization, using a single password known by every system administrator and keeping track of a separate password for each system is impractical and a security risk. To counter this, volume_key can use asymmetric cryptography to minimize the number of people who know the password required to access encrypted data on any computer. This section covers the procedures required for preparation before saving encryption keys, how to save encryption keys, restoring access to a volume, and setting up emergency passphrases. 20.3.1. Preparation for Saving Encryption Keys In order to begin saving encryption keys, some preparation is required. Procedure 20.3. Preparation Create an X509 certificate/private key pair. Designate users who are trusted not to compromise the private key. These users will be able to decrypt the escrow packets. Choose which systems will be used to decrypt the escrow packets. On these systems, set up an NSS database that contains the private key. If the private key was not created in an NSS database, follow these steps: Store the certificate and private key in a PKCS#12 file. Run: At this point it is possible to choose an NSS database password. Each NSS database can have a different password so the designated users do not need to share a single password if a separate NSS database is used by each user. Run: Distribute the certificate to anyone installing systems or saving keys on existing systems. For saved private keys, prepare storage that allows them to be looked up by machine and volume. For example, this can be a simple directory with one subdirectory per machine, or a database used for other system management tasks as well. 20.3.2. Saving Encryption Keys After completing the required preparation (see Section 20.3.1, "Preparation for Saving Encryption Keys" ) it is now possible to save the encryption keys using the following procedure. Note For all examples in this file, /path/to/volume is a LUKS device, not the plaintext device contained within; blkid -s type /path/to/volume should report type ="crypto_LUKS" . Procedure 20.4. Saving Encryption Keys Run: Save the generated escrow-packet file in the prepared storage, associating it with the system and the volume. These steps can be performed manually, or scripted as part of system installation. 20.3.3. Restoring Access to a Volume After the encryption keys have been saved (see Section 20.3.1, "Preparation for Saving Encryption Keys" and Section 20.3.2, "Saving Encryption Keys" ), access can be restored to a volume where needed. Procedure 20.5. Restoring Access to a Volume Get the escrow packet for the volume from the packet storage and send it to one of the designated users for decryption. The designated user runs: After providing the NSS database password, the designated user chooses a passphrase for encrypting escrow-packet-out . This passphrase can be different every time and only protects the encryption keys while they are moved from the designated user to the target system. Obtain the escrow-packet-out file and the passphrase from the designated user. Boot the target system in an environment that can run volume_key and have the escrow-packet-out file available, such as in a rescue mode. Run: A prompt will appear for the packet passphrase chosen by the designated user, and for a new passphrase for the volume. Mount the volume using the chosen volume passphrase. 
It is possible to remove the old passphrase that was forgotten by using cryptsetup luksKillSlot , for example, to free up the passphrase slot in the LUKS header of the encrypted volume. This is done with the command cryptsetup luksKillSlot device key-slot . For more information and examples see cryptsetup --help . 20.3.4. Setting up Emergency Passphrases In some circumstances (such as traveling for business) it is impractical for system administrators to work directly with the affected systems, but users still need access to their data. In this case, volume_key can work with passphrases as well as encryption keys. During the system installation, run: This generates a random passphrase, adds it to the specified volume, and stores it to passphrase-packet . It is also possible to combine the --create-random-passphrase and -o options to generate both packets at the same time. If a user forgets the password, the designated user runs: This shows the random passphrase. Give this passphrase to the end user. | [
"certutil -d /the/nss/directory -N",
"pk12util -d /the/nss/directory -i the-pkcs12-file",
"volume_key --save /path/to/volume -c /path/to/cert escrow-packet",
"volume_key --reencrypt -d /the/nss/directory escrow-packet-in -o escrow-packet-out",
"volume_key --restore /path/to/volume escrow-packet-out",
"volume_key --save /path/to/volume -c /path/to/ert --create-random-passphrase passphrase-packet",
"volume_key --secrets -d /your/nss/directory passphrase-packet"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/volume_key-organization |
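Once access has been restored with a new passphrase, the procedure above notes that the forgotten passphrase can be removed with cryptsetup luksKillSlot. The following sketch shows what that might look like in practice; /path/to/volume and key slot 0 are placeholders, so check the luksDump output to see which slots are actually in use before removing one.
# List the LUKS key slots currently in use on the volume
cryptsetup luksDump /path/to/volume
# Remove the key slot that held the forgotten passphrase (slot 0 in this sketch)
cryptsetup luksKillSlot /path/to/volume 0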
Chapter 15. Virtual File Systems and Disk Management | Chapter 15. Virtual File Systems and Disk Management 15.1. GVFS GVFS ( GNOME Virtual File System ) is an extension of the virtual file system interface provided by the libraries the GNOME Desktop is built on. GVFS provides a complete virtual file system infrastructure and handles storage in the GNOME Desktop. GVFS identifies resources using addresses based on the URI (Uniform Resource Identifier) standard, which are syntactically similar to the URL addresses used in web browsers. These addresses, written in the form schema://user@server/path, are the key information that determines the kind of service being accessed. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/desktop_migration_and_administration_guide/virtual-file-systems-disk-management
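As an illustration of that URI-style addressing, the following sketch mounts a remote location through GVFS from a shell session and then lists the mounts GVFS currently manages. The gvfs-mount utility ships with GNOME on Red Hat Enterprise Linux 7; the user, server, and share names are placeholders.
# Mount an SMB share through GVFS using a schema://user@server/path style address
gvfs-mount smb://jdoe@fileserver.example.com/share
# List the volumes and mounts that GVFS is currently handling
gvfs-mount -l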
21.2.2. SELinux Configuration Files | 21.2.2. SELinux Configuration Files The following sections describe SELinux configuration and policy files, and related file systems located in the /etc/ directory. 21.2.2.1. The /etc/sysconfig/selinux Configuration File There are two ways to configure SELinux under Red Hat Enterprise Linux: using the Security Level Configuration Tool ( system-config-securitylevel ), or manually editing the configuration file ( /etc/sysconfig/selinux ). The /etc/sysconfig/selinux file is the primary configuration file for enabling or disabling SELinux, as well as setting which policy to enforce on the system and how to enforce it. Note The /etc/sysconfig/selinux contains a symbolic link to the actual configuration file, /etc/selinux/config . The following explains the full subset of options available for configuration: SELINUX=< enforcing|permissive|disabled > - Defines the top-level state of SELinux on a system. enforcing - The SELinux security policy is enforced. permissive - The SELinux system prints warnings but does not enforce policy. This is useful for debugging and troubleshooting purposes. In permissive mode, more denials will be logged, as subjects will be able to continue with actions otherwise denied in enforcing mode. For example, traversing a directory tree will produce multiple avc: denied messages for every directory level read, where a kernel in enforcing mode would have stopped the initial traversal and kept further denial messages from occurring. disabled - SELinux is fully disabled. SELinux hooks are disengaged from the kernel and the pseudo-file system is unregistered. Note Actions made while SELinux is disabled may cause the file system to no longer have the proper security context as defined by the policy. Running fixfiles relabel prior to enabling SELinux will relabel the file system so that SELinux works properly when enabled. For more information, refer to the fixfiles (8) manpage. Note Additional white space at the end of a configuration line or as extra lines at the end of the file may cause unexpected behavior. To be safe, remove unnecessary white spaces. SELINUXTYPE=< targeted|strict > - Specifies which policy is currently being enforced by SELinux. targeted - Only targeted network daemons are protected. Important The following daemons are protected in the default targeted policy: dhcpd , httpd (apache.te) , named , nscd , ntpd , portmap , snmpd , squid , and syslogd . The rest of the system runs in the unconfined_t domain. The policy files for these daemons can be found in /etc/selinux/targeted/src/policy/domains/program and are subject to change, as newer versions of Red Hat Enterprise Linux are released. Policy enforcement for these daemons can be turned on or off, using Boolean values controlled by Security Level Configuration Tool ( system-config-securitylevel ). Switching a Boolean value for a targeted daemon disables the policy transition for the daemon, which prevents, for example, init from transitioning dhcpd from the unconfined_t domain to the domain specified in dhcpd.te . The domain unconfined_t allows subjects and objects with that security context to run under standard Linux security. strict - Full SELinux protection, for all daemons. Security contexts are defined for all subjects and objects, and every single action is processed by the policy enforcement server. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-selinux-files-etc |
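A quick way to relate the directives described above to the running system is to read them back from the configuration file and compare them with the kernel's current state. The sketch below uses getenforce and sestatus from the policycoreutils package; the exact output varies by release and by the loaded policy.
# Display the active top-level settings from the configuration file
grep -E '^(SELINUX|SELINUXTYPE)=' /etc/sysconfig/selinux
# Report the current enforcing mode and the loaded policy at run time
getenforce
sestatus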
Appendix C. Using AMQ Broker with the examples | Appendix C. Using AMQ Broker with the examples The Red Hat build of Apache Qpid ProtonJ2 examples require a running message broker with a queue named hello-world-example . Use the procedures below to install and start the broker and define the queue. C.1. Installing the broker Follow the instructions in Getting Started with AMQ Broker to install the broker and create a broker instance . Enable anonymous access. The following procedures refer to the location of the broker instance as <broker-instance-dir> . C.2. Starting the broker Procedure Use the artemis run command to start the broker. USD <broker-instance-dir> /bin/artemis run Check the console output for any critical errors logged during startup. The broker logs Server is now live when it is ready. USD example-broker/bin/artemis run __ __ ____ ____ _ /\ | \/ |/ __ \ | _ \ | | / \ | \ / | | | | | |_) |_ __ ___ | | _____ _ __ / /\ \ | |\/| | | | | | _ <| '__/ _ \| |/ / _ \ '__| / ____ \| | | | |__| | | |_) | | | (_) | < __/ | /_/ \_\_| |_|\___\_\ |____/|_| \___/|_|\_\___|_| Red Hat AMQ <version> 2020-06-03 12:12:11,807 INFO [org.apache.activemq.artemis.integration.bootstrap] AMQ101000: Starting ActiveMQ Artemis Server ... 2020-06-03 12:12:12,336 INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live ... C.3. Creating a queue In a new terminal, use the artemis queue command to create a queue named hello-world-example . USD <broker-instance-dir> /bin/artemis queue create --name hello-world-example --address hello-world-example --auto-create-address --anycast You are prompted to answer a series of yes or no questions. Answer N for no to all of them. Once the queue is created, the broker is ready for use with the example programs. C.4. Stopping the broker When you are done running the examples, use the artemis stop command to stop the broker. USD <broker-instance-dir> /bin/artemis stop Revised on 2023-12-08 12:44:53 UTC | [
"<broker-instance-dir> /bin/artemis run",
"example-broker/bin/artemis run __ __ ____ ____ _ /\\ | \\/ |/ __ \\ | _ \\ | | / \\ | \\ / | | | | | |_) |_ __ ___ | | _____ _ __ / /\\ \\ | |\\/| | | | | | _ <| '__/ _ \\| |/ / _ \\ '__| / ____ \\| | | | |__| | | |_) | | | (_) | < __/ | /_/ \\_\\_| |_|\\___\\_\\ |____/|_| \\___/|_|\\_\\___|_| Red Hat AMQ <version> 2020-06-03 12:12:11,807 INFO [org.apache.activemq.artemis.integration.bootstrap] AMQ101000: Starting ActiveMQ Artemis Server 2020-06-03 12:12:12,336 INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live",
"<broker-instance-dir> /bin/artemis queue create --name hello-world-example --address hello-world-example --auto-create-address --anycast",
"<broker-instance-dir> /bin/artemis stop"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_qpid_protonj2/1.0/html/using_qpid_protonj2/using_the_broker_with_the_examples |
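As an optional check after creating the queue, recent AMQ Broker releases ship an artemis queue stat command that prints per-queue message counts; availability and output columns can vary by broker version, so treat the following invocation as a sketch:
<broker-instance-dir>/bin/artemis queue stat
The hello-world-example queue should appear in the listing with a message count of 0 until the example programs start sending messages.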
Chapter 6. Upgrade Quay Bridge Operator | Chapter 6. Upgrade Quay Bridge Operator To upgrade the Quay Bridge Operator (QBO), change the Channel Subscription update channel in the Subscription tab to the desired channel. When upgrading QBO from version 3.5 to 3.7, a number of extra steps are required: You need to create a new QuayIntegration custom resource. This can be completed in the Web Console or from the command line. upgrade-quay-integration.yaml - apiVersion: quay.redhat.com/v1 kind: QuayIntegration metadata: name: example-quayintegration-new spec: clusterID: openshift 1 credentialsSecret: name: quay-integration namespace: openshift-operators insecureRegistry: false quayHostname: https://registry-quay-quay35.router-default.apps.cluster.openshift.com 1 Make sure that the clusterID matches the value for the existing QuayIntegration resource. Create the new QuayIntegration custom resource: USD oc create -f upgrade-quay-integration.yaml Delete the old QuayIntegration custom resource. Delete the old mutatingwebhookconfigurations : USD oc delete mutatingwebhookconfigurations.admissionregistration.k8s.io quay-bridge-operator | [
"- apiVersion: quay.redhat.com/v1 kind: QuayIntegration metadata: name: example-quayintegration-new spec: clusterID: openshift 1 credentialsSecret: name: quay-integration namespace: openshift-operators insecureRegistry: false quayHostname: https://registry-quay-quay35.router-default.apps.cluster.openshift.com",
"oc create -f upgrade-quay-integration.yaml",
"oc delete mutatingwebhookconfigurations.admissionregistration.k8s.io quay-bridge-operator"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/upgrade_red_hat_quay/qbo-operator-upgrade |
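The procedure above does not spell out the command for removing the old QuayIntegration custom resource; one way to do it from the command line is sketched below, where example-quayintegration is an assumed name for the pre-upgrade CR, so list the resources first to confirm the actual name:
oc get quayintegration
oc delete quayintegration example-quayintegration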
Chapter 1. Upgrading by using the Operator | Chapter 1. Upgrading by using the Operator Upgrades through the Red Hat Advanced Cluster Security for Kubernetes (RHACS) Operator are performed automatically or manually, depending on the Update approval option you chose at installation. Follow these guidelines when upgrading: If the version for Central is earlier than 3.74, you must upgrade to 3.74 before upgrading to a 4.x version. For upgrading Central to version 3.74, see the upgrade documentation for version 3.74 . When upgrading Operator-based Central deployments from version 3.74, first ensure the Operator upgrade mode is set to Manual . Then, upgrade the Operator to version 4.0 following the procedure in the upgrade documentation for version 4.0 and ensure that Central is online. After the upgrade to version 4.0 is complete, Red Hat recommends upgrading Central to the latest version for full functionality. 1.1. Preparing to upgrade Before you upgrade the Red Hat Advanced Cluster Security for Kubernetes (RHACS) version, complete the following steps: If you are upgrading from version 3.74, verify that you are running the latest patch release version of the RHACS Operator 3.74. Backup your existing Central database. If the cluster you are upgrading contains the SecuredCluster custom resource (CR), change the collection method to CORE_BPF . For more information, see "Changing the collection method". 1.1.1. Changing the collection method If the cluster that you are upgrading contains the SecuredCluster CR, you must ensure that the per node collection setting is set to CORE_BPF before you upgrade. Procedure In the OpenShift Container Platform web console, go to the RHACS Operator page. In the top navigation menu, select Secured Cluster . Click the instance name, for example, stackrox-secured-cluster-services . Use one of the following methods to change the setting: In the Form view , under Per Node Settings Collector Settings Collection , select CORE_BPF . Click YAML to open the YAML editor and locate the spec.perNode.collector.collection attribute. If the value is KernelModule or EBPF , then change it to CORE_BPF . Click Save. Additional resources Updating installed Operators Backing up Red Hat Advanced Cluster Security for Kubernetes 1.2. Modifying Central custom resource The Central DB service requires persistent storage. If you have not configured a default storage class for the Central cluster that is an SSD or is high performance, you must update the Central custom resource to configure the storage class for the Central DB persistent volume claim (PVC). Note Skip this section if you have already configured a default storage class for Central. Procedure Update the central custom resource with the following configuration: 1 You must not change the value of IsEnabled to Enabled . 2 If this claim exists, your cluster uses the existing claim, otherwise it creates a new claim. 1.3. Modifying Central custom resource for external database Prerequisites You must have a database in your database instance that supports PostgreSQL 13 and a user with the following permissions: Connection rights to the database. Usage and Create on the schema. Select , Insert , Update , and Delete on all tables in the schema. Usage on all sequences in the schema. Procedure Create a password secret in the deployed namespace by using the OpenShift Container Platform web console or the terminal. On the OpenShift Container Platform web console, go to the Workloads Secrets page. 
Create a Key/Value secret with the key password and the value as the path of a plain text file containing the password for the superuser of the provisioned database. Or, run the following command in your terminal: USD oc create secret generic external-db-password \ 1 --from-file=password=<password.txt> 2 1 If you use Kubernetes, enter kubectl instead of oc . 2 Replace password.txt with the path of the file which has the plain text password. Go to the Red Hat Advanced Cluster Security for Kubernetes operator page in the OpenShift Container Platform web console. Select Central in the top navigation bar and select the instance you want to connect to the database. Go to the YAML editor view. For db.passwordSecret.name specify the referenced secret that you created in earlier steps. For example, external-db-password . For db.connectionString specify the connection string in keyword=value format, for example, host=<host> port=5432 database=stackrox user=stackrox sslmode=verify-ca For db.persistence delete the entire block. If necessary, you can specify a Certificate Authority for Central to trust the database certificate by adding a TLS block under the top-level spec, as shown in the following example: Update the central custom resource with the following configuration: spec: tls: additionalCAs: - name: db-ca content: | <certificate> central: db: isEnabled: Default 1 connectionString: "host=<host> port=5432 user=<user> sslmode=verify-ca" passwordSecret: name: external-db-password 1 You must not change the value of IsEnabled to Enabled . Click Save . Additional resources Provisioning a database in your PostgreSQL instance 1.4. Changing the subscription channel You can change the update channel for the RHACS Operator by using the OpenShift Container Platform web console or by using the command line. For upgrading to RHACS 4.0 from RHACS 3.74, you must change the update channel. Important You must change the subscription channel for all clusters where you installed the RHACS Operator, including Central and all secured clusters. Prerequisites You must verify that you are using the latest RHACS 3.74 Operator and there are no pending manual Operator upgrades. You must verify that you backed up your Central database. You have access to an OpenShift Container Platform cluster web console using an account with cluster-admin permissions. Changing the subscription channel by using the web console Use the following instructions for changing the subscription channel by using the web console: Procedure In the Administrator perspective of the OpenShift Container Platform web console, go to Operators Installed Operators . Click the RHACS Operator. Click the Subscription tab. Click the name of the update channel under Update Channel . Select stable , then click Save . For subscriptions with an Automatic approval strategy, the update begins automatically. Go back to the Operators Installed Operators page to monitor the progress of the update. When complete, the status changes to Succeeded and Up to date . For subscriptions with a Manual approval strategy, you can manually approve the update from the Subscription tab. 
Changing the subscription channel by using command line Use the following instructions for changing the subscription channel by using command line: Procedure Run the following command to change the subscription channel to stable : USD oc -n rhacs-operator \ 1 patch subscriptions.operators.coreos.com rhacs-operator \ --type=merge --patch='{ "spec": { "channel": "stable" }}' 1 If you use Kubernetes, enter kubectl instead of oc . During the update, the RHACS Operator provisions a new deployment called central-db and your data begins migrating. It takes around 30 minutes and happens only after you upgrade. 1.5. Rolling back an Operator upgrade To roll back an Operator upgrade, you must perform the steps described in one of the following sections. You can roll back an Operator upgrade by using the CLI or the OpenShift Container Platform web console. Note If you are rolling back from RHACS 4.0, you can only rollback to the latest patch release version of RHACS 3.74. 1.5.1. Rolling back an Operator upgrade by using the CLI You can roll back the Operator version by using CLI commands. Procedure Delete the OLM subscription by running the following command: For OpenShift Container Platform, run the following command: USD oc -n rhacs-operator delete subscription rhacs-operator For Kubernetes, run the following command: USD kubectl -n rhacs-operator delete subscription rhacs-operator Delete the cluster service version (CSV) by running the following command: For OpenShift Container Platform, run the following command: USD oc -n rhacs-operator delete csv -l operators.coreos.com/rhacs-operator.rhacs-operator For Kubernetes, run the following command: USD kubectl -n rhacs-operator delete csv -l operators.coreos.com/rhacs-operator.rhacs-operator Determine the version you want to roll back to by choosing one of the following options: If the current Central instance is running, query the RHACS API to get the rollback version by running the following command: USD curl -k -s -u <user>:<password> https://<central hostname>/v1/centralhealth/upgradestatus | jq -r .upgradeStatus.forceRollbackTo If the current Central instance is not running, perform the following steps: Note This procedure can only be used for RHACS release 3.74 and earlier when the rocksdb database is installed. 
Ensure the Central deployment is scaled down by running the following command: For OpenShift Container Platform, run the following command: USD oc scale -n <central namespace> --replicas=0 deploy/central For Kubernetes, run the following command: USD kubectl scale -n <central namespace> --replicas=0 deploy/central Save the following pod spec as a YAML file: apiVersion: v1 kind: Pod metadata: name: get-previous-db-version spec: containers: - name: get-previous-db-version image: registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:<rollback version> command: - sh args: - '-c' - "cat /var/lib/stackrox/.previous/migration_version.yaml | grep '^image:' | cut -f 2 -d : | tr -d ' '" volumeMounts: - name: stackrox-db mountPath: /var/lib/stackrox volumes: - name: stackrox-db persistentVolumeClaim: claimName: stackrox-db Create a pod in your Central namespace by running the following command using the YAML file that you saved: For OpenShift Container Platform, run the following command: USD oc create -n <central namespace> -f pod.yaml For Kubernetes, run the following command: USD kubectl create -n <central namespace> -f pod.yaml After pod creation is complete, get the version by running the following command: For OpenShift Container Platform, run the following command: USD oc logs -n <central namespace> get-previous-db-version For Kubernetes, run the following command: USD kubectl logs -n <central namespace> get-previous-db-version Edit the central-config.yaml ConfigMap to set the maintenance.forceRollBackVersion:<version> parameter by running the following command: For OpenShift Container Platform, run the following command: USD oc get configmap -n <central namespace> central-config -o yaml | sed -e "s/forceRollbackVersion: none/forceRollbackVersion: <version>/" | oc -n <central namespace> apply -f - For Kubernetes, run the following command: USD kubectl get configmap -n <central namespace> central-config -o yaml | sed -e "s/forceRollbackVersion: none/forceRollbackVersion: <version>/" | kubectl -n <central namespace> apply -f - Set the image for the Central deployment using the version string shown in Step 3 as the image tag. For example, run the following command: For OpenShift Container Platform, run the following command: USD oc set image -n <central namespace> deploy/central central=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:<version> For Kubernetes, run the following command: USD kubectl set image -n <central namespace> deploy/central central=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:<version> Verification Ensure that the Central pod starts and has a ready status. If the pod crashes, check the logs to see if the backup was restored. A successful log message appears similar to the following example: Reinstall the Operator on the rolled back channel. For example, 3.74.2 is installed on the rhacs-3.74 channel. 1.5.2. Rolling back an Operator upgrade by using the web console You can roll back the Operator version by using the OpenShift Container Platform web console. Prerequisites You have access to an OpenShift Container Platform cluster web console using an account with cluster-admin permissions. Procedure Go to the Operators Installed Operators page. Click the RHACS Operator. On the Operator Details page, select Uninstall Operator from the Actions list. Following this action, the Operator stops running and no longer receives updates. 
Determine the version you want to roll back to by choosing one of the following options: If the current Central instance is running, you can query the RHACS API to get the rollback version by running the following command from a terminal window: USD curl -k -s -u <user>:<password> https://<central hostname>/v1/centralhealth/upgradestatus | jq -r .upgradeStatus.forceRollbackTo You can create a pod and extract the version by performing the following steps: Note This procedure can only be used for RHACS release 3.74 and earlier when the rocksdb database is installed. Go to Workloads Deployments central . Under Deployment details , click the down arrow next to the pod count to scale down the pod. Go to Workloads Pods Create Pod and paste the contents of the pod spec as shown in the following example into the editor: apiVersion: v1 kind: Pod metadata: name: get-previous-db-version spec: containers: - name: get-previous-db-version image: registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:<rollback version> command: - sh args: - '-c' - "cat /var/lib/stackrox/.previous/migration_version.yaml | grep '^image:' | cut -f 2 -d : | tr -d ' '" volumeMounts: - name: stackrox-db mountPath: /var/lib/stackrox volumes: - name: stackrox-db persistentVolumeClaim: claimName: stackrox-db Click Create . After the pod is created, click the Logs tab to get the version string. Update the rollback configuration by performing the following steps: Go to Workloads ConfigMaps central-config and select Edit ConfigMap from the Actions list. Find the forceRollbackVersion line in the value of the central-config.yaml key. Replace none with 3.73.3 , and then save the file. Update Central to the earlier version by performing the following steps: Go to Workloads Deployments central and select Edit Deployment from the Actions list. Update the image name, and then save the changes. Verification Ensure that the Central pod starts and has a ready status. If the pod crashes, check the logs to see if the backup was restored. A successful log message appears similar to the following example: Reinstall the Operator on the rolled back channel. For example, 3.74.2 is installed on the rhacs-3.74 channel. Additional resources Installing Central using the Operator method Operator Lifecycle Manager workflow Manually approving a pending Operator update 1.6. Troubleshooting Operator upgrade issues Follow these instructions to investigate and resolve upgrade-related issues for the RHACS Operator. 1.6.1. Central DB cannot be scheduled Follow the instructions here to troubleshoot a failing Central DB pod during an upgrade: Check the status of the central-db pod: USD oc -n <namespace> get pod -l app=central-db 1 1 If you use Kubernetes, enter kubectl instead of oc . If the status of the pod is Pending , use the describe command to get more details: USD oc -n <namespace> describe po/<central-db-pod-name> 1 1 If you use Kubernetes, enter kubectl instead of oc . You might see the FailedScheduling warning message: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 54s default-scheduler 0/7 nodes are available: 1 Insufficient memory, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 4 Insufficient cpu. preemption: 0/7 nodes are available: 3 Preemption is not helpful for scheduling, 4 No preemption victims found for incoming pod. This warning message suggests that the scheduled node had insufficient memory to accommodate the pod's resource requirements. 
If you have a small environment, consider increasing resources on the nodes or adding a larger node that can support the database. Otherwise, consider decreasing the resource requirements for the central-db pod in the custom resource under central db resources . However, running central with fewer resources than the recommended minimum might lead to degraded performance for RHACS. 1.6.2. Central or Secured cluster fails to deploy When RHACS Operator has the following conditions, you must check the custom resource conditions to find the issue: If the Operator fails to deploy Central or Secured Cluster If the Operator fails to apply CR changes to actual resources For Central, run the following command to check the conditions: USD oc -n rhacs-operator describe centrals.platform.stackrox.io 1 1 If you use Kubernetes, enter kubectl instead of oc . For Secured clusters, run the following command to check the conditions: USD oc -n rhacs-operator describe securedclusters.platform.stackrox.io 1 1 If you use Kubernetes, enter kubectl instead of oc . You can identify configuration errors from the conditions output: Example output Conditions: Last Transition Time: 2023-04-19T10:49:57Z Status: False Type: Deployed Last Transition Time: 2023-04-19T10:49:57Z Status: True Type: Initialized Last Transition Time: 2023-04-19T10:59:10Z Message: Deployment.apps "central" is invalid: spec.template.spec.containers[0].resources.requests: Invalid value: "50": must be less than or equal to cpu limit Reason: ReconcileError Status: True Type: Irreconcilable Last Transition Time: 2023-04-19T10:49:57Z Message: No proxy configuration is desired Reason: NoProxyConfig Status: False Type: ProxyConfigFailed Last Transition Time: 2023-04-19T10:49:57Z Message: Deployment.apps "central" is invalid: spec.template.spec.containers[0].resources.requests: Invalid value: "50": must be less than or equal to cpu limit Reason: InstallError Status: True Type: ReleaseFailed Additionally, you can view RHACS pod logs to find more information about the issue. Run the following command to view the logs: oc -n rhacs-operator logs deploy/rhacs-operator-controller-manager manager 1 1 If you use Kubernetes, enter kubectl instead of oc . | [
"spec: central: db: isEnabled: Default 1 persistence: persistentVolumeClaim: 2 claimName: central-db size: 100Gi storageClassName: <storage-class-name>",
"oc create secret generic external-db-password \\ 1 --from-file=password=<password.txt> 2",
"spec: tls: additionalCAs: - name: db-ca content: | <certificate> central: db: isEnabled: Default 1 connectionString: \"host=<host> port=5432 user=<user> sslmode=verify-ca\" passwordSecret: name: external-db-password",
"oc -n rhacs-operator \\ 1 patch subscriptions.operators.coreos.com rhacs-operator --type=merge --patch='{ \"spec\": { \"channel\": \"stable\" }}'",
"oc -n rhacs-operator delete subscription rhacs-operator",
"kubectl -n rhacs-operator delete subscription rhacs-operator",
"oc -n rhacs-operator delete csv -l operators.coreos.com/rhacs-operator.rhacs-operator",
"kubectl -n rhacs-operator delete csv -l operators.coreos.com/rhacs-operator.rhacs-operator",
"curl -k -s -u <user>:<password> https://<central hostname>/v1/centralhealth/upgradestatus | jq -r .upgradeStatus.forceRollbackTo",
"oc scale -n <central namespace> -replicas=0 deploy/central",
"kubectl scale -n <central namespace> -replicas=0 deploy/central",
"apiVersion: v1 kind: Pod metadata: name: get-previous-db-version spec: containers: - name: get-previous-db-version image: registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:<rollback version> command: - sh args: - '-c' - \"cat /var/lib/stackrox/.previous/migration_version.yaml | grep '^image:' | cut -f 2 -d : | tr -d ' '\" volumeMounts: - name: stackrox-db mountPath: /var/lib/stackrox volumes: - name: stackrox-db persistentVolumeClaim: claimName: stackrox-db",
"oc create -n <central namespace> -f pod.yaml",
"kubectl create -n <central namespace> -f pod.yaml",
"oc logs -n <central namespace> get-previous-db-version",
"kubectl logs -n <central namespace> get-previous-db-version",
"oc get configmap -n <central namespace> central-config -o yaml | sed -e \"s/forceRollbackVersion: none/forceRollbackVersion: <version>/\" | oc -n <central namespace> apply -f -",
"kubectl get configmap -n <central namespace> central-config -o yaml | sed -e \"s/forceRollbackVersion: none/forceRollbackVersion: <version>/\" | kubectl -n <central namespace> apply -f -",
"oc set image -n <central namespace> deploy/central central=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:<version>",
"kubectl set image -n <central namespace> deploy/central central=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:<version>",
"Clone to Migrate \".previous\", \"\"",
"curl -k -s -u <user>:<password> https://<central hostname>/v1/centralhealth/upgradestatus | jq -r .upgradeStatus.forceRollbackTo",
"apiVersion: v1 kind: Pod metadata: name: get-previous-db-version spec: containers: - name: get-previous-db-version image: registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:<rollback version> command: - sh args: - '-c' - \"cat /var/lib/stackrox/.previous/migration_version.yaml | grep '^image:' | cut -f 2 -d : | tr -d ' '\" volumeMounts: - name: stackrox-db mountPath: /var/lib/stackrox volumes: - name: stackrox-db persistentVolumeClaim: claimName: stackrox-db",
"Clone to Migrate \".previous\", \"\"",
"oc -n <namespace> get pod -l app=central-db 1",
"oc -n <namespace> describe po/<central-db-pod-name> 1",
"Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 54s default-scheduler 0/7 nodes are available: 1 Insufficient memory, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 4 Insufficient cpu. preemption: 0/7 nodes are available: 3 Preemption is not helpful for scheduling, 4 No preemption victims found for incoming pod.",
"oc -n rhacs-operator describe centrals.platform.stackrox.io 1",
"oc -n rhacs-operator describe securedclusters.platform.stackrox.io 1",
"Conditions: Last Transition Time: 2023-04-19T10:49:57Z Status: False Type: Deployed Last Transition Time: 2023-04-19T10:49:57Z Status: True Type: Initialized Last Transition Time: 2023-04-19T10:59:10Z Message: Deployment.apps \"central\" is invalid: spec.template.spec.containers[0].resources.requests: Invalid value: \"50\": must be less than or equal to cpu limit Reason: ReconcileError Status: True Type: Irreconcilable Last Transition Time: 2023-04-19T10:49:57Z Message: No proxy configuration is desired Reason: NoProxyConfig Status: False Type: ProxyConfigFailed Last Transition Time: 2023-04-19T10:49:57Z Message: Deployment.apps \"central\" is invalid: spec.template.spec.containers[0].resources.requests: Invalid value: \"50\": must be less than or equal to cpu limit Reason: InstallError Status: True Type: ReleaseFailed",
"-n rhacs-operator logs deploy/rhacs-operator-controller-manager manager 1"
] | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.7/html/upgrading/upgrade-operator |
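After switching the subscription channel from the command line, you can confirm that the subscription now tracks stable and watch the new ClusterServiceVersion roll out; the following commands are a sketch that relies only on standard OLM resources:
oc -n rhacs-operator get subscription rhacs-operator -o jsonpath='{.spec.channel}{"\n"}'
oc -n rhacs-operator get csv -w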
Chapter 1. OpenShift Container Platform CLI tools overview | Chapter 1. OpenShift Container Platform CLI tools overview A user performs a range of operations while working on OpenShift Container Platform such as the following: Managing clusters Building, deploying, and managing applications Managing deployment processes Developing Operators Creating and maintaining Operator catalogs OpenShift Container Platform offers a set of command-line interface (CLI) tools that simplify these tasks by enabling users to perform various administration and development operations from the terminal. These tools expose simple commands to manage the applications, as well as interact with each component of the system. 1.1. List of CLI tools The following set of CLI tools are available in OpenShift Container Platform: OpenShift CLI ( oc ) : This is the most commonly used CLI tool by OpenShift Container Platform users. It helps both cluster administrators and developers to perform end-to-end operations across OpenShift Container Platform using the terminal. Unlike the web console, it allows the user to work directly with the project source code using command scripts. Knative CLI (kn) : The Knative ( kn ) CLI tool provides simple and intuitive terminal commands that can be used to interact with OpenShift Serverless components, such as Knative Serving and Eventing. Pipelines CLI (tkn) : OpenShift Pipelines is a continuous integration and continuous delivery (CI/CD) solution in OpenShift Container Platform, which internally uses Tekton. The tkn CLI tool provides simple and intuitive commands to interact with OpenShift Pipelines using the terminal. opm CLI : The opm CLI tool helps the Operator developers and cluster administrators to create and maintain the catalogs of Operators from the terminal. Operator SDK : The Operator SDK, a component of the Operator Framework, provides a CLI tool that Operator developers can use to build, test, and deploy an Operator from the terminal. It simplifies the process of building Kubernetes-native applications, which can require deep, application-specific operational knowledge. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/cli_tools/cli-tools-overview |
17.2. Using IdM and DNS Service Discovery with an Existing DNS Configuration | 17.2. Using IdM and DNS Service Discovery with an Existing DNS Configuration To help create and configure a suitable DNS setup, the IdM installation script creates a sample zone file. During the installation, IdM displays a message similar to the following: If a DNS server is already configured in the network, then the configuration in the IdM-generated file can be added to the existing DNS zone file. This allows IdM clients to find the required LDAP and Kerberos servers. For example, this DNS zone configuration is created for an IdM server with the KDC and DNS servers all on the same machine in the EXAMPLE.COM realm: Example 17.1. Default IdM DNS File Note If DNS services are hosted by a server outside the IdM domain, then an administrator can add the lines in Example 17.1, "Default IdM DNS File" to the existing DNS zone file. This allows IdM clients and servers to continue to use DNS service discovery to find the LDAP and Kerberos servers (meaning, the IdM servers) that are required for them to participate in the IdM domain. | [
"Sample zone file for bind has been created in /tmp/sample.zone.F_uMf4.db",
"; ldap servers _ldap._tcp IN SRV 0 100 389 ipaserver.example.com. ;kerberos realm _kerberos IN TXT EXAMPLE.COM ; kerberos servers _kerberos._tcp IN SRV 0 100 88 ipaserver.example.com. _kerberos._udp IN SRV 0 100 88 ipaserver.example.com. _kerberos-master._tcp IN SRV 0 100 88 ipaserver.example.com. _kerberos-master._udp IN SRV 0 100 88 ipaserver.example.com. _kpasswd._tcp IN SRV 0 100 464 ipaserver.example.com. _kpasswd._udp IN SRV 0 100 464 ipaserver.example.com."
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/dns-file |
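Once the lines from the sample zone file have been merged into the existing zone, you can verify that clients are able to discover the services with ordinary DNS queries; replace example.com with your own domain in the following sketch:
dig +short -t SRV _ldap._tcp.example.com
dig +short -t SRV _kerberos._udp.example.com
dig +short -t TXT _kerberos.example.com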
Troubleshooting OpenShift Data Foundation | Troubleshooting OpenShift Data Foundation Red Hat OpenShift Data Foundation 4.16 Instructions on troubleshooting OpenShift Data Foundation Red Hat Storage Documentation Team Abstract Read this document for instructions on troubleshooting Red Hat OpenShift Data Foundation. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/troubleshooting_openshift_data_foundation/index |
Part III. Device Drivers | Part III. Device Drivers This chapter provides a comprehensive listing of all device drivers which were updated in Red Hat Enterprise Linux 7.1. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.1_release_notes/part-red_hat_enterprise_linux-7.1_release_notes-device_drivers |
Chapter 35. Jira Update Issue Sink | Chapter 35. Jira Update Issue Sink Update fields of an existing issue in Jira. The Kamelet expects the following headers to be set: issueKey / ce-issueKey : as the issue code in Jira. issueTypeName / ce-issueTypeName : as the name of the issue type (example: Bug, Enhancement). issueSummary / ce-issueSummary : as the title or summary of the issue. issueAssignee / ce-issueAssignee : as the user assigned to the issue (Optional). issuePriorityName / ce-issuePriorityName : as the priority name of the issue (example: Critical, Blocker, Trivial) (Optional). issueComponents / ce-issueComponents : as list of string with the valid component names (Optional). issueDescription / ce-issueDescription : as the issue description (Optional). The issue description can be set from the body of the message or the issueDescription / ce-issueDescription in the header, however the body takes precedence. 35.1. Configuration Options The following table summarizes the configuration options available for the jira-update-issue-sink Kamelet: Property Name Description Type Default Example jiraUrl * Jira URL The URL of your instance of Jira string "http://my_jira.com:8081" password * Password The password or the API Token to access Jira string username * Username The username to access Jira string Note Fields marked with an asterisk (*) are mandatory. 35.2. Dependencies At runtime, the jira-update-issue-sink Kamelet relies upon the presence of the following dependencies: camel:core camel:jackson camel:jira camel:kamelet mvn:com.fasterxml.jackson.datatype:jackson-datatype-joda:2.12.4.redhat-00001 35.3. Usage This section describes how you can use the jira-update-issue-sink . 35.3.1. Knative Sink You can use the jira-update-issue-sink Kamelet as a Knative sink by binding it to a Knative object. jira-update-issue-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jira-update-issue-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: "issueKey" value: "MYP-163" - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: "issueTypeName" value: "Bug" - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: "issueSummary" value: "The issue summary" - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: "issuePriorityName" value: "Low" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel properties: jiraUrl: "jira server url" username: "username" password: "password" 35.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 35.3.1.2. Procedure for using the cluster CLI Save the jira-update-issue-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f jira-update-issue-sink-binding.yaml 35.3.1.3. 
Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind --name jira-update-issue-sink-binding timer-source?message="The new comment"\&period=60000 --step insert-header-action -p step-0.name=issueKey -p step-0.value=MYP-170 --step insert-header-action -p step-1.name=issueTypeName -p step-1.value=Story --step insert-header-action -p step-2.name=issueSummary -p step-2.value="This is a story 123" --step insert-header-action -p step-3.name=issuePriorityName -p step-3.value=Highest jira-update-issue-sink?jiraUrl="jira url"\&username="username"\&password="password" This command creates the KameletBinding in the current namespace on the cluster. 35.3.2. Kafka Sink You can use the jira-update-issue-sink Kamelet as a Kafka sink by binding it to a Kafka topic. jira-update-issue-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jira-update-issue-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: "issueKey" value: "MYP-163" - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: "issueTypeName" value: "Bug" - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: "issueSummary" value: "The issue summary" - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: "issuePriorityName" value: "Low" sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: jira-update-issue-sink properties: jiraUrl: "jira server url" username: "username" password: "password" 35.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Make also sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 35.3.2.2. Procedure for using the cluster CLI Save the jira-update-issue-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f jira-update-issue-sink-binding.yaml 35.3.2.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind --name jira-update-issue-sink-binding timer-source?message="The new comment"\&period=60000 --step insert-header-action -p step-0.name=issueKey -p step-0.value=MYP-170 --step insert-header-action -p step-1.name=issueTypeName -p step-1.value=Story --step insert-header-action -p step-2.name=issueSummary -p step-2.value="This is a story 123" --step insert-header-action -p step-3.name=issuePriorityName -p step-3.value=Highest jira-update-issue-sink?jiraUrl="jira url"\&username="username"\&password="password" This command creates the KameletBinding in the current namespace on the cluster. 35.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/jira-update-issue-sink.kamelet.yaml | [
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jira-update-issue-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: \"issueKey\" value: \"MYP-163\" - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: \"issueTypeName\" value: \"Bug\" - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: \"issueSummary\" value: \"The issue summary\" - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: \"issuePriorityName\" value: \"Low\" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel properties: jiraUrl: \"jira server url\" username: \"username\" password: \"password\"",
"apply -f jira-update-issue-sink-binding.yaml",
"kamel bind --name jira-update-issue-sink-binding timer-source?message=\"The new comment\"\\&period=60000 --step insert-header-action -p step-0.name=issueKey -p step-0.value=MYP-170 --step insert-header-action -p step-1.name=issueTypeName -p step-1.value=Story --step insert-header-action -p step-2.name=issueSummary -p step-2.value=\"This is a story 123\" --step insert-header-action -p step-3.name=issuePriorityName -p step-3.value=Highest jira-update-issue-sink?jiraUrl=\"jira url\"\\&username=\"username\"\\&password=\"password\"",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jira-update-issue-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: \"issueKey\" value: \"MYP-163\" - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: \"issueTypeName\" value: \"Bug\" - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: \"issueSummary\" value: \"The issue summary\" - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: \"issuePriorityName\" value: \"Low\" sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: jira-update-issue-sink properties: jiraUrl: \"jira server url\" username: \"username\" password: \"password\"",
"apply -f jira-update-issue-sink-binding.yaml",
"kamel bind --name jira-update-issue-sink-binding timer-source?message=\"The new comment\"\\&period=60000 --step insert-header-action -p step-0.name=issueKey -p step-0.value=MYP-170 --step insert-header-action -p step-1.name=issueTypeName -p step-1.value=Story --step insert-header-action -p step-2.name=issueSummary -p step-2.value=\"This is a story 123\" --step insert-header-action -p step-3.name=issuePriorityName -p step-3.value=Highest jira-update-issue-sink?jiraUrl=\"jira url\"\\&username=\"username\"\\&password=\"password\""
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.9/html/kamelets_reference/jira-update-issue-sink |
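Either procedure creates a KameletBinding and a corresponding Camel K Integration in the current namespace; a quick status check from the command line is sketched below, noting that the kamel logs subcommand streams the integration output and that resource short names can vary between Camel K versions:
oc get kameletbinding jira-update-issue-sink-binding
kamel logs jira-update-issue-sink-binding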
Chapter 14. Managing guest virtual machines with virsh | Chapter 14. Managing guest virtual machines with virsh virsh is a command line interface tool for managing guest virtual machines and the hypervisor. The virsh command-line tool is built on the libvirt management API and operates as an alternative to the qemu-kvm command and the graphical virt-manager application. The virsh command can be used in read-only mode by unprivileged users or, with root access, full administration functionality. The virsh command is ideal for scripting virtualization administration. 14.1. Generic Commands The commands in this section are generic because they are not specific to any domain. 14.1.1. help USD virsh help [command|group] The help command can be used with or without options. When used without options, all commands are listed, one per line. When used with an option, it is grouped into categories, displaying the keyword for each group. To display the commands that are only for a specific option, you need to give the keyword for that group as an option. For example: Using the same command with a command option, gives the help information on that one specific command. For example: 14.1.2. quit and exit The quit command and the exit command will close the terminal. For example: 14.1.3. version The version command displays the current libvirt version and displays information about where the build is from. For example: 14.1.4. Argument Display The virsh echo [--shell][--xml][arg] command echos or displays the specified argument. Each argument echoed will be separated by a space. by using the --shell option, the output will be single quoted where needed so that it is suitable for reusing in a shell command. If the --xml option is used the output will be made suitable for use in an XML file. For example, the command virsh echo --shell "hello world" will send the output 'hello world' . 14.1.5. connect Connects to a hypervisor session. When the shell is first started this command runs automatically when the URI parameter is requested by the -c command. The URI specifies how to connect to the hypervisor. The most commonly used URIs are: xen:/// - connects to the local Xen hypervisor. qemu:///system - connects locally as root to the daemon supervising QEMU and KVM domains. xen:///session - connects locally as a user to the user's set of QEMU and KVM domains. lxc:/// - connects to a local Linux container. Additional values are available on libvirt's website http://libvirt.org/uri.html . The command can be run as follows: Where {name} is the machine name (host name) or URL (the output of the virsh uri command) of the hypervisor. To initiate a read-only connection, append the above command with --readonly . For more information on URIs refer to Remote URIs . If you are unsure of the URI, the virsh uri command will display it: 14.1.6. Displaying Basic Information The following commands may be used to display basic information: USD hostname - displays the hypervisor's host name USD sysinfo - displays the XML representation of the hypervisor's system information, if available 14.1.7. Injecting NMI The USD virsh inject-nmi [domain] injects NMI (non-maskable interrupt) message to the guest virtual machine. This is used when response time is critical, such as non-recoverable hardware errors. To run this command: | [
"virsh help pool Storage Pool (help keyword 'pool'): find-storage-pool-sources-as find potential storage pool sources find-storage-pool-sources discover potential storage pool sources pool-autostart autostart a pool pool-build build a pool pool-create-as create a pool from a set of args pool-create create a pool from an XML file pool-define-as define a pool from a set of args pool-define define (but don't start) a pool from an XML file pool-delete delete a pool pool-destroy destroy (stop) a pool pool-dumpxml pool information in XML pool-edit edit XML configuration for a storage pool pool-info storage pool information pool-list list pools pool-name convert a pool UUID to pool name pool-refresh refresh a pool pool-start start a (previously defined) inactive pool pool-undefine undefine an inactive pool pool-uuid convert a pool name to pool UUID",
"virsh help vol-path NAME vol-path - returns the volume path for a given volume name or key SYNOPSIS vol-path <vol> [--pool <string>] OPTIONS [--vol] <string> volume name or key --pool <string> pool name or uuid",
"virsh exit",
"virsh quit",
"virsh version Compiled against library: libvirt 1.1.1 Using library: libvirt 1.1.1 Using API: QEMU 1.1.1 Running hypervisor: QEMU 1.5.3",
"virsh connect {name|URI}",
"virsh uri qemu:///session",
"virsh inject-nmi guest-1"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/chap-Virtualization_Administration_Guide-Managing_guests_with_virsh |
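A short interactive session that strings together the generic commands above might look like the following sketch; the connection URI and the guest name guest-1 are examples taken from this chapter:
virsh connect qemu:///system
virsh uri
virsh hostname
virsh version
virsh inject-nmi guest-1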
Chapter 3. Configuring Ansible Automation Platform to use egress proxy | Chapter 3. Configuring Ansible Automation Platform to use egress proxy You can deploy Ansible Automation Platform so that egress from the platform for various purposes functions properly through proxy servers. Egress proxy allows clients to make indirect (through a proxy server) requests to network services. The client first connects to the proxy server and requests some resource, for example, email, located on another server. The proxy server then connects to the specified server and retrieves the resource from it. 3.1. Overview The egress proxy should be configured on the system and component level of Ansible Automation Platform, for all the RPM and containerized installation methods. For containerized installers, the system proxy configuration for podman on the nodes solves most of the problems with access through the proxy. For RPM installation, both system and component configurations are needed. 3.1.1. Proxy backends For HTTP and HTTPS proxies you can use a squid server. Squid is a forward proxy for the Web supporting HTTP, HTTPS, and FTP, reducing bandwidth and improving response times by caching and reusing frequently-requested web pages. It is licensed under the GNU GPL. Forward proxies are systems that intercept network traffic going to another network (typically the internet) and send it on the behalf of the internal systems. The squid proxy enables all required communication to pass through it. Make sure all the required Ansible Automation Platform control plane ports are opened on the squid proxy backend. Ansible Automation Platform-specific ports: acl Safe_ports port 81 acl Safe_ports port 82 acl Safe_ports port 389 acl Safe_ports port 444 acl Safe_ports port 445 acl SSL_ports port 22 The following ports are for containerized installations: acl SSL_ports port 444 acl SSL_ports port 445 acl SSL_ports port 8443 acl SSL_ports port 8444 acl SSL_ports port 8445 acl SSL_ports port 8446 acl SSL_ports port 44321 acl SSL_ports port 44322 http_access deny !Safe_ports http_access deny CONNECT !SSL_ports 3.2. System proxy configuration The outbound proxy is configured on the system level for all the nodes in the control plane. The following environment variables must be set: http_proxy="http://external-proxy_0:3128" https_proxy="http://external-proxy_0:3128" no_proxy="localhost,127.0.0.0/8,10.0.0.0/8" You can also add those variables to the /etc/environment file to make them permanent. The installation program ensures that all external communication during the installation goes through the proxy. For containerized installation, those variables ensure that the podman uses the egress proxy. 3.3. Automation controller settings After using the RPM installation program, you must configure automation controller to use egress proxy. Note This is not required for containerized installers because podman uses system configured proxy and redirects all the container traffic to the proxy. For automation controller, set the AWX_TASK_ENV variable in /api/v2/settings/ . To do this through the UI use the following procedure: Procedure From the navigation panel, select Settings Job . Click Edit . Add the variables to the Extra Environment Variables field and set: "AWX_TASK_ENV": { "http_proxy": "http://external-proxy_0:3128", "https_proxy": "http://external-proxy_0:3128", "no_proxy": "localhost,127.0.0.0/8" } 3.4. 
Enabling a configurable proxy environment for AWS inventory synchronization To enable a configurable proxy environment for AWS inventory synchronization, you can manually edit the override configuration file or set the configuration in the platform UI: Manually edit /usr/lib/systemd/system/receptor.service.d/override.conf and add the following http_proxy environment variables there: http_proxy:<value> https_proxy:<value> proxy_username:<value> Proxy_password:<value> Or To do this through the UI use the following procedure: Procedure From the navigation panel, select Settings Job . Click Edit . Add the variables to the Extra Environment Variables field For example: * "AWX_TASK_ENV": { "no_proxy": "localhost,127.0.0.0/8,10.0.0.0/8", "http_proxy": "http://proxy_host:3128/", "https_proxy": "http://proxy_host:3128/" }, 3.5. Configuring Proxy settings on automation hub If your private automation hub is behind a network proxy, you can configure proxy settings on the remote to sync content located outside of your local network. Prerequisites You have Modify Ansible repo content permissions. For more information on permissions, see Access management and authentication . You have a requirements.yml file that identifies those collections to synchronize from Ansible Galaxy as in the following example: Requirements.yml example Procedure Log in to Ansible Automation Platform. From the navigation panel, select Automation Content Remotes . In the Details tab in the Community remote, click Edit remote . In the YAML requirements field, paste the contents of your requirements.yml file. Click Save remote . You can now synchronize collections identified in your requirements.yml file from Ansible Galaxy to your private automation hub. From the navigation panel, select Automation Content Repositories . to the community repository, click the More Actions icon ... and select Sync repository to sync collections between Ansible Galaxy and Ansible automation hub. On the modal that appears, you can toggle the following options: Mirror : Select if you want your repository content to mirror the remote repository's content. Optimize : Select if you want to sync only when no changes are reported by the remote server. Click Sync to complete the sync. Verification The Sync status column updates to notify you whether the Ansible Galaxy collections synchronization to your Ansible automation hub is successful. Navigate to Automation Content Collections and select Community to confirm successful synchronization. 3.6. Configuring proxy settings on Event-Driven Ansible For Event-Driven Ansible, there are no global settings to set a proxy. You must specify the proxy for every project. Procedure From the navigation panel, select Automation Decisions Projects . Click Create Project Use the Proxy field. 3.7. Configuring proxy settings for automation mesh You can route outbound communication from the receptor on an automation mesh node through a proxy server. If your proxy does not strip out TLS certificates then an installation of Ansible Automation Platform automatically supports the use of a proxy server. Every node on the mesh must have a Certifying Authority that the installer creates on your behalf. The default install location for the Certifying Authority is: /etc/receptor/tls/ca/mesh-CA.crt The certificates and keys created on your behalf use the nodeID for their names: For the certificate: /etc/receptor/tls/NODEID.crt For the key: /etc/receptor/tls/NODEID.key | [
"acl Safe_ports port 81 acl Safe_ports port 82 acl Safe_ports port 389 acl Safe_ports port 444 acl Safe_ports port 445 acl SSL_ports port 22",
"acl SSL_ports port 444 acl SSL_ports port 445 acl SSL_ports port 8443 acl SSL_ports port 8444 acl SSL_ports port 8445 acl SSL_ports port 8446 acl SSL_ports port 44321 acl SSL_ports port 44322 http_access deny !Safe_ports http_access deny CONNECT !SSL_ports",
"http_proxy=\"http://external-proxy_0:3128\" https_proxy=\"http://external-proxy_0:3128\" no_proxy=\"localhost,127.0.0.0/8,10.0.0.0/8\"",
"\"AWX_TASK_ENV\": { \"http_proxy\": \"http://external-proxy_0:3128\", \"https_proxy\": \"http://external-proxy_0:3128\", \"no_proxy\": \"localhost,127.0.0.0/8\" }",
"http_proxy:<value> https_proxy:<value> proxy_username:<value> Proxy_password:<value>",
"\"AWX_TASK_ENV\": { \"no_proxy\": \"localhost,127.0.0.0/8,10.0.0.0/8\", \"http_proxy\": \"http://proxy_host:3128/\", \"https_proxy\": \"http://proxy_host:3128/\" },",
"collections: # Install a collection from Ansible Galaxy. - name: community.aws version: 5.2.0 source: https://galaxy.ansible.com"
] | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/operating_ansible_automation_platform/assembly-configure-egress-proxy |
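On each control plane node you can quickly confirm that the system-level proxy variables are in effect and that outbound requests actually pass through the squid server; the following shell sketch assumes the default squid access log location, so adjust the path to your proxy deployment:
env | grep -i _proxy
curl -sI https://galaxy.ansible.com | head -n 1
tail -f /var/log/squid/access.log    # run on the proxy host to watch the request arrive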
Chapter 203. Kubernetes Persistent Volume Claim Component | Chapter 203. Kubernetes Persistent Volume Claim Component Available as of Camel version 2.17 The Kubernetes Persistent Volume Claim component is one of Kubernetes Components which provides a producer to execute kubernetes persistent volume claim operations. 203.1. Component Options The Kubernetes Persistent Volume Claim component has no options. 203.2. Endpoint Options The Kubernetes Persistent Volume Claim endpoint is configured using URI syntax: with the following path and query parameters: 203.2.1. Path Parameters (1 parameters): Name Description Default Type masterUrl Required Kubernetes API server URL String 203.2.2. Query Parameters (20 parameters): Name Description Default Type apiVersion (producer) The Kubernetes API Version to use String dnsDomain (producer) The dns domain, used for ServiceCall EIP String kubernetesClient (producer) Default KubernetesClient to use if provided KubernetesClient operation (producer) Producer operation to do on Kubernetes String portName (producer) The port name, used for ServiceCall EIP String portProtocol (producer) The port protocol, used for ServiceCall EIP tcp String connectionTimeout (advanced) Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean caCertData (security) The CA Cert Data String caCertFile (security) The CA Cert File String clientCertData (security) The Client Cert Data String clientCertFile (security) The Client Cert File String clientKeyAlgo (security) The Key Algorithm used by the client String clientKeyData (security) The Client Key data String clientKeyFile (security) The Client Key file String clientKeyPassphrase (security) The Client Key Passphrase String oauthToken (security) The Auth Token String password (security) Password to connect to Kubernetes String trustCerts (security) Define if the certs we used are trusted anyway or not Boolean username (security) Username to connect to Kubernetes String 203.3. Spring Boot Auto-Configuration The component supports 2 options, which are listed below. Name Description Default Type camel.component.kubernetes-persistent-volumes-claims.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes-claims component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes-claims.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean | [
"kubernetes-persistent-volumes-claims:masterUrl"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/kubernetes-persistent-volumes-claims-component |
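Putting the path and query parameters together, a producer endpoint URI for this component might look like the sketch below; the operation value listPersistentVolumesClaims is an assumption based on the component's naming convention, so confirm the supported operation names for your Camel version before relying on it:
kubernetes-persistent-volumes-claims:https://kubernetes.default.svc?oauthToken=<token>&operation=listPersistentVolumesClaims&connectionTimeout=20000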
7.5. Defining Audit Rules | 7.5. Defining Audit Rules The Audit system operates on a set of rules that define what is to be captured in the log files. The following types of Audit rules can be specified: Control rules Allow the Audit system's behavior and some of its configuration to be modified. File system rules Also known as file watches, allow the auditing of access to a particular file or a directory. System call rules Allow logging of system calls that any specified program makes. Audit rules can be set: on the command line using the auditctl utility. Note that these rules are not persistent across reboots. For details, see Section 7.5.1, "Defining Audit Rules with auditctl " in the /etc/audit/audit.rules file. For details, see Section 7.5.3, "Defining Persistent Audit Rules and Controls in the /etc/audit/audit.rules File" 7.5.1. Defining Audit Rules with auditctl The auditctl command allows you to control the basic functionality of the Audit system and to define rules that decide which Audit events are logged. Note All commands which interact with the Audit service and the Audit log files require root privileges. Ensure you execute these commands as the root user. Additionally, the CAP_AUDIT_CONTROL capability is required to set up audit services and the CAP_AUDIT_WRITE capabilityis required to log user messages. Defining Control Rules The following are some of the control rules that allow you to modify the behavior of the Audit system: -b sets the maximum amount of existing Audit buffers in the kernel, for example: -f sets the action that is performed when a critical error is detected, for example: The above configuration triggers a kernel panic in case of a critical error. -e enables and disables the Audit system or locks its configuration, for example: The above command locks the Audit configuration. -r sets the rate of generated messages per second, for example: The above configuration sets no rate limit on generated messages. -s reports the status of the Audit system, for example: -l lists all currently loaded Audit rules, for example: -D deletes all currently loaded Audit rules, for example: Defining File System Rules To define a file system rule, use the following syntax: where: path_to_file is the file or directory that is audited. permissions are the permissions that are logged: r - read access to a file or a directory. w - write access to a file or a directory. x - execute access to a file or a directory. a - change in the file's or directory's attribute. key_name is an optional string that helps you identify which rule or a set of rules generated a particular log entry. Example 7.1. File System Rules To define a rule that logs all write access to, and every attribute change of, the /etc/passwd file, execute the following command: Note that the string following the -k option is arbitrary. To define a rule that logs all write access to, and every attribute change of, all the files in the /etc/selinux/ directory, execute the following command: To define a rule that logs the execution of the /sbin/insmod command, which inserts a module into the Linux kernel, execute the following command: Defining System Call Rules To define a system call rule, use the following syntax: where: action and filter specify when a certain event is logged. action can be either always or never . filter specifies which kernel rule-matching filter is applied to the event. The rule-matching filter can be one of the following: task , exit , user , and exclude . 
For more information about these filters, see the beginning of Section 7.1, "Audit System Architecture" . system_call specifies the system call by its name. A list of all system calls can be found in the /usr/include/asm/unistd_64.h file. Several system calls can be grouped into one rule, each specified after its own -S option. field = value specifies additional options that further modify the rule to match events based on a specified architecture, group ID, process ID, and others. For a full listing of all available field types and their values, see the auditctl (8) man page. key_name is an optional string that helps you identify which rule or a set of rules generated a particular log entry. Example 7.2. System Call Rules To define a rule that creates a log entry every time the adjtimex or settimeofday system calls are used by a program, and the system uses the 64-bit architecture, use the following command: To define a rule that creates a log entry every time a file is deleted or renamed by a system user whose ID is 1000 or larger, use the following command: Note that the -F auid!=4294967295 option is used to exclude users whose login UID is not set. It is also possible to define a file system rule using the system call rule syntax. The following command creates a rule for system calls that is analogous to the -w /etc/shadow -p wa file system rule: 7.5.2. Defining Executable File Rules To define an executable file rule, use the following syntax: where: action and filter specify when a certain event is logged. action can be either always or never . filter specifies which kernel rule-matching filter is applied to the event. The rule-matching filter can be one of the following: task , exit , user , and exclude . For more information about these filters, see the beginning of Section 7.1, "Audit System Architecture" . system_call specifies the system call by its name. A list of all system calls can be found in the /usr/include/asm/unistd_64.h file. Several system calls can be grouped into one rule, each specified after its own -S option. path_to_executable_file is the absolute path to the executable file that is audited. key_name is an optional string that helps you identify which rule or a set of rules generated a particular log entry. Example 7.3. Executable File Rules To define a rule that logs all execution of the /bin/id program, execute the following command: 7.5.3. Defining Persistent Audit Rules and Controls in the /etc/audit/audit.rules File To define Audit rules that are persistent across reboots, you must either directly include them in the /etc/audit/audit.rules file or use the augenrules program that reads rules located in the /etc/audit/rules.d/ directory. The /etc/audit/audit.rules file uses the same auditctl command line syntax to specify the rules. Empty lines and text following a hash sign ( # ) are ignored. The auditctl command can also be used to read rules from a specified file using the -R option, for example: Defining Control Rules A file can contain only the following control rules that modify the behavior of the Audit system: -b , -D , -e , -f , -r , --loginuid-immutable , and --backlog_wait_time . For more information on these options, see the section called "Defining Control Rules" . Example 7.4. Control Rules in audit.rules Defining File System and System Call Rules File system and system call rules are defined using the auditctl syntax. 
The examples in Section 7.5.1, "Defining Audit Rules with auditctl " can be represented with the following rules file: Example 7.5. File System and System Call Rules in audit.rules Preconfigured Rules Files In the /usr/share/doc/audit/rules/ directory, the audit package provides a set of pre-configured rules files according to various certification standards: 30-nispom.rules - Audit rule configuration that meets the requirements specified in the Information System Security chapter of the National Industrial Security Program Operating Manual. 30-pci-dss-v31.rules - Audit rule configuration that meets the requirements set by Payment Card Industry Data Security Standard (PCI DSS) v3.1. 30-stig.rules - Audit rule configuration that meets the requirements set by Security Technical Implementation Guides (STIG). To use these configuration files, create a backup of your original /etc/audit/audit.rules file and copy the configuration file of your choice over the /etc/audit/audit.rules file: Note The Audit rules have a numbering scheme that allows them to be ordered. To learn more about the naming scheme, see the /usr/share/doc/audit/rules/README-rules file. Using augenrules to Define Persistent Rules The augenrules script reads rules located in the /etc/audit/rules.d/ directory and compiles them into an audit.rules file. This script processes all files that end in .rules in a specific order based on their natural sort order. The files in this directory are organized into groups with the following meanings: 10 - Kernel and auditctl configuration 20 - Rules that could match general rules but you want a different match 30 - Main rules 40 - Optional rules 50 - Server-specific rules 70 - System local rules 90 - Finalize (immutable) The rules are not meant to be used all at once. They are pieces of a policy that should be thought out and individual files copied to /etc/audit/rules.d/ . For example, to set a system up in the STIG configuration, copy rules 10-base-config, 30-stig, 31-privileged, and 99-finalize. Once you have the rules in the /etc/audit/rules.d/ directory, load them by running the augenrules script with the --load directive: For more information on the Audit rules and the augenrules script, see the audit.rules(8) and augenrules(8) man pages.
"~]# auditctl -b 8192",
"~]# auditctl -f 2",
"~]# auditctl -e 2",
"~]# auditctl -r 0",
"~]# auditctl -s AUDIT_STATUS: enabled=1 flag=2 pid=0 rate_limit=0 backlog_limit=8192 lost=259 backlog=0",
"~]# auditctl -l -w /etc/passwd -p wa -k passwd_changes -w /etc/selinux -p wa -k selinux_changes -w /sbin/insmod -p x -k module_insertion ...",
"~]# auditctl -D No rules",
"auditctl -w path_to_file -p permissions -k key_name",
"~]# auditctl -w /etc/passwd -p wa -k passwd_changes",
"~]# auditctl -w /etc/selinux/ -p wa -k selinux_changes",
"~]# auditctl -w /sbin/insmod -p x -k module_insertion",
"auditctl -a action , filter -S system_call -F field = value -k key_name",
"~]# auditctl -a always,exit -F arch=b64 -S adjtimex -S settimeofday -k time_change",
"~]# auditctl -a always,exit -S unlink -S unlinkat -S rename -S renameat -F auid>=1000 -F auid!=4294967295 -k delete",
"~]# auditctl -a always,exit -F path=/etc/shadow -F perm=wa",
"auditctl -a action , filter [ -F arch=cpu -S system_call ] -F exe= path_to_executable_file -k key_name",
"~]# auditctl -a always,exit -F exe=/bin/id -F arch=b64 -S execve -k execution_bin_id",
"~]# auditctl -R /usr/share/doc/audit/rules/30-stig.rules",
"Delete all previous rules -D Set buffer size -b 8192 Make the configuration immutable -- reboot is required to change audit rules -e 2 Panic when a failure occurs -f 2 Generate at most 100 audit messages per second -r 100 Make login UID immutable once it is set (may break containers) --loginuid-immutable 1",
"-w /etc/passwd -p wa -k passwd_changes -w /etc/selinux/ -p wa -k selinux_changes -w /sbin/insmod -p x -k module_insertion -a always,exit -F arch=b64 -S adjtimex -S settimeofday -k time_change -a always,exit -S unlink -S unlinkat -S rename -S renameat -F auid>=1000 -F auid!=4294967295 -k delete",
"~]# cp /etc/audit/audit.rules /etc/audit/audit.rules_backup ~]# cp /usr/share/doc/audit/rules/30-stig.rules /etc/audit/audit.rules",
"~]# augenrules --load augenrules --load No rules enabled 1 failure 1 pid 634 rate_limit 0 backlog_limit 8192 lost 0 backlog 0 enabled 1 failure 1 pid 634 rate_limit 0 backlog_limit 8192 lost 0 backlog 1"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/security_guide/sec-Defining_Audit_Rules_and_Controls |
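As a quick end-to-end illustration of the persistent-rule workflow in the audit section above, the following is a minimal sketch that places a small custom rules file in /etc/audit/rules.d/ and loads it with augenrules. The file name 40-local.rules and the key names (identity_watch, time_change) are illustrative assumptions only, chosen to land in the "optional rules" group of the naming scheme; adapt them to your own policy.
~]# cat > /etc/audit/rules.d/40-local.rules << 'EOF'
# Watch identity files for writes and attribute changes
-w /etc/passwd -p wa -k identity_watch
-w /etc/shadow -p wa -k identity_watch
# Log time changes made through the adjtimex and settimeofday system calls
-a always,exit -F arch=b64 -S adjtimex -S settimeofday -k time_change
EOF
~]# augenrules --load
~]# auditctl -l
~]# ausearch -k identity_watch --start today
The auditctl -l output should list the rules compiled into /etc/audit/audit.rules, and ausearch -k retrieves any events recorded with the chosen key.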
Chapter 3. Administer MicroProfile in JBoss EAP | Chapter 3. Administer MicroProfile in JBoss EAP 3.1. MicroProfile OpenTracing administration Important If you see duplicate traces exported for REST calls, disable the microprofile-opentracing-smallrye subsystem. For information about disabling the microprofile-opentracing-smallrye , see Removing the microprofile-opentracing-smallrye subsystem . 3.1.1. Enabling MicroProfile Open Tracing Use the following management CLI commands to enable the MicroProfile Open Tracing feature globally for the server instance by adding the subsystem to the server configuration. Procedure Enable the microprofile-opentracing-smallrye subsystem using the following management command: Reload the server for the changes to take effect. 3.1.2. Removing the microprofile-opentracing-smallrye subsystem The microprofile-opentracing-smallrye subsystem is included in the default JBoss EAP 7.4 configuration. This subsystem provides MicroProfile OpenTracing functionality for JBoss EAP 7.4. If you experience system memory or performance degradation with MicroProfile OpenTracing enabled, you might want to disable the microprofile-opentracing-smallrye subsystem. You can use the remove operation in the management CLI to disable the MicroProfile OpenTracing feature globally for a given server. Procedure Remove the microprofile-opentracing-smallrye subsystem. Reload the server for the changes to take effect. 3.1.3. Installing Jaeger Install Jaeger using docker . Prerequisites docker is installed. Procedure Install Jaeger using docker by issuing the following command in CLI: 3.2. MicroProfile Config configuration 3.2.1. Adding properties in a ConfigSource management resource You can store properties directly in a config-source subsystem as a management resource. Procedure Create a ConfigSource and add a property: 3.2.2. Configuring directories as ConfigSources When a property is stored in a directory as a file, the file-name is the name of a property and the file content is the value of the property. Procedure Create a directory where you want to store the files: Navigate to the directory: Create a file name to store the value for the property name : Add the value of the property to the file: Create a ConfigSource in which the file name is the property and the file contents the value of the property: This results in the following XML configuration: <subsystem xmlns="urn:wildfly:microprofile-config-smallrye:1.0"> <config-source name="file-props"> <dir path="/etc/config/prop-files"/> </config-source> </subsystem> 3.2.3. Obtaining ConfigSource from a ConfigSource class You can create and configure a custom org.eclipse.microprofile.config.spi.ConfigSource implementation class to provide a source for the configuration values. Procedure The following management CLI command creates a ConfigSource for the implementation class named org.example.MyConfigSource that is provided by a JBoss module named org.example . If you want to use a ConfigSource from the org.example module, add the <module name="org.eclipse.microprofile.config.api"/> dependency to the path/to/org/example/main/module.xml file. This command results in the following XML configuration for the microprofile-config-smallrye subsystem. 
<subsystem xmlns="urn:wildfly:microprofile-config-smallrye:1.0"> <config-source name="my-config-source"> <class name="org.example.MyConfigSource" module="org.example"/> </config-source> </subsystem> Properties provided by the custom org.eclipse.microprofile.config.spi.ConfigSource implementation class are available to any JBoss EAP deployment. 3.2.4. Obtaining ConfigSource configuration from a ConfigSourceProvider class You can create and configure a custom org.eclipse.microprofile.config.spi.ConfigSourceProvider implementation class that registers implementations for multiple ConfigSource instances. Procedure Create a config-source-provider : The command creates a config-source-provider for the implementation class named org.example.MyConfigSourceProvider that is provided by a JBoss Module named org.example . If you want to use a config-source-provider from the org.example module, add the <module name="org.eclipse.microprofile.config.api"/> dependency to the path/to/org/example/main/module.xml file. This command results in the following XML configuration for the microprofile-config-smallrye subsystem: <subsystem xmlns="urn:wildfly:microprofile-config-smallrye:1.0"> <config-source-provider name="my-config-source-provider"> <class name="org.example.MyConfigSourceProvider" module="org.example"/> </config-source-provider> </subsystem> Properties provided by the ConfigSourceProvider implementation are available to any JBoss EAP deployment. Additional resources For information about how to add a global module to the JBoss EAP server, see Define Global Modules in the Configuration Guide for JBoss EAP. 3.3. MicroProfile Fault Tolerance configuration 3.3.1. Adding the MicroProfile Fault Tolerance extension The MicroProfile Fault Tolerance extension is included in standalone-microprofile.xml and standalone-microprofile-ha.xml configurations that are provided as part of JBoss EAP XP. The extension is not included in the standard standalone.xml configuration. To use the extension, you must manually enable it. Prerequisites EAP XP pack is installed. Procedure Add the MicroProfile Fault Tolerance extension using the following management CLI command: Enable the microprofile-fault-tolerance-smallrye subsystem using the following managenent command: Reload the server with the following management command: 3.4. MicroProfile Health configuration 3.4.1. Examining health using the management CLI You can check system health using the management CLI. Procedure Examine health: 3.4.2. Examining health using the management console You can check system health using the management console. A check runtime operation shows the health checks and the global outcome as boolean value. Procedure Navigate to the Runtime tab and select the server. In the Monitor column, click MicroProfile Health View . 3.4.3. Examining health using the HTTP endpoint Health check is automatically deployed to the health context on JBoss EAP, so you can obtain the current health using the HTTP endpoint. The default address for the /health endpoint, accessible from the management interface, is http://127.0.0.1:9990/health . Procedure To obtain the current health of the server using the HTTP endpoint, use the following URL:. Accessing this context displays the health check in JSON format, indicating if the server is healthy. 3.4.4. Enabling authentication for MicroProfile Health You can configure the health context to require authentication for access. 
Procedure Set the security-enabled attribute to true on the microprofile-health-smallrye subsystem. Reload the server for the changes to take effect. Any subsequent attempt to access the /health endpoint triggers an authentication prompt. 3.4.5. Readiness probes that determine server health and readiness JBoss EAP XP 4.0.0 supports three readiness probes to determine server health and readiness. server-status - returns UP when the server-state is running . boot-errors - returns UP when the probe detects no boot errors. deployment-status - returns UP when the status for all deployments is OK . These readiness probes are enabled by default. You can disable the probes using the MicroProfile Config property mp.health.disable-default-procedures . The following example illustrates the use of the three probes with the check operation: Additional resources MicroProfile Health in JBoss EAP Global status when probes are not defined 3.4.6. Global status when probes are not defined The :empty-readiness-checks-status , :empty-liveness-checks-status , and :empty-startup-checks-status management attributes specify the global status when no readiness , liveness , or startup probes are defined. These attributes allow applications to report 'DOWN' until their probes verify that the application is ready, live, or started up. By default, applications report 'UP'. The :empty-readiness-checks-status attribute specifies the global status for readiness probes if no readiness probes have been defined: The :empty-liveness-checks-status attribute specifies the global status for liveness probes if no liveness probes have been defined: The :empty-startup-checks-status attribute specifies the global status for startup probes if no startup probes have been defined: The /health HTTP endpoint and the :check operation that check readiness , liveness , and startup probes also take into account these attributes. You can also modify these attributes as shown in the following example: 3.5. MicroProfile JWT configuration 3.5.1. Enabling microprofile-jwt-smallrye subsystem The MicroProfile JWT integration is provided by the microprofile-jwt-smallrye subsystem and is included in the default configuration. If the subsystem is not present in the default configuration, you can add it as follows. Prerequisites EAP XP is installed. Procedure Enable the MicroProfile JWT smallrye extension in JBoss EAP: Enable the microprofile-jwt-smallrye subsystem: Reload the server: The microprofile-jwt-smallrye subsystem is enabled. 3.6. MicroProfile Metrics administration 3.6.1. Metrics available on the management interface The JBoss EAP subsystem metrics are exposed in Prometheus format. Metrics are automatically available on the JBoss EAP management interface, with the following contexts: /metrics/ - Contains metrics specified in the MicroProfile 3.0 specification. /metrics/vendor - Contains vendor-specific metrics, such as memory pools. /metrics/application - Contains metrics from deployed applications and subsystems that use the MicroProfile Metrics API. The metric names are based on subsystem and attribute names. For example, the subsystem undertow exposes a metric attribute request-count for every servlet in an application deployment. The name of this metric is jboss_undertow_request_count . The prefix jboss identifies JBoss EAP as the source of the metrics. 3.6.2. Examining metrics using the HTTP endpoint Examine the metrics that are available on the JBoss EAP management interface using the HTTP endpoint. 
Procedure Use the curl command: 3.6.3. Enabling Authentication for the MicroProfile Metrics HTTP Endpoint Configure the metrics context to require users to be authorized to access the context. This configuration extends to all the subcontexts of the metrics context. Procedure Set the security-enabled attribute to true on the microprofile-metrics-smallrye subsystem. Reload the server for the changes to take effect. Any subsequent attempt to access the metrics endpoint results in an authentication prompt. 3.6.4. Obtaining the request count for a web service Obtain the request count for a web service that exposes its request count metric. The following procedure uses the helloworld-rs quickstart as the web service for obtaining the request count. The quickstart is available for download from: jboss-eap-quickstarts . Prerequisites The web service exposes the request count. Procedure Enable statistics for the undertow subsystem: Start the standalone server with statistics enabled: For an already running server, enable the statistics for the undertow subsystem: Deploy the helloworld-rs quickstart: In the root directory of the quickstart, deploy the web application using Maven: Query the HTTP endpoint in the CLI using the curl command and filter for request_count : Expected output: The attribute value returned is 0.0 . Access the quickstart, located at http://localhost:8080/helloworld-rs/, in a web browser and click any of the links. Query the HTTP endpoint from the CLI again: Expected output: The value is updated to 1.0 . Repeat the last two steps to verify that the request count is updated. 3.7. MicroProfile OpenAPI administration 3.7.1. Enabling MicroProfile OpenAPI The microprofile-openapi-smallrye subsystem is provided in the standalone-microprofile.xml configuration. However, JBoss EAP XP uses standalone.xml by default. You must include the subsystem in standalone.xml to use it. Alternatively, you can follow the procedure Updating standalone configurations with MicroProfile subsystems and extensions to update the standalone.xml configuration file. Procedure Enable the MicroProfile OpenAPI smallrye extension in JBoss EAP: Enable the microprofile-openapi-smallrye subsystem using the following management command: Reload the server. The microprofile-openapi-smallrye subsystem is enabled. 3.7.2. Requesting a MicroProfile OpenAPI document using an Accept HTTP header Request a MicroProfile OpenAPI document, in the JSON format, from a deployment using an Accept HTTP header. By default, the OpenAPI endpoint returns a YAML document. Prerequisites The deployment being queried is configured to return a MicroProfile OpenAPI document. Procedure Issue the following curl command to query the /openapi endpoint of the deployment: Replace http://localhost:8080 with the URL and port of the deployment. The Accept header indicates that the JSON document is to be returned using the application/json string. 3.7.3. Requesting a MicroProfile OpenAPI document using an HTTP parameter Request a MicroProfile OpenAPI document, in the JSON format, from a deployment using a query parameter in an HTTP request. By default, the OpenAPI endpoint returns a YAML document. Prerequisites The deployment being queried is configured to return a MicroProfile OpenAPI document. Procedure Issue the following curl command to query the /openapi endpoint of the deployment: Replace http://localhost:8080 with the URL and port of the deployment. The HTTP parameter format=JSON indicates that a JSON document is to be returned. 3.7.4.
Configuring JBoss EAP to serve a static OpenAPI document Configure JBoss EAP to serve a static OpenAPI document that describes the REST services for the host. When JBoss EAP is configured to serve a static OpenAPI document, the static OpenAPI document is processed before any Jakarta RESTful Web Services and MicroProfile OpenAPI annotations. In a production environment, disable annotation processing when serving a static document. Disabling annotation processing ensures that an immutable and versioned API contract is available for clients. Procedure Create a directory in the application source tree: APPLICATION_ROOT is the directory containing the pom.xml configuration file for the application. Query the OpenAPI endpoint, redirecting the output to a file: By default, the endpoint serves a YAML document, format=JSON specifies that a JSON document is returned. Configure the application to skip annotation scanning when processing the OpenAPI document model: Rebuild the application: Deploy the application again using the following management CLI commands: Undeploy the application: Deploy the application: JBoss EAP now serves a static OpenAPI document at the OpenAPI endpoint. 3.7.5. Disabling microprofile-openapi-smallrye You can disable the microprofile-openapi-smallrye subsystem in JBoss EAP XP using the management CLI. Procedure Disable the microprofile-openapi-smallrye subsystem: 3.8. MicroProfile Reactive Messaging administration 3.8.1. Configuring the required MicroProfile reactive messaging extension and subsystem for JBoss EAP If you want to enable asynchronous reactive messaging to your instance of JBoss EAP, you must add its extension through the JBoss EAP management CLI. Prerequisites You added the Reactive Streams Operators with SmallRye extension and subsystem. For more information, see MicroProfile Reactive Streams Operators Subsystem Configuration: Required Extension . You added the Reactive Messaging with SmallRye extension and subsystem. Procedure Open the JBoss EAP management CLI. Enter the following code: Note If you provision a server using Galleon, either on OpenShift or not, make sure you include the microprofile-reactive-messaging Galleon layer to get the core MicroProfile 2.0.1 and reactive messaging functionality, and to enable the required subsystems and extensions. Note that this configuration does not contain the JBoss EAP modules you need to enable Kafka connector functionality. To do this, use the microprofile-reactive-messaging-kafka layer. Verification You have successfully added the required MicroProfile Reactive Messaging extension and subsystem for JBoss EAP if you see success in two places in the resulting code in the management CLI. Tip If the resulting code says reload-required , you have to reload your server configuration to completely apply all of your changes. To reload, in a standalone server CLI, enter reload . 3.9. Standalone server configuration 3.9.1. Standalone server configuration files The JBoss EAP XP includes additional standalone server configuration files, standalone-microprofile.xml and standalone-microprofile-ha.xml . Standard configuration files that are included with JBoss EAP remain unchanged. Note that JBoss EAP XP 4.0.0 does not support the use of domain.xml files or domain mode. Table 3.1. Standalone configuration files available in JBoss EAP XP Configuration File Purpose Included capabilities Excluded capabilities standalone.xml This is the default configuration that is used when you start your standalone server. 
Includes information about the server, including subsystems, networking, deployments, socket bindings, and other configurable details. Excludes subsystems necessary for messaging or high availability. standalone-microprofile.xml This configuration file supports applications that use MicroProfile. Includes information about the server, including subsystems, networking, deployments, socket bindings, and other configurable details. Excludes the following capabilities: Jakarta Enterprise Beans Messaging Jakarta EE Batch Jakarta Server Faces Jakarta Enterprise Beans timers standalone-ha.xml Includes default subsystems and adds the modcluster and jgroups subsystems for high availability. Excludes subsystems necessary for messaging. standalone-microprofile-ha.xml This standalone file supports applications that use MicroProfile. Includes the modcluster and jgroups subsystems for high availability in addition to default subsystems. Excludes subsystems necessary for messaging. standalone-full.xml Includes the messaging-activemq and iiop-openjdk subsystems in addition to default subsystems. standalone-full-ha.xml Support for every possible subsystem. Includes subsystems for messaging and high availability in addition to default subsystems. standalone-load-balancer.xml Support for the minimum subsystems necessary to use the built-in mod_cluster front-end load balancer to load balance other JBoss EAP instances. By default, starting JBoss EAP as a standalone server uses the standalone.xml file. To start JBoss EAP with a standalone MicroProfile configuration, use the -c argument. For example, Additional Resources Starting and Stopping JBoss EAP Configuration Data 3.9.2. Updating standalone configurations with MicroProfile subsystems and extensions You can update standard standalone server configuration files with MicroProfile subsystems and extensions using the docs/examples/enable-microprofile.cli script. The enable-microprofile.cli script is intended as an example script for updating standard standalone server configuration files, not custom configurations. The enable-microprofile.cli script modifies the existing standalone server configuration and adds the following MicroProfile subsystems and extensions if they do not exist in the standalone configuration file: microprofile-config-smallrye microprofile-fault-tolerance-smallrye microprofile-health-smallrye microprofile-jwt-smallrye microprofile-metrics-smallrye microprofile-openapi-smallrye microprofile-opentracing-smallrye The enable-microprofile.cli script outputs a high-level description of the modifications. The configuration is secured using the elytron subsystem. The security subsystem, if present, is removed from the configuration. Prerequisites JBoss EAP XP is installed. Procedure Run the following CLI script to update the default standalone.xml server configuration file: Select a standalone server configuration other than the default standalone.xml server configuration file using the following command: The specified configuration file now includes MicroProfile subsystems and extensions. | [
"/subsystem=microprofile-opentracing-smallrye:add()",
"reload",
"/subsystem=microprofile-opentracing-smallrye:remove()",
"reload",
"docker run -d --name jaeger -p 6831:6831/udp -p 5778:5778 -p 14268:14268 -p 16686:16686 jaegertracing/all-in-one:1.16",
"/subsystem=microprofile-config-smallrye/config-source=props:add(properties={\"name\" = \"jim\"})",
"mkdir -p ~/config/prop-files/",
"cd ~/config/prop-files/",
"touch name",
"echo \"jim\" > name",
"/subsystem=microprofile-config-smallrye/config-source=file-props:add(dir={path=~/config/prop-files})",
"<subsystem xmlns=\"urn:wildfly:microprofile-config-smallrye:1.0\"> <config-source name=\"file-props\"> <dir path=\"/etc/config/prop-files\"/> </config-source> </subsystem>",
"/subsystem=microprofile-config-smallrye/config-source=my-config-source:add(class={name=org.example.MyConfigSource, module=org.example})",
"<subsystem xmlns=\"urn:wildfly:microprofile-config-smallrye:1.0\"> <config-source name=\"my-config-source\"> <class name=\"org.example.MyConfigSource\" module=\"org.example\"/> </config-source> </subsystem>",
"/subsystem=microprofile-config-smallrye/config-source-provider=my-config-source-provider:add(class={name=org.example.MyConfigSourceProvider, module=org.example})",
"<subsystem xmlns=\"urn:wildfly:microprofile-config-smallrye:1.0\"> <config-source-provider name=\"my-config-source-provider\"> <class name=\"org.example.MyConfigSourceProvider\" module=\"org.example\"/> </config-source-provider> </subsystem>",
"/extension=org.wildfly.extension.microprofile.fault-tolerance-smallrye:add",
"/subsystem=microprofile-fault-tolerance-smallrye:add",
"reload",
"/subsystem=microprofile-health-smallrye:check { \"outcome\" => \"success\", \"result\" => { \"status\" => \"UP\", \"checks\" => [] } }",
"http:// <host> : <port> /health",
"/subsystem=microprofile-health-smallrye:write-attribute(name=security-enabled,value=true)",
"reload",
"[standalone@localhost:9990 /] /subsystem=microprofile-health-smallrye:check { \"outcome\" => \"success\", \"result\" => { \"status\" => \"UP\", \"checks\" => [ { \"name\" => \"boot-errors\", \"status\" => \"UP\" }, { \"name\" => \"server-state\", \"status\" => \"UP\", \"data\" => {\"value\" => \"running\"} }, { \"name\" => \"empty-readiness-checks\", \"status\" => \"UP\" }, { \"name\" => \"deployments-status\", \"status\" => \"UP\" }, { \"name\" => \"empty-liveness-checks\", \"status\" => \"UP\" }, { \"name\" => \"empty-startup-checks\", \"status\" => \"UP\" } ] } }",
"/subsystem=microprofile-health-smallrye:read-attribute(name=empty-readiness-checks-status) { \"outcome\" => \"success\", \"result\" => expression \"USD{env.MP_HEALTH_EMPTY_READINESS_CHECKS_STATUS:UP}\" }",
"/subsystem=microprofile-health-smallrye:read-attribute(name=empty-liveness-checks-status) { \"outcome\" => \"success\", \"result\" => expression \"USD{env.MP_HEALTH_EMPTY_LIVENESS_CHECKS_STATUS:UP}\" }",
"/subsystem=microprofile-health-smallrye:read-attribute(name=empty-startup-checks-status) { \"outcome\" => \"success\", \"result\" => expression \"USD{env.MP_HEALTH_EMPTY_STARTUP_CHECKS_STATUS:UP}\" }",
"/subsystem=microprofile-health-smallrye:write-attribute(name=empty-readiness-checks-status,value=DOWN) { \"outcome\" => \"success\", \"response-headers\" => { \"operation-requires-reload\" => true, \"process-state\" => \"reload-required\" } }",
"/extension=org.wildfly.extension.microprofile.jwt-smallrye:add",
"/subsystem=microprofile-jwt-smallrye:add",
"reload",
"curl -v http://localhost:9990/metrics | grep -i type",
"/subsystem=microprofile-metrics-smallrye:write-attribute(name=security-enabled,value=true)",
"reload",
"./standalone.sh -Dwildfly.statistics-enabled=true",
"/subsystem=undertow:write-attribute(name=statistics-enabled,value=true)",
"mvn clean install wildfly:deploy",
"curl -v http://localhost:9990/metrics | grep request_count",
"jboss_undertow_request_count_total{server=\"default-server\",http_listener=\"default\",} 0.0",
"curl -v http://localhost:9990/metrics | grep request_count",
"jboss_undertow_request_count_total{server=\"default-server\",http_listener=\"default\",} 1.0",
"/extension=org.wildfly.extension.microprofile.openapi-smallrye:add()",
"/subsystem=microprofile-openapi-smallrye:add()",
"reload",
"curl -v -H'Accept: application/json' http://localhost:8080 /openapi < HTTP/1.1 200 OK {\"openapi\": \"3.0.1\" ... }",
"curl -v http://localhost:8080 /openapi?format=JSON < HTTP/1.1 200 OK",
"mkdir APPLICATION_ROOT /src/main/webapp/META-INF",
"curl http://localhost:8080/openapi?format=JSON > src/main/webapp/META-INF/openapi.json",
"echo \"mp.openapi.scan.disable=true\" > APPLICATION_ROOT /src/main/webapp/META-INF/microprofile-config.properties",
"mvn clean install",
"undeploy microprofile-openapi.war",
"deploy APPLICATION_ROOT /target/microprofile-openapi.war",
"/subsystem=microprofile-openapi-smallrye:remove()",
"[standalone@localhost:9990 /] /extension=org.wildfly.extension.microprofile.reactive-messaging-smallrye:add {\"outcome\" => \"success\"} [standalone@localhost:9990 /] /subsystem=microprofile-reactive-messaging-smallrye:add { \"outcome\" => \"success\", \"response-headers\" => { \"operation-requires-reload\" => true, \"process-state\" => \"reload-required\" } }",
"EAP_HOME /bin/standalone.sh -c=standalone-microprofile.xml",
"EAP_HOME /bin/jboss-cli.sh --file=docs/examples/enable-microprofile.cli",
"EAP_HOME /bin/jboss-cli.sh --file=docs/examples/enable-microprofile.cli -Dconfig=<standalone-full.xml|standalone-ha.xml|standalone-full-ha.xml>"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/using_jboss_eap_xp_4.0.0/administer_microprofile_in_jboss_eap |
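To show how several of the MicroProfile administration steps above can be applied in one pass, the following is a minimal management CLI sketch that adds a properties-backed ConfigSource and enables authentication for both the health and metrics endpoints. The config-source name config-defaults and the property values are illustrative assumptions, not part of the product configuration; the individual operations are the same ones shown in the procedures above.
EAP_HOME/bin/jboss-cli.sh --connect
batch
# Hypothetical properties-backed ConfigSource; names and values are placeholders
/subsystem=microprofile-config-smallrye/config-source=config-defaults:add(properties={"greeting" = "hello", "feature.enabled" = "true"})
# Require authentication on the /health and /metrics contexts
/subsystem=microprofile-health-smallrye:write-attribute(name=security-enabled,value=true)
/subsystem=microprofile-metrics-smallrye:write-attribute(name=security-enabled,value=true)
run-batch
reload
Running the operations in a batch ensures that either all of the changes are applied or none are; the final reload is still needed because the security-enabled attributes take effect only after the server configuration is reloaded.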
Chapter 3. Configuring build runs | Chapter 3. Configuring build runs In a BuildRun custom resource (CR), you can define the build reference, build specification, parameter values, service account, output, retention parameters, and volumes to configure a build run. A BuildRun resource is available for use within a namespace. For configuring a build run, create a BuildRun resource YAML file and apply it to the OpenShift Container Platform cluster. 3.1. Configurable fields in build run You can use the following fields in your BuildRun custom resource (CR): Table 3.1. Fields in the BuildRun CR Field Presence Description apiVersion Required Specifies the API version of the resource. For example, shipwright.io/v1beta1 . kind Required Specifies the type of the resource. For example, BuildRun . metadata Required Indicates the metadata that identifies the custom resource definition instance. For example, the name of the BuildRun resource. spec.build.name Optional Specifies an existing Build resource instance to use. You cannot use this field with the spec.build.spec field. spec.build.spec Optional Specifies an embedded Build resource instance to use. You cannot use this field with the spec.build.name field. spec.serviceAccount Optional Indicates the service account to use when building the image. spec.timeout Optional Defines a custom timeout. This field value overwrites the value of the spec.timeout field defined in your Build resource. spec.paramValues Optional Indicates a name-value list to specify values for parameters defined in the build strategy. The parameter value overwrites the value of the parameter that is defined with the same name in your Build resource. spec.output.image Optional Indicates a custom location where the generated image will be pushed. This field value overwrites the value of the output.image field defined in your Build resource. spec.output.pushSecret Optional Indicates an existing secret to get access to the container registry. This secret will be added to the service account along with other secrets requested by the Build resource. spec.env Optional Defines additional environment variables that you can pass to the build container. This field value overrides any environment variables that are specified in the Build resource. The available variables depend on the tool that is used by your build strategy. Note You cannot use the spec.build.name and spec.build.spec fields together in the same CR because they are mutually exclusive. 3.2. Build reference definition You can configure the spec.build.name field in your BuildRun resource to reference a Build resource that indicates an image to build. The following example shows a BuildRun CR that configures the spec.build.name field: apiVersion: shipwright.io/v1beta1 kind: BuildRun metadata: name: buildah-buildrun spec: build: name: buildah-build 3.3. Build specification definition You can embed a complete build specification into your BuildRun resource using the spec.build.spec field. By embedding specifications, you can build an image without creating and maintaining a dedicated Build custom resource. 
The following example shows a BuildRun CR that configures the spec.build.spec field: apiVersion: shipwright.io/v1beta1 kind: BuildRun metadata: name: standalone-buildrun spec: build: spec: source: git: url: https://github.com/shipwright-io/sample-go.git contextDir: source-build strategy: kind: ClusterBuildStrategy name: buildah output: image: <path_to_image> Note You cannot use the spec.build.name and spec.build.spec fields together in the same CR because they are mutually exclusive. 3.4. Parameter values definition for a build run You can specify values for the build strategy parameters in your BuildRun CR. If you have provided a value for a parameter that is also defined in the Build resource with the same name, then the value defined in the BuildRun resource takes priority. In the following example, the value of the cache parameter in the BuildRun resource overrides the value of the cache parameter, which is defined in the Build resource: apiVersion: shipwright.io/v1beta1 kind: Build metadata: name: <your_build> namespace: <your_namespace> spec: paramValues: - name: cache value: disabled strategy: name: <your_strategy> kind: ClusterBuildStrategy source: # ... output: # ... apiVersion: shipwright.io/v1beta1 kind: BuildRun metadata: name: <your_buildrun> namespace: <your_namespace> spec: build: name: <your_build> paramValues: - name: cache value: registry 3.5. Service account definition You can define a service account in your BuildRun resource. The service account hosts all secrets referenced in your Build resource, as shown in the following example: apiVersion: shipwright.io/v1beta1 kind: BuildRun metadata: name: buildah-buildrun spec: build: name: buildah-build serviceAccount: pipeline 1 1 You can also set the value of the spec.serviceAccount field to ".generate" to generate the service account during runtime. The name of the generated service account corresponds with the name of the BuildRun resource. Note When you do not define the service account, the BuildRun resource uses the pipeline service account if it exists in the namespace. Otherwise, the BuildRun resource uses the default service account. 3.6. Retention parameters definition for a build run You can specify the duration for which a completed build run can exist in your BuildRun resource. Retention parameters provide a way to clean your BuildRun instances automatically. You can set the value of the following retention parameters in your BuildRun CR: retention.ttlAfterFailed : Specifies the duration for which a failed build run can exist retention.ttlAfterSucceeded : Specifies the duration for which a successful build run can exist The following example shows how to define retention parameters in your BuildRun CR: apiVersion: shipwright.io/v1beta1 kind: BuildRun metadata: name: buidrun-retention-ttl spec: build: name: build-retention-ttl retention: ttlAfterFailed: 10m ttlAfterSucceeded: 10m Note If you have defined a retention parameter in both BuildRun and Build CRs, the value defined in the BuildRun CR overrides the value of the retention parameter defined in the Build CR. 3.7. Volumes definition for a build run You can define volumes in your BuildRun CR. The defined volumes override the volumes specified in the BuildStrategy resource. If a volume is not overridden, then the build run fails. In case the Build and BuildRun resources override the same volume, the volume defined in the BuildRun resource is used for overriding. 
The following example shows a BuildRun CR that uses the volumes field: apiVersion: shipwright.io/v1beta1 kind: BuildRun metadata: name: <buildrun_name> spec: build: name: <build_name> volumes: - name: <volume_name> configMap: name: <configmap_name> 3.8. Environment variables definition You can use environment variables in your BuildRun CR based on your needs. The following example shows how to define environment variables: Example: Defining a BuildRun resource with environment variables apiVersion: shipwright.io/v1beta1 kind: BuildRun metadata: name: buildah-buildrun spec: build: name: buildah-build env: - name: <example_var_1> value: "<example_value_1>" - name: <example_var_2> value: "<example_value_2>" The following example shows a BuildRun resource that uses the Kubernetes downward API to expose a pod as an environment variable: Example: Defining a BuildRun resource to expose a pod as an environment variable apiVersion: shipwright.io/v1beta1 kind: BuildRun metadata: name: buildah-buildrun spec: build: name: buildah-build env: - name: <pod_name> valueFrom: fieldRef: fieldPath: metadata.name The following example shows a BuildRun resource that uses the Kubernetes downward API to expose a container as an environment variable: Example: Defining a BuildRun resource to expose a container as an environment variable apiVersion: shipwright.io/v1beta1 kind: BuildRun metadata: name: buildah-buildrun spec: build: name: buildah-build env: - name: MEMORY_LIMIT valueFrom: resourceFieldRef: containerName: <my_container> resource: limits.memory 3.9. Build run status The BuildRun resource updates whenever the image building status changes, as shown in the following examples: Example: BuildRun with Unknown status USD oc get buildrun buildah-buildrun-mp99r NAME SUCCEEDED REASON STARTTIME COMPLETIONTIME buildah-buildrun-mp99r Unknown Unknown 1s Example: BuildRun with True status USD oc get buildrun buildah-buildrun-mp99r NAME SUCCEEDED REASON STARTTIME COMPLETIONTIME buildah-buildrun-mp99r True Succeeded 29m 20m A BuildRun resource stores the status-related information in the status.conditions field. For example, a condition with the type Succeeded indicates that resources have successfully completed their operation. The status.conditions field includes significant information like status, reason, and message for the BuildRun resource. 3.9.1. Build run statuses description A BuildRun custom resource (CR) can have different statuses during the image building process. The following table covers the different statuses of a build run: Table 3.2. Statuses of a build run Status Cause Description Unknown Pending The BuildRun resource waits for a pod in status Pending . Unknown Running The BuildRun resource has been validated and started to perform its work. Unknown BuildRunCanceled The user has requested to cancel the build run. This request triggers the build run controller to make a request for canceling the related task runs. Cancellation is still under process when this status is present. True Succeeded The pod for the BuildRun resource is created. False Failed The BuildRun resource is failed in one of the steps. False BuildRunTimeout The execution of the BuildRun resource is timed out. False UnknownStrategyKind The strategy type defined in the Kind field is unknown. You can define these strategy types: ClusterBuildStrategy and BuildStrategy . False ClusterBuildStrategyNotFound The referenced cluster-scoped strategy was not found in the cluster. 
False BuildStrategyNotFound The referenced namespace-scoped strategy was not found in the cluster. False SetOwnerReferenceFailed Setting the ownerReferences field from the BuildRun resource to the related TaskRun resource failed. False TaskRunIsMissing The TaskRun resource related to the BuildRun resource was not found. False TaskRunGenerationFailed The generation of a TaskRun specification has failed. False MissingParameterValues You have not provided any value for some parameters that are defined in the build strategy without any default. You must provide the values for those parameters in the Build or the BuildRun CR. False RestrictedParametersInUse A value for a system parameter was provided, which is not allowed. False UndefinedParameter A value for a parameter was provided that is not defined in the build strategy. False WrongParameterValueType A value was provided for a build strategy parameter with the wrong type. For example, if the parameter is defined as an array or a string in the build strategy, you must provide a set of values or a direct value accordingly. False InconsistentParameterValues A value for a parameter contained more than one of these values: value , configMapValue , and secretValue . You must provide only one of the mentioned values to maintain consistency. False EmptyArrayItemParameterValues An item inside the values of an array parameter contained none of these values: value , configMapValue , and secretValue . You must provide only one of the mentioned values as null array items are not allowed. False IncompleteConfigMapValueParameterValues A value for a parameter contained a configMapValue value where the name or the value field was empty. You must specify the empty field to point to an existing config map key in your namespace. False IncompleteSecretValueParameterValues A value for a parameter contained a secretValue value where the name or the value field was empty. You must specify the empty field to point to an existing secret key in your namespace. False ServiceAccountNotFound The referenced service account was not found in the cluster. False BuildRegistrationFailed The referenced build in the BuildRun resource is in a Failed state. False BuildNotFound The referenced build in the BuildRun resource was not found. False BuildRunCanceled The BuildRun and related TaskRun resources were canceled successfully. False BuildRunNameInvalid The defined build run name in the metadata.name field is invalid. You must provide a valid label value for the build run name in your BuildRun CR. False BuildRunNoRefOrSpec The BuildRun resource does not have either the spec.build.name or spec.build.spec field defined. False BuildRunAmbiguousBuild The defined BuildRun resource uses both the spec.build.name and spec.build.spec fields. Only one of the parameters is allowed at a time. False BuildRunBuildFieldOverrideForbidden The defined spec.build.name field uses an override in combination with the spec.build.spec field, which is not allowed. Use the spec.build.spec field to directly specify the respective value. False PodEvicted The build run pod was evicted from the node it was running on. 3.9.2. Failed build runs When a build run fails, you can check the status.failureDetails field in your BuildRun CR to identify the exact point where the failure happened in the pod or container. The status.failureDetails field includes an error message and a reason for the failure. You only see the message and reason for failure if they are defined in your build strategy. 
The following example shows a failed build run: # ... status: # ... failureDetails: location: container: step-source-default pod: baran-build-buildrun-gzmv5-b7wbf-pod-bbpqr message: The source repository does not exist, or you have insufficient permission to access it. reason: GitRemotePrivate Note The status.failureDetails field also provides error details for all operations related to Git. 3.9.3. Step results in build run status After a BuildRun resource completes its execution, the .status field contains the .status.taskResults result emitted from the steps generated by the build run controller. The result includes the image digest or the commit SHA of the source code that is used for building the image. In a BuildRun resource, the .status.sources field contains the result from the execution of source steps and the .status.output field contains the result from the execution of output steps. The following example shows a BuildRun resource with step results for a Git source: Example: A BuildRun resource with step results for a Git source # ... status: buildSpec: # ... output: digest: sha256:07626e3c7fdd28d5328a8d6df8d29cd3da760c7f5e2070b534f9b880ed093a53 size: 1989004 sources: - name: default git: commitAuthor: xxx xxxxxx commitSha: f25822b85021d02059c9ac8a211ef3804ea8fdde branchName: main The following example shows a BuildRun resource with step results for a local source code: Example: A BuildRun resource with step results for a local source code # ... status: buildSpec: # ... output: digest: sha256:07626e3c7fdd28d5328a8d6df8d29cd3da760c7f5e2070b534f9b880ed093a53 size: 1989004 sources: - name: default bundle: digest: sha256:0f5e2070b534f9b880ed093a537626e3c7fdd28d5328a8d6df8d29cd3da760c7 Note You get to see the digest and size of the output image only if it is defined in your build strategy. 3.9.4. Build snapshot For each build run reconciliation, the buildSpec field in the status of the BuildRun resource updates if an existing task run is part of that build run. During this update, a Build resource snapshot generates and embeds into the status.buildSpec field of the BuildRun resource. Due to this, the buildSpec field contains an exact copy of the original Build specification, which was used to execute a particular image build. By using the build snapshot, you can see the original Build resource configuration. 3.10. Relationship of build run with Tekton tasks The BuildRun resource delegates the task of image construction to the Tekton TaskRun resource, which runs all steps until either the completion of the task, or a failure occurs in the task. During the build run reconciliation, the build run controller generates a new TaskRun resource. The controller embeds the required steps for a build run execution in the TaskRun resource. The embedded steps are defined in your build strategy. 3.11. Build run cancellation You can cancel an active BuildRun instance by setting its state to BuildRunCanceled . When you cancel a BuildRun instance, the underlying TaskRun resource is also marked as canceled. The following example shows a canceled build run for a BuildRun resource: apiVersion: shipwright.io/v1beta1 kind: BuildRun metadata: name: buildah-buildrun spec: # [...] state: "BuildRunCanceled" 3.12. Automatic build run deletion To automatically delete a build run, you can add the following retention parameters in the build or buildrun specification: buildrun TTL parameters: Ensures that build runs only exist for a defined duration of time after completion. 
buildrun.spec.retention.ttlAfterFailed : The build run is deleted if the specified time has passed and the build run has failed. buildrun.spec.retention.ttlAfterSucceeded : The build run is deleted if the specified time has passed and the build run has succeeded. build TTL parameters: Ensures that build runs for a build only exist for a defined duration of time after completion. build.spec.retention.ttlAfterFailed : The build run is deleted if the specified time has passed and the build run has failed for the build. build.spec.retention.ttlAfterSucceeded : The build run is deleted if the specified time has passed and the build run has succeeded for the build. build limit parameters: Ensures that only a limited number of succeeded or failed build runs can exist for a build. build.spec.retention.succeededLimit : Defines the number of succeeded build runs that can exist for the build. build.spec.retention.failedLimit : Defines the number of failed build runs that can exist for the build. | [
"apiVersion: shipwright.io/v1beta1 kind: BuildRun metadata: name: buildah-buildrun spec: build: name: buildah-build",
"apiVersion: shipwright.io/v1beta1 kind: BuildRun metadata: name: standalone-buildrun spec: build: spec: source: git: url: https://github.com/shipwright-io/sample-go.git contextDir: source-build strategy: kind: ClusterBuildStrategy name: buildah output: image: <path_to_image>",
"apiVersion: shipwright.io/v1beta1 kind: Build metadata: name: <your_build> namespace: <your_namespace> spec: paramValues: - name: cache value: disabled strategy: name: <your_strategy> kind: ClusterBuildStrategy source: # output: #",
"apiVersion: shipwright.io/v1beta1 kind: BuildRun metadata: name: <your_buildrun> namespace: <your_namespace> spec: build: name: <your_build> paramValues: - name: cache value: registry",
"apiVersion: shipwright.io/v1beta1 kind: BuildRun metadata: name: buildah-buildrun spec: build: name: buildah-build serviceAccount: pipeline 1",
"apiVersion: shipwright.io/v1beta1 kind: BuildRun metadata: name: buidrun-retention-ttl spec: build: name: build-retention-ttl retention: ttlAfterFailed: 10m ttlAfterSucceeded: 10m",
"apiVersion: shipwright.io/v1beta1 kind: BuildRun metadata: name: <buildrun_name> spec: build: name: <build_name> volumes: - name: <volume_name> configMap: name: <configmap_name>",
"apiVersion: shipwright.io/v1beta1 kind: BuildRun metadata: name: buildah-buildrun spec: build: name: buildah-build env: - name: <example_var_1> value: \"<example_value_1>\" - name: <example_var_2> value: \"<example_value_2>\"",
"apiVersion: shipwright.io/v1beta1 kind: BuildRun metadata: name: buildah-buildrun spec: build: name: buildah-build env: - name: <pod_name> valueFrom: fieldRef: fieldPath: metadata.name",
"apiVersion: shipwright.io/v1beta1 kind: BuildRun metadata: name: buildah-buildrun spec: build: name: buildah-build env: - name: MEMORY_LIMIT valueFrom: resourceFieldRef: containerName: <my_container> resource: limits.memory",
"oc get buildrun buildah-buildrun-mp99r NAME SUCCEEDED REASON STARTTIME COMPLETIONTIME buildah-buildrun-mp99r Unknown Unknown 1s",
"oc get buildrun buildah-buildrun-mp99r NAME SUCCEEDED REASON STARTTIME COMPLETIONTIME buildah-buildrun-mp99r True Succeeded 29m 20m",
"status: # failureDetails: location: container: step-source-default pod: baran-build-buildrun-gzmv5-b7wbf-pod-bbpqr message: The source repository does not exist, or you have insufficient permission to access it. reason: GitRemotePrivate",
"status: buildSpec: # output: digest: sha256:07626e3c7fdd28d5328a8d6df8d29cd3da760c7f5e2070b534f9b880ed093a53 size: 1989004 sources: - name: default git: commitAuthor: xxx xxxxxx commitSha: f25822b85021d02059c9ac8a211ef3804ea8fdde branchName: main",
"status: buildSpec: # output: digest: sha256:07626e3c7fdd28d5328a8d6df8d29cd3da760c7f5e2070b534f9b880ed093a53 size: 1989004 sources: - name: default bundle: digest: sha256:0f5e2070b534f9b880ed093a537626e3c7fdd28d5328a8d6df8d29cd3da760c7",
"apiVersion: shipwright.io/v1beta1 kind: BuildRun metadata: name: buildah-buildrun spec: # [...] state: \"BuildRunCanceled\""
] | https://docs.redhat.com/en/documentation/builds_for_red_hat_openshift/1.3/html/configure/configuring-build-runs |
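As a practical complement to the build run status and cancellation sections above, the following sketch applies a BuildRun manifest with oc, inspects the Succeeded condition, and then cancels the run by patching its state to BuildRunCanceled. The file name buildah-buildrun.yaml is an assumption for illustration; the resource name, the status fields, and the BuildRunCanceled state value are the ones described above.
oc apply -f buildah-buildrun.yaml
# Follow the run until the Succeeded condition is set
oc get buildrun buildah-buildrun -w
# Read the reason recorded on the Succeeded condition (for example Succeeded, Failed, or BuildRunTimeout)
oc get buildrun buildah-buildrun -o jsonpath='{.status.conditions[?(@.type=="Succeeded")].reason}'
# Request cancellation of an active run
oc patch buildrun buildah-buildrun --type=merge -p '{"spec":{"state":"BuildRunCanceled"}}'
If the run is canceled, the Succeeded condition reports False with the reason BuildRunCanceled, and the underlying TaskRun resource is marked as canceled as described above.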
Chapter 259. Paho Component | Chapter 259. Paho Component Available as of Camel version 2.16 Paho component provides connector for the MQTT messaging protocol using the Eclipse Paho library. Paho is one of the most popular MQTT libraries, so if you would like to integrate it with your Java project - Camel Paho connector is a way to go. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-paho</artifactId> <version>x.y.z</version> <!-- use the same version as your Camel core version --> </dependency> Keep in mind that Paho artifacts are not hosted in the Maven Central, so you need to add Eclipse Paho repository to your POM xml file: <repositories> <repository> <id>eclipse-paho</id> <url>https://repo.eclipse.org/content/repositories/paho-releases</url> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> 259.1. URI format Where topic is the name of the topic. 259.2. Options The Paho component supports 4 options, which are listed below. Name Description Default Type brokerUrl (common) The URL of the MQTT broker. String clientId (common) MQTT client identifier. String connectOptions (advanced) Client connection options MqttConnectOptions resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The Paho endpoint is configured using URI syntax: with the following path and query parameters: 259.2.1. Path Parameters (1 parameters): Name Description Default Type topic Required Name of the topic String 259.2.2. Query Parameters (15 parameters): Name Description Default Type autoReconnect (common) Client will automatically attempt to reconnect to the server if the connection is lost true boolean brokerUrl (common) The URL of the MQTT broker. tcp://localhost:1883 String clientId (common) MQTT client identifier. String connectOptions (common) Client connection options MqttConnectOptions filePersistenceDirectory (common) Base directory used by the file persistence provider. String password (common) Password to be used for authentication against the MQTT broker String persistence (common) Client persistence to be used - memory or file. MEMORY PahoPersistence qos (common) Client quality of service level (0-2). 2 int resolveMqttConnectOptions (common) Define if you don't want to resolve the MQTT Connect Options from registry true boolean retained (common) Retain option false boolean userName (common) Username to be used for authentication against the MQTT broker String bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. 
ExchangePattern synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean 259.3. Spring Boot Auto-Configuration The component supports 5 options, which are listed below. Name Description Default Type camel.component.paho.broker-url The URL of the MQTT broker. String camel.component.paho.client-id MQTT client identifier. String camel.component.paho.connect-options Client connection options. The option is an org.eclipse.paho.client.mqttv3.MqttConnectOptions type. String camel.component.paho.enabled Enable paho component true Boolean camel.component.paho.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean 259.4. Headers The following headers are recognized by the Paho component: Header Java constant Endpoint type Value type Description CamelMqttTopic PahoConstants.MQTT_TOPIC Consumer String The name of the topic CamelPahoOverrideTopic PahoConstants.CAMEL_PAHO_OVERRIDE_TOPIC Producer String Name of topic to override and send to instead of topic specified on endpoint 259.5. Default payload type By default, the Camel Paho component operates on the binary payloads extracted out of (or put into) the MQTT message: // Receive payload byte[] payload = (byte[]) consumerTemplate.receiveBody("paho:topic"); // Send payload byte[] payload = "message".getBytes(); producerTemplate.sendBody("paho:topic", payload); The Camel built-in type conversion API can, of course, perform the automatic data type transformations for you. In the example below, Camel automatically converts the binary payload into a String (and conversely): // Receive payload String payload = consumerTemplate.receiveBody("paho:topic", String.class); // Send payload String payload = "message"; producerTemplate.sendBody("paho:topic", payload); 259.6. Samples For example, the following snippet reads messages from the MQTT broker installed on the same host as the Camel router: from("paho:some/queue") .to("mock:test"); The snippet below sends a message to the MQTT broker: from("direct:test") .to("paho:some/target/queue"); The following example shows how to read messages from a remote MQTT broker: And here we override the default topic, setting it dynamically from a header: from("direct:test") .setHeader(PahoConstants.CAMEL_PAHO_OVERRIDE_TOPIC, simple("${header.customerId}")) .to("paho:some/target/queue"); | [
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-paho</artifactId> <version>x.y.z</version> <!-- use the same version as your Camel core version --> </dependency>",
"<repositories> <repository> <id>eclipse-paho</id> <url>https://repo.eclipse.org/content/repositories/paho-releases</url> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories>",
"paho:topic[?options]",
"paho:topic",
"// Receive payload byte[] payload = (byte[]) consumerTemplate.receiveBody(\"paho:topic\"); // Send payload byte[] payload = \"message\".getBytes(); producerTemplate.sendBody(\"paho:topic\", payload);",
"// Receive payload String payload = consumerTemplate.receiveBody(\"paho:topic\", String.class); // Send payload String payload = \"message\"; producerTemplate.sendBody(\"paho:topic\", payload);",
"from(\"paho:some/queue\") .to(\"mock:test\");",
"from(\"direct:test\") .to(\"paho:some/target/queue\");",
"from(\"paho:some/queue?brokerUrl=tcp://iot.eclipse.org:1883\") .to(\"mock:test\");",
"from(\"direct:test\") .setHeader(PahoConstants.CAMEL_PAHO_OVERRIDE_TOPIC, simple(\"USD{header.customerId}\")) .to(\"paho:some/target/queue\");"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/paho-component |
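The connectOptions option described in the Paho chapter above is normally supplied as a bean reference, but the component can also be configured programmatically. The following is a minimal sketch, not taken from the guide, of registering a PahoComponent with custom MqttConnectOptions before the routes start; the broker URL, credentials, and topic name are placeholders.

import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.paho.PahoComponent;
import org.apache.camel.impl.DefaultCamelContext;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;

public class PahoConnectOptionsExample {
    public static void main(String[] args) throws Exception {
        // Options handed to the underlying Eclipse Paho client
        MqttConnectOptions options = new MqttConnectOptions();
        options.setUserName("camel");                        // placeholder credentials
        options.setPassword("secret".toCharArray());
        options.setCleanSession(true);

        // Component-level configuration, matching the brokerUrl and connectOptions options in the table above
        PahoComponent paho = new PahoComponent();
        paho.setBrokerUrl("tcp://broker.example.com:1883");  // placeholder broker
        paho.setConnectOptions(options);

        CamelContext context = new DefaultCamelContext();
        context.addComponent("paho", paho);
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // Endpoint-level options such as qos are still set on the URI
                from("paho:sensors/temperature?qos=1").to("log:mqtt");
            }
        });
        context.start();
        Thread.sleep(5000);
        context.stop();
    }
}

With the component registered this way, endpoint-level options from the table above, such as qos or retained, continue to be set per endpoint URI.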
Chapter 5. Improving search performance through resource limits | Chapter 5. Improving search performance through resource limits Searching through every entry in a database can have a negative impact on a server performance for larger directories. In large databases, effective indexing might not sufficiently reduce the search scope to improve the performance. You can set limits on user and client accounts to reduce the total number of entries or the total amount of time spent in an individual search. This makes searches more responsive and improves overall server performance. 5.1. Search operation limits for large directories You can control server limits for search operations by using special operational attribute values on the client application binding to the directory. You can set the following search operation limits: The Look through limit specifies how many entries you can examine for a search operation. The Size limit specifies maximum number of entries the server returns to a client application in response to the search operation. The Time limit specifies maximum time the server can spend processing a search operation. The Idle timeout limit specifies the time when connection to the server can be idle before the connection is dropped. The Range timeout limit specifies a separate look-through limit specifically for searches by using a range. In the global server configuration, the resource limits set for the client application take precedence over the default resource limits set. Note The Directory Manager receives unlimited resources by default, with the exception of range searches. 5.2. Search performance improvement with index scan limits For large indexes, it is efficient to treat any search which matches the index as an unindexed search. The search operation has to look in the entire directory to process results rather than searching through an index that is nearly the size of a directory in addition to the directory itself. Additional resources Setting an index scan limit 5.3. Fine grained ID list size In large databases, some queries can consume a large number of CPU and RAM resources. To improve the performance, you can set a default ID scan limit that applies to all indexes in the database by using the nsslapd-idlistscanlimit attribute. However, it is useful to either define a limit for certain indexes or use the list with no IDs defined. You can set individual settings for ID list scan limits for different types of search filters by using the nsIndexIDListScanLimit attribute. Additional resources Setting an index scan limit to improve performance when loading long list of ids 5.4. Setting user and global resource limits by using the command line You can set user-level resource limits, global resource limits, and limits for specific types of searches, such as simple paged and range searches , by using the command line. You can set user-level attributes on the individual entries and global configuration attributes are set in the appropriate server configuration area. You can set the following mentioned operational attributes for each entry by using the ldapmodify command: look-through You can specify the number of entries to examine for a search operation by using the look-through limit attribute. Setting the attribute's value to -1 indicates that there is no limit. 
User-level attribute: nsLookThroughLimit Global configuration: Attribute: nsslapd-lookthroughlimit Entry: cn=config,cn=ldbm database,cn=plugins,cn=config paged look-through You can specify the number of entries to examine for simple paged search operations by using the paged look-through limit attribute. Setting the attribute's value to -1 indicates that there is no limit. User-level attribute: nsPagedLookThroughLimit Global configuration: Attribute: nsslapd-pagedlookthroughlimit Entry: cn=config,cn=ldbm database,cn=plugins,cn=config size You can specify the maximum number of entries the server returns to a client application in response to a search operation by using the size limit attribute. Setting the attribute's value to -1 indicates that there is no limit. User-level attribute: nsSizeLimit Global configuration: Attribute: nsslapd-sizelimit Entry: cn=config You can add the nsSizeLimit attribute to the user's entry and for example give it a search return size limit of 500 entries: paged size You can specify the maximum number of entries the server returns to a client application for simple paged search operations by using the paged size limit attribute. Setting the attribute's value to -1 indicates that there is no limit. User-level attribute: nsPagedSizeLimit Global configuration: Attribute: nsslapd-pagedsizelimit Entry: cn=config time You can specify the maximum time the server can spend processing a search operation by using the time limit attribute. Setting the attribute's value to -1 indicates that there is no time limit. User-level attribute: nsTimeLimit Global configuration: Attribute: nsslapd-timelimit Entry: cn=config idle timeout You can specify the time in seconds for which a connection to the server can be idle before the connection is dropped by using the idle timeout attribute. Setting the attribute's value to -1 indicates that there is no limit. User-level attribute: nsidletimeout Global configuration: Attribute: nsslapd-idletimeout Entry: cn=config ID list scan You can specify the maximum number of entry IDs loaded from an index file for search results. If the ID list size is greater than the maximum number of IDs, the search will not use the index list, but will treat the search as an unindexed search and look through the entire database. User-level attribute: nsIDListScanLimit Global configuration: Attribute: nsslapd-idlistscanlimit Entry: cn=config,cn=ldbm database,cn=plugins,cn=config paged ID list scan You can specify the maximum number of entry IDs loaded from an index file for search results particularly for paged search operations by using the paged ID list scan limit. User-level attribute: nsPagedIDListScanLimit Global configuration: Attribute: nsslapd-pagedidlistscanlimit Entry: cn=config,cn=ldbm database,cn=plugins,cn=config range look-through You can specify the numbers of entries to examine for a range search operation by using the range look-through limit. Setting the attribute's value to -1 indicates that there is no limit. Note A range search is a search by using the greater-than , equal-to-or-greater-than , less-than , or equal-to-less-than operators. User-level attribute: not available Global configuration: Attribute: nsslapd-rangelookthroughlimit Entry: cn=config,cn=ldbm database,cn=plugins,cn=config Note You can set an access control list to prevent users from changing the setting. Additional resources Managing access control 5.5. 
Setting resource limits on anonymous binds You can configure resource limits for anonymous binds by creating a template user entry that has resource limits, and then applying this template to anonymous binds, because resource limits are set on a user entry and anonymous bind does not have a user entry associated with it. Prerequisites A template entry has been created. Procedure Set resource limits you want to apply to anonymous binds: Note For performance reasons, the template must be in the normal back end, not in the cn=config suffix that does not use an entry cache. Add the nsslapd-anonlimitsdn parameter to the server configuration, pointing to the DN of the template entry on all suppliers in a replication topology: 5.6. Performance improvement for range searches A range search (all IDs search) uses operators to set a bracket to search and return an entire subset of the entries within a directory. The range search can evaluate every entry in the directory to check if the entry is within the provided range. For example, to search for every entry modified at or after midnight on January 1, run the following command: To prevent a range search from turning into an all IDs search, you can use the look-through limit. By using this limit, you can improve overall performance and speed up range search results. However, some clients or administrative users, such as Directory Manager, cannot have the look-through limit set. In this case, the range search can take several minutes to complete or can even continue indefinitely. However, you can set a separate range look-through limit. By setting this limit, clients and administrative users can have high look-through limits and can still be able to set a reasonable limit on potentially performance-impaired range searches. You can configure such setting by using the nsslapd-rangelookthroughlimit attribute. The default value is 5000. To set the separate range look-through limit to 7500, run the following command: | [
"dsconf instance backend config set --lookthroughlimit value",
"dsconf instance backend config set --pagedlookthroughlimit value",
"dsconf instance config replace nsslapd-sizelimit value",
"ldapmodify -D \"cn=Directory Manager\" -W -p 389 -h server.example.com -x dn: uid=user_name,ou=People,dc=example,dc=com changetype: modify add: nsSizeLimit nsSizeLimit: 500",
"dsconf instance config replace nsslapd-pagedsizelimit value",
"dsconf instance config replace nsslapd-timelimit value",
"dsconf instance config replace nsslapd-idletimeout value",
"dsconf instance backend config set --idlistscanlimit value",
"dsconf instance backend config set --pagedidlistscanlimit value",
"dsconf instance backend config set ----rangelookthroughlimit value",
"ldapadd -D \"cn=Directory Manager\" -W -p 389 -h server.example.com -x dn: cn=anonymous_template,ou=people,dc=example,dc=com objectclass: nsContainer objectclass: top cn: anonymous_template nsSizeLimit: 250 nsLookThroughLimit: 1000 nsTimeLimit: 60",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com config replace nsslapd-anonlimitsdn=\"cn=anonymous_template,ou=people,dc=example,dc=com\"",
"(modifyTimestamp>=20210101010101Z)",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com backend config set --rangelookthroughlimit 7500"
] | https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/searching_entries_and_tuning_searches/improving-search-performance-through-resource-limits_searching-entries-and-tuning-searches |
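The ldapmodify example in the chapter above can also be expressed with the standard Java JNDI API, which may be convenient when per-user limits are managed from provisioning code. The sketch below is illustrative only: the server host, bind password, target DN, and the chosen limit values are assumptions, and the attribute names are the user-level attributes listed in the chapter.

import javax.naming.Context;
import javax.naming.directory.BasicAttribute;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;
import javax.naming.directory.ModificationItem;
import java.util.Hashtable;

public class SetSearchLimits {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://server.example.com:389"); // placeholder host
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, "cn=Directory Manager");
        env.put(Context.SECURITY_CREDENTIALS, "password");              // placeholder password

        DirContext ctx = new InitialDirContext(env);
        try {
            // Same effect as the ldapmodify example: set per-user search limits on one entry
            ModificationItem[] mods = new ModificationItem[] {
                new ModificationItem(DirContext.REPLACE_ATTRIBUTE,
                        new BasicAttribute("nsLookThroughLimit", "5000")),
                new ModificationItem(DirContext.REPLACE_ATTRIBUTE,
                        new BasicAttribute("nsSizeLimit", "500"))
            };
            ctx.modifyAttributes("uid=user_name,ou=People,dc=example,dc=com", mods);
        } finally {
            ctx.close();
        }
    }
}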
Chapter 65. Bean Validation | Chapter 65. Bean Validation Abstract Bean validation is a Java standard that enables you to define runtime constraints by adding Java annotations to service classes or interfaces. Apache CXF uses interceptors to integrate this feature with Web service method invocations. 65.1. Introduction Overview Bean Validation 1.1 ( JSR-349 )-which is an evolution of the original Bean Validation 1.0 (JSR-303) standard-enables you to declare constraints that can be checked at run time, using Java annotations. You can use annotations to define constraints on the following parts of the Java code: Fields in a bean class. Method and constructor parameters. Method return values. Example of annotated class The following example shows a Java class annotated with some standard bean validation constraints: Bean validation or schema validation? In some respects, bean validation and schema validation are quite similar. Configuring an endpoint with an XML schema is a well established way to validate messages at run time on a Web services endpoint. An XML schema can check many of the same constraints as bean validation on incoming and outgoing messages. Nevertheless, bean validation can sometimes be a useful alternative for one or more of the following reasons: Bean validation enables you to define constraints independently of the XML schema (which is useful, for example, in the case of code-first service development). If your current XML schema is too lax, you can use bean validation to define stricter constraints. Bean validation lets you define custom constraints, which might be impossible to define using XML schema language. Dependencies The Bean Validation 1.1 (JSR-349) standard defines just the API, not the implementation. Dependencies must therefore be provided in two parts: Core dependencies -provide the bean validation 1.1 API, Java unified expression language API and implementation. Hibernate Validator dependencies -provides the implementation of bean validation 1.1. Core dependencies To use bean validation, you must add the following core dependencies to your project's Maven pom.xml file: Note The javax.el/javax.el-api and org.glassfish/javax.el dependencies provide the API and implementation of Java's unified expression language. This expression language is used internally by bean validation, but is not important at the application programming level. Hibernate Validator dependencies To use the Hibernate Validator implementation of bean validation, you must add the following additional dependencies to your project's Maven pom.xml file: Resolving the validation provider in an OSGi environment The default mechanism for resolving a validation provider involves scanning the classpath to find the provider resource. In the case of an OSGi (Apache Karaf) environment, however, this mechanism does not work, because the validation provider (for example, the Hibernate validator) is packaged in a separate bundle and is thus not automatically available in your application classpath. In the context of OSGi, the Hibernate validator needs to be wired to your application bundle, and OSGi needs a bit of help to do this successfully. Configuring the validation provider explicitly in OSGi In the context of OSGi, you need to configure the validation provider explicitly, instead of relying on automatic discovery. 
For example, if you are using the common validation feature (see the section called "Bean validation feature" ) to enable bean validation, you must configure it with a validation provider, as follows: Where the HibernateValidationProviderResolver is a custom class that wraps the Hibernate validation provider. Example HibernateValidationProviderResolver class The following code example shows how to define a custom HibernateValidationProviderResolver , which resolves the Hibernate validator: When you build the preceding class in a Maven build system, which is configured to use the Maven bundle plug-in, your application will be wired to the Hibernate validator bundle at deploy time (assuming you have already deployed the Hibernate validator bundle to the OSGi container). 65.2. Developing Services with Bean Validation 65.2.1. Annotating a Service Bean Overview The first step in developing a service with bean validation is to apply the relevant validation annotations to the Java classes or interfaces that represent your services. The validation annotations enable you to apply constraints to method parameters, return values, and class fields, which are then checked at run time, every time the service is invoked. Validating simple input parameters To validate the parameters of a service method-where the parameters are simple Java types-you can apply any of the constraint annotations from the bean validation API ( javax.validation.constraints package). For example, the following code example tests both parameters for nullness ( @NotNull annotation), whether the id string matches the \\d+ regular expression ( @Pattern annotation), and whether the length of the name string lies in the range 1 to 50: Validating complex input parameters To validate complex input parameters (object instances), apply the @Valid annotation to the parameter, as shown in the following example: The @Valid annotation does not specify any constraints by itself. When you annotate the Book parameter with @Valid , you are effectively telling the validation engine to look inside the definition of the Book class (recursively) to look for validation constraints. In this example, the Book class is defined with validation constraints on its id and name fields, as follows: Validating return values (non-Response) To apply validation to regular method return values (non-Response), add the annotations in front of the method signature. For example, to test the return value for nullness ( @NotNull annotation) and to test validation constraints recursively ( @Valid annotation), annotate the getBook method as follows: Validating return values (Response) To apply validation to a method that returns a javax.ws.rs.core.Response object, you can use the same annotations as in the non-Response case. For example: 65.2.2. Standard Annotations Bean validation constraints Table 65.1, "Standard Annotations for Bean Validation" shows the standard annotations defined in the Bean Validation specification, which can be used to define constraints on fields and on method return values and parameters (none of the standard annotations can be applied at the class level). Table 65.1. Standard Annotations for Bean Validation Annotation Applicable to Description @AssertFalse Boolean , boolean Checks that the annotated element is false . @AssertTrue Boolean , boolean Checks that the annotated element is true . 
@DecimalMax(value=, inclusive=) BigDecimal , BigInteger , CharSequence , byte , short , int , long and primitive type wrappers When inclusive=false , checks that the annotated value is less than the specified maximum. Otherwise, checks that the value is less than or equal to the specified maximum. The value parameter specifies the maximum in BigDecimal string format. @DecimalMin(value=, inclusive=) BigDecimal , BigInteger , CharSequence , byte , short , int , long and primitive type wrappers When inclusive=false , checks that the annotated value is greater than the specified minimum. Otherwise, checks that the value is greater than or equal to the specified minimum. The value parameter specifies the minimum in BigDecimal string format. @Digits(integer=, fraction=) BigDecimal , BigInteger , CharSequence , byte , short , int , long and primitive type wrappers Checks whether the annotated value is a number having up to integer digits and fraction fractional digits. @Future java.util.Date , java.util.Calendar Checks whether the annotated date is in the future. @Max(value=) BigDecimal , BigInteger , CharSequence , byte , short , int , long and primitive type wrappers Checks whether the annotated value is less than or equal to the specified maximum. @Min(value=) BigDecimal , BigInteger , CharSequence , byte , short , int , long and primitive type wrappers Checks whether the annotated value is greater than or equal to the specified minimum. @NotNull Any type Checks that the annotated value is not null . @Null Any type Checks that the annotated value is null . @Past java.util.Date , java.util.Calendar Checks whether the annotated date is in the past. @Pattern(regex=, flag=) CharSequence Checks whether the annotated string matches the regular expression regex considering the given flag match. @Size(min=, max=) CharSequence , Collection , Map and arrays Checks whether the size of the annotated collection, map, or array lies between min and max (inclusive). @Valid Any non-primitive type Performs validation recursively on the annotated object. If the object is a collection or an array, the elements are validated recursively. If the object is a map, the value elements are validated recursively. 65.2.3. Custom Annotations Defining custom constraints in Hibernate It is possible to define your own custom constraints annotations with the bean validation API. For details of how to do this in the Hibernate validator implementation, see the Creating custom constraints chapter of the Hibernate Validator Reference Guide . 65.3. Configuring Bean Validation 65.3.1. JAX-WS Configuration Overview This section describes how to enable bean validation on a JAX-WS service endpoint, which is defined either in Blueprint XML or in Spring XML. The interceptors used to perform bean validation are common to both JAX-WS endpoints and JAX-RS 1.1 endpoints (JAX-RS 2.0 endpoints use different interceptor classes, however). Namespaces In the XML examples shown in this section, you must remember to map the jaxws namespace prefix to the appropriate namespace, either for Blueprint or Spring, as shown in the following table: XML Language Namespace Blueprint http://cxf.apache.org/blueprint/jaxws Spring http://cxf.apache.org/jaxws Bean validation feature The simplest way to enable bean validation on a JAX-WS endpoint is to add the bean validation feature to the endpoint. 
The bean validation feature is implemented by the following class: org.apache.cxf.validation.BeanValidationFeature By adding an instance of this feature class to the JAX-WS endpoint (either through the Java API or through the jaxws:features child element of jaxws:endpoint in XML), you can enable bean validation on the endpoint. This feature installs two interceptors: an In interceptor that validates incoming message data; and an Out interceptor that validates return values (where the interceptors are created with default configuration parameters). Sample JAX-WS configuration with bean validation feature The following XML example shows how to enable bean validation functionality in a JAX-WS endpoint, by adding the commonValidationFeature bean to the endpoint as a JAX-WS feature: For a sample implementation of the HibernateValidationProviderResolver class, see the section called "Example HibernateValidationProviderResolver class" . It is only necessary to configure the beanValidationProvider in the context of an OSGi environment (Apache Karaf). Note Remember to map the jaxws prefix to the appropriate XML namespace for either Blueprint or Spring, depending on the context. Common bean validation 1.1 interceptors If you want to have more fine-grained control over the configuration of the bean validation, you can install the interceptors individually, instead of using the bean validation feature. In place of the bean validation feature, you can configure one or both of the following interceptors: org.apache.cxf.validation.BeanValidationInInterceptor When installed in a JAX-WS (or JAX-RS 1.1) endpoint, validates resource method parameters against validation constraints. If validation fails, raises the javax.validation.ConstraintViolationException exception. To install this interceptor, add it to the endpoint through the jaxws:inInterceptors child element in XML (or the jaxrs:inInterceptors child element in XML). org.apache.cxf.validation.BeanValidationOutInterceptor When installed in a JAX-WS (or JAX-RS 1.1) endpoint, validates response values against validation constraints. If validation fails, raises the javax.validation.ConstraintViolationException exception. To install this interceptor, add it to the endpoint through the jaxws:outInterceptors child element in XML (or the jaxrs:outInterceptors child element in XML). Sample JAX-WS configuration with bean validation interceptors The following XML example shows how to enable bean validation functionality in a JAX-WS endpoint, by explicitly adding the relevant In interceptor bean and Out interceptor bean to the endpoint: For a sample implementation of the HibernateValidationProviderResolver class, see the section called "Example HibernateValidationProviderResolver class" . It is only necessary to configure the beanValidationProvider in the context of an OSGi environment (Apache Karaf). Configuring a BeanValidationProvider The org.apache.cxf.validation.BeanValidationProvider is a simple wrapper class that wraps the bean validation implementation ( validation provider ). By overriding the default BeanValidationProvider class, you can customize the implementation of bean validation. The BeanValidationProvider bean enables you to override one or more of the following provider classes: javax.validation.ParameterNameProvider Provides names for method and constructor parameters. Note that this class is needed, because the Java reflection API does not give you access to the names of method parameters or constructor parameters. 
javax.validation.spi.ValidationProvider<T> Provides an implementation of bean validation for the specified type, T. By implementing your own ValidationProvider class, you can define custom validation rules for your own classes. This mechanism effectively enables you to extend the bean validation framework. javax.validation.ValidationProviderResolver Implements a mechanism for discovering ValidationProvider classes and returns a list of the discovered classes. The default resolver looks for a META-INF/services/javax.validation.spi.ValidationProvider file on the classpath, which should contain a list of ValidationProvider classes. javax.validation.ValidatorFactory A factory that returns javax.validation.Validator instances. org.apache.cxf.validation.ValidationConfiguration A CXF wrapper class that enables you to override more classes from the validation provider layer. To customize the BeanValidationProvider, pass a custom BeanValidationProvider instance to the constructor of the validation In interceptor and to the constructor of the validation Out interceptor. For example: 65.3.2. JAX-RS Configuration Overview This section describes how to enable bean validation on a JAX-RS service endpoint, which is defined either in Blueprint XML or in Spring XML. The interceptors used to perform bean validation are common to both JAX-WS endpoints and JAX-RS 1.1 endpoints (JAX-RS 2.0 endpoints use different interceptor classes, however). Namespaces In the XML examples shown in this section, you must remember to map the jaxrs namespace prefix to the appropriate namespace, either for Blueprint or Spring, as shown in the following table: XML Language Namespace Blueprint http://cxf.apache.org/blueprint/jaxrs Spring http://cxf.apache.org/jaxrs Bean validation feature The simplest way to enable bean validation on a JAX-RS endpoint is to add the bean validation feature to the endpoint. The bean validation feature is implemented by the following class: org.apache.cxf.validation.BeanValidationFeature By adding an instance of this feature class to the JAX-RS endpoint (either through the Java API or through the jaxrs:features child element of jaxrs:server in XML), you can enable bean validation on the endpoint. This feature installs two interceptors: an In interceptor that validates incoming message data; and an Out interceptor that validates return values (where the interceptors are created with default configuration parameters). Validation exception mapper A JAX-RS endpoint also requires you to configure a validation exception mapper, which is responsible for mapping validation exceptions to HTTP error responses. The following class implements validation exception mapping for JAX-RS: org.apache.cxf.jaxrs.validation.ValidationExceptionMapper Implements validation exception mapping in accordance with the JAX-RS 2.0 specification: any input parameter validation violations are mapped to HTTP status code 400 Bad Request; and any return value validation violation (or internal validation violation) is mapped to HTTP status code 500 Internal Server Error. Sample JAX-RS configuration The following XML example shows how to enable bean validation functionality in a JAX-RS endpoint, by adding the commonValidationFeature bean as a JAX-RS feature and by adding the exceptionMapper bean as a JAX-RS provider: For a sample implementation of the HibernateValidationProviderResolver class, see the section called "Example HibernateValidationProviderResolver class".
It is only necessary to configure the beanValidationProvider in the context of an OSGi environment (Apache Karaf). Note Remember to map the jaxrs prefix to the appropriate XML namespace for either Blueprint or Spring, depending on the context. Common bean validation 1.1 interceptors Instead of using the bean validation feature, you can optionally install bean validation interceptors to get more fine-grained control over the validation implementation. JAX-RS uses the same interceptors as JAX-WS for this purpose-see the section called "Common bean validation 1.1 interceptors" Sample JAX-RS configuration with bean validation interceptors The following XML example shows how to enable bean validation functionality in a JAX-RS endpoint, by explicitly adding the relevant In interceptor bean and Out interceptor bean to the server endpoint: For a sample implementation of the HibernateValidationProviderResolver class, see the section called "Example HibernateValidationProviderResolver class" . It is only necessary to configure the beanValidationProvider in the context of an OSGi environment (Apache Karaf). Configuring a BeanValidationProvider You can inject a custom BeanValidationProvider instance into the validation interceptors, as described in the section called "Configuring a BeanValidationProvider" . 65.3.3. JAX-RS 2.0 Configuration Overview Unlike JAX-RS 1.1 (which shares common validation interceptors with JAX-WS), the JAX-RS 2.0 configuration relies on dedicated validation interceptor classes that are specific to JAX-RS 2.0. Bean validation feature For JAX-RS 2.0, there is a dedicated bean validation feature, which is implemented by the following class: org.apache.cxf.validation.JAXRSBeanValidationFeature By adding an instance of this feature class to the JAX-RS endpoint (either through the Java API or through the jaxrs:features child element of jaxrs:server in XML), you can enable bean validation on a JAX-RS 2.0 server endpoint. This feature installs two interceptors: an In interceptor that validates incoming message data; and an Out interceptor that validates return values (where the interceptors are created with default configuration parameters). Validation exception mapper JAX-RS 2.0 uses the same validation exception mapper class as JAX-RS 1.x: org.apache.cxf.jaxrs.validation.ValidationExceptionMapper Implements validation exception mapping in accordance with the JAX-RS 2.0 specification: any input parameter validation violations are mapped to HTTP status code 400 Bad Request ; and any return value validation violation (or internal validation violation) is mapped to HTTP status code 500 Internal Server Error . Bean validation invoker If you configure the JAX-RS service with a non-default lifecycle policy (for example, using Spring lifecycle management), you should also register a org.apache.cxf.jaxrs.validation.JAXRSBeanValidationInvoker instance-using the jaxrs:invoker element in the endpoint configuration-with the service endpoint, to ensure that bean validation is invoked correctly. For more details about JAX-RS service lifecycle management, see the section called "Lifecycle management in Spring XML" . 
Sample JAX-RS 2.0 configuration with bean validation feature The following XML example shows how to enable bean validation functionality in a JAX-RS 2.0 endpoint, by adding the jaxrsValidationFeature bean as a JAX-RS feature and by adding the exceptionMapper bean as a JAX-RS provider: For a sample implementation of the HibernateValidationProviderResolver class, see the section called "Example HibernateValidationProviderResolver class" . It is only necessary to configure the beanValidationProvider in the context of an OSGi environment (Apache Karaf). Note Remember to map the jaxrs prefix to the appropriate XML namespace for either Blueprint or Spring, depending on the context. Common bean validation 1.1 interceptors If you want to have more fine-grained control over the configuration of the bean validation, you can install the JAX-RS interceptors individually, instead of using the bean validation feature. Configure one or both of the following JAX-RS interceptors: org.apache.cxf.validation.JAXRSBeanValidationInInterceptor When installed in a JAX-RS 2.0 server endpoint, validates resource method parameters against validation constraints. If validation fails, raises the javax.validation.ConstraintViolationException exception. To install this interceptor, add it to the endpoint through the jaxrs:inInterceptors child element in XML. org.apache.cxf.validation.JAXRSBeanValidationOutInterceptor When installed in a JAX-RS 2.0 endpoint, validates response values against validation constraints. If validation fails, raises the javax.validation.ConstraintViolationException exception. To install this interceptor, add it to the endpoint through the jaxrs:inInterceptors child element in XML. Sample JAX-RS 2.0 configuration with bean validation interceptors The following XML example shows how to enable bean validation functionality in a JAX-RS 2.0 endpoint, by explicitly adding the relevant In interceptor bean and Out interceptor bean to the server endpoint: For a sample implementation of the HibernateValidationProviderResolver class, see the section called "Example HibernateValidationProviderResolver class" . It is only necessary to configure the beanValidationProvider in the context of an OSGi environment (Apache Karaf). Configuring a BeanValidationProvider You can inject a custom BeanValidationProvider instance into the validation interceptors, as described in the section called "Configuring a BeanValidationProvider" . Configuring a JAXRSParameterNameProvider The org.apache.cxf.jaxrs.validation.JAXRSParameterNameProvider class is an implementation of the javax.validation.ParameterNameProvider interface, which can be used to provide the names for method and constructor parameters in the context of JAX-RS 2.0 endpoints. | [
"// Java import javax.validation.constraints.NotNull; import javax.validation.constraints.Max; import javax.validation.Valid; public class Person { @NotNull private String firstName; @NotNull private String lastName; @Valid @NotNull private Person boss; public @NotNull String saveItem( @Valid @NotNull Person person, @Max( 23 ) BigDecimal age ) { // } }",
"<dependency> <groupId>javax.validation</groupId> <artifactId>validation-api</artifactId> <version>1.1.0.Final</version> </dependency> <dependency> <groupId>javax.el</groupId> <artifactId>javax.el-api</artifactId> <!-- use 3.0-b02 version for Java 6 --> <version>3.0.0</version> </dependency> <dependency> <groupId>org.glassfish</groupId> <artifactId>javax.el</artifactId> <!-- use 3.0-b01 version for Java 6 --> <version>3.0.0</version> </dependency>",
"<dependency> <groupId>org.hibernate</groupId> <artifactId>hibernate-validator</artifactId> <version>5.0.3.Final</version> </dependency>",
"<bean id=\"commonValidationFeature\" class=\"org.apache.cxf.validation.BeanValidationFeature\"> <property name=\"provider\" ref=\"beanValidationProvider\"/> </bean> <bean id=\"beanValidationProvider\" class=\"org.apache.cxf.validation.BeanValidationProvider\"> <constructor-arg ref=\"validationProviderResolver\"/> </bean> <bean id=\"validationProviderResolver\" class=\"org.example.HibernateValidationProviderResolver\"/>",
"// Java package org.example; import static java.util.Collections.singletonList; import org.hibernate.validator.HibernateValidator; import javax.validation.ValidationProviderResolver; import java.util.List; /** * OSGi-friendly implementation of {@code javax.validation.ValidationProviderResolver} returning * {@code org.hibernate.validator.HibernateValidator} instance. * */ public class HibernateValidationProviderResolver implements ValidationProviderResolver { @Override public List getValidationProviders() { return singletonList(new HibernateValidator()); } }",
"import javax.validation.constraints.NotNull; import javax.validation.constraints.Pattern; import javax.validation.constraints.Size; @POST @Path(\"/books\") public Response addBook( @NotNull @Pattern(regexp = \"\\\\d+\") @FormParam(\"id\") String id, @NotNull @Size(min = 1, max = 50) @FormParam(\"name\") String name) { // do some work return Response.created().build(); }",
"import javax.validation.Valid; @POST @Path(\"/books\") public Response addBook( @Valid Book book ) { // do some work return Response.created().build(); }",
"import javax.validation.constraints.NotNull; import javax.validation.constraints.Pattern; import javax.validation.constraints.Size; public class Book { @NotNull @Pattern(regexp = \"\\\\d+\") private String id; @NotNull @Size(min = 1, max = 50) private String name; // }",
"import javax.validation.constraints.NotNull; import javax.validation.Valid; @GET @Path(\"/books/{bookId}\") @Override @NotNull @Valid public Book getBook(@PathParam(\"bookId\") String id) { return new Book( id ); }",
"import javax.validation.constraints.NotNull; import javax.validation.Valid; import javax.ws.rs.core.Response; @GET @Path(\"/books/{bookId}\") @Valid @NotNull public Response getBookResponse(@PathParam(\"bookId\") String id) { return Response.ok( new Book( id ) ).build(); }",
"<jaxws:endpoint xmlns:s=\"http://bookworld.com\" serviceName=\"s:BookWorld\" endpointName=\"s:BookWorldPort\" implementor=\"#bookWorldValidation\" address=\"/bwsoap\"> <jaxws:features> <ref bean=\"commonValidationFeature\" /> </jaxws:features> </jaxws:endpoint> <bean id=\"bookWorldValidation\" class=\"org.apache.cxf.systest.jaxrs.validation.spring.BookWorldImpl\"/> <bean id=\"commonValidationFeature\" class=\"org.apache.cxf.validation.BeanValidationFeature\"> <property name=\"provider\" ref=\"beanValidationProvider\"/> </bean> <bean id=\"beanValidationProvider\" class=\"org.apache.cxf.validation.BeanValidationProvider\"> <constructor-arg ref=\"validationProviderResolver\"/> </bean> <bean id=\"validationProviderResolver\" class=\"org.example.HibernateValidationProviderResolver\"/>",
"<jaxws:endpoint xmlns:s=\"http://bookworld.com\" serviceName=\"s:BookWorld\" endpointName=\"s:BookWorldPort\" implementor=\"#bookWorldValidation\" address=\"/bwsoap\"> <jaxws:inInterceptors> <ref bean=\"validationInInterceptor\" /> </jaxws:inInterceptors> <jaxws:outInterceptors> <ref bean=\"validationOutInterceptor\" /> </jaxws:outInterceptors> </jaxws:endpoint> <bean id=\"bookWorldValidation\" class=\"org.apache.cxf.systest.jaxrs.validation.spring.BookWorldImpl\"/> <bean id=\"validationInInterceptor\" class=\"org.apache.cxf.validation.BeanValidationInInterceptor\"> <property name=\"provider\" ref=\"beanValidationProvider\"/> </bean> <bean id=\"validationOutInterceptor\" class=\"org.apache.cxf.validation.BeanValidationOutInterceptor\"> <property name=\"provider\" ref=\"beanValidationProvider\"/> </bean> <bean id=\"beanValidationProvider\" class=\"org.apache.cxf.validation.BeanValidationProvider\"> <constructor-arg ref=\"validationProviderResolver\"/> </bean> <bean id=\"validationProviderResolver\" class=\"org.example.HibernateValidationProviderResolver\"/>",
"<bean id=\"validationProvider\" class=\"org.apache.cxf.validation.BeanValidationProvider\" /> <bean id=\"validationInInterceptor\" class=\"org.apache.cxf.validation.BeanValidationInInterceptor\"> <property name=\"provider\" ref=\"validationProvider\" /> </bean> <bean id=\"validationOutInterceptor\" class=\"org.apache.cxf.validation.BeanValidationOutInterceptor\"> <property name=\"provider\" ref=\"validationProvider\" /> </bean>",
"<jaxrs:server address=\"/bwrest\"> <jaxrs:serviceBeans> <ref bean=\"bookWorldValidation\"/> </jaxrs:serviceBeans> <jaxrs:providers> <ref bean=\"exceptionMapper\"/> </jaxrs:providers> <jaxrs:features> <ref bean=\"commonValidationFeature\" /> </jaxrs:features> </jaxrs:server> <bean id=\"bookWorldValidation\" class=\"org.apache.cxf.systest.jaxrs.validation.spring.BookWorldImpl\"/> <beanid=\"exceptionMapper\"class=\"org.apache.cxf.jaxrs.validation.ValidationExceptionMapper\"/> <bean id=\"commonValidationFeature\" class=\"org.apache.cxf.validation.BeanValidationFeature\"> <property name=\"provider\" ref=\"beanValidationProvider\"/> </bean> <bean id=\"beanValidationProvider\" class=\"org.apache.cxf.validation.BeanValidationProvider\"> <constructor-arg ref=\"validationProviderResolver\"/> </bean> <bean id=\"validationProviderResolver\" class=\"org.example.HibernateValidationProviderResolver\"/>",
"<jaxrs:server address=\"/\"> <jaxrs:inInterceptors> <ref bean=\"validationInInterceptor\" /> </jaxrs:inInterceptors> <jaxrs:outInterceptors> <ref bean=\"validationOutInterceptor\" /> </jaxrs:outInterceptors> <jaxrs:serviceBeans> </jaxrs:serviceBeans> <jaxrs:providers> <ref bean=\"exceptionMapper\"/> </jaxrs:providers> </jaxrs:server> <bean id=\"exceptionMapper\" class=\"org.apache.cxf.jaxrs.validation.ValidationExceptionMapper\"/> <bean id=\"validationInInterceptor\" class=\"org.apache.cxf.validation.BeanValidationInInterceptor\"> <property name=\"provider\" ref=\"beanValidationProvider\" /> </bean> <bean id=\"validationOutInterceptor\" class=\"org.apache.cxf.validation.BeanValidationOutInterceptor\"> <property name=\"provider\" ref=\"beanValidationProvider\" /> </bean> <bean id=\"beanValidationProvider\" class=\"org.apache.cxf.validation.BeanValidationProvider\"> <constructor-arg ref=\"validationProviderResolver\"/> </bean> <bean id=\"validationProviderResolver\" class=\"org.example.HibernateValidationProviderResolver\"/>",
"<jaxrs:server address=\"/\"> <jaxrs:serviceBeans> </jaxrs:serviceBeans> <jaxrs:providers> <ref bean=\"exceptionMapper\"/> </jaxrs:providers> <jaxrs:features> <ref bean=\"jaxrsValidationFeature\" /> </jaxrs:features> </jaxrs:server> <bean id=\"exceptionMapper\" class=\"org.apache.cxf.jaxrs.validation.ValidationExceptionMapper\"/> <bean id=\"jaxrsValidationFeature\" class=\"org.apache.cxf.validation.JAXRSBeanValidationFeature\"> <property name=\"provider\" ref=\"beanValidationProvider\"/> </bean> <bean id=\"beanValidationProvider\" class=\"org.apache.cxf.validation.BeanValidationProvider\"> <constructor-arg ref=\"validationProviderResolver\"/> </bean> <bean id=\"validationProviderResolver\" class=\"org.example.HibernateValidationProviderResolver\"/>",
"<jaxrs:server address=\"/\"> <jaxrs:inInterceptors> <ref bean=\"validationInInterceptor\" /> </jaxrs:inInterceptors> <jaxrs:outInterceptors> <ref bean=\"validationOutInterceptor\" /> </jaxrs:outInterceptors> <jaxrs:serviceBeans> </jaxrs:serviceBeans> <jaxrs:providers> <ref bean=\"exceptionMapper\"/> </jaxrs:providers> </jaxrs:server> <bean id=\"exceptionMapper\" class=\"org.apache.cxf.jaxrs.validation.ValidationExceptionMapper\"/> <bean id=\"validationInInterceptor\" class=\"org.apache.cxf.jaxrs.validation.JAXRSBeanValidationInInterceptor\"> <property name=\"provider\" ref=\"beanValidationProvider\" /> </bean> <bean id=\"validationOutInterceptor\" class=\"org.apache.cxf.jaxrs.validation.JAXRSBeanValidationOutInterceptor\"> <property name=\"provider\" ref=\"beanValidationProvider\" /> </bean> <bean id=\"beanValidationProvider\" class=\"org.apache.cxf.validation.BeanValidationProvider\"> <constructor-arg ref=\"validationProviderResolver\"/> </bean> <bean id=\"validationProviderResolver\" class=\"org.example.HibernateValidationProviderResolver\"/>"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/validation |
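The custom constraints mentioned in section 65.2.3 follow a simple pattern: an annotation meta-annotated with @Constraint, plus a ConstraintValidator implementation. The sketch below is only an illustration of that pattern; the @Isbn annotation and its length-only check are invented for this example and are not part of CXF or Hibernate Validator.

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import javax.validation.Constraint;
import javax.validation.ConstraintValidator;
import javax.validation.ConstraintValidatorContext;
import javax.validation.Payload;

// Custom constraint: the annotated string must have a plausible ISBN-10/ISBN-13 length
@Target({ ElementType.FIELD, ElementType.PARAMETER, ElementType.METHOD })
@Retention(RetentionPolicy.RUNTIME)
@Constraint(validatedBy = IsbnValidator.class)
public @interface Isbn {
    String message() default "invalid ISBN";
    Class<?>[] groups() default {};
    Class<? extends Payload>[] payload() default {};
}

class IsbnValidator implements ConstraintValidator<Isbn, String> {
    @Override
    public void initialize(Isbn constraintAnnotation) {
        // no configuration needed for this simple check
    }

    @Override
    public boolean isValid(String value, ConstraintValidatorContext context) {
        if (value == null) {
            return true; // leave null handling to @NotNull
        }
        String digits = value.replace("-", "");
        return digits.length() == 10 || digits.length() == 13;
    }
}

A resource method parameter could then be declared as @Isbn @NotNull String id, and the interceptors described in the Bean Validation chapter enforce it in the same way as the built-in constraints.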
4.2. Prioritizing Network Traffic | 4.2. Prioritizing Network Traffic When running multiple network-related services on a single server system, it is important to define network priorities between these services. Defining these priorities ensures that packets originating from certain services have a higher priority than packets originating from other services. For example, such priorities are useful when a server system simultaneously functions as an NFS and Samba server. The NFS traffic must be of high priority as users expect high throughput. The Samba traffic can be deprioritized to allow better performance of the NFS server. The net_prio subsystem can be used to set network priorities for processes in cgroups. These priorities are then translated into Type Of Service (TOS) bits and embedded into every packet. Follow the steps in Procedure 4.2, "Setting Network Priorities for File Sharing Services" to configure prioritization of two file sharing services (NFS and Samba). Procedure 4.2. Setting Network Priorities for File Sharing Services The net_prio subsystem is not compiled in the kernel, it is a module that must be loaded manually. To do so, type: ~]# modprobe net_prio Attach the net_prio subsystem to the /cgroup/net_prio cgroup: Create two cgroups, one for each service: To automatically move the nfs services to the nfs_high cgroup, add the following line to the /etc/sysconfig/nfs file: This configuration ensures that nfs service processes are moved to the nfs_high cgroup when the nfs service is started or restarted. For more information about moving service processes to cgroups, refer to Section 2.9.1, "Starting a Service in a Control Group" . The smbd daemon does not have a configuration file in the /etc/sysconfig directory. To automatically move the smbd daemon to the samba_low cgroup, add the following line to the /etc/cgrules.conf file: Note that this rule moves every smbd daemon, not only /usr/sbin/smbd , into the samba_low cgroup. You can define rules for the nmbd and winbindd daemons to be moved to the samba_low cgroup in a similar way. Start the cgred service to load the configuration from the step: For the purposes of this example, let us assume both services use the eth1 network interface. Define network priorities for each cgroup, where 1 denotes low priority and 10 denotes high priority: Start the nfs and smb services and check whether their processes have been moved into the correct cgroups: Network traffic originating from NFS now has higher priority than traffic originating from Samba. Similar to Procedure 4.2, "Setting Network Priorities for File Sharing Services" , the net_prio subsystem can be used to set network priorities for client applications, for example, Firefox. | [
"~]# mkdir /cgroup/net_prio ~]# mount -t cgroup -o net_prio net_prio /cgroup/net_prio",
"~]# mkdir /cgroup/net_prio/nfs_high ~]# mkdir /cgroup/net_prio/samba_low",
"CGROUP_DAEMON=\"net_prio:nfs_high\"",
"*:smbd net_prio samba_low",
"~]# service cgred start Starting CGroup Rules Engine Daemon: [ OK ]",
"~]# echo \"eth1 1\" > /cgroup/net_prio/samba_low ~]# echo \"eth1 10\" > /cgroup/net_prio/nfs_high",
"~]# service smb start Starting SMB services: [ OK ] ~]# cat /cgroup/net_prio/samba_low 16122 16124 ~]# service nfs start Starting NFS services: [ OK ] Starting NFS quotas: [ OK ] Starting NFS mountd: [ OK ] Stopping RPC idmapd: [ OK ] Starting RPC idmapd: [ OK ] Starting NFS daemon: [ OK ] ~]# cat /cgroup/net_prio/nfs_high 16321 16325 16376"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/resource_management_guide/sec-prioritizing_network_traffic |
Part I. New Features | Part I. New Features This part documents new features in Red Hat Enterprise Linux 7.3. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.3_release_notes/new-features |
Chapter 13. Identity: Integrating with NIS Domains and Netgroups | Chapter 13. Identity: Integrating with NIS Domains and Netgroups Network information service (NIS) is one of the most common ways to manage identities and authentication on Unix networks. It is simple and easy to use, but it also has inherent security risks and a lack of flexibility that can make administering NIS domains problematic. Identity Management supplies a way to integrate netgroups and other NIS data into the IdM domain, which incorporates the stronger security structure of IdM over the NIS configuration. Alternatively, administrators can simply migrate user and host identities from a NIS domain into the IdM domain. 13.1. About NIS and Identity Management Network information service (NIS) centrally manages authentication and identity information such as users and passwords, hosts and IP addresses, and POSIX groups. This was originally called Yellow Pages (abbreviated YP) because of its simple focus on identity and authentication lookups. NIS is considered too insecure for most modern network environments because it provides no host authentication mechanisms and it transmits all of its information over the network unencrypted, including password hashes. Still, while NIS has been falling out of favor with administrators, it is still actively used by many system clients. There are ways to work around those insecurities by integrating NIS with other protocols which offer enhanced security. In Identity Management, NIS objects are integrated into IdM using the underlying LDAP directory. LDAP services offer support for NIS objects (as defined in RFC 2307 ), which Identity Management customizes to provide better integration with other domain identities. The NIS object is created inside the LDAP service and then a module like nss_ldap or SSSD fetches the object using an encrypted LDAP connection. NIS entities are stored in netgroups . A netgroup allows nesting (groups inside groups), which standard Unix groups don't support. Also, netgroups provide a way to group hosts, which is also missing in Unix group. NIS groups work by defining users and hosts as members of a larger domain. A netgroup sets a trio of information - host, user, domain. This is called a triple . A netgroup triple associates the user or the host with the domain; it does not associate the user and the host with each other. Therefore, a triple usually defines a host or a user for better clarity and management. NIS distributes more than just netgroup data. It stores information about users and passwords, groups, network data, and hosts, among other information. Identity Management can use a NIS listener to map passwords, groups, and netgroups to IdM entries. In IdM LDAP entries, the users in a netgroup can be a single user or a group; both are identified by the memberUser parameter. Likewise, hosts can be either a single host or a host group; both are identified by the memberHost attribute. In Identity Management, these netgroup entries are handled using the netgroup-* commands, which show the basic LDAP entry: When a client attempts to access the NIS netgroup, then Identity Management translates the LDAP entry into a traditional NIS map and sends it to a client over the NIS protocol (using a NIS plug-in) or it translates it into an LDAP format that is compliant with RFC 2307 or RFC 2307bis. | [
"host,user,domain",
"host.example.com,,nisdomain.example.com -,jsmith,nisdomain.example.com",
"dn: ipaUniqueID=d4453480-cc53-11dd-ad8b-0800200c9a66,cn=ng,cn=accounts, objectclass: top objectclass: ipaAssociation objectclass: ipaNISNetgroup ipaUniqueID: d4453480-cc53-11dd-ad8b-0800200c9a66 cn: netgroup1 memberHost: fqdn=host1.example.com,cn=computers,cn=accounts, memberHost: cn=VirtGuests,cn=hostgroups,cn=accounts, memberUser: cn=jsmith,cn=users,cn=accounts, memberUser: cn=bjensen,cn=users,cn=accounts, memberUser: cn=Engineering,cn=groups,cn=accounts, nisDomainName: nisdomain.example.com",
"ipa netgroup-show netgroup1 Netgroup name: netgroup1 Description: my netgroup NIS domain name: nisdomain Member User: jsmith Member User: bjensen Member User: Engineering Member Host: host1.example.com Member Host: VirtGuests"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/nis |
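Because the netgroup shown in the chapter above is stored as an ordinary LDAP entry, any RFC 2307-aware client can read it directly. The following minimal JNDI sketch, with a placeholder server, base DN, and netgroup name, retrieves the memberUser and memberHost values that IdM translates into NIS triples; it assumes anonymous reads are permitted, which is not guaranteed in a given deployment.

import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.directory.Attribute;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;
import java.util.Hashtable;

public class ReadNetgroup {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://ipaserver.example.com:389"); // placeholder server

        DirContext ctx = new InitialDirContext(env);
        try {
            SearchControls controls = new SearchControls();
            controls.setSearchScope(SearchControls.SUBTREE_SCOPE);
            controls.setReturningAttributes(new String[] { "nisDomainName", "memberUser", "memberHost" });

            // Attribute and object class names are taken from the netgroup entry shown in the chapter
            NamingEnumeration<SearchResult> results = ctx.search(
                    "cn=ng,cn=accounts,dc=example,dc=com",               // placeholder base DN
                    "(&(objectClass=ipaNISNetgroup)(cn=netgroup1))",
                    controls);
            while (results.hasMore()) {
                SearchResult entry = results.next();
                for (String name : new String[] { "nisDomainName", "memberUser", "memberHost" }) {
                    Attribute values = entry.getAttributes().get(name);
                    if (values != null) {
                        System.out.println(name + ": " + values);
                    }
                }
            }
        } finally {
            ctx.close();
        }
    }
}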
Chapter 17. General Updates | Chapter 17. General Updates Matahari Matahari in Red Hat Enterprise Linux 6.2 is fully supported only for x86 and AMD64 architectures. Builds for other architectures are considered a Technology Preview. Automatic Bug Reporting Tool Red Hat Enterprise Linux 6.2 introduces ABRT 2.0. ABRT logs details of software crashes on a local system, and provides interfaces (both graphical and command-line based) to report issues to various issue trackers, including Red Hat support. This update provides the following notable enhancements: More flexible configuration with a new syntax. Out-of-process plugins (plugins run in separate processes and communicate with other processes via inter-process communication). Advantages of such a design are: bugs in plugins do not break the main daemon; most of the processing is now done under a normal (non-root) user, which is more secure; and plugins can be written in any programming language. The reporting backend is shared across all of Red Hat's issue reporting tools: ABRT, sealert, and all users of python-meh (Anaconda, firstboot). Because all of these tools share the same configuration, it only has to be written once. Note For more information on ABRT configuration and its new syntax, refer to the Red Hat Enterprise Linux 6.2 Deployment Guide. Optimized math library for Linux on IBM System z Red Hat Enterprise Linux 6.2 provides an optimized linear algebra math library for Linux on System z, which enables the compiler to generate code for high-profile functions, taking advantage of the latest hardware functions. Improved tablet support Red Hat Enterprise Linux 6.2 improves support for Wacom devices. It is no longer necessary to reconfigure device settings after a device has been unplugged and plugged back in. Improved wireless detection NetworkManager can now scan wireless networks in the background, providing a better user experience. Increase in CPU support in GNOME The gnome-system-monitor utility can now monitor systems that have more than 64 CPUs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_release_notes/general_updates
2.2. pcs Usage Help Display | 2.2. pcs Usage Help Display You can use the -h option of pcs to display the parameters of a pcs command and a description of those parameters. For example, the following command displays the parameters of the pcs resource command. Only a portion of the output is shown. | [
"pcs resource -h Usage: pcs resource [commands] Manage pacemaker resources Commands: show [resource id] [--all] Show all currently configured resources or if a resource is specified show the options for the configured resource. If --all is specified resource options will be displayed start <resource id> Start resource specified by resource_id"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/configuring_the_red_hat_high_availability_add-on_with_pacemaker/s1-pcshelp-haar |
Chapter 11. Controlling access to the Admin Console | Chapter 11. Controlling access to the Admin Console Each realm created on the Red Hat build of Keycloak has a dedicated Admin Console from which that realm can be managed. The master realm is a special realm that allows admins to manage more than one realm on the system. This chapter goes over all the scenarios for this. 11.1. Master realm access control The master realm in Red Hat build of Keycloak is a special realm and treated differently from other realms. Users in the Red Hat build of Keycloak master realm can be granted permission to manage zero or more realms that are deployed on the Red Hat build of Keycloak server. When a realm is created, Red Hat build of Keycloak automatically creates various roles that grant fine-grained permissions to access that new realm. Access to the Admin Console and Admin REST endpoints can be controlled by mapping these roles to users in the master realm. It's possible to create multiple superusers, as well as users that can only manage specific realms. 11.1.1. Global roles There are two realm-level roles in the master realm. These are: admin create-realm Users with the admin role are superusers and have full access to manage any realm on the server. Users with the create-realm role are allowed to create new realms. They will be granted full access to any new realm they create. 11.1.2. Realm specific roles Admin users within the master realm can be granted management privileges to one or more other realms in the system. Each realm in Red Hat build of Keycloak is represented by a client in the master realm. The name of the client is <realm name>-realm . Each of these clients has client-level roles defined that grant varying levels of access for managing an individual realm. The roles available are: view-realm view-users view-clients view-events manage-realm manage-users create-client manage-clients manage-events view-identity-providers manage-identity-providers impersonation Assign the roles you want to your users and they will only be able to use that specific part of the administration console. Important Admins with the manage-users role will only be able to assign admin roles to users that they themselves have. So, if an admin has the manage-users role but doesn't have the manage-realm role, they will not be able to assign this role. 11.2. Dedicated realm admin consoles Each realm has a dedicated Admin Console that can be accessed by going to the URL /admin/{realm-name}/console . Users within that realm can be granted realm management permissions by assigning specific user role mappings. Each realm has a built-in client called realm-management . You can view this client by going to the Clients left menu item of your realm. This client defines client-level roles that specify permissions that can be granted to manage the realm. view-realm view-users view-clients view-events manage-realm manage-users create-client manage-clients manage-events view-identity-providers manage-identity-providers impersonation Assign the roles you want to your users and they will only be able to use that specific part of the administration console. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/26.0/html/server_administration_guide/admin_permissions
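A hedged sketch of assigning one of these realm-specific roles from the command line with the Keycloak admin CLI (kcadm.sh) instead of the Admin Console; the realm name myrealm and the user testadmin are hypothetical, and the exact option names may vary between Keycloak versions.
$ ./kcadm.sh config credentials --server http://localhost:8080 --realm master --user admin
$ ./kcadm.sh add-roles -r master --uusername testadmin --cclientid myrealm-realm --rolename view-users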
Chapter 3. Distribution of content in RHEL 8 | Chapter 3. Distribution of content in RHEL 8 3.1. Installation Red Hat Enterprise Linux 8 is installed using ISO images. Two types of ISO image are available for the AMD64, Intel 64-bit, 64-bit ARM, IBM Power Systems, and IBM Z architectures: Binary DVD ISO: A full installation image that contains the BaseOS and AppStream repositories and allows you to complete the installation without additional repositories. Note The Installation ISO image is several gigabytes in size and, as a result, it might not fit on optical media formats. A USB key or USB hard drive is recommended when using the Installation ISO image to create bootable installation media. You can also use the Image Builder tool to create customized RHEL images. For more information about Image Builder, see the Composing a customized RHEL system image document. Boot ISO: A minimal boot ISO image that is used to boot into the installation program. This option requires access to the BaseOS and AppStream repositories to install software packages. The repositories are part of the Binary DVD ISO image. See the Interactively installing RHEL from installation media document for instructions on downloading ISO images, creating installation media, and completing a RHEL installation. For automated Kickstart installations and other advanced topics, see the Automatically installing RHEL document. 3.2. Repositories Red Hat Enterprise Linux 8 is distributed through two main repositories: BaseOS AppStream Both repositories are required for a basic RHEL installation, and are available with all RHEL subscriptions. Content in the BaseOS repository is intended to provide the core set of the underlying OS functionality that provides the foundation for all installations. This content is available in the RPM format and is subject to support terms similar to those in previous releases of RHEL. For a list of packages distributed through BaseOS, see the Package manifest . Content in the Application Stream repository includes additional user space applications, runtime languages, and databases in support of the varied workloads and use cases. Application Streams are available in the familiar RPM format, as an extension to the RPM format called modules , or as Software Collections. For a list of packages available in AppStream, see the Package manifest . In addition, the CodeReady Linux Builder repository is available with all RHEL subscriptions. It provides additional packages for use by developers. Packages included in the CodeReady Linux Builder repository are unsupported. For more information about RHEL 8 repositories, see the Package manifest . 3.3. Application Streams Red Hat Enterprise Linux 8 introduces the concept of Application Streams. Multiple versions of user-space components are now delivered and updated more frequently than the core operating system packages. This provides greater flexibility to customize Red Hat Enterprise Linux without impacting the underlying stability of the platform or specific deployments. Components made available as Application Streams can be packaged as modules or RPM packages and are delivered through the AppStream repository in RHEL 8. Each Application Stream component has a given life cycle, either the same as RHEL 8 or shorter. For details, see Red Hat Enterprise Linux Life Cycle . Modules are collections of packages representing a logical unit: an application, a language stack, a database, or a set of tools. These packages are built, tested, and released together.
Module streams represent versions of the Application Stream components. For example, several streams (versions) of the PostgreSQL database server are available in the postgresql module with the default postgresql:10 stream. Only one module stream can be installed on the system. Different versions can be used in separate containers. Detailed module commands are described in the Installing, managing, and removing user-space components document. For a list of modules available in AppStream, see the Package manifest . 3.4. Package management with YUM/DNF On Red Hat Enterprise Linux 8, installing software is ensured by the YUM tool, which is based on the DNF technology. We deliberately adhere to usage of the yum term for consistency with major versions of RHEL. However, if you type dnf instead of yum , the command works as expected because yum is an alias to dnf for compatibility. For more details, see the following documentation: Installing, managing, and removing user-space components Considerations in adopting RHEL 8 | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.10_release_notes/distribution-of-content-in-rhel-8 |
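To make the module stream workflow concrete, a minimal sketch using the postgresql module mentioned above; the streams and profiles actually available depend on your RHEL 8 minor release and enabled repositories.
$ yum module list postgresql          # show available streams and the default stream
$ yum module install postgresql:10    # install the default profile of the postgresql 10 stream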
10.5. Troubleshooting virt-who | 10.5. Troubleshooting virt-who 10.5.1. Why is the hypervisor status red? Scenario: On the server side, you deploy a guest on a hypervisor that does not have a subscription. 24 hours later, the hypervisor displays its status as red. To remedy this situation, you must either get a subscription for that hypervisor or permanently migrate the guest to a hypervisor with a subscription. 10.5.2. I have subscription status errors, what do I do? Scenario: Any of the following error messages display: System not properly subscribed Status unknown Late binding of a guest to a hypervisor through virt-who (host/guest mapping) To find the reason for the error, open the virt-who log file, named rhsm.log , located in the /var/log/rhsm/ directory. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/troubleshooting-virt-who
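A small sketch of the log inspection described above; the exact strings worth searching for depend on the error you are investigating.
$ less /var/log/rhsm/rhsm.log
$ grep -iE 'error|warn' /var/log/rhsm/rhsm.log | tail -n 20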
Introduction to Red Hat OpenShift AI Cloud Service | Introduction to Red Hat OpenShift AI Cloud Service Red Hat OpenShift AI Cloud Service 1 OpenShift AI is a platform for data scientists and developers of artificial intelligence and machine learning (AI/ML) applications | null | https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/introduction_to_red_hat_openshift_ai_cloud_service/index |
12.2. Preparing for a Hard Drive Installation | 12.2. Preparing for a Hard Drive Installation Note Hard drive installations only work from ext2, ext3, ext4, or FAT file systems. You cannot use hard drives formatted for any other file system as an installation source for Red Hat Enterprise Linux. To check the file system of a hard drive partition on a Windows operating system, use the Disk Management tool. To check the file system of a hard drive partition on a Linux operating system, use the fdisk tool. Important You cannot use ISO files on partitions controlled by LVM (Logical Volume Management). Use this option to install Red Hat Enterprise Linux on systems without a DVD drive or network connection. Hard drive installations use the following files: an ISO image of the installation DVD. An ISO image is a file that contains an exact copy of the content of a DVD. an install.img file extracted from the ISO image. optionally, a product.img file extracted from the ISO image. With these files present on a hard drive, you can choose Hard drive as the installation source when you boot the installation program (refer to Section 15.3, "Installation Method" ). Ensure that you have boot media available on CD, DVD, or a USB storage device such as a flash drive. To prepare a hard drive as an installation source, follow these steps: Obtain an ISO image of the Red Hat Enterprise Linux installation DVD (refer to Chapter 1, Obtaining Red Hat Enterprise Linux ). Alternatively, if you have the DVD on physical media, you can create an image of it with the following command on a Linux system: where dvd is your DVD drive device, name_of_image is the name you give to the resulting ISO image file, and path_to_image is the path to the location on your system where the resulting ISO image will be stored. Transfer the ISO image to the hard drive. The ISO image must be located on a hard drive that is either internal to the computer on which you will install Red Hat Enterprise Linux, or on a hard drive that is attached to that computer by USB. Use a SHA256 checksum program to verify that the ISO image that you copied is intact. Many SHA256 checksum programs are available for various operating systems. On a Linux system, run: where name_of_image is the name of the ISO image file. The SHA256 checksum program displays a string of 64 characters called a hash . Compare this hash to the hash displayed for this particular image on the Downloads page in the Red Hat Customer Portal (refer to Chapter 1, Obtaining Red Hat Enterprise Linux ). The two hashes should be identical. Copy the images/ directory from inside the ISO image to the same directory in which you stored the ISO image file itself. Enter the following commands: where path_to_image is the path to the ISO image file, name_of_image is the name of the ISO image file, and mount_point is a mount point on which to mount the image while you copy files from the image. For example: The ISO image file and an images/ directory are now present, side-by-side, in the same directory. Verify that the images/ directory contains at least the install.img file, without which installation cannot proceed. Optionally, the images/ directory should contain the product.img file, without which only the packages for a Minimal installation will be available during the package group selection stage (refer to Section 9.17, "Package Group Selection" ). Important install.img and product.img must be the only files in the images/ directory.
Note anaconda has the ability to test the integrity of the installation media. It works with the DVD, hard drive ISO, and NFS ISO installation methods. We recommend that you test all installation media before starting the installation process, and before reporting any installation-related bugs (many of the bugs reported are actually due to improperly-burned DVDs). To use this test, type the following command at the boot: prompt: | [
"dd if=/dev/ dvd of=/ path_to_image / name_of_image .iso",
"sha256sum name_of_image .iso",
"mount -t iso9660 / path_to_image / name_of_image .iso / mount_point -o loop,ro cp -pr / mount_point /images / publicly_available_directory / umount / mount_point",
"mount -t iso9660 /var/isos/RHEL6.iso /mnt/tmp -o loop,ro cp -pr /mnt/tmp/images /var/isos/ umount /mnt/tmp",
"linux mediacheck"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s1-steps-hd-installs-ppc |
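Following the example paths used above, a quick sketch of confirming the final layout before booting the installer; /var/isos is the example location from the mount and copy commands shown earlier.
$ ls /var/isos/            # the ISO image file and the images/ directory should sit side by side
$ ls /var/isos/images/     # must contain at least install.img, and optionally product.img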
Chapter 10. Multiple regions and zones configuration for a cluster on VMware vSphere | Chapter 10. Multiple regions and zones configuration for a cluster on VMware vSphere As an administrator, you can specify multiple regions and zones for your OpenShift Container Platform cluster that runs on a VMware vSphere instance. This configuration reduces the risk of a hardware failure or network outage causing your cluster to fail. A failure domain configuration lists parameters that create a topology. The following list states some of these parameters: computeCluster datacenter datastore networks resourcePool After you define multiple regions and zones for your OpenShift Container Platform cluster, you can create or migrate nodes to another failure domain. Important If you want to migrate pre-existing OpenShift Container Platform cluster compute nodes to a failure domain, you must define a new compute machine set for the compute node. This new machine set can scale up a compute node according to the topology of the failure domain, and scale down the pre-existing compute node. The cloud provider adds topology.kubernetes.io/zone and topology.kubernetes.io/region labels to any compute node provisioned by a machine set resource. For more information, see Creating a compute machine set . 10.1. Specifying multiple regions and zones for your cluster on vSphere You can configure the infrastructures.config.openshift.io configuration resource to specify multiple regions and zones for your OpenShift Container Platform cluster that runs on a VMware vSphere instance. Topology-aware features for the cloud controller manager and the vSphere Container Storage Interface (CSI) Operator Driver require information about the vSphere topology where you host your OpenShift Container Platform cluster. This topology information exists in the infrastructures.config.openshift.io configuration resource. Before you specify regions and zones for your cluster, you must ensure that all data centers and compute clusters contain tags, so that the cloud provider can add labels to your node. For example, if data-center-1 represents region-a and compute-cluster-1 represents zone-1 , the cloud provider adds an openshift-region category label with a value of region-a to data-center-1 . Additionally, the cloud provider adds an openshift-zone category tag with a value of zone-1 to compute-cluster-1 . Note You can migrate control plane nodes with vMotion capabilities to a failure domain. After you add these nodes to a failure domain, the cloud provider adds topology.kubernetes.io/zone and topology.kubernetes.io/region labels to these nodes. Prerequisites You created the openshift-region and openshift-zone tag categories on the vCenter server. You ensured that each data center and compute cluster contains tags that represent the name of their associated region or zone, or both. Optional: If you defined API and Ingress static IP addresses to the installation program, you must ensure that all regions and zones share a common layer 2 network. This configuration ensures that API and Ingress Virtual IP (VIP) addresses can interact with your cluster. Important If you do not supply tags to all data centers and compute clusters before you create a node or migrate a node, the cloud provider cannot add the topology.kubernetes.io/zone and topology.kubernetes.io/region labels to the node. This means that services cannot route traffic to your node. 
Procedure Edit the infrastructures.config.openshift.io custom resource definition (CRD) of your cluster to specify multiple regions and zones in the failureDomains section of the resource by running the following command: USD oc edit infrastructures.config.openshift.io cluster Example infrastructures.config.openshift.io CRD for a instance named cluster with multiple regions and zones defined in its configuration spec: cloudConfig: key: config name: cloud-provider-config platformSpec: type: vSphere vsphere: vcenters: - datacenters: - <region_a_data_center> - <region_b_data_center> port: 443 server: <your_vcenter_server> failureDomains: - name: <failure_domain_1> region: <region_a> zone: <zone_a> server: <your_vcenter_server> topology: datacenter: <region_a_dc> computeCluster: "</region_a_dc/host/zone_a_cluster>" resourcePool: "</region_a_dc/host/zone_a_cluster/Resources/resource_pool>" datastore: "</region_a_dc/datastore/datastore_a>" networks: - port-group - name: <failure_domain_2> region: <region_a> zone: <zone_b> server: <your_vcenter_server> topology: computeCluster: </region_a_dc/host/zone_b_cluster> datacenter: <region_a_dc> datastore: </region_a_dc/datastore/datastore_a> networks: - port-group - name: <failure_domain_3> region: <region_b> zone: <zone_a> server: <your_vcenter_server> topology: computeCluster: </region_b_dc/host/zone_a_cluster> datacenter: <region_b_dc> datastore: </region_b_dc/datastore/datastore_b> networks: - port-group nodeNetworking: external: {} internal: {} Important After you create a failure domain and you define it in a CRD for a VMware vSphere cluster, you must not modify or delete the failure domain. Doing any of these actions with this configuration can impact the availability and fault tolerance of a control plane machine. Save the resource file to apply the changes. Additional resources Parameters for the cluster-wide infrastructure CRD 10.2. Enabling a multiple layer 2 network for your cluster You can configure your cluster to use a multiple layer 2 network configuration so that data transfer among nodes can span across multiple networks. Prerequisites You configured network connectivity among machines so that cluster components can communicate with each other. Procedure If you installed your cluster with installer-provisioned infrastructure, you must ensure that all control plane nodes share a common layer 2 network. Additionally, ensure compute nodes that are configured for Ingress pod scheduling share a common layer 2 network. If you need compute nodes to span multiple layer 2 networks, you can create infrastructure nodes that can host Ingress pods. If you need to provision workloads across additional layer 2 networks, you can create compute machine sets on vSphere and then move these workloads to your target layer 2 networks. If you installed your cluster on infrastructure that you provided, which is defined as a user-provisioned infrastructure, complete the following actions to meet your needs: Configure your API load balancer and network so that the load balancer can reach the API and Machine Config Server on the control plane nodes. Configure your Ingress load balancer and network so that the load balancer can reach the Ingress pods on the compute or infrastructure nodes. Additional resources Installing a cluster on vSphere with network customizations Creating infrastructure machine sets for production environments Creating a compute machine set 10.3. 
Parameters for the cluster-wide infrastructure CRD You must set values for specific parameters in the cluster-wide infrastructure, infrastructures.config.openshift.io , Custom Resource Definition (CRD) to define multiple regions and zones for your OpenShift Container Platform cluster that runs on a VMware vSphere instance. The following table lists mandatory parameters for defining multiple regions and zones for your OpenShift Container Platform cluster: Parameter Description vcenters The vCenter server for your OpenShift Container Platform cluster. You can only specify one vCenter for your cluster. datacenters vCenter data centers where VMs associated with the OpenShift Container Platform cluster will be created or presently exist. port The TCP port of the vCenter server. server The fully qualified domain name (FQDN) of the vCenter server. failureDomains The list of failure domains. name The name of the failure domain. region The value of the openshift-region tag assigned to the topology for the failure domain. zone The value of the openshift-zone tag assigned to the topology for the failure domain. topology The vCenter resources associated with the failure domain. datacenter The data center associated with the failure domain. computeCluster The full path of the compute cluster associated with the failure domain. resourcePool The full path of the resource pool associated with the failure domain. datastore The full path of the datastore associated with the failure domain. networks A list of port groups associated with the failure domain. Only one portgroup may be defined. Additional resources Specifying multiple regions and zones for your cluster on vSphere | [
"oc edit infrastructures.config.openshift.io cluster",
"spec: cloudConfig: key: config name: cloud-provider-config platformSpec: type: vSphere vsphere: vcenters: - datacenters: - <region_a_data_center> - <region_b_data_center> port: 443 server: <your_vcenter_server> failureDomains: - name: <failure_domain_1> region: <region_a> zone: <zone_a> server: <your_vcenter_server> topology: datacenter: <region_a_dc> computeCluster: \"</region_a_dc/host/zone_a_cluster>\" resourcePool: \"</region_a_dc/host/zone_a_cluster/Resources/resource_pool>\" datastore: \"</region_a_dc/datastore/datastore_a>\" networks: - port-group - name: <failure_domain_2> region: <region_a> zone: <zone_b> server: <your_vcenter_server> topology: computeCluster: </region_a_dc/host/zone_b_cluster> datacenter: <region_a_dc> datastore: </region_a_dc/datastore/datastore_a> networks: - port-group - name: <failure_domain_3> region: <region_b> zone: <zone_a> server: <your_vcenter_server> topology: computeCluster: </region_b_dc/host/zone_a_cluster> datacenter: <region_b_dc> datastore: </region_b_dc/datastore/datastore_b> networks: - port-group nodeNetworking: external: {} internal: {}"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on_vsphere/post-install-vsphere-zones-regions-configuration |
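As a sketch of verifying the labeling behavior described above after nodes are created or migrated, you can display the region and zone labels the cloud provider applied to each node; the values come from your own openshift-region and openshift-zone tags.
$ oc get nodes -L topology.kubernetes.io/region -L topology.kubernetes.io/zone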
Chapter 5. Deploying Red Hat Quay on public cloud | Chapter 5. Deploying Red Hat Quay on public cloud Red Hat Quay can run on public clouds, either in standalone mode or where OpenShift Container Platform itself has been deployed on public cloud. A full list of tested and supported configurations can be found in the Red Hat Quay Tested Integrations Matrix at https://access.redhat.com/articles/4067991 . Recommendation: If Red Hat Quay is running on public cloud, then you should use the public cloud services for Red Hat Quay backend services to ensure proper high availability and scalability. 5.1. Running Red Hat Quay on Amazon Web Services If Red Hat Quay is running on Amazon Web Services (AWS), you can use the following features: AWS Elastic Load Balancer AWS S3 (hot) blob storage AWS RDS database AWS ElastiCache Redis EC2 virtual machine recommendation: M3.Large or M4.XLarge The following image provides a high level overview of Red Hat Quay running on AWS: Red Hat Quay on AWS 5.2. Running Red Hat Quay on Microsoft Azure If Red Hat Quay is running on Microsoft Azure, you can use the following features: Azure managed services such as highly available PostgreSQL Azure Blob Storage must be hot storage Azure cool storage is not available for Red Hat Quay Azure Cache for Redis The following image provides a high level overview of Red Hat Quay running on Microsoft Azure: Red Hat Quay on Microsoft Azure | null | https://docs.redhat.com/en/documentation/red_hat_quay/3/html/red_hat_quay_architecture/arch-deploy-quay-public-cloud |
Chapter 21. Ensuring the presence of host-based access control rules in IdM using Ansible playbooks | Chapter 21. Ensuring the presence of host-based access control rules in IdM using Ansible playbooks Ansible is an automation tool used to configure systems, deploy software, and perform rolling updates. It includes support for Identity Management (IdM). Learn more about Identity Management (IdM) host-based access policies and how to define them using Ansible . 21.1. Host-based access control rules in IdM Host-based access control (HBAC) rules define which users or user groups can access which hosts or host groups by using which services or services in a service group. As a system administrator, you can use HBAC rules to achieve the following goals: Limit access to a specified system in your domain to members of a specific user group. Allow only a specific service to be used to access systems in your domain. By default, IdM is configured with a default HBAC rule named allow_all , which means universal access to every host for every user via every relevant service in the entire IdM domain. You can fine-tune access to different hosts by replacing the default allow_all rule with your own set of HBAC rules. For centralized and simplified access control management, you can apply HBAC rules to user groups, host groups, or service groups instead of individual users, hosts, or services. 21.2. Ensuring the presence of an HBAC rule in IdM using an Ansible playbook Follow this procedure to ensure the presence of a host-based access control (HBAC) rule in Identity Management (IdM) using an Ansible playbook. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.14 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. The users and user groups you want to use for your HBAC rule exist in IdM. See Managing user accounts using Ansible playbooks and Ensuring the presence of IdM groups and group members using Ansible playbooks for details. The hosts and host groups to which you want to apply your HBAC rule exist in IdM. See Managing hosts using Ansible playbooks and Managing host groups using Ansible playbooks for details. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Create your Ansible playbook file that defines the HBAC policy whose presence you want to ensure. To simplify this step, you can copy and modify the example in the /usr/share/doc/ansible-freeipa/playbooks/hbacrule/ensure-hbacrule-allhosts-present.yml file: Run the playbook: Verification Log in to the IdM Web UI as administrator. Navigate to Policy Host-Based-Access-Control HBAC Test . In the Who tab, select idm_user. In the Accessing tab, select client.idm.example.com . In the Via service tab, select sshd . In the Rules tab, select login . In the Run test tab, click the Run test button. If you see ACCESS GRANTED, the HBAC rule is implemented successfully. Additional resources See the README-hbacsvc.md , README-hbacsvcgroup.md , and README-hbacrule.md files in the /usr/share/doc/ansible-freeipa directory. 
See the playbooks in the subdirectories of the /usr/share/doc/ansible-freeipa/playbooks directory. | [
"[ipaserver] server.idm.example.com",
"--- - name: Playbook to handle hbacrules hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure idm_user can access client.idm.example.com via the sshd service - ipahbacrule: ipaadmin_password: \"{{ ipaadmin_password }}\" name: login user: idm_user host: client.idm.example.com hbacsvc: - sshd state: present",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-new-hbacrule-present.yml"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/using_ansible_to_install_and_manage_identity_management/ensuring-the-presence-of-host-based-access-control-rules-in-idm-using-Ansible-playbooks_using-ansible-to-install-and-manage-identity-management |
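The Web UI verification above can also be run from the command line with the ipa hbactest utility; this sketch assumes the rule, user, host, and service from the playbook example, and if the rule is in place the output reports that access was granted.
$ ipa hbactest --user=idm_user --host=client.idm.example.com --service=sshd --rules=login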
Chapter 1. About networking | Chapter 1. About networking Red Hat OpenShift Networking is an ecosystem of features, plugins and advanced networking capabilities that extend Kubernetes networking with the advanced networking-related features that your cluster needs to manage its network traffic for one or multiple hybrid clusters. This ecosystem of networking capabilities integrates ingress, egress, load balancing, high-performance throughput, security, inter- and intra-cluster traffic management and provides role-based observability tooling to reduce its natural complexities. Note OpenShift SDN CNI is deprecated as of OpenShift Container Platform 4.14. As of OpenShift Container Platform 4.15, the network plugin is not an option for new installations. In a subsequent future release, the OpenShift SDN network plugin is planned to be removed and no longer supported. Red Hat will provide bug fixes and support for this feature until it is removed, but this feature will no longer receive enhancements. As an alternative to OpenShift SDN CNI, you can use OVN Kubernetes CNI instead. The following list highlights some of the most commonly used Red Hat OpenShift Networking features available on your cluster: Primary cluster network provided by either of the following Container Network Interface (CNI) plugins: OVN-Kubernetes network plugin , the default plugin OpenShift SDN network plugin Certified 3rd-party alternative primary network plugins Cluster Network Operator for network plugin management Ingress Operator for TLS encrypted web traffic DNS Operator for name assignment MetalLB Operator for traffic load balancing on bare metal clusters IP failover support for high-availability Additional hardware network support through multiple CNI plugins, including for macvlan, ipvlan, and SR-IOV hardware networks IPv4, IPv6, and dual stack addressing Hybrid Linux-Windows host clusters for Windows-based workloads Red Hat OpenShift Service Mesh for discovery, load balancing, service-to-service authentication, failure recovery, metrics, and monitoring of services Single-node OpenShift Network Observability Operator for network debugging and insights Submariner for inter-cluster networking Red Hat Service Interconnect for layer 7 inter-cluster networking | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/networking/about-networking |
Chapter 8. Clair in disconnected environments | Chapter 8. Clair in disconnected environments Note Currently, deploying Clair in disconnected environments is not supported on IBM Power and IBM Z. Clair uses a set of components called updaters to handle the fetching and parsing of data from various vulnerability databases. Updaters are set up by default to pull vulnerability data directly from the internet and work for immediate use. However, some users might require Red Hat Quay to run in a disconnected environment, or an environment without direct access to the internet. Clair supports disconnected environments by working with different types of update workflows that take network isolation into consideration. This works by using the clairctl command line interface tool, which obtains updater data from the internet by using an open host, securely transferring the data to an isolated host, and then importing the updater data on the isolated host into Clair. Use this guide to deploy Clair in a disconnected environment. Note Currently, Clair enrichment data is CVSS data. Enrichment data is currently unsupported in disconnected environments. For more information about Clair updaters, see "Clair updaters". 8.1. Setting up Clair in a disconnected OpenShift Container Platform cluster Use the following procedures to set up an OpenShift Container Platform provisioned Clair pod in a disconnected OpenShift Container Platform cluster. 8.1.1. Installing the clairctl command line utility tool for OpenShift Container Platform deployments Use the following procedure to install the clairctl CLI tool for OpenShift Container Platform deployments. Procedure Install the clairctl program for a Clair deployment in an OpenShift Container Platform cluster by entering the following command: USD oc -n quay-enterprise exec example-registry-clair-app-64dd48f866-6ptgw -- cat /usr/bin/clairctl > clairctl Note Unofficially, the clairctl tool can be downloaded Set the permissions of the clairctl file so that it can be executed and run by the user, for example: USD chmod u+x ./clairctl 8.1.2. Retrieving and decoding the Clair configuration secret for Clair deployments on OpenShift Container Platform Use the following procedure to retrieve and decode the configuration secret for an OpenShift Container Platform provisioned Clair instance on OpenShift Container Platform. Prerequisites You have installed the clairctl command line utility tool. Procedure Enter the following command to retrieve and decode the configuration secret, and then save it to a Clair configuration YAML: USD oc get secret -n quay-enterprise example-registry-clair-config-secret -o "jsonpath={USD.data['config\.yaml']}" | base64 -d > clair-config.yaml Update the clair-config.yaml file so that the disable_updaters and airgap parameters are set to true , for example: --- indexer: airgap: true --- matcher: disable_updaters: true --- 8.1.3. Exporting the updaters bundle from a connected Clair instance Use the following procedure to export the updaters bundle from a Clair instance that has access to the internet. Prerequisites You have installed the clairctl command line utility tool. You have retrieved and decoded the Clair configuration secret, and saved it to a Clair config.yaml file. The disable_updaters and airgap parameters are set to true in your Clair config.yaml file. Procedure From a Clair instance that has access to the internet, use the clairctl CLI tool with your configuration file to export the updaters bundle.
For example: USD ./clairctl --config ./config.yaml export-updaters updates.gz 8.1.4. Configuring access to the Clair database in the disconnected OpenShift Container Platform cluster Use the following procedure to configure access to the Clair database in your disconnected OpenShift Container Platform cluster. Prerequisites You have installed the clairctl command line utility tool. You have retrieved and decoded the Clair configuration secret, and saved it to a Clair config.yaml file. The disable_updaters and airgap parameters are set to true in your Clair config.yaml file. You have exported the updaters bundle from a Clair instance that has access to the internet. Procedure Determine your Clair database service by using the oc CLI tool, for example: USD oc get svc -n quay-enterprise Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE example-registry-clair-app ClusterIP 172.30.224.93 <none> 80/TCP,8089/TCP 4d21h example-registry-clair-postgres ClusterIP 172.30.246.88 <none> 5432/TCP 4d21h ... Forward the Clair database port so that it is accessible from the local machine. For example: USD oc port-forward -n quay-enterprise service/example-registry-clair-postgres 5432:5432 Update your Clair config.yaml file, for example: indexer: connstring: host=localhost port=5432 dbname=postgres user=postgres password=postgres sslmode=disable 1 scanlock_retry: 10 layer_scan_concurrency: 5 migrations: true scanner: repo: rhel-repository-scanner: 2 repo2cpe_mapping_file: /data/cpe-map.json package: rhel_containerscanner: 3 name2repos_mapping_file: /data/repo-map.json 1 Replace the value of the host in the multiple connstring fields with localhost . 2 For more information about the rhel-repository-scanner parameter, see "Mapping repositories to Common Product Enumeration information". 3 For more information about the rhel_containerscanner parameter, see "Mapping repositories to Common Product Enumeration information". 8.1.5. Importing the updaters bundle into the disconnected OpenShift Container Platform cluster Use the following procedure to import the updaters bundle into your disconnected OpenShift Container Platform cluster. Prerequisites You have installed the clairctl command line utility tool. You have retrieved and decoded the Clair configuration secret, and saved it to a Clair config.yaml file. The disable_updaters and airgap parameters are set to true in your Clair config.yaml file. You have exported the updaters bundle from a Clair instance that has access to the internet. You have transferred the updaters bundle into your disconnected environment. Procedure Use the clairctl CLI tool to import the updaters bundle into the Clair database that is deployed by OpenShift Container Platform. For example: USD ./clairctl --config ./clair-config.yaml import-updaters updates.gz 8.2. Setting up a self-managed deployment of Clair for a disconnected OpenShift Container Platform cluster Use the following procedures to set up a self-managed deployment of Clair for a disconnected OpenShift Container Platform cluster. 8.2.1. Installing the clairctl command line utility tool for a self-managed Clair deployment on OpenShift Container Platform Use the following procedure to install the clairctl CLI tool for self-managed Clair deployments on OpenShift Container Platform. 
Procedure Install the clairctl program for a self-managed Clair deployment by using the podman cp command, for example: USD sudo podman cp clairv4:/usr/bin/clairctl ./clairctl Set the permissions of the clairctl file so that it can be executed and run by the user, for example: USD chmod u+x ./clairctl 8.2.2. Deploying a self-managed Clair container for disconnected OpenShift Container Platform clusters Use the following procedure to deploy a self-managed Clair container for disconnected OpenShift Container Platform clusters. Prerequisites You have installed the clairctl command line utility tool. Procedure Create a folder for your Clair configuration file, for example: USD mkdir /etc/clairv4/config/ Create a Clair configuration file with the disable_updaters parameter set to true , for example: --- indexer: airgap: true --- matcher: disable_updaters: true --- Start Clair by using the container image, mounting in the configuration from the file you created: 8.2.3. Exporting the updaters bundle from a connected Clair instance Use the following procedure to export the updaters bundle from a Clair instance that has access to the internet. Prerequisites You have installed the clairctl command line utility tool. You have deployed Clair. The disable_updaters and airgap parameters are set to true in your Clair config.yaml file. Procedure From a Clair instance that has access to the internet, use the clairctl CLI tool with your configuration file to export the updaters bundle. For example: USD ./clairctl --config ./config.yaml export-updaters updates.gz 8.2.4. Configuring access to the Clair database in the disconnected OpenShift Container Platform cluster Use the following procedure to configure access to the Clair database in your disconnected OpenShift Container Platform cluster. Prerequisites You have installed the clairctl command line utility tool. You have deployed Clair. The disable_updaters and airgap parameters are set to true in your Clair config.yaml file. You have exported the updaters bundle from a Clair instance that has access to the internet. Procedure Determine your Clair database service by using the oc CLI tool, for example: USD oc get svc -n quay-enterprise Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE example-registry-clair-app ClusterIP 172.30.224.93 <none> 80/TCP,8089/TCP 4d21h example-registry-clair-postgres ClusterIP 172.30.246.88 <none> 5432/TCP 4d21h ... Forward the Clair database port so that it is accessible from the local machine. For example: USD oc port-forward -n quay-enterprise service/example-registry-clair-postgres 5432:5432 Update your Clair config.yaml file, for example: indexer: connstring: host=localhost port=5432 dbname=postgres user=postgres password=postgres sslmode=disable 1 scanlock_retry: 10 layer_scan_concurrency: 5 migrations: true scanner: repo: rhel-repository-scanner: 2 repo2cpe_mapping_file: /data/cpe-map.json package: rhel_containerscanner: 3 name2repos_mapping_file: /data/repo-map.json 1 Replace the value of the host in the multiple connstring fields with localhost . 2 For more information about the rhel-repository-scanner parameter, see "Mapping repositories to Common Product Enumeration information". 3 For more information about the rhel_containerscanner parameter, see "Mapping repositories to Common Product Enumeration information". 8.2.5. 
Importing the updaters bundle into the disconnected OpenShift Container Platform cluster Use the following procedure to import the updaters bundle into your disconnected OpenShift Container Platform cluster. Prerequisites You have installed the clairctl command line utility tool. You have deployed Clair. The disable_updaters and airgap parameters are set to true in your Clair config.yaml file. You have exported the updaters bundle from a Clair instance that has access to the internet. You have transferred the updaters bundle into your disconnected environment. Procedure Use the clairctl CLI tool to import the updaters bundle into the Clair database that is deployed by OpenShift Container Platform: USD ./clairctl --config ./clair-config.yaml import-updaters updates.gz 8.3. Mapping repositories to Common Product Enumeration information Note Currently, mapping repositories to Common Product Enumeration information is not supported on IBM Power and IBM Z. Clair's Red Hat Enterprise Linux (RHEL) scanner relies on a Common Product Enumeration (CPE) file to map RPM packages to the corresponding security data to produce matching results. These files are owned by product security and updated daily. The CPE file must be present, or access to the file must be allowed, for the scanner to properly process RPM packages. If the file is not present, RPM packages installed in the container image will not be scanned. Table 8.1. Clair CPE mapping files CPE Link to JSON mapping file repos2cpe Red Hat Repository-to-CPE JSON names2repos Red Hat Name-to-Repos JSON . In addition to uploading CVE information to the database for disconnected Clair installations, you must also make the mapping file available locally: For standalone Red Hat Quay and Clair deployments, the mapping file must be loaded into the Clair pod. For Red Hat Quay on OpenShift Container Platform deployments, you must set the Clair component to unmanaged . Then, Clair must be deployed manually, setting the configuration to load a local copy of the mapping file. 8.3.1. Mapping repositories to Common Product Enumeration example configuration Use the repo2cpe_mapping_file and name2repos_mapping_file fields in your Clair configuration to include the CPE JSON mapping files. For example: indexer: scanner: repo: rhel-repository-scanner: repo2cpe_mapping_file: /data/cpe-map.json package: rhel_containerscanner: name2repos_mapping_file: /data/repo-map.json For more information, see How to accurately match OVAL security data to installed RPMs . | [
"oc -n quay-enterprise exec example-registry-clair-app-64dd48f866-6ptgw -- cat /usr/bin/clairctl > clairctl",
"chmod u+x ./clairctl",
"oc get secret -n quay-enterprise example-registry-clair-config-secret -o \"jsonpath={USD.data['config\\.yaml']}\" | base64 -d > clair-config.yaml",
"--- indexer: airgap: true --- matcher: disable_updaters: true ---",
"./clairctl --config ./config.yaml export-updaters updates.gz",
"oc get svc -n quay-enterprise",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE example-registry-clair-app ClusterIP 172.30.224.93 <none> 80/TCP,8089/TCP 4d21h example-registry-clair-postgres ClusterIP 172.30.246.88 <none> 5432/TCP 4d21h",
"oc port-forward -n quay-enterprise service/example-registry-clair-postgres 5432:5432",
"indexer: connstring: host=localhost port=5432 dbname=postgres user=postgres password=postgres sslmode=disable 1 scanlock_retry: 10 layer_scan_concurrency: 5 migrations: true scanner: repo: rhel-repository-scanner: 2 repo2cpe_mapping_file: /data/cpe-map.json package: rhel_containerscanner: 3 name2repos_mapping_file: /data/repo-map.json",
"./clairctl --config ./clair-config.yaml import-updaters updates.gz",
"sudo podman cp clairv4:/usr/bin/clairctl ./clairctl",
"chmod u+x ./clairctl",
"mkdir /etc/clairv4/config/",
"--- indexer: airgap: true --- matcher: disable_updaters: true ---",
"sudo podman run -it --rm --name clairv4 -p 8081:8081 -p 8088:8088 -e CLAIR_CONF=/clair/config.yaml -e CLAIR_MODE=combo -v /etc/clairv4/config:/clair:Z registry.redhat.io/quay/clair-rhel8:v3.13.3",
"./clairctl --config ./config.yaml export-updaters updates.gz",
"oc get svc -n quay-enterprise",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE example-registry-clair-app ClusterIP 172.30.224.93 <none> 80/TCP,8089/TCP 4d21h example-registry-clair-postgres ClusterIP 172.30.246.88 <none> 5432/TCP 4d21h",
"oc port-forward -n quay-enterprise service/example-registry-clair-postgres 5432:5432",
"indexer: connstring: host=localhost port=5432 dbname=postgres user=postgres password=postgres sslmode=disable 1 scanlock_retry: 10 layer_scan_concurrency: 5 migrations: true scanner: repo: rhel-repository-scanner: 2 repo2cpe_mapping_file: /data/cpe-map.json package: rhel_containerscanner: 3 name2repos_mapping_file: /data/repo-map.json",
"./clairctl --config ./clair-config.yaml import-updaters updates.gz",
"indexer: scanner: repo: rhel-repository-scanner: repo2cpe_mapping_file: /data/cpe-map.json package: rhel_containerscanner: name2repos_mapping_file: /data/repo-map.json"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/vulnerability_reporting_with_clair_on_red_hat_quay/clair-disconnected-environments |
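The guide above leaves the transfer of the exported bundle into the disconnected environment up to you; a minimal sketch using scp, with a hypothetical host and path, might look like this.
$ scp updates.gz user@disconnected-host:/tmp/updates.gz
$ ./clairctl --config ./clair-config.yaml import-updaters /tmp/updates.gz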
20.3. Input Methods | 20.3. Input Methods Changes in IBus Red Hat Enterprise Linux 7 includes support for the Intelligent Input Bus (IBus) version 1.5. Support for IBus is now integrated in GNOME. Input methods can be added using the gnome-control-center region command, and the gnome-control-center keyboard command can be used to set input hotkeys. For non-GNOME sessions, ibus can configure both XKB layouts and input methods in the ibus-setup tool and switch them with a hotkey. The default hotkey is Super + space , replacing Control + space in ibus included in Red Hat Enterprise Linux 6. This provides a similar UI which the user can see with the Alt + Tab combination. Multiple input methods can be switched using the Alt + Tab combination. Predictive Input Method for IBus ibus-typing-booster is a predictive input method for the ibus platform. It predicts complete words based on partial input. Users can select the desired word from a list of suggestions and improve their typing speed and spelling. ibus-typing-booster works also with the Hunspell dictionaries and can make suggestions for a language using a Hunspell dictionary. Note that the ibus-typing-booster package is an optional package, and therefore will not be installed as part of the input-methods group by default. For more detailed changes in input methods, see Desktop Migration and Administration Guide . | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.0_release_notes/sect-red_hat_enterprise_linux-7.0_release_notes-internationalization-input_methods |
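A short sketch of the configuration entry points named above; the first two commands apply to GNOME sessions and the third to non-GNOME sessions.
$ gnome-control-center region      # add input methods
$ gnome-control-center keyboard    # set input hotkeys
$ ibus-setup                       # configure XKB layouts and input methods outside GNOME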
Chapter 3. Manually creating IAM for Azure | Chapter 3. Manually creating IAM for Azure In environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace, you can put the Cloud Credential Operator (CCO) into manual mode before you install the cluster. 3.1. Alternatives to storing administrator-level secrets in the kube-system project The Cloud Credential Operator (CCO) manages cloud provider credentials as Kubernetes custom resource definitions (CRDs). You can configure the CCO to suit the security requirements of your organization by setting different values for the credentialsMode parameter in the install-config.yaml file. If you prefer not to store an administrator-level credential secret in the cluster kube-system project, you can set the credentialsMode parameter for the CCO to Manual when installing OpenShift Container Platform and manage your cloud credentials manually. Using manual mode allows each cluster component to have only the permissions it requires, without storing an administrator-level credential in the cluster. You can also use this mode if your environment does not have connectivity to the cloud provider public IAM endpoint. However, you must manually reconcile permissions with new release images for every upgrade. You must also manually supply credentials for every component that requests them. Additional resources For a detailed description of all available CCO credential modes and their supported platforms, see About the Cloud Credential Operator . 3.2. Manually create IAM The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure Change to the directory that contains the installation program and create the install-config.yaml file by running the following command: USD openshift-install create install-config --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Edit the install-config.yaml configuration file so that it contains the credentialsMode parameter set to Manual . Example install-config.yaml configuration file apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled ... 1 This line is added to set the credentialsMode parameter to Manual . Generate the manifests by running the following command from the directory that contains the installation program: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. 
From the directory that contains the installation program, obtain details of the OpenShift Container Platform release image that your openshift-install binary is built to use by running the following command: USD openshift-install version Example output release image quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 Locate all CredentialsRequest objects in this release image that target the cloud you are deploying on by running the following command: USD oc adm release extract quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 \ --credentials-requests \ --cloud=azure This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component-credentials-request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component-credentials-request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor ... secretRef: name: <component-secret> namespace: <component-namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component-secret> namespace: <component-namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region> Important The release image includes CredentialsRequest objects for Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set. You can identify these objects by their use of the release.openshift.io/feature-set: TechPreviewNoUpgrade annotation. If you are not using any of these features, do not create secrets for these objects. Creating secrets for Technology Preview features that you are not using can cause the installation to fail. If you are using any of these features, you must create secrets for the corresponding objects. To find CredentialsRequest objects with the TechPreviewNoUpgrade annotation, run the following command: USD grep "release.openshift.io/feature-set" * Example output 0000_30_capi-operator_00_credentials-request.yaml: release.openshift.io/feature-set: TechPreviewNoUpgrade From the directory that contains the installation program, proceed with your cluster creation: USD openshift-install create cluster --dir <installation_directory> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. Additional resources Updating a cluster using the web console Updating a cluster using the CLI 3.3. 
Next steps Install an OpenShift Container Platform cluster: Installing a cluster quickly on Azure with default options on installer-provisioned infrastructure Install a cluster with cloud customizations on installer-provisioned infrastructure Install a cluster with network customizations on installer-provisioned infrastructure | [
"openshift-install create install-config --dir <installation_directory>",
"apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled",
"openshift-install create manifests --dir <installation_directory>",
"openshift-install version",
"release image quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64",
"oc adm release extract quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 --credentials-requests --cloud=azure",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component-credentials-request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component-credentials-request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor secretRef: name: <component-secret> namespace: <component-namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component-secret> namespace: <component-namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region>",
"grep \"release.openshift.io/feature-set\" *",
"0000_30_capi-operator_00_credentials-request.yaml: release.openshift.io/feature-set: TechPreviewNoUpgrade",
"openshift-install create cluster --dir <installation_directory>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_azure/manually-creating-iam-azure |
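For reference, the data fields in the sample Secret object above must contain base64-encoded values before the manifest is saved into the openshift-install manifests directory. A minimal sketch of preparing one value, using a placeholder subscription ID; the UUID and the target file location are illustrative, not values required by the release image:

# Encode one Azure credential value; repeat for each azure_* field in the Secret
echo -n '00000000-0000-0000-0000-000000000000' | base64 -w0
# Paste the output into the matching data field of the Secret manifest, then save the
# manifest under <installation_directory>/manifests/ before running "create cluster"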
Chapter 13. Red Hat Enterprise Linux Atomic Host 7.6.5 | Chapter 13. Red Hat Enterprise Linux Atomic Host 7.6.5 13.1. Atomic Host OStree update : New Tree Version: 7.6.5 (hash: 52bd811e0f47458cebda889283064392251963536748ce94267a59f0fc7b3254) Changes since Tree Version 7.6.4 (hash: a403eb67b418b3fe30ba02a3bf8b00d63a6648baf4bf65457e1ae23f107d6e35) 13.2. Extras Updated packages : buildah-1.8.2-2.gite23314b.el7 container-selinux-2.99-1.el7_6 etcd-3.2.26-1.el7 oci-systemd-hook-0.2.0-1.git05e6923.el7_6 podman-1.3.2-1.git14fdcd0.el7 python-websocket-client-0.56.0-1.git3c25814.el7 13.2.1. Container Images Updated : Red Hat Enterprise Linux 7 Init Container Image (rhel7/rhel7-init) Red Hat Enterprise Linux 7.6 Container Image (rhel7.6, rhel7, rhel7/rhel, rhel) Red Hat Enterprise Linux 7.6 Container Image for aarch64 (rhel7.6, rhel7, rhel7/rhel, rhel) Red Hat Enterprise Linux Atomic Identity Management Server Container Image (rhel7/ipa-server) Red Hat Enterprise Linux Atomic Image (rhel-atomic, rhel7-atomic, rhel7/rhel-atomic) Red Hat Enterprise Linux Atomic Net-SNMP Container Image (rhel7/net-snmp) Red Hat Enterprise Linux Atomic OpenSCAP Container Image (rhel7/openscap) Red Hat Enterprise Linux Atomic SSSD Container Image (rhel7/sssd) Red Hat Enterprise Linux Atomic Support Tools Container Image (rhel7/support-tools) Red Hat Enterprise Linux Atomic Tools Container Image (rhel7/rhel-tools) Red Hat Enterprise Linux Atomic cockpit-ws Container Image (rhel7/cockpit-ws) Red Hat Enterprise Linux Atomic etcd Container Image (rhel7/etcd) Red Hat Enterprise Linux Atomic flannel Container Image (rhel7/flannel) Red Hat Enterprise Linux Atomic open-vm-tools Container Image (rhel7/open-vm-tools) Red Hat Enterprise Linux Atomic rsyslog Container Image (rhel7/rsyslog) Red Hat Enterprise Linux Atomic sadc Container Image (rhel7/sadc) Red Hat Universal Base Image 7 Container Image (rhel7/ubi7) Red Hat Universal Base Image 7 Init Container Image (rhel7/ubi7-init) Red Hat Universal Base Image 7 Minimal Container Image (rhel7/ubi7-minimal) | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_atomic_host/7/html/release_notes/red_hat_enterprise_linux_atomic_host_7_6_5 |
Chapter 7. Monitoring the Load-balancing service | Chapter 7. Monitoring the Load-balancing service To keep load balancing operational, you can use the load-balancer management network and create, modify, and delete load-balancing health monitors. Section 7.1, "Load-balancing management network" Section 7.2, "Load-balancing service instance monitoring" Section 7.3, "Load-balancing service pool member monitoring" Section 7.4, "Load balancer provisioning status monitoring" Section 7.5, "Load balancer functionality monitoring" Section 7.6, "About Load-balancing service health monitors" Section 7.7, "Creating Load-balancing service health monitors" Section 7.8, "Modifying Load-balancing service health monitors" Section 7.9, "Deleting Load-balancing service health monitors" Section 7.10, "Best practices for Load-balancing service HTTP health monitors" 7.1. Load-balancing management network The Red Hat OpenStack Platform (RHOSP) Load-balancing service (octavia) monitors load balancers through a project network referred to as the load-balancing management network . Hosts that run the Load-balancing service must have interfaces to connect to the load-balancing management network. The supported interface configuration works with the neutron Modular Layer 2 plug-in with the Open Virtual Network mechanism driver (ML2/OVN) or the Open vSwitch mechanism driver (ML2/OVS). Use of the interfaces with other mechanism drivers has not been tested. The default interfaces created at deployment are internal Open vSwitch (OVS) ports on the default integration bridge br-int . You must associate these interfaces with actual Networking service (neutron) ports allocated on the load-balancer management network. The default interfaces are by default named, o-hm0 . They are defined through standard interface configuration files on the Load-balancing service hosts. RHOSP director automatically configures a Networking service port and an interface for each Load-balancing service host during deployment. Port information and a template is used to create the interface configuration file, including: IP network address information including the IP and netmask MTU configuration the MAC address the Networking service port ID In the default OVS case, the Networking service port ID is used to register extra data with the OVS port. The Networking service recognizes this interface as belonging to the port and configures OVS so it can communicate on the load-balancer management network. By default, RHOSP configures security groups and firewall rules that allow the Load-balancing service controllers to communicate with its VM instances (amphorae) on TCP port 9443, and allows the heartbeat messages from the amphorae to arrive on the controllers on UDP port 5555. Different mechanism drivers might require additional or alternate requirements to allow communication between load-balancing services and the load balancers. 7.2. Load-balancing service instance monitoring The Load-balancing service (octavia) monitors the load balancing instances (amphorae) and initiates failovers and replacements if the amphorae malfunction. Any time a failover occurs, the Load-balancing service logs the failover in the corresponding health manager log on the controller in /var/log/containers/octavia . Use log analytics to monitor failover trends to address problems early. Problems such as Networking service (neutron) connectivity issues, Denial of Service attacks, and Compute service (nova) malfunctions often lead to higher failover rates for load balancers. 
7.3. Load-balancing service pool member monitoring The Load-balancing service (octavia) uses the health information from the underlying load balancing subsystems to determine the health of members of the load-balancing pool. Health information is streamed to the Load-balancing service database, and made available by the status tree or other API methods. For critical applications, you must poll for health information in regular intervals. 7.4. Load balancer provisioning status monitoring You can monitor the provisioning status of a load balancer and send alerts if the provisioning status is ERROR . Do not configure an alert to trigger when an application is making regular changes to the pool and enters several PENDING stages. The provisioning status of load balancer objects reflect the ability of the control plane to contact and successfully provision a create, update, and delete request. The operating status of a load balancer object reports on the current functionality of the load balancer. For example, a load balancer might have a provisioning status of ERROR , but an operating status of ONLINE . This might be caused by a Networking (neutron) failure that blocked that last requested update to the load balancer configuration from successfully completing. In this case, the load balancer continues to process traffic through the load balancer, but might not have applied the latest configuration updates yet. 7.5. Load balancer functionality monitoring You can monitor the operational status of your load balancer and its child objects. You can also use an external monitoring service that connects to your load balancer listeners and monitors them from outside of the cloud. An external monitoring service indicates if there is a failure outside of the Load-balancing service (octavia) that might impact the functionality of your load balancer, such as router failures, network connectivity issues, and so on. 7.6. About Load-balancing service health monitors A Load-balancing service (octavia) health monitor is a process that does periodic health checks on each back end member server to pre-emptively detect failed servers and temporarily pull them out of the pool. If the health monitor detects a failed server, it removes the server from the pool and marks the member in ERROR . After you have corrected the server and it is functional again, the health monitor automatically changes the status of the member from ERROR to ONLINE , and resumes passing traffic to it. Always use health monitors in production load balancers. If you do not have a health monitor, failed servers are not removed from the pool. This can lead to service disruption for web clients. There are several types of health monitors, as briefly described here: HTTP by default, probes the / path on the application server. HTTPS operates exactly like HTTP health monitors, but with TLS back end servers. If the servers perform client certificate validation, HAProxy does not have a valid certificate. In these cases, TLS-HELLO health monitoring is an alternative. TLS-HELLO ensures that the back end server responds to SSLv3-client hello messages. A TLS-HELLO health monitor does not check any other health metrics, like status code or body contents. PING sends periodic ICMP ping requests to the back end servers. You must configure back end servers to allow PINGs so that these health checks pass. Important A PING health monitor checks only if the member is reachable and responds to ICMP echo requests. 
PING health monitors do not detect if the application that runs on an instance is healthy. Use PING health monitors only in cases where an ICMP echo request is a valid health check. TCP opens a TCP connection to the back end server protocol port. The TCP application opens a TCP connection and, after the TCP handshake, closes the connection without sending any data. UDP-CONNECT performs a basic UDP port connect. A UDP-CONNECT health monitor might not work correctly if Destination Unreachable (ICMP type 3) is not enabled on the member server, or if it is blocked by a security rule. In these cases, a member server might be marked as having an operating status of ONLINE when it is actually down. 7.7. Creating Load-balancing service health monitors Use Load-balancing service (octavia) health monitors to avoid service disruptions for your users. The health monitors run periodic health checks on each back end server to pre-emptively detect failed servers and temporarily pull the servers out of the pool. Procedure Source your credentials file. Example Run the openstack loadbalancer healthmonitor create command, using argument values that are appropriate for your site. All health monitor types require the following configurable arguments: <pool> Name or ID of the pool of back-end member servers to be monitored. --type The type of health monitor. One of HTTP , HTTPS , PING , SCTP , TCP , TLS-HELLO , or UDP-CONNECT . --delay Number of seconds to wait between health checks. --timeout Number of seconds to wait for any given health check to complete. timeout must always be smaller than delay . --max-retries Number of health checks a back-end server must fail before it is considered down. Also, the number of health checks that a failed back-end server must pass to be considered up again. In addition, HTTP health monitor types also require the following arguments, which are set by default: --url-path Path part of the URL that should be retrieved from the back-end server. By default this is / . --http-method HTTP method that is used to retrieve the url_path . By default this is GET . --expected-codes List of HTTP status codes that indicate an OK health check. By default this is 200 . Example Verification Run the openstack loadbalancer healthmonitor list command and verify that your health monitor is running. Additional resources loadbalancer healthmonitor create in the Command line interface reference 7.8. Modifying Load-balancing service health monitors You can modify the configuration for Load-balancing service (octavia) health monitors when you want to change the interval for sending probes to members, the connection timeout interval, the HTTP method for requests, and so on. Procedure Source your credentials file. Example Modify your health monitor ( my-health-monitor ). In this example, a user is changing the time in seconds that the health monitor waits between sending probes to members. Example Verification Run the openstack loadbalancer healthmonitor show command to confirm your configuration changes. Additional resources loadbalancer healthmonitor set in the Command line interface reference loadbalancer healthmonitor show in the Command line interface reference 7.9. Deleting Load-balancing service health monitors You can remove a Load-balancing service (octavia) health monitor. Tip An alternative to deleting a health monitor is to disable it by using the openstack loadbalancer healthmonitor set --disable command. Procedure Source your credentials file. 
Example Delete the health monitor ( my-health-monitor ). Example Verification Run the openstack loadbalancer healthmonitor list command to verify that the health monitor you deleted no longer exists. Additional resources loadbalancer healthmonitor delete in the Command line interface reference 7.10. Best practices for Load-balancing service HTTP health monitors When you write the code that generates the health check in your web application, use the following best practices: The health monitor url-path does not require authentication to load. By default, the health monitor url-path returns an HTTP 200 OK status code to indicate a healthy server unless you specify alternate expected-codes . The health check does enough internal checks to ensure that the application is healthy and no more. Ensure that the following conditions are met for the application: Any required database or other external storage connections are up and running. The load is acceptable for the server on which the application runs. Your site is not in maintenance mode. Tests specific to your application are operational. The page generated by the health check should be small in size: It returns in a sub-second interval. It does not induce significant load on the application server. The page generated by the health check is never cached, although the code that runs the health check might reference cached data. For example, you might find it useful to run a more extensive health check using cron and store the results to disk. The code that generates the page at the health monitor url-path incorporates the results of this cron job in the tests it performs. Because the Load-balancing service only processes the HTTP status code returned, and because health checks are run so frequently, you can use the HEAD or OPTIONS HTTP methods to skip processing the entire page. | [
"source ~/overcloudrc",
"openstack loadbalancer healthmonitor create --name my-health-monitor --delay 10 --max-retries 4 --timeout 5 --type TCP lb-pool-1",
"source ~/overcloudrc",
"openstack loadbalancer healthmonitor set my_health_monitor --delay 600",
"openstack loadbalancer healthmonitor show my_health_monitor",
"source ~/overcloudrc",
"openstack loadbalancer healthmonitor delete my-health-monitor"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuring_load_balancing_as_a_service/monitor-lb-service_rhosp-lbaas |
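The health monitor created in the commands above is of type TCP; an HTTP health monitor adds the URL path, method, and expected status codes described in Section 7.7. A sketch under the assumption that the pool is named lb-pool-1 and the back-end servers expose a /healthz page:

openstack loadbalancer healthmonitor create \
  --name my-http-monitor \
  --delay 10 --timeout 5 --max-retries 4 \
  --type HTTP --url-path /healthz \
  --http-method GET --expected-codes 200 \
  lb-pool-1

Keeping the check page small and unauthenticated, as recommended in Section 7.10, keeps this probe cheap for the back-end servers to serve.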
Chapter 3. Release components | Chapter 3. Release components Node.js 22 LTS Builder Image for RHEL 8 Node.js 22 LTS Universal Base Image 8 Node.js 22 LTS Minimal Stand-alone Image for RHEL 8 Node.js 22 LTS Minimal Universal Base Image 8 Node.js 22 LTS Builder Image for RHEL 9 Node.js 22 LTS Universal Base Image 9 Node.js 22 LTS Minimal Stand-alone Image for RHEL 9 Node.js 22 LTS Minimal Universal Base Image 9 | null | https://docs.redhat.com/en/documentation/red_hat_build_of_node.js/22/html/release_notes_for_node.js_22/release-components-nodejs |
Chapter 143. KafkaConnectorSpec schema reference | Chapter 143. KafkaConnectorSpec schema reference Used in: KafkaConnector Property Property type Description class string The Class for the Kafka Connector. tasksMax integer The maximum number of tasks for the Kafka Connector. autoRestart AutoRestart Automatic restart of connector and tasks configuration. config map The Kafka Connector configuration. The following properties cannot be set: name, connector.class, tasks.max. pause boolean The pause property has been deprecated. Deprecated in Streams for Apache Kafka 2.6, use state instead. Whether the connector should be paused. Defaults to false. state string (one of [running, paused, stopped]) The state the connector should be in. Defaults to running. listOffsets ListOffsets Configuration for listing offsets. alterOffsets AlterOffsets Configuration for altering offsets. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-kafkaconnectorspec-reference |
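A minimal sketch of how the properties in this schema appear in a KafkaConnector resource; the connector class, topic, and cluster label are illustrative placeholders rather than values taken from the schema, and the apiVersion and cluster label follow the usual Streams for Apache Kafka custom resource conventions:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: my-source-connector
  labels:
    strimzi.io/cluster: my-connect-cluster   # Kafka Connect cluster that runs the connector
spec:
  class: org.apache.kafka.connect.file.FileStreamSourceConnector
  tasksMax: 1
  state: running          # preferred over the deprecated pause property
  autoRestart:
    enabled: true
  config:                 # must not set name, connector.class, or tasks.max here
    file: /opt/kafka/LICENSE
    topic: my-topic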
Disconnected installation mirroring | Disconnected installation mirroring OpenShift Container Platform 4.14 Mirroring the installation container images Red Hat OpenShift Documentation Team | [
"./mirror-registry install --quayHostname <host_example_com> --quayRoot <example_directory_name>",
"podman login -u init -p <password> <host_example_com>:8443> --tls-verify=false 1",
"sudo ./mirror-registry upgrade -v",
"sudo ./mirror-registry upgrade --quayHostname <host_example_com> --quayRoot <example_directory_name> --quayStorage <example_directory_name>/quay-storage -v",
"sudo ./mirror-registry upgrade --sqliteStorage <example_directory_name>/sqlite-storage -v",
"./mirror-registry install -v --targetHostname <host_example_com> --targetUsername <example_user> -k ~/.ssh/my_ssh_key --quayHostname <host_example_com> --quayRoot <example_directory_name>",
"podman login -u init -p <password> <host_example_com>:8443> --tls-verify=false 1",
"./mirror-registry upgrade -v --targetHostname <remote_host_url> --targetUsername <user_name> -k ~/.ssh/my_ssh_key",
"./mirror-registry upgrade -v --targetHostname <remote_host_url> --targetUsername <user_name> -k ~/.ssh/my_ssh_key --sqliteStorage <example_directory_name>/quay-storage",
"./mirror-registry install --quayHostname <host_example_com> --quayRoot <example_directory_name>",
"export QUAY=/USDHOME/quay-install",
"cp ~/ssl.crt USDQUAY/quay-config",
"cp ~/ssl.key USDQUAY/quay-config",
"systemctl --user restart quay-app",
"./mirror-registry uninstall -v --quayRoot <example_directory_name>",
"sudo systemctl status <service>",
"systemctl --user status <service>",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"cat ./pull-secret | jq . > <path>/<pull_secret_file_in_json> 1",
"{ \"auths\": { \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }",
"echo -n '<user_name>:<password>' | base64 -w0 1 BGVtbYk3ZHAtqXs=",
"\"auths\": { \"<mirror_registry>\": { 1 \"auth\": \"<credentials>\", 2 \"email\": \"[email protected]\" } },",
"{ \"auths\": { \"registry.example.com\": { \"auth\": \"BGVtbYk3ZHAtqXs=\", \"email\": \"[email protected]\" }, \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }",
"OCP_RELEASE=<release_version>",
"LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>'",
"LOCAL_REPOSITORY='<local_repository_name>'",
"PRODUCT_REPO='openshift-release-dev'",
"LOCAL_SECRET_JSON='<path_to_pull_secret>'",
"RELEASE_NAME=\"ocp-release\"",
"ARCHITECTURE=<cluster_architecture> 1",
"REMOVABLE_MEDIA_PATH=<path> 1",
"oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --dry-run",
"oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE}",
"oc image mirror -a USD{LOCAL_SECRET_JSON} --from-dir=USD{REMOVABLE_MEDIA_PATH}/mirror \"file://openshift/release:USD{OCP_RELEASE}*\" USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} 1",
"oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}",
"oc adm release extract -a USD{LOCAL_SECRET_JSON} --icsp-file=<file> --command=openshift-install \"USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}\"",
"oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-install \"USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}\"",
"openshift-install",
"podman login registry.redhat.io",
"REG_CREDS=USD{XDG_RUNTIME_DIR}/containers/auth.json",
"podman login <mirror_registry>",
"oc adm catalog mirror <index_image> \\ 1 <mirror_registry>:<port>[/<repository>] \\ 2 [-a USD{REG_CREDS}] \\ 3 [--insecure] \\ 4 [--index-filter-by-os='<platform>/<arch>'] \\ 5 [--manifests-only] 6",
"src image has index label for database path: /database/index.db using database path mapping: /database/index.db:/tmp/153048078 wrote database to /tmp/153048078 1 wrote mirroring manifests to manifests-redhat-operator-index-1614211642 2",
"oc adm catalog mirror <index_image> \\ 1 file:///local/index \\ 2 -a USD{REG_CREDS} \\ 3 --insecure \\ 4 --index-filter-by-os='<platform>/<arch>' 5",
"info: Mirroring completed in 5.93s (5.915MB/s) wrote mirroring manifests to manifests-my-index-1614985528 1 To upload local images to a registry, run: oc adm catalog mirror file://local/index/myrepo/my-index:v1 REGISTRY/REPOSITORY 2",
"podman login <mirror_registry>",
"oc adm catalog mirror file://local/index/<repository>/<index_image>:<tag> \\ 1 <mirror_registry>:<port>[/<repository>] \\ 2 -a USD{REG_CREDS} \\ 3 --insecure \\ 4 --index-filter-by-os='<platform>/<arch>' 5",
"oc adm catalog mirror <mirror_registry>:<port>/<index_image> <mirror_registry>:<port>[/<repository>] --manifests-only \\ 1 [-a USD{REG_CREDS}] [--insecure]",
"manifests-<index_image_name>-<random_number>",
"manifests-index/<repository>/<index_image_name>-<random_number>",
"tar xvzf oc-mirror.tar.gz",
"chmod +x oc-mirror",
"sudo mv oc-mirror /usr/local/bin/.",
"oc mirror help",
"cat ./pull-secret | jq . > <path>/<pull_secret_file_in_json> 1",
"{ \"auths\": { \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }",
"mkdir -p <directory_name>",
"cp <path>/<pull_secret_file_in_json> <directory_name>/<auth_file>",
"echo -n '<user_name>:<password>' | base64 -w0 1 BGVtbYk3ZHAtqXs=",
"\"auths\": { \"<mirror_registry>\": { 1 \"auth\": \"<credentials>\", 2 \"email\": \"[email protected]\" } },",
"{ \"auths\": { \"registry.example.com\": { \"auth\": \"BGVtbYk3ZHAtqXs=\", \"email\": \"[email protected]\" }, \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }",
"oc mirror init --registry example.com/mirror/oc-mirror-metadata > imageset-config.yaml 1",
"kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 4 1 storageConfig: 2 registry: imageURL: example.com/mirror/oc-mirror-metadata 3 skipTLS: false mirror: platform: channels: - name: stable-4.14 4 type: ocp graph: true 5 operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.14 6 packages: - name: serverless-operator 7 channels: - name: stable 8 additionalImages: - name: registry.redhat.io/ubi9/ubi:latest 9 helm: {}",
"oc mirror --config=./imageset-config.yaml \\ 1 docker://registry.example:5000 2",
"oc mirror --config=./imageset-config.yaml \\ 1 file://<path_to_output_directory> 2",
"cd <path_to_output_directory>",
"ls",
"mirror_seq1_000000.tar",
"oc mirror --from=./mirror_seq1_000000.tar \\ 1 docker://registry.example:5000 2",
"oc apply -f ./oc-mirror-workspace/results-1639608409/",
"oc apply -f ./oc-mirror-workspace/results-1639608409/release-signatures/",
"oc get imagecontentsourcepolicy",
"oc get catalogsource -n openshift-marketplace",
"oc mirror --config=./imageset-config.yaml \\ 1 docker://registry.example:5000 \\ 2 --dry-run 3",
"Checking push permissions for registry.example:5000 Creating directory: oc-mirror-workspace/src/publish Creating directory: oc-mirror-workspace/src/v2 Creating directory: oc-mirror-workspace/src/charts Creating directory: oc-mirror-workspace/src/release-signatures No metadata detected, creating new workspace wrote mirroring manifests to oc-mirror-workspace/operators.1658342351/manifests-redhat-operator-index info: Planning completed in 31.48s info: Dry run complete Writing image mapping to oc-mirror-workspace/mapping.txt",
"cd oc-mirror-workspace/",
"kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 storageConfig: local: path: /home/user/metadata 1 mirror: platform: channels: - name: stable-4.14 2 type: ocp graph: false operators: - catalog: oci:///home/user/oc-mirror/my-oci-catalog 3 targetCatalog: my-namespace/redhat-operator-index 4 packages: - name: aws-load-balancer-operator - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.14 5 packages: - name: rhacs-operator additionalImages: - name: registry.redhat.io/ubi9/ubi:latest 6",
"oc mirror --config=./imageset-config.yaml \\ 1 docker://registry.example:5000 2",
"[[registry]] location = \"registry.redhat.io:5000\" insecure = false blocked = false mirror-by-digest-only = true prefix = \"\" [[registry.mirror]] location = \"preprod-registry.example.com\" insecure = false",
"additionalImages: - name: registry.redhat.io/ubi8/ubi:latest",
"local: - name: podinfo path: /test/podinfo-5.0.0.tar.gz",
"repositories: - name: podinfo url: https://example.github.io/podinfo charts: - name: podinfo version: 5.0.0",
"operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.14 packages: - name: elasticsearch-operator minVersion: '2.4.0'",
"operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.14 packages: - name: elasticsearch-operator minVersion: '5.2.3-31'",
"architectures: - amd64 - arm64 - multi - ppc64le - s390x",
"channels: - name: stable-4.10 - name: stable-4.14",
"apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: local: path: /home/user/metadata mirror: platform: channels: - name: stable-4.12 minVersion: 4.11.37 maxVersion: 4.12.15 shortestPath: true",
"apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: registry: imageURL: example.com/mirror/oc-mirror-metadata skipTLS: false mirror: platform: architectures: - \"multi\" channels: - name: stable-4.13 minVersion: 4.13.4 maxVersion: 4.13.6",
"apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: local: path: /home/user/metadata mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.14 packages: - name: rhacs-operator channels: - name: stable minVersion: 4.0.1",
"kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 storageConfig: registry: imageURL: mylocalregistry/ocp-mirror/openshift4 skipTLS: false mirror: platform: channels: - name: stable-4.11 type: ocp graph: true operators: - catalog: registry.redhat.io/redhat/certified-operator-index:v4.14 packages: - name: nutanixcsioperator channels: - name: stable additionalImages: - name: registry.redhat.io/ubi9/ubi:latest",
"apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: registry: imageURL: example.com/mirror/oc-mirror-metadata skipTLS: false mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.14 packages: - name: elasticsearch-operator channels: - name: stable-5.7 - name: stable",
"apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: registry: imageURL: example.com/mirror/oc-mirror-metadata skipTLS: false mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.14 full: true",
"apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: registry: imageURL: example.com/mirror/oc-mirror-metadata skipTLS: false mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.14 targetCatalog: my-namespace/my-operator-catalog",
"apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration archiveSize: 4 storageConfig: registry: imageURL: example.com/mirror/oc-mirror-metadata skipTLS: false mirror: platform: architectures: - \"s390x\" channels: - name: stable-4.14 operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.14 helm: repositories: - name: redhat-helm-charts url: https://raw.githubusercontent.com/redhat-developer/redhat-helm-charts/master charts: - name: ibm-mongodb-enterprise-helm version: 0.2.0 additionalImages: - name: registry.redhat.io/ubi9/ubi:latest",
"kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v2alpha1 mirror: platform: graph: true # Required for the OSUS Operator architectures: - amd64 channels: - name: stable-4.12 minVersion: '4.12.28' maxVersion: '4.12.28' shortestPath: true type: ocp - name: eus-4.14 minVersion: '4.12.28' maxVersion: '4.14.16' shortestPath: true type: ocp"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html-single/disconnected_installation_mirroring/index |
Providing feedback on Red Hat build of OpenJDK documentation | Providing feedback on Red Hat build of OpenJDK documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/getting_started_with_eclipse_temurin_8/providing-direct-documentation-feedback_openjdk |
Chapter 1. About Observability | Chapter 1. About Observability Red Hat OpenShift Observability provides real-time visibility, monitoring, and analysis of various system metrics, logs, traces, and events to help users quickly diagnose and troubleshoot issues before they impact systems or applications. To help ensure the reliability, performance, and security of your applications and infrastructure, OpenShift Container Platform offers the following Observability components: Monitoring Logging Distributed tracing Red Hat build of OpenTelemetry Network Observability Power monitoring Red Hat OpenShift Observability connects open-source observability tools and technologies to create a unified Observability solution. The components of Red Hat OpenShift Observability work together to help you collect, store, deliver, analyze, and visualize data. Note With the exception of monitoring, Red Hat OpenShift Observability components have distinct release cycles separate from the core OpenShift Container Platform release cycles. See the Red Hat OpenShift Operator Life Cycles page for their release compatibility. 1.1. Monitoring Monitor the in-cluster health and performance of your applications running on OpenShift Container Platform with metrics and customized alerts for CPU and memory usage, network connectivity, and other resource usage. Monitoring stack components are deployed and managed by the Cluster Monitoring Operator. Monitoring stack components are deployed by default in every OpenShift Container Platform installation and are managed by the Cluster Monitoring Operator (CMO). These components include Prometheus, Alertmanager, Thanos Querier, and others. The CMO also deploys the Telemeter Client, which sends a subset of data from platform Prometheus instances to Red Hat to facilitate Remote Health Monitoring for clusters. For more information, see About OpenShift Container Platform monitoring and About remote health monitoring . 1.2. Logging Collect, visualize, forward, and store log data to troubleshoot issues, identify performance bottlenecks, and detect security threats. In logging 5.7 and later versions, users can configure the LokiStack deployment to produce customized alerts and recorded metrics. For more information, see About Logging . 1.3. Distributed tracing Store and visualize large volumes of requests passing through distributed systems, across the whole stack of microservices, and under heavy loads. Use it for monitoring distributed transactions, gathering insights into your instrumented services, network profiling, performance and latency optimization, root cause analysis, and troubleshooting the interaction between components in modern cloud-native microservices-based applications. For more information, see Distributed tracing architecture . 1.4. Red Hat build of OpenTelemetry Instrument, generate, collect, and export telemetry traces, metrics, and logs to analyze and understand your software's performance and behavior. Use open-source back ends like Tempo or Prometheus, or use commercial offerings. Learn a single set of APIs and conventions, and own the data that you generate. For more information, see Red Hat build of OpenTelemetry . 1.5. Network Observability Observe the network traffic for OpenShift Container Platform clusters and create network flows with the Network Observability Operator. View and analyze the stored network flows information in the OpenShift Container Platform console for further insight and troubleshooting. 
For more information, see Network Observability overview . 1.6. Power monitoring Monitor the power usage of workloads and identify the most power-consuming namespaces running in a cluster with key power consumption metrics, such as CPU or DRAM measured at the container level. Visualize energy-related system statistics with the Power monitoring Operator. For more information, see Power monitoring overview . | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/observability_overview/observability-overview |
B.3. Using KVM Virtualization on ARM Systems | B.3. Using KVM Virtualization on ARM Systems Important KVM virtualization is provided in Red Hat Enterprise Linux 7.5 and later for the 64-bit ARM architecture. As such, KVM virtualization on ARM systems is not supported by Red Hat, not intended for use in a production environment, and may not address known security vulnerabilities. In addition, because KVM virtualization on ARM is still in rapid development, the information below is not guaranteed to be accurate or complete. Installation To use install virtualization on Red Hat Enterprise Linux 7.5 for ARM: Install the host system from the bootable image on the Customer Portal . After the system is installed, install the virtualization stack on the system by using the following command: Make sure you have the Optional channel enabled for the installation to succeed. For more information, see Adding the Optional and Supplementary Repositories . Architecture Specifics KVM virtualization on Red Hat Enterprise Linux 7.5 for the 64-bit ARM architecture differs from KVM on AMD64 and Intel 64 systems in the following: PXE booting is only supported with the virtio-net-device and virtio-net-pci network interface controllers (NICs). In addition, the built-in VirtioNetDxe driver of the ARM Architecture Virtual Machine Firmware (AAVMF) needs to be used for PXE booting. Note that iPXE option ROMs are not supported. Only up to 123 virtual CPUs (vCPUs) can be allocated to a single guest. | [
"yum install qemu-kvm-ma libvirt libvirt-client virt-install AAVMF"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/appe-KVM_on_ARM |
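Because PXE booting on these hosts works only with the virtio-net-device and virtio-net-pci NICs together with the built-in AAVMF firmware, a guest intended for network boot needs the virtio network model selected explicitly. A hedged virt-install sketch; the bridge name, guest name, and sizes are placeholders chosen for illustration:

virt-install --name arm-guest --memory 4096 --vcpus 4 \
  --arch aarch64 --boot uefi \
  --network bridge=br0,model=virtio \
  --pxe --disk size=20 --os-variant rhel7.5

Keep the 123-vCPU ceiling noted above in mind when sizing larger guests.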
Chapter 10. Revising project versions | Chapter 10. Revising project versions You can revise the version number of a project in Red Hat Process Automation Manager before you build and deploy a new instance of the project. Creating a new version of a project preserves the old version in case there is a problem with the new one and you need to revert back. Prerequisites KIE Server is deployed and connected to Business Central. Procedure In Business Central, go to Menu Design Projects . Click the project you want to deploy, for example Mortgage_Process . Click Deploy . If there is no container with the project name, a container with default values is automatically created. If an older version of the project is already deployed, go to the project settings and change the project version. When finished, save the change and click Deploy . This will deploy a new version of the same project with the latest changes in place, alongside the older version(s). Note You can also select the Build & Install option to build the project and publish the KJAR file to the configured Maven repository without deploying to a KIE Server. In a development environment, you can click Deploy to deploy the built KJAR file to a KIE Server without stopping any running instances (if applicable), or click Redeploy to deploy the built KJAR file and replace all instances. The time you deploy or redeploy the built KJAR, the deployment unit (KIE container) is automatically updated in the same target KIE Server. In a production environment, the Redeploy option is disabled and you can click Deploy only to deploy the built KJAR file to a new deployment unit (KIE container) on a KIE Server. To configure the KIE Server environment mode, set the org.kie.server.mode system property to org.kie.server.mode=development or org.kie.server.mode=production . To configure the deployment behavior for a corresponding project in Business Central, go to project Settings General Settings Version and toggle the Development Mode option. By default, KIE Server and all new projects in Business Central are in development mode. You cannot deploy a project with Development Mode turned on or with a manually added SNAPSHOT version suffix to a KIE Server that is in production mode. To review project deployment details, click View deployment details in the deployment banner at the top of the screen or in the Deploy drop-down menu. This option directs you to the Menu Deploy Execution Servers page. To verify process definitions, click Menu Manage Process Definitions , and click . Click in the Actions column and select Start to start a new instance of the process. | null | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/deploying_and_managing_red_hat_process_automation_manager_services/revise-project-ver |
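For reference, org.kie.server.mode is an ordinary JVM system property, so it can be passed on the KIE Server start command. A sketch assuming a typical Red Hat JBoss EAP installation using the standalone-full profile; adjust the script path and profile to your environment:

./standalone.sh -c standalone-full.xml -Dorg.kie.server.mode=production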
25.6. Configuring rsyslog on a Logging Server | 25.6. Configuring rsyslog on a Logging Server The rsyslog service provides facilities both for running a logging server and for configuring individual systems to send their log files to the logging server. See Example 25.12, "Reliable Forwarding of Log Messages to a Server" for information on client rsyslog configuration. The rsyslog service must be installed on the system that you intend to use as a logging server and all systems that will be configured to send logs to it. Rsyslog is installed by default in Red Hat Enterprise Linux 6. If required, to ensure that it is, enter the following command as root : The default protocol and port for syslog traffic is UDP and 514 , as listed in the /etc/services file. However, rsyslog defaults to using TCP on port 514 . In the configuration file, /etc/rsyslog.conf , TCP is indicated by @@ . Other ports are sometimes used in examples, however SELinux is only configured to allow sending and receiving on the following ports by default: The semanage utility is provided as part of the policycoreutils-python package. If required, install the package as follows: In addition, by default the SELinux type for rsyslog , rsyslogd_t , is configured to permit sending and receiving to the remote shell ( rsh ) port with SELinux type rsh_port_t , which defaults to TCP on port 514 . Therefore it is not necessary to use semanage to explicitly permit TCP on port 514 . For example, to check what SELinux is set to permit on port 514 , enter a command as follows: For more information on SELinux, see Red Hat Enterprise Linux 6 SELinux User Guide . Perform the steps in the following procedures on the system that you intend to use as your logging server. All steps in these procedure must be made as the root user. Procedure 25.5. Configure SELinux to Permit rsyslog Traffic on a Port If required to use a new port for rsyslog traffic, follow this procedure on the logging server and the clients. For example, to send and receive TCP traffic on port 10514 , proceed as follows: Review the SELinux ports by entering the following command: If the new port was already configured in /etc/rsyslog.conf , restart rsyslog now for the change to take effect: Verify which ports rsyslog is now listening to: See the semanage-port(8) manual page for more information on the semanage port command. Procedure 25.6. Configuring The iptables Firewall Configure the iptables firewall to allow incoming rsyslog traffic. For example, to allow TCP traffic on port 10514 , proceed as follows: Open the /etc/sysconfig/iptables file in a text editor. Add an INPUT rule allowing TCP traffic on port 10514 to the file. The new rule must appear before any INPUT rules that REJECT traffic. Save the changes to the /etc/sysconfig/iptables file. Restart the iptables service for the firewall changes to take effect. Procedure 25.7. Configuring rsyslog to Receive and Sort Remote Log Messages Open the /etc/rsyslog.conf file in a text editor and proceed as follows: Add these lines below the modules section but above the Provides UDP syslog reception section: Replace the default Provides TCP syslog reception section with the following: Save the changes to the /etc/rsyslog.conf file. The rsyslog service must be running on both the logging server and the systems attempting to log to it. Use the service command to start the rsyslog service. 
To ensure the rsyslog service starts automatically in future, enter the following command as root: Your log server is now configured to receive and store log files from the other systems in your environment. 25.6.1. Using The New Template Syntax on a Logging Server Rsyslog 7 has a number of different templates styles. The string template most closely resembles the legacy format. Reproducing the templates from the example above using the string format would look as follows: These templates can also be written in the list format as follows: This template text format might be easier to read for those new to rsyslog and therefore can be easier to adapt as requirements change. To complete the change to the new syntax, we need to reproduce the module load command, add a rule set, and then bind the rule set to the protocol, port, and ruleset: | [
"~]# yum install rsyslog",
"~]# semanage port -l | grep syslog syslogd_port_t tcp 6514, 601 syslogd_port_t udp 514, 6514, 601",
"~]# yum install policycoreutils-python",
"~]# semanage port -l | grep 514 output omitted rsh_port_t tcp 514 syslogd_port_t tcp 6514, 601 syslogd_port_t udp 514, 6514, 601",
"~]# semanage port -a -t syslogd_port_t -p tcp 10514",
"~]# semanage port -l | grep syslog",
"~]# service rsyslog restart",
"~]# netstat -tnlp | grep rsyslog tcp 0 0 0.0.0.0: 10514 0.0.0.0:* LISTEN 2528/rsyslogd tcp 0 0 :::10514 :::* LISTEN 2528/rsyslogd",
"-A INPUT -m state --state NEW -m tcp -p tcp --dport 10514 -j ACCEPT",
"~]# service iptables restart",
"Define templates before the rules that use them ### Per-Host Templates for Remote Systems ### USDtemplate TmplAuthpriv, \"/var/log/remote/auth/%HOSTNAME%/%PROGRAMNAME:::secpath-replace%.log\" USDtemplate TmplMsg, \"/var/log/remote/msg/%HOSTNAME%/%PROGRAMNAME:::secpath-replace%.log\"",
"Provides TCP syslog reception USDModLoad imtcp Adding this ruleset to process remote messages USDRuleSet remote1 authpriv.* ?TmplAuthpriv *.info;mail.none;authpriv.none;cron.none ?TmplMsg USDRuleSet RSYSLOG_DefaultRuleset #End the rule set by switching back to the default rule set USDInputTCPServerBindRuleset remote1 #Define a new input and bind it to the \"remote1\" rule set USDInputTCPServerRun 10514",
"~]# service rsyslog start",
"~]# chkconfig rsyslog on",
"template(name=\"TmplAuthpriv\" type=\"string\" string=\"/var/log/remote/auth/%HOSTNAME%/%PROGRAMNAME:::secpath-replace%.log\" ) template(name=\"TmplMsg\" type=\"string\" string=\"/var/log/remote/msg/%HOSTNAME%/%PROGRAMNAME:::secpath-replace%.log\" )",
"template(name=\"TmplAuthpriv\" type=\"list\") { constant(value=\"/var/log/remote/auth/\") property(name=\"hostname\") constant(value=\"/\") property(name=\"programname\" SecurePath=\"replace\") constant(value=\".log\") } template(name=\"TmplMsg\" type=\"list\") { constant(value=\"/var/log/remote/msg/\") property(name=\"hostname\") constant(value=\"/\") property(name=\"programname\" SecurePath=\"replace\") constant(value=\".log\") }",
"module(load=\"imtcp\") ruleset(name=\"remote1\"){ authpriv.* action(type=\"omfile\" DynaFile=\"TmplAuthpriv\") *.info;mail.none;authpriv.none;cron.none action(type=\"omfile\" DynaFile=\"TmplMsg\") } input(type=\"imtcp\" port=\"10514\" ruleset=\"remote1\")"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s1-configuring_rsyslog_on_a_logging_server |
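The configuration above covers only the server side; each client referenced in Example 25.12 needs a matching forwarding rule pointing at the port opened here. A sketch of a client-side /etc/rsyslog.conf fragment, assuming a log server reachable as logserver.example.com and the TCP port 10514 used above:

# Forward all messages to the central log server over TCP (@@ selects TCP)
*.* @@logserver.example.com:10514

Restart the rsyslog service on the client afterwards for the rule to take effect.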
Chapter 1. Release Schedule | Chapter 1. Release Schedule The following table lists the dates of the Red Hat OpenStack Platform 16.2 GA, along with the dates of each subsequent asynchronous release for core components: Table 1.1. Red Hat OpenStack Platform 16.2 core component release schedule Release Date 16.2.0 (General Availability) 15 September, 2021 | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/package_manifest/ch01 |
14.4.3. Using the sftp Utility | 14.4.3. Using the sftp Utility The sftp utility can be used to open a secure, interactive FTP session. In its design, it is similar to ftp except that it uses a secure, encrypted connection. To connect to a remote system, use a command in the following form: For example, to log in to a remote machine named penguin.example.com with john as a user name, type: After you enter the correct password, you will be presented with a prompt. The sftp utility accepts a set of commands similar to those used by ftp (see Table 14.3, "A selection of available sftp commands" ). Table 14.3. A selection of available sftp commands Command Description ls [ directory ] List the content of a remote directory . If none is supplied, a current working directory is used by default. cd directory Change the remote working directory to directory . mkdir directory Create a remote directory . rmdir path Remove a remote directory . put localfile [ remotefile ] Transfer localfile to a remote machine. get remotefile [ localfile ] Transfer remotefile from a remote machine. For a complete list of available commands, see the sftp (1) manual page. | [
"sftp username @ hostname",
"~]USD sftp [email protected] [email protected]'s password: Connected to penguin.example.com. sftp>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-ssh-clients-sftp |
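A short interactive session tying together the commands from Table 14.3; the file and directory names are examples only. The session uploads a local file, downloads a remote one under a new name, creates a remote directory, and exits:

sftp> put report.txt
sftp> get notes.txt notes-local.txt
sftp> mkdir backups
sftp> exit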
Chapter 6. Scaling storage of Microsoft Azure OpenShift Data Foundation cluster | Chapter 6. Scaling storage of Microsoft Azure OpenShift Data Foundation cluster 6.1. Scaling up storage capacity of Microsoft Azure OpenShift Data Foundation cluster To increase the storage capacity in a dynamically created Microsoft Azure storage cluster on a user-provisioned infrastructure, you can add storage capacity and performance to your configured Red Hat OpenShift Data Foundation worker nodes. You can scale up storage capacity of a Microsoft Azure Red Hat OpenShift Data Foundation cluster in two ways: Scaling up storage capacity on a Microsoft Azure cluster by adding a new set of OSDs . Scaling up storage capacity on a Microsoft Azure cluster by resizing existing OSDs . 6.1.1. Scaling up storage capacity on a cluster by adding a new set of OSDs To scale the storage capacity of your configured Red Hat OpenShift Data Foundation worker nodes, you can increase the capacity by adding three disks at a time. Three disks are needed since OpenShift Data Foundation uses a replica count of 3 to maintain the high availability. So the amount of storage consumed is three times the usable space. Note Usable space might vary when encryption is enabled or replica 2 pools are being used. To increase the storage capacity in a dynamically created storage cluster on an user-provisioned infrastructure, you can add storage capacity and performance to your configured Red Hat OpenShift Data Foundation worker nodes. Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. The disk should be of the same size and type as used during initial deployment. Procedure Log in to the OpenShift Web Console. Click Operators Installed Operators . Click OpenShift Data Foundation Operator. Click the Storage Systems tab. Click the Action Menu (...) on the far right of the storage system name to extend the options menu. Select Add Capacity from the options menu. Select the Storage Class . Choose the storage class which you wish to use to provision new storage devices. Click Add . To check the status, navigate to Storage Data Foundation and verify that the Storage System in the Status card has a green tick. Verification steps Verify the Raw Capacity card. In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Block and File tab, check the Raw Capacity card. Note that the capacity increases based on your selections. Note The raw capacity does not take replication into account and shows the full capacity. Verify that the new object storage devices (OSDs) and their corresponding new Persistent Volume Claims (PVCs) are created. To view the state of the newly created OSDs: Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. To view the state of the PVCs: Click Storage Persistent Volume Claims from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. 
Identify the nodes where the new OSD pods are running. <OSD-pod-name> Is the name of the OSD pod. For example: Example output: For each of the nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the selected hosts. <node-name> Is the name of the node. Check for the crypt keyword beside the ocs-deviceset names. Important Cluster reduction is supported only with the Red Hat Support Team's assistance. 6.1.2. Scaling up storage capacity on a cluster by resizing existing OSDs To increase the storage capacity on a cluster, you can add storage capacity by resizing existing OSDs. Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Procedure Update the dataPVCTemplate size for the storageDeviceSets with the new desired size using the oc patch command. In this example YAML, the storage parameter under storageDeviceSets reflects the current size of 512Gi . Using the oc patch command: Get the current OSD storage for the storageDeviceSets you are increasing storage for: Increase the storage with the desired value (the following example reflect the size change of 2Ti): Wait for the OSDs to restart. Confirm that the resize took effect: Verify that for all the resized OSDs, resize is completed and reflected correctly in the CAPACITY column of the command output. If the resize did not take effect, restart the OSD pods again. It may take multiple restarts for the resize to complete. 6.2. Scaling out storage capacity on a Microsoft Azure cluster OpenShift Data Foundation is highly scalable. It can be scaled out by adding new nodes with required storage and enough hardware resources in terms of CPU and RAM. Practically there is no limit on the number of nodes which can be added but from the support perspective 2000 nodes is the limit for OpenShift Data Foundation. Scaling out storage capacity can be broken down into two steps Adding new node Scaling up the storage capacity Note OpenShift Data Foundation does not support heterogeneous OSD/Disk sizes. 6.2.1. Adding a node to an installer-provisioned infrastructure Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Procedure Navigate to Compute Machine Sets . On the machine set where you want to add nodes, select Edit Machine Count . Add the amount of nodes, and click Save . Click Compute Nodes and confirm if the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node. For the new node, click Action menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . Note It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them. In case of bare metal installer-provisioned infrastructure deployment, you must expand the cluster first. For instructions, see Expanding the cluster . Verification steps Execute the following command in the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* To scale up storage capacity: For dynamic storage devices, see Scaling up storage capacity on a cluster . 6.2.2. Scaling up storage capacity To scale up storage capacity, see Scaling up storage capacity on a cluster . | [
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/ <OSD-pod-name>",
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm",
"NODE compute-1",
"oc debug node/ <node-name>",
"chroot /host",
"lsblk",
"storageDeviceSets: - name: example-deviceset count: 3 resources: {} placement: {} dataPVCTemplate: spec: storageClassName: accessModes: - ReadWriteOnce volumeMode: Block resources: requests: storage: 512Gi",
"get storagecluster ocs-storagecluster -n openshift-storage -o jsonpath=' {.spec.storageDeviceSets[0].dataPVCTemplate.spec.resources.requests.storage} ' 512Gi",
"patch storagecluster ocs-storagecluster -n openshift-storage --type merge --patch \"USD(oc get storagecluster ocs-storagecluster -n openshift-storage -o jsonpath=' {.spec.storageDeviceSets[0]} ' | jq '.dataPVCTemplate.spec.resources.requests.storage=\"2Ti\"' | jq -c '{spec: {storageDeviceSets: [.]}}')\" storagecluster.ocs.openshift.io/ocs-storagecluster patched",
"oc get pvc -l ceph.rook.io/DeviceSet -n openshift-storage",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/scaling_storage/scaling_storage_of_microsoft_azure_openshift_data_foundation_cluster |
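The resize and node-addition procedures above can be spot-checked from the CLI with the same commands listed for this chapter. The following is an illustrative sketch only; it assumes the default openshift-storage namespace, and <osd_pod_name> is a placeholder:

# Confirm that resized OSD PVCs report the new size in the CAPACITY column
oc get pvc -l ceph.rook.io/DeviceSet -n openshift-storage
# Confirm that the OSD pods have restarted and are in the Running state
oc get pods -n openshift-storage | grep rook-ceph-osd
# If a resized PVC still reports the old size, restart the corresponding OSD pod
oc delete pod <osd_pod_name> -n openshift-storage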
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/login_module_reference/making-open-source-more-inclusive |
Chapter 34. federation | Chapter 34. federation This chapter describes the commands under the federation command. 34.1. federation domain list List accessible domains Usage: Table 34.1. Command arguments Value Summary -h, --help Show this help message and exit Table 34.2. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 34.3. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 34.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 34.5. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 34.2. federation project list List accessible projects Usage: Table 34.6. Command arguments Value Summary -h, --help Show this help message and exit Table 34.7. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 34.8. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 34.9. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 34.10. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 34.3. federation protocol create Create new federation protocol Usage: Table 34.11. Positional arguments Value Summary <name> New federation protocol name (must be unique per identity provider) Table 34.12. Command arguments Value Summary -h, --help Show this help message and exit --identity-provider <identity-provider> Identity provider that will support the new federation protocol (name or ID) (required) --mapping <mapping> Mapping that is to be used (name or id) (required) Table 34.13. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 34.14. 
JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 34.15. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 34.16. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 34.4. federation protocol delete Delete federation protocol(s) Usage: Table 34.17. Positional arguments Value Summary <federation-protocol> Federation protocol(s) to delete (name or id) Table 34.18. Command arguments Value Summary -h, --help Show this help message and exit --identity-provider <identity-provider> Identity provider that supports <federation-protocol> (name or ID) (required) 34.5. federation protocol list List federation protocols Usage: Table 34.19. Command arguments Value Summary -h, --help Show this help message and exit --identity-provider <identity-provider> Identity provider to list (name or id) (required) Table 34.20. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 34.21. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 34.22. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 34.23. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 34.6. federation protocol set Set federation protocol properties Usage: Table 34.24. Positional arguments Value Summary <name> Federation protocol to modify (name or id) Table 34.25. Command arguments Value Summary -h, --help Show this help message and exit --identity-provider <identity-provider> Identity provider that supports <federation-protocol> (name or ID) (required) --mapping <mapping> Mapping that is to be used (name or id) 34.7. federation protocol show Display federation protocol details Usage: Table 34.26. Positional arguments Value Summary <federation-protocol> Federation protocol to display (name or id) Table 34.27. Command arguments Value Summary -h, --help Show this help message and exit --identity-provider <identity-provider> Identity provider that supports <federation-protocol> (name or ID) (required) Table 34.28. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 34.29. 
JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 34.30. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 34.31. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack federation domain list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending]",
"openstack federation project list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending]",
"openstack federation protocol create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] --identity-provider <identity-provider> --mapping <mapping> <name>",
"openstack federation protocol delete [-h] --identity-provider <identity-provider> <federation-protocol> [<federation-protocol> ...]",
"openstack federation protocol list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] --identity-provider <identity-provider>",
"openstack federation protocol set [-h] --identity-provider <identity-provider> [--mapping <mapping>] <name>",
"openstack federation protocol show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] --identity-provider <identity-provider> <federation-protocol>"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/command_line_interface_reference/federation |
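As a worked example of the commands in this chapter, the following illustrative sequence registers, inspects, re-maps, and removes a federation protocol. The identity provider (myidp), the mapping names, and the protocol name (saml2) are placeholder values, not output from a real deployment:

# Register the protocol against an existing identity provider and mapping
openstack federation protocol create saml2 --identity-provider myidp --mapping myidp_mapping
# List and inspect the protocols configured for that identity provider
openstack federation protocol list --identity-provider myidp
openstack federation protocol show saml2 --identity-provider myidp
# Point the protocol at a different mapping later, or delete it when it is no longer needed
openstack federation protocol set saml2 --identity-provider myidp --mapping other_mapping
openstack federation protocol delete saml2 --identity-provider myidp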
25.6. Generating a Key | 25.6. Generating a Key You must be root to generate a key. First, use the cd command to change to the /etc/httpd/conf/ directory. Remove the fake key and certificate that were generated during the installation with the following commands: Next, create your own random key. Change to the /usr/share/ssl/certs/ directory and type in the following command: Your system displays a message similar to the following: You must now enter a passphrase. For security reasons, it should contain at least eight characters, include numbers and/or punctuation, and not be a word in a dictionary. Also, remember that your passphrase is case sensitive. Note You are required to remember and enter this passphrase every time you start your secure server. If you forget this passphrase, the key must be completely re-generated. Re-type the passphrase to verify that it is correct. Once you have typed it in correctly, /etc/httpd/conf/ssl.key/server.key , the file containing your key, is created. Note that if you do not want to type in a passphrase every time you start your secure server, you must use the following two commands instead of make genkey to create the key. Use the following command to create your key: Then, use the following command to make sure the permissions are set correctly for the file: After you use the above commands to create your key, you do not need to use a passphrase to start your secure server. Warning Disabling the passphrase feature for your secure server is a security risk. It is not recommended that you disable the passphrase feature for the secure server. Problems associated with not using a passphrase are directly related to the security maintained on the host machine. For example, if an unscrupulous individual compromises the regular UNIX security on the host machine, that person could obtain your private key (the contents of your server.key file). The key could be used to serve webpages that appear to be from your secure server. If UNIX security practices are rigorously maintained on the host computer (all operating system patches and updates are installed as soon as they are available, no unnecessary or risky services are operating, and so on), the secure server's passphrase may seem unnecessary. However, since your secure server should not need to be re-booted very often, the extra security provided by entering a passphrase is a worthwhile effort in most cases. The server.key file should be owned by the root user on your system and should not be accessible to any other user. Make a backup copy of this file and keep the backup copy in a safe, secure place. You need the backup copy because if you ever lose the server.key file after using it to create your certificate request, your certificate no longer works and the CA is not able to help you. Your only option is to request (and pay for) a new certificate. If you are going to purchase a certificate from a CA, continue to Section 25.7, "Generating a Certificate Request to Send to a CA" . If you are generating your own self-signed certificate, continue to Section 25.8, "Creating a Self-Signed Certificate" . | [
"rm ssl.key/server.key rm ssl.crt/server.crt",
"make genkey",
"umask 77 ; /usr/bin/openssl genrsa -des3 1024 > /etc/httpd/conf/ssl.key/server.key Generating RSA private key, 1024 bit long modulus .......++++++ ................................................................++++++ e is 65537 (0x10001) Enter pass phrase:",
"/usr/bin/openssl genrsa 1024 > /etc/httpd/conf/ssl.key/server.key",
"chmod go-rwx /etc/httpd/conf/ssl.key/server.key"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/apache_http_secure_server_configuration-generating_a_key |
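After running make genkey (or the two openssl commands above), you can sanity-check the generated key before requesting or creating a certificate. This is an illustrative check only; the backup path shown is an example, not a required location:

# Verify that the key is a consistent RSA key (prompts for the passphrase if one was set)
/usr/bin/openssl rsa -noout -check -in /etc/httpd/conf/ssl.key/server.key
# Confirm that only root can read the key
ls -l /etc/httpd/conf/ssl.key/server.key
# Keep a backup copy in a safe, secure location
cp /etc/httpd/conf/ssl.key/server.key /root/server.key.backup
chmod 600 /root/server.key.backup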
Chapter 8. Access management for Red Hat Quay | Chapter 8. Access management for Red Hat Quay As a Quay.io user, you can create your own repositories and make them accessible to other users that are part of your instance. Alternatively, you can create an organization and associate a set of repositories directly to that organization, referred to as an organization repository . Organization repositories differ from basic repositories in that the organization is intended to set up shared repositories through groups of users. In Quay.io, groups of users can be either Teams , or sets of users with the same permissions, or individual users . You can also allow access to user repositories and organization repositories by creating credentials associated with Robot Accounts. Robot Accounts make it easy for a variety of container clients, such as Docker or Podman, to access your repositories without requiring that the client have a Quay.io user account. 8.1. Red Hat Quay teams overview In Red Hat Quay a team is a group of users with shared permissions, allowing for efficient management and collaboration on projects. Teams can help streamline access control and project management within organizations and repositories. They can be assigned designated permissions and help ensure that members have the appropriate level of access to their repositories based on their roles and responsibilities. 8.1.1. Creating a team by using the UI When you create a team for your organization you can select the team name, choose which repositories to make available to the team, and decide the level of access to the team. Use the following procedure to create a team for your organization repository. Prerequisites You have created an organization. Procedure On the Red Hat Quay v2 UI, click the name of an organization. On your organization's page, click Teams and membership . Click the Create new team box. In the Create team popup window, provide a name for your new team. Optional. Provide a description for your new team. Click Proceed . A new popup window appears. Optional. Add this team to a repository, and set the permissions to one of the following: None . Team members have no permission to the repository. Read . Team members can view and pull from the repository. Write . Team members can read (pull) from and write (push) to the repository. Admin . Full access to pull from, and push to, the repository, plus the ability to do administrative tasks associated with the repository. Optional. Add a team member or robot account. To add a team member, enter the name of their Red Hat Quay account. Review and finish the information, then click Review and Finish . The new team appears under the Teams and membership page . 8.1.2. Managing a team by using the UI After you have created a team, you can use the UI to manage team members, set repository permissions, delete the team, or view more general information about the team. 8.1.2.1. Adding users to a team by using the UI With administrative privileges to an Organization, you can add users and robot accounts to a team. When you add a user, Quay.io sends an email to that user. The user remains pending until they accept the invitation. Use the following procedure to add users or robot accounts to a team. Procedure On the Red Hat Quay landing page, click the name of your Organization. In the navigation pane, click Teams and Membership . Select the menu kebab of the team that you want to add users or robot accounts to. Then, click Manage team members . Click Add new member . 
In the textbox, enter information for one of the following: A username from an account on the registry. The email address for a user account on the registry. The name of a robot account. The name must be in the form of <organization_name>+<robot_name>. Note Robot Accounts are immediately added to the team. For user accounts, an invitation to join is mailed to the user. Until the user accepts that invitation, the user remains in the INVITED TO JOIN state. After the user accepts the email invitation to join the team, they move from the INVITED TO JOIN list to the MEMBERS list for the Organization. Click Add member . 8.1.2.2. Setting a team role by using the UI After you have created a team, you can set the role of that team within the Organization. Prerequisites You have created a team. Procedure On the Red Hat Quay landing page, click the name of your Organization. In the navigation pane, click Teams and Membership . Select the TEAM ROLE drop-down menu, as shown in the following figure: For the selected team, choose one of the following roles: Admin . Full administrative access to the organization, including the ability to create teams, add members, and set permissions. Member . Inherits all permissions set for the team. Creator . All member permissions, plus the ability to create new repositories. 8.1.2.2.1. Managing team members and repository permissions Use the following procedure to manage team members and set repository permissions. On the Teams and membership page of your organization, you can also manage team members and set repository permissions. Click the kebab menu, and select one of the following options: Manage Team Members . On this page, you can view all members, team members, robot accounts, or users who have been invited. You can also add a new team member by clicking Add new member . Set repository permissions . On this page, you can set the repository permissions to one of the following: None . Team members have no permission to the repository. Read . Team members can view and pull from the repository. Write . Team members can read (pull) from and write (push) to the repository. Admin . Full access to pull from, and push to, the repository, plus the ability to do administrative tasks associated with the repository. Delete . This popup windows allows you to delete the team by clicking Delete . 8.1.2.2.2. Viewing additional information about a team Use the following procedure to view general information about the team. Procedure On the Teams and membership page of your organization, you can click the one of the following options to reveal more information about teams, members, and collaborators: Team View . This menu shows all team names, the number of members, the number of repositories, and the role for each team. Members View . This menu shows all usernames of team members, the teams that they are part of, the repository permissions of the user. Collaborators View . This menu shows repository collaborators. Collaborators are users that do not belong to any team in the organization, but who have direct permissions on one or more repositories belonging to the organization. 8.1.3. Managing a team by using the Red Hat Quay API After you have created a team, you can use the API to obtain information about team permissions or team members, add, update, or delete team members (including by email), or delete an organization team. The following procedures show you how to how to manage a team using the Red Hat Quay API. 8.1.3.1. 
Setting the role of a team within an organization by using the API Use the following procedure to view and set the role of a team within an organization using the API. Prerequisites You have Created an OAuth access token . You have set BROWSER_API_CALLS_XHR_ONLY: false in your config.yaml file. Procedure Enter the following GET /api/v1/organization/{orgname}/team/{teamname}/permissions command to return a list of repository permissions for the organization's team. Note that your team must have been added to a repository for this command to return information. $ curl -X GET \ -H "Authorization: Bearer <your_access_token>" \ "<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>/permissions" Example output {"permissions": [{"repository": {"name": "api-repo", "is_public": true}, "role": "admin"}]} You can create or update a team within an organization to have a specified role of admin , member , or creator using the PUT /api/v1/organization/{orgname}/team/{teamname} command. For example: $ curl -X PUT \ -H "Authorization: Bearer <your_access_token>" \ -H "Content-Type: application/json" \ -d '{ "role": "<role>" }' \ "<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>" Example output {"name": "testteam", "description": "", "can_view": true, "role": "creator", "avatar": {"name": "testteam", "hash": "827f8c5762148d7e85402495b126e0a18b9b168170416ed04b49aae551099dc8", "color": "#ff7f0e", "kind": "team"}, "new_team": false} 8.2. Creating and managing default permissions by using the UI Default permissions define permissions that should be granted automatically to a repository when it is created, in addition to the default of the repository's creator. Permissions are assigned based on the user who created the repository. Use the following procedure to create default permissions using the Red Hat Quay v2 UI. Procedure Click the name of an organization. Click Default permissions . Click Create default permissions . A toggle drawer appears. Select either Anyone or Specific user to create a default permission when a repository is created. If selecting Anyone , the following information must be provided: Applied to . Search, invite, or add a user/robot/team. Permission . Set the permission to one of Read , Write , or Admin . If selecting Specific user , the following information must be provided: Repository creator . Provide either a user or robot account. Applied to . Provide a username, robot account, or team name. Permission . Set the permission to one of Read , Write , or Admin . Click Create default permission . A confirmation box appears, returning the following alert: Successfully created default permission for creator . 8.3. Adjusting access settings for a repository by using the UI Use the following procedure to adjust access settings for a user or robot account for a repository using the v2 UI. Prerequisites You have created a user account or robot account. Procedure Log in to Quay.io. On the v2 UI, click Repositories . Click the name of a repository, for example, quayadmin/busybox . Click the Settings tab. Optional. Click User and robot permissions . You can adjust the settings for a user or robot account by clicking the dropdown menu option under Permissions . You can change the settings to Read , Write , or Admin . Read . The User or Robot Account can view and pull from the repository. Write . The User or Robot Account can read (pull) from and write (push) to the repository. Admin .
The User or Robot account has access to pull from, and push to, the repository, plus the ability to do administrative tasks associated with the repository. | [
"curl -X GET -H \"Authorization: Bearer <your_access_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>/permissions\"",
"{\"permissions\": [{\"repository\": {\"name\": \"api-repo\", \"is_public\": true}, \"role\": \"admin\"}]}",
"curl -X PUT -H \"Authorization: Bearer <your_access_token>\" -H \"Content-Type: application/json\" -d '{ \"role\": \"<role>\" }' \"<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>\"",
"{\"name\": \"testteam\", \"description\": \"\", \"can_view\": true, \"role\": \"creator\", \"avatar\": {\"name\": \"testteam\", \"hash\": \"827f8c5762148d7e85402495b126e0a18b9b168170416ed04b49aae551099dc8\", \"color\": \"#ff7f0e\", \"kind\": \"team\"}, \"new_team\": false}"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/about_quay_io/use-quay-manage-repo |
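Building on the API calls shown in this chapter, the following illustrative script prints the repository permissions for several teams in one organization. The bearer token, the hostname placeholder <quay-server.example.com>, and the team names are examples only; the script uses only the documented GET /api/v1/organization/{orgname}/team/{teamname}/permissions endpoint:

# List repository permissions for a set of teams in one organization
ORG=<organization_name>
for TEAM in owners developers qa; do
  echo "== ${TEAM} =="
  curl -s -X GET \
    -H "Authorization: Bearer <your_access_token>" \
    "<quay-server.example.com>/api/v1/organization/${ORG}/team/${TEAM}/permissions"
  echo
done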
Server Installation and Configuration Guide | Server Installation and Configuration Guide Red Hat Single Sign-On 7.6 For Use with Red Hat Single Sign-On 7.6 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.6/html/server_installation_and_configuration_guide/index |
Chapter 4. Importing content | Chapter 4. Importing content This chapter outlines how you can import different types of custom content to Satellite. For example, you can use the following chapters for information on specific types of custom content but the underlying procedures are the same: Chapter 12, Managing ISO images Chapter 14, Managing custom file type content 4.1. Products and repositories in Satellite Both Red Hat content and custom content in Satellite have similarities: The relationship between a Product and its repositories is the same and the repositories still require synchronization. Custom products require a subscription for hosts to access, similar to subscriptions to Red Hat products. Satellite creates a subscription for each custom product you create. Red Hat content is already organized into Products. For example, Red Hat Enterprise Linux Server is a Product in Satellite. The repositories for that Product consist of different versions, architectures, and add-ons. For Red Hat repositories, Products are created automatically after enabling the repository. For more information, see Section 4.6, "Enabling Red Hat repositories" . Other content can be organized into custom products however you want. For example, you might create an EPEL (Extra Packages for Enterprise Linux) Product and add an "EPEL 7 x86_64" repository to it. For more information about creating and packaging RPMs, see the Red Hat Enterprise Linux RPM Packaging Guide . 4.2. Best practices for products and repositories Use one content type per product and content view, for example, yum content only. Make file repositories available over HTTP. If you set Protected to true, you can only download content using a global debugging certificate. Automate the creation of multiple products and repositories by using a Hammer script or an Ansible playbook . For Red Hat content, import your Red Hat manifest into Satellite. For more information, see Chapter 2, Managing Red Hat subscriptions . Avoid uploading content to repositories with an Upstream URL . Instead, create a repository to synchronize content and upload content to without setting an Upstream URL . If you upload content to a repository that already synchronizes another repository, the content might be overwritten, depending on the mirroring policy and content type. 4.3. Importing custom SSL certificates Before you synchronize custom content from an external source, you might need to import SSL certificates into your custom product. This might include client certs and keys or CA certificates for the upstream repositories you want to synchronize. If you require SSL certificates and keys to download packages, you can add them to Satellite. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Content > Content Credentials . In the Content Credentials window, click Create Content Credential . In the Name field, enter a name for your SSL certificate. From the Type list, select SSL Certificate . In the Content Credentials Content field, paste your SSL certificate, or click Browse to upload your SSL certificate. Click Save . CLI procedure Copy the SSL certificate to your Satellite Server: Or download the SSL certificate to your Satellite Server from an online source: Upload the SSL Certificate to Satellite: 4.4. Creating a custom product Create a custom product so that you can add repositories to the custom product. To use the CLI instead of the Satellite web UI, see the CLI procedure . 
Procedure In the Satellite web UI, navigate to Content > Products , click Create Product . In the Name field, enter a name for the product. Satellite automatically completes the Label field based on what you have entered for Name . Optional: From the GPG Key list, select the GPG key for the product. Optional: From the SSL CA Cert list, select the SSL CA certificate for the product. Optional: From the SSL Client Cert list, select the SSL client certificate for the product. Optional: From the SSL Client Key list, select the SSL client key for the product. Optional: From the Sync Plan list, select an existing sync plan or click Create Sync Plan and create a sync plan for your product requirements. In the Description field, enter a description of the product. Click Save . CLI procedure To create the product, enter the following command: 4.5. Adding custom RPM repositories Use this procedure to add custom RPM repositories in Satellite. To use the CLI instead of the Satellite web UI, see the CLI procedure . The Products window in the Satellite web UI also provides a Repo Discovery function that finds all repositories from a URL and you can select which ones to add to your custom product. For example, you can use the Repo Discovery to search https://download.postgresql.org/pub/repos/yum/16/redhat/ and list all repositories for different Red Hat Enterprise Linux versions and architectures. This helps users save time importing multiple repositories from a single source. Support for custom RPMs Red Hat does not support the upstream RPMs directly from third-party sites. These RPMs are used to demonstrate the synchronization process. For any issues with these RPMs, contact the third-party developers. Procedure In the Satellite web UI, navigate to Content > Products and select the product that you want to use, and then click New Repository . In the Name field, enter a name for the repository. Satellite automatically completes the Label field based on what you have entered for Name . Optional: In the Description field, enter a description for the repository. From the Type list, select yum as type of repository. Optional: From the Restrict to Architecture list, select an architecture. If you want to make the repository available to all hosts regardless of the architecture, ensure to select No restriction . Optional: From the Restrict to OS Version list, select the OS version. If you want to make the repository available to all hosts regardless of the OS version, ensure to select No restriction . Optional: In the Upstream URL field, enter the URL of the external repository to use as a source. Satellite supports three protocols: http:// , https:// , and file:// . If you are using a file:// repository, you have to place it under /var/lib/pulp/sync_imports/ directory. If you do not enter an upstream URL, you can manually upload packages. Optional: Check the Ignore SRPMs checkbox to exclude source RPM packages from being synchronized to Satellite. Optional: Check the Ignore treeinfo checkbox if you receive the error Treeinfo file should have INI format . All files related to Kickstart will be missing from the repository if treeinfo files are skipped. Select the Verify SSL checkbox if you want to verify that the upstream repository's SSL certificates are signed by a trusted CA. Optional: In the Upstream Username field, enter the user name for the upstream repository if required for authentication. Clear this field if the repository does not require authentication. 
Optional: In the Upstream Password field, enter the corresponding password for the upstream repository. Clear this field if the repository does not require authentication. Optional: In the Upstream Authentication Token field, provide the token of the upstream repository user for authentication. Leave this field empty if the repository does not require authentication. From the Download Policy list, select the type of synchronization Satellite Server performs. For more information, see Section 4.9, "Download policies overview" . From the Mirroring Policy list, select the type of content synchronization Satellite Server performs. For more information, see Section 4.12, "Mirroring policies overview" . Optional: In the Retain package versions field, enter the number of versions you want to retain per package. Optional: In the HTTP Proxy Policy field, select an HTTP proxy. From the Checksum list, select the checksum type for the repository. Optional: You can clear the Unprotected checkbox to require a subscription entitlement certificate for accessing this repository. By default, the repository is published through HTTP. Optional: From the GPG Key list, select the GPG key for the product. Optional: In the SSL CA Cert field, select the SSL CA Certificate for the repository. Optional: In the SSL Client cert field, select the SSL Client Certificate for the repository. Optional: In the SSL Client Key field, select the SSL Client Key for the repository. Click Save to create the repository. CLI procedure Enter the following command to create the repository: Continue to synchronize the repository . 4.6. Enabling Red Hat repositories If outside network access requires usage of an HTTP proxy, configure a default HTTP proxy for your server. For more information, see Adding a Default HTTP Proxy to Satellite . To select the repositories to synchronize, you must first identify the Product that contains the repository, and then enable that repository based on the relevant release version and base architecture. For Red Hat Enterprise Linux 8 hosts To provision Red Hat Enterprise Linux 8 hosts, you require the Red Hat Enterprise Linux 8 for x86_64 - AppStream (RPMs) and Red Hat Enterprise Linux 8 for x86_64 - BaseOS (RPMs) repositories. For Red Hat Enterprise Linux 7 hosts To provision Red Hat Enterprise Linux 7 hosts, you require the Red Hat Enterprise Linux 7 Server (RPMs) repository. The difference between associating Red Hat Enterprise Linux operating system release version with either 7Server repositories or 7. X repositories is that 7Server repositories contain all the latest updates while Red Hat Enterprise Linux 7. X repositories stop getting updates after the minor version release. Note that Kickstart repositories only have minor versions. Procedure In the Satellite web UI, navigate to Content > Red Hat Repositories . To find repositories, either enter the repository name, or toggle the Recommended Repositories button to the on position to view a list of repositories that you require. In the Available Repositories pane, click a repository to expand the repository set. Click the Enable icon to the base architecture and release version that you want. CLI procedure To search for your Product, enter the following command: List the repository set for the Product: Enable the repository using either the name or ID number. Include the release version, such as 7Server , and base architecture, such as x86_64 . 4.7. Synchronizing repositories You must synchronize repositories to download content into Satellite. 
You can use this procedure for an initial synchronization of repositories or to synchronize repositories manually as you need. You can also sync all repositories in an organization. For more information, see Section 4.8, "Synchronizing all repositories in an organization" . Create a sync plan to ensure updates on a regular basis. For more information, see Section 4.23, "Creating a sync plan" . The synchronization duration depends on the size of each repository and the speed of your network connection. The following table provides estimates of how long it would take to synchronize content, depending on the available Internet bandwidth: Single Package (10Mb) Minor Release (750Mb) Major Release (6Gb) 256 Kbps 5 Mins 27 Secs 6 Hrs 49 Mins 36 Secs 2 Days 7 Hrs 55 Mins 512 Kbps 2 Mins 43.84 Secs 3 Hrs 24 Mins 48 Secs 1 Day 3 Hrs 57 Mins T1 (1.5 Mbps) 54.33 Secs 1 Hr 7 Mins 54.78 Secs 9 Hrs 16 Mins 20.57 Secs 10 Mbps 8.39 Secs 10 Mins 29.15 Secs 1 Hr 25 Mins 53.96 Secs 100 Mbps 0.84 Secs 1 Min 2.91 Secs 8 Mins 35.4 Secs 1000 Mbps 0.08 Secs 6.29 Secs 51.54 Secs Procedure In the Satellite web UI, navigate to Content > Products and select the Product that contains the repositories that you want to synchronize. Select the repositories that you want to synchronize and click Sync Now . Optional: To view the progress of the synchronization in the Satellite web UI, navigate to Content > Sync Status and expand the corresponding Product or repository tree. CLI procedure Synchronize an entire Product: Synchronize an individual repository: 4.8. Synchronizing all repositories in an organization Use this procedure to synchronize all repositories within an organization. Procedure Log in to your Satellite Server using SSH. Run the following Bash script: ORG=" My_Organization " for i in USD(hammer --no-headers --csv repository list --organization USDORG --fields Id) do hammer repository synchronize --id USD{i} --organization USDORG --async done 4.9. Download policies overview Red Hat Satellite provides multiple download policies for synchronizing RPM content. For example, you might want to download only the content metadata while deferring the actual content download for later. Satellite Server has the following policies: Immediate Satellite Server downloads all metadata and packages during synchronization. On Demand Satellite Server downloads only the metadata during synchronization. Satellite Server only fetches and stores packages on the file system when Capsules or directly connected clients request them. This setting has no effect if you set a corresponding repository on a Capsule to Immediate because Satellite Server is forced to download all the packages. The On Demand policy acts as a Lazy Synchronization feature because they save time synchronizing content. The lazy synchronization feature must be used only for Yum repositories. You can add the packages to content views and promote to lifecycle environments as normal. Capsule Server has the following policies: Immediate Capsule Server downloads all metadata and packages during synchronization. Do not use this setting if the corresponding repository on Satellite Server is set to On Demand as Satellite Server is forced to download all the packages. On Demand Capsule Server only downloads the metadata during synchronization. Capsule Server fetches and stores packages only on the file system when directly connected clients request them. 
When you use an On Demand download policy, content is downloaded from Satellite Server if it is not available on Capsule Server. Inherit Capsule Server inherits the download policy for the repository from the corresponding repository on Satellite Server. Streamed Download Policy Streamed Download Policy for Capsules permits Capsules to avoid caching any content. When content is requested from the Capsule, it functions as a proxy and requests the content directly from the Satellite. 4.10. Changing the default download policy You can set the default download policy that Satellite applies to repositories that you create in all organizations. Depending on whether it is a Red Hat or non-Red Hat custom repository, Satellite uses separate settings. Changing the default value does not change existing settings. Procedure In the Satellite web UI, navigate to Administer > Settings . Click the Content tab. Change the default download policy depending on your requirements: To change the default download policy for a Red Hat repository, change the value of the Default Red Hat Repository download policy setting. To change the default download policy for a custom repository, change the value of the Default Custom Repository download policy setting. CLI procedure To change the default download policy for Red Hat repositories to one of immediate or on_demand , enter the following command: To change the default download policy for a non-Red Hat custom repository to one of immediate or on_demand , enter the following command: 4.11. Changing the download policy for a repository You can set the download policy for a repository. Procedure In the Satellite web UI, navigate to Content > Products . Select the required product name. On the Repositories tab, click the required repository name, locate the Download Policy field, and click the edit icon. From the list, select the required download policy and then click Save . CLI procedure List the repositories for an organization: Change the download policy for a repository to immediate or on_demand : 4.12. Mirroring policies overview Mirroring keeps the local repository exactly in synchronization with the upstream repository. If any content is removed from the upstream repository since the last synchronization, with the synchronization, it will be removed from the local repository as well. You can use mirroring policies for finer control over mirroring of repodata and content when synchronizing a repository. For example, if it is not possible to mirror the repodata for a repository, you can set the mirroring policy to mirror only content for this repository. Satellite Server has the following mirroring policies: Additive Neither the content nor the repodata is mirrored. Thus, only new content added since the last synchronization is added to the local repository and nothing is removed. Content Only Mirrors only content and not the repodata. Some repositories do not support metadata mirroring, in such cases you can set the mirroring policy to content only to only mirror the content. Complete Mirroring Mirrors content as well as repodata. This is the fastest method. This mirroring policy is only available for Yum content. Warning Avoid republishing metadata for repositories with Complete Mirror mirroring policy. This also applies to content views containing repositories with the Complete Mirror mirroring policy. 4.13. Changing the mirroring policy for a repository You can set the mirroring policy for a repository. 
To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Content > Products . Select the product name. On the Repositories tab, click the repository name, locate the Mirroring Policy field, and click the edit icon. From the list, select a mirroring policy and click Save . CLI procedure List the repositories for an organization: Change the mirroring policy for a repository to additive , mirror_complete , or mirror_content_only : 4.14. Uploading content to custom RPM repositories You can upload individual RPMs and source RPMs to custom RPM repositories. You can upload RPMs using the Satellite web UI or the Hammer CLI. You must use the Hammer CLI to upload source RPMs. Procedure In the Satellite web UI, navigate to Content > Products . Click the name of the custom product. In the Repositories tab, click the name of the custom RPM repository. Under Upload Package , click Browse... and select the RPM you want to upload. Click Upload . To view all RPMs in this repository, click the number to Packages under Content Counts . CLI procedure Enter the following command to upload an RPM: Enter the following command to upload a source RPM: When the upload is complete, you can view information about a source RPM by using the commands hammer srpm list and hammer srpm info --id srpm_ID . 4.15. Refreshing content counts on Capsule If your Capsules have synchronized content enabled, you can refresh the number of content counts available to the environments associated with the Capsule. This displays the content views inside those environments available to the Capsule. You can then expand the content view to view the repositories associated with that content view version. Procedure In the Satellite web UI, navigate to Infrastructure > Capsules , and select the Capsule where you want to see the synchronized content. Select the Overview tab. Under Content Sync , toggle the Synchronize button to do an Optimized Sync or a Complete Sync to synchronize the Capsule which refreshes the content counts. Select the Content tab. Choose an Environment to view content views available to those Capsules by clicking > . Expand the content view by clicking > to view repositories available to the content view and the specific version for the environment. View the number of content counts under Packages specific to yum repositories. View the number of errata, package groups, files, container tags, container manifests, and Ansible collections under Additional content . Click the vertical ellipsis in the column to the right to the environment and click Refresh counts to refresh the content counts synchronized on the Capsule under Packages . 4.16. Configuring SELinux to permit content synchronization on custom ports SELinux permits access of Satellite for content synchronization only on specific ports. By default, connecting to web servers running on the following ports is permitted: 80, 81, 443, 488, 8008, 8009, 8443, and 9000. Procedure On Satellite, to verify the ports that are permitted by SELinux for content synchronization, enter a command as follows: To configure SELinux to permit a port for content synchronization, for example 10011, enter a command as follows: 4.17. Recovering a corrupted repository In case of repository corruption, you can recover it by using an advanced synchronization, which has three options: Optimized Sync Synchronizes the repository bypassing packages that have no detected differences from the upstream packages. 
Complete Sync Synchronizes all packages regardless of detected changes. Use this option if specific packages could not be downloaded to the local repository even though they exist in the upstream repository. Verify Content Checksum Synchronizes all packages and then verifies the checksum of all packages locally. If the checksum of an RPM differs from the upstream, it re-downloads the RPM. This option is relevant only for Yum content. Use this option if you have one of the following errors: Specific packages cause a 404 error while synchronizing with yum . Package does not match intended download error, which means that specific packages are corrupted. Procedure In the Satellite web UI, navigate to Content > Products . Select the product containing the corrupted repository. Select the name of a repository you want to synchronize. To perform optimized sync or complete sync, select Advanced Sync from the Select Action menu. Select the required option and click Sync . Optional: To verify the checksum, click Verify Content Checksum from the Select Action menu. CLI procedure Obtain a list of repository IDs: Synchronize a corrupted repository using the necessary option: For the optimized synchronization: For the complete synchronization: For the validate content synchronization: 4.18. Republishing repository metadata You can republish repository metadata when a repository distribution does not have the content that should be distributed based on the contents of the repository. Use this procedure with caution. Red Hat recommends a complete repository sync or publishing a new content view version to repair broken metadata. Procedure In the Satellite web UI, navigate to Content > Products . Select the product that includes the repository for which you want to republish metadata. On the Repositories tab, select a repository. To republish metadata for the repository, click Republish Repository Metadata from the Select Action menu. Note This action is not available for repositories that use the Complete Mirroring policy because the metadata is copied verbatim from the upstream source of the repository. 4.19. Republishing content view metadata Use this procedure to republish content view metadata. Procedure In the Satellite web UI, navigate to Content > Lifecycle > Content Views . Select a content view. On the Versions tab, select a content view version. To republish metadata for the content view version, click Republish repository metadata from the vertical ellipsis icon. Republishing repository metadata will regenerate metadata for all repositories in the content view version that do not adhere to the Complete Mirroring policy. 4.20. Adding an HTTP proxy Use this procedure to add HTTP proxies to Satellite. You can then specify which HTTP proxy to use for Products, repositories, and supported compute resources. Prerequisites Your HTTP proxy must allow access to the following hosts: Host name Port Protocol subscription.rhsm.redhat.com 443 HTTPS cdn.redhat.com 443 HTTPS *.akamaiedge.net 443 HTTPS cert.console.redhat.com (if using Red Hat Insights) 443 HTTPS api.access.redhat.com (if using Red Hat Insights) 443 HTTPS cert-api.access.redhat.com (if using Red Hat Insights) 443 HTTPS If Satellite Server uses a proxy to communicate with subscription.rhsm.redhat.com and cdn.redhat.com then the proxy must not perform SSL inspection on these communications. To use the CLI instead of the Satellite web UI, see the CLI procedure . 
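Before adding the proxy in the following procedure, you can optionally confirm from Satellite Server that the proxy allows connections to the hosts listed above. This is an illustrative check only; the proxy URL and credentials are placeholder values:

# Check that the required Red Hat hosts are reachable through the proxy
for HOST in subscription.rhsm.redhat.com cdn.redhat.com; do
  curl --head --proxy http://proxy.example.com:8080 --proxy-user '<user>:<password>' "https://${HOST}" \
    && echo "${HOST}: reachable through the proxy" \
    || echo "${HOST}: NOT reachable through the proxy"
done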
Procedure In the Satellite web UI, navigate to Infrastructure > HTTP Proxies and select New HTTP Proxy . In the Name field, enter a name for the HTTP proxy. In the URL field, enter the URL for the HTTP proxy, including the port number. If your HTTP proxy requires authentication, enter a Username and Password . Optional: In the Test URL field, enter the HTTP proxy URL, then click Test Connection to ensure that you can connect to the HTTP proxy from Satellite. Click the Locations tab and add a location. Click the Organization tab and add an organization. Click Submit . CLI procedure On Satellite Server, enter the following command to add an HTTP proxy: If your HTTP proxy requires authentication, add the --username name and --password password options. For further information, see the Knowledgebase article How to access Red Hat Subscription Manager (RHSM) through a firewall or proxy on the Red Hat Customer Portal. 4.21. Changing the HTTP proxy policy for a product For granular control over network traffic, you can set an HTTP proxy policy for each Product. A Product's HTTP proxy policy applies to all repositories in the Product, unless you set a different policy for individual repositories. To set an HTTP proxy policy for individual repositories, see Section 4.22, "Changing the HTTP proxy policy for a repository" . Procedure In the Satellite web UI, navigate to Content > Products and select the checkbox to each of the Products that you want to change. From the Select Action list, select Manage HTTP Proxy . Select an HTTP Proxy Policy from the list: Global Default : Use the global default proxy setting. No HTTP Proxy : Do not use an HTTP proxy, even if a global default proxy is configured. Use specific HTTP Proxy : Select an HTTP Proxy from the list. You must add HTTP proxies to Satellite before you can select a proxy from this list. For more information, see Section 4.20, "Adding an HTTP proxy" . Click Update . 4.22. Changing the HTTP proxy policy for a repository For granular control over network traffic, you can set an HTTP proxy policy for each repository. To use the CLI instead of the Satellite web UI, see the CLI procedure . To set the same HTTP proxy policy for all repositories in a Product, see Section 4.21, "Changing the HTTP proxy policy for a product" . Procedure In the Satellite web UI, navigate to Content > Products and click the name of the Product that contains the repository. In the Repositories tab, click the name of the repository. Locate the HTTP Proxy field and click the edit icon. Select an HTTP Proxy Policy from the list: Global Default : Use the global default proxy setting. No HTTP Proxy : Do not use an HTTP proxy, even if a global default proxy is configured. Use specific HTTP Proxy : Select an HTTP Proxy from the list. You must add HTTP proxies to Satellite before you can select a proxy from this list. For more information, see Section 4.20, "Adding an HTTP proxy" . Click Save . CLI procedure On Satellite Server, enter the following command, specifying the HTTP proxy policy you want to use: Specify one of the following options for --http-proxy-policy : none : Do not use an HTTP proxy, even if a global default proxy is configured. global_default_http_proxy : Use the global default proxy setting. use_selected_http_proxy : Specify an HTTP proxy using either --http-proxy My_HTTP_Proxy_Name or --http-proxy-id My_HTTP_Proxy_ID . To add a new HTTP proxy to Satellite, see Section 4.20, "Adding an HTTP proxy" . 4.23. 
Creating a sync plan A sync plan checks and updates the content at a scheduled date and time. In Satellite, you can create a sync plan and assign products to the plan. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Content > Sync Plans and click New Sync Plan . In the Name field, enter a name for the plan. Optional: In the Description field, enter a description of the plan. From the Interval list, select the interval at which you want the plan to run. From the Start Date and Start Time lists, select when to start running the synchronization plan. Click Save . CLI procedure To create the synchronization plan, enter the following command: View the available sync plans for an organization to verify that the sync plan has been created: 4.24. Assigning a sync plan to a product A sync plan checks and updates the content at a scheduled date and time. In Satellite, you can assign a sync plan to products to update content regularly. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Content > Products . Select a product. On the Details tab, select a Sync Plan from the drop down menu. CLI procedure Assign a sync plan to a product: 4.25. Assigning a sync plan to multiple products Use this procedure to assign a sync plan to the products in an organization that have been synchronized at least once and contain at least one repository. Procedure Run the following Bash script: ORG=" My_Organization " SYNC_PLAN="daily_sync_at_3_a.m" hammer sync-plan create --name USDSYNC_PLAN --interval daily --sync-date "2023-04-5 03:00:00" --enabled true --organization USDORG for i in USD(hammer --no-headers --csv --csv-separator="|" product list --organization USDORG --per-page 999 | grep -vi not_synced | awk -F'|' 'USD5 != "0" { print USD1}') do hammer product set-sync-plan --sync-plan USDSYNC_PLAN --organization USDORG --id USDi done After executing the script, view the products assigned to the sync plan: 4.26. Best practices for sync plans Add sync plans to products and regularly synchronize content to keep the load on Satellite low during synchronization. Synchronize content rather more often than less often. For example, setup a sync plan to synchronize content every day rather than only once a month. Automate the creation and update of sync plans by using a Hammer script or an Ansible playbook . Distribute synchronization tasks over several hours to reduce the task load by creating multiple sync plans with the Custom Cron tool. Table 4.1. Cron expression examples Cron expression Explanation 0 22 * * 1-5 every day at 22:00 from Monday to Friday 30 3 * * 6,0 at 03:30 every Saturday and Sunday 30 2 8-14 * * at 02:30 every day between the 8th and the 14th days of the month 4.27. Limiting synchronization concurrency By default, each Repository Synchronization job can fetch up to ten files at a time. This can be adjusted on a per repository basis. Increasing the limit may improve performance, but can cause the upstream server to be overloaded or start rejecting requests. If you are seeing Repository syncs fail due to the upstream servers rejecting requests, you may want to try lowering the limit. CLI procedure 4.28. Importing a custom GPG key When clients are consuming signed custom content, ensure that the clients are configured to validate the installation of packages with the appropriate GPG Key. This helps to ensure that only packages from authorized sources can be installed. 
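On the client side, that validation is typically enforced by the repository definition and the imported key. The following is an illustrative sketch only; the repository ID and key filename are placeholders for your own custom content:

# The repository definition on the host should require signature checking,
# for example:
#   [example_custom]
#   gpgcheck=1
#   gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-example
# Import the key so RPM can verify packages, then list the keys RPM knows about
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-example
rpm -q gpg-pubkey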
Red Hat content is already configured with the appropriate GPG key and thus GPG Key management of Red Hat Repositories is not supported. To use the CLI instead of the Satellite web UI, see the CLI procedure . Prerequisites Ensure that you have a copy of the GPG key used to sign the RPM content that you want to use and manage in Satellite. Most RPM distribution providers provide their GPG Key on their website. You can also extract this manually from an RPM: Download a copy of the version specific repository package to your local machine: Extract the RPM file without installing it: The GPG key is located relative to the extraction at etc/pki/rpm-gpg/RPM-GPG-KEY- EXAMPLE-95 . Procedure In the Satellite web UI, navigate to Content > Content Credentials and in the upper-right of the window, click Create Content Credential . Enter the name of your repository and select GPG Key from the Type list. Either paste the GPG key into the Content Credential Contents field, or click Browse and select the GPG key file that you want to import. If your custom repository contains content signed by multiple GPG keys, you must enter all required GPG keys in the Content Credential Contents field with new lines between each key, for example: Click Save . CLI procedure Copy the GPG key to your Satellite Server: Upload the GPG key to Satellite: 4.29. Restricting a custom repository to a specific operating system or architecture in Satellite You can configure Satellite to make a custom repository available only on hosts with a specific operating system version or architecture. For example, you can restrict a custom repository only to Red Hat Enterprise Linux 9 hosts. Note Only restrict architecture and operating system version for custom products. Satellite applies these restrictions automatically for Red Hat repositories. Procedure In the Satellite web UI, navigate to Content > Products . Click the product that contains the repository sets you want to restrict. In the Repositories tab, click the repository you want to restrict. In the Publishing Settings section, set the following options: Set Restrict to OS version to restrict the operating system version. Set Restrict to architecture to restrict the architecture. | [
"scp My_SSL_Certificate [email protected]:~/.",
"wget -P ~ http:// upstream-satellite.example.com /pub/katello-server-ca.crt",
"hammer content-credential create --content-type cert --name \" My_SSL_Certificate \" --organization \" My_Organization \" --path ~/ My_SSL_Certificate",
"hammer product create --name \" My_Product \" --sync-plan \" Example Plan \" --description \" Content from My Repositories \" --organization \" My_Organization \"",
"hammer repository create --arch \" My_Architecture \" --content-type \"yum\" --gpg-key-id My_GPG_Key_ID --name \" My_Repository \" --organization \" My_Organization \" --os-version \" My_OS_Version \" --product \" My_Product \" --publish-via-http true --url My_Upstream_URL",
"hammer product list --organization \" My_Organization \"",
"hammer repository-set list --product \"Red Hat Enterprise Linux Server\" --organization \" My_Organization \"",
"hammer repository-set enable --name \"Red Hat Enterprise Linux 7 Server (RPMs)\" --releasever \"7Server\" --basearch \"x86_64\" --product \"Red Hat Enterprise Linux Server\" --organization \" My_Organization \"",
"hammer product synchronize --name \" My_Product \" --organization \" My_Organization \"",
"hammer repository synchronize --name \" My_Repository \" --organization \" My_Organization \" --product \" My Product \"",
"ORG=\" My_Organization \" for i in USD(hammer --no-headers --csv repository list --organization USDORG --fields Id) do hammer repository synchronize --id USD{i} --organization USDORG --async done",
"hammer settings set --name default_redhat_download_policy --value immediate",
"hammer settings set --name default_download_policy --value immediate",
"hammer repository list --organization-label My_Organization_Label",
"hammer repository update --download-policy immediate --name \" My_Repository \" --organization-label My_Organization_Label --product \" My_Product \"",
"hammer repository list --organization-label My_Organization_Label",
"hammer repository update --id 1 --mirroring-policy mirror_complete",
"hammer repository upload-content --id My_Repository_ID --path /path/to/example-package.rpm",
"hammer repository upload-content --content-type srpm --id My_Repository_ID --path /path/to/example-package.src.rpm",
"semanage port -l | grep ^http_port_t http_port_t tcp 80, 81, 443, 488, 8008, 8009, 8443, 9000",
"semanage port -a -t http_port_t -p tcp 10011",
"hammer repository list --organization \" My_Organization \"",
"hammer repository synchronize --id My_ID",
"hammer repository synchronize --id My_ID --skip-metadata-check true",
"hammer repository synchronize --id My_ID --validate-contents true",
"hammer http-proxy create --name proxy-name --url proxy-URL:port-number",
"hammer repository update --http-proxy-policy HTTP_Proxy_Policy --id Repository_ID",
"hammer sync-plan create --description \" My_Description \" --enabled true --interval daily --name \" My_Products \" --organization \" My_Organization \" --sync-date \"2023-01-01 01:00:00\"",
"hammer sync-plan list --organization \" My_Organization \"",
"hammer product set-sync-plan --name \" My_Product_Name \" --organization \" My_Organization \" --sync-plan \" My_Sync_Plan_Name \"",
"ORG=\" My_Organization \" SYNC_PLAN=\"daily_sync_at_3_a.m\" hammer sync-plan create --name USDSYNC_PLAN --interval daily --sync-date \"2023-04-5 03:00:00\" --enabled true --organization USDORG for i in USD(hammer --no-headers --csv --csv-separator=\"|\" product list --organization USDORG --per-page 999 | grep -vi not_synced | awk -F'|' 'USD5 != \"0\" { print USD1}') do hammer product set-sync-plan --sync-plan USDSYNC_PLAN --organization USDORG --id USDi done",
"hammer product list --organization USDORG --sync-plan USDSYNC_PLAN",
"hammer repository update --download-concurrency 5 --id Repository_ID --organization \" My_Organization \"",
"wget http://www.example.com/9.5/example-9.5-2.noarch.rpm",
"rpm2cpio example-9.5-2.noarch.rpm | cpio -idmv",
"-----BEGIN PGP PUBLIC KEY BLOCK----- mQINBFy/HE4BEADttv2TCPzVrre+aJ9f5QsR6oWZMm7N5Lwxjm5x5zA9BLiPPGFN 4aTUR/g+K1S0aqCU+ZS3Rnxb+6fnBxD+COH9kMqXHi3M5UNzbp5WhCdUpISXjjpU XIFFWBPuBfyr/FKRknFH15P+9kLZLxCpVZZLsweLWCuw+JKCMmnA =F6VG -----END PGP PUBLIC KEY BLOCK----- -----BEGIN PGP PUBLIC KEY BLOCK----- mQINBFw467UBEACmREzDeK/kuScCmfJfHJa0Wgh/2fbJLLt3KSvsgDhORIptf+PP OTFDlKuLkJx99ZYG5xMnBG47C7ByoMec1j94YeXczuBbynOyyPlvduma/zf8oB9e Wl5GnzcLGAnUSRamfqGUWcyMMinHHIKIc1X1P4I= =WPpI -----END PGP PUBLIC KEY BLOCK-----",
"scp ~/etc/pki/rpm-gpg/RPM-GPG-KEY- EXAMPLE-95 [email protected]:~/.",
"hammer content-credentials create --content-type gpg_key --name \" My_GPG_Key \" --organization \" My_Organization \" --path ~/RPM-GPG-KEY- EXAMPLE-95"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/managing_content/importing_content_content-management |
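The proxy, policy, and sync plan steps above can be combined into a single hedged shell sketch. The proxy name corp-proxy, its URL, the sync date, and the organization, product, and repository names below are placeholder assumptions, not values taken from the procedures:

#!/bin/bash
# Sketch: register an HTTP proxy, pin one repository to it, and attach a daily sync plan.
set -euo pipefail

ORG="My_Organization"       # assumption: replace with your organization
PRODUCT="My_Product"        # assumption: replace with your product
REPO="My_Repository"        # assumption: replace with your repository

# Register the HTTP proxy (append --username and --password if it requires authentication).
hammer http-proxy create --name "corp-proxy" --url "http://proxy.example.com:8080"

# Pin the repository to that proxy instead of the global default.
hammer repository update --name "$REPO" --product "$PRODUCT" --organization "$ORG" \
  --http-proxy-policy use_selected_http_proxy --http-proxy "corp-proxy"

# Create a daily sync plan and attach it to the whole product.
hammer sync-plan create --name "daily-sync" --interval daily \
  --sync-date "2025-01-01 03:00:00" --enabled true --organization "$ORG"
hammer product set-sync-plan --name "$PRODUCT" --organization "$ORG" --sync-plan "daily-sync"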
Chapter 2. Creating the required Alibaba Cloud resources | Chapter 2. Creating the required Alibaba Cloud resources Before you install OpenShift Container Platform, you must use the Alibaba Cloud console to create a Resource Access Management (RAM) user that has sufficient permissions to install OpenShift Container Platform into your Alibaba Cloud. This user must also have permissions to create new RAM users. You can also configure and use the ccoctl tool to create new credentials for the OpenShift Container Platform components with the permissions that they require. Important Alibaba Cloud on OpenShift Container Platform is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 2.1. Creating the required RAM user You must have an Alibaba Cloud Resource Access Management (RAM) user for the installation that has sufficient privileges. You can use the Alibaba Cloud Resource Access Management console to create a new user or modify an existing user. Later, you create credentials in OpenShift Container Platform based on this user's permissions. When you configure the RAM user, be sure to consider the following requirements: The user must have an Alibaba Cloud AccessKey ID and AccessKey secret pair. For a new user, you can select Open API Access for the Access Mode when creating the user. This mode generates the required AccessKey pair. For an existing user, you can add an AccessKey pair or you can obtain the AccessKey pair for that user. Note When created, the AccessKey secret is displayed only once. You must immediately save the AccessKey pair because the AccessKey pair is required for API calls. Add the AccessKey ID and secret to the ~/.alibabacloud/credentials file on your local computer. Alibaba Cloud automatically creates this file when you log in to the console. The Cloud Credential Operator (CCO) utility, ccoctl, uses these credentials when processing Credential Request objects. For example: [default] # Default client type = access_key # Certification type: access_key access_key_id = LTAI5t8cefXKmt # Key 1 access_key_secret = wYx56mszAN4Uunfh # Secret 1 Add your AccessKeyID and AccessKeySecret here. The RAM user must have the AdministratorAccess policy to ensure that the account has sufficient permission to create the OpenShift Container Platform cluster. This policy grants permissions to manage all Alibaba Cloud resources. When you attach the AdministratorAccess policy to a RAM user, you grant that user full access to all Alibaba Cloud services and resources. If you do not want to create a user with full access, create a custom policy with the following actions that you can add to your RAM user for installation. These actions are sufficient to install OpenShift Container Platform. Tip You can copy and paste the following JSON code into the Alibaba Cloud console to create a custom policy. For information on creating custom policies, see Create a custom policy in the Alibaba Cloud documentation. Example 2.1.
Example custom policy JSON file { "Version": "1", "Statement": [ { "Action": [ "tag:ListTagResources", "tag:UntagResources" ], "Resource": "*", "Effect": "Allow" }, { "Action": [ "vpc:DescribeVpcs", "vpc:DeleteVpc", "vpc:DescribeVSwitches", "vpc:DeleteVSwitch", "vpc:DescribeEipAddresses", "vpc:DescribeNatGateways", "vpc:ReleaseEipAddress", "vpc:DeleteNatGateway", "vpc:DescribeSnatTableEntries", "vpc:CreateSnatEntry", "vpc:AssociateEipAddress", "vpc:ListTagResources", "vpc:TagResources", "vpc:DescribeVSwitchAttributes", "vpc:CreateVSwitch", "vpc:CreateNatGateway", "vpc:DescribeRouteTableList", "vpc:CreateVpc", "vpc:AllocateEipAddress", "vpc:ListEnhanhcedNatGatewayAvailableZones" ], "Resource": "*", "Effect": "Allow" }, { "Action": [ "ecs:ModifyInstanceAttribute", "ecs:DescribeSecurityGroups", "ecs:DeleteSecurityGroup", "ecs:DescribeSecurityGroupReferences", "ecs:DescribeSecurityGroupAttribute", "ecs:RevokeSecurityGroup", "ecs:DescribeInstances", "ecs:DeleteInstances", "ecs:DescribeNetworkInterfaces", "ecs:DescribeInstanceRamRole", "ecs:DescribeUserData", "ecs:DescribeDisks", "ecs:ListTagResources", "ecs:AuthorizeSecurityGroup", "ecs:RunInstances", "ecs:TagResources", "ecs:ModifySecurityGroupPolicy", "ecs:CreateSecurityGroup", "ecs:DescribeAvailableResource", "ecs:DescribeRegions", "ecs:AttachInstanceRamRole" ], "Resource": "*", "Effect": "Allow" }, { "Action": [ "pvtz:DescribeRegions", "pvtz:DescribeZones", "pvtz:DeleteZone", "pvtz:DeleteZoneRecord", "pvtz:BindZoneVpc", "pvtz:DescribeZoneRecords", "pvtz:AddZoneRecord", "pvtz:SetZoneRecordStatus", "pvtz:DescribeZoneInfo", "pvtz:DescribeSyncEcsHostTask", "pvtz:AddZone" ], "Resource": "*", "Effect": "Allow" }, { "Action": [ "slb:DescribeLoadBalancers", "slb:SetLoadBalancerDeleteProtection", "slb:DeleteLoadBalancer", "slb:SetLoadBalancerModificationProtection", "slb:DescribeLoadBalancerAttribute", "slb:AddBackendServers", "slb:DescribeLoadBalancerTCPListenerAttribute", "slb:SetLoadBalancerTCPListenerAttribute", "slb:StartLoadBalancerListener", "slb:CreateLoadBalancerTCPListener", "slb:ListTagResources", "slb:TagResources", "slb:CreateLoadBalancer" ], "Resource": "*", "Effect": "Allow" }, { "Action": [ "ram:ListResourceGroups", "ram:DeleteResourceGroup", "ram:ListPolicyAttachments", "ram:DetachPolicy", "ram:GetResourceGroup", "ram:CreateResourceGroup", "ram:DeleteRole", "ram:GetPolicy", "ram:DeletePolicy", "ram:ListPoliciesForRole", "ram:CreateRole", "ram:AttachPolicyToRole", "ram:GetRole", "ram:CreatePolicy", "ram:CreateUser", "ram:DetachPolicyFromRole", "ram:CreatePolicyVersion", "ram:DetachPolicyFromUser", "ram:ListPoliciesForUser", "ram:AttachPolicyToUser", "ram:CreateUser", "ram:GetUser", "ram:DeleteUser", "ram:CreateAccessKey", "ram:ListAccessKeys", "ram:DeleteAccessKey", "ram:ListUsers", "ram:ListPolicyVersions" ], "Resource": "*", "Effect": "Allow" }, { "Action": [ "oss:DeleteBucket", "oss:DeleteBucketTagging", "oss:GetBucketTagging", "oss:GetBucketCors", "oss:GetBucketPolicy", "oss:GetBucketLifecycle", "oss:GetBucketReferer", "oss:GetBucketTransferAcceleration", "oss:GetBucketLog", "oss:GetBucketWebSite", "oss:GetBucketInfo", "oss:PutBucketTagging", "oss:PutBucket", "oss:OpenOssService", "oss:ListBuckets", "oss:GetService", "oss:PutBucketACL", "oss:GetBucketLogging", "oss:ListObjects", "oss:GetObject", "oss:PutObject", "oss:DeleteObject" ], "Resource": "*", "Effect": "Allow" }, { "Action": [ "alidns:DescribeDomainRecords", "alidns:DeleteDomainRecord", "alidns:DescribeDomains", "alidns:DescribeDomainRecordInfo", "alidns:AddDomainRecord", 
"alidns:SetDomainRecordStatus" ], "Resource": "*", "Effect": "Allow" }, { "Action": "bssapi:CreateInstance", "Resource": "*", "Effect": "Allow" }, { "Action": "ram:PassRole", "Resource": "*", "Effect": "Allow", "Condition": { "StringEquals": { "acs:Service": "ecs.aliyuncs.com" } } } ] } For more information about creating a RAM user and granting permissions, see Create a RAM user and Grant permissions to a RAM user in the Alibaba Cloud documentation. 2.2. Configuring the Cloud Credential Operator utility To assign RAM users and policies that provide long-term RAM AccessKeys (AKs) for each in-cluster component, extract and prepare the Cloud Credential Operator (CCO) utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. Additional resources Preparing to update a cluster with manually maintained credentials 2.3. steps Install a cluster on Alibaba Cloud infrastructure that is provisioned by the OpenShift Container Platform installation program, by using one of the following methods: Installing a cluster quickly on Alibaba Cloud : You can install a cluster quickly by using the default configuration options. Installing a customized cluster on Alibaba Cloud : The installation program allows for some customization to be applied at the installation stage. Many other customization options are available post-installation . | [
"Default client type = access_key # Certification type: access_key access_key_id = LTAI5t8cefXKmt # Key 1 access_key_secret = wYx56mszAN4Uunfh # Secret",
"{ \"Version\": \"1\", \"Statement\": [ { \"Action\": [ \"tag:ListTagResources\", \"tag:UntagResources\" ], \"Resource\": \"*\", \"Effect\": \"Allow\" }, { \"Action\": [ \"vpc:DescribeVpcs\", \"vpc:DeleteVpc\", \"vpc:DescribeVSwitches\", \"vpc:DeleteVSwitch\", \"vpc:DescribeEipAddresses\", \"vpc:DescribeNatGateways\", \"vpc:ReleaseEipAddress\", \"vpc:DeleteNatGateway\", \"vpc:DescribeSnatTableEntries\", \"vpc:CreateSnatEntry\", \"vpc:AssociateEipAddress\", \"vpc:ListTagResources\", \"vpc:TagResources\", \"vpc:DescribeVSwitchAttributes\", \"vpc:CreateVSwitch\", \"vpc:CreateNatGateway\", \"vpc:DescribeRouteTableList\", \"vpc:CreateVpc\", \"vpc:AllocateEipAddress\", \"vpc:ListEnhanhcedNatGatewayAvailableZones\" ], \"Resource\": \"*\", \"Effect\": \"Allow\" }, { \"Action\": [ \"ecs:ModifyInstanceAttribute\", \"ecs:DescribeSecurityGroups\", \"ecs:DeleteSecurityGroup\", \"ecs:DescribeSecurityGroupReferences\", \"ecs:DescribeSecurityGroupAttribute\", \"ecs:RevokeSecurityGroup\", \"ecs:DescribeInstances\", \"ecs:DeleteInstances\", \"ecs:DescribeNetworkInterfaces\", \"ecs:DescribeInstanceRamRole\", \"ecs:DescribeUserData\", \"ecs:DescribeDisks\", \"ecs:ListTagResources\", \"ecs:AuthorizeSecurityGroup\", \"ecs:RunInstances\", \"ecs:TagResources\", \"ecs:ModifySecurityGroupPolicy\", \"ecs:CreateSecurityGroup\", \"ecs:DescribeAvailableResource\", \"ecs:DescribeRegions\", \"ecs:AttachInstanceRamRole\" ], \"Resource\": \"*\", \"Effect\": \"Allow\" }, { \"Action\": [ \"pvtz:DescribeRegions\", \"pvtz:DescribeZones\", \"pvtz:DeleteZone\", \"pvtz:DeleteZoneRecord\", \"pvtz:BindZoneVpc\", \"pvtz:DescribeZoneRecords\", \"pvtz:AddZoneRecord\", \"pvtz:SetZoneRecordStatus\", \"pvtz:DescribeZoneInfo\", \"pvtz:DescribeSyncEcsHostTask\", \"pvtz:AddZone\" ], \"Resource\": \"*\", \"Effect\": \"Allow\" }, { \"Action\": [ \"slb:DescribeLoadBalancers\", \"slb:SetLoadBalancerDeleteProtection\", \"slb:DeleteLoadBalancer\", \"slb:SetLoadBalancerModificationProtection\", \"slb:DescribeLoadBalancerAttribute\", \"slb:AddBackendServers\", \"slb:DescribeLoadBalancerTCPListenerAttribute\", \"slb:SetLoadBalancerTCPListenerAttribute\", \"slb:StartLoadBalancerListener\", \"slb:CreateLoadBalancerTCPListener\", \"slb:ListTagResources\", \"slb:TagResources\", \"slb:CreateLoadBalancer\" ], \"Resource\": \"*\", \"Effect\": \"Allow\" }, { \"Action\": [ \"ram:ListResourceGroups\", \"ram:DeleteResourceGroup\", \"ram:ListPolicyAttachments\", \"ram:DetachPolicy\", \"ram:GetResourceGroup\", \"ram:CreateResourceGroup\", \"ram:DeleteRole\", \"ram:GetPolicy\", \"ram:DeletePolicy\", \"ram:ListPoliciesForRole\", \"ram:CreateRole\", \"ram:AttachPolicyToRole\", \"ram:GetRole\", \"ram:CreatePolicy\", \"ram:CreateUser\", \"ram:DetachPolicyFromRole\", \"ram:CreatePolicyVersion\", \"ram:DetachPolicyFromUser\", \"ram:ListPoliciesForUser\", \"ram:AttachPolicyToUser\", \"ram:CreateUser\", \"ram:GetUser\", \"ram:DeleteUser\", \"ram:CreateAccessKey\", \"ram:ListAccessKeys\", \"ram:DeleteAccessKey\", \"ram:ListUsers\", \"ram:ListPolicyVersions\" ], \"Resource\": \"*\", \"Effect\": \"Allow\" }, { \"Action\": [ \"oss:DeleteBucket\", \"oss:DeleteBucketTagging\", \"oss:GetBucketTagging\", \"oss:GetBucketCors\", \"oss:GetBucketPolicy\", \"oss:GetBucketLifecycle\", \"oss:GetBucketReferer\", \"oss:GetBucketTransferAcceleration\", \"oss:GetBucketLog\", \"oss:GetBucketWebSite\", \"oss:GetBucketInfo\", \"oss:PutBucketTagging\", \"oss:PutBucket\", \"oss:OpenOssService\", \"oss:ListBuckets\", \"oss:GetService\", \"oss:PutBucketACL\", \"oss:GetBucketLogging\", 
\"oss:ListObjects\", \"oss:GetObject\", \"oss:PutObject\", \"oss:DeleteObject\" ], \"Resource\": \"*\", \"Effect\": \"Allow\" }, { \"Action\": [ \"alidns:DescribeDomainRecords\", \"alidns:DeleteDomainRecord\", \"alidns:DescribeDomains\", \"alidns:DescribeDomainRecordInfo\", \"alidns:AddDomainRecord\", \"alidns:SetDomainRecordStatus\" ], \"Resource\": \"*\", \"Effect\": \"Allow\" }, { \"Action\": \"bssapi:CreateInstance\", \"Resource\": \"*\", \"Effect\": \"Allow\" }, { \"Action\": \"ram:PassRole\", \"Resource\": \"*\", \"Effect\": \"Allow\", \"Condition\": { \"StringEquals\": { \"acs:Service\": \"ecs.aliyuncs.com\" } } } ] }",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret",
"chmod 775 ccoctl",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command."
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_alibaba/manually-creating-alibaba-ram |
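The extraction steps above fit into one short script. This is a sketch under the same assumptions as the procedure: ./openshift-install and ~/.pull-secret exist in the working directory, and the extracted binary is named ccoctl (on some builds it carries a suffix such as ccoctl.rhel9, as in the verification step):

#!/bin/bash
# Sketch: pull the ccoctl binary out of the release image and verify that it runs.
set -euo pipefail

RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
CCO_IMAGE=$(oc adm release info --image-for='cloud-credential-operator' "$RELEASE_IMAGE" -a ~/.pull-secret)

oc image extract "$CCO_IMAGE" --file="/usr/bin/ccoctl" -a ~/.pull-secret
chmod 775 ccoctl

# List the Alibaba Cloud subcommands to confirm the binary works.
./ccoctl alibabacloud --help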
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.12/making-open-source-more-inclusive |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_eclipse_temurin_11.0.20/making-open-source-more-inclusive |
Chapter 31. Using LLDP to debug network configuration problems | Chapter 31. Using LLDP to debug network configuration problems You can use the Link Layer Discovery Protocol (LLDP) to debug network configuration problems in the topology. This means that LLDP can report configuration inconsistencies between this host and other hosts, routers, and switches. 31.1. Debugging an incorrect VLAN configuration using LLDP information If you configured a switch port to use a certain VLAN and a host does not receive these VLAN packets, you can use the Link Layer Discovery Protocol (LLDP) to debug the problem. Perform this procedure on the host that does not receive the packets. Prerequisites The nmstate package is installed. The switch supports LLDP. LLDP is enabled on neighbor devices. Procedure Create the ~/enable-LLDP-enp1s0.yml file with the following content: interfaces: - name: enp1s0 type: ethernet lldp: enabled: true Use the ~/enable-LLDP-enp1s0.yml file to enable LLDP on interface enp1s0 : Display the LLDP information: Verify the output to ensure that the settings match your expected configuration. For example, the LLDP information of the interface connected to the switch shows that the switch port this host is connected to uses VLAN ID 488 : - type: 127 ieee-802-1-vlans: - name: v2-0488-03-0505 vid: 488 If the network configuration of the enp1s0 interface uses a different VLAN ID, change it accordingly. Additional resources Configuring VLAN tagging | [
"interfaces: - name: enp1s0 type: ethernet lldp: enabled: true",
"nmstatectl apply ~/enable-LLDP-enp1s0.yml",
"nmstatectl show enp1s0 - name: enp1s0 type: ethernet state: up ipv4: enabled: false dhcp: false ipv6: enabled: false autoconf: false dhcp: false lldp: enabled: true neighbors: - - type: 5 system-name: Summit300-48 - type: 6 system-description: Summit300-48 - Version 7.4e.1 (Build 5) 05/27/05 04:53:11 - type: 7 system-capabilities: - MAC Bridge component - Router - type: 1 _description: MAC address chassis-id: 00:01:30:F9:AD:A0 chassis-id-type: 4 - type: 2 _description: Interface name port-id: 1/1 port-id-type: 5 - type: 127 ieee-802-1-vlans: - name: v2-0488-03-0505 vid: 488 oui: 00:80:c2 subtype: 3 - type: 127 ieee-802-3-mac-phy-conf: autoneg: true operational-mau-type: 16 pmd-autoneg-cap: 27648 oui: 00:12:0f subtype: 1 - type: 127 ieee-802-1-ppvids: - 0 oui: 00:80:c2 subtype: 2 - type: 8 management-addresses: - address: 00:01:30:F9:AD:A0 address-subtype: MAC interface-number: 1001 interface-number-subtype: 2 - type: 127 ieee-802-3-max-frame-size: 1522 oui: 00:12:0f subtype: 4 mac-address: 82:75:BE:6F:8C:7A mtu: 1500",
"- type: 127 ieee-802-1-vlans: - name: v2-0488-03-0505 vid: 488"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_networking/assembly_using-lldp-to-debug-network-configuration-problems_configuring-and-managing-networking |
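A hedged variant of the procedure above writes the nmstate file with a here-document and filters the neighbor output down to the advertised VLANs; the interface name enp1s0 matches the example, and the grep filter is only a convenience assumption:

#!/bin/bash
# Sketch: enable LLDP on enp1s0, then show only the IEEE 802.1 VLAN TLVs from the switch.
set -euo pipefail

cat > ~/enable-LLDP-enp1s0.yml << 'EOF'
interfaces:
- name: enp1s0
  type: ethernet
  lldp:
    enabled: true
EOF

nmstatectl apply ~/enable-LLDP-enp1s0.yml

# The "vid:" lines that follow are the VLAN IDs the switch port actually carries.
nmstatectl show enp1s0 | grep -A 3 'ieee-802-1-vlans'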
Chapter 11. Configuring user roles and permissions | Chapter 11. Configuring user roles and permissions Secure access to Data Grid services by configuring role-based access control (RBAC) for users. This requires you to assign roles to users so that they have permission to access caches and Data Grid resources. 11.1. Enabling security authorization By default authorization is disabled to ensure backwards compatibility with Infinispan CR instances. Complete the following procedure to enable authorization and use role-based access control (RBAC) for Data Grid users. Procedure Set true as the value for the spec.security.authorization.enabled field in your Infinispan CR. spec: security: authorization: enabled: true Apply the changes. 11.2. User roles and permissions Data Grid Operator provides a set of default roles that are associated with different permissions. Table 11.1. Default roles and permissions Role Permissions Description admin ALL Superuser with all permissions including control of the Cache Manager lifecycle. deployer ALL_READ, ALL_WRITE, LISTEN, EXEC, MONITOR, CREATE Can create and delete Data Grid resources in addition to application permissions. application ALL_READ, ALL_WRITE, LISTEN, EXEC, MONITOR Has read and write access to Data Grid resources in addition to observer permissions. Can also listen to events and execute server tasks and scripts. observer ALL_READ, MONITOR Has read access to Data Grid resources in addition to monitor permissions. monitor MONITOR Can view statistics for Data Grid clusters. Data Grid Operator credentials Data Grid Operator generates credentials that it uses to authenticate with Data Grid clusters to perform internal operations. By default Data Grid Operator credentials are automatically assigned the admin role when you enable security authorization. Additional resources How security authorization works ( Data Grid Security Guide ). 11.3. Assigning roles and permissions to users Assign users with roles that control whether users are authorized to access Data Grid cluster resources. Roles can have different permission levels, from read-only to unrestricted access. Note Users gain authorization implicitly. For example, "admin" gets admin permissions automatically. A user named "deployer" has the deployer role automatically, and so on. Procedure Create an identities.yaml file that assigns roles to users. credentials: - username: admin password: changeme - username: my-user-1 password: changeme roles: - admin - username: my-user-2 password: changeme roles: - monitor Create an authentication secret from identities.yaml . If necessary, delete the existing secret first. Specify the authentication secret with spec.security.endpointSecretName in your Infinispan CR and then apply the changes. 11.4. Adding custom roles and permissions You can define custom roles with different combinations of permissions. Procedure Open your Infinispan CR for editing. Specify custom roles and their associated permissions with the spec.security.authorization.roles field. spec: security: authorization: enabled: true roles: - name: my-role-1 permissions: - ALL - name: my-role-2 permissions: - READ - WRITE Apply the changes. | [
"spec: security: authorization: enabled: true",
"credentials: - username: admin password: changeme - username: my-user-1 password: changeme roles: - admin - username: my-user-2 password: changeme roles: - monitor",
"delete secret connect-secret --ignore-not-found create secret generic --from-file=identities.yaml connect-secret",
"spec: security: endpointSecretName: connect-secret",
"spec: security: authorization: enabled: true roles: - name: my-role-1 permissions: - ALL - name: my-role-2 permissions: - READ - WRITE"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/data_grid_operator_guide/authorization |
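The credential and authorization steps above can be scripted as follows. This is a sketch with assumptions: the oc client is used (kubectl works the same way), the Infinispan CR is named example-infinispan, and my-role-1 is an illustrative custom role:

#!/bin/bash
# Sketch: recreate the credentials secret and enable authorization with one custom role.
set -euo pipefail

cat > identities.yaml << 'EOF'
credentials:
  - username: admin
    password: changeme
  - username: my-user-1
    password: changeme
    roles:
      - my-role-1
EOF

oc delete secret connect-secret --ignore-not-found
oc create secret generic connect-secret --from-file=identities.yaml

# Point the CR at the secret, turn on authorization, and define the custom role.
# "example-infinispan" is a placeholder CR name.
oc patch infinispan example-infinispan --type merge -p '{"spec":{"security":{"endpointSecretName":"connect-secret","authorization":{"enabled":true,"roles":[{"name":"my-role-1","permissions":["READ","WRITE"]}]}}}}'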
function::msecs_to_string | function::msecs_to_string Name function::msecs_to_string - Human readable string for given milliseconds Synopsis Arguments msecs Number of milliseconds to translate. Description Returns a string representing the number of milliseconds as a human readable string consisting of "XmY.ZZZs", where X is the number of minutes, Y is the number of seconds, and ZZZ is the number of milliseconds. | [
"msecs_to_string:string(msecs:long)"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-msecs-to-string |
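A quick way to see the described format, assuming the systemtap package is installed and you have the privileges to run it, is a one-line script; the value 123456 is only an illustration:

# 123456 ms is 2 minutes, 3 seconds, 456 milliseconds, so this should print "2m3.456s".
stap -e 'probe begin { println(msecs_to_string(123456)); exit() }'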
Chapter 4. Basic configuration | Chapter 4. Basic configuration As a storage administrator, learning the basics of configuring the Ceph Object Gateway is important. You can learn about the defaults and the embedded web server called Beast. For troubleshooting issues with the Ceph Object Gateway, you can adjust the logging and debugging output generated by the Ceph Object Gateway. Also, you can provide a High-Availability proxy for storage cluster access using the Ceph Object Gateway. Prerequisites A running, and healthy Red Hat Ceph Storage cluster. Installation of the Ceph Object Gateway software package. 4.1. Add a wildcard to the DNS You can add the wildcard such as hostname to the DNS record of the DNS server. Prerequisite A running Red Hat Ceph Storage cluster. Ceph Object Gateway installed. Root-level access to the admin node. Procedure To use Ceph with S3-style subdomains, add a wildcard to the DNS record of the DNS server that the ceph-radosgw daemon uses to resolve domain names: Syntax For dnsmasq , add the following address setting with a dot (.) prepended to the host name: Syntax Example For bind , add a wildcard to the DNS record: Example Restart the DNS server and ping the server with a subdomain to ensure that the ceph-radosgw daemon can process the subdomain requests: Syntax Example If the DNS server is on the local machine, you might need to modify /etc/resolv.conf by adding a nameserver entry for the local machine. Add the host name in the Ceph Object Gateway zone group: Get the zone group: Syntax Example Take a back-up of the JSON file: Example View the zonegroup.json file: Example Update the zonegroup.json file with new host name: Example Set the zone group back in the Ceph Object Gateway: Syntax Example Update the period: Example Restart the Ceph Object Gateway so that the DNS setting takes effect. Additional Resources See the The Ceph configuration database section in the Red Hat Ceph Storage Configuration Guide for more details. 4.2. The Beast front-end web server The Ceph Object Gateway provides Beast, a C/C embedded front-end web server. Beast uses the `Boost.Beast` C library to parse HTTP, and Boost.Asio for asynchronous network I/O. Additional Resources Boost C++ Libraries 4.3. Beast configuration options The following Beast configuration options can be passed to the embedded web server in the Ceph configuration file for the RADOS Gateway. Each option has a default value. If a value is not specified, the default value is empty. Option Description Default endpoint and ssl_endpoint Sets the listening address in the form address[:port] where the address is an IPv4 address string in dotted decimal form, or an IPv6 address in hexadecimal notation surrounded by square brackets. The optional port defaults to 8080 for endpoint and 443 for ssl_endpoint . It can be specified multiple times as in endpoint=[::1] endpoint=192.168.0.100:8000 . EMPTY ssl_certificate Path to the SSL certificate file used for SSL-enabled endpoints. EMPTY ssl_private_key Optional path to the private key file used for SSL-enabled endpoints. If one is not given the file specified by ssl_certificate is used as the private key. EMPTY tcp_nodelay Performance optimization in some environments. EMPTY Example /etc/ceph/ceph.conf file with Beast options using SSL: Note By default, the Beast front end writes an access log line recording all requests processed by the server to the RADOS Gateway log file. Additional Resources See Using the Beast front end for more information. 4.4. 
Configuring SSL for Beast You can configure the Beast front-end web server to use the OpenSSL library to provide Transport Layer Security (TLS). To use Secure Socket Layer (SSL) with Beast, you need to obtain a certificate from a Certificate Authority (CA) that matches the hostname of the Ceph Object Gateway node. Beast also requires the secret key, server certificate, and any other CA in a single .pem file. Important Prevent unauthorized access to the .pem file, because it contains the secret key hash. Important Red Hat recommends obtaining a certificate from a CA with the Subject Alternative Name (SAN) field, and a wildcard for use with S3-style subdomains. Important Red Hat recommends only using SSL with the Beast front-end web server for small to medium sized test environments. For production environments, you must use HAProxy and keepalived to terminate the SSL connection at the HAProxy. If the Ceph Object Gateway acts as a client and a custom certificate is used on the server, you can inject a custom CA by importing it on the node and then mapping the etc/pki directory into the container with the extra_container_args parameter in the Ceph Object Gateway specification file. Prerequisites A running, and healthy Red Hat Ceph Storage cluster. Installation of the Ceph Object Gateway software package. Installation of the OpenSSL software package. Root-level access to the Ceph Object Gateway node. Procedure Create a new file named rgw.yml in the current directory: Example Open the rgw.yml file for editing, and customize it for the environment: Syntax Example Deploy the Ceph Object Gateway using the service specification file: Example 4.5. D3N data cache Datacenter-Data-Delivery Network (D3N) uses high-speed storage, such as NVMe , to cache datasets on the access side. Such caching allows big data jobs to use the compute and fast-storage resources available on each Rados Gateway node at the edge. The Rados Gateways act as cache servers for the back-end object store (OSDs), storing data locally for reuse. Note Each time the Rados Gateway is restarted the content of the cache directory is purged. 4.5.1. Adding D3N cache directory To enable D3N cache on RGW, you need to also include the D3N cache directory in podman unit.run . Prerequisites A running Red Hat Ceph Storage cluster. Ceph Object Gateway installed. Root-level access to the admin node. A fast NVMe drive in each RGW node to serve as the local cache storage. Procedure Create a mount point for the NVMe drive. Syntax Example Create a cache directory path. Syntax Example Provide a+rwx permission to nvme-mount-path and rgw_d3n_l1_datacache_persistent_path . Syntax Example Create/Modify a RGW specification file with extra_container_args to add rgw_d3n_l1_datacache_persistent_path into podman unit.run . Syntax Example Note If there are multiple instances of RGW in a single host, then a separate rgw_d3n_l1_datacache_persistent_path has to be created for each instance and add each path in extra_container_args . Example : For two instances of RGW in each host, create two separate cache-directory under rgw_d3n_l1_datacache_persistent_path : /mnt/nvme0n1/rgw_datacache/rgw1 and /mnt/nvme0n1/rgw_datacache/rgw2 Example for "extra_container_args" in rgw specification file: Example for rgw-spec.yml: : Redeploy the RGW service: Example 4.5.2. Configuring D3N on rados gateway You can configure the D3N data cache on an existing RGW to improve the performance of big-data jobs running in Red Hat Ceph Storage clusters. 
Prerequisites A running Red Hat Ceph Storage cluster. Ceph Object Gateway installed. Root-level access to the admin node. A fast NVMe to serve as the cache storage. Adding the required D3N-related configuration To enable D3N on an existing RGW, the following configuration needs to be set for each Rados Gateways client : Syntax rgw_d3n_l1_local_datacache_enabled=true rgw_d3n_l1_datacache_persistent_path= path to the cache directory Example rgw_d3n_l1_datacache_size= max_size_of_cache_in_bytes Example Example procedure Create a test object: Note The test object needs to be larger than 4 MB to cache. Example Perform GET of an object: Example Verify cache creation. Cache will be created with the name consisting of object key-name within a configured rgw_d3n_l1_datacache_persistent_path . Example Once the cache is created for an object, the GET operation for that object will access from cache resulting in faster access. Example In the above example, to demonstrate the cache acceleration, we are writing to RAM drive ( /dev/shm ). Additional Resources See the Ceph subsystems default logging level values section in the Red Hat Ceph Storage Troubleshooting Guide for additional details on using high availability. See the Understanding Ceph logs section in the Red Hat Ceph Storage Troubleshooting Guide for additional details on using high availability. 4.6. Adjusting logging and debugging output Once you finish the setup procedure, check your logging output to ensure it meets your needs. By default, the Ceph daemons log to journald , and you can view the logs using the journalctl command. Alternatively, you can also have the Ceph daemons log to files, which are located under the /var/log/ceph/ CEPH_CLUSTER_ID / directory. Important Verbose logging can generate over 1 GB of data per hour. This type of logging can potentially fill up the operating system's disk, causing the operating system to stop functioning. Prerequisites A running Red Hat Ceph Storage cluster. Installation of the Ceph Object Gateway software. Procedure Set the following parameter to increase the Ceph Object Gateway logging output: Syntax Example You can also modify these settings at runtime: Syntax Example Optionally, you can configure the Ceph daemons to log their output to files. Set the log_to_file , and mon_cluster_log_to_file options to true : Example Additional Resources See the Ceph debugging and logging configuration section of the Red Hat Ceph Storage Configuration Guide for more details. 4.7. Static web hosting As a storage administrator, you can configure the Ceph Object Gateway to host static websites in S3 buckets. Traditional website hosting involves configuring a web server for each website, which can use resources inefficiently when content does not change dynamically. For example, sites that do not use server-side services like PHP, servlets, databases, nodejs, and the like. This approach is substantially more economical than setting up virtual machines with web servers for each site. Prerequisites A healthy, running Red Hat Ceph Storage cluster. 4.7.1. Static web hosting assumptions Static web hosting requires at least one running Red Hat Ceph Storage cluster, and at least two Ceph Object Gateway instances for the static web sites. Red Hat assumes that each zone will have multiple gateway instances using a load balancer, such as high-availability (HA) Proxy and keepalived . 
Important Red Hat DOES NOT support using a Ceph Object Gateway instance to deploy both standard S3/Swift APIs and static web hosting simultaneously. Additional Resources See the High availability service section in the Red Hat Ceph Storage Object Gateway Guide for additional details on using high availability. 4.7.2. Static web hosting requirements Static web hosting functionality uses its own API, so configuring a gateway to use static web sites in S3 buckets requires the following: S3 static web hosting uses Ceph Object Gateway instances that are separate and distinct from instances used for standard S3/Swift API use cases. Gateway instances hosting S3 static web sites should have separate, non-overlapping domain names from the standard S3/Swift API gateway instances. Gateway instances hosting S3 static web sites should use separate public-facing IP addresses from the standard S3/Swift API gateway instances. Gateway instances hosting S3 static web sites load balance, and if necessary terminate SSL, using HAProxy/keepalived. 4.7.3. Static web hosting gateway setup To enable a Ceph Object Gateway for static web hosting, set the following options: Syntax Example The rgw_enable_static_website setting MUST be true . The rgw_enable_apis setting MUST enable the s3website API. The rgw_dns_name and rgw_dns_s3website_name settings must provide their fully qualified domains. If the site uses canonical name extensions, then set the rgw_resolve_cname option to true . Important The FQDNs of rgw_dns_name and rgw_dns_s3website_name MUST NOT overlap. 4.7.4. Static web hosting DNS configuration The following is an example of assumed DNS settings, where the first two lines specify the domains of the gateway instance using a standard S3 interface and point to the IPv4 and IPv6 addresses. The third line provides a wildcard CNAME setting for S3 buckets using canonical name extensions. The fourth and fifth lines specify the domains for the gateway instance using the S3 website interface and point to their IPv4 and IPv6 addresses. Note The IP addresses in the first two lines differ from the IP addresses in the fourth and fifth lines. If using Ceph Object Gateway in a multi-site configuration, consider using a routing solution to route traffic to the gateway closest to the client. The Amazon Web Service (AWS) requires static web host buckets to match the host name. Ceph provides a few different ways to configure the DNS, and HTTPS will work if the proxy has a matching certificate. Hostname to a Bucket on a Subdomain To use AWS-style S3 subdomains, use a wildcard in the DNS entry which can redirect requests to any bucket. A DNS entry might look like the following: Access the bucket name, where the bucket name is bucket1 , in the following manner: Hostname to Non-Matching Bucket Ceph supports mapping domain names to buckets without including the bucket name in the request, which is unique to Ceph Object Gateway. To use a domain name to access a bucket, map the domain name to the bucket name. A DNS entry might look like the following: Where the bucket name is bucket2 . Access the bucket in the following manner: Hostname to Long Bucket with CNAME AWS typically requires the bucket name to match the domain name. 
To configure the DNS for static web hosting using CNAME, the DNS entry might look like the following: Access the bucket in the following manner: Hostname to Long Bucket without CNAME If the DNS name contains other non-CNAME records, such as SOA , NS , MX or TXT , the DNS record must map the domain name directly to the IP address. For example: Access the bucket in the following manner: 4.7.5. Creating a static web hosting site To create a static website, perform the following steps: Create an S3 bucket. The bucket name MIGHT be the same as the website's domain name. For example, mysite.com may have a bucket name of mysite.com . This is required for AWS, but it is NOT required for Ceph. See the Static web hosting DNS configuration section in the Red Hat Ceph Storage Object Gateway Guide for details. Upload the static website content to the bucket. Contents may include HTML, CSS, client-side JavaScript, images, audio/video content, and other downloadable files. A website MUST have an index.html file and might have an error.html file. Verify the website's contents. At this point, only the creator of the bucket has access to the contents. Set permissions on the files so that they are publicly readable. 4.8. High availability for the Ceph Object Gateway As a storage administrator, you can assign many instances of the Ceph Object Gateway to a single zone. This allows you to scale out as the load increases, that is, the same zone group and zone; however, you do not need a federated architecture to use a highly available proxy. Since each Ceph Object Gateway daemon has its own IP address, you can use the ingress service to balance the load across many Ceph Object Gateway daemons or nodes. The ingress service manages HAProxy and keepalived daemons for the Ceph Object Gateway environment. You can also terminate HTTPS traffic at the HAProxy server, and use HTTP between the HAProxy server and the Beast front-end web server instances for the Ceph Object Gateway. Prerequisites At least two Ceph Object Gateway daemons running on different hosts. Capacity for at least two instances of the ingress service running on different hosts. 4.8.1. High availability service The ingress service provides a highly available endpoint for the Ceph Object Gateway. The ingress service can be deployed to any number of hosts as needed. Red Hat recommends having at least two supported Red Hat Enterprise Linux servers, each server configured with the ingress service. You can run a high availability (HA) service with a minimum set of configuration options. The Ceph orchestrator deploys the ingress service, which manages the haproxy and keepalived daemons, by providing load balancing with a floating virtual IP address. The active haproxy distributes all Ceph Object Gateway requests to all the available Ceph Object Gateway daemons. A virtual IP address is automatically configured on one of the ingress hosts at a time, known as the primary host. The Ceph orchestrator selects the first network interface based on existing IP addresses that are configured as part of the same subnet. In cases where the virtual IP address does not belong to the same subnet, you can define a list of subnets for the Ceph orchestrator to match with existing IP addresses. If the keepalived daemon and the active haproxy are not responding on the primary host, then the virtual IP address moves to a backup host. This backup host becomes the new primary host. 
Warning Currently, you can not configure a virtual IP address on a network interface that does not have a configured IP address. Important To use the secure socket layer (SSL), SSL must be terminated by the ingress service and not at the Ceph Object Gateway. 4.8.2. Configuring high availability for the Ceph Object Gateway To configure high availability (HA) for the Ceph Object Gateway you write a YAML configuation file, and the Ceph orchestrator does the installation, configuraton, and management of the ingress service. The ingress service uses the haproxy and keepalived daemons to provide high availability for the Ceph Object Gateway. Prerequisites A minimum of two hosts running Red Hat Enterprise Linux 9, or higher, for installing the ingress service on. A healthy running Red Hat Ceph Storage cluster. A minimum of two Ceph Object Gateway daemons running on different hosts. Root-level access to the host running the ingress service. If using a firewall, then open port 80 for HTTP and port 443 for HTTPS traffic. Procedure Create a new ingress.yaml file: Example Open the ingress.yaml file for editing. Added the following options, and add values applicable to the environment: Syntax 1 Must be set to ingress . 2 Must match the existing Ceph Object Gateway service name. 3 Where to deploy the haproxy and keepalived containers. 4 The virtual IP address where the ingress service is available. 5 The port to access the ingress service. 6 The port to access the haproxy load balancer status. 7 Optional list of available subnets. 8 Optional SSL certificate and private key. Example Launch the Cephadm shell: Example Configure the latest haproxy and keepalived images: Syntax Red Hat Enterprise Linux 9 Install and configure the new ingress service using the Ceph orchestrator: After the Ceph orchestrator completes, verify the HA configuration. On the host running the ingress service, check that the virtual IP address appears: Example Try reaching the Ceph Object Gateway from a Ceph client: Syntax Example If this returns an index.html with similar content as in the example below, then the HA configuration for the Ceph Object Gateway is working properly. Example Additional resources See the Performing a Standard RHEL Installation Guide for more details. See the High availability service section in the Red Hat Ceph Storage Object Gateway Guide for more details. | [
"bucket-name.domain-name.com",
"address=/. HOSTNAME_OR_FQDN / HOST_IP_ADDRESS",
"address=/.gateway-host01/192.168.122.75",
"USDTTL 604800 @ IN SOA gateway-host01. root.gateway-host01. ( 2 ; Serial 604800 ; Refresh 86400 ; Retry 2419200 ; Expire 604800 ) ; Negative Cache TTL ; @ IN NS gateway-host01. @ IN A 192.168.122.113 * IN CNAME @",
"ping mybucket. HOSTNAME",
"ping mybucket.gateway-host01",
"radosgw-admin zonegroup get --rgw-zonegroup= ZONEGROUP_NAME > zonegroup.json",
"radosgw-admin zonegroup get --rgw-zonegroup=us > zonegroup.json",
"cp zonegroup.json zonegroup.backup.json",
"cat zonegroup.json { \"id\": \"d523b624-2fa5-4412-92d5-a739245f0451\", \"name\": \"asia\", \"api_name\": \"asia\", \"is_master\": \"true\", \"endpoints\": [], \"hostnames\": [], \"hostnames_s3website\": [], \"master_zone\": \"d2a3b90f-f4f3-4d38-ac1f-6463a2b93c32\", \"zones\": [ { \"id\": \"d2a3b90f-f4f3-4d38-ac1f-6463a2b93c32\", \"name\": \"india\", \"endpoints\": [], \"log_meta\": \"false\", \"log_data\": \"false\", \"bucket_index_max_shards\": 11, \"read_only\": \"false\", \"tier_type\": \"\", \"sync_from_all\": \"true\", \"sync_from\": [], \"redirect_zone\": \"\" } ], \"placement_targets\": [ { \"name\": \"default-placement\", \"tags\": [], \"storage_classes\": [ \"STANDARD\" ] } ], \"default_placement\": \"default-placement\", \"realm_id\": \"d7e2ad25-1630-4aee-9627-84f24e13017f\", \"sync_policy\": { \"groups\": [] } }",
"\"hostnames\": [\"host01\", \"host02\",\"host03\"],",
"radosgw-admin zonegroup set --rgw-zonegroup= ZONEGROUP_NAME --infile=zonegroup.json",
"radosgw-admin zonegroup set --rgw-zonegroup=us --infile=zonegroup.json",
"radosgw-admin period update --commit",
"[client.rgw.node1] rgw frontends = beast ssl_endpoint=192.168.0.100:443 ssl_certificate=<path to SSL certificate>",
"touch rgw.yml",
"service_type: rgw service_id: SERVICE_ID service_name: SERVICE_NAME placement: hosts: - HOST_NAME spec: ssl: true rgw_frontend_ssl_certificate: CERT_HASH",
"service_type: rgw service_id: foo service_name: rgw.foo placement: hosts: - host01 spec: ssl: true rgw_frontend_ssl_certificate: | -----BEGIN RSA PRIVATE KEY----- MIIEpAIBAAKCAQEA+Cf4l9OagD6x67HhdCy4Asqw89Zz9ZuGbH50/7ltIMQpJJU0 gu9ObNtIoC0zabJ7n1jujueYgIpOqGnhRSvsGJiEkgN81NLQ9rqAVaGpadjrNLcM bpgqJCZj0vzzmtFBCtenpb5l/EccMFcAydGtGeLP33SaWiZ4Rne56GBInk6SATI/ JSKweGD1y5GiAWipBR4C74HiAW9q6hCOuSdp/2WQxWT3T1j2sjlqxkHdtInUtwOm j5Ism276IndeQ9hR3reFR8PJnKIPx73oTBQ7p9CMR1J4ucq9Ny0J12wQYT00fmJp -----END RSA PRIVATE KEY----- -----BEGIN CERTIFICATE----- MIIEBTCCAu2gAwIBAgIUGfYFsj8HyA9Zv2l600hxzT8+gG4wDQYJKoZIhvcNAQEL BQAwgYkxCzAJBgNVBAYTAklOMQwwCgYDVQQIDANLQVIxDDAKBgNVBAcMA0JMUjEM MAoGA1UECgwDUkhUMQswCQYDVQQLDAJCVTEkMCIGA1UEAwwbY2VwaC1zc2wtcmhj czUtOGRjeHY2LW5vZGU1MR0wGwYJKoZIhvcNAQkBFg5hYmNAcmVkaGF0LmNvbTAe -----END CERTIFICATE-----",
"ceph orch apply -i rgw.yml",
"mkfs.ext4 nvme-drive-path",
"mkfs.ext4 /dev/nvme0n1 mount /dev/nvme0n1 /mnt/nvme0n1/",
"mkdir <nvme-mount-path>/cache-directory-name",
"mkdir /mnt/nvme0n1/rgw_datacache",
"chmod a+rwx nvme-mount-path ; chmod a+rwx rgw_d3n_l1_datacache_persistent_path",
"chmod a+rwx /mnt/nvme0n1 ; chmod a+rwx /mnt/nvme0n1/rgw_datacache/",
"\"extra_container_args: \"-v\" \"rgw_d3n_l1_datacache_persistent_path:rgw_d3n_l1_datacache_persistent_path\" \"",
"cat rgw-spec.yml service_type: rgw service_id: rgw.test placement: hosts: host1 host2 extra_container_args: \"-v\" \"/mnt/nvme0n1/rgw_datacache/:/mnt/nvme0n1/rgw_datacache/\"",
"\"extra_container_args: \"-v\" \"/mnt/nvme0n1/rgw_datacache/rgw1/:/mnt/nvme0n1/rgw_datacache/rgw1/\" \"-v\" \"/mnt/nvme0n1/rgw_datacache/rgw2/:/mnt/nvme0n1/rgw_datacache/rgw2/\" \"",
"cat rgw-spec.yml service_type: rgw service_id: rgw.test placement: hosts: host1 host2 count_per_host: 2 extra_container_args: \"-v\" \"/mnt/nvme0n1/rgw_datacache/rgw1/:/mnt/nvme0n1/rgw_datacache/rgw1/\" \"-v\" \"/mnt/nvme0n1/rgw_datacache/rgw2/:/mnt/nvme0n1/rgw_datacache/rgw2/\"",
"ceph orch apply -i rgw-spec.yml",
"ceph config set <client.rgw> <CONF-OPTION> <VALUE>",
"rgw_d3n_l1_datacache_persistent_path=/mnt/nvme/rgw_datacache/",
"rgw_d3n_l1_datacache_size=10737418240",
"fallocate -l 1G ./1G.dat s3cmd mb s3://bkt s3cmd put ./1G.dat s3://bkt",
"s3cmd get s3://bkt/1G.dat /dev/shm/1G_get.dat download: 's3://bkt/1G.dat' -> './1G_get.dat' [1 of 1] 1073741824 of 1073741824 100% in 13s 73.94 MB/s done",
"ls -lh /mnt/nvme/rgw_datacache rw-rr. 1 ceph ceph 1.0M Jun 2 06:18 cc7f967c-0021-43b2-9fdf-23858e868663.615391.1_shadow.ZCiCtMWeu_19wb100JIEZ-o4tv2IyA_1",
"s3cmd get s3://bkt/1G.dat /dev/shm/1G_get.dat download: 's3://bkt/1G.dat' -> './1G_get.dat' [1 of 1] 1073741824 of 1073741824 100% in 6s 155.07 MB/s done",
"ceph config set client.rgw debug_rgw VALUE",
"ceph config set client.rgw debug_rgw 20",
"ceph --admin-daemon /var/run/ceph/ceph-client.rgw. NAME .asok config set debug_rgw VALUE",
"ceph --admin-daemon /var/run/ceph/ceph-client.rgw.rgw.asok config set debug_rgw 20",
"ceph config set global log_to_file true ceph config set global mon_cluster_log_to_file true",
"ceph config set client.rgw OPTION VALUE",
"ceph config set client.rgw rgw_enable_static_website true ceph config set client.rgw rgw_enable_apis s3,s3website ceph config set client.rgw rgw_dns_name objects-zonegroup.example.com ceph config set client.rgw rgw_dns_s3website_name objects-website-zonegroup.example.com ceph config set client.rgw rgw_resolve_cname true",
"objects-zonegroup.domain.com. IN A 192.0.2.10 objects-zonegroup.domain.com. IN AAAA 2001:DB8::192:0:2:10 *.objects-zonegroup.domain.com. IN CNAME objects-zonegroup.domain.com. objects-website-zonegroup.domain.com. IN A 192.0.2.20 objects-website-zonegroup.domain.com. IN AAAA 2001:DB8::192:0:2:20",
"*.objects-website-zonegroup.domain.com. IN CNAME objects-website-zonegroup.domain.com.",
"http://bucket1.objects-website-zonegroup.domain.com",
"www.example.com. IN CNAME bucket2.objects-website-zonegroup.domain.com.",
"http://www.example.com",
"www.example.com. IN CNAME www.example.com.objects-website-zonegroup.domain.com.",
"http://www.example.com",
"www.example.com. IN A 192.0.2.20 www.example.com. IN AAAA 2001:DB8::192:0:2:20",
"http://www.example.com",
"[root@host01 ~] touch ingress.yaml",
"service_type: ingress 1 service_id: SERVICE_ID 2 placement: 3 hosts: - HOST1 - HOST2 - HOST3 spec: backend_service: SERVICE_ID virtual_ip: IP_ADDRESS / CIDR 4 frontend_port: INTEGER 5 monitor_port: INTEGER 6 virtual_interface_networks: 7 - IP_ADDRESS / CIDR ssl_cert: | 8",
"service_type: ingress service_id: rgw.foo placement: hosts: - host01.example.com - host02.example.com - host03.example.com spec: backend_service: rgw.foo virtual_ip: 192.168.1.2/24 frontend_port: 8080 monitor_port: 1967 virtual_interface_networks: - 10.10.0.0/16 ssl_cert: | -----BEGIN CERTIFICATE----- MIIEpAIBAAKCAQEA+Cf4l9OagD6x67HhdCy4Asqw89Zz9ZuGbH50/7ltIMQpJJU0 gu9ObNtIoC0zabJ7n1jujueYgIpOqGnhRSvsGJiEkgN81NLQ9rqAVaGpadjrNLcM bpgqJCZj0vzzmtFBCtenpb5l/EccMFcAydGtGeLP33SaWiZ4Rne56GBInk6SATI/ JSKweGD1y5GiAWipBR4C74HiAW9q6hCOuSdp/2WQxWT3T1j2sjlqxkHdtInUtwOm j5Ism276IndeQ9hR3reFR8PJnKIPx73oTBQ7p9CMR1J4ucq9Ny0J12wQYT00fmJp -----END CERTIFICATE----- -----BEGIN PRIVATE KEY----- MIIEBTCCAu2gAwIBAgIUGfYFsj8HyA9Zv2l600hxzT8+gG4wDQYJKoZIhvcNAQEL BQAwgYkxCzAJBgNVBAYTAklOMQwwCgYDVQQIDANLQVIxDDAKBgNVBAcMA0JMUjEM MAoGA1UECgwDUkhUMQswCQYDVQQLDAJCVTEkMCIGA1UEAwwbY2VwaC1zc2wtcmhj czUtOGRjeHY2LW5vZGU1MR0wGwYJKoZIhvcNAQkBFg5hYmNAcmVkaGF0LmNvbTAe -----END PRIVATE KEY-----",
"cephadm shell --mount ingress.yaml:/var/lib/ceph/radosgw/ingress.yaml",
"ceph config set mgr mgr/cephadm/container_image_haproxy HAPROXY_IMAGE_ID ceph config set mgr mgr/cephadm/container_image_keepalived KEEPALIVED_IMAGE_ID",
"ceph config set mgr mgr/cephadm/container_image_haproxy registry.redhat.io/rhceph/rhceph-haproxy-rhel9:latest ceph config set mgr mgr/cephadm/container_image_keepalived registry.redhat.io/rhceph/keepalived-rhel9:latest",
"ceph orch apply -i /var/lib/ceph/radosgw/ingress.yaml",
"ip addr show",
"wget HOST_NAME",
"wget host01.example.com",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <ListAllMyBucketsResult xmlns=\"http://s3.amazonaws.com/doc/2006-03-01/\"> <Owner> <ID>anonymous</ID> <DisplayName></DisplayName> </Owner> <Buckets> </Buckets> </ListAllMyBucketsResult>"
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/object_gateway_guide/basic-configuration |
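A minimal shell sketch tying the D3N cache preparation and configuration steps above together. It assumes a dedicated, empty NVMe device at /dev/nvme0n1, a working ceph admin shell, and that the cache directory matches the bind mounts in rgw-spec.yml; the rgw_d3n_l1_local_datacache_enabled option name comes from upstream Ceph and should be verified against your release.

#!/usr/bin/env bash
# Sketch: prepare an NVMe-backed D3N data cache for a RADOS Gateway client.
set -euo pipefail

CACHE_DEV=/dev/nvme0n1
CACHE_DIR=/mnt/nvme0n1/rgw_datacache

mkfs.ext4 "$CACHE_DEV"                         # format the dedicated cache device
mkdir -p /mnt/nvme0n1 && mount "$CACHE_DEV" /mnt/nvme0n1
mkdir -p "$CACHE_DIR"
chmod a+rwx /mnt/nvme0n1 "$CACHE_DIR"          # RGW containers need write access

# Point the gateway at the cache (10 GiB in this example); the enable flag is
# the upstream Ceph option name and may differ per release.
ceph config set client.rgw rgw_d3n_l1_local_datacache_enabled true
ceph config set client.rgw rgw_d3n_l1_datacache_persistent_path "$CACHE_DIR/"
ceph config set client.rgw rgw_d3n_l1_datacache_size 10737418240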
Chapter 274. Reactor Component | Chapter 274. Reactor Component Available as of Camel version 2.20 Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-reactor</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> | [
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-reactor</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/reactor_component |
Chapter 5. YAML in a Nutshell | Chapter 5. YAML in a Nutshell 5.1. Overview YAML - which stands for "YAML Ain't Markup Language" - is a human-friendly data serialization standard, similar in scope to JSON (Javascript Object Notation). Unlike JSON, there are only a handful of special characters used to represent mappings and bullet lists, the two basic types of structure, and indentation is used to represent substructure. 5.2. Basics The YAML format is line-oriented, with two top-level parts, HEAD and BODY , separated by a line of three hyphens. The head holds configuration information and the body holds the data. this topic does not discuss the configuration aspect; all the examples here show only the data portion. In such cases, the "---" is optional. The most basic data element is one of: A number A Unicode string A boolean value, spelled either true or false In a key/value pair context, a missing value is parsed as nil Comments start with a "#" (hash, U+23) and go to the end of the line. Indentation is whitespace at the start of the line. You are strongly encouraged to avoid TAB (U+09) characters and use a series of SPACE (U+20) characters, instead. 5.3. Lists A list is a series of lines, each beginning with the same amount of indentation, followed by a hyphen, followed by a list element. Lists cannot have blank lines. For example, here is a list of three elements, the third of which has a comment: Note: The third element is the string "bottom dweller" and does not include the whitespace between "dweller" and the comment. WARNING : Lists cannot normally nest directly; there should be an intervening mapping (described below). In the following example, the list's second element seems, due to the indentation (two SPACE characters), to host a sub-list: In reality, the second element is actually parsed as a single string. The input is equivalent to: The newlines and indentation are normalized to a single space. 5.4. Mappings To write a mapping (also known as an associative array or hash table), use a ":" (colon, U+3A ) followed by one or more SPACE characters between the key and the value: All keys in a mapping must be unique. For example, this is invalid YAML for two reasons: the key square is repeated, and there is no space after the colon following triangle : Mappings can nest directly, by starting the sub-mapping on the line with increased indentation. In the example, the value for key square is itself a mapping (keys sides and perimeter ), and likewise for the value for key triangle . The value for key pentagon is the number 5. The following example shows a mapping with three key/value pairs. The first and third values are nil , while the second is a list of two elements, "highish middle" and "lowish middle". 5.5. Quotation Double-quotation marks (also known as "double-quotes") are useful for forcing non-string data to be interpreted as a string, for preserving whitespace, and for suppressing the meaning of colon. To include a double-quote in a string, escape it with `"\" (backslash, U+5C ). In the following example, all keys and values are strings. The second key has a colon in it. The second value has two spaces both preceding and trailing the visible text. For readability when double-quoting the key, you are encouraged to add whitespace before the colon. 5.6. Block Content There are two kinds of block content, typically found in the value position of a mapping element: newline-preserving and folded. If a block begins with "|" (pipe, U+7C ), the newlines in that block are preserved. 
If it begins with ">" (greater-than, U+3E ), consecutive newlines are folded into a single space. The following example shows both kinds of block content as the values for keys good-bye and anyway . Using \n (backslash-n) to indicate newline, the values for keys good-bye and anyway are, respectively: Note that the newlines are preserved in the good-bye value but folded into a single space in the anyway value. Also, each value ends with a single newline, even though there are two blank lines between "fourth and last" and "anyway", and no blank lines between "in life" and "lastly". 5.7. Compact Representation Another, more compact, way to represent lists and mappings is to begin with a start character, finish with an end character, and separate elements with "," (comma, U+2C ). For lists, the start and end characters are "[" (left square brace, U+5B ) and "]" (right square brace, U+5D ), respectively. In the following example, the values in the mapping are identical: Note: The double-quotes around the second list element of the second value; they prevent the comma from being misinterpreted as an element separator. (If we remove them, the list would have three elements: "echo", "hello" and "world!".) For mappings, the start and end characters are "{" (left curly brace, U+7B ) and "}" (right curly brace, U+7D ), respectively. In the following example, the values of both one and two are identical: 5.8. Additional Information There is much more to YAML, not described in this topic: directives, complex mapping keys, flow styles, references, aliases, and tags. For detailed information, see the official YAML site , specifically the latest ( version 1.2 at time of writing) specification. | [
"HEAD --- BODY",
"- top shelf - middle age - bottom dweller # stability is important",
"- top - middle - highish middle - lowish middle - bottom",
"- top - middle - highish middle - lowish middle - bottom",
"square: 4 triangle: 3 pentagon: 5",
"square: 4 triangle:3 # invalid key/value separation square: 5 # repeated key",
"square: sides: 4 perimeter: sides * side-length triangle: sides: 3 perimeter: see square pentagon: 5",
"top: middle: - highish middle - lowish middle bottom:",
"\"true\" : \"1\" \"key the second (which has a \\\":\\\" in it)\" : \" second value \"",
"hello: world good-bye: | first line third fourth and last anyway: > nothing is guaranteed in life lastly:",
"first line\\n\\nthird\\nfourth and last\\n nothing is guaranteed in life\\n",
"one: - echo - hello, world! two: [ echo, \"hello, world!\" ]",
"one: roses: red violets: blue two: { roses: red, violets: blue }"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_atomic_host/7/html/getting_started_with_kubernetes/yaml_in_a_nutshell |
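A short shell sketch that writes the shape examples above to a file and round-trips them through a parser, exercising nested mappings, block scalars, and flow style. It assumes python3 with the PyYAML package is available, which is not part of the chapter itself.

#!/usr/bin/env bash
# Sketch: exercise the mapping, list, and block-scalar forms described above.
set -euo pipefail

cat > shapes.yaml <<'EOF'
square:
  sides: 4
triangle:
  sides: 3
pentagon: 5
notes: |
  first line

  third line
flow: { roses: red, violets: blue }
EOF

# Parse and print the resulting structure (assumes PyYAML is installed).
python3 -c 'import yaml, pprint; pprint.pprint(yaml.safe_load(open("shapes.yaml")))'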
Chapter 2. ConsoleCLIDownload [console.openshift.io/v1] | Chapter 2. ConsoleCLIDownload [console.openshift.io/v1] Description ConsoleCLIDownload is an extension for configuring openshift web console command line interface (CLI) downloads. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object Required spec 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ConsoleCLIDownloadSpec is the desired cli download configuration. 2.1.1. .spec Description ConsoleCLIDownloadSpec is the desired cli download configuration. Type object Required description displayName links Property Type Description description string description is the description of the CLI download (can include markdown). displayName string displayName is the display name of the CLI download. links array links is a list of objects that provide CLI download link details. links[] object 2.1.2. .spec.links Description links is a list of objects that provide CLI download link details. Type array 2.1.3. .spec.links[] Description Type object Required href Property Type Description href string href is the absolute secure URL for the link (must use https) text string text is the display text for the link 2.2. API endpoints The following API endpoints are available: /apis/console.openshift.io/v1/consoleclidownloads DELETE : delete collection of ConsoleCLIDownload GET : list objects of kind ConsoleCLIDownload POST : create a ConsoleCLIDownload /apis/console.openshift.io/v1/consoleclidownloads/{name} DELETE : delete a ConsoleCLIDownload GET : read the specified ConsoleCLIDownload PATCH : partially update the specified ConsoleCLIDownload PUT : replace the specified ConsoleCLIDownload /apis/console.openshift.io/v1/consoleclidownloads/{name}/status GET : read status of the specified ConsoleCLIDownload PATCH : partially update status of the specified ConsoleCLIDownload PUT : replace status of the specified ConsoleCLIDownload 2.2.1. /apis/console.openshift.io/v1/consoleclidownloads HTTP method DELETE Description delete collection of ConsoleCLIDownload Table 2.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ConsoleCLIDownload Table 2.2. HTTP responses HTTP code Reponse body 200 - OK ConsoleCLIDownloadList schema 401 - Unauthorized Empty HTTP method POST Description create a ConsoleCLIDownload Table 2.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.4. Body parameters Parameter Type Description body ConsoleCLIDownload schema Table 2.5. HTTP responses HTTP code Reponse body 200 - OK ConsoleCLIDownload schema 201 - Created ConsoleCLIDownload schema 202 - Accepted ConsoleCLIDownload schema 401 - Unauthorized Empty 2.2.2. /apis/console.openshift.io/v1/consoleclidownloads/{name} Table 2.6. Global path parameters Parameter Type Description name string name of the ConsoleCLIDownload HTTP method DELETE Description delete a ConsoleCLIDownload Table 2.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 2.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ConsoleCLIDownload Table 2.9. HTTP responses HTTP code Reponse body 200 - OK ConsoleCLIDownload schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ConsoleCLIDownload Table 2.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.11. 
HTTP responses HTTP code Reponse body 200 - OK ConsoleCLIDownload schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ConsoleCLIDownload Table 2.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.13. Body parameters Parameter Type Description body ConsoleCLIDownload schema Table 2.14. HTTP responses HTTP code Reponse body 200 - OK ConsoleCLIDownload schema 201 - Created ConsoleCLIDownload schema 401 - Unauthorized Empty 2.2.3. /apis/console.openshift.io/v1/consoleclidownloads/{name}/status Table 2.15. Global path parameters Parameter Type Description name string name of the ConsoleCLIDownload HTTP method GET Description read status of the specified ConsoleCLIDownload Table 2.16. HTTP responses HTTP code Reponse body 200 - OK ConsoleCLIDownload schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ConsoleCLIDownload Table 2.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.18. 
HTTP responses HTTP code Reponse body 200 - OK ConsoleCLIDownload schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ConsoleCLIDownload Table 2.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.20. Body parameters Parameter Type Description body ConsoleCLIDownload schema Table 2.21. HTTP responses HTTP code Reponse body 200 - OK ConsoleCLIDownload schema 201 - Created ConsoleCLIDownload schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/console_apis/consoleclidownload-console-openshift-io-v1 |
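A hedged example of creating a ConsoleCLIDownload through the oc client rather than the raw REST endpoints listed above; the resource name, link text, and download URL are illustrative placeholders, and cluster-admin access is assumed. It uses only the documented spec fields: description, displayName, and links with href and text.

#!/usr/bin/env bash
# Sketch: create a ConsoleCLIDownload and verify it. Names and URLs are placeholders.
set -euo pipefail

oc apply -f - <<'EOF'
apiVersion: console.openshift.io/v1
kind: ConsoleCLIDownload
metadata:
  name: example-cli
spec:
  displayName: example-cli
  description: |
    Example command line tool download (markdown is allowed here).
  links:
    - href: https://downloads.example.com/example-cli/latest/linux-amd64.tar.gz
      text: Download example-cli for Linux
EOF

oc get consoleclidownloads example-cli -o yaml   # confirm the object exists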
3.8. Configuring atime Updates | 3.8. Configuring atime Updates Each file inode and directory inode has three time stamps associated with it: ctime - The last time the inode status was changed mtime - The last time the file (or directory) data was modified atime - The last time the file (or directory) data was accessed If atime updates are enabled as they are by default on GFS2 and other Linux file systems then every time a file is read, its inode needs to be updated. Because few applications use the information provided by atime , those updates can require a significant amount of unnecessary write traffic and file locking traffic. That traffic can degrade performance; therefore, it may be preferable to turn off or reduce the frequency of atime updates. Two methods of reducing the effects of atime updating are available: Mount with relatime (relative atime), which updates the atime if the atime update is older than the mtime or ctime update. Mount with noatime , which disables atime updates on that file system. 3.8.1. Mount with relatime The relatime (relative atime) Linux mount option can be specified when the file system is mounted. This specifies that the atime is updated if the atime update is older than the mtime or ctime update. Usage BlockDevice Specifies the block device where the GFS2 file system resides. MountPoint Specifies the directory where the GFS2 file system should be mounted. Example In this example, the GFS2 file system resides on /dev/vg01/lvol0 and is mounted on directory /mygfs2 . The atime updates take place only if the atime update is older than the mtime or ctime update. 3.8.2. Mount with noatime The noatime Linux mount option can be specified when the file system is mounted, which disables atime updates on that file system. Usage BlockDevice Specifies the block device where the GFS2 file system resides. MountPoint Specifies the directory where the GFS2 file system should be mounted. Example In this example, the GFS2 file system resides on /dev/vg01/lvol0 and is mounted on directory /mygfs2 with atime updates turned off. | [
"mount BlockDevice MountPoint -o relatime",
"mount /dev/vg01/lvol0 /mygfs2 -o relatime",
"mount BlockDevice MountPoint -o noatime",
"mount /dev/vg01/lvol0 /mygfs2 -o noatime"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/global_file_system_2/s1-manage-atimeconf |
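A small sketch showing how the noatime option above might be applied and made persistent. The device and mount point are the example values from this section, and the /etc/fstab entry is an assumption about how you choose to persist the option; adjust both for your cluster.

#!/usr/bin/env bash
# Sketch: mount the example GFS2 file system without atime updates and record
# the choice in /etc/fstab so it survives a reboot.
set -euo pipefail

BLOCKDEV=/dev/vg01/lvol0
MOUNTPOINT=/mygfs2

mkdir -p "$MOUNTPOINT"
mount "$BLOCKDEV" "$MOUNTPOINT" -o noatime      # or -o relatime

# Persist the option (append only if an entry is not already present).
grep -q "^$BLOCKDEV " /etc/fstab || \
  echo "$BLOCKDEV $MOUNTPOINT gfs2 noatime 0 0" >> /etc/fstab

mount | grep "$MOUNTPOINT"                      # confirm the active mount options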
Deploying OpenShift Data Foundation using Amazon Web Services | Deploying OpenShift Data Foundation using Amazon Web Services Red Hat OpenShift Data Foundation 4.18 Instructions for deploying OpenShift Data Foundation using Amazon Web Services for cloud storage Red Hat Storage Documentation Team Abstract Read this document for instructions about how to install Red Hat OpenShift Data Foundation using Red Hat OpenShift Container Platform on Amazon Web Services. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Jira ticket: Log in to the Jira . Click Create in the top navigation bar Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Select Documentation in the Components field. Click Create at the bottom of the dialogue. Preface Red Hat OpenShift Data Foundation supports deployment on existing Red Hat OpenShift Container Platform (RHOCP) AWS clusters in connected or disconnected environments along with out-of-the-box support for proxy environments. Note Only internal OpenShift Data Foundation clusters are supported on AWS. See Planning your deployment and Preparing to deploy OpenShift Data Foundation for more information about deployment requirements. To deploy OpenShift Data Foundation, start with the requirements in Preparing to deploy OpenShift Data Foundation chapter and then follow the deployment process for your environment based on your requirement: Deploy using dynamic storage devices Deploy standalone Multicloud Object Gateway component Chapter 1. Preparing to deploy OpenShift Data Foundation Deploying OpenShift Data Foundation on OpenShift Container Platform using dynamic storage devices provides you with the option to create internal cluster resources. Before you begin the deployment of Red Hat OpenShift Data Foundation, follow these steps: Optional: If you want to enable cluster-wide encryption using the external Key Management System (KMS) HashiCorp Vault, follow these steps: Ensure that you have a valid Red Hat OpenShift Data Foundation Advanced subscription. To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . When the Token authentication method is selected for encryption then refer to Enabling cluster-wide encryption with the Token authentication using KMS . When the Kubernetes authentication method is selected for encryption then refer to Enabling cluster-wide encryption with the Kubernetes authentication using KMS . Ensure that you are using signed certificates on your Vault servers. Optional: If you want to enable cluster-wide encryption using the external Key Management System (KMS) Thales CipherTrust Manager, you must first enable the Key Management Interoperability Protocol (KMIP) and use signed certificates on your server. Create a KMIP client if one does not exist. From the user interface, select KMIP -> Client Profile -> Add Profile . 
Add the CipherTrust username to the Common Name field during profile creation. Create a token by navigating to KMIP -> Registration Token -> New Registration Token . Copy the token for the step. To register the client, navigate to KMIP -> Registered Clients -> Add Client . Specify the Name . Paste the Registration Token from the step, then click Save . Download the Private Key and Client Certificate by clicking Save Private Key and Save Certificate respectively. To create a new KMIP interface, navigate to Admin Settings -> Interfaces -> Add Interface . Select KMIP Key Management Interoperability Protocol and click . Select a free Port . Select Network Interface as all . Select Interface Mode as TLS, verify client cert, user name taken from client cert, auth request is optional . (Optional) You can enable hard delete to delete both metadata and material when the key is deleted. It is disabled by default. Select the certificate authority (CA) to be used, and click Save . To get the server CA certificate, click on the Action menu (...) on the right of the newly created interface, and click Download Certificate . Optional: If StorageClass encryption is to be enabled during deployment, create a key to act as the Key Encryption Key (KEK): Navigate to Keys -> Add Key . Enter Key Name . Set the Algorithm and Size to AES and 256 respectively. Enable Create a key in Pre-Active state and set the date and time for activation. Ensure that Encrypt and Decrypt are enabled under Key Usage . Copy the ID of the newly created Key to be used as the Unique Identifier during deployment. Minimum starting node requirements An OpenShift Data Foundation cluster is deployed with minimum configuration when the standard deployment resource requirement is not met. See Resource requirements section in the Planning guide . Disaster recovery requirements Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites to successfully implement a disaster recovery solution: A valid Red Hat OpenShift Data Foundation Advanced subscription A valid Red Hat Advanced Cluster Management for Kubernetes subscription To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . For detailed requirements, see Configuring OpenShift Data Foundation Disaster Recovery for OpenShift Workloads guide, and Requirements and recommendations section of the Install guide in Red Hat Advanced Cluster Management for Kubernetes documentation. Chapter 2. Deploy OpenShift Data Foundation using dynamic storage devices You can deploy OpenShift Data Foundation on OpenShift Container Platform using dynamic storage devices provided by Amazon Web Services (AWS) EBS (type, gp2-csi or gp3-csi ) that provides you with the option to create internal cluster resources. This results in the internal provisioning of the base services, which helps to make additional storage classes available to applications. Also, it is possible to deploy only the Multicloud Object Gateway (MCG) component with OpenShift Data Foundation. For more information, see Deploy standalone Multicloud Object Gateway . Note Only internal OpenShift Data Foundation clusters are supported on AWS. See Planning your deployment for more information about deployment requirements. 
Also, ensure that you have addressed the requirements in Preparing to deploy OpenShift Data Foundation chapter before proceeding with the below steps for deploying using dynamic storage devices: Install the Red Hat OpenShift Data Foundation Operator . Create the OpenShift Data Foundation Cluster . 2.1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators -> OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.18 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 2.2. Enabling cluster-wide encryption with KMS using the Token authentication method You can enable the key value backend path and policy in the vault for token authentication. Prerequisites Administrator access to the vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . Carefully, select a unique path name as the backend path that follows the naming convention since you cannot change it later. Procedure Enable the Key/Value (KV) backend path in the vault. 
For vault KV secret engine API, version 1: For vault KV secret engine API, version 2: Create a policy to restrict the users to perform a write or delete operation on the secret: Create a token that matches the above policy: 2.3. Enabling cluster-wide encryption with KMS using the Kubernetes authentication method You can enable the Kubernetes authentication method for cluster-wide encryption using the Key Management System (KMS). Prerequisites Administrator access to Vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . The OpenShift Data Foundation operator must be installed from the Operator Hub. Select a unique path name as the backend path that follows the naming convention carefully. You cannot change this path name later. Procedure Create a service account: where, <serviceaccount_name> specifies the name of the service account. For example: Create clusterrolebindings and clusterroles : For example: Create a secret for the serviceaccount token and CA certificate. where, <serviceaccount_name> is the service account created in the earlier step. Get the token and the CA certificate from the secret. Retrieve the OCP cluster endpoint. Fetch the service account issuer: Use the information collected in the step to setup the Kubernetes authentication method in Vault: Important To configure the Kubernetes authentication method in Vault when the issuer is empty: Enable the Key/Value (KV) backend path in Vault. For Vault KV secret engine API, version 1: For Vault KV secret engine API, version 2: Create a policy to restrict the users to perform a write or delete operation on the secret: Generate the roles: The role odf-rook-ceph-op is later used while you configure the KMS connection details during the creation of the storage system. 2.3.1. Enabling and disabling key rotation when using KMS Security common practices require periodic encryption of key rotation. You can enable or disable key rotation when using KMS. 2.3.1.1. Enabling key rotation To enable key rotation, add the annotation keyrotation.csiaddons.openshift.io/schedule: <value> to PersistentVolumeClaims , Namespace , or StorageClass (in the decreasing order of precedence). <value> can be @hourly , @daily , @weekly , @monthly , or @yearly . If <value> is empty, the default is @weekly . The below examples use @weekly . Important Key rotation is only supported for RBD backed volumes. Annotating Namespace Annotating StorageClass Annotating PersistentVolumeClaims 2.3.1.2. Disabling key rotation You can disable key rotation for the following: All the persistent volume claims (PVCs) of storage class A specific PVC Disabling key rotation for all PVCs of a storage class To disable key rotation for all PVCs, update the annotation of the storage class: Disabling key rotation for a specific persistent volume claim Identify the EncryptionKeyRotationCronJob CR for the PVC you want to disable key rotation on: Where <PVC_NAME> is the name of the PVC that you want to disable. Apply the following to the EncryptionKeyRotationCronJob CR from the step to disable the key rotation: Update the csiaddons.openshift.io/state annotation from managed to unmanaged : Where <encryptionkeyrotationcronjob_name> is the name of the EncryptionKeyRotationCronJob CR. Add suspend: true under the spec field: Save and exit. The key rotation will be disabled for the PVC. 2.4. 
Creating OpenShift Data Foundation cluster Create an OpenShift Data Foundation cluster after you install the OpenShift Data Foundation operator. Prerequisites The OpenShift Data Foundation operator must be installed from the Operator Hub. For more information, see Installing OpenShift Data Foundation Operator . Procedure In the OpenShift Web Console, click Operators -> Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click on the OpenShift Data Foundation operator, and then click Create StorageSystem . In the Backing storage page, select the following: Select Full Deployment for the Deployment type option. Select the Use an existing StorageClass option. Select the Storage Class . As of OpenShift Data Foundation version 4.12, you can choose gp2-csi or gp3-csi as the storage class. Optional: Select Use external PostgreSQL checkbox to use an external PostgreSQL [Technology preview] . This provides high availability solution for Multicloud Object Gateway where the PostgreSQL pod is a single point of failure. Provide the following connection details: Username Password Server name and Port Database name Select Enable TLS/SSL checkbox to enable encryption for the Postgres server. Click . In the Capacity and nodes page, provide the necessary information: Select a value for Requested Capacity from the dropdown list. It is set to 2 TiB by default. Note Once you select the initial storage capacity, cluster expansion is performed only using the selected usable capacity (three times of raw storage). In the Select Nodes section, select at least three available nodes. In the Configure performance section, select one of the following performance profiles: Lean Use this in a resource constrained environment with minimum resources that are lower than the recommended. This profile minimizes resource consumption by allocating fewer CPUs and less memory. Balanced (default) Use this when recommended resources are available. This profile provides a balance between resource consumption and performance for diverse workloads. Performance Use this in an environment with sufficient resources to get the best performance. This profile is tailored for high performance by allocating ample memory and CPUs to ensure optimal execution of demanding workloads. Note You have the option to configure the performance profile even after the deployment using the Configure performance option from the options menu of the StorageSystems tab. Important Before selecting a resource profile, make sure to check the current availability of resources within the cluster. Opting for a higher resource profile in a cluster with insufficient resources might lead to installation failures. For more information about resource requirements, see Resource requirement for performance profiles . Optional: Select the Taint nodes checkbox to dedicate the selected nodes for OpenShift Data Foundation. For cloud platforms with multiple availability zones, ensure that the Nodes are spread across different Locations/availability zones. If the nodes selected do not match the OpenShift Data Foundation cluster requirements of an aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster is deployed. For minimum starting node requirements, see the Resource requirements section in the Planning guide. Click . Optional: In the Security and network page, configure the following based on your requirements: To enable encryption, select Enable data encryption for block and file storage . 
Select either one or both the encryption levels: Cluster-wide encryption Encrypts the entire cluster (block and file). StorageClass encryption Creates encrypted persistent volume (block only) using encryption enabled storage class. Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, select one of the following providers and provide the necessary details: Vault Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . Note In case you need to enable key rotation for Vault KMS, run the following command in the OpenShift web console after the storage cluster is created: Thales CipherTrust Manager (using KMIP) Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . To enable in-transit encryption, select In-transit encryption . Select a Network . Click . In the Review and create page, review the configuration details. To modify any configuration settings, click Back . Click Create StorageSystem . Note When your deployment has five or more nodes, racks, or rooms, and when there are five or more number of failure domains present in the deployment, you can configure Ceph monitor counts based on the number of racks or zones. An alert is displayed in the notification panel or Alert Center of the OpenShift Web Console to indicate the option to increase the number of Ceph monitor counts. You can use the Configure option in the alert to configure the Ceph monitor counts. For more information, see Resolving low Ceph monitor count alert . Verification steps To verify the final Status of the installed storage cluster: In the OpenShift Web Console, navigate to Installed Operators -> OpenShift Data Foundation -> Storage System -> ocs-storagecluster-storagesystem -> Resources . 
Verify that Status of StorageCluster is Ready and has a green tick mark to it. To verify that all the components for OpenShift Data Foundation are successfully installed, see Verifying OpenShift Data Foundation deployment . Additional resources To enable Overprovision Control alerts, refer to Alerts in Monitoring guide. 2.5. Verifying OpenShift Data Foundation deployment To verify that OpenShift Data Foundation is deployed correctly: Verify the state of the pods . Verify that the OpenShift Data Foundation cluster is healthy . Verify that the Multicloud Object Gateway is healthy . Verify that the OpenShift Data Foundation specific storage classes exist . 2.5.1. Verifying the state of the pods Procedure Click Workloads -> Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. For more information on the expected number of pods for each component and how it varies depending on the number of nodes, see the following table: Set filter for Running and Completed pods to verify that the following pods are in Running and Completed state: Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) ux-backend-server- * (1 pod on any storage node) * ocs-client-operator -* (1 pod on any storage node) ocs-client-operator-console -* (1 pod on any storage node) ocs-provider-server -* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) MON rook-ceph-mon-* (3 pods distributed across storage nodes) MGR rook-ceph-mgr-* (1 pod on any storage node) MDS rook-ceph-mds-ocs-storagecluster-cephfilesystem-* (2 pods distributed across storage nodes) CSI cephfs csi-cephfsplugin-* (1 pod on each storage node) csi-cephfsplugin-provisioner-* (2 pods distributed across storage nodes) rbd csi-rbdplugin-* (1 pod on each storage node) csi-rbdplugin-provisioner-* (2 pods distributed across storage nodes) rook-ceph-crashcollector rook-ceph-crashcollector-* (1 pod on each storage node) OSD rook-ceph-osd-* (1 pod for each device) rook-ceph-osd-prepare-ocs-deviceset-* (1 pod for each device) ceph-csi-operator ceph-csi-controller-manager-* (1 pod for each device) 2.5.2. Verifying the OpenShift Data Foundation cluster is healthy Procedure In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Block and File tab, verify that the Storage Cluster has a green tick. In the Details card, verify that the cluster information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation . 2.5.3. Verifying the Multicloud Object Gateway is healthy Procedure In the OpenShift Web Console, click Storage -> Data Foundation . 
In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the object service dashboard, see Monitoring OpenShift Data Foundation . Important The Multicloud Object Gateway only has a single copy of the database (NooBaa DB). This means if NooBaa DB PVC gets corrupted and we are unable to recover it, can result in total data loss of applicative data residing on the Multicloud Object Gateway. Because of this, Red Hat recommends taking a backup of NooBaa DB PVC regularly. If NooBaa DB fails and cannot be recovered, then you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgabase article . 2.5.4. Verifying that the specific storage classes exist Procedure Click Storage -> Storage Classes from the left pane of the OpenShift Web Console. Verify that the following storage classes are created with the OpenShift Data Foundation cluster creation: ocs-storagecluster-ceph-rbd ocs-storagecluster-cephfs openshift-storage.noobaa.io Chapter 3. Deploy standalone Multicloud Object Gateway Deploying only the Multicloud Object Gateway component with OpenShift Data Foundation provides the flexibility in deployment and helps to reduce the resource consumption. After deploying the component, you can create and manage buckets using MCG object browser. For more information, see Creating and managing buckets using MCG object browser . Use this section to deploy only the standalone Multicloud Object Gateway component, which involves the following steps: Installing Red Hat OpenShift Data Foundation Operator Creating standalone Multicloud Object Gateway Important The Multicloud Object Gateway only has a single copy of the database (NooBaa DB). This means if NooBaa DB PVC gets corrupted and we are unable to recover it, can result in total data loss of applicative data residing on the Multicloud Object Gateway. Because of this, Red Hat recommends taking a backup of NooBaa DB PVC regularly. If NooBaa DB fails and cannot be recovered, then you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgabase article . 3.1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. 
For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators -> OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.18 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 3.2. Creating a standalone Multicloud Object Gateway You can create only the standalone Multicloud Object Gateway (MCG) component while deploying OpenShift Data Foundation. After you create the MCG component, you can create and manage buckets using the MCG object browser. For more information, see Creating and managing buckets using MCG object browser . Prerequisites Ensure that the OpenShift Data Foundation Operator is installed. Procedure In the OpenShift Web Console, click Operators -> Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, select the following: Select Multicloud Object Gateway for Deployment type . Select the Use an existing StorageClass option. Click . Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, either select Vault or Thales CipherTrust Manager (using KMIP) . If you selected Vault , go to the step. If you selected Thales CipherTrust Manager (using KMIP) , go to step iii. Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. 
Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below: Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Select a Network . Click . In the Review and create page, review the configuration details: To modify any configuration settings, click Back . Click Create StorageSystem . Verification steps Verifying that the OpenShift Data Foundation cluster is healthy In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. Verifying the state of the pods Click Workloads -> Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list and verify that the following pods are in Running state. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) Chapter 4. Creating an AWS-STS-backed backingstore Amazon Web Services Security Token Service (AWS STS) is an AWS feature and it is a way to authenticate using short-lived credentials. Creating an AWS-STS-backed backingstore involves the following: Creating an AWS role using a script, which helps to get the temporary security credentials for the role session Installing OpenShift Data Foundation operator in AWS STS OpenShift cluster Creating backingstore in AWS STS OpenShift cluster 4.1. Creating an AWS role using a script You need to create a role and pass the role Amazon resource name (ARN) while installing the OpenShift Data Foundation operator. 
Prerequisites Configure Red Hat OpenShift Container Platform cluster with AWS STS. For more information, see Configuring an AWS cluster to use short-term credentials . Procedure Create an AWS role using a script that matches OpenID Connect (OIDC) configuration for Multicloud Object Gateway (MCG) on OpenShift Data Foundation. The following example shows the details that are required to create the role: where 123456789123 is the AWS account ID mybucket is the bucket name (using public bucket configuration) us-east-2 is the AWS region openshift-storage is the namespace name Sample script 4.1.1. Installing OpenShift Data Foundation operator in AWS STS OpenShift cluster Prerequisites Configure Red Hat OpenShift Container Platform cluster with AWS STS. For more information, see Configuring an AWS cluster to use short-term credentials . Create an AWS role using a script that matches OpenID Connect (OIDC) configuration. For more information, see Creating an AWS role using a script . Procedure Install OpenShift Data Foundation Operator from the Operator Hub. During the installation, add the role ARN in the ARN Details field. Make sure that the Update approval field is set to Manual . 4.1.2. Creating a new AWS STS backingstore Prerequisites Configure Red Hat OpenShift Container Platform cluster with AWS STS. For more information, see Configuring an AWS cluster to use short-term credentials . Create an AWS role using a script that matches OpenID Connect (OIDC) configuration. For more information, see Creating an AWS role using a script . Install OpenShift Data Foundation Operator. For more information, see Installing OpenShift Data Foundation operator in AWS STS OpenShift cluster . Procedure Install Multicloud Object Gateway (MCG). It is installed with the default backingstore by using the short-lived credentials. After the MCG system is ready, you can create more backingstores of the type aws-sts-s3 using the following MCG command line interface command (a filled-in example is shown after the command listing below): where backingstore-name Name of the backingstore aws-sts-role-arn The AWS STS role ARN to assume region The AWS bucket region target-bucket The target bucket name on the cloud Chapter 5. View OpenShift Data Foundation Topology The topology shows the mapped visualization of the OpenShift Data Foundation storage cluster at various abstraction levels and also lets you interact with these layers. The view also shows how the various elements compose the Storage cluster altogether. Procedure On the OpenShift Web Console, navigate to Storage -> Data Foundation -> Topology . The view shows the storage cluster and the zones inside it. You can see the nodes depicted by circular entities within the zones, which are indicated by dotted lines. The label of each item or resource contains basic information such as status and health or indication for alerts. Choose a node to view node details on the right-hand panel. You can also access resources or deployments within a node by clicking on the search/preview decorator icon. To view deployment details Click the preview decorator on a node. A modal window appears above the node that displays all of the deployments associated with that node along with their statuses. Click the Back to main view button in the modal's upper left corner to close and return to the previous view. Select a specific deployment to see more information about it. All relevant data is shown in the side panel. Click the Resources tab to view the pods information.
This tab provides a more detailed view of any reported problems, with a level of granularity that aids troubleshooting. Click the pod links to view the pod information page on OpenShift Container Platform. The link opens in a new window. Chapter 6. Uninstalling OpenShift Data Foundation 6.1. Uninstalling OpenShift Data Foundation in Internal mode To uninstall OpenShift Data Foundation in Internal mode, refer to the knowledgebase article on Uninstalling OpenShift Data Foundation . | [
"oc annotate namespace openshift-storage openshift.io/node-selector=",
"vault secrets enable -path=odf kv",
"vault secrets enable -path=odf kv-v2",
"echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -",
"vault token create -policy=odf -format json",
"oc -n openshift-storage create serviceaccount <serviceaccount_name>",
"oc -n openshift-storage create serviceaccount odf-vault-auth",
"oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:_<serviceaccount_name>_",
"oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:odf-vault-auth",
"cat <<EOF | oc create -f - apiVersion: v1 kind: Secret metadata: name: odf-vault-auth-token namespace: openshift-storage annotations: kubernetes.io/service-account.name: <serviceaccount_name> type: kubernetes.io/service-account-token data: {} EOF",
"SA_JWT_TOKEN=USD(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath=\"{.data['token']}\" | base64 --decode; echo) SA_CA_CRT=USD(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath=\"{.data['ca\\.crt']}\" | base64 --decode; echo)",
"OCP_HOST=USD(oc config view --minify --flatten -o jsonpath=\"{.clusters[0].cluster.server}\")",
"oc proxy & proxy_pid=USD! issuer=\"USD( curl --silent http://127.0.0.1:8001/.well-known/openid-configuration | jq -r .issuer)\" kill USDproxy_pid",
"vault auth enable kubernetes",
"vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\" issuer=\"USDissuer\"",
"vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\"",
"vault secrets enable -path=odf kv",
"vault secrets enable -path=odf kv-v2",
"echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -",
"vault write auth/kubernetes/role/odf-rook-ceph-op bound_service_account_names=rook-ceph-system,rook-ceph-osd,noobaa bound_service_account_namespaces=openshift-storage policies=odf ttl=1440h",
"vault write auth/kubernetes/role/odf-rook-ceph-osd bound_service_account_names=rook-ceph-osd bound_service_account_namespaces=openshift-storage policies=odf ttl=1440h",
"oc get namespace default NAME STATUS AGE default Active 5d2h",
"oc annotate namespace default \"keyrotation.csiaddons.openshift.io/schedule=@weekly\" namespace/default annotated",
"oc get storageclass rbd-sc NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE rbd-sc rbd.csi.ceph.com Delete Immediate true 5d2h",
"oc annotate storageclass rbd-sc \"keyrotation.csiaddons.openshift.io/schedule=@weekly\" storageclass.storage.k8s.io/rbd-sc annotated",
"oc get pvc data-pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE data-pvc Bound pvc-f37b8582-4b04-4676-88dd-e1b95c6abf74 1Gi RWO default 20h",
"oc annotate pvc data-pvc \"keyrotation.csiaddons.openshift.io/schedule=@weekly\" persistentvolumeclaim/data-pvc annotated",
"oc get encryptionkeyrotationcronjobs.csiaddons.openshift.io NAME SCHEDULE SUSPEND ACTIVE LASTSCHEDULE AGE data-pvc-1642663516 @weekly 3s",
"oc annotate pvc data-pvc \"keyrotation.csiaddons.openshift.io/schedule=*/1 * * * *\" --overwrite=true persistentvolumeclaim/data-pvc annotated",
"oc get encryptionkeyrotationcronjobs.csiaddons.openshift.io NAME SCHEDULE SUSPEND ACTIVE LASTSCHEDULE AGE data-pvc-1642664617 */1 * * * * 3s",
"oc get storageclass rbd-sc NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE rbd-sc rbd.csi.ceph.com Delete Immediate true 5d2h",
"oc annotate storageclass rbd-sc \"keyrotation.csiaddons.openshift.io/enable: false\" storageclass.storage.k8s.io/rbd-sc annotated",
"oc get encryptionkeyrotationcronjob -o jsonpath='{range .items[?(@.spec.jobTemplate.spec.target.persistentVolumeClaim==\"<PVC_NAME>\")]}{.metadata.name}{\"\\n\"}{end}'",
"oc annotate encryptionkeyrotationcronjob <encryptionkeyrotationcronjob_name> \"csiaddons.openshift.io/state=unmanaged\" --overwrite=true",
"oc patch encryptionkeyrotationcronjob <encryptionkeyrotationcronjob_name> -p '{\"spec\": {\"suspend\": true}}' --type=merge.",
"patch storagecluster ocs-storagecluster -n openshift-storage --type=json -p '[{\"op\": \"add\", \"path\":\"/spec/encryption/keyRotation/enable\", \"value\": true}]'",
"oc annotate namespace openshift-storage openshift.io/node-selector=",
"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Federated\": \"arn:aws:iam::123456789123:oidc-provider/mybucket-oidc.s3.us-east-2.amazonaws.com\" }, \"Action\": \"sts:AssumeRoleWithWebIdentity\", \"Condition\": { \"StringEquals\": { \"mybucket-oidc.s3.us-east-2.amazonaws.com:sub\": [ \"system:serviceaccount:openshift-storage:noobaa\", \"system:serviceaccount:openshift-storage:noobaa-core\", \"system:serviceaccount:openshift-storage:noobaa-endpoint\" ] } } } ] }",
"#!/bin/bash set -x This is a sample script to help you deploy MCG on AWS STS cluster. This script shows how to create role-policy and then create the role in AWS. For more information see: https://docs.openshift.com/rosa/authentication/assuming-an-aws-iam-role-for-a-service-account.html WARNING: This is a sample script. You need to adjust the variables based on your requirement. Variables : user variables - REPLACE these variables with your values: ROLE_NAME=\"<role-name>\" # role name that you pick in your AWS account NAMESPACE=\"<namespace>\" # namespace name where MCG is running. For OpenShift Data Foundation, it is openshift-storage. MCG variables SERVICE_ACCOUNT_NAME_1=\"noobaa\" # The service account name of deployment operator SERVICE_ACCOUNT_NAME_2=\"noobaa-endpoint\" # The service account name of deployment endpoint SERVICE_ACCOUNT_NAME_3=\"noobaa-core\" # The service account name of statefulset core AWS variables Make sure these values are not empty (AWS_ACCOUNT_ID, OIDC_PROVIDER) AWS_ACCOUNT_ID is your AWS account number AWS_ACCOUNT_ID=USD(aws sts get-caller-identity --query \"Account\" --output text) If you want to create the role before using the cluster, replace this field too. The OIDC provider is in the structure: 1) <OIDC-bucket>.s3.<aws-region>.amazonaws.com. for OIDC bucket configurations are in an S3 public bucket 2) `<characters>.cloudfront.net` for OIDC bucket configurations in an S3 private bucket with a public CloudFront distribution URL OIDC_PROVIDER=USD(oc get authentication cluster -ojson | jq -r .spec.serviceAccountIssuer | sed -e \"s/^https:\\/\\///\") the permission (S3 full access) POLICY_ARN_STRINGS=\"arn:aws:iam::aws:policy/AmazonS3FullAccess\" Creating the role (with AWS command line interface) read -r -d '' TRUST_RELATIONSHIP <<EOF { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Federated\": \"arn:aws:iam::USD{AWS_ACCOUNT_ID}:oidc-provider/USD{OIDC_PROVIDER}\" }, \"Action\": \"sts:AssumeRoleWithWebIdentity\", \"Condition\": { \"StringEquals\": { \"USD{OIDC_PROVIDER}:sub\": [ \"system:serviceaccount:USD{NAMESPACE}:USD{SERVICE_ACCOUNT_NAME_1}\", \"system:serviceaccount:USD{NAMESPACE}:USD{SERVICE_ACCOUNT_NAME_2}\", \"system:serviceaccount:USD{NAMESPACE}:USD{SERVICE_ACCOUNT_NAME_3}\" ] } } } ] } EOF echo \"USD{TRUST_RELATIONSHIP}\" > trust.json aws iam create-role --role-name \"USDROLE_NAME\" --assume-role-policy-document file://trust.json --description \"role for demo\" while IFS= read -r POLICY_ARN; do echo -n \"Attaching USDPOLICY_ARN ... \" aws iam attach-role-policy --role-name \"USDROLE_NAME\" --policy-arn \"USD{POLICY_ARN}\" echo \"ok.\" done <<< \"USDPOLICY_ARN_STRINGS\"",
"noobaa backingstore create aws-sts-s3 <backingstore-name> --aws-sts-arn=<aws-sts-role-arn> --region=<region> --target-bucket=<target-bucket>"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html-single/deploying_openshift_data_foundation_using_amazon_web_services/index |
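For illustration only, a filled-in run of the aws-sts-s3 backingstore command listed above might look like the following. The backingstore name and IAM role name are invented for this sketch, and the account ID, region, and bucket simply reuse the example values from the role-creation section rather than values from a real environment:

noobaa backingstore create aws-sts-s3 sts-backingstore --aws-sts-arn=arn:aws:iam::123456789123:role/mcg-sts-role --region=us-east-2 --target-bucket=mybucket -n openshift-storage

After creation, the backingstore can be checked with noobaa backingstore status sts-backingstore -n openshift-storage or oc get backingstore -n openshift-storage, which should eventually report the backingstore as Ready.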
Appendix A. Business Central system properties | Appendix A. Business Central system properties The Business Central system properties listed in this section are passed to standalone*.xml files. Git directory Use the following properties to set the location and name for the Business Central Git directory: org.uberfire.nio.git.dir : Location of the Business Central Git directory. org.uberfire.nio.git.dirname : Name of the Business Central Git directory. Default value: .niogit . org.uberfire.nio.git.ketch : Enables or disables Git ketch. org.uberfire.nio.git.hooks : Location of the Git hooks directory. Git over HTTP Use the following properties to configure access to the Git repository over HTTP: org.uberfire.nio.git.proxy.ssh.over.http : Specifies whether SSH should use an HTTP proxy. Default value: false . http.proxyHost : Defines the host name of the HTTP proxy. Default value: null . http.proxyPort : Defines the host port (integer value) of the HTTP proxy. Default value: null . http.proxyUser : Defines the user name of the HTTP proxy. http.proxyPassword : Defines the user password of the HTTP proxy. org.uberfire.nio.git.http.enabled : Enables or disables the HTTP daemon. Default value: true . org.uberfire.nio.git.http.host : If the HTTP daemon is enabled, it uses this property as the host identifier. This is an informative property that is used to display how to access the Git repository over HTTP. The HTTP still relies on the servlet container. Default value: localhost . org.uberfire.nio.git.http.hostname : If the HTTP daemon is enabled, it uses this property as the host name identifier. This is an informative property that is used to display how to access the Git repository over HTTP. The HTTP still relies on the servlet container. Default value: localhost . org.uberfire.nio.git.http.port : If the HTTP daemon is enabled, it uses this property as the port number. This is an informative property that is used to display how to access the Git repository over HTTP. The HTTP still relies on the servlet container. Default value: 8080 . Git over HTTPS Use the following properties to configure access to the Git repository over HTTPS: org.uberfire.nio.git.proxy.ssh.over.https : Specifies whether SSH uses an HTTPS proxy. Default value: false . https.proxyHost : Defines the host name of the HTTPS proxy. Default value: null . https.proxyPort : Defines the host port (integer value) of the HTTPS proxy. Default value: null . https.proxyUser : Defines the user name of the HTTPS proxy. https.proxyPassword : Defines the user password of the HTTPS proxy. user.dir : Location of the user directory. org.uberfire.nio.git.https.enabled : Enables or disables the HTTPS daemon. Default value: false org.uberfire.nio.git.https.host : If the HTTPS daemon is enabled, it uses this property as the host identifier. This is an informative property that is used to display how to access the Git repository over HTTPS. The HTTPS still relies on the servlet container. Default value: localhost . org.uberfire.nio.git.https.hostname : If the HTTPS daemon is enabled, it uses this property as the host name identifier. This is an informative property that is used to display how to access the Git repository over HTTPS. The HTTPS still relies on the servlet container. Default value: localhost . org.uberfire.nio.git.https.port : If the HTTPS daemon is enabled, it uses this property as the port number. This is an informative property that is used to display how to access the Git repository over HTTPS. 
The HTTPS still relies on the servlet container. Default value: 8080 . JGit org.uberfire.nio.jgit.cache.instances : Defines the JGit cache size. org.uberfire.nio.jgit.cache.overflow.cleanup.size : Defines the JGit cache overflow cleanup size. org.uberfire.nio.jgit.remove.eldest.iterations : Enables or disables whether to remove eldest JGit iterations. org.uberfire.nio.jgit.cache.evict.threshold.duration : Defines the JGit evict threshold duration. org.uberfire.nio.jgit.cache.evict.threshold.time.unit : Defines the JGit evict threshold time unit. Git daemon Use the following properties to enable and configure the Git daemon: org.uberfire.nio.git.daemon.enabled : Enables or disables the Git daemon. Default value: true . org.uberfire.nio.git.daemon.host : If the Git daemon is enabled, it uses this property as the local host identifier. Default value: localhost . org.uberfire.nio.git.daemon.hostname : If the Git daemon is enabled, it uses this property as the local host name identifier. Default value: localhost org.uberfire.nio.git.daemon.port : If the Git daemon is enabled, it uses this property as the port number. Default value: 9418 . org.uberfire.nio.git.http.sslVerify : Enables or disables SSL certificate checking for Git repositories. Default value: true . Note If the default or assigned port is already in use, a new port is automatically selected. Ensure that the ports are available and check the log for more information. Git SSH Use the following properties to enable and configure the Git SSH daemon: org.uberfire.nio.git.ssh.enabled : Enables or disables the SSH daemon. Default value: true . org.uberfire.nio.git.ssh.host : If the SSH daemon enabled, it uses this property as the local host identifier. Default value: localhost . org.uberfire.nio.git.ssh.hostname : If the SSH daemon is enabled, it uses this property as local host name identifier. Default value: localhost . org.uberfire.nio.git.ssh.port : If the SSH daemon is enabled, it uses this property as the port number. Default value: 8001 . Note If the default or assigned port is already in use, a new port is automatically selected. Ensure that the ports are available and check the log for more information. org.uberfire.nio.git.ssh.cert.dir : Location of the .security directory where local certificates are stored. Default value: Working directory. org.uberfire.nio.git.ssh.idle.timeout : Sets the SSH idle timeout. org.uberfire.nio.git.ssh.passphrase : Pass phrase used to access the public key store of your operating system when cloning git repositories with SCP style URLs. Example: [email protected]:user/repository.git . org.uberfire.nio.git.ssh.algorithm : Algorithm used by SSH. Default value: RSA . org.uberfire.nio.git.gc.limit : Sets the GC limit. org.uberfire.nio.git.ssh.ciphers : A comma-separated string of ciphers. The available ciphers are aes128-ctr , aes192-ctr , aes256-ctr , arcfour128 , arcfour256 , aes192-cbc , aes256-cbc . If the property is not used, all available ciphers are loaded. org.uberfire.nio.git.ssh.macs : A comma-separated string of message authentication codes (MACs). The available MACs are hmac-md5 , hmac-md5-96 , hmac-sha1 , hmac-sha1-96 , hmac-sha2-256 , hmac-sha2-512 . If the property is not used, all available MACs are loaded. Note If you plan to use RSA or any algorithm other than DSA, make sure you set up your application server to use the Bouncy Castle JCE library. 
KIE Server nodes and Process Automation Manager controller Use the following properties to configure the connections with the KIE Server nodes from the Process Automation Manager controller: org.kie.server.controller : The URL is used to connect to the Process Automation Manager controller. For example, ws://localhost:8080/business-central/websocket/controller . org.kie.server.user : User name used to connect to the KIE Server nodes from the Process Automation Manager controller. This property is only required when using this Business Central installation as a Process Automation Manager controller. org.kie.server.pwd : Password used to connect to the KIE Server nodes from the Process Automation Manager controller. This property is only required when using this Business Central installation as a Process Automation Manager controller. Maven and miscellaneous Use the following properties to configure Maven and other miscellaneous functions: kie.maven.offline.force : Forces Maven to behave as if offline. If true, disables online dependency resolution. Default value: false . Note Use this property for Business Central only. If you share a runtime environment with any other component, isolate the configuration and apply it only to Business Central. org.uberfire.gzip.enable : Enables or disables Gzip compression on the GzipFilter compression filter. Default value: true . org.kie.workbench.profile : Selects the Business Central profile. Possible values are FULL or PLANNER_AND_RULES . A prefix FULL_ sets the profile and hides the profile preferences from the administrator preferences. Default value: FULL org.appformer.m2repo.url : Business Central uses the default location of the Maven repository when looking for dependencies. It directs to the Maven repository inside Business Central, for example, http://localhost:8080/business-central/maven2 . Set this property before starting Business Central. Default value: File path to the inner m2 repository. appformer.ssh.keystore : Defines the custom SSH keystore to be used with Business Central by specifying a class name. If the property is not available, the default SSH keystore is used. appformer.ssh.keys.storage.folder : When using the default SSH keystore, this property defines the storage folder for the user's SSH public keys. If the property is not available, the keys are stored in the Business Central .security folder. appformer.experimental.features : Enables the experimental features framework. Default value: false . org.kie.demo : Enables an external clone of a demo application from GitHub. org.uberfire.metadata.index.dir : Place where the Lucene .index directory is stored. Default value: Working directory. org.uberfire.ldap.regex.role_mapper : Regex pattern used to map LDAP principal names to the application role name. Note that the variable role must be a part of the pattern as the application role name substitutes the variable role when matching a principle value and role name. org.uberfire.sys.repo.monitor.disabled : Disables the configuration monitor. Do not disable unless you are sure. Default value: false . org.uberfire.secure.key : Password used by password encryption. Default value: org.uberfire.admin . org.uberfire.secure.alg : Crypto algorithm used by password encryption. Default value: PBEWithMD5AndDES . org.uberfire.domain : Security-domain name used by uberfire. Default value: ApplicationRealm . org.guvnor.m2repo.dir : Place where the Maven repository folder is stored. Default value: <working-directory>/repositories/kie . 
org.guvnor.project.gav.check.disabled : Disables group ID, artifact ID, and version (GAV) checks. Default value: false . org.kie.build.disable-project-explorer : Disables automatic build of a selected project in Project Explorer. Default value: false . org.kie.builder.cache.size : Defines the cache size of the project builder. Default value: 20 . org.kie.library.assets_per_page : You can customize the number of assets per page in the project screen. Default value: 15 . org.kie.verification.disable-dtable-realtime-verification : Disables the real-time validation and verification of decision tables. Default value: false . Process Automation Manager controller Use the following properties to configure how to connect to the Process Automation Manager controller: org.kie.workbench.controller : The URL used to connect to the Process Automation Manager controller, for example, ws://localhost:8080/kie-server-controller/websocket/controller . org.kie.workbench.controller.user : The Process Automation Manager controller user. Default value: kieserver . org.kie.workbench.controller.pwd : The Process Automation Manager controller password. Default value: kieserver1! . org.kie.workbench.controller.token : The token string used to connect to the Process Automation Manager controller. Java Cryptography Extension KeyStore (JCEKS) Use the following properties to configure JCEKS: kie.keystore.keyStoreURL : The URL used to load a Java Cryptography Extension KeyStore (JCEKS). For example, file:///home/kie/keystores/keystore.jceks. kie.keystore.keyStorePwd : The password used for the JCEKS. kie.keystore.key.ctrl.alias : The alias of the key for the default REST Process Automation Manager controller. kie.keystore.key.ctrl.pwd : The password of the alias for the default REST Process Automation Manager controller. Rendering Use the following properties to switch between Business Central and KIE Server rendered forms: org.jbpm.wb.forms.renderer.ext : Switches the form rendering between Business Central and KIE Server. By default, the form rendering is performed by Business Central. Default value: false . org.jbpm.wb.forms.renderer.name : Enables you to switch between Business Central and KIE Server rendered forms. Default value: workbench . | null | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/installing_and_configuring_red_hat_decision_manager/business-central-system-properties-ref_install-on-eap |
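Because the entries in this appendix are ordinary JVM system properties, one quick way to experiment with them is to pass them as -D options when starting Red Hat JBoss EAP. This is a sketch rather than part of the guide itself, and the directory name, port, and controller URL below are illustrative values, not defaults:

EAP_HOME/bin/standalone.sh -Dorg.uberfire.nio.git.dirname=custom-niogit -Dorg.uberfire.nio.git.ssh.port=8003 -Dorg.kie.workbench.controller=ws://localhost:8080/kie-server-controller/websocket/controller

For a permanent configuration, the same name/value pairs belong in the <system-properties> element of the standalone*.xml profile referred to at the start of this appendix.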
Chapter 2. Red Hat Enterprise Linux 8 | Chapter 2. Red Hat Enterprise Linux 8 This section outlines the packages released for Red Hat Enterprise Linux 8. 2.1. Red Hat Satellite 6.15 for RHEL 8 x86_64 (RPMs) The following table outlines the packages included in the satellite-6.15-for-rhel-8-x86_64-rpms repository. Table 2.1. Red Hat Satellite 6.15 for RHEL 8 x86_64 (RPMs) Name Version Advisory ansible-collection-redhat-satellite 4.0.0-2.el8sat RHSA-2024:2010 ansible-collection-redhat-satellite_operations 2.1.0-1.el8sat RHSA-2024:2010 ansible-lint 5.4.0-1.el8pc RHSA-2024:2010 ansible-runner 2.2.1-5.1.el8pc RHSA-2024:2010 ansiblerole-foreman_scap_client 0.2.0-2.el8sat RHSA-2024:2010 ansiblerole-insights-client 1.7.1-2.el8sat RHSA-2024:2010 candlepin 4.3.12-1.el8sat RHSA-2024:2010 candlepin-selinux 4.3.12-1.el8sat RHSA-2024:2010 cjson 1.7.14-5.el8sat RHSA-2024:2010 createrepo_c 1.0.2-5.el8pc RHSA-2024:2010 createrepo_c-libs 1.0.2-5.el8pc RHSA-2024:2010 dynflow-utils 1.6.3-1.el8sat RHSA-2024:2010 foreman 3.9.1.6-1.el8sat RHSA-2024:2010 foreman-bootloaders-redhat 202102220000-1.el8sat RHSA-2024:2010 foreman-bootloaders-redhat-tftpboot 202102220000-1.el8sat RHSA-2024:2010 foreman-cli 3.9.1.6-1.el8sat RHSA-2024:2010 foreman-debug 3.9.1.6-1.el8sat RHSA-2024:2010 foreman-discovery-image 4.1.0-31.el8sat RHSA-2024:2010 foreman-discovery-image-service 1.0.0-4.1.el8sat RHSA-2024:2010 foreman-discovery-image-service-tui 1.0.0-4.1.el8sat RHSA-2024:2010 foreman-dynflow-sidekiq 3.9.1.6-1.el8sat RHSA-2024:2010 foreman-ec2 3.9.1.6-1.el8sat RHSA-2024:2010 foreman-fapolicyd 1.0.1-2.el8sat RHSA-2024:2010 foreman-installer 3.9.0-3.el8sat RHSA-2024:2010 foreman-installer-katello 3.9.0-3.el8sat RHSA-2024:2010 foreman-journald 3.9.1.6-1.el8sat RHSA-2024:2010 foreman-libvirt 3.9.1.6-1.el8sat RHSA-2024:2010 foreman-obsolete-packages 1.6-1.el8sat RHSA-2024:2010 foreman-openstack 3.9.1.6-1.el8sat RHSA-2024:2010 foreman-ovirt 3.9.1.6-1.el8sat RHSA-2024:2010 foreman-pcp 3.9.1.6-1.el8sat RHSA-2024:2010 foreman-postgresql 3.9.1.6-1.el8sat RHSA-2024:2010 foreman-proxy 3.9.0-1.el8sat RHSA-2024:2010 foreman-proxy-fapolicyd 1.0.1-2.el8sat RHSA-2024:2010 foreman-proxy-journald 3.9.0-1.el8sat RHSA-2024:2010 foreman-redis 3.9.1.6-1.el8sat RHSA-2024:2010 foreman-selinux 3.9.0-1.el8sat RHSA-2024:2010 foreman-service 3.9.1.6-1.el8sat RHSA-2024:2010 foreman-telemetry 3.9.1.6-1.el8sat RHSA-2024:2010 foreman-vmware 3.9.1.6-1.el8sat RHSA-2024:2010 katello 4.11.0-1.el8sat RHSA-2024:2010 katello-certs-tools 2.9.0-2.el8sat RHSA-2024:2010 katello-client-bootstrap 1.7.9-1.el8sat RHSA-2024:2010 katello-common 4.11.0-1.el8sat RHSA-2024:2010 katello-debug 4.11.0-1.el8sat RHSA-2024:2010 katello-selinux 5.0.2-1.el8sat RHSA-2024:2010 libcomps 0.1.18-8.el8pc RHSA-2024:2010 libsodium 1.0.17-3.el8sat RHSA-2024:2010 libsolv 0.7.22-6.el8pc RHSA-2024:2010 mosquitto 2.0.17-1.el8sat RHSA-2024:2010 postgresql-evr 0.0.2-1.el8sat RHSA-2024:2010 pulpcore-obsolete-packages 1.0-9.el8pc RHSA-2024:2010 pulpcore-selinux 2.0.1-1.el8pc RHSA-2024:2010 puppet-agent 7.28.0-1.el8sat RHSA-2024:2010 puppet-agent-oauth 0.5.10-1.el8sat RHSA-2024:2010 puppet-foreman_scap_client 0.4.0-1.el8sat RHSA-2024:2010 puppetlabs-stdlib 9.4.1-1.el8sat RHSA-2024:2010 puppetserver 7.14.0-1.el8sat RHSA-2024:2010 python3-createrepo_c 1.0.2-5.el8pc RHSA-2024:2010 python3-libcomps 0.1.18-8.el8pc RHSA-2024:2010 python3-solv 0.7.22-6.el8pc RHSA-2024:2010 python3-websockify 0.10.0-3.el8sat RHSA-2024:2010 python3.11-aiodns 3.0.0-6.el8pc RHSA-2024:2010 python3.11-aiofiles 22.1.0-4.el8pc 
RHSA-2024:2010 python3.11-aiohttp 3.9.2-1.el8pc RHSA-2024:2010 python3.11-aiohttp-xmlrpc 1.5.0-5.el8pc RHSA-2024:2010 python3.11-aioredis 2.0.1-5.el8pc RHSA-2024:2010 python3.11-aiosignal 1.3.1-4.el8pc RHSA-2024:2010 python3.11-ansible-builder 3.0.0-1.el8pc RHSA-2024:2010 python3.11-ansible-runner 2.2.1-5.1.el8pc RHSA-2024:2010 python3.11-asgiref 3.6.0-4.el8pc RHSA-2024:2010 python3.11-async-lru 1.0.3-4.el8pc RHSA-2024:2010 python3.11-async-timeout 4.0.2-5.el8pc RHSA-2024:2010 python3.11-asyncio-throttle 1.0.2-6.el8pc RHSA-2024:2010 python3.11-attrs 21.4.0-5.el8pc RHSA-2024:2010 python3.11-backoff 2.2.1-4.el8pc RHSA-2024:2010 python3.11-bindep 2.11.0-4.el8pc RHSA-2024:2010 python3.11-bleach 3.3.1-5.el8pc RHSA-2024:2010 python3.11-bleach-allowlist 1.0.3-6.el8pc RHSA-2024:2010 python3.11-bracex 2.2.1-4.el8pc RHSA-2024:2010 python3.11-brotli 1.0.9-4.el8pc RHSA-2024:2010 python3.11-certifi 2022.12.7-3.el8pc RHSA-2024:2010 python3.11-cffi 1.15.1-4.el8pc RHSA-2024:2010 python3.11-charset-normalizer 2.1.1-4.el8pc RHSA-2024:2010 python3.11-click 8.1.3-4.el8pc RHSA-2024:2010 python3.11-click-shell 2.1-6.el8pc RHSA-2024:2010 python3.11-colorama 0.4.4-6.el8pc RHSA-2024:2010 python3.11-commonmark 0.9.1-8.el8pc RHSA-2024:2010 python3.11-contextlib2 21.6.0-6.el8pc RHSA-2024:2010 python3.11-createrepo_c 1.0.2-5.el8pc RHSA-2024:2010 python3.11-cryptography 41.0.6-1.el8pc RHSA-2024:2010 python3.11-daemon 2.3.1-4.3.el8pc RHSA-2024:2010 python3.11-dataclasses 0.8-6.el8pc RHSA-2024:2010 python3.11-dateutil 2.8.2-5.el8pc RHSA-2024:2010 python3.11-debian 0.1.44-6.el8pc RHSA-2024:2010 python3.11-defusedxml 0.7.1-6.el8pc RHSA-2024:2010 python3.11-deprecated 1.2.13-4.el8pc RHSA-2024:2010 python3.11-diff-match-patch 20200713-6.el8pc RHSA-2024:2010 python3.11-distro 1.7.0-3.el8pc RHSA-2024:2010 python3.11-django 4.2.9-1.el8pc RHSA-2024:2010 python3.11-django-filter 23.2-3.el8pc RHSA-2024:2010 python3.11-django-guid 3.3.0-4.el8pc RHSA-2024:2010 python3.11-django-import-export 3.1.0-3.el8pc RHSA-2024:2010 python3.11-django-lifecycle 1.0.0-3.el8pc RHSA-2024:2010 python3.11-django-readonly-field 1.1.2-3.el8pc RHSA-2024:2010 python3.11-djangorestframework 3.14.0-3.el8pc RHSA-2024:2010 python3.11-djangorestframework-queryfields 1.0.0-7.el8pc RHSA-2024:2010 python3.11-docutils 0.20.1-3.el8pc RHSA-2024:2010 python3.11-drf-access-policy 1.3.0-3.el8pc RHSA-2024:2010 python3.11-drf-nested-routers 0.93.4-5.el8pc RHSA-2024:2010 python3.11-drf-spectacular 0.26.5-4.el8pc RHSA-2024:2010 python3.11-dynaconf 3.1.12-3.el8pc RHSA-2024:2010 python3.11-ecdsa 0.18.0-4.el8pc RHSA-2024:2010 python3.11-enrich 1.2.6-7.el8pc RHSA-2024:2010 python3.11-et-xmlfile 1.1.0-5.el8pc RHSA-2024:2010 python3.11-flake8 5.0.0-2.el8pc RHSA-2024:2010 python3.11-frozenlist 1.3.3-4.el8pc RHSA-2024:2010 python3.11-future 0.18.3-4.el8pc RHSA-2024:2010 python3.11-galaxy-importer 0.4.19-2.el8pc RHSA-2024:2010 python3.11-gitdb 4.0.10-4.el8pc RHSA-2024:2010 python3.11-gitpython 3.1.40-2.el8pc RHSA-2024:2010 python3.11-gnupg 0.5.0-4.el8pc RHSA-2024:2010 python3.11-googleapis-common-protos 1.59.1-4.el8pc RHSA-2024:2010 python3.11-grpcio 1.56.0-4.el8pc RHSA-2024:2010 python3.11-gunicorn 20.1.0-7.el8pc RHSA-2024:2010 python3.11-importlib-metadata 6.0.1-3.el8pc RHSA-2024:2010 python3.11-inflection 0.5.1-6.el8pc RHSA-2024:2010 python3.11-iniparse 0.4-39.el8pc RHSA-2024:2010 python3.11-jinja2 3.1.3-1.el8pc RHSA-2024:2010 python3.11-jq 1.6.0-3.el8pc RHSA-2024:2010 python3.11-json_stream 2.3.2-4.el8pc RHSA-2024:2010 python3.11-json_stream_rs_tokenizer 0.4.25-3.el8pc 
RHSA-2024:2010 python3.11-jsonschema 4.10.3-3.el8pc RHSA-2024:2010 python3.11-libcomps 0.1.18-8.el8pc RHSA-2024:2010 python3.11-lockfile 0.12.2-4.el8pc RHSA-2024:2010 python3.11-lxml 4.9.2-4.el8pc RHSA-2024:2010 python3.11-markdown 3.4.1-3.el8pc RHSA-2024:2010 python3.11-markuppy 1.14-6.el8pc RHSA-2024:2010 python3.11-markupsafe 2.1.2-4.el8pc RHSA-2024:2010 python3.11-mccabe 0.7.0-3.el8pc RHSA-2024:2010 python3.11-multidict 6.0.4-4.el8pc RHSA-2024:2010 python3.11-odfpy 1.4.1-9.el8pc RHSA-2024:2010 python3.11-openpyxl 3.1.0-4.el8pc RHSA-2024:2010 python3.11-opentelemetry_api 1.19.0-3.el8pc RHSA-2024:2010 python3.11-opentelemetry_distro 0.40b0-7.el8pc RHSA-2024:2010 python3.11-opentelemetry_distro_otlp 0.40b0-7.el8pc RHSA-2024:2010 python3.11-opentelemetry_exporter_otlp 1.19.0-4.el8pc RHSA-2024:2010 python3.11-opentelemetry_exporter_otlp_proto_common 1.19.0-3.el8pc RHSA-2024:2010 python3.11-opentelemetry_exporter_otlp_proto_grpc 1.19.0-5.el8pc RHSA-2024:2010 python3.11-opentelemetry_exporter_otlp_proto_http 1.19.0-5.el8pc RHSA-2024:2010 python3.11-opentelemetry_instrumentation 0.40b0-5.el8pc RHSA-2024:2010 python3.11-opentelemetry_instrumentation_django 0.40b0-4.el8pc RHSA-2024:2010 python3.11-opentelemetry_instrumentation_wsgi 0.40b0-4.el8pc RHSA-2024:2010 python3.11-opentelemetry_proto 1.19.0-4.el8pc RHSA-2024:2010 python3.11-opentelemetry_sdk 1.19.0-4.el8pc RHSA-2024:2010 python3.11-opentelemetry_semantic_conventions 0.40b0-3.el8pc RHSA-2024:2010 python3.11-opentelemetry_util_http 0.40b0-3.el8pc RHSA-2024:2010 python3.11-packaging 21.3-5.el8pc RHSA-2024:2010 python3.11-parsley 1.3-5.el8pc RHSA-2024:2010 python3.11-pbr 5.8.0-6.el8pc RHSA-2024:2010 python3.11-pexpect 4.8.0-5.el8pc RHSA-2024:2010 python3.11-pillow 9.5.0-4.el8pc RHSA-2024:2010 python3.11-productmd 1.33-5.el8pc RHSA-2024:2010 python3.11-protobuf 4.21.6-4.el8pc RHSA-2024:2010 python3.11-psycopg 3.1.9-3.el8pc RHSA-2024:2010 python3.11-ptyprocess 0.7.0-3.el8pc RHSA-2024:2010 python3.11-pulp-ansible 0.20.2-3.el8pc RHSA-2024:2010 python3.11-pulp-certguard 1.7.1-2.el8pc RHSA-2024:2010 python3.11-pulp-cli 0.21.2-5.el8pc RHSA-2024:2010 python3.11-pulp-container 2.16.4-1.el8pc RHSA-2024:2010 python3.11-pulp-deb 3.0.1-1.el8pc RHSA-2024:2010 python3.11-pulp-file 1.15.1-2.el8pc RHSA-2024:2010 python3.11-pulp-glue 0.21.2-3.el8pc RHSA-2024:2010 python3.11-pulp-rpm 3.23.3-1.el8pc RHSA-2024:2010 python3.11-pulp_manifest 3.0.0-4.el8pc RHSA-2024:2010 python3.11-pulpcore 3.39.8-2.el8pc RHSA-2024:2010 python3.11-pyOpenSSL 23.3.0-1.el8pc RHSA-2024:2010 python3.11-pycares 4.1.2-4.el8pc RHSA-2024:2010 python3.11-pycodestyle 2.9.1-2.el8pc RHSA-2024:2010 python3.11-pycparser 2.21-5.el8pc RHSA-2024:2010 python3.11-pycryptodomex 3.20.0-1.el8pc RHSA-2024:2010 python3.11-pyflakes 2.5.0-2.el8pc RHSA-2024:2010 python3.11-pygments 2.17.0-1.el8pc RHSA-2024:2010 python3.11-pygtrie 2.5.0-4.el8pc RHSA-2024:2010 python3.11-pyjwkest 1.4.2-8.el8pc RHSA-2024:2010 python3.11-pyjwt 2.5.0-4.el8pc RHSA-2024:2010 python3.11-pyparsing 3.1.1-3.el8pc RHSA-2024:2010 python3.11-pyrsistent 0.18.1-5.el8pc RHSA-2024:2010 python3.11-pytz 2022.2.1-5.el8pc RHSA-2024:2010 python3.11-redis 4.3.4-4.el8pc RHSA-2024:2010 python3.11-requests 2.31.0-4.el8pc RHSA-2024:2010 python3.11-requirements-parser 0.2.0-6.el8pc RHSA-2024:2010 python3.11-rhsm 1.19.2-6.el8pc RHSA-2024:2010 python3.11-rich 13.3.1-7.el8pc RHSA-2024:2010 python3.11-ruamel-yaml 0.17.21-5.el8pc RHSA-2024:2010 python3.11-ruamel-yaml-clib 0.2.7-4.el8pc RHSA-2024:2010 python3.11-schema 0.7.5-5.el8pc RHSA-2024:2010 
python3.11-semantic-version 2.10.0-4.el8pc RHSA-2024:2010 python3.11-six 1.16.0-5.el8pc RHSA-2024:2010 python3.11-smmap 5.0.0-5.el8pc RHSA-2024:2010 python3.11-solv 0.7.22-6.el8pc RHSA-2024:2010 python3.11-sqlparse 0.4.4-3.el8pc RHSA-2024:2010 python3.11-tablib 3.3.0-4.el8pc RHSA-2024:2010 python3.11-tenacity 7.0.0-5.el8pc RHSA-2024:2010 python3.11-toml 0.10.2-5.el8pc RHSA-2024:2010 python3.11-types-cryptography 3.3.23.2-3.el8pc RHSA-2024:2010 python3.11-typing-extensions 4.7.1-4.el8pc RHSA-2024:2010 python3.11-uritemplate 4.1.1-4.el8pc RHSA-2024:2010 python3.11-url-normalize 1.4.3-6.el8pc RHSA-2024:2010 python3.11-urllib3 1.26.18-2.el8pc RHSA-2024:2010 python3.11-urlman 2.0.1-3.el8pc RHSA-2024:2010 python3.11-uuid6 2023.5.2-4.el8pc RHSA-2024:2010 python3.11-wcmatch 8.3-5.el8pc RHSA-2024:2010 python3.11-webencodings 0.5.1-6.el8pc RHSA-2024:2010 python3.11-whitenoise 6.0.0-4.el8pc RHSA-2024:2010 python3.11-wrapt 1.14.1-4.el8pc RHSA-2024:2010 python3.11-xlrd 2.0.1-8.el8pc RHSA-2024:2010 python3.11-xlwt 1.3.0-6.el8pc RHSA-2024:2010 python3.11-yarl 1.8.2-4.el8pc RHSA-2024:2010 python3.11-zipp 3.4.0-7.el8pc RHSA-2024:2010 rubygem-actioncable 6.1.7.6-1.el8sat RHSA-2024:2010 rubygem-actionmailbox 6.1.7.6-1.el8sat RHSA-2024:2010 rubygem-actionmailer 6.1.7.6-1.el8sat RHSA-2024:2010 rubygem-actionpack 6.1.7.6-1.el8sat RHSA-2024:2010 rubygem-actiontext 6.1.7.6-1.el8sat RHSA-2024:2010 rubygem-actionview 6.1.7.6-1.el8sat RHSA-2024:2010 rubygem-activejob 6.1.7.6-1.el8sat RHSA-2024:2010 rubygem-activemodel 6.1.7.6-1.el8sat RHSA-2024:2010 rubygem-activerecord 6.1.7.6-1.el8sat RHSA-2024:2010 rubygem-activerecord-import 1.5.0-1.el8sat RHSA-2024:2010 rubygem-activerecord-session_store 2.0.0-1.el8sat RHSA-2024:2010 rubygem-activestorage 6.1.7.6-1.el8sat RHSA-2024:2010 rubygem-activesupport 6.1.7.6-1.el8sat RHSA-2024:2010 rubygem-acts_as_list 1.0.3-2.el8sat RHSA-2024:2010 rubygem-addressable 2.8.5-1.el8sat RHSA-2024:2010 rubygem-algebrick 0.7.5-1.el8sat RHSA-2024:2010 rubygem-amazing_print 1.5.0-1.el8sat RHSA-2024:2010 rubygem-ancestry 4.3.3-1.el8sat RHSA-2024:2010 rubygem-anemone 0.7.2-23.el8sat RHSA-2024:2010 rubygem-angular-rails-templates 1.1.0-2.el8sat RHSA-2024:2010 rubygem-ansi 1.5.0-3.el8sat RHSA-2024:2010 rubygem-apipie-bindings 0.6.0-1.el8sat RHSA-2024:2010 rubygem-apipie-dsl 2.6.1-1.el8sat RHSA-2024:2010 rubygem-apipie-params 0.0.5-5.1.el8sat RHSA-2024:2010 rubygem-apipie-rails 1.2.3-1.el8sat RHSA-2024:2010 rubygem-audited 5.4.2-1.el8sat RHSA-2024:2010 rubygem-azure_mgmt_compute 0.22.0-1.el8sat RHSA-2024:2010 rubygem-azure_mgmt_network 0.26.1-2.el8sat RHSA-2024:2010 rubygem-azure_mgmt_resources 0.18.2-1.el8sat RHSA-2024:2010 rubygem-azure_mgmt_storage 0.23.0-1.el8sat RHSA-2024:2010 rubygem-azure_mgmt_subscriptions 0.18.5-1.el8sat RHSA-2024:2010 rubygem-bcrypt 3.1.19-1.el8sat RHSA-2024:2010 rubygem-builder 3.2.4-2.el8sat RHSA-2024:2010 rubygem-bundler_ext 0.4.1-6.el8sat RHSA-2024:2010 rubygem-clamp 1.3.2-1.el8sat RHSA-2024:2010 rubygem-coffee-rails 5.0.0-2.el8sat RHSA-2024:2010 rubygem-coffee-script 2.4.1-5.el8sat RHSA-2024:2010 rubygem-coffee-script-source 1.12.2-5.el8sat RHSA-2024:2010 rubygem-colorize 0.8.1-2.el8sat RHSA-2024:2010 rubygem-concurrent-ruby 1.1.10-1.el8sat RHSA-2024:2010 rubygem-concurrent-ruby-edge 0.6.0-3.el8sat RHSA-2024:2010 rubygem-connection_pool 2.4.1-1.el8sat RHSA-2024:2010 rubygem-crass 1.0.6-2.el8sat RHSA-2024:2010 rubygem-css_parser 1.16.0-1.el8sat RHSA-2024:2010 rubygem-daemons 1.4.1-1.el8sat RHSA-2024:2010 rubygem-deacon 1.0.0-5.el8sat RHSA-2024:2010 rubygem-declarative 
0.0.20-1.el8sat RHSA-2024:2010 rubygem-deep_cloneable 3.2.0-1.el8sat RHSA-2024:2010 rubygem-deface 1.5.3-3.el8sat RHSA-2024:2010 rubygem-diffy 3.4.2-1.el8sat RHSA-2024:2010 rubygem-domain_name 0.6.20231109-1.el8sat RHSA-2024:2010 rubygem-dynflow 1.8.2-1.el8sat RHSA-2024:2010 rubygem-erubi 1.12.0-1.el8sat RHSA-2024:2010 rubygem-et-orbi 1.2.7-1.el8sat RHSA-2024:2010 rubygem-excon 0.104.0-1.el8sat RHSA-2024:2010 rubygem-execjs 2.9.1-1.el8sat RHSA-2024:2010 rubygem-facter 4.5.0-1.el8sat RHSA-2024:2010 rubygem-faraday 1.10.2-1.el8sat RHSA-2024:2010 rubygem-faraday-cookie_jar 0.0.6-2.el8sat RHSA-2024:2010 rubygem-faraday-em_http 1.0.0-1.el8sat RHSA-2024:2010 rubygem-faraday-em_synchrony 1.0.0-1.el8sat RHSA-2024:2010 rubygem-faraday-excon 1.1.0-1.el8sat RHSA-2024:2010 rubygem-faraday-httpclient 1.0.1-1.el8sat RHSA-2024:2010 rubygem-faraday-multipart 1.0.4-1.el8sat RHSA-2024:2010 rubygem-faraday-net_http 1.0.1-1.el8sat RHSA-2024:2010 rubygem-faraday-net_http_persistent 1.2.0-1.el8sat RHSA-2024:2010 rubygem-faraday-patron 1.0.0-1.el8sat RHSA-2024:2010 rubygem-faraday-rack 1.0.0-1.el8sat RHSA-2024:2010 rubygem-faraday-retry 1.0.3-1.el8sat RHSA-2024:2010 rubygem-faraday_middleware 1.2.0-1.el8sat RHSA-2024:2010 rubygem-fast_gettext 1.8.0-1.el8sat RHSA-2024:2010 rubygem-ffi 1.16.3-1.el8sat RHSA-2024:2010 rubygem-fog-aws 3.21.0-1.el8sat RHSA-2024:2010 rubygem-fog-core 2.3.0-1.el8sat RHSA-2024:2010 rubygem-fog-json 1.2.0-4.el8sat RHSA-2024:2010 rubygem-fog-kubevirt 1.3.7-1.el8sat RHSA-2024:2010 rubygem-fog-libvirt 0.12.0-1.el8sat RHSA-2024:2010 rubygem-fog-openstack 1.1.0-1.el8sat RHSA-2024:2010 rubygem-fog-ovirt 2.0.2-1.el8sat RHSA-2024:2010 rubygem-fog-vsphere 3.6.2-1.el8sat RHSA-2024:2010 rubygem-fog-xml 0.1.4-1.el8sat RHSA-2024:2010 rubygem-foreman-tasks 9.0.4-1.el8sat RHSA-2024:2010 rubygem-foreman_ansible 13.0.3-1.el8sat RHSA-2024:2010 rubygem-foreman_azure_rm 2.2.10-2.el8sat RHSA-2024:2010 rubygem-foreman_bootdisk 21.2.1-1.el8sat RHSA-2024:2010 rubygem-foreman_discovery 23.0.1-1.1.el8sat RHSA-2024:2010 rubygem-foreman_google 1.0.4-1.el8sat RHSA-2024:2010 rubygem-foreman_hooks 0.3.17-3.1.el8sat RHSA-2024:2010 rubygem-foreman_kubevirt 0.1.9-4.el8sat RHSA-2024:2010 rubygem-foreman_leapp 1.1.1-1.el8sat RHSA-2024:2010 rubygem-foreman_maintain 1.4.4-1.el8sat RHSA-2024:2010 rubygem-foreman_openscap 7.1.1-1.el8sat RHSA-2024:2010 rubygem-foreman_puppet 6.1.1-1.el8sat RHSA-2024:2010 rubygem-foreman_remote_execution 12.0.5-1.el8sat RHSA-2024:2010 rubygem-foreman_remote_execution-cockpit 12.0.5-1.el8sat RHSA-2024:2010 rubygem-foreman_rh_cloud 9.0.55-1.el8sat RHSA-2024:2010 rubygem-foreman_templates 9.4.0-1.el8sat RHSA-2024:2010 rubygem-foreman_theme_satellite 13.2.3-1.el8sat RHSA-2024:2010 rubygem-foreman_virt_who_configure 0.5.20-1.el8sat RHSA-2024:2010 rubygem-foreman_webhooks 3.2.2-1.el8sat RHSA-2024:2010 rubygem-formatador 1.1.0-1.el8sat RHSA-2024:2010 rubygem-friendly_id 5.5.1-1.el8sat RHSA-2024:2010 rubygem-fugit 1.8.1-1.el8sat RHSA-2024:2010 rubygem-fx 0.7.0-1.el8sat RHSA-2024:2010 rubygem-gapic-common 0.12.0-1.el8sat RHSA-2024:2010 rubygem-get_process_mem 0.2.7-2.1.el8sat RHSA-2024:2010 rubygem-gettext_i18n_rails 1.12.0-1.el8sat RHSA-2024:2010 rubygem-git 1.18.0-1.el8sat RHSA-2024:2010 rubygem-gitlab-sidekiq-fetcher 0.9.0-2.el8sat RHSA-2024:2010 rubygem-globalid 1.2.1-1.el8sat RHSA-2024:2010 rubygem-google-apis-compute_v1 0.54.0-1.el8sat RHSA-2024:2010 rubygem-google-apis-core 0.9.1-1.el8sat RHSA-2024:2010 rubygem-google-cloud-common 1.1.0-1.el8sat RHSA-2024:2010 rubygem-google-cloud-compute 
0.5.0-1.el8sat RHSA-2024:2010 rubygem-google-cloud-compute-v1 1.7.1-1.el8sat RHSA-2024:2010 rubygem-google-cloud-core 1.6.0-1.el8sat RHSA-2024:2010 rubygem-google-cloud-env 1.6.0-1.el8sat RHSA-2024:2010 rubygem-google-cloud-errors 1.3.0-1.el8sat RHSA-2024:2010 rubygem-google-protobuf 3.24.3-1.el8sat RHSA-2024:2010 rubygem-googleapis-common-protos 1.3.12-1.el8sat RHSA-2024:2010 rubygem-googleapis-common-protos-types 1.4.0-1.el8sat RHSA-2024:2010 rubygem-googleauth 1.3.0-1.el8sat RHSA-2024:2010 rubygem-graphql 1.13.20-1.el8sat RHSA-2024:2010 rubygem-graphql-batch 0.5.3-1.el8sat RHSA-2024:2010 rubygem-grpc 1.58.0-1.el8sat RHSA-2024:2010 rubygem-gssapi 1.3.1-1.el8sat RHSA-2024:2010 rubygem-hammer_cli 3.9.0-1.el8sat RHSA-2024:2010 rubygem-hammer_cli_foreman 3.9.0.1-1.el8sat RHSA-2024:2010 rubygem-hammer_cli_foreman_admin 1.2.0-1.el8sat RHSA-2024:2010 rubygem-hammer_cli_foreman_ansible 0.6.0-1.el8sat RHSA-2024:2010 rubygem-hammer_cli_foreman_azure_rm 0.3.0-1.el8sat RHSA-2024:2010 rubygem-hammer_cli_foreman_bootdisk 0.3.0-3.el8sat RHSA-2024:2010 rubygem-hammer_cli_foreman_discovery 1.2.0-1.el8sat RHSA-2024:2010 rubygem-hammer_cli_foreman_google 1.0.1-1.el8sat RHSA-2024:2010 rubygem-hammer_cli_foreman_kubevirt 0.1.5-1.el8sat RHSA-2024:2010 rubygem-hammer_cli_foreman_leapp 0.1.1-2.el8sat RHSA-2024:2010 rubygem-hammer_cli_foreman_openscap 0.2.0-1.el8sat RHSA-2024:2010 rubygem-hammer_cli_foreman_puppet 0.0.7-1.el8sat RHSA-2024:2010 rubygem-hammer_cli_foreman_remote_execution 0.2.3-1.el8sat RHSA-2024:2010 rubygem-hammer_cli_foreman_tasks 0.0.19-1.el8sat RHSA-2024:2010 rubygem-hammer_cli_foreman_templates 0.3.0-1.el8sat RHSA-2024:2010 rubygem-hammer_cli_foreman_virt_who_configure 0.1.0-1.el8sat RHSA-2024:2010 rubygem-hammer_cli_foreman_webhooks 0.0.4-1.el8sat RHSA-2024:2010 rubygem-hammer_cli_katello 1.11.1.2-1.el8sat RHSA-2024:2010 rubygem-hashie 5.0.0-1.el8sat RHSA-2024:2010 rubygem-highline 2.1.0-1.el8sat RHSA-2024:2010 rubygem-hocon 1.4.0-1.el8sat RHSA-2024:2010 rubygem-http 3.3.0-2.el8sat RHSA-2024:2010 rubygem-http-accept 1.7.0-1.el8sat RHSA-2024:2010 rubygem-http-cookie 1.0.5-1.el8sat RHSA-2024:2010 rubygem-http-form_data 2.1.1-2.el8sat RHSA-2024:2010 rubygem-http_parser.rb 0.6.0-3.1.el8sat RHSA-2024:2010 rubygem-httpclient 2.8.3-4.el8sat RHSA-2024:2010 rubygem-i18n 1.14.1-1.el8sat RHSA-2024:2010 rubygem-infoblox 3.0.0-4.el8sat RHSA-2024:2010 rubygem-jgrep 1.3.3-11.el8sat RHSA-2024:2010 rubygem-journald-logger 3.1.0-1.el8sat RHSA-2024:2010 rubygem-journald-native 1.0.12-1.el8sat RHSA-2024:2010 rubygem-jsonpath 1.1.2-1.el8sat RHSA-2024:2010 rubygem-jwt 2.7.1-1.el8sat RHSA-2024:2010 rubygem-kafo 7.3.0-1.el8sat RHSA-2024:2010 rubygem-kafo_parsers 1.2.1-1.el8sat RHSA-2024:2010 rubygem-kafo_wizards 0.0.2-2.el8sat RHSA-2024:2010 rubygem-katello 4.11.0.9-1.el8sat RHSA-2024:2010 rubygem-kubeclient 4.10.1-1.el8sat RHSA-2024:2010 rubygem-ldap_fluff 0.6.0-1.el8sat RHSA-2024:2010 rubygem-little-plugger 1.1.4-3.el8sat RHSA-2024:2010 rubygem-locale 2.1.3-1.el8sat RHSA-2024:2010 rubygem-logging 2.3.1-1.el8sat RHSA-2024:2010 rubygem-logging-journald 2.1.0-1.el8sat RHSA-2024:2010 rubygem-loofah 2.22.0-1.el8sat RHSA-2024:2010 rubygem-mail 2.8.0.1-1.el8sat RHSA-2024:2010 rubygem-marcel 1.0.2-1.el8sat RHSA-2024:2010 rubygem-memoist 0.16.2-1.el8sat RHSA-2024:2010 rubygem-method_source 1.0.0-1.el8sat RHSA-2024:2010 rubygem-mime-types 3.5.1-1.el8sat RHSA-2024:2010 rubygem-mime-types-data 3.2023.1003-1.el8sat RHSA-2024:2010 rubygem-mini_mime 1.1.5-1.el8sat RHSA-2024:2010 rubygem-mqtt 0.5.0-1.el8sat RHSA-2024:2010 
rubygem-ms_rest 0.7.6-1.el8sat RHSA-2024:2010 rubygem-ms_rest_azure 0.12.0-1.el8sat RHSA-2024:2010 rubygem-msgpack 1.7.2-1.el8sat RHSA-2024:2010 rubygem-multi_json 1.15.0-1.el8sat RHSA-2024:2010 rubygem-multipart-post 2.2.3-1.el8sat RHSA-2024:2010 rubygem-mustermann 2.0.2-1.el8sat RHSA-2024:2010 rubygem-net-ldap 0.18.0-1.el8sat RHSA-2024:2010 rubygem-net-ping 2.0.8-1.el8sat RHSA-2024:2010 rubygem-net-scp 4.0.0-1.el8sat RHSA-2024:2010 rubygem-net-ssh 7.2.0-1.el8sat RHSA-2024:2010 rubygem-net-ssh-krb 0.4.0-4.el8sat RHSA-2024:2010 rubygem-net_http_unix 0.2.2-2.el8sat RHSA-2024:2010 rubygem-netrc 0.11.0-6.el8sat RHSA-2024:2010 rubygem-newt 0.9.7-3.1.el8sat RHSA-2024:2010 rubygem-nio4r 2.5.9-1.el8sat RHSA-2024:2010 rubygem-nokogiri 1.15.5-1.el8sat RHSA-2024:2010 rubygem-oauth 1.1.0-1.el8sat RHSA-2024:2010 rubygem-oauth-tty 1.0.5-1.el8sat RHSA-2024:2010 rubygem-openscap 0.4.9-9.el8sat RHSA-2024:2010 rubygem-openscap_parser 1.0.2-2.el8sat RHSA-2024:2010 rubygem-optimist 3.1.0-1.el8sat RHSA-2024:2010 rubygem-os 1.1.4-1.el8sat RHSA-2024:2010 rubygem-ovirt-engine-sdk 4.4.1-1.el8sat RHSA-2024:2010 rubygem-ovirt_provision_plugin 2.0.3-3.el8sat RHSA-2024:2010 rubygem-parallel 1.23.0-1.el8sat RHSA-2024:2010 rubygem-pg 1.5.4-1.el8sat RHSA-2024:2010 rubygem-polyglot 0.3.5-3.1.el8sat RHSA-2024:2010 rubygem-powerbar 2.0.1-3.el8sat RHSA-2024:2010 rubygem-prometheus-client 4.2.2-1.el8sat RHSA-2024:2010 rubygem-promise.rb 0.7.4-3.el8sat RHSA-2024:2010 rubygem-public_suffix 5.0.3-1.el8sat RHSA-2024:2010 rubygem-pulp_ansible_client 0.20.2-1.el8sat RHSA-2024:2010 rubygem-pulp_certguard_client 1.6.4-1.el8sat RHSA-2024:2010 rubygem-pulp_container_client 2.16.3-1.el8sat RHSA-2024:2010 rubygem-pulp_deb_client 3.0.0-1.el8sat RHSA-2024:2010 rubygem-pulp_file_client 1.15.1-1.el8sat RHSA-2024:2010 rubygem-pulp_ostree_client 2.1.3-1.el8sat RHSA-2024:2010 rubygem-pulp_python_client 3.10.0-1.el8sat RHSA-2024:2010 rubygem-pulp_rpm_client 3.23.0-1.el8sat RHSA-2024:2010 rubygem-pulpcore_client 3.39.2-1.el8sat RHSA-2024:2010 rubygem-puma 6.4.2-1.el8sat RHSA-2024:2010 rubygem-puma-status 1.6-1.el8sat RHSA-2024:2010 rubygem-raabro 1.4.0-1.el8sat RHSA-2024:2010 rubygem-rabl 0.16.1-1.el8sat RHSA-2024:2010 rubygem-rack 2.2.8-1.el8sat RHSA-2024:2010 rubygem-rack-cors 1.1.1-1.el8sat RHSA-2024:2010 rubygem-rack-jsonp 1.3.1-10.el8sat RHSA-2024:2010 rubygem-rack-protection 2.2.4-1.el8sat RHSA-2024:2010 rubygem-rack-test 2.1.0-1.el8sat RHSA-2024:2010 rubygem-rails 6.1.7.6-1.el8sat RHSA-2024:2010 rubygem-rails-dom-testing 2.2.0-1.el8sat RHSA-2024:2010 rubygem-rails-html-sanitizer 1.6.0-1.el8sat RHSA-2024:2010 rubygem-rails-i18n 7.0.8-1.el8sat RHSA-2024:2010 rubygem-railties 6.1.7.6-1.el8sat RHSA-2024:2010 rubygem-rainbow 2.2.2-1.el8sat RHSA-2024:2010 rubygem-rb-inotify 0.10.1-1.el8sat RHSA-2024:2010 rubygem-rbnacl 4.0.2-2.el8sat RHSA-2024:2010 rubygem-rbvmomi2 3.7.0-1.el8sat RHSA-2024:2010 rubygem-rchardet 1.8.0-1.el8sat RHSA-2024:2010 rubygem-recursive-open-struct 1.1.3-1.el8sat RHSA-2024:2010 rubygem-redfish_client 0.5.4-1.el8sat RHSA-2024:2010 rubygem-redis 4.5.1-1.el8sat RHSA-2024:2010 rubygem-representable 3.2.0-1.el8sat RHSA-2024:2010 rubygem-request_store 1.5.1-1.el8sat RHSA-2024:2010 rubygem-responders 3.1.1-1.el8sat RHSA-2024:2010 rubygem-rest-client 2.1.0-1.el8sat RHSA-2024:2010 rubygem-retriable 3.1.2-3.el8sat RHSA-2024:2010 rubygem-rkerberos 0.1.5-20.1.el8sat RHSA-2024:2010 rubygem-roadie 5.1.0-1.el8sat RHSA-2024:2010 rubygem-roadie-rails 3.1.0-1.el8sat RHSA-2024:2010 rubygem-robotex 1.0.0-22.el8sat RHSA-2024:2010 rubygem-rsec 
0.4.3-5.el8sat RHSA-2024:2010 rubygem-ruby-libvirt 0.8.0-1.el8sat RHSA-2024:2010 rubygem-ruby2_keywords 0.0.5-1.el8sat RHSA-2024:2010 rubygem-ruby2ruby 2.5.0-1.el8sat RHSA-2024:2010 rubygem-ruby_parser 3.20.3-1.el8sat RHSA-2024:2010 rubygem-rubyipmi 0.11.1-1.el8sat RHSA-2024:2010 rubygem-safemode 1.3.8-1.el8sat RHSA-2024:2010 rubygem-scoped_search 4.1.12-1.el8sat RHSA-2024:2010 rubygem-sd_notify 0.1.1-1.el8sat RHSA-2024:2010 rubygem-secure_headers 6.5.0-1.el8sat RHSA-2024:2010 rubygem-sequel 5.74.0-1.el8sat RHSA-2024:2010 rubygem-server_sent_events 0.1.3-1.el8sat RHSA-2024:2010 rubygem-sexp_processor 4.17.0-1.el8sat RHSA-2024:2010 rubygem-sidekiq 6.5.12-1.el8sat RHSA-2024:2010 rubygem-signet 0.17.0-1.el8sat RHSA-2024:2010 rubygem-sinatra 2.2.4-1.el8sat RHSA-2024:2010 rubygem-smart_proxy_ansible 3.5.5-1.el8sat RHSA-2024:2010 rubygem-smart_proxy_container_gateway 1.1.0-1.el8sat RHSA-2024:2010 rubygem-smart_proxy_dhcp_infoblox 0.0.17-1.el8sat RHSA-2024:2010 rubygem-smart_proxy_dhcp_remote_isc 0.0.5-6.el8sat RHSA-2024:2010 rubygem-smart_proxy_discovery 1.0.5-9.el8sat RHSA-2024:2010 rubygem-smart_proxy_discovery_image 1.6.0-2.el8sat RHSA-2024:2010 rubygem-smart_proxy_dns_infoblox 1.1.0-7.el8sat RHSA-2024:2010 rubygem-smart_proxy_dynflow 0.9.1-1.el8sat RHSA-2024:2010 rubygem-smart_proxy_dynflow_core 0.4.1-1.el8sat RHSA-2024:2010 rubygem-smart_proxy_openscap 0.9.2-1.el8sat RHSA-2024:2010 rubygem-smart_proxy_pulp 3.2.0-3.el8sat RHSA-2024:2010 rubygem-smart_proxy_remote_execution_ssh 0.10.3-1.el8sat RHSA-2024:2010 rubygem-smart_proxy_shellhooks 0.9.2-3.el8sat RHSA-2024:2010 rubygem-snaky_hash 2.0.1-1.el8sat RHSA-2024:2010 rubygem-sprockets 4.2.1-1.el8sat RHSA-2024:2010 rubygem-sprockets-rails 3.4.2-1.el8sat RHSA-2024:2010 rubygem-sqlite3 1.4.2-1.el8sat RHSA-2024:2010 rubygem-sshkey 2.0.0-1.el8sat RHSA-2024:2010 rubygem-statsd-instrument 2.9.2-1.el8sat RHSA-2024:2010 rubygem-stomp 1.4.10-1.el8sat RHSA-2024:2010 rubygem-thor 1.2.2-1.el8sat RHSA-2024:2010 rubygem-tilt 2.3.0-1.el8sat RHSA-2024:2010 rubygem-timeliness 0.3.10-2.el8sat RHSA-2024:2010 rubygem-trailblazer-option 0.1.2-1.el8sat RHSA-2024:2010 rubygem-tzinfo 2.0.6-1.el8sat RHSA-2024:2010 rubygem-uber 0.1.0-3.el8sat RHSA-2024:2010 rubygem-unicode-display_width 2.4.2-1.el8sat RHSA-2024:2010 rubygem-validates_lengths_from_database 0.8.0-1.el8sat RHSA-2024:2010 rubygem-version_gem 1.1.3-1.el8sat RHSA-2024:2010 rubygem-webpack-rails 0.9.11-1.el8sat RHSA-2024:2010 rubygem-webrick 1.8.1-1.el8sat RHSA-2024:2010 rubygem-websocket-driver 0.7.6-1.el8sat RHSA-2024:2010 rubygem-websocket-extensions 0.1.5-2.el8sat RHSA-2024:2010 rubygem-will_paginate 3.3.1-1.el8sat RHSA-2024:2010 rubygem-xmlrpc 0.3.3-1.el8sat RHSA-2024:2010 rubygem-zeitwerk 2.6.12-1.el8sat RHSA-2024:2010 satellite 6.15.0-2.el8sat RHSA-2024:2010 satellite-cli 6.15.0-2.el8sat RHSA-2024:2010 satellite-common 6.15.0-2.el8sat RHSA-2024:2010 satellite-convert2rhel-toolkit 1.0.1-1.el8sat RHSA-2024:2010 satellite-installer 6.15.0.2-1.el8sat RHSA-2024:2010 satellite-lifecycle 0.0.1-1 RHSA-2024:2010 satellite-maintain 0.0.2-1.el8sat RHSA-2024:2010 yggdrasil-worker-forwarder 0.0.3-1.el8sat RHSA-2024:2010 2.2. Red Hat Satellite Capsule 6.15 for RHEL 8 x86_64 (RPMs) The following table outlines the packages included in the satellite-capsule-6.15-for-rhel-8-x86_64-rpms repository. Table 2.2. 
Red Hat Satellite Capsule 6.15 for RHEL 8 x86_64 (RPMs) Name Version Advisory ansible-collection-redhat-satellite 4.0.0-2.el8sat RHSA-2024:2010 ansible-collection-redhat-satellite_operations 2.1.0-1.el8sat RHSA-2024:2010 ansible-lint 5.4.0-1.el8pc RHSA-2024:2010 ansible-runner 2.2.1-5.1.el8pc RHSA-2024:2010 ansiblerole-foreman_scap_client 0.2.0-2.el8sat RHSA-2024:2010 ansiblerole-insights-client 1.7.1-2.el8sat RHSA-2024:2010 cjson 1.7.14-5.el8sat RHSA-2024:2010 createrepo_c 1.0.2-5.el8pc RHSA-2024:2010 createrepo_c-libs 1.0.2-5.el8pc RHSA-2024:2010 dynflow-utils 1.6.3-1.el8sat RHSA-2024:2010 foreman-bootloaders-redhat 202102220000-1.el8sat RHSA-2024:2010 foreman-bootloaders-redhat-tftpboot 202102220000-1.el8sat RHSA-2024:2010 foreman-debug 3.9.1.6-1.el8sat RHSA-2024:2010 foreman-discovery-image 4.1.0-31.el8sat RHSA-2024:2010 foreman-discovery-image-service 1.0.0-4.1.el8sat RHSA-2024:2010 foreman-discovery-image-service-tui 1.0.0-4.1.el8sat RHSA-2024:2010 foreman-installer 3.9.0-3.el8sat RHSA-2024:2010 foreman-installer-katello 3.9.0-3.el8sat RHSA-2024:2010 foreman-proxy 3.9.0-1.el8sat RHSA-2024:2010 foreman-proxy-content 4.11.0-1.el8sat RHSA-2024:2010 foreman-proxy-fapolicyd 1.0.1-2.el8sat RHSA-2024:2010 foreman-proxy-journald 3.9.0-1.el8sat RHSA-2024:2010 katello-certs-tools 2.9.0-2.el8sat RHSA-2024:2010 katello-client-bootstrap 1.7.9-1.el8sat RHSA-2024:2010 katello-common 4.11.0-1.el8sat RHSA-2024:2010 katello-debug 4.11.0-1.el8sat RHSA-2024:2010 libcomps 0.1.18-8.el8pc RHSA-2024:2010 libsodium 1.0.17-3.el8sat RHSA-2024:2010 libsolv 0.7.22-6.el8pc RHSA-2024:2010 mosquitto 2.0.17-1.el8sat RHSA-2024:2010 pulpcore-obsolete-packages 1.0-9.el8pc RHSA-2024:2010 pulpcore-selinux 2.0.1-1.el8pc RHSA-2024:2010 puppet-agent 7.28.0-1.el8sat RHSA-2024:2010 puppet-agent-oauth 0.5.10-1.el8sat RHSA-2024:2010 puppet-foreman_scap_client 0.4.0-1.el8sat RHSA-2024:2010 puppetlabs-stdlib 9.4.1-1.el8sat RHSA-2024:2010 puppetserver 7.14.0-1.el8sat RHSA-2024:2010 python3-createrepo_c 1.0.2-5.el8pc RHSA-2024:2010 python3-libcomps 0.1.18-8.el8pc RHSA-2024:2010 python3-solv 0.7.22-6.el8pc RHSA-2024:2010 python3.11-aiodns 3.0.0-6.el8pc RHSA-2024:2010 python3.11-aiofiles 22.1.0-4.el8pc RHSA-2024:2010 python3.11-aiohttp 3.9.2-1.el8pc RHSA-2024:2010 python3.11-aiohttp-xmlrpc 1.5.0-5.el8pc RHSA-2024:2010 python3.11-aioredis 2.0.1-5.el8pc RHSA-2024:2010 python3.11-aiosignal 1.3.1-4.el8pc RHSA-2024:2010 python3.11-ansible-builder 3.0.0-1.el8pc RHSA-2024:2010 python3.11-ansible-runner 2.2.1-5.1.el8pc RHSA-2024:2010 python3.11-asgiref 3.6.0-4.el8pc RHSA-2024:2010 python3.11-async-lru 1.0.3-4.el8pc RHSA-2024:2010 python3.11-async-timeout 4.0.2-5.el8pc RHSA-2024:2010 python3.11-asyncio-throttle 1.0.2-6.el8pc RHSA-2024:2010 python3.11-attrs 21.4.0-5.el8pc RHSA-2024:2010 python3.11-backoff 2.2.1-4.el8pc RHSA-2024:2010 python3.11-bindep 2.11.0-4.el8pc RHSA-2024:2010 python3.11-bleach 3.3.1-5.el8pc RHSA-2024:2010 python3.11-bleach-allowlist 1.0.3-6.el8pc RHSA-2024:2010 python3.11-bracex 2.2.1-4.el8pc RHSA-2024:2010 python3.11-brotli 1.0.9-4.el8pc RHSA-2024:2010 python3.11-certifi 2022.12.7-3.el8pc RHSA-2024:2010 python3.11-cffi 1.15.1-4.el8pc RHSA-2024:2010 python3.11-charset-normalizer 2.1.1-4.el8pc RHSA-2024:2010 python3.11-click 8.1.3-4.el8pc RHSA-2024:2010 python3.11-click-shell 2.1-6.el8pc RHSA-2024:2010 python3.11-colorama 0.4.4-6.el8pc RHSA-2024:2010 python3.11-commonmark 0.9.1-8.el8pc RHSA-2024:2010 python3.11-contextlib2 21.6.0-6.el8pc RHSA-2024:2010 python3.11-createrepo_c 1.0.2-5.el8pc RHSA-2024:2010 
python3.11-cryptography 41.0.6-1.el8pc RHSA-2024:2010 python3.11-daemon 2.3.1-4.3.el8pc RHSA-2024:2010 python3.11-dataclasses 0.8-6.el8pc RHSA-2024:2010 python3.11-dateutil 2.8.2-5.el8pc RHSA-2024:2010 python3.11-debian 0.1.44-6.el8pc RHSA-2024:2010 python3.11-defusedxml 0.7.1-6.el8pc RHSA-2024:2010 python3.11-deprecated 1.2.13-4.el8pc RHSA-2024:2010 python3.11-diff-match-patch 20200713-6.el8pc RHSA-2024:2010 python3.11-distro 1.7.0-3.el8pc RHSA-2024:2010 python3.11-django 4.2.9-1.el8pc RHSA-2024:2010 python3.11-django-filter 23.2-3.el8pc RHSA-2024:2010 python3.11-django-guid 3.3.0-4.el8pc RHSA-2024:2010 python3.11-django-import-export 3.1.0-3.el8pc RHSA-2024:2010 python3.11-django-lifecycle 1.0.0-3.el8pc RHSA-2024:2010 python3.11-django-readonly-field 1.1.2-3.el8pc RHSA-2024:2010 python3.11-djangorestframework 3.14.0-3.el8pc RHSA-2024:2010 python3.11-djangorestframework-queryfields 1.0.0-7.el8pc RHSA-2024:2010 python3.11-docutils 0.20.1-3.el8pc RHSA-2024:2010 python3.11-drf-access-policy 1.3.0-3.el8pc RHSA-2024:2010 python3.11-drf-nested-routers 0.93.4-5.el8pc RHSA-2024:2010 python3.11-drf-spectacular 0.26.5-4.el8pc RHSA-2024:2010 python3.11-dynaconf 3.1.12-3.el8pc RHSA-2024:2010 python3.11-ecdsa 0.18.0-4.el8pc RHSA-2024:2010 python3.11-enrich 1.2.6-7.el8pc RHSA-2024:2010 python3.11-et-xmlfile 1.1.0-5.el8pc RHSA-2024:2010 python3.11-flake8 5.0.0-2.el8pc RHSA-2024:2010 python3.11-frozenlist 1.3.3-4.el8pc RHSA-2024:2010 python3.11-future 0.18.3-4.el8pc RHSA-2024:2010 python3.11-galaxy-importer 0.4.19-2.el8pc RHSA-2024:2010 python3.11-gitdb 4.0.10-4.el8pc RHSA-2024:2010 python3.11-gitpython 3.1.40-2.el8pc RHSA-2024:2010 python3.11-gnupg 0.5.0-4.el8pc RHSA-2024:2010 python3.11-googleapis-common-protos 1.59.1-4.el8pc RHSA-2024:2010 python3.11-grpcio 1.56.0-4.el8pc RHSA-2024:2010 python3.11-gunicorn 20.1.0-7.el8pc RHSA-2024:2010 python3.11-importlib-metadata 6.0.1-3.el8pc RHSA-2024:2010 python3.11-inflection 0.5.1-6.el8pc RHSA-2024:2010 python3.11-iniparse 0.4-39.el8pc RHSA-2024:2010 python3.11-jinja2 3.1.3-1.el8pc RHSA-2024:2010 python3.11-jq 1.6.0-3.el8pc RHSA-2024:2010 python3.11-json_stream 2.3.2-4.el8pc RHSA-2024:2010 python3.11-json_stream_rs_tokenizer 0.4.25-3.el8pc RHSA-2024:2010 python3.11-jsonschema 4.10.3-3.el8pc RHSA-2024:2010 python3.11-libcomps 0.1.18-8.el8pc RHSA-2024:2010 python3.11-lockfile 0.12.2-4.el8pc RHSA-2024:2010 python3.11-lxml 4.9.2-4.el8pc RHSA-2024:2010 python3.11-markdown 3.4.1-3.el8pc RHSA-2024:2010 python3.11-markuppy 1.14-6.el8pc RHSA-2024:2010 python3.11-markupsafe 2.1.2-4.el8pc RHSA-2024:2010 python3.11-mccabe 0.7.0-3.el8pc RHSA-2024:2010 python3.11-multidict 6.0.4-4.el8pc RHSA-2024:2010 python3.11-odfpy 1.4.1-9.el8pc RHSA-2024:2010 python3.11-openpyxl 3.1.0-4.el8pc RHSA-2024:2010 python3.11-opentelemetry_api 1.19.0-3.el8pc RHSA-2024:2010 python3.11-opentelemetry_distro 0.40b0-7.el8pc RHSA-2024:2010 python3.11-opentelemetry_distro_otlp 0.40b0-7.el8pc RHSA-2024:2010 python3.11-opentelemetry_exporter_otlp 1.19.0-4.el8pc RHSA-2024:2010 python3.11-opentelemetry_exporter_otlp_proto_common 1.19.0-3.el8pc RHSA-2024:2010 python3.11-opentelemetry_exporter_otlp_proto_grpc 1.19.0-5.el8pc RHSA-2024:2010 python3.11-opentelemetry_exporter_otlp_proto_http 1.19.0-5.el8pc RHSA-2024:2010 python3.11-opentelemetry_instrumentation 0.40b0-5.el8pc RHSA-2024:2010 python3.11-opentelemetry_instrumentation_django 0.40b0-4.el8pc RHSA-2024:2010 python3.11-opentelemetry_instrumentation_wsgi 0.40b0-4.el8pc RHSA-2024:2010 python3.11-opentelemetry_proto 1.19.0-4.el8pc RHSA-2024:2010 
python3.11-opentelemetry_sdk 1.19.0-4.el8pc RHSA-2024:2010 python3.11-opentelemetry_semantic_conventions 0.40b0-3.el8pc RHSA-2024:2010 python3.11-opentelemetry_util_http 0.40b0-3.el8pc RHSA-2024:2010 python3.11-packaging 21.3-5.el8pc RHSA-2024:2010 python3.11-parsley 1.3-5.el8pc RHSA-2024:2010 python3.11-pbr 5.8.0-6.el8pc RHSA-2024:2010 python3.11-pexpect 4.8.0-5.el8pc RHSA-2024:2010 python3.11-pillow 9.5.0-4.el8pc RHSA-2024:2010 python3.11-productmd 1.33-5.el8pc RHSA-2024:2010 python3.11-protobuf 4.21.6-4.el8pc RHSA-2024:2010 python3.11-psycopg 3.1.9-3.el8pc RHSA-2024:2010 python3.11-ptyprocess 0.7.0-3.el8pc RHSA-2024:2010 python3.11-pulp-ansible 0.20.2-3.el8pc RHSA-2024:2010 python3.11-pulp-certguard 1.7.1-2.el8pc RHSA-2024:2010 python3.11-pulp-cli 0.21.2-5.el8pc RHSA-2024:2010 python3.11-pulp-container 2.16.4-1.el8pc RHSA-2024:2010 python3.11-pulp-deb 3.0.1-1.el8pc RHSA-2024:2010 python3.11-pulp-file 1.15.1-2.el8pc RHSA-2024:2010 python3.11-pulp-glue 0.21.2-3.el8pc RHSA-2024:2010 python3.11-pulp-rpm 3.23.3-1.el8pc RHSA-2024:2010 python3.11-pulp_manifest 3.0.0-4.el8pc RHSA-2024:2010 python3.11-pulpcore 3.39.8-2.el8pc RHSA-2024:2010 python3.11-pyOpenSSL 23.3.0-1.el8pc RHSA-2024:2010 python3.11-pycares 4.1.2-4.el8pc RHSA-2024:2010 python3.11-pycodestyle 2.9.1-2.el8pc RHSA-2024:2010 python3.11-pycparser 2.21-5.el8pc RHSA-2024:2010 python3.11-pycryptodomex 3.20.0-1.el8pc RHSA-2024:2010 python3.11-pyflakes 2.5.0-2.el8pc RHSA-2024:2010 python3.11-pygments 2.17.0-1.el8pc RHSA-2024:2010 python3.11-pygtrie 2.5.0-4.el8pc RHSA-2024:2010 python3.11-pyjwkest 1.4.2-8.el8pc RHSA-2024:2010 python3.11-pyjwt 2.5.0-4.el8pc RHSA-2024:2010 python3.11-pyparsing 3.1.1-3.el8pc RHSA-2024:2010 python3.11-pyrsistent 0.18.1-5.el8pc RHSA-2024:2010 python3.11-pytz 2022.2.1-5.el8pc RHSA-2024:2010 python3.11-redis 4.3.4-4.el8pc RHSA-2024:2010 python3.11-requests 2.31.0-4.el8pc RHSA-2024:2010 python3.11-requirements-parser 0.2.0-6.el8pc RHSA-2024:2010 python3.11-rhsm 1.19.2-6.el8pc RHSA-2024:2010 python3.11-rich 13.3.1-7.el8pc RHSA-2024:2010 python3.11-ruamel-yaml 0.17.21-5.el8pc RHSA-2024:2010 python3.11-ruamel-yaml-clib 0.2.7-4.el8pc RHSA-2024:2010 python3.11-schema 0.7.5-5.el8pc RHSA-2024:2010 python3.11-semantic-version 2.10.0-4.el8pc RHSA-2024:2010 python3.11-six 1.16.0-5.el8pc RHSA-2024:2010 python3.11-smmap 5.0.0-5.el8pc RHSA-2024:2010 python3.11-solv 0.7.22-6.el8pc RHSA-2024:2010 python3.11-sqlparse 0.4.4-3.el8pc RHSA-2024:2010 python3.11-tablib 3.3.0-4.el8pc RHSA-2024:2010 python3.11-tenacity 7.0.0-5.el8pc RHSA-2024:2010 python3.11-toml 0.10.2-5.el8pc RHSA-2024:2010 python3.11-types-cryptography 3.3.23.2-3.el8pc RHSA-2024:2010 python3.11-typing-extensions 4.7.1-4.el8pc RHSA-2024:2010 python3.11-uritemplate 4.1.1-4.el8pc RHSA-2024:2010 python3.11-url-normalize 1.4.3-6.el8pc RHSA-2024:2010 python3.11-urllib3 1.26.18-2.el8pc RHSA-2024:2010 python3.11-urlman 2.0.1-3.el8pc RHSA-2024:2010 python3.11-uuid6 2023.5.2-4.el8pc RHSA-2024:2010 python3.11-wcmatch 8.3-5.el8pc RHSA-2024:2010 python3.11-webencodings 0.5.1-6.el8pc RHSA-2024:2010 python3.11-whitenoise 6.0.0-4.el8pc RHSA-2024:2010 python3.11-wrapt 1.14.1-4.el8pc RHSA-2024:2010 python3.11-xlrd 2.0.1-8.el8pc RHSA-2024:2010 python3.11-xlwt 1.3.0-6.el8pc RHSA-2024:2010 python3.11-yarl 1.8.2-4.el8pc RHSA-2024:2010 python3.11-zipp 3.4.0-7.el8pc RHSA-2024:2010 rubygem-activesupport 6.1.7.6-1.el8sat RHSA-2024:2010 rubygem-algebrick 0.7.5-1.el8sat RHSA-2024:2010 rubygem-ansi 1.5.0-3.el8sat RHSA-2024:2010 rubygem-apipie-params 0.0.5-5.1.el8sat RHSA-2024:2010 
rubygem-bundler_ext 0.4.1-6.el8sat RHSA-2024:2010 rubygem-clamp 1.3.2-1.el8sat RHSA-2024:2010 rubygem-concurrent-ruby 1.1.10-1.el8sat RHSA-2024:2010 rubygem-concurrent-ruby-edge 0.6.0-3.el8sat RHSA-2024:2010 rubygem-domain_name 0.6.20231109-1.el8sat RHSA-2024:2010 rubygem-dynflow 1.8.2-1.el8sat RHSA-2024:2010 rubygem-excon 0.104.0-1.el8sat RHSA-2024:2010 rubygem-faraday 1.10.2-1.el8sat RHSA-2024:2010 rubygem-faraday-em_http 1.0.0-1.el8sat RHSA-2024:2010 rubygem-faraday-em_synchrony 1.0.0-1.el8sat RHSA-2024:2010 rubygem-faraday-excon 1.1.0-1.el8sat RHSA-2024:2010 rubygem-faraday-httpclient 1.0.1-1.el8sat RHSA-2024:2010 rubygem-faraday-multipart 1.0.4-1.el8sat RHSA-2024:2010 rubygem-faraday-net_http 1.0.1-1.el8sat RHSA-2024:2010 rubygem-faraday-net_http_persistent 1.2.0-1.el8sat RHSA-2024:2010 rubygem-faraday-patron 1.0.0-1.el8sat RHSA-2024:2010 rubygem-faraday-rack 1.0.0-1.el8sat RHSA-2024:2010 rubygem-faraday-retry 1.0.3-1.el8sat RHSA-2024:2010 rubygem-faraday_middleware 1.2.0-1.el8sat RHSA-2024:2010 rubygem-fast_gettext 1.8.0-1.el8sat RHSA-2024:2010 rubygem-ffi 1.16.3-1.el8sat RHSA-2024:2010 rubygem-foreman_maintain 1.4.4-1.el8sat RHSA-2024:2010 rubygem-gssapi 1.3.1-1.el8sat RHSA-2024:2010 rubygem-hashie 5.0.0-1.el8sat RHSA-2024:2010 rubygem-highline 2.1.0-1.el8sat RHSA-2024:2010 rubygem-http-accept 1.7.0-1.el8sat RHSA-2024:2010 rubygem-http-cookie 1.0.5-1.el8sat RHSA-2024:2010 rubygem-i18n 1.14.1-1.el8sat RHSA-2024:2010 rubygem-infoblox 3.0.0-4.el8sat RHSA-2024:2010 rubygem-journald-logger 3.1.0-1.el8sat RHSA-2024:2010 rubygem-journald-native 1.0.12-1.el8sat RHSA-2024:2010 rubygem-jwt 2.7.1-1.el8sat RHSA-2024:2010 rubygem-kafo 7.3.0-1.el8sat RHSA-2024:2010 rubygem-kafo_parsers 1.2.1-1.el8sat RHSA-2024:2010 rubygem-kafo_wizards 0.0.2-2.el8sat RHSA-2024:2010 rubygem-little-plugger 1.1.4-3.el8sat RHSA-2024:2010 rubygem-logging 2.3.1-1.el8sat RHSA-2024:2010 rubygem-logging-journald 2.1.0-1.el8sat RHSA-2024:2010 rubygem-mime-types 3.5.1-1.el8sat RHSA-2024:2010 rubygem-mime-types-data 3.2023.1003-1.el8sat RHSA-2024:2010 rubygem-mqtt 0.5.0-1.el8sat RHSA-2024:2010 rubygem-msgpack 1.7.2-1.el8sat RHSA-2024:2010 rubygem-multi_json 1.15.0-1.el8sat RHSA-2024:2010 rubygem-multipart-post 2.2.3-1.el8sat RHSA-2024:2010 rubygem-mustermann 2.0.2-1.el8sat RHSA-2024:2010 rubygem-net-ssh 7.2.0-1.el8sat RHSA-2024:2010 rubygem-net-ssh-krb 0.4.0-4.el8sat RHSA-2024:2010 rubygem-netrc 0.11.0-6.el8sat RHSA-2024:2010 rubygem-newt 0.9.7-3.1.el8sat RHSA-2024:2010 rubygem-nokogiri 1.15.5-1.el8sat RHSA-2024:2010 rubygem-oauth 1.1.0-1.el8sat RHSA-2024:2010 rubygem-oauth-tty 1.0.5-1.el8sat RHSA-2024:2010 rubygem-openscap 0.4.9-9.el8sat RHSA-2024:2010 rubygem-openscap_parser 1.0.2-2.el8sat RHSA-2024:2010 rubygem-powerbar 2.0.1-3.el8sat RHSA-2024:2010 rubygem-rack 2.2.8-1.el8sat RHSA-2024:2010 rubygem-rack-protection 2.2.4-1.el8sat RHSA-2024:2010 rubygem-rb-inotify 0.10.1-1.el8sat RHSA-2024:2010 rubygem-rbnacl 4.0.2-2.el8sat RHSA-2024:2010 rubygem-redfish_client 0.5.4-1.el8sat RHSA-2024:2010 rubygem-rest-client 2.1.0-1.el8sat RHSA-2024:2010 rubygem-rkerberos 0.1.5-20.1.el8sat RHSA-2024:2010 rubygem-rsec 0.4.3-5.el8sat RHSA-2024:2010 rubygem-ruby-libvirt 0.8.0-1.el8sat RHSA-2024:2010 rubygem-ruby2_keywords 0.0.5-1.el8sat RHSA-2024:2010 rubygem-rubyipmi 0.11.1-1.el8sat RHSA-2024:2010 rubygem-sd_notify 0.1.1-1.el8sat RHSA-2024:2010 rubygem-sequel 5.74.0-1.el8sat RHSA-2024:2010 rubygem-server_sent_events 0.1.3-1.el8sat RHSA-2024:2010 rubygem-sinatra 2.2.4-1.el8sat RHSA-2024:2010 rubygem-smart_proxy_ansible 3.5.5-1.el8sat 
RHSA-2024:2010 rubygem-smart_proxy_container_gateway 1.1.0-1.el8sat RHSA-2024:2010 rubygem-smart_proxy_dhcp_infoblox 0.0.17-1.el8sat RHSA-2024:2010 rubygem-smart_proxy_dhcp_remote_isc 0.0.5-6.el8sat RHSA-2024:2010 rubygem-smart_proxy_discovery 1.0.5-9.el8sat RHSA-2024:2010 rubygem-smart_proxy_discovery_image 1.6.0-2.el8sat RHSA-2024:2010 rubygem-smart_proxy_dns_infoblox 1.1.0-7.el8sat RHSA-2024:2010 rubygem-smart_proxy_dynflow 0.9.1-1.el8sat RHSA-2024:2010 rubygem-smart_proxy_dynflow_core 0.4.1-1.el8sat RHSA-2024:2010 rubygem-smart_proxy_openscap 0.9.2-1.el8sat RHSA-2024:2010 rubygem-smart_proxy_pulp 3.2.0-3.el8sat RHSA-2024:2010 rubygem-smart_proxy_remote_execution_ssh 0.10.3-1.el8sat RHSA-2024:2010 rubygem-smart_proxy_shellhooks 0.9.2-3.el8sat RHSA-2024:2010 rubygem-snaky_hash 2.0.1-1.el8sat RHSA-2024:2010 rubygem-sqlite3 1.4.2-1.el8sat RHSA-2024:2010 rubygem-statsd-instrument 2.9.2-1.el8sat RHSA-2024:2010 rubygem-tilt 2.3.0-1.el8sat RHSA-2024:2010 rubygem-tzinfo 2.0.6-1.el8sat RHSA-2024:2010 rubygem-version_gem 1.1.3-1.el8sat RHSA-2024:2010 rubygem-webrick 1.8.1-1.el8sat RHSA-2024:2010 rubygem-xmlrpc 0.3.3-1.el8sat RHSA-2024:2010 rubygem-zeitwerk 2.6.12-1.el8sat RHSA-2024:2010 satellite-capsule 6.15.0-2.el8sat RHSA-2024:2010 satellite-common 6.15.0-2.el8sat RHSA-2024:2010 satellite-installer 6.15.0.2-1.el8sat RHSA-2024:2010 satellite-maintain 0.0.2-1.el8sat RHSA-2024:2010 2.3. Red Hat Satellite Maintenance 6.15 for RHEL 8 x86_64 (RPMs) The following table outlines the packages included in the satellite-maintenance-6.15-for-rhel-8-x86_64-rpms repository. Table 2.3. Red Hat Satellite Maintenance 6.15 for RHEL 8 x86_64 (RPMs) Name Version Advisory rubygem-clamp 1.3.2-1.el8sat RHSA-2024:2010 rubygem-foreman_maintain 1.4.4-1.el8sat RHSA-2024:2010 rubygem-highline 2.1.0-1.el8sat RHSA-2024:2010 satellite-clone 3.5.0-1.el8sat RHSA-2024:2010 satellite-maintain 0.0.2-1.el8sat RHSA-2024:2010 2.4. Red Hat Satellite Utils 6.15 for RHEL 8 x86_64 (RPMs) The following table outlines the packages included in the satellite-utils-6.15-for-rhel-8-x86_64-rpms repository. Table 2.4. 
Red Hat Satellite Utils 6.15 for RHEL 8 x86_64 (RPMs) Name Version Advisory foreman-cli 3.9.1.6-1.el8sat RHSA-2024:2010 rubygem-amazing_print 1.5.0-1.el8sat RHSA-2024:2010 rubygem-apipie-bindings 0.6.0-1.el8sat RHSA-2024:2010 rubygem-clamp 1.3.2-1.el8sat RHSA-2024:2010 rubygem-domain_name 0.6.20231109-1.el8sat RHSA-2024:2010 rubygem-fast_gettext 1.8.0-1.el8sat RHSA-2024:2010 rubygem-ffi 1.16.3-1.el8sat RHSA-2024:2010 rubygem-gssapi 1.3.1-1.el8sat RHSA-2024:2010 rubygem-hammer_cli 3.9.0-1.el8sat RHSA-2024:2010 rubygem-hammer_cli_foreman 3.9.0.1-1.el8sat RHSA-2024:2010 rubygem-hammer_cli_foreman_admin 1.2.0-1.el8sat RHSA-2024:2010 rubygem-hammer_cli_foreman_ansible 0.6.0-1.el8sat RHSA-2024:2010 rubygem-hammer_cli_foreman_azure_rm 0.3.0-1.el8sat RHSA-2024:2010 rubygem-hammer_cli_foreman_bootdisk 0.3.0-3.el8sat RHSA-2024:2010 rubygem-hammer_cli_foreman_discovery 1.2.0-1.el8sat RHSA-2024:2010 rubygem-hammer_cli_foreman_google 1.0.1-1.el8sat RHSA-2024:2010 rubygem-hammer_cli_foreman_openscap 0.2.0-1.el8sat RHSA-2024:2010 rubygem-hammer_cli_foreman_remote_execution 0.2.3-1.el8sat RHSA-2024:2010 rubygem-hammer_cli_foreman_tasks 0.0.19-1.el8sat RHSA-2024:2010 rubygem-hammer_cli_foreman_templates 0.3.0-1.el8sat RHSA-2024:2010 rubygem-hammer_cli_foreman_virt_who_configure 0.1.0-1.el8sat RHSA-2024:2010 rubygem-hammer_cli_foreman_webhooks 0.0.4-1.el8sat RHSA-2024:2010 rubygem-hammer_cli_katello 1.11.1.2-1.el8sat RHSA-2024:2010 rubygem-hashie 5.0.0-1.el8sat RHSA-2024:2010 rubygem-highline 2.1.0-1.el8sat RHSA-2024:2010 rubygem-http-accept 1.7.0-1.el8sat RHSA-2024:2010 rubygem-http-cookie 1.0.5-1.el8sat RHSA-2024:2010 rubygem-jwt 2.7.1-1.el8sat RHSA-2024:2010 rubygem-little-plugger 1.1.4-3.el8sat RHSA-2024:2010 rubygem-locale 2.1.3-1.el8sat RHSA-2024:2010 rubygem-logging 2.3.1-1.el8sat RHSA-2024:2010 rubygem-mime-types 3.5.1-1.el8sat RHSA-2024:2010 rubygem-mime-types-data 3.2023.1003-1.el8sat RHSA-2024:2010 rubygem-multi_json 1.15.0-1.el8sat RHSA-2024:2010 rubygem-netrc 0.11.0-6.el8sat RHSA-2024:2010 rubygem-oauth 1.1.0-1.el8sat RHSA-2024:2010 rubygem-oauth-tty 1.0.5-1.el8sat RHSA-2024:2010 rubygem-powerbar 2.0.1-3.el8sat RHSA-2024:2010 rubygem-rest-client 2.1.0-1.el8sat RHSA-2024:2010 rubygem-snaky_hash 2.0.1-1.el8sat RHSA-2024:2010 rubygem-unicode-display_width 2.4.2-1.el8sat RHSA-2024:2010 rubygem-version_gem 1.1.3-1.el8sat RHSA-2024:2010 satellite-cli 6.15.0-2.el8sat RHSA-2024:2010 2.5. Red Hat Satellite Client 6 for RHEL 8 x86_64 (RPMs) The following table outlines the packages included in the satellite-client-6-for-rhel-8-x86_64-rpms repository. Table 2.5. Red Hat Satellite Client 6 for RHEL 8 x86_64 (RPMs) Name Version Advisory gofer 2.12.5-7.el8sat RHBA-2022:96562 katello-agent 3.5.7-3.el8sat RHBA-2022:96562 katello-host-tools 3.5.7-3.el8sat RHBA-2022:96562 katello-host-tools-tracer 3.5.7-3.el8sat RHBA-2022:96562 puppet-agent 7.16.0-2.el8sat RHBA-2022:96562 python3-gofer 2.12.5-7.el8sat RHBA-2022:96562 python3-gofer-proton 2.12.5-7.el8sat RHBA-2022:96562 python3-qpid-proton 0.33.0-4.el8 RHBA-2022:96562 qpid-proton-c 0.33.0-4.el8 RHBA-2022:96562 rubygem-foreman_scap_client 0.5.0-1.el8sat RHBA-2022:96562 2.6. Red Hat Satellite Client 6 for RHEL 8 ppc64le (RPMs) The following table outlines the packages included in the satellite-client-6-for-rhel-8-ppc64le-rpms repository. Table 2.6. 
Red Hat Satellite Client 6 for RHEL 8 ppc64le (RPMs) Name Version Advisory gofer 2.12.5-7.el8sat RHBA-2022:96562 katello-agent 3.5.7-3.el8sat RHBA-2022:96562 katello-host-tools 3.5.7-3.el8sat RHBA-2022:96562 katello-host-tools-tracer 3.5.7-3.el8sat RHBA-2022:96562 python3-gofer 2.12.5-7.el8sat RHBA-2022:96562 python3-gofer-proton 2.12.5-7.el8sat RHBA-2022:96562 python3-qpid-proton 0.33.0-4.el8 RHBA-2022:96562 qpid-proton-c 0.33.0-4.el8 RHBA-2022:96562 rubygem-foreman_scap_client 0.5.0-1.el8sat RHBA-2022:96562 2.7. Red Hat Satellite Client 6 for RHEL 8 s390x (RPMs) The following table outlines the packages included in the satellite-client-6-for-rhel-8-s390x-rpms repository. Table 2.7. Red Hat Satellite Client 6 for RHEL 8 s390x (RPMs) Name Version Advisory gofer 2.12.5-7.el8sat RHBA-2022:96562 katello-agent 3.5.7-3.el8sat RHBA-2022:96562 katello-host-tools 3.5.7-3.el8sat RHBA-2022:96562 katello-host-tools-tracer 3.5.7-3.el8sat RHBA-2022:96562 python3-gofer 2.12.5-7.el8sat RHBA-2022:96562 python3-gofer-proton 2.12.5-7.el8sat RHBA-2022:96562 python3-qpid-proton 0.33.0-4.el8 RHBA-2022:96562 qpid-proton-c 0.33.0-4.el8 RHBA-2022:96562 rubygem-foreman_scap_client 0.5.0-1.el8sat RHBA-2022:96562 2.8. Red Hat Satellite Client 6 for RHEL 8 aarch64 (RPMs) The following table outlines the packages included in the satellite-client-6-for-rhel-8-aarch64-rpms repository. Table 2.8. Red Hat Satellite Client 6 for RHEL 8 aarch64 (RPMs) Name Version Advisory gofer 2.12.5-7.el8sat RHBA-2022:96562 katello-agent 3.5.7-3.el8sat RHBA-2022:96562 katello-host-tools 3.5.7-3.el8sat RHBA-2022:96562 katello-host-tools-tracer 3.5.7-3.el8sat RHBA-2022:96562 puppet-agent 7.16.0-2.el8sat RHBA-2022:96562 python3-gofer 2.12.5-7.el8sat RHBA-2022:96562 python3-gofer-proton 2.12.5-7.el8sat RHBA-2022:96562 python3-qpid-proton 0.33.0-4.el8 RHBA-2022:96562 qpid-proton-c 0.33.0-4.el8 RHBA-2022:96562 rubygem-foreman_scap_client 0.5.0-1.el8sat RHBA-2022:96562 | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/package_manifest/sat-6-15-rhel8 |
Preface | Preface This document describes the options available in the configuration files for each of the major services in Red Hat OpenStack Platform. The content is automatically generated based on the values in the configuration files themselves, and is provided for reference purposes only. Warning Manually editing configuration files is not supported. All configuration changes must be made through the Director. Red Hat provides this guide as a technical reference only. | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/configuration_reference/pr01 |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/block_storage_backup_guide/making-open-source-more-inclusive |
Windows Container Support for OpenShift | Windows Container Support for OpenShift OpenShift Container Platform 4.7 Red Hat OpenShift for Windows Containers Guide Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/windows_container_support_for_openshift/index |
Chapter 1. Introduction | Chapter 1. Introduction Red Hat Ceph Storage is a massively scalable, open, software-defined storage platform that combines the most stable version of the Ceph storage system with a Ceph management platform, deployment utilities, and support services. The Red Hat Ceph Storage documentation is available at https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/8 . | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/8.0_release_notes/introduction |
8.6. Detecting Tokens | 8.6. Detecting Tokens To see if a token can be detected by Certificate System, use the TokenInfo utility, pointing to the alias directory for the subsystem instance. This is a Certificate System tool which is available after the Certificate System packages are installed. For example: This utility returns all tokens which can be detected by the Certificate System, not only tokens which are installed in the Certificate System. | [
"TokenInfo /var/lib/pki/pki-tomcat/alias"
] | https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/detecting_tokens |
Deploying the Red Hat Quay Operator on OpenShift Container Platform | Deploying the Red Hat Quay Operator on OpenShift Container Platform Red Hat Quay 3.12 Deploying the Red Hat Quay Operator on OpenShift Container Platform Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/deploying_the_red_hat_quay_operator_on_openshift_container_platform/index |
Chapter 1. Migrating your IdM environment from RHEL 8 servers to RHEL 9 servers | Chapter 1. Migrating your IdM environment from RHEL 8 servers to RHEL 9 servers To upgrade a RHEL 8 IdM environment to RHEL 9, you must first add new RHEL 9 IdM replicas to your RHEL 8 IdM environment, and then retire the RHEL 8 servers. The migration involves moving all Identity Management (IdM) data and configuration from a Red Hat Enterprise Linux (RHEL) 8 server to a RHEL 9 server. Important Migrate all servers in an IdM deployment as quickly as possible. Mixing different IdM versions in the same deployment for extended periods of time can lead to incompatibilities or possibly even unrecoverable data corruption. Warning Performing an in-place upgrade of RHEL 8 IdM servers and IdM server nodes to RHEL 9 is not supported. For more information about adding a RHEL 9 IdM replica in FIPS mode to a RHEL 8 IdM deployment in FIPS mode, see the Identity Management section in Considerations in adopting RHEL 9 . After upgrading your IdM replica to RHEL 9.2, the IdM Kerberos Distribution Center (KDC) might fail to issue ticket-granting tickets (TGTs) to users who do not have Security Identifiers (SIDs) assigned to their accounts. Consequently, the users cannot log in to their accounts. To work around the problem, generate SIDs by running # ipa config-mod --enable-sid --add-sids as an IdM administrator on another IdM replica in the topology. Afterward, if users still cannot log in, examine the Directory Server error log. You might have to adjust ID ranges to include user POSIX identities. Migrating directly to RHEL 9 from RHEL 7 or earlier versions is not supported. To properly update your IdM data, you must perform incremental migrations. For example, to migrate a RHEL 7 IdM environment to RHEL 9: Migrate from RHEL 7 servers to RHEL 8 servers. See Migrating to Identity Management on RHEL 8 . Migrate from RHEL 8 servers to RHEL 9 servers, as described in this section. This section describes how to migrate all Identity Management (IdM) data and configuration from a Red Hat Enterprise Linux (RHEL) 8 server to a RHEL 9 server. The migration procedure includes: Configuring a RHEL 9 IdM server and adding it as a replica to your current RHEL 8 IdM environment. For details, see Installing the RHEL 9 Replica . Making the RHEL 9 server the certificate authority (CA) renewal server. For details, see Assigning the CA renewal server role to the RHEL 9 IdM server . Stopping the generation of the certificate revocation list (CRL) on the RHEL 8 server and redirecting CRL requests to the RHEL 9 replica. For details, see Stopping CRL generation on a RHEL 8 IdM CA server . Starting the generation of the CRL on the RHEL 9 server. For details, see Starting CRL generation on the new RHEL 9 IdM CA server . Stopping and decommissioning the original RHEL 8 CA renewal server. For details, see Stopping and decommissioning the RHEL 8 server . In the following procedures: rhel9.example.com is the RHEL 9 system that will become the new CA renewal server. rhel8.example.com is the original RHEL 8 CA renewal server. To identify which Red Hat Enterprise Linux 8 server is the CA renewal server, run the following command on any IdM server: If your IdM deployment does not use an IdM CA, any IdM server running on RHEL 8 can be rhel8.example.com . 
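A minimal sketch of that check, with the host names assumed throughout this example:
# Run on any IdM server in the existing RHEL 8 deployment to identify the current CA renewal server
ipa config-show | grep "CA renewal"
# In this example the output reports: IPA CA renewal master: rhel8.example.com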
Note Complete the steps in the following sections only if your IdM deployment uses an embedded certificate authority (CA): Assigning the CA renewal server role to the RHEL 9 IdM server Stopping CRL generation on a RHEL 8 IdM CA server Starting CRL generation on the new RHEL 9 IdM CA server 1.1. Prerequisites for migrating IdM from RHEL 8 to 9 On rhel8.example.com : Upgrade the system to the latest RHEL 8 version. Important If you are migrating to RHEL 9.0, do not update to a newer version than RHEL 8.6. Migrating from RHEL 8.7 is only supported for RHEL 9.1. Update the ipa- * packages to their latest version: Warning When upgrading multiple Identity Management (IdM) servers, wait at least 10 minutes between each upgrade. When two or more servers are upgraded simultaneously or with only short intervals between the upgrades, there is not enough time to replicate the post-upgrade data changes throughout the topology, which can result in conflicting replication events. On rhel9.example.com : The latest version of Red Hat Enterprise Linux is installed on the system. For more information, see Interactively installing RHEL from installation media . Ensure the system is an IdM client enrolled into the domain for which rhel8.example.com IdM server is authoritative. For more information, see Installing an IdM client: Basic scenario . Ensure the system meets the requirements for IdM server installation. See Preparing the system for IdM server installation . Ensure you know the time server rhel8.example.com is synchronized with: Ensure the system is authorized for the installation of an IdM replica. See Authorizing the installation of a replica on an IdM client . Update the ipa- * packages to their latest version: Additional resources To decide which server roles you want to install on the new IdM primary server, rhel9.example.com , see the following links: For details on the CA server role in IdM, see Planning your CA services . For details on the DNS server role in IdM, see Planning your DNS services and host names . For details on integration based on cross-forest trust between an IdM and Active Directory (AD), see Planning a cross-forest trust between IdM and AD . To be able to install specific server roles for IdM in RHEL 9, you need to download packages from specific IdM repositories: Installing packages required for an IdM server . To upgrade a system from RHEL 8 to RHEL 9, see Upgrading from RHEL 8 to RHEL 9 . 1.2. Installing the RHEL 9 replica List which server roles are present in your RHEL 8 environment: Optional: If you want to use the same per-server forwarders for rhel9.example.com that rhel8.example.com is using, view the per-server forwarders for rhel8.example.com : Review the replication agreements topology using the steps in either Viewing replication topology using the WebUI or Viewing topology suffixes using the CLI and Viewing topology segments using the CLI . Install the IdM server software on rhel9.example.com to configure it as a replica of the RHEL 8 IdM server, including all the server roles present on rhel8.example.com . 
To install the roles from the example above, use these options with the ipa-replica-install command: --setup-ca to set up the Certificate System component --setup-dns and --forwarder to configure an integrated DNS server and set a per-server forwarder to take care of DNS queries that go outside the IdM domain Note Additionally, if your IdM deployment is in a trust relationship with Active Directory (AD), add the --setup-adtrust option to the ipa-replica-install command to configure AD trust capability on rhel9.example.com . --ntp-server to specify an NTP server or --ntp-pool to specify a pool of NTP servers To set up an IdM server with the IP address of 192.0.2.1 that uses a per-server forwarder with the IP address of 192.0.2.20 and synchronizes with the ntp.example.com NTP server: You do not need to specify the RHEL 8 IdM server itself because if DNS is working correctly, rhel9.example.com will find it using DNS autodiscovery. Optional: Add an _ntp._udp service (SRV) record for your external NTP time server to the DNS of the newly-installed IdM server, rhel9.example.com . The presence of the SRV record for the time server in IdM DNS ensures that future RHEL 9 replica and client installations are automatically configured to synchronize with the time server used by rhel9.example.com . This is because ipa-client-install looks for the _ntp._udp DNS entry unless --ntp-server or --ntp-pool options are provided on the install command-line interface (CLI). Create any replication agreements needed to re-create the topology using the steps in Setting up replication between two servers using the Web UI or Setting up replication between two servers using the CLI . Verification Verify that the IdM services are running on rhel9.example.com : Verify that server roles for rhel9.example.com are the same as for rhel8.example.com : Optional: Display details about the replication agreement between rhel8.example.com and rhel9.example.com : Optional: If your IdM deployment is in a trust relationship with AD, verify that it is working: Verify the Kerberos configuration Attempt to resolve an AD user on rhel9.example.com : Verify that rhel9.example.com is synchronized with the NTP server: Additional resources DNS configuration priorities Time service requirements for IdM 1.3. Assigning the CA renewal server role to the RHEL 9 IdM server If your IdM deployment uses an embedded certificate authority (CA), assign the CA renewal server role to the Red Hat Enterprise Linux (RHEL) 9 IdM server. On rhel9.example.com , configure rhel9.example.com as the new CA renewal server: Configure rhel9.example.com to handle CA subsystem certificate renewal: The output confirms that the update was successful. On rhel9.example.com , enable the certificate updater task: Open the /etc/pki/pki-tomcat/ca/CS.cfg configuration file for editing. Remove the ca.certStatusUpdateInterval entry, or set it to the desired interval in seconds. The default value is 600 . Save and close the /etc/pki/pki-tomcat/ca/CS.cfg configuration file. Restart IdM services: On rhel8.example.com , disable the certificate updater task: Open the /etc/pki/pki-tomcat/ca/CS.cfg configuration file for editing. Change ca.certStatusUpdateInterval to 0 , or add the following entry if it does not exist: Save and close the /etc/pki/pki-tomcat/ca/CS.cfg configuration file. Restart IdM services: 1.4. 
Stopping CRL generation on a RHEL 8 IdM CA server If your IdM deployment uses an embedded certificate authority (CA), stop generating the Certificate Revocation List (CRL) on the IdM CRL publisher server. Prerequisites You must be logged in as root. Procedure Optional: Verify that rhel8.example.com is generating the CRL: Stop generating the CRL on the rhel8.example.com server: Optional: Check if the rhel8.example.com server stopped generating the CRL: The rhel8.example.com server stopped generating the CRL. The next step is to enable generating the CRL on rhel9.example.com. 1.5. Starting CRL generation on the new RHEL 9 IdM CA server If your IdM deployment uses an embedded certificate authority (CA), start Certificate Revocation List (CRL) generation on the new Red Hat Enterprise Linux (RHEL) 9 IdM CA server. Prerequisites You must be logged in as root on the rhel9.example.com machine. Procedure To start generating the CRL on rhel9.example.com, use the ipa-crlgen-manage enable command: Verification To check if CRL generation is enabled, use the ipa-crlgen-manage status command: 1.6. Stopping and decommissioning the RHEL 8 server Make sure that all data, including the latest changes, have been correctly migrated from rhel8.example.com to rhel9.example.com. For example: Add a new user on rhel8.example.com: Check that the user has been replicated to rhel9.example.com: Ensure that a Distributed Numeric Assignment (DNA) ID range is allocated to rhel9.example.com. Use one of the following methods: Activate the DNA plug-in on rhel9.example.com directly by creating another test user: Assign a specific DNA ID range to rhel9.example.com: On rhel8.example.com, display the IdM ID range: On rhel8.example.com, display the allocated DNA ID ranges: Reduce the DNA ID range allocated to rhel8.example.com so that a section becomes available to rhel9.example.com: Assign the remaining part of the IdM ID range to rhel9.example.com: Stop all IdM services on rhel8.example.com to force domain discovery to the new rhel9.example.com server. After this, the ipa utility will contact the new server through a remote procedure call (RPC). Remove the RHEL 8 server from the topology by executing the removal commands on the RHEL 9 server. For details, see Uninstalling an IdM server. | [
"ipa config-show | grep \"CA renewal\" IPA CA renewal master: rhel8.example.com",
"dnf update ipa- *",
"ntpstat synchronised to NTP server ( ntp.example.com ) at stratum 3 time correct to within 42 ms polling server every 1024 s",
"dnf update ipa- *",
"ipa server-role-find --status enabled --server rhel8.example.com ---------------------- 3 server roles matched ---------------------- Server name: rhel8.example.com Role name: CA server Role status: enabled Server name: rhel8.example.com Role name: DNS server Role status: enabled [... output truncated ...]",
"ipa dnsserver-show rhel8.example.com ----------------------------- 1 DNS server matched ----------------------------- Server name: rhel8.example.com SOA mname: rhel8.example.com. Forwarders: 192.0.2.20 Forward policy: only -------------------------------------------------- Number of entries returned 1 --------------------------------------------------",
"ipa-replica-install --setup-ca --ip-address 192.0.2.1 --setup-dns --forwarder 192.0.2.20 --ntp-server ntp.example.com",
"ipactl status Directory Service: RUNNING [... output truncated ...] ipa: INFO: The ipactl command was successful",
"kinit admin ipa server-role-find --status enabled --server rhel9.example.com ---------------------- 2 server roles matched ---------------------- Server name: rhel9.example.com Role name: CA server Role status: enabled Server name: rhel9.example.com Role name: DNS server Role status: enabled",
"ipa-csreplica-manage list --verbose rhel9.example.com Directory Manager password: rhel8.example.com last init status: None last init ended: 1970-01-01 00:00:00+00:00 last update status: Error (0) Replica acquired successfully: Incremental update succeeded last update ended: 2019-02-13 13:55:13+00:00",
"id [email protected]",
"chronyc tracking Reference ID : CB00710F ( ntp.example.com ) Stratum : 3 Ref time (UTC) : Wed Feb 16 09:49:17 2022 [... output truncated ...]",
"ipa config-mod --ca-renewal-master-server rhel9.example.com IPA masters: rhel8.example.com, rhel9.example.com IPA CA servers: rhel8.example.com, rhel9.example.com IPA CA renewal master: rhel9.example.com",
"[user@rhel9 ~]USD ipactl restart",
"ca.certStatusUpdateInterval=0",
"[user@rhel8 ~]USD ipactl restart",
"ipa-crlgen-manage status CRL generation: enabled Last CRL update: 2021-10-31 12:00:00 Last CRL Number: 6 The ipa-crlgen-manage command was successful",
"ipa-crlgen-manage disable Stopping pki-tomcatd Editing /var/lib/pki/pki-tomcat/conf/ca/CS.cfg Starting pki-tomcatd Editing /etc/httpd/conf.d/ipa-pki-proxy.conf Restarting httpd CRL generation disabled on the local host. Please make sure to configure CRL generation on another master with ipa-crlgen-manage enable. The ipa-crlgen-manage command was successful",
"ipa-crlgen-manage status",
"ipa-crlgen-manage enable Stopping pki-tomcatd Editing /var/lib/pki/pki-tomcat/conf/ca/CS.cfg Starting pki-tomcatd Editing /etc/httpd/conf.d/ipa-pki-proxy.conf Restarting httpd Forcing CRL update CRL generation enabled on the local host. Please make sure to have only a single CRL generation master. The ipa-crlgen-manage command was successful",
"ipa-crlgen-manage status CRL generation: enabled Last CRL update: 2021-10-31 12:10:00 Last CRL Number: 7 The ipa-crlgen-manage command was successful",
"ipa user-add random_user First name: random Last name: user",
"ipa user-find random_user -------------- 1 user matched -------------- User login: random_user First name: random Last name: user",
"ipa user-add another_random_user First name: another Last name: random_user",
"ipa idrange-find ---------------- 3 ranges matched ---------------- Range name: EXAMPLE.COM_id_range First Posix ID of the range: 196600000 Number of IDs in the range: 200000 First RID of the corresponding RID range: 1000 First RID of the secondary RID range: 100000000 Range type: local domain range",
"ipa-replica-manage dnarange-show rhel8.example.com: 196600026-196799999 rhel9.example.com: No range set",
"ipa-replica-manage dnarange-set rhel8.example.com 196600026-196699999",
"ipa-replica-manage dnarange-set rhel9.example.com 196700000-196799999",
"ipactl stop Stopping CA Service Stopping pki-ca: [ OK ] Stopping HTTP Service Stopping httpd: [ OK ] Stopping MEMCACHE Service Stopping ipa_memcached: [ OK ] Stopping DNS Service Stopping named: [ OK ] Stopping KPASSWD Service Stopping Kerberos 5 Admin Server: [ OK ] Stopping KDC Service Stopping Kerberos 5 KDC: [ OK ] Stopping Directory Service Shutting down dirsrv: EXAMPLE-COM... [ OK ] PKI-IPA... [ OK ]"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/migrating_to_identity_management_on_rhel_9/assembly_migrating-your-idm-environment-from-rhel-8-servers-to-rhel-9-servers_migrating-to-idm-on-rhel-9 |
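Pulling the verification steps of the entry above together, a minimal post-migration check on the new server might look like the following sketch. Host names are those used in the example, and each command is taken from the procedure above:
# On rhel9.example.com, once rhel8.example.com has been stopped and decommissioned
kinit admin
ipactl status                                                      # every IdM service should report RUNNING
ipa server-role-find --status enabled --server rhel9.example.com   # roles should match those of the old server
ipa config-show | grep "CA renewal"                                # should now report rhel9.example.com as CA renewal master
ipa-crlgen-manage status                                           # should report: CRL generation: enabled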
Chapter 23. Manipulating the Domain XML | Chapter 23. Manipulating the Domain XML This chapter explains in detail the components of guest virtual machine XML configuration files, also known as domain XML . In this chapter, the term domain refers to the root <domain> element required for all guest virtual machines. The domain XML has two attributes: type and id . type specifies the hypervisor used for running the domain. The allowed values are driver-specific, but include KVM and others. id is a unique integer identifier for the running guest virtual machine. Inactive machines have no id value. The sections in this chapter will describe the components of the domain XML. Additional chapters in this manual may see this chapter when manipulation of the domain XML is required. Important Use only supported management interfaces (such as virsh and the Virtual Machine Manager ) and commands (such as virt-xml ) to edit the components of the domain XML file. Do not open and edit the domain XML file directly with a text editor. If you absolutely must edit the domain XML file directly, use the virsh edit command. Note This chapter is based on the libvirt upstream documentation . 23.1. General Information and Metadata This information is in this part of the domain XML: <domain type='kvm' id='3'> <name>fv0</name> <uuid>4dea22b31d52d8f32516782e98ab3fa0</uuid> <title>A short description - title - of the domain</title> <description>A human readable description</description> <metadata> <app1:foo xmlns:app1="http://app1.org/app1/">..</app1:foo> <app2:bar xmlns:app2="http://app1.org/app2/">..</app2:bar> </metadata> ... </domain> Figure 23.1. Domain XML metadata The components of this section of the domain XML are as follows: Table 23.1. General metadata elements Element Description <name> Assigns a name for the virtual machine. This name should consist only of alpha-numeric characters and is required to be unique within the scope of a single host physical machine. It is often used to form the file name for storing the persistent configuration files. <uuid> Assigns a globally unique identifier for the virtual machine. The format must be RFC 4122-compliant, for example 3e3fce45-4f53-4fa7-bb32-11f34168b82b . If omitted when defining or creating a new machine, a random UUID is generated. It is also possible to provide the UUID using a sysinfo specification. <title> Creates space for a short description of the domain. The title should not contain any new lines. <description> Different from the title, this data is not used by libvirt. It can contain any information the user chooses to display. <metadata> Can be used by applications to store custom metadata in the form of XML nodes/trees. Applications must use custom name spaces on XML nodes/trees, with only one top-level element per name space (if the application needs structure, they should have sub-elements to their name space element). | [
"<domain type='kvm' id='3'> <name>fv0</name> <uuid>4dea22b31d52d8f32516782e98ab3fa0</uuid> <title>A short description - title - of the domain</title> <description>A human readable description</description> <metadata> <app1:foo xmlns:app1=\"http://app1.org/app1/\">..</app1:foo> <app2:bar xmlns:app2=\"http://app1.org/app2/\">..</app2:bar> </metadata> </domain>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/chap-Manipulating_the_domain_xml |
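A brief, hedged illustration of the supported editing path described in the entry above. The guest name fv0 comes from the example; virsh dumpxml is a standard libvirt command that is not shown in this excerpt:
# Print the guest's current domain XML without modifying it
virsh dumpxml fv0
# Edit the domain XML through libvirt; the edited definition is validated when saved
virsh edit fv0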
Chapter 5. Backing up the Original Manager | Chapter 5. Backing up the Original Manager Back up the original Manager using the engine-backup command, and copy the backup file to a separate location so that it can be accessed at any point during the process. For more information about engine-backup --mode=backup options, see Backing Up and Restoring the Red Hat Virtualization Manager in the Administration Guide . Procedure Log in to the original Manager and stop the ovirt-engine service: Note Though stopping the original Manager from running is not obligatory, it is recommended as it ensures no changes are made to the environment after the backup is created. Additionally, it prevents the original Manager and the new Manager from simultaneously managing existing resources. Run the engine-backup command, specifying the name of the backup file to create, and the name of the log file to create to store the backup log: # engine-backup --mode=backup --file= file_name --log= log_file_name Copy the files to an external server. In the following example, storage.example.com is the fully qualified domain name of a network storage server that will store the backup until it is needed, and /backup/ is any designated folder or path. # scp -p file_name log_file_name storage.example.com:/backup/ If you do not require the Manager machine for other purposes, unregister it from Red Hat Subscription Manager: # subscription-manager unregister After backing up the Manager, deploy a new self-hosted engine and restore the backup on the new virtual machine. | [
"systemctl stop ovirt-engine systemctl disable ovirt-engine",
"engine-backup --mode=backup --file= file_name --log= log_file_name",
"scp -p file_name log_file_name storage.example.com:/backup/",
"subscription-manager unregister"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/migrating_from_a_standalone_manager_to_a_self-hosted_engine/Backing_up_the_Original_Manager_migrating_to_SHE |
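A hedged sketch of the final step referred to in the entry above — restoring the copied backup while deploying the new self-hosted engine. The --restore-from-file option and the paths are assumptions to verify against the migration guide for your RHV version:
# On the new self-hosted engine host, retrieve the backup from the storage server used earlier
scp -p storage.example.com:/backup/file_name .
# Deploy the self-hosted engine and restore the Manager from the backup in a single pass (assumed option)
hosted-engine --deploy --restore-from-file=file_name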
Chapter 7. References | Chapter 7. References 7.1. Red Hat Support Policies for RHEL High Availability Clusters - Management of SAP HANA in a Cluster Automating SAP HANA Scale-Up System Replication using the RHEL HA Add-On Moving Resources Due to Failure Is there a way to manage constraints when running pcs resource move? 7.2. SAP SAP HANA Administration Guide for SAP HANA Platform Disaster Recovery Scenarios for Multitarget System Replication SAP HANA System Replication Configuration Parameter Example: Checking the Status on the Primary and Secondary Systems General Prerequisites for Configuring SAP HANA System Replication Change Log Modes Failed to re-register former primary site as new secondary site due to missing log Checking the Status with landscapeHostConfiguration.py How to Setup SAP HANA Multi-Target System Replication SAP HANA Multitarget System Replication | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/9/html/configuring_sap_hana_scale-up_multitarget_system_replication_for_disaster_recovery/asmb_add_resources_configuring-hana-scale-up-multitarget-system-replication-disaster-recovery |
4.10. ePowerSwitch | 4.10. ePowerSwitch Table 4.11, "ePowerSwitch" lists the fence device parameters used by fence_eps , the fence agent for ePowerSwitch. Table 4.11. ePowerSwitch luci Field cluster.conf Attribute Description Name name A name for the ePowerSwitch device connected to the cluster. IP Address or Hostname ipaddr The IP address or host name assigned to the device. Login login The login name used to access the device. Password passwd The password used to authenticate the connection to the device. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. Name of Hidden Page hidden_page The name of the hidden page for the device. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. Port (Outlet) Number port Physical plug number or name of virtual machine. Delay (optional) delay The number of seconds to wait before fencing is started. The default value is 0. Figure 4.9, "ePowerSwitch" shows the configuration screen for adding an ePowerSwitch fence device. Figure 4.9. ePowerSwitch The following command creates a fence device instance for an ePowerSwitch device: The following is the cluster.conf entry for the fence_eps device: | [
"ccs -f cluster.conf --addfencedev epstest1 agent=fence_eps ipaddr=192.168.0.1 login=root passwd=password123 hidden_page=hidden.htm",
"<fencedevices> <fencedevice agent=\"fence_eps\" hidden_page=\"hidden.htm\" ipaddr=\"192.168.0.1\" login=\"root\" name=\"epstest1\" passwd=\"password123\"/> </fencedevices>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/fence_configuration_guide/s1-software-fence-epower-CA |
Kafka configuration properties | Kafka configuration properties Red Hat Streams for Apache Kafka 2.9 Use configuration properties to configure Kafka components | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/kafka_configuration_properties/index |