5.2. Creating Certificate Signing Requests
5.2. Creating Certificate Signing Requests Traditionally, the following methods are used to generate certificate signing requests (CSRs): Generating CSRs using command line utilities Generating CSRs inside a supporting browser Generating CSRs inside an application, such as the installer of a server Some of these methods support direct submission of the CSRs, while some do not. Starting from RHCS 9.7, Server-Side key generation is supported to overcome the inconvenience brought on by the removal of key generation support inside newer versions of browsers, such as Firefox v69 and up, as well as Chrome. For this reason, this section does not discuss browser support for key generation, although there is no reason to believe that older versions of those browsers will not continue to function as specified in older RHCS documentation. CSRs generated from an application generally take the form of PKCS#10. Provided that they are generated correctly, they should be supported by RHCS. In the following subsections, we cover the following methods supported by RHCS: Command-line utilities Server-Side Key Generation 5.2.1. Generating CSRs Using Command-Line Utilities Red Hat Certificate System supports using the following utilities to create CSRs: certutil : Supports creating PKCS #10 requests. PKCS10Client : Supports creating PKCS #10 requests. CRMFPopClient : Supports creating CRMF requests. pki client-cert-request : Supports both PKCS#10 and CRMF requests. The following sections provide examples of how to use these utilities with the feature-rich enrollment profile framework. 5.2.1.1. Creating a CSR Using certutil This section provides examples of how to use the certutil utility to create a CSR. For further details about using certutil , see: The certutil (1) man page The output of the certutil --help command 5.2.1.1.1. Using certutil to Create a CSR with EC Keys The following procedure demonstrates how to use the certutil utility to create an Elliptic Curve (EC) key pair and CSR: Change to the certificate database directory of the user or entity for which the certificate is being requested, for example: Create the binary CSR and store it in the /user_or_entity_database_directory/request.csr file: Enter the required NSS database password when prompted. For further details about the parameters, see the certutil (1) man page. Convert the created binary format CSR to PEM format: Optionally, verify that the CSR file is correct: This is a PKCS#10 PEM certificate request. 5.2.1.1.2. Using certutil to Create a CSR With User-defined Extensions The following procedure demonstrates how to create a CSR with user-defined extensions using the certutil utility. Note that the enrollment requests are constrained by the enrollment profiles defined by the CA. See Example B.3, "Multiple User Supplied Extensions in CSR" . Change to the certificate database directory of the user or entity for which the certificate is being requested, for example: Create the CSR with a user-defined Key Usage extension as well as a user-defined Extended Key Usage extension and store it in the /user_or_entity_database_directory/request.csr file: Enter the required NSS database password when prompted. For further details about the parameters, see the certutil (1) man page. Optionally, verify that the CSR file is correct: This is a PKCS#10 PEM certificate request. 5.2.1.2. Creating a CSR Using PKCS10Client This section provides examples of how to use the PKCS10Client utility to create a CSR.
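For instance, a complete invocation for an EC key pair might look like the following sketch (the commands also appear in the command listing further below; the NSS database directory, database password, output path, and subject name are placeholders to replace with your own values):

cd /user_or_entity_database_directory/
PKCS10Client -d . -p NSS_password -a ec -c nistp256 -o /user_or_entity_database_directory/example.csr -n "CN=subject_name"
cat /user_or_entity_database_directory/example.csr

The -a and -c options select the key algorithm and the EC curve; the resulting file is a PEM-encoded PKCS #10 request that can be submitted through an enrollment profile.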
For further details about using PKCS10Client , see: The PKCS10Client (1) man page The output of the PKCS10Client --help command 5.2.1.2.1. Using PKCS10Client to Create a CSR The following procedure explains how to use the PKCS10Client utility to create an Elliptic Curve (EC) key pair and CSR: Change to the certificate database directory of the user or entity for which the certificate is being requested, for example: Create the CSR and store it in the /user_or_entity_database_directory/example.csr file: For further details about the parameters, see the PKCS10Client (1) man page. Optionally, verify that the CSR is correct: 5.2.1.2.2. Using PKCS10Client to Create a CSR for SharedSecret-based CMC The following procedure explains how to use the PKCS10Client utility to create an RSA key pair and CSR for SharedSecret-based CMC. Use it only with the CMC Shared Secret authentication method which is, by default, handled by the caFullCMCSharedTokenCert and caECFullCMCSharedTokenCert profiles. Change to the certificate database directory of the user or entity for which the certificate is being requested, for example: Create the CSR and store it in the /user_or_entity_database_directory/example.csr file: For further details about the parameters, see the PKCS10Client (1) man page. Optionally, verify that the CSR is correct: 5.2.1.3. Creating a CSR Using CRMFPopClient Certificate Request Message Format (CRMF) is a CSR format accepted in CMC that allows key archival information to be securely embedded in the request. This section describes examples how to use the CRMFPopClient utility to create a CSR. For further details about using CRMFPopClient , see the CRMFPopClient (1) man page. 5.2.1.3.1. Using CRMFPopClient to Create a CSR with Key Archival The following procedure explains how to use the CRMFPopClient utility to create an RSA key pair and a CSR with the key archival option: Change to the certificate database directory of the user or entity for which the certificate is being requested, for example: Retrieve the KRA transport certificate: Export the KRA transport certificate: Create the CSR and store it in the /user_or_entity_database_directory/example.csr file: To create an Elliptic Curve (EC) key pair and CSR, pass the -a ec -t false options to the command. For further details about the parameters, see the CRMFPopClient (1) man page. Optionally, verify that the CSR is correct: 5.2.1.3.2. Using CRMFPopClient to Create a CSR for SharedSecret-based CMC The following procedure explains how to use the CRMFPopClient utility to create an RSA key pair and CSR for SharedSecret-based CMC. Use it only with the CMC Shared Secret authentication method which is, by default, handled by the caFullCMCSharedTokenCert and caECFullCMCSharedTokenCert profiles. Change to the certificate database directory of the user or entity for which the certificate is being requested, for example: Retrieve the KRA transport certificate: Export the KRA transport certificate: Create the CSR and store it in the /user_or_entity_database_directory/example.csr file: To create an EC key pair and CSR, pass the -a ec -t false options to the command. For further details about the parameters, see the output of the CRMFPopClient --help command. Optionally, verify that the CSR is correct: 5.2.1.4. Creating a CSR using client-cert-request in the PKI CLI The pki command-line tool can also be used with the client-cert-request command to generate a CSR. 
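For instance, a PKCS #10 request can be generated and submitted in a single step with a command along these lines (a sketch matching the command listing further below; the client NSS database directory, its password, the server hostname, and the uid are placeholders):

pki -d user_token_db_directory -P https -p 8443 -h host.test.com -c user_token_db_passwd client-cert-request "uid=test2" --length 4096 --type pkcs10

Passing --type crmf instead generates a CRMF request with the same options.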
However, unlike the previously discussed tools, CSRs generated with pki are submitted directly to the CA. Both PKCS#10 and CRMF requests can be generated. Example of generating a PKCS#10 request: Example of generating a CRMF request: A request ID is returned upon success. Once a request is submitted, an agent can approve it by using the pki ca-cert-request-approve command. For example: For more information, see the man page by running the pki client-cert-request --help command. 5.2.2. Generating CSRs Using Server-Side Key Generation Many newer versions of browsers, including Firefox v69 and up, as well as Chrome, have removed the functionality to generate PKI keys and the support for CRMF for key archival. On RHEL, CLIs such as CRMFPopClient (see CRMFPopClient --help ) or pki (see pki client-cert-request --help ) can be used as a workaround. Server-Side Keygen enrollment has been available since the introduction of the Token Key Management System (TMS), where keys could be generated on a KRA instead of locally on smart cards. Red Hat Certificate System now adopts a similar mechanism to resolve the browser keygen deficiency: keys are generated on the server (specifically, on the KRA) and then transferred securely back to the client in PKCS#12. Note It is highly recommended to employ the Server-Side Keygen mechanism only for encryption certificates. 5.2.2.1. Functionality Highlights Certificate request keys are generated on the KRA (Note: a KRA must be installed to work with the CA) The profile default plugin, serverKeygenUserKeyDefaultImpl , provides the option to enable or disable key archival (the enableArchival parameter) Support for both RSA and EC keys Support for both manual (agent) approval and automatic approval (e.g. directory password-based) 5.2.2.2. Enrolling a Certificate Using Server-Side Keygen The default Server-Side Keygen enrollment profiles can be found on the EE page, under the List Certificate Profiles tab: Manual User Dual-Use Certificate Enrollment Using server-side Key generation Figure 5.1. Server-Side Keygen Enrollment that requires agent manual approval Directory-authenticated User Dual-Use Certificate Enrollment Using server-side Key generation Figure 5.2. Server-Side Keygen Enrollment that will be automatically approved upon successful LDAP uid/pwd authentication Regardless of how the request is approved, the Server-Side Keygen Enrollment mechanism requires the End Entity user to enter a password for the PKCS#12 package, which will contain the issued certificate as well as the encrypted private key generated by the server. Important Users should not share their passwords with anyone, not even the CA or KRA agents. When the enrollment request is approved, the PKCS#12 package is generated. In case of manual approval, the PKCS#12 file is returned to the CA agent who approves the request; the agent is then expected to forward the PKCS#12 file to the user. In case of automatic approval, the PKCS#12 file is returned to the user who submitted the request. Figure 5.3. Enrollment manually approved by an agent Once the PKCS#12 file is received, the user can use a CLI such as pk12util to import it into their own internal cert/key database for each application, for example the user's Firefox NSS database (see the import sketch after the audit record notes below). 5.2.2.3. Key Recovery If the enableArchival parameter is set to true in the certificate enrollment profile, then the private keys are archived at the time of Server-Side Keygen enrollment.
The archived private keys can then be recovered by authorized KRA agents. 5.2.2.4. Additional Information 5.2.2.4.1. KRA Request Records Note Due to the nature of this mechanism, if the enableArchival parameter is set to true in the profile, there are two KRA request records per Server-Side Keygen request: One for the request type asymkeyGenRequest This request type cannot be filtered using List Requests on the KRA agent page; you can select Show All Requests to see them listed. One for the request type recovery 5.2.2.4.2. Audit Records The following audit events can be observed if audit logging is enabled: CA SERVER_SIDE_KEYGEN_ENROLL_KEYGEN_REQUEST SERVER_SIDE_KEYGEN_ENROLL_KEY_RETRIEVAL_REQUEST KRA SERVER_SIDE_KEYGEN_ENROLL_KEYGEN_REQUEST_PROCESSED SERVER_SIDE_KEYGEN_ENROLL_KEY_RETRIEVAL_REQUEST_PROCESSED (not yet implemented)
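As noted in the enrollment description above, the returned PKCS#12 package has to be imported into the application's certificate/key database. A minimal sketch, assuming the package was saved as server-keygen.p12 and the target is the user's Firefox NSS database (both the file name and the profile directory are placeholders, and the exact profile path varies per user):

pk12util -i server-keygen.p12 -d /home/user/.mozilla/firefox/<profile_directory>/

pk12util prompts for the PKCS#12 password that the end entity chose when submitting the enrollment request.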
[ "cd /user_or_entity_database_directory/", "certutil -d . -R -k ec -q nistp256 -s \"CN= subject_name \" -o /user_or_entity_database_directory/request-bin.csr", "BtoA /user_or_entity_database_directory/request-bin.csr /user_or_entity_database_directory/request.csr", "cat /user_or_entity_database_directory/request.csr MIICbTCCAVUCAQAwKDEQMA4GA1UEChMHRXhhbXBsZTEUMBIGA1UEAxMLZXhhbXBs", "cd /user_or_entity_database_directory/", "certutil -d . -R -k rsa -g 1024 -s \"CN= subject_name \" --keyUsage keyEncipherment,dataEncipherment,critical --extKeyUsage timeStamp,msTrustListSign,critical -a -o /user_or_entity_database_directory/request.csr", "cat /user_or_entity_database_directory/request.csr Certificate request generated by Netscape certutil Phone: (not specified) Common Name: user 4-2-1-2 Email: (not specified) Organization: (not specified) State: (not specified) Country: (not specified)", "cd /user_or_entity_database_directory/", "PKCS10Client -d . -p NSS_password -a ec -c nistp256 -o /user_or_entity_database_directory/example.csr -n \"CN= subject_name \"", "cat /user_or_entity_database_directory/example.csr -----BEGIN CERTIFICATE REQUEST----- MIICzzCCAbcCAQAwgYkx -----END CERTIFICATE REQUEST-----", "cd /user_or_entity_database_directory/", "PKCS10Client -d . -p NSS_password -o /user_or_entity_database_directory/example.csr -y true -n \"CN= subject_name \"", "cat /user_or_entity_database_directory/example.csr -----BEGIN CERTIFICATE REQUEST----- MIICzzCCAbcCAQAwgYkx -----END CERTIFICATE REQUEST-----", "cd /user_or_entity_database_directory/", "pki ca-cert-find --name \" DRM Transport Certificate \" --------------- 1 entries found --------------- Serial Number: 0x7 Subject DN: CN= DRM Transport Certificate,O=EXAMPLE Status: VALID Type: X.509 version 3 Key A lgorithm: PKCS #1 RSA with 2048-bit key Not Valid Before: Thu Oct 22 18:26:11 CEST 2015 Not Valid After: Wed Oct 11 18:26:11 CEST 2017 Issued On: Thu Oct 22 18:26:11 CEST 2015 Issued By: caadmin ---------------------------- Number of entries returned 1", "pki ca-cert-show 0x7 --output kra.transport", "CRMFPopClient -d . -p password -n \"cn= subject_name \" -q POP_SUCCESS -b kra.transport -w \"AES/CBC/PKCS5Padding\" -v -o /user_or_entity_database_directory/example.csr", "cat /user_or_entity_database_directory/example.csr -----BEGIN CERTIFICATE REQUEST----- MIICzzCCAbcCAQAwgYkx -----END CERTIFICATE REQUEST-----", "cd /user_or_entity_database_directory/", "pki ca-cert-find --name \" DRM Transport Certificate \" --------------- 1 entries found --------------- Serial Number: 0x7 Subject DN: CN= DRM Transport Certificate,O=EXAMPLE Status: VALID Type: X.509 version 3 Key A lgorithm: PKCS #1 RSA with 2048-bit key Not Valid Before: Thu Oct 22 18:26:11 CEST 2015 Not Valid After: Wed Oct 11 18:26:11 CEST 2017 Issued On: Thu Oct 22 18:26:11 CEST 2015 Issued By: caadmin ---------------------------- Number of entries returned 1", "pki ca-cert-show 0x7 --output kra.transport", "CRMFPopClient -d . 
-p password -n \"cn= subject_name \" -q POP_SUCCESS -b kra.transport -w \"AES/CBC/PKCS5Padding\" -y -v -o /user_or_entity_database_directory/example.csr", "cat /user_or_entity_database_directory/example.csr -----BEGIN CERTIFICATE REQUEST----- MIICzzCCAbcCAQAwgYkx -----END CERTIFICATE REQUEST-----", "pki -d user token db directory -P https -p 8443 -h host.test.com -c user token db passwd client-cert-request \"uid=test2\" --length 4096 --type pkcs10", "pki -d user token db directory -P https -p 8443 -h host.test.com -c user token db passwd client-cert-request \"uid=test2\" --length 4096 --type crmf", "pki -d agent token db directory -P https -p 8443 -h host.test.com -c agent token db passwd -n <CA agent cert nickname> ca-cert-request-approve request id" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/creating_certificate_signing_requests
3.8. Choose a Boot Method
3.8. Choose a Boot Method You can use several methods to boot Red Hat Enterprise Linux. Installing from a DVD requires that you have purchased a Red Hat Enterprise Linux product, you have a Red Hat Enterprise Linux 6.9 DVD, and you have a DVD drive on a system that supports booting from it. Refer to Chapter 2, Making Media for instructions to make an installation DVD. Your BIOS may need to be changed to allow booting from your DVD/CD-ROM drive. For more information about changing your BIOS, refer to Section 7.1.1, "Booting the Installation Program on x86, AMD64, and Intel 64 Systems" . Other than booting from an installation DVD, you can also boot the Red Hat Enterprise Linux installation program from minimal boot media in the form of a bootable CD or USB flash drive. After you boot the system with a piece of minimal boot media, you complete the installation from a different installation source, such as a local hard drive or a location on a network. Refer to Section 2.2, "Making Minimal Boot Media" for instructions on making boot CDs and USB flash drives. Finally, you can boot the installer over the network from a preboot execution environment (PXE) server. Refer to Chapter 30, Setting Up an Installation Server . Again, after you boot the system, you complete the installation from a different installation source, such as a local hard drive or a location on a network.
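As a rough sketch (the image name and target device are placeholders; dd overwrites the target device, so double-check the device name before running it), a minimal boot image can typically be written to a USB flash drive as follows:

dd if=boot.iso of=/dev/sdb bs=4M
sync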
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/ch03s08
Chapter 15. Troubleshooting
Chapter 15. Troubleshooting There are cases where the Assisted Installer cannot begin the installation or the cluster fails to install properly. In these events, it is helpful to understand the likely failure modes as well as how to troubleshoot the failure. 15.1. Prerequisites You have created an infrastructure environment using the API or have created a cluster using the UI. 15.2. Troubleshooting discovery ISO issues The Assisted Installer uses an ISO image to run an agent that registers the host to the cluster and performs hardware and network validations before attempting to install OpenShift. You can follow these procedures to troubleshoot problems related to the host discovery. Once you start the host with the discovery ISO image, the Assisted Installer discovers the host and presents it in the Assisted Service UI. See Configuring the discovery image for additional details. 15.3. Minimal ISO Image The minimal ISO image should be used when bandwidth over the virtual media connection is limited. It includes only what is required to boot a host with networking. The majority of the content is downloaded upon boot. The resulting ISO image is about 100MB in size compared to 1GB for the full ISO image. 15.3.1. Troubleshooting minimal ISO boot failures If your environment requires static network configuration to access the Assisted Installer service, any issues with that configuration may prevent the Minimal ISO from booting properly. If the boot screen shows that the host has failed to download the root file system image, verify that any additional network configuration is correct. Switching to a Full ISO image will also allow for easier debugging. Example rootfs download failure 15.4. Verify the discovery agent is running Prerequisites You have created an Infrastructure Environment by using the API or have created a cluster by using the UI. You booted a host with the Infrastructure Environment discovery ISO and the host failed to register. You have ssh access to the host. You provided an SSH public key in the "Add hosts" dialog before generating the Discovery ISO so that you can SSH into your machine without a password. Procedure Verify that your host machine is powered on. If you selected DHCP networking , check that the DHCP server is enabled. If you selected Static IP, bridges and bonds networking, check that your configurations are correct. Verify that you can access your host machine using SSH, a console such as the BMC, or a virtual machine console: $ ssh core@<host_ip_address> You can specify the private key file using the -i parameter if it is not stored in the default directory. $ ssh -i <ssh_private_key_file> core@<host_ip_address> If you fail to ssh to the host, the host failed during boot or it failed to configure the network. Upon login you should see this message: Example login If you do not see this message, it means that the host did not boot with the assisted-installer ISO. Make sure you configured the boot order properly (the host should boot once from the live ISO). Check the agent service logs: $ sudo journalctl -u agent.service In the following example, the errors indicate there is a network issue: Example agent service log If there is an error pulling the agent image, check the proxy settings. Verify that the host is connected to the network. You can use nmcli to get additional information about your network configuration. 15.5.
Verify the agent can access the assisted-service Prerequisites You have created an Infrastructure Environment by using the API or have created a cluster by using the UI. You booted a host with the Infrastructure Environment discovery ISO and the host failed to register. You verified the discovery agent is running. Procedure Check the agent logs to verify the agent can access the Assisted Service: USD sudo journalctl TAG=agent The errors in the following example indicate that the agent failed to access the Assisted Service. Example agent log Check the proxy settings you configured for the cluster. If configured, the proxy must allow access to the Assisted Service URL. 15.6. Correcting a host's boot order Once the installation that runs as part of the Discovery Image completes, the Assisted Installer reboots the host. The host must boot from its installation disk to continue forming the cluster. If you have not correctly configured the host's boot order, it will boot from another disk instead, interrupting the installation. If the host boots the discovery image again, the Assisted Installer will immediately detect this event and set the host's status to Installing Pending User Action . Alternatively, if the Assisted Installer does not detect that the host has booted the correct disk within the allotted time, it will also set this host status. Procedure Reboot the host and set its boot order to boot from the installation disk. If you didn't select an installation disk, the Assisted Installer selected one for you. To view the selected installation disk, click to expand the host's information in the host inventory, and check which disk has the "Installation disk" role. 15.7. Rectifying partially-successful installations There are cases where the Assisted Installer declares an installation to be successful even though it encountered errors: If you requested to install OLM operators and one or more failed to install, log into the cluster's console to remediate the failures. If you requested to install more than two worker nodes and at least one failed to install, but at least two succeeded, add the failed workers to the installed cluster.
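Returning to the agent connectivity checks earlier in this chapter, a quick way to confirm that the discovery host can reach the service is sketched below (the URL is a placeholder; substitute your assisted-service endpoint and account for any proxy configured for the cluster):

$ sudo journalctl TAG=agent | grep -i error
$ curl -sS -o /dev/null -w '%{http_code}\n' <assisted_service_URL>

A non-2xx status code, a timeout, or repeated pull and connection errors in the agent log usually points back to the proxy settings or the static network configuration described above.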
[ "ssh core@<host_ip_address>", "ssh -i <ssh_private_key_file> core@<host_ip_address>", "sudo journalctl -u agent.service", "sudo journalctl TAG=agent" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/assisted_installer_for_openshift_container_platform/assembly_troubleshooting
Chapter 14. Pruning objects to reclaim resources
Chapter 14. Pruning objects to reclaim resources Over time, API objects created in OpenShift Container Platform can accumulate in the cluster's etcd data store through normal user operations, such as when building and deploying applications. Cluster administrators can periodically prune older versions of objects from the cluster that are no longer required. For example, by pruning images you can delete older images and layers that are no longer in use, but are still taking up disk space. 14.1. Basic pruning operations The CLI groups prune operations under a common parent command: USD oc adm prune <object_type> <options> This specifies: The <object_type> to perform the action on, such as groups , builds , deployments , or images . The <options> supported to prune that object type. 14.2. Pruning groups To prune groups records from an external provider, administrators can run the following command: USD oc adm prune groups \ --sync-config=path/to/sync/config [<options>] Table 14.1. oc adm prune groups flags Options Description --confirm Indicate that pruning should occur, instead of performing a dry-run. --blacklist Path to the group blacklist file. --whitelist Path to the group whitelist file. --sync-config Path to the synchronization configuration file. Procedure To see the groups that the prune command deletes, run the following command: USD oc adm prune groups --sync-config=ldap-sync-config.yaml To perform the prune operation, add the --confirm flag: USD oc adm prune groups --sync-config=ldap-sync-config.yaml --confirm 14.3. Pruning deployment resources You can prune resources associated with deployments that are no longer required by the system, due to age and status. The following command prunes replication controllers associated with DeploymentConfig objects: USD oc adm prune deployments [<options>] Note To also prune replica sets associated with Deployment objects, use the --replica-sets flag. This flag is currently a Technology Preview feature. Table 14.2. oc adm prune deployments flags Option Description --confirm Indicate that pruning should occur, instead of performing a dry-run. --keep-complete=<N> Per the DeploymentConfig object, keep the last N replication controllers that have a status of Complete and replica count of zero. The default is 5 . --keep-failed=<N> Per the DeploymentConfig object, keep the last N replication controllers that have a status of Failed and replica count of zero. The default is 1 . --keep-younger-than=<duration> Do not prune any replication controller that is younger than <duration> relative to the current time. Valid units of measurement include nanoseconds ( ns ), microseconds ( us ), milliseconds ( ms ), seconds ( s ), minutes ( m ), and hours ( h ). The default is 60m . --orphans Prune all replication controllers that no longer have a DeploymentConfig object, has status of Complete or Failed , and has a replica count of zero. --replica-sets=true|false If true , replica sets are included in the pruning process. The default is false . Important This flag is a Technology Preview feature. Procedure To see what a pruning operation would delete, run the following command: USD oc adm prune deployments --orphans --keep-complete=5 --keep-failed=1 \ --keep-younger-than=60m To actually perform the prune operation, add the --confirm flag: USD oc adm prune deployments --orphans --keep-complete=5 --keep-failed=1 \ --keep-younger-than=60m --confirm 14.4. 
Pruning builds To prune builds that are no longer required by the system due to age and status, administrators can run the following command: USD oc adm prune builds [<options>] Table 14.3. oc adm prune builds flags Option Description --confirm Indicate that pruning should occur, instead of performing a dry-run. --orphans Prune all builds whose build configuration no longer exists, status is complete, failed, error, or canceled. --keep-complete=<N> Per build configuration, keep the last N builds whose status is complete. The default is 5 . --keep-failed=<N> Per build configuration, keep the last N builds whose status is failed, error, or canceled. The default is 1 . --keep-younger-than=<duration> Do not prune any object that is younger than <duration> relative to the current time. The default is 60m . Procedure To see what a pruning operation would delete, run the following command: USD oc adm prune builds --orphans --keep-complete=5 --keep-failed=1 \ --keep-younger-than=60m To actually perform the prune operation, add the --confirm flag: USD oc adm prune builds --orphans --keep-complete=5 --keep-failed=1 \ --keep-younger-than=60m --confirm Note Developers can enable automatic build pruning by modifying their build configuration. Additional resources Performing advanced builds Pruning builds 14.5. Automatically pruning images Images from the OpenShift image registry that are no longer required by the system due to age, status, or exceed limits are automatically pruned. Cluster administrators can configure the Pruning Custom Resource, or suspend it. Prerequisites Cluster administrator permissions. Install the oc CLI. Procedure Verify that the object named imagepruners.imageregistry.operator.openshift.io/cluster contains the following spec and status fields: spec: schedule: 0 0 * * * 1 suspend: false 2 keepTagRevisions: 3 3 keepYoungerThanDuration: 60m 4 keepYoungerThan: 3600000000000 5 resources: {} 6 affinity: {} 7 nodeSelector: {} 8 tolerations: [] 9 successfulJobsHistoryLimit: 3 10 failedJobsHistoryLimit: 3 11 status: observedGeneration: 2 12 conditions: 13 - type: Available status: "True" lastTransitionTime: 2019-10-09T03:13:45 reason: Ready message: "Periodic image pruner has been created." - type: Scheduled status: "True" lastTransitionTime: 2019-10-09T03:13:45 reason: Scheduled message: "Image pruner job has been scheduled." - type: Failed staus: "False" lastTransitionTime: 2019-10-09T03:13:45 reason: Succeeded message: "Most recent image pruning job succeeded." 1 schedule : CronJob formatted schedule. This is an optional field, default is daily at midnight. 2 suspend : If set to true , the CronJob running pruning is suspended. This is an optional field, default is false . The initial value on new clusters is false . 3 keepTagRevisions : The number of revisions per tag to keep. This is an optional field, default is 3 . The initial value is 3 . 4 keepYoungerThanDuration : Retain images younger than this duration. This is an optional field. If a value is not specified, either keepYoungerThan or the default value 60m (60 minutes) is used. 5 keepYoungerThan : Deprecated. The same as keepYoungerThanDuration , but the duration is specified as an integer in nanoseconds. This is an optional field. When keepYoungerThanDuration is set, this field is ignored. 6 resources : Standard pod resource requests and limits. This is an optional field. 7 affinity : Standard pod affinity. This is an optional field. 8 nodeSelector : Standard pod node selector. This is an optional field. 
9 tolerations : Standard pod tolerations. This is an optional field. 10 successfulJobsHistoryLimit : The maximum number of successful jobs to retain. Must be >= 1 to ensure metrics are reported. This is an optional field, default is 3 . The initial value is 3 . 11 failedJobsHistoryLimit : The maximum number of failed jobs to retain. Must be >= 1 to ensure metrics are reported. This is an optional field, default is 3 . The initial value is 3 . 12 observedGeneration : The generation observed by the Operator. 13 conditions : The standard condition objects with the following types: Available : Indicates if the pruning job has been created. Reasons can be Ready or Error. Scheduled : Indicates if the pruning job has been scheduled. Reasons can be Scheduled, Suspended, or Error. Failed : Indicates if the most recent pruning job failed. Important The Image Registry Operator's behavior for managing the pruner is orthogonal to the managementState specified on the Image Registry Operator's ClusterOperator object. If the Image Registry Operator is not in the Managed state, the image pruner can still be configured and managed by the Pruning Custom Resource. However, the managementState of the Image Registry Operator alters the behavior of the deployed image pruner job: Managed : the --prune-registry flag for the image pruner is set to true . Removed : the --prune-registry flag for the image pruner is set to false , meaning it only prunes image metadata in etcd. 14.6. Manually pruning images The pruning custom resource enables automatic image pruning for the images from the OpenShift image registry. However, administrators can manually prune images that are no longer required by the system due to age, status, or exceed limits. There are two methods to manually prune images: Running image pruning as a Job or CronJob on the cluster. Running the oc adm prune images command. Prerequisites To prune images, you must first log in to the CLI as a user with an access token. The user must also have the system:image-pruner cluster role or greater (for example, cluster-admin ). Expose the image registry. 
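For example, the prerequisites above can be satisfied with commands along the following lines (a sketch; the user name is a placeholder, and the second command enables the registry's default route as described in the Exposing the registry documentation):

$ oc adm policy add-cluster-role-to-user system:image-pruner <user_name>
$ oc patch configs.imageregistry.operator.openshift.io/cluster -p '{"spec":{"defaultRoute":true}}' --type=merge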
Procedure To manually prune images that are no longer required by the system due to age, status, or exceed limits, use one of the following methods: Run image pruning as a Job or CronJob on the cluster by creating a YAML file for the pruner service account, for example: USD oc create -f <filename>.yaml Example output kind: List apiVersion: v1 items: - apiVersion: v1 kind: ServiceAccount metadata: name: pruner namespace: openshift-image-registry - apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: openshift-image-registry-pruner roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:image-pruner subjects: - kind: ServiceAccount name: pruner namespace: openshift-image-registry - apiVersion: batch/v1 kind: CronJob metadata: name: image-pruner namespace: openshift-image-registry spec: schedule: "0 0 * * *" concurrencyPolicy: Forbid successfulJobsHistoryLimit: 1 failedJobsHistoryLimit: 3 jobTemplate: spec: template: spec: restartPolicy: OnFailure containers: - image: "quay.io/openshift/origin-cli:4.1" resources: requests: cpu: 1 memory: 1Gi terminationMessagePolicy: FallbackToLogsOnError command: - oc args: - adm - prune - images - --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt - --keep-tag-revisions=5 - --keep-younger-than=96h - --confirm=true name: image-pruner serviceAccountName: pruner Run the oc adm prune images [<options>] command: USD oc adm prune images [<options>] Pruning images removes data from the integrated registry unless --prune-registry=false is used. Pruning images with the --namespace flag does not remove images, only image streams. Images are non-namespaced resources. Therefore, limiting pruning to a particular namespace makes it impossible to calculate its current usage. By default, the integrated registry caches metadata of blobs to reduce the number of requests to storage, and to increase the request-processing speed. Pruning does not update the integrated registry cache. Images that still contain pruned layers after pruning will be broken because the pruned layers that have metadata in the cache will not be pushed. Therefore, you must redeploy the registry to clear the cache after pruning: USD oc rollout restart deployment/image-registry -n openshift-image-registry If the integrated registry uses a Redis cache, you must clean the database manually. If redeploying the registry after pruning is not an option, then you must permanently disable the cache. oc adm prune images operations require a route for your registry. Registry routes are not created by default. The Prune images CLI configuration options table describes the options you can use with the oc adm prune images <options> command. Table 14.4. Prune images CLI configuration options Option Description --all Include images that were not pushed to the registry, but have been mirrored by pullthrough. This is on by default. To limit the pruning to images that were pushed to the integrated registry, pass --all=false . --certificate-authority The path to a certificate authority file to use when communicating with the OpenShift Container Platform-managed registries. Defaults to the certificate authority data from the current user's configuration file. If provided, a secure connection is initiated. --confirm Indicate that pruning should occur, instead of performing a test-run. This requires a valid route to the integrated container image registry. 
If this command is run outside of the cluster network, the route must be provided using --registry-url . --force-insecure Use caution with this option. Allow an insecure connection to the container registry that is hosted via HTTP or has an invalid HTTPS certificate. --keep-tag-revisions=<N> For each imagestream, keep up to at most N image revisions per tag (default 3 ). --keep-younger-than=<duration> Do not prune any image that is younger than <duration> relative to the current time. Alternately, do not prune any image that is referenced by any other object that is younger than <duration> relative to the current time (default 60m ). --prune-over-size-limit Prune each image that exceeds the smallest limit defined in the same project. This flag cannot be combined with --keep-tag-revisions nor --keep-younger-than . --registry-url The address to use when contacting the registry. The command attempts to use a cluster-internal URL determined from managed images and image streams. In case it fails (the registry cannot be resolved or reached), an alternative route that works needs to be provided using this flag. The registry hostname can be prefixed by https:// or http:// , which enforces particular connection protocol. --prune-registry In conjunction with the conditions stipulated by the other options, this option controls whether the data in the registry corresponding to the OpenShift Container Platform image API object is pruned. By default, image pruning processes both the image API objects and corresponding data in the registry. This option is useful when you are only concerned with removing etcd content, to reduce the number of image objects but are not concerned with cleaning up registry storage, or if you intend to do that separately by hard pruning the registry during an appropriate maintenance window for the registry. 14.6.1. Image prune conditions You can apply conditions to your manually pruned images. To remove any image managed by OpenShift Container Platform, or images with the annotation openshift.io/image.managed : Created at least --keep-younger-than minutes ago and are not currently referenced by any: Pods created less than --keep-younger-than minutes ago Image streams created less than --keep-younger-than minutes ago Running pods Pending pods Replication controllers Deployments Deployment configs Replica sets Build configurations Builds Jobs Cronjobs Stateful sets --keep-tag-revisions most recent items in stream.status.tags[].items That are exceeding the smallest limit defined in the same project and are not currently referenced by any: Running pods Pending pods Replication controllers Deployments Deployment configs Replica sets Build configurations Builds Jobs Cronjobs Stateful sets There is no support for pruning from external registries. When an image is pruned, all references to the image are removed from all image streams that have a reference to the image in status.tags . Image layers that are no longer referenced by any images are removed. Note The --prune-over-size-limit flag cannot be combined with the --keep-tag-revisions flag nor the --keep-younger-than flags. Doing so returns information that this operation is not allowed. Separating the removal of OpenShift Container Platform image API objects and image data from the registry by using --prune-registry=false , followed by hard pruning the registry, can narrow timing windows and is safer when compared to trying to prune both through one command. However, timing windows are not completely removed. 
For example, you can still create a pod referencing an image as pruning identifies that image for pruning. You should still keep track of an API object created during the pruning operations that might reference images so that you can mitigate any references to deleted content. Re-doing the pruning without the --prune-registry option or with --prune-registry=true does not lead to pruning the associated storage in the image registry for images previously pruned by --prune-registry=false . Any images that were pruned with --prune-registry=false can only be deleted from registry storage by hard pruning the registry. 14.6.2. Running the image prune operation Procedure To see what a pruning operation would delete: Keeping up to three tag revisions, and keeping resources (images, image streams, and pods) younger than 60 minutes: USD oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m Pruning every image that exceeds defined limits: USD oc adm prune images --prune-over-size-limit To perform the prune operation with the options from the step: USD oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m --confirm USD oc adm prune images --prune-over-size-limit --confirm 14.6.3. Using secure or insecure connections The secure connection is the preferred and recommended approach. It is done over HTTPS protocol with a mandatory certificate verification. The prune command always attempts to use it if possible. If it is not possible, in some cases it can fall-back to insecure connection, which is dangerous. In this case, either certificate verification is skipped or plain HTTP protocol is used. The fall-back to insecure connection is allowed in the following cases unless --certificate-authority is specified: The prune command is run with the --force-insecure option. The provided registry-url is prefixed with the http:// scheme. The provided registry-url is a local-link address or localhost . The configuration of the current user allows for an insecure connection. This can be caused by the user either logging in using --insecure-skip-tls-verify or choosing the insecure connection when prompted. Important If the registry is secured by a certificate authority different from the one used by OpenShift Container Platform, it must be specified using the --certificate-authority flag. Otherwise, the prune command fails with an error. 14.6.4. Image pruning problems Images not being pruned If your images keep accumulating and the prune command removes just a small portion of what you expect, ensure that you understand the image prune conditions that must apply for an image to be considered a candidate for pruning. Ensure that images you want removed occur at higher positions in each tag history than your chosen tag revisions threshold. For example, consider an old and obsolete image named sha256:abz . 
By running the following command in your namespace, where the image is tagged, the image is tagged three times in a single image stream named myapp : USD oc get is -n <namespace> -o go-template='{{range USDisi, USDis := .items}}{{range USDti, USDtag := USDis.status.tags}}'\ '{{range USDii, USDitem := USDtag.items}}{{if eq USDitem.image "sha256:<hash>"}}{{USDis.metadata.name}}:{{USDtag.tag}} at position {{USDii}} out of {{len USDtag.items}}\n'\ '{{end}}{{end}}{{end}}{{end}}' Example output myapp:v2 at position 4 out of 5 myapp:v2.1 at position 2 out of 2 myapp:v2.1-may-2016 at position 0 out of 1 When default options are used, the image is never pruned because it occurs at position 0 in a history of myapp:v2.1-may-2016 tag. For an image to be considered for pruning, the administrator must either: Specify --keep-tag-revisions=0 with the oc adm prune images command. Warning This action removes all the tags from all the namespaces with underlying images, unless they are younger or they are referenced by objects younger than the specified threshold. Delete all the istags where the position is below the revision threshold, which means myapp:v2.1 and myapp:v2.1-may-2016 . Move the image further in the history, either by running new builds pushing to the same istag , or by tagging other image. This is not always desirable for old release tags. Tags having a date or time of a particular image's build in their names should be avoided, unless the image must be preserved for an undefined amount of time. Such tags tend to have just one image in their history, which prevents them from ever being pruned. Using a secure connection against insecure registry If you see a message similar to the following in the output of the oc adm prune images command, then your registry is not secured and the oc adm prune images client attempts to use a secure connection: error: error communicating with registry: Get https://172.30.30.30:5000/healthz: http: server gave HTTP response to HTTPS client The recommended solution is to secure the registry. Otherwise, you can force the client to use an insecure connection by appending --force-insecure to the command; however, this is not recommended. Using an insecure connection against a secured registry If you see one of the following errors in the output of the oc adm prune images command, it means that your registry is secured using a certificate signed by a certificate authority other than the one used by oc adm prune images client for connection verification: error: error communicating with registry: Get http://172.30.30.30:5000/healthz: malformed HTTP response "\x15\x03\x01\x00\x02\x02" error: error communicating with registry: [Get https://172.30.30.30:5000/healthz: x509: certificate signed by unknown authority, Get http://172.30.30.30:5000/healthz: malformed HTTP response "\x15\x03\x01\x00\x02\x02"] By default, the certificate authority data stored in the user's configuration files is used; the same is true for communication with the master API. Use the --certificate-authority option to provide the right certificate authority for the container image registry server. Using the wrong certificate authority The following error means that the certificate authority used to sign the certificate of the secured container image registry is different from the authority used by the client: error: error communicating with registry: Get https://172.30.30.30:5000/: x509: certificate signed by unknown authority Make sure to provide the right one with the flag --certificate-authority . 
As a workaround, the --force-insecure flag can be added instead. However, this is not recommended. Additional resources Accessing the registry Exposing the registry See Image Registry Operator in OpenShift Container Platform for information on how to create a registry route. 14.7. Hard pruning the registry The OpenShift Container Registry can accumulate blobs that are not referenced by the OpenShift Container Platform cluster's etcd. The basic pruning images procedure, therefore, is unable to operate on them. These are called orphaned blobs . Orphaned blobs can occur from the following scenarios: Manually deleting an image with oc delete image <sha256:image-id> command, which only removes the image from etcd, but not from the registry's storage. Pushing to the registry initiated by daemon failures, which causes some blobs to get uploaded, but the image manifest (which is uploaded as the very last component) does not. All unique image blobs become orphans. OpenShift Container Platform refusing an image because of quota restrictions. The standard image pruner deleting an image manifest, but is interrupted before it deletes the related blobs. A bug in the registry pruner, which fails to remove the intended blobs, causing the image objects referencing them to be removed and the blobs becoming orphans. Hard pruning the registry, a separate procedure from basic image pruning, allows cluster administrators to remove orphaned blobs. You should hard prune if you are running out of storage space in your OpenShift Container Registry and believe you have orphaned blobs. This should be an infrequent operation and is necessary only when you have evidence that significant numbers of new orphans have been created. Otherwise, you can perform standard image pruning at regular intervals, for example, once a day (depending on the number of images being created). Procedure To hard prune orphaned blobs from the registry: Log in. Log in to the cluster with the CLI as kubeadmin or another privileged user that has access to the openshift-image-registry namespace. Run a basic image prune . Basic image pruning removes additional images that are no longer needed. The hard prune does not remove images on its own. It only removes blobs stored in the registry storage. Therefore, you should run this just before the hard prune. Switch the registry to read-only mode. If the registry is not running in read-only mode, any pushes happening at the same time as the prune will either: fail and cause new orphans, or succeed although the images cannot be pulled (because some of the referenced blobs were deleted). Pushes will not succeed until the registry is switched back to read-write mode. Therefore, the hard prune must be carefully scheduled. To switch the registry to read-only mode: In configs.imageregistry.operator.openshift.io/cluster , set spec.readOnly to true : USD oc patch configs.imageregistry.operator.openshift.io/cluster -p '{"spec":{"readOnly":true}}' --type=merge Add the system:image-pruner role. The service account used to run the registry instances requires additional permissions to list some resources. Get the service account name: USD service_account=USD(oc get -n openshift-image-registry \ -o jsonpath='{.spec.template.spec.serviceAccountName}' deploy/image-registry) Add the system:image-pruner cluster role to the service account: USD oc adm policy add-cluster-role-to-user \ system:image-pruner -z \ USD{service_account} -n openshift-image-registry Optional: Run the pruner in dry-run mode. 
To see how many blobs would be removed, run the hard pruner in dry-run mode. No changes are actually made. The following example references an image registry pod called image-registry-3-vhndw : USD oc -n openshift-image-registry exec pod/image-registry-3-vhndw -- /bin/sh -c '/usr/bin/dockerregistry -prune=check' Alternatively, to get the exact paths for the prune candidates, increase the logging level: USD oc -n openshift-image-registry exec pod/image-registry-3-vhndw -- /bin/sh -c 'REGISTRY_LOG_LEVEL=info /usr/bin/dockerregistry -prune=check' Example output time="2017-06-22T11:50:25.066156047Z" level=info msg="start prune (dry-run mode)" distribution_version="v2.4.1+unknown" kubernetes_version=v1.6.1+USDFormat:%hUSD openshift_version=unknown time="2017-06-22T11:50:25.092257421Z" level=info msg="Would delete blob: sha256:00043a2a5e384f6b59ab17e2c3d3a3d0a7de01b2cabeb606243e468acc663fa5" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time="2017-06-22T11:50:25.092395621Z" level=info msg="Would delete blob: sha256:0022d49612807cb348cabc562c072ef34d756adfe0100a61952cbcb87ee6578a" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time="2017-06-22T11:50:25.092492183Z" level=info msg="Would delete blob: sha256:0029dd4228961086707e53b881e25eba0564fa80033fbbb2e27847a28d16a37c" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time="2017-06-22T11:50:26.673946639Z" level=info msg="Would delete blob: sha256:ff7664dfc213d6cc60fd5c5f5bb00a7bf4a687e18e1df12d349a1d07b2cf7663" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time="2017-06-22T11:50:26.674024531Z" level=info msg="Would delete blob: sha256:ff7a933178ccd931f4b5f40f9f19a65be5eeeec207e4fad2a5bafd28afbef57e" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time="2017-06-22T11:50:26.674675469Z" level=info msg="Would delete blob: sha256:ff9b8956794b426cc80bb49a604a0b24a1553aae96b930c6919a6675db3d5e06" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 ... Would delete 13374 blobs Would free up 2.835 GiB of disk space Use -prune=delete to actually delete the data Run the hard prune. Execute the following command inside one running instance of a image-registry pod to run the hard prune. The following example references an image registry pod called image-registry-3-vhndw : USD oc -n openshift-image-registry exec pod/image-registry-3-vhndw -- /bin/sh -c '/usr/bin/dockerregistry -prune=delete' Example output Deleted 13374 blobs Freed up 2.835 GiB of disk space Switch the registry back to read-write mode. After the prune is finished, the registry can be switched back to read-write mode. In configs.imageregistry.operator.openshift.io/cluster , set spec.readOnly to false : USD oc patch configs.imageregistry.operator.openshift.io/cluster -p '{"spec":{"readOnly":false}}' --type=merge 14.8. Pruning cron jobs Cron jobs can perform pruning of successful jobs, but might not properly handle failed jobs. Therefore, the cluster administrator should perform regular cleanup of jobs manually. They should also restrict the access to cron jobs to a small group of trusted users and set appropriate quota to prevent the cron job from creating too many jobs and pods. Additional resources Running tasks in pods using jobs Resource quotas across multiple projects Using RBAC to define and apply permissions
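To illustrate the manual cleanup of finished jobs mentioned in the cron jobs section above, a minimal sketch (the namespace and job name are placeholders; review the listed jobs before deleting anything):

$ oc get jobs -n <namespace>
$ oc delete job <job_name> -n <namespace>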
[ "oc adm prune <object_type> <options>", "oc adm prune groups --sync-config=path/to/sync/config [<options>]", "oc adm prune groups --sync-config=ldap-sync-config.yaml", "oc adm prune groups --sync-config=ldap-sync-config.yaml --confirm", "oc adm prune deployments [<options>]", "oc adm prune deployments --orphans --keep-complete=5 --keep-failed=1 --keep-younger-than=60m", "oc adm prune deployments --orphans --keep-complete=5 --keep-failed=1 --keep-younger-than=60m --confirm", "oc adm prune builds [<options>]", "oc adm prune builds --orphans --keep-complete=5 --keep-failed=1 --keep-younger-than=60m", "oc adm prune builds --orphans --keep-complete=5 --keep-failed=1 --keep-younger-than=60m --confirm", "spec: schedule: 0 0 * * * 1 suspend: false 2 keepTagRevisions: 3 3 keepYoungerThanDuration: 60m 4 keepYoungerThan: 3600000000000 5 resources: {} 6 affinity: {} 7 nodeSelector: {} 8 tolerations: [] 9 successfulJobsHistoryLimit: 3 10 failedJobsHistoryLimit: 3 11 status: observedGeneration: 2 12 conditions: 13 - type: Available status: \"True\" lastTransitionTime: 2019-10-09T03:13:45 reason: Ready message: \"Periodic image pruner has been created.\" - type: Scheduled status: \"True\" lastTransitionTime: 2019-10-09T03:13:45 reason: Scheduled message: \"Image pruner job has been scheduled.\" - type: Failed staus: \"False\" lastTransitionTime: 2019-10-09T03:13:45 reason: Succeeded message: \"Most recent image pruning job succeeded.\"", "oc create -f <filename>.yaml", "kind: List apiVersion: v1 items: - apiVersion: v1 kind: ServiceAccount metadata: name: pruner namespace: openshift-image-registry - apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: openshift-image-registry-pruner roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:image-pruner subjects: - kind: ServiceAccount name: pruner namespace: openshift-image-registry - apiVersion: batch/v1 kind: CronJob metadata: name: image-pruner namespace: openshift-image-registry spec: schedule: \"0 0 * * *\" concurrencyPolicy: Forbid successfulJobsHistoryLimit: 1 failedJobsHistoryLimit: 3 jobTemplate: spec: template: spec: restartPolicy: OnFailure containers: - image: \"quay.io/openshift/origin-cli:4.1\" resources: requests: cpu: 1 memory: 1Gi terminationMessagePolicy: FallbackToLogsOnError command: - oc args: - adm - prune - images - --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt - --keep-tag-revisions=5 - --keep-younger-than=96h - --confirm=true name: image-pruner serviceAccountName: pruner", "oc adm prune images [<options>]", "oc rollout restart deployment/image-registry -n openshift-image-registry", "oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m", "oc adm prune images --prune-over-size-limit", "oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m --confirm", "oc adm prune images --prune-over-size-limit --confirm", "oc get is -n <namespace> -o go-template='{{range USDisi, USDis := .items}}{{range USDti, USDtag := USDis.status.tags}}' '{{range USDii, USDitem := USDtag.items}}{{if eq USDitem.image \"sha256:<hash>\"}}{{USDis.metadata.name}}:{{USDtag.tag}} at position {{USDii}} out of {{len USDtag.items}}\\n' '{{end}}{{end}}{{end}}{{end}}'", "myapp:v2 at position 4 out of 5 myapp:v2.1 at position 2 out of 2 myapp:v2.1-may-2016 at position 0 out of 1", "error: error communicating with registry: Get https://172.30.30.30:5000/healthz: http: server gave HTTP response to HTTPS client", "error: error communicating with registry: Get 
http://172.30.30.30:5000/healthz: malformed HTTP response \"\\x15\\x03\\x01\\x00\\x02\\x02\" error: error communicating with registry: [Get https://172.30.30.30:5000/healthz: x509: certificate signed by unknown authority, Get http://172.30.30.30:5000/healthz: malformed HTTP response \"\\x15\\x03\\x01\\x00\\x02\\x02\"]", "error: error communicating with registry: Get https://172.30.30.30:5000/: x509: certificate signed by unknown authority", "oc patch configs.imageregistry.operator.openshift.io/cluster -p '{\"spec\":{\"readOnly\":true}}' --type=merge", "service_account=USD(oc get -n openshift-image-registry -o jsonpath='{.spec.template.spec.serviceAccountName}' deploy/image-registry)", "oc adm policy add-cluster-role-to-user system:image-pruner -z USD{service_account} -n openshift-image-registry", "oc -n openshift-image-registry exec pod/image-registry-3-vhndw -- /bin/sh -c '/usr/bin/dockerregistry -prune=check'", "oc -n openshift-image-registry exec pod/image-registry-3-vhndw -- /bin/sh -c 'REGISTRY_LOG_LEVEL=info /usr/bin/dockerregistry -prune=check'", "time=\"2017-06-22T11:50:25.066156047Z\" level=info msg=\"start prune (dry-run mode)\" distribution_version=\"v2.4.1+unknown\" kubernetes_version=v1.6.1+USDFormat:%hUSD openshift_version=unknown time=\"2017-06-22T11:50:25.092257421Z\" level=info msg=\"Would delete blob: sha256:00043a2a5e384f6b59ab17e2c3d3a3d0a7de01b2cabeb606243e468acc663fa5\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time=\"2017-06-22T11:50:25.092395621Z\" level=info msg=\"Would delete blob: sha256:0022d49612807cb348cabc562c072ef34d756adfe0100a61952cbcb87ee6578a\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time=\"2017-06-22T11:50:25.092492183Z\" level=info msg=\"Would delete blob: sha256:0029dd4228961086707e53b881e25eba0564fa80033fbbb2e27847a28d16a37c\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time=\"2017-06-22T11:50:26.673946639Z\" level=info msg=\"Would delete blob: sha256:ff7664dfc213d6cc60fd5c5f5bb00a7bf4a687e18e1df12d349a1d07b2cf7663\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time=\"2017-06-22T11:50:26.674024531Z\" level=info msg=\"Would delete blob: sha256:ff7a933178ccd931f4b5f40f9f19a65be5eeeec207e4fad2a5bafd28afbef57e\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time=\"2017-06-22T11:50:26.674675469Z\" level=info msg=\"Would delete blob: sha256:ff9b8956794b426cc80bb49a604a0b24a1553aae96b930c6919a6675db3d5e06\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 Would delete 13374 blobs Would free up 2.835 GiB of disk space Use -prune=delete to actually delete the data", "oc -n openshift-image-registry exec pod/image-registry-3-vhndw -- /bin/sh -c '/usr/bin/dockerregistry -prune=delete'", "Deleted 13374 blobs Freed up 2.835 GiB of disk space", "oc patch configs.imageregistry.operator.openshift.io/cluster -p '{\"spec\":{\"readOnly\":false}}' --type=merge" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/building_applications/pruning-objects
Chapter 1. Introduction
Chapter 1. Introduction JBoss EAP 7 can be used with the Microsoft Azure platform, as long as you use it within the specific supported configurations for running JBoss EAP in Azure. If you are configuring a clustered JBoss EAP environment, you must apply the specific configurations necessary to use JBoss EAP clustering features in Azure. This guide details the supported configurations of using JBoss EAP in Microsoft Azure, as well as the specific JBoss EAP configuration required to enable JBoss EAP clustering in Azure. All other JBoss EAP features not mentioned in this guide operate normally in Azure as with any other JBoss EAP installation. See the other JBoss EAP documentation for non-Azure-specific configuration instructions. 1.1. Subscription models for JBoss EAP on Azure You can choose between two subscription models for deploying JBoss EAP on Azure: bring your own subscription (BYOS) and pay-as-you-go (PAYG). The following are the differences between the two offerings: BYOS If you already have an existing Red Hat subscription, you can use it for deploying JBoss EAP on Azure with the BYOS model. For more information, see About Red Hat Cloud Access . PAYG If you do not have an existing Red Hat subscription, you can choose to pay based on the computing resources that you use with the PAYG model. 1.2. About Red Hat Cloud Access If you have an existing Red Hat subscription, Red Hat Cloud Access provides support for JBoss EAP on Red Hat certified cloud infrastructure providers, such as Amazon EC2 and Microsoft Azure. Red Hat Cloud Access allows you to cost-effectively move your subscriptions between traditional servers and public cloud-based resources. You can find more information about Red Hat Cloud Access on the Customer Portal .
null
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/using_jboss_eap_in_microsoft_azure/introduction
4.10. Maintaining SELinux Labels
4.10. Maintaining SELinux Labels These sections describe what happens to SELinux contexts when copying, moving, and archiving files and directories. They also explain how to preserve contexts when copying and archiving. 4.10.1. Copying Files and Directories When a file or directory is copied, a new file or directory is created if it does not exist. That new file or directory's context is based on default-labeling rules, not the original file or directory's context unless options were used to preserve the original context. For example, files created in user home directories are labeled with the user_home_t type: If such a file is copied to another directory, such as /etc , the new file is created in accordance with default-labeling rules for /etc . Copying a file without additional options may not preserve the original context: When file1 is copied to /etc , if /etc/file1 does not exist, /etc/file1 is created as a new file. As shown in the example above, /etc/file1 is labeled with the etc_t type, in accordance with default-labeling rules. When a file is copied over an existing file, the existing file's context is preserved, unless the user specified cp options to preserve the context of the original file, such as --preserve=context . SELinux policy may prevent contexts from being preserved during copies. Procedure 4.11. Copying Without Preserving SELinux Contexts This procedure shows that when copying a file with the cp command, if no options are given, the type is inherited from the target parent directory. Create a file in a user's home directory. The file is labeled with the user_home_t type: The /var/www/html/ directory is labeled with the httpd_sys_content_t type, as shown with the following command: When file1 is copied to /var/www/html/ , it inherits the httpd_sys_content_t type: Procedure 4.12. Preserving SELinux Contexts When Copying This procedure shows how to use the --preserve=context option to preserve contexts when copying. Create a file in a user's home directory. The file is labeled with the user_home_t type: The /var/www/html/ directory is labeled with the httpd_sys_content_t type, as shown with the following command: Using the --preserve=context option preserves SELinux contexts during copy operations. As shown below, the user_home_t type of file1 was preserved when the file was copied to /var/www/html/ : Procedure 4.13. Copying and Changing the Context This procedure shows how to use the --context option to change the destination copy's context. The following example is performed in the user's home directory: Create a file in a user's home directory. The file is labeled with the user_home_t type: Use the --context option to define the SELinux context: Without --context , file2 would be labeled with the unconfined_u:object_r:user_home_t context: Procedure 4.14. Copying a File Over an Existing File This procedure shows that when a file is copied over an existing file, the existing file's context is preserved unless an option is used to preserve contexts. As root, create a new file, file1 , in the /etc directory. As shown below, the file is labeled with the etc_t type: Create another file, file2 , in the /tmp directory. As shown below, the file is labeled with the user_tmp_t type: Overwrite file1 with file2 : After copying, the following command shows file1 labeled with the etc_t type, not the user_tmp_t type from /tmp/file2 that replaced /etc/file1 : Important Copy files and directories, rather than moving them.
This helps ensure they are labeled with the correct SELinux contexts. Incorrect SELinux contexts can prevent processes from accessing such files and directories. 4.10.2. Moving Files and Directories Files and directories keep their current SELinux context when they are moved. In many cases, this is incorrect for the location they are being moved to. The following example demonstrates moving a file from a user's home directory to the /var/www/html/ directory, which is used by the Apache HTTP Server. Since the file is moved, it does not inherit the correct SELinux context: Procedure 4.15. Moving Files and Directories Change into your home directory and create a file in it. The file is labeled with the user_home_t type: Enter the following command to view the SELinux context of the /var/www/html/ directory: By default, /var/www/html/ is labeled with the httpd_sys_content_t type. Files and directories created under /var/www/html/ inherit this type, and as such, they are labeled with this type. As root, move file1 to /var/www/html/ . Since this file is moved, it keeps its current user_home_t type: By default, the Apache HTTP Server cannot read files that are labeled with the user_home_t type. If all files comprising a web page are labeled with the user_home_t type, or another type that the Apache HTTP Server cannot read, permission is denied when attempting to access them using web browsers, such as Mozilla Firefox . Important Moving files and directories with the mv command may result in the incorrect SELinux context, preventing processes, such as the Apache HTTP Server and Samba, from accessing such files and directories. 4.10.3. Checking the Default SELinux Context Use the matchpathcon utility to check if files and directories have the correct SELinux context. This utility queries the system policy and then provides the default security context associated with the file path. [6] The following example demonstrates using matchpathcon to verify that files in the /var/www/html/ directory are labeled correctly: Procedure 4.16. Checking the Default SELinux Context with matchpathcon As the root user, create three files ( file1 , file2 , and file3 ) in the /var/www/html/ directory. These files inherit the httpd_sys_content_t type from /var/www/html/ : As root, change the file1 type to samba_share_t . Note that the Apache HTTP Server cannot read files or directories labeled with the samba_share_t type. The matchpathcon -V option compares the current SELinux context to the correct, default context in SELinux policy. Enter the following command to check all files in the /var/www/html/ directory: The following output from the matchpathcon command explains that file1 is labeled with the samba_share_t type, but should be labeled with the httpd_sys_content_t type: To resolve the label problem and allow the Apache HTTP Server access to file1 , as root, use the restorecon utility: 4.10.4. Archiving Files with tar The tar utility does not retain extended attributes by default. Since SELinux contexts are stored in extended attributes, contexts can be lost when archiving files. Use the tar --selinux command to create archives that retain contexts and to restore files from the archives. If a tar archive contains files without extended attributes, or if you want the extended attributes to match the system defaults, use the restorecon utility: Note that depending on the directory, you may need to be the root user to run the restorecon command.
The following example demonstrates creating a tar archive that retains SELinux contexts: Procedure 4.17. Creating a tar Archive Change to the /var/www/html/ directory and view its SELinux context: As root, create three files ( file1 , file2 , and file3 ) in /var/www/html/ . These files inherit the httpd_sys_content_t type from /var/www/html/ : As root, enter the following command to create a tar archive named test.tar . Use the --selinux option to retain the SELinux context: As root, create a new directory named test/ , and then allow all users full access to it: Copy the test.tar file into test/ : Change into the test/ directory. Once in this directory, enter the following command to extract the tar archive. Specify the --selinux option again, otherwise the SELinux context will be changed to default_t : View the SELinux contexts. The httpd_sys_content_t type has been retained, rather than being changed to default_t , which would have happened had the --selinux option not been used: If the test/ directory is no longer required, as root, enter the following command to remove it, as well as all files in it: See the tar (1) manual page for further information about tar , such as the --xattrs option that retains all extended attributes. 4.10.5. Archiving Files with star The star utility does not retain extended attributes by default. Since SELinux contexts are stored in extended attributes, contexts can be lost when archiving files. Use the star -xattr -H=exustar command to create archives that retain contexts. The star package is not installed by default. To install star , run the yum install star command as the root user. The following example demonstrates creating a star archive that retains SELinux contexts: Procedure 4.18. Creating a star Archive As root, create three files ( file1 , file2 , and file3 ) in the /var/www/html/ directory. These files inherit the httpd_sys_content_t type from /var/www/html/ : Change into the /var/www/html/ directory. Once in this directory, as root, enter the following command to create a star archive named test.star : As root, create a new directory named test/ , and then allow all users full access to it: Enter the following command to copy the test.star file into test/ : Change into the test/ directory. Once in this directory, enter the following command to extract the star archive: View the SELinux contexts. The httpd_sys_content_t type has been retained, rather than being changed to default_t , which would have happened had the -xattr -H=exustar option not been used: If the test/ directory is no longer required, as root, enter the following command to remove it, as well as all files in it: If star is no longer required, as root, remove the package: See the star (1) manual page for further information about star . [6] See the matchpathcon (8) manual page for further information about matchpathcon .
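The --xattrs approach mentioned in the tar section above can be sketched as follows. This is an illustrative example rather than text from the original procedures, and it assumes a GNU tar build that supports the --xattrs-include option:
~]# tar --xattrs --xattrs-include=security.selinux -cf test.tar file{1,2,3}
~]# tar --xattrs --xattrs-include=security.selinux -xvf test.tar
Restricting the included attributes to security.selinux keeps the archive focused on SELinux contexts; omit the --xattrs-include option to retain all extended attributes.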
[ "~]USD touch file1", "~]USD ls -Z file1 -rw-rw-r-- user1 group1 unconfined_u:object_r:user_home_t:s0 file1", "~]USD ls -Z file1 -rw-rw-r-- user1 group1 unconfined_u:object_r:user_home_t:s0 file1", "~]# cp file1 /etc/", "~]USD ls -Z /etc/file1 -rw-r--r-- root root unconfined_u:object_r:etc_t:s0 /etc/file1", "~]USD touch file1", "~]USD ls -Z file1 -rw-rw-r-- user1 group1 unconfined_u:object_r:user_home_t:s0 file1", "~]USD ls -dZ /var/www/html/ drwxr-xr-x root root system_u:object_r:httpd_sys_content_t:s0 /var/www/html/", "~]# cp file1 /var/www/html/", "~]USD ls -Z /var/www/html/file1 -rw-r--r-- root root unconfined_u:object_r:httpd_sys_content_t:s0 /var/www/html/file1", "~]USD touch file1", "~]USD ls -Z file1 -rw-rw-r-- user1 group1 unconfined_u:object_r:user_home_t:s0 file1", "~]USD ls -dZ /var/www/html/ drwxr-xr-x root root system_u:object_r:httpd_sys_content_t:s0 /var/www/html/", "~]# cp --preserve=context file1 /var/www/html/", "~]USD ls -Z /var/www/html/file1 -rw-r--r-- root root unconfined_u:object_r:user_home_t:s0 /var/www/html/file1", "~]USD touch file1", "~]USD ls -Z file1 -rw-rw-r-- user1 group1 unconfined_u:object_r:user_home_t:s0 file1", "~]USD cp --context=system_u:object_r:samba_share_t:s0 file1 file2", "~]USD ls -Z file1 file2 -rw-rw-r-- user1 group1 unconfined_u:object_r:user_home_t:s0 file1 -rw-rw-r-- user1 group1 system_u:object_r:samba_share_t:s0 file2", "~]# touch /etc/file1", "~]USD ls -Z /etc/file1 -rw-r--r-- root root unconfined_u:object_r:etc_t:s0 /etc/file1", "~]USD touch /tmp/file2", "~USD ls -Z /tmp/file2 -rw-r--r-- root root unconfined_u:object_r:user_tmp_t:s0 /tmp/file2", "~]# cp /tmp/file2 /etc/file1", "~]USD ls -Z /etc/file1 -rw-r--r-- root root unconfined_u:object_r:etc_t:s0 /etc/file1", "~]USD touch file1", "~]USD ls -Z file1 -rw-rw-r-- user1 group1 unconfined_u:object_r:user_home_t:s0 file1", "~]USD ls -dZ /var/www/html/ drwxr-xr-x root root system_u:object_r:httpd_sys_content_t:s0 /var/www/html/", "~]# mv file1 /var/www/html/", "~]# ls -Z /var/www/html/file1 -rw-rw-r-- user1 group1 unconfined_u:object_r:user_home_t:s0 /var/www/html/file1", "~]# touch /var/www/html/file{1,2,3}", "~]# ls -Z /var/www/html/ -rw-r--r-- root root unconfined_u:object_r:httpd_sys_content_t:s0 file1 -rw-r--r-- root root unconfined_u:object_r:httpd_sys_content_t:s0 file2 -rw-r--r-- root root unconfined_u:object_r:httpd_sys_content_t:s0 file3", "~]# chcon -t samba_share_t /var/www/html/file1", "~]USD matchpathcon -V /var/www/html/* /var/www/html/file1 has context unconfined_u:object_r:samba_share_t:s0, should be system_u:object_r:httpd_sys_content_t:s0 /var/www/html/file2 verified. /var/www/html/file3 verified.", "/var/www/html/file1 has context unconfined_u:object_r:samba_share_t:s0, should be system_u:object_r:httpd_sys_content_t:s0", "~]# restorecon -v /var/www/html/file1 restorecon reset /var/www/html/file1 context unconfined_u:object_r:samba_share_t:s0->system_u:object_r:httpd_sys_content_t:s0", "~]USD tar -xvf archive.tar | restorecon -f -", "~]USD cd /var/www/html/", "html]USD ls -dZ /var/www/html/ drwxr-xr-x. 
root root system_u:object_r:httpd_sys_content_t:s0 .", "html]# touch file{1,2,3}", "html]USD ls -Z /var/www/html/ -rw-r--r-- root root unconfined_u:object_r:httpd_sys_content_t:s0 file1 -rw-r--r-- root root unconfined_u:object_r:httpd_sys_content_t:s0 file2 -rw-r--r-- root root unconfined_u:object_r:httpd_sys_content_t:s0 file3", "html]# tar --selinux -cf test.tar file{1,2,3}", "~]# mkdir /test", "~]# chmod 777 /test/", "~]USD cp /var/www/html/test.tar /test/", "~]USD cd /test/", "test]USD tar --selinux -xvf test.tar", "test]USD ls -lZ /test/ -rw-r--r-- user1 group1 unconfined_u:object_r:httpd_sys_content_t:s0 file1 -rw-r--r-- user1 group1 unconfined_u:object_r:httpd_sys_content_t:s0 file2 -rw-r--r-- user1 group1 unconfined_u:object_r:httpd_sys_content_t:s0 file3 -rw-r--r-- user1 group1 unconfined_u:object_r:default_t:s0 test.tar", "~]# rm -ri /test/", "~]# touch /var/www/html/file{1,2,3}", "~]# ls -Z /var/www/html/ -rw-r--r-- root root unconfined_u:object_r:httpd_sys_content_t:s0 file1 -rw-r--r-- root root unconfined_u:object_r:httpd_sys_content_t:s0 file2 -rw-r--r-- root root unconfined_u:object_r:httpd_sys_content_t:s0 file3", "~]USD cd /var/www/html", "html]# star -xattr -H=exustar -c -f=test.star file{1,2,3} star: 1 blocks + 0 bytes (total of 10240 bytes = 10.00k).", "~]# mkdir /test", "~]# chmod 777 /test/", "~]USD cp /var/www/html/test.star /test/", "~]USD cd /test/", "test]USD star -x -f=test.star star: 1 blocks + 0 bytes (total of 10240 bytes = 10.00k).", "~]USD ls -lZ /test/ -rw-r--r-- user1 group1 unconfined_u:object_r:httpd_sys_content_t:s0 file1 -rw-r--r-- user1 group1 unconfined_u:object_r:httpd_sys_content_t:s0 file2 -rw-r--r-- user1 group1 unconfined_u:object_r:httpd_sys_content_t:s0 file3 -rw-r--r-- user1 group1 unconfined_u:object_r:default_t:s0 test.star", "~]# rm -ri /test/", "~]# yum remove star" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/selinux_users_and_administrators_guide/sect-Security-Enhanced_Linux-Working_with_SELinux-Maintaining_SELinux_Labels_
Appendix A. Using your subscription
Appendix A. Using your subscription AMQ is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal. A.1. Accessing your account Procedure Go to access.redhat.com . If you do not already have an account, create one. Log in to your account. A.2. Activating a subscription Procedure Go to access.redhat.com . Navigate to My Subscriptions . Navigate to Activate a subscription and enter your 16-digit activation number. A.3. Downloading release files To access .zip, .tar.gz, and other release files, use the customer portal to find the relevant files for download. If you are using RPM packages or the Red Hat Maven repository, this step is not required. Procedure Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads . Locate the Red Hat AMQ entries in the INTEGRATION AND AUTOMATION category. Select the desired AMQ product. The Software Downloads page opens. Click the Download link for your component. A.4. Registering your system for packages To install RPM packages for this product on Red Hat Enterprise Linux, your system must be registered. If you are using downloaded release files, this step is not required. Procedure Go to access.redhat.com . Navigate to Registration Assistant . Select your OS version and continue to the page. Use the listed command in your system terminal to complete the registration. For more information about registering your system, see one of the following resources: Red Hat Enterprise Linux 7 - Registering the system and managing subscriptions Red Hat Enterprise Linux 8 - Registering the system and managing subscriptions
null
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_the_amq_openwire_jms_client/using_your_subscription
Chapter 4. Modifying a compute machine set
Chapter 4. Modifying a compute machine set You can modify a compute machine set, such as adding labels, changing the instance type, or changing block storage. Note If you need to scale a compute machine set without making other changes, see Manually scaling a compute machine set . 4.1. Modifying a compute machine set by using the CLI You can modify the configuration of a compute machine set, and then propagate the changes to the machines in your cluster by using the CLI. By updating the compute machine set configuration, you can enable features or change the properties of the machines it creates. When you modify a compute machine set, your changes only apply to compute machines that are created after you save the updated MachineSet custom resource (CR). The changes do not affect existing machines. Note Changes made in the underlying cloud provider are not reflected in the Machine or MachineSet CRs. To adjust instance configuration in cluster-managed infrastructure, use the cluster-side resources. You can replace the existing machines with new ones that reflect the updated configuration by scaling the compute machine set to create twice the number of replicas and then scaling it down to the original number of replicas. If you need to scale a compute machine set without making other changes, you do not need to delete the machines. Note By default, the OpenShift Container Platform router pods are deployed on compute machines. Because the router is required to access some cluster resources, including the web console, do not scale the compute machine set to 0 unless you first relocate the router pods. The output examples in this procedure use the values for an AWS cluster. Prerequisites Your OpenShift Container Platform cluster uses the Machine API. You are logged in to the cluster as an administrator by using the OpenShift CLI ( oc ). Procedure List the compute machine sets in your cluster by running the following command: USD oc get machinesets.machine.openshift.io -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE <compute_machine_set_name_1> 1 1 1 1 55m <compute_machine_set_name_2> 1 1 1 1 55m Edit a compute machine set by running the following command: USD oc edit machinesets.machine.openshift.io <machine_set_name> \ -n openshift-machine-api Note the value of the spec.replicas field, because you need it when scaling the machine set to apply the changes. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> namespace: openshift-machine-api spec: replicas: 2 1 # ... 1 The examples in this procedure show a compute machine set that has a replicas value of 2 . Update the compute machine set CR with the configuration options that you want and save your changes. 
List the machines that are managed by the updated compute machine set by running the following command: USD oc get machines.machine.openshift.io \ -n openshift-machine-api \ -l machine.openshift.io/cluster-api-machineset=<machine_set_name> Example output for an AWS cluster NAME PHASE TYPE REGION ZONE AGE <machine_name_original_1> Running m6i.xlarge us-west-1 us-west-1a 4h <machine_name_original_2> Running m6i.xlarge us-west-1 us-west-1a 4h For each machine that is managed by the updated compute machine set, set the delete annotation by running the following command: USD oc annotate machine.machine.openshift.io/<machine_name_original_1> \ -n openshift-machine-api \ machine.openshift.io/delete-machine="true" To create replacement machines with the new configuration, scale the compute machine set to twice the number of replicas by running the following command: USD oc scale --replicas=4 \ 1 machineset.machine.openshift.io <machine_set_name> \ -n openshift-machine-api 1 The original example value of 2 is doubled to 4 . List the machines that are managed by the updated compute machine set by running the following command: USD oc get machines.machine.openshift.io \ -n openshift-machine-api \ -l machine.openshift.io/cluster-api-machineset=<machine_set_name> Example output for an AWS cluster NAME PHASE TYPE REGION ZONE AGE <machine_name_original_1> Running m6i.xlarge us-west-1 us-west-1a 4h <machine_name_original_2> Running m6i.xlarge us-west-1 us-west-1a 4h <machine_name_updated_1> Provisioned m6i.xlarge us-west-1 us-west-1a 55s <machine_name_updated_2> Provisioning m6i.xlarge us-west-1 us-west-1a 55s When the new machines are in the Running phase, you can scale the compute machine set to the original number of replicas. To remove the machines that were created with the old configuration, scale the compute machine set to the original number of replicas by running the following command: USD oc scale --replicas=2 \ 1 machineset.machine.openshift.io <machine_set_name> \ -n openshift-machine-api 1 The original example value of 2 . Verification To verify that a machine created by the updated machine set has the correct configuration, examine the relevant fields in the CR for one of the new machines by running the following command: USD oc describe machine.machine.openshift.io <machine_name_updated_1> \ -n openshift-machine-api To verify that the compute machines without the updated configuration are deleted, list the machines that are managed by the updated compute machine set by running the following command: USD oc get machines.machine.openshift.io \ -n openshift-machine-api \ -l machine.openshift.io/cluster-api-machineset=<machine_set_name> Example output while deletion is in progress for an AWS cluster NAME PHASE TYPE REGION ZONE AGE <machine_name_original_1> Deleting m6i.xlarge us-west-1 us-west-1a 4h <machine_name_original_2> Deleting m6i.xlarge us-west-1 us-west-1a 4h <machine_name_updated_1> Running m6i.xlarge us-west-1 us-west-1a 5m41s <machine_name_updated_2> Running m6i.xlarge us-west-1 us-west-1a 5m41s Example output when deletion is complete for an AWS cluster NAME PHASE TYPE REGION ZONE AGE <machine_name_updated_1> Running m6i.xlarge us-west-1 us-west-1a 6m30s <machine_name_updated_2> Running m6i.xlarge us-west-1 us-west-1a 6m30s Additional resources Lifecycle hooks for the machine deletion phase Scaling a compute machine set manually Controlling pod placement using the scheduler
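As an illustration of the earlier step that updates the compute machine set CR, the following sketch shows one possible edit for an AWS cluster: changing the instance type that new machines use. The providerSpec path and the m6i.2xlarge value are examples for illustration, not values taken from this procedure.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: <machine_set_name>
  namespace: openshift-machine-api
spec:
  replicas: 2
  template:
    spec:
      providerSpec:
        value:
          instanceType: m6i.2xlarge # example change; machines created after saving use this type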
[ "oc get machinesets.machine.openshift.io -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE <compute_machine_set_name_1> 1 1 1 1 55m <compute_machine_set_name_2> 1 1 1 1 55m", "oc edit machinesets.machine.openshift.io <machine_set_name> -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> namespace: openshift-machine-api spec: replicas: 2 1", "oc get machines.machine.openshift.io -n openshift-machine-api -l machine.openshift.io/cluster-api-machineset=<machine_set_name>", "NAME PHASE TYPE REGION ZONE AGE <machine_name_original_1> Running m6i.xlarge us-west-1 us-west-1a 4h <machine_name_original_2> Running m6i.xlarge us-west-1 us-west-1a 4h", "oc annotate machine.machine.openshift.io/<machine_name_original_1> -n openshift-machine-api machine.openshift.io/delete-machine=\"true\"", "oc scale --replicas=4 \\ 1 machineset.machine.openshift.io <machine_set_name> -n openshift-machine-api", "oc get machines.machine.openshift.io -n openshift-machine-api -l machine.openshift.io/cluster-api-machineset=<machine_set_name>", "NAME PHASE TYPE REGION ZONE AGE <machine_name_original_1> Running m6i.xlarge us-west-1 us-west-1a 4h <machine_name_original_2> Running m6i.xlarge us-west-1 us-west-1a 4h <machine_name_updated_1> Provisioned m6i.xlarge us-west-1 us-west-1a 55s <machine_name_updated_2> Provisioning m6i.xlarge us-west-1 us-west-1a 55s", "oc scale --replicas=2 \\ 1 machineset.machine.openshift.io <machine_set_name> -n openshift-machine-api", "oc describe machine.machine.openshift.io <machine_name_updated_1> -n openshift-machine-api", "oc get machines.machine.openshift.io -n openshift-machine-api -l machine.openshift.io/cluster-api-machineset=<machine_set_name>", "NAME PHASE TYPE REGION ZONE AGE <machine_name_original_1> Deleting m6i.xlarge us-west-1 us-west-1a 4h <machine_name_original_2> Deleting m6i.xlarge us-west-1 us-west-1a 4h <machine_name_updated_1> Running m6i.xlarge us-west-1 us-west-1a 5m41s <machine_name_updated_2> Running m6i.xlarge us-west-1 us-west-1a 5m41s", "NAME PHASE TYPE REGION ZONE AGE <machine_name_updated_1> Running m6i.xlarge us-west-1 us-west-1a 6m30s <machine_name_updated_2> Running m6i.xlarge us-west-1 us-west-1a 6m30s" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/machine_management/modifying-machineset
1.2. Default Cgroup Hierarchies
1.2. Default Cgroup Hierarchies By default, systemd automatically creates a hierarchy of slice , scope and service units to provide a unified structure for the cgroup tree. With the systemctl command, you can further modify this structure by creating custom slices, as shown in Section 2.1, "Creating Control Groups" . Also, systemd automatically mounts hierarchies for important kernel resource controllers (see Available Controllers in Red Hat Enterprise Linux 7 ) in the /sys/fs/cgroup/ directory. Warning The deprecated cgconfig tool from the libcgroup package is available to mount and handle hierarchies for controllers not yet supported by systemd (most notably the net-prio controller). Never use libcgroup tools to modify the default hierarchies mounted by systemd since it would lead to unexpected behavior. The libcgroup library will be removed in future versions of Red Hat Enterprise Linux. For more information on how to use cgconfig , see Chapter 3, Using libcgroup Tools . Systemd Unit Types All processes running on the system are child processes of the systemd init process. Systemd provides three unit types that are used for the purpose of resource control (for a complete list of systemd 's unit types, see the chapter called Managing Services with systemd in Red Hat Enterprise Linux 7 System Administrator's Guide ): Service - A process or a group of processes, which systemd started based on a unit configuration file. Services encapsulate the specified processes so that they can be started and stopped as one set. Services are named in the following way: name . service Where name stands for the name of the service. Scope - A group of externally created processes. Scopes encapsulate processes that are started and stopped by arbitrary processes through the fork() function and then registered by systemd at runtime. For instance, user sessions, containers, and virtual machines are treated as scopes. Scopes are named as follows: name . scope Here, name stands for the name of the scope. Slice - A group of hierarchically organized units. Slices do not contain processes; they organize a hierarchy in which scopes and services are placed. The actual processes are contained in scopes or in services. In this hierarchical tree, every name of a slice unit corresponds to the path to a location in the hierarchy. The dash (" - ") character acts as a separator of the path components. For example, if the name of a slice looks as follows: parent - name . slice it means that a slice called parent - name . slice is a subslice of the parent . slice . This slice can have its own subslice named parent - name - name2 . slice , and so on. There is one root slice denoted as: -.slice Service, scope, and slice units directly map to objects in the cgroup tree. When these units are activated, they map directly to cgroup paths built from the unit names. For example, the ex.service residing in the test-waldo.slice is mapped to the cgroup test.slice/test-waldo.slice/ex.service/ . Services, scopes, and slices are created manually by the system administrator or dynamically by programs. By default, the operating system defines a number of built-in services that are necessary to run the system. Also, there are four slices created by default: -.slice - the root slice; system.slice - the default place for all system services; user.slice - the default place for all user sessions; machine.slice - the default place for all virtual machines and Linux containers.
Note that all user sessions are automatically placed in a separate scope unit, as are virtual machines and container processes. Furthermore, all users are assigned an implicit subslice. Besides the above default configuration, the system administrator can define new slices and assign services and scopes to them. The following tree is a simplified example of a cgroup tree. This output was generated with the systemd-cgls command described in Section 2.4, "Obtaining Information about Control Groups" : As you can see, services and scopes contain processes and are placed in slices that do not contain processes of their own. The only exception is PID 1, which is located in the special systemd slice marked as -.slice . Also note that -.slice is not shown as it is implicitly identified with the root of the entire tree. Service and slice units can be configured with persistent unit files as described in Section 2.3.2, "Modifying Unit Files" , or created dynamically at runtime by API calls to PID 1 (see the section called "Online Documentation" for API reference). Scope units can be created only dynamically. Units created dynamically with API calls are transient and exist only during runtime. Transient units are released automatically as soon as they finish, get deactivated, or the system is rebooted.
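To make the mapping between unit names and cgroup paths concrete, the following sketch, which is not part of the original text, starts a transient service in a custom slice with the systemd-run command; the ex and test-waldo.slice names are arbitrary examples:
~]# systemd-run --unit=ex --slice=test-waldo.slice sleep 600
~]# systemctl status ex.service
The status output is expected to show the unit under the test.slice/test-waldo.slice/ex.service cgroup path, following the slice naming rules described above, and the transient unit is released once the command finishes.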
[ "├─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 20 ├─user.slice │ └─user-1000.slice │ └─session-1.scope │ ├─11459 gdm-session-worker [pam/gdm-password] │ ├─11471 gnome-session --session gnome-classic │ ├─11479 dbus-launch --sh-syntax --exit-with-session │ ├─11480 /bin/dbus-daemon --fork --print-pid 4 --print-address 6 --session │ │ └─system.slice ├─systemd-journald.service │ └─422 /usr/lib/systemd/systemd-journald ├─bluetooth.service │ └─11691 /usr/sbin/bluetoothd -n ├─systemd-localed.service │ └─5328 /usr/lib/systemd/systemd-localed ├─colord.service │ └─5001 /usr/libexec/colord ├─sshd.service │ └─1191 /usr/sbin/sshd -D │" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/resource_management_guide/sec-Default_Cgroup_Hierarchies
Chapter 13. Configuring Applications for Single Sign-On
Chapter 13. Configuring Applications for Single Sign-On Some common applications, such as browsers and email clients, can be configured to use Kerberos tickets, SSL certificates, or tokens as a means of authenticating users. The precise procedures to configure any application depend on that application itself. The examples in this chapter (Mozilla Thunderbird and Mozilla Firefox) are intended to give you an idea of how to configure a user application to use Kerberos or other credentials. 13.1. Configuring Firefox to Use Kerberos for Single Sign-On Firefox can use Kerberos for single sign-on (SSO) to intranet sites and other protected websites. For Firefox to use Kerberos, it first has to be configured to send Kerberos credentials to the appropriate KDC. Even after Firefox is configured to pass Kerberos credentials, it still requires a valid Kerberos ticket in order to use SSO. To generate a Kerberos ticket, use the kinit command and supply the user password for the user on the KDC. To configure Firefox to use Kerberos for SSO: In the address bar of Firefox, type about:config to display the list of current configuration options. In the Filter field, type negotiate to restrict the list of options. Double-click the network.negotiate-auth.trusted-uris entry. Enter the name of the domain against which to authenticate, including the preceding period (.). If you want to add multiple domains, enter them in a comma-separated list. Figure 13.1. Manual Firefox Configuration Important It is not recommended to configure delegation using the network.negotiate-auth.delegation-uris entry in the Firefox configuration options because this enables every Kerberos-aware server to act as the user. Note For more information, see Configuring the Browser for Kerberos Authentication in the Linux Domain Identity, Authentication, and Policy Guide .
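As a sketch of the resulting setting, the same preference can be pre-seeded in the user.js file of a Firefox profile; the .example.com value below is a placeholder domain, not a value from this guide:
user_pref("network.negotiate-auth.trusted-uris", ".example.com");
After Firefox is restarted, the value appears in about:config exactly as if it had been entered manually.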
[ "[jsmith@host ~] USD kinit Password for [email protected]:" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/system-level_authentication_guide/configuring_applications_for_sso
Red Hat Data Grid
Red Hat Data Grid Data Grid is a high-performance, distributed in-memory data store. Schemaless data structure Flexibility to store different objects as key-value pairs. Grid-based data storage Designed to distribute and replicate data across clusters. Elastic scaling Dynamically adjust the number of nodes to meet demand without service disruption. Data interoperability Store, retrieve, and query data in the grid from different endpoints.
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/data_grid_rest_api/red-hat-data-grid
1.2. Installing Querying for Red Hat JBoss Data Grid
1.2. Installing Querying for Red Hat JBoss Data Grid In Red Hat JBoss Data Grid, the JAR files required to perform queries are packaged within the JBoss Data Grid Library and Remote Client-Server mode downloads. For details about downloading and installing JBoss Data Grid, see the Getting Started Guide 's Download and Install JBoss Data Grid chapter. Warning The Infinispan query API directly exposes the Hibernate Search and the Lucene APIs and cannot be embedded within the infinispan-embedded-query.jar file. Do not include other versions of Hibernate Search and Lucene in the same deployment as infinispan-embedded-query . This action will cause classpath conflicts and result in unexpected behavior.
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/infinispan_query_guide/downloading_infinispan_query
6.2. Authentication Modules for Data Source Security
6.2. Authentication Modules for Data Source Security In some use cases, a user might need to supply credentials to data sources based on the logged-in user, rather than shared credentials for all logged-in users. To support this feature, JBoss EAP and JBoss Data Virtualization provide additional authentication modules to be used in conjunction with the main security domain: Caller Identity Login Module Role-Based Credential Map Identity Login Module
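A hedged sketch of how the Caller Identity Login Module might be wired into a JBoss EAP security domain is shown below; the domain name is arbitrary, and the PicketBox class name and module option should be verified against your JBoss EAP version before use:
<security-domain name="my-datasource-security" cache-type="default">
  <authentication>
    <!-- Passes the calling user's credentials through to the data source -->
    <login-module code="org.picketbox.datasource.security.CallerIdentityLoginModule" flag="required">
      <module-option name="password-stacking" value="useFirstPass"/>
    </login-module>
  </authentication>
</security-domain>
The data source then references this domain in its security-domain element instead of embedding shared credentials.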
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/security_guide/data_source_security
2.2. Client Access Control
2.2. Client Access Control libvirt 's client access control framework allows system administrators to set up fine-grained permission rules across client users, managed objects, and API operations. This allows client connections to be locked down to a minimal set of privileges. In the default configuration, the libvirtd daemon has three levels of access control: All connections start off in an unauthenticated state, where the only API operations allowed are those required to complete authentication. After successful authentication, a connection either has full, unrestricted access to all libvirt API calls, or is locked down to only "read only" operations, according to what socket the client connection originated on. The access control framework allows the administrator to define fine-grained permission rules for authenticated connections. Every API call in libvirt has a set of permissions that will be validated against the object being used. Further permissions will also be checked if certain flags are set in the API call. In addition to checks on the object passed in to an API call, some methods will filter their results. 2.2.1. Access Control Drivers The access control framework is designed as a pluggable system to enable future integration with arbitrary access control technologies. By default, the none driver is used, which performs no access control checks at all. Currently, libvirt provides support for using polkit as a real access control driver. To learn how to use the polkit access driver, see the configuration documentation . The access driver is configured in the /etc/libvirt/libvirtd.conf configuration file, using the access_drivers parameter. This parameter accepts an array of access control driver names. If more than one access driver is requested, then all must succeed in order for access to be granted. To enable 'polkit' as the driver, use the augtool command: To set the driver back to the default (no access control), enter the following command: For the changes made to libvirtd.conf to take effect, restart the libvirtd service. 2.2.2. Objects and Permissions libvirt applies access control to all the main object types in its API. Each object type, in turn, has a set of permissions defined. To determine what permissions are checked for a specific API call, consult the API reference manual documentation for the API in question. For the complete list of objects and permissions, see libvirt.org . 2.2.3. Security Concerns when Adding Block Devices to a Guest The host physical machine should not use file system labels to identify file systems in the fstab file, the initrd file or on the kernel command line. Doing so presents a security risk if guest virtual machines have write access to whole partitions or LVM volumes, because a guest virtual machine could potentially write a file-system label belonging to the host physical machine, to its own block device storage. Upon reboot of the host physical machine, the host physical machine could then mistakenly use the guest virtual machine's disk as a system disk, which would compromise the host physical machine system. It is preferable to use the UUID of a device to identify it in the /etc/fstab file, the /dev/initrd file, or on the kernel command line. Guest virtual machines should not be given write access to entire disks or block devices (for example, /dev/sdb ). Guest virtual machines with access to entire block devices may be able to modify volume labels, which can be used to compromise the host physical machine system.
Use partitions (for example, /dev/sdb1 ) or LVM volumes to prevent this problem. See LVM Administration with CLI Commands or LVM Configuration Examples for information on LVM administration and configuration examples. If you are using raw access to partitions, for example /dev/sdb1 or raw disks such as /dev/sdb, you should configure LVM to only scan disks that are safe, using the global_filter setting. See the Logical Volume Manager Administration Guide for an example of an LVM configuration script using the global_filter command.
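To illustrate the kind of fine-grained rule the access control framework makes possible once the polkit driver is enabled, the following sketch grants one user permission to start and stop domains and denies other domain operations. It is an assumption-based example, not text from this guide; verify the exact polkit action IDs against the libvirt access control documentation for your release before placing a rule like this in /etc/polkit-1/rules.d/ :
polkit.addRule(function(action, subject) {
    // Only consider libvirt fine-grained domain permissions for user "fred"
    if (action.id.indexOf("org.libvirt.api.domain.") == 0 && subject.user == "fred") {
        if (action.id == "org.libvirt.api.domain.start" ||
            action.id == "org.libvirt.api.domain.stop") {
            return polkit.Result.YES;
        }
        return polkit.Result.NO;
    }
});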
[ "augtool -s set '/files/etc/libvirt/libvirtd.conf/access_drivers[1]' polkit", "augtool -s rm /files/etc/libvirt/libvirtd.conf/access_drivers", "systemctl restart libvirtd.service" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_security_guide/sect-securing_the_host_physical_machine_and_improving_performance-client_access_control
Chapter 5. Customizing the storage service for HCI
Chapter 5. Customizing the storage service for HCI Red Hat OpenStack Platform (RHOSP) director provides the necessary heat templates and environment files to enable a basic Ceph Storage configuration. Director uses the /usr/share/openstack-tripleo-heat-templates/environments/cephadm/cephadm.yaml environment file to add additional configuration to the Ceph cluster deployed by openstack overcloud ceph deploy . For more information about containerized services in RHOSP, see Configuring a basic overcloud with the CLI tools in Director Installation and Usage . 5.1. Configuring Compute service resources for HCI Colocating Ceph OSD and Compute services on hyperconverged nodes risks resource contention between Red Hat Ceph Storage and Compute services. This occurs because the services are not aware of the colocation. Resource contention can result in service degradation, which offsets the benefits of hyperconvergence. Configuring the resources used by the Compute service mitigates resource contention and improves HCI performance. Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file: Add the NovaReservedHostMemory parameter to the ceph-overrides.yaml file. The following is a usage example. The NovaReservedHostMemory parameter overrides the default value of reserved_host_memory_mb in /etc/nova/nova.conf . This parameter is set to stop the Nova scheduler from giving memory that a Ceph OSD needs to a virtual machine. The example above reserves 5 GB per OSD for 10 OSDs per host in addition to the default reserved memory for the hypervisor. In an IOPS-optimized cluster, you can improve performance by reserving more memory per OSD. The 5 GB number is provided as a starting point that you can further refine as necessary. Important Include this file when you use the openstack overcloud deploy command. 5.2. Configuring a custom environment file Director applies basic, default settings to the deployed Red Hat Ceph Storage cluster. You must define additional configuration in a custom environment file. Procedure Log in to the undercloud as the stack user. Create a file to define the custom configuration. vi /home/stack/templates/storage-config.yaml Add a parameter_defaults section to the file. Add the custom configuration parameters. For more information about parameter definitions, see Overcloud Parameters . Note Parameters defined in a custom configuration file override any corresponding default settings in /usr/share/openstack-tripleo-heat-templates/environments/cephadm/cephadm.yaml . Save the file. Additional resources The custom configuration is applied during overcloud deployment. 5.3. Enabling Ceph Metadata Server The Ceph Metadata Server (MDS) runs the ceph-mds daemon. This daemon manages metadata related to files stored on CephFS. CephFS can be consumed natively or through the NFS protocol. Note Red Hat supports deploying Ceph MDS with the native CephFS and CephFS NFS back ends for the Shared File Systems service (manila). Procedure To enable Ceph MDS, use the following environment file when you deploy the overcloud: Note By default, Ceph MDS is deployed on the Controller node. You can deploy Ceph MDS on its own dedicated node. Additional resources Red Hat Ceph Storage File System Guide 5.4. Ceph Object Gateway object storage The Ceph Object Gateway (RGW) provides an interface to access object storage capabilities within a Red Hat Ceph Storage cluster. When you use director to deploy Ceph, director automatically enables RGW.
This is a direct replacement for the Object Storage service (swift). Services that normally use the Object Storage service can use RGW instead without additional configuration. The Object Storage service remains available as an object storage option for upgraded Ceph clusters. There is no requirement for a separate RGW environment file to enable it. For more information about environment files for other object storage options, see Section 5.5, "Deployment options for Red Hat OpenStack Platform object storage" . By default, Ceph Storage allows 250 placement groups per Object Storage Daemon (OSD). When you enable RGW, Ceph Storage creates the following six additional pools required by RGW: .rgw.root <zone_name>.rgw.control <zone_name>.rgw.meta <zone_name>.rgw.log <zone_name>.rgw.buckets.index <zone_name>.rgw.buckets.data Note In your deployment, <zone_name> is replaced with the name of the zone to which the pools belong. Additional resources For more information about RGW, see the Red Hat Ceph Storage Object Gateway Guide . For more information about using RGW instead of Swift, see the Block Storage Backup Guide . 5.5. Deployment options for Red Hat OpenStack Platform object storage There are three options for deploying overcloud object storage: Ceph Object Gateway (RGW) To deploy RGW as described in Section 5.4, "Ceph Object Gateway object storage" , include the following environment file during overcloud deployment: This environment file configures both Ceph block storage (RBD) and RGW. Object Storage service (swift) To deploy the Object Storage service (swift) instead of RGW, include the following environment file during overcloud deployment: The cephadm-rbd-only.yaml file configures Ceph RBD but not RGW. Note If you used the Object Storage service (swift) before upgrading your Red Hat Ceph Storage cluster, you can continue to use the Object Storage service (swift) instead of RGW by replacing the environments/ceph-ansible/ceph-ansible.yaml file with the environments/cephadm/cephadm-rbd-only.yaml during the upgrade. For more information, see Keeping Red Hat OpenStack Platform Updated . Red Hat OpenStack Platform does not support migration from the Object Storage service (swift) to Ceph Object Gateway (RGW). No object storage To deploy Ceph with RBD but not with RGW or the Object Storage service (swift), include the following environment files during overcloud deployment: The cephadm-rbd-only.yaml file configures RBD but not RGW. The disable-swift.yaml file ensures that the Object Storage service (swift) does not deploy. 5.6. Configuring the Block Storage Backup Service to use Ceph The Block Storage Backup service (cinder-backup) is disabled by default. It must be enabled to use it with Ceph. Procedure To enable the Block Storage Backup service (cinder-backup), use the following environment file when you deploy the overcloud: 5.7. Configuring multiple bonded interfaces for Ceph nodes Use a bonded interface to combine multiple NICs and add redundancy to a network connection. If you have enough NICs on your Ceph nodes, you can create multiple bonded interfaces on each node to expand redundancy capability. Use a bonded interface for each network connection the node requires. This provides both redundancy and a dedicated connection for each network. See Provisioning the overcloud networks in the Director Installation and Usage guide for information and procedures. 5.8. 
Initiating overcloud deployment for HCI To implement the changes you made to your Red Hat OpenStack Platform (RHOSP) environment, you must deploy the overcloud. Prerequisites Before undercloud installation, set generate_service_certificate=false in the undercloud.conf file. Otherwise, you must configure SSL/TLS on the overcloud as described in Enabling SSL/TLS on overcloud public endpoints in the Security and Hardening Guide . Note If you want to add Ceph Dashboard during your overcloud deployment, see Adding the Red Hat Ceph Storage Dashboard to an overcloud deployment in Deploying Red Hat Ceph Storage and Red Hat OpenStack Platform together with director . Procedure Deploy the overcloud. The deployment command requires additional arguments, for example: The example command uses the following options: --templates - Creates the overcloud from the default heat template collection, /usr/share/openstack-tripleo-heat-templates/ . -r /home/stack/templates/roles_data_custom.yaml - Specifies a customized roles definition file. -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/cephadm.yaml - Sets the director to finalize the previously deployed Ceph Storage cluster. This environment file deploys RGW by default. It also creates pools, keys, and daemons. -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/ceph-mds.yaml - Enables the Ceph Metadata Server. -e /usr/share/openstack-tripleo-heat-templates/environments/cinder-backup.yaml - Enables the Block Storage Backup service. -e /home/stack/templates/storage-config.yaml - Adds the environment file that contains your custom Ceph Storage configuration. -e /home/stack/templates/deployed-ceph.yaml - Adds the environment file that contains your Ceph cluster settings, as output by the openstack overcloud ceph deploy command run earlier. --ntp-server pool.ntp.org - Sets the NTP server. Note For a full list of options, run the openstack help overcloud deploy command. Additional resources For more information, see Configuring a basic overcloud with the CLI tools in the Director Installation and Usage guide.
[ "source ~/stackrc", "parameter_defaults: ComputeHCIParameters: NovaReservedHostMemory: 75000", "parameter_defaults: CinderEnableIscsiBackend: false CinderEnableRbdBackend: true CinderBackupBackend: ceph NovaEnableRbdBackend: true GlanceBackend: rbd", "/usr/share/openstack-tripleo-heat-templates/environments/cephadm/ceph-mds.yaml", "-e environments/cephadm/cephadm.yaml", "-e environments/cephadm/cephadm-rbd-only.yaml", "-e environments/cephadm/cephadm-rbd-only.yaml -e environments/disable-swift.yaml", "`/usr/share/openstack-tripleo-heat-templates/environments/cinder-backup.yaml`.", "openstack overcloud deploy --templates -r /home/stack/templates/roles_data_custom.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/cephadm.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/ceph-mds.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/cinder-backup.yaml -e /home/stack/templates/storage-config.yaml -e /home/stack/templates/deployed-ceph.yaml --ntp-server pool.ntp.org" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/hyperconverged_infrastructure_guide/assembly_customizing-hci-storage-service
5.278. redhat-rpm-config
5.278. redhat-rpm-config 5.278.1. RHBA-2012:0911 - redhat-rpm-config bug fix and enhancement update An updated redhat-rpm-config package that fixes several bugs and adds an enhancement is now available for Red Hat Enterprise Linux 6. The redhat-rpm-config package is used during building of RPM packages to apply various default distribution options determined by Red Hat. It also provides a few Red Hat RPM macro customizations, such as those used during the building of Driver Update packages. Bug Fixes BZ# 680029 Previously, the %kernel_module_package macro did not handle the "-v" and "-r" optional version and release override parameters correctly. Consequently, the specified version and release number were not used when the RPM package was built. This bug has been fixed and these parameters are now handled properly. BZ# 713638 Previously, a script, which generates "modalias"-style dependencies for Driver Update packages in Red Hat Enterprise Linux 6, was not executable and thus could not function properly. This bug has been fixed and these dependencies are now generated as expected. BZ# 713992 When the kabi-whitelists package was installed, the %kernel_module_package macro did not automatically perform a check against the Red Hat kernel ABI interface (kABI). Consequently, when a package was being built, the macro did not warn when the resulting modules used kernel symbols that were exported but not part of the kABI. With this update, the abi_check.py script has been added to perform the check and return a warning during the build process if kabi-whitelists is not installed, thus fixing this bug. BZ# 767738 In certain cases, the dependency-generation scripts that produce information about automatic kernel symbol dependencies during the build process of a Driver Update package generated incorrect dependencies. This bug has been fixed and dependencies are now generated correctly. Enhancement BZ# 652084 The path of the autoconf configuration script invoked by the %configure macro can now be customized by overriding the %_configure macro. In addition, this can be of use when building out-of-tree packages. Users of redhat-rpm-config are advised to upgrade to this updated package, which fixes these bugs and adds this enhancement.
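As a hedged illustration of the %_configure override described in BZ#652084, a spec file might drive an out-of-tree build as follows; the _build directory name and the configure flag are examples only:
# Point %configure at the configure script one level above the build directory
%global _configure ../configure

%build
mkdir -p _build
cd _build
%configure --disable-static
make %{?_smp_mflags}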
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/redhat-rpm-config
Chapter 79. JaegerTracing schema reference
Chapter 79. JaegerTracing schema reference The type JaegerTracing has been deprecated. Used in: KafkaBridgeSpec , KafkaConnectSpec , KafkaMirrorMaker2Spec , KafkaMirrorMakerSpec The type property is a discriminator that distinguishes use of the JaegerTracing type from OpenTelemetryTracing . It must have the value jaeger for the type JaegerTracing . Property Property type Description type string Must be jaeger .
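For context, a minimal sketch of the deprecated type in a custom resource is shown below; the resource name is an arbitrary example, and new deployments should use OpenTelemetryTracing instead:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
spec:
  # ...other KafkaConnect configuration...
  tracing:
    type: jaeger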
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-JaegerTracing-reference
3.4. Deploy a VDB via Management Console
3.4. Deploy a VDB via Management Console Prerequisites Red Hat JBoss Data Virtualization must be installed. The JBoss Enterprise Application Platform (EAP) server must be running. You must have a JBoss EAP Management User registered. Procedure 3.2. Deploy a VDB via Management Console Launch the console in a Web browser Open http://localhost:9990/console/ in a web browser. Authenticate to the console Enter your JBoss EAP administrator username and password when prompted. Open the Deployments panel In the Runtime view, select Server Manage Deployments . Add the virtual database Select the Add button. Select Choose File and choose the VDB file you want to deploy. Select to review the deployment names then select Save . Select En/Disable to enable the VDB.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/administration_and_configuration_guide/deploy_a_vdb_via_management_console
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Use the Create Issue form in Red Hat Jira to provide your feedback. The Jira issue is created in the Red Hat Satellite Jira project, where you can track its progress. Prerequisites Ensure you have registered a Red Hat account . Procedure Click the following link: Create Issue . If Jira displays a login error, log in and proceed after you are redirected to the form. Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create .
null
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/managing_security_compliance/providing-feedback-on-red-hat-documentation_security-compliance
B.3. LVM Profiles
B.3. LVM Profiles An LVM profile is a set of selected customizable configuration settings that can be used to achieve certain characteristics in various environments or uses. Normally, the name of the profile should reflect that environment or use. An LVM profile overrides the existing configuration. There are two groups of LVM profiles that LVM recognizes: command profiles and metadata profiles. A command profile is used to override selected configuration settings at the global LVM command level. The profile is applied at the beginning of LVM command execution and remains in effect throughout the execution of the command. You apply a command profile by specifying the --commandprofile ProfileName option when executing an LVM command. A metadata profile is used to override selected configuration settings at the volume group/logical volume level. It is applied independently for each volume group/logical volume that is being processed. As such, each volume group/logical volume can store the profile name used in its metadata so that the next time the volume group/logical volume is processed, the profile is applied automatically. If the volume group and any of its logical volumes have different profiles defined, the profile defined for the logical volume is preferred. You can attach a metadata profile to a volume group or logical volume by specifying the --metadataprofile ProfileName option when you create the volume group or logical volume with the vgcreate or lvcreate command. You can attach a metadata profile to, or detach it from, an existing volume group or logical volume by specifying the --metadataprofile ProfileName or the --detachprofile option of the lvchange or vgchange command. You can specify the -o vg_profile and -o lv_profile output options of the vgs and lvs commands to display the metadata profile currently attached to a volume group or a logical volume. The set of options allowed for command profiles and the set of options allowed for metadata profiles are mutually exclusive. The settings that belong to either of these two sets cannot be mixed together and the LVM tools will reject such profiles. LVM provides a few predefined configuration profiles. The LVM profiles are stored in the /etc/lvm/profile directory by default. This location can be changed by using the profile_dir setting in the /etc/lvm/lvm.conf file. Each profile configuration is stored in a ProfileName .profile file in the profile directory. When referencing the profile in an LVM command, the .profile suffix is omitted. You can create additional profiles with different values. For this purpose, LVM provides the command_profile_template.profile file (for command profiles) and the metadata_profile_template.profile file (for metadata profiles), which contain all settings that are customizable by profiles of each type. You can copy these template profiles and edit them as needed. Alternatively, you can use the lvmconfig command to generate a new profile for a given section of the profile file for either profile type. The following command creates a new command profile named ProfileName .profile consisting of the settings in section . The following command creates a new metadata profile named ProfileName .profile consisting of the settings in section . If the section is not specified, all settings that can be customized by a profile are reported.
[ "lvmconfig --file ProfileName .profile --type profilable-command section", "lvmconfig --file ProfileName .profile --type profilable-metadata section" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/logical_volume_manager_administration/lvm_profiles
Appendix A. Troubleshooting
Appendix A. Troubleshooting This section covers common troubleshooting scenarios encountered while using the dashboard. A.1. Dashboard response is slow If the dashboard response is slow, clear the browser cache and reload the dashboard. A.2. Dashboard shows a service is down The dashboard only mirrors the state of the cluster. If a service is shown as down, check the service status on the node itself, because the dashboard displays information collected by the node-exporter running on that node. The issue may lie in the cluster, the configuration, or the network. A.3. Task failure on Dashboard If any task performed on the dashboard fails, check the respective Ceph daemons. For more information, refer to the Troubleshooting Guide. A.4. Images cannot be viewed An image can only be viewed under Block > Images if the pool it is in has the RBD application enabled on it. Additional resources For more information, refer to the Troubleshooting Guide.
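Relating to the scenario in A.4, the following commands are a sketch, assuming the pool is named mypool (replace the name with your own pool); they check which applications are enabled on the pool and enable the rbd application so that its images appear under Block > Images.

# Check the applications enabled on the pool, then enable rbd if it is missing.
ceph osd pool application get mypool
ceph osd pool application enable mypool rbd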
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/dashboard_guide/troubleshooting-scenarios_dash
11.10. Additional Resources
11.10. Additional Resources The following are resources which explain more about network interfaces. Installed Documentation /usr/share/doc/initscripts- version /sysconfig.txt - A guide to available options for network configuration files, including IPv6 options not covered in this chapter. Online Resources http://linux-ip.net/gl/ip-cref/ - This document contains a wealth of information about the ip command, which can be used to manipulate routing tables, among other things. Red Hat Access Labs - The Red Hat Access Labs includes a " Network Bonding Helper " . See Also Appendix E, The proc File System - Describes the sysctl utility and the virtual files within the /proc/ directory, which contain networking parameters and statistics among other things.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s1-networkscripts-resources
Preface
Preface The Red Hat build of Cryostat is a container-native implementation of JDK Flight Recorder (JFR) that you can use to securely monitor the Java Virtual Machine (JVM) performance in workloads that run on an OpenShift Container Platform cluster. You can use Cryostat 2.4 to start, stop, retrieve, archive, import, and export JFR data for JVMs inside your containerized applications by using a web console or an HTTP API. Depending on your use case, you can store and analyze your recordings directly on your Red Hat OpenShift cluster by using the built-in tools that Cryostat provides or you can export recordings to an external monitoring application to perform a more in-depth analysis of your recorded data. Important Red Hat build of Cryostat is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope .
null
https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/2/html/configuring_advanced_cryostat_configurations/preface-cryostat
Chapter 7. Working with heat templates
Chapter 7. Working with heat templates The custom configurations in this guide use heat templates and environment files to define certain aspects of the overcloud. This chapter contains a basic introduction to the structure of heat templates in the context of Red Hat OpenStack Platform. The purpose of a template is to define and create a stack, which is a collection of resources that heat creates, and the configuration of the resources. Resources are objects in OpenStack and can include compute resources, network configurations, security groups, scaling rules, and custom resources. The structure of a heat template has three main sections: Parameters Parameters are settings passed to heat. Use these parameters to define and customize both default and non-default values. Define these parameters in the parameters section of a template. Resources Resources are the specific objects that you want to create and configure as part of a stack. OpenStack contains a set of core resources that span across all components. Define resources in the resources section of a template. Output These are values passed from heat after the stack creation. You can access these values either through the heat API or through the client tools. Define these values in the output section of a template. When heat processes a template, it creates a stack for the template and a set of child stacks for resource templates. This hierarchy of stacks descends from the main stack that you define with your template. You can view the stack hierarchy with the following command: 7.1. Core heat templates Red Hat OpenStack Platform contains a core heat template collection for the overcloud. You can find this collection in the /usr/share/openstack-tripleo-heat-templates directory. There are many heat templates and environment files in this collection. This section contains information about the main files and directories that you can use to customize your deployment. overcloud.j2.yaml This file is the main template file used to create the overcloud environment. This file uses Jinja2 syntax and iterates over certain sections in the template to create custom roles. The Jinja2 formatting is rendered into YAML during the overcloud deployment process. overcloud-resource-registry-puppet.j2.yaml This file is the main environment file that you use to create the overcloud environment. This file contains a set of configurations for Puppet modules on the overcloud image. After the director writes the overcloud image to each node, heat starts the Puppet configuration for each node using the resources registered in this environment file. This file uses Jinja2 syntax and iterates over certain sections in the template to create custom roles. The Jinja2 formatting is rendered into YAML during the overcloud deployment process. roles_data.yaml This file contains definitions of the roles in an overcloud, and maps services to each role. network_data.yaml This file contains definitions of the networks in an overcloud and their properties, including subnets, allocation pools, and VIP status. The default network_data.yaml file contains only the default networks: External, Internal Api, Storage, Storage Management, Tenant, and Management. You can create a custom network_data.yaml file and include it in the openstack overcloud deploy command with the -n option. 
plan-environment.yaml This file contains definitions of the metadata for your overcloud plan, including the plan name, the main template that you want to use, and environment files that you want to apply to the overcloud. capabilities-map.yaml This file contains a mapping of environment files for an overcloud plan. Use this file to describe and enable environment files in the director web UI. If you include custom environment files in the environments directory but do not define these files in the capabilities-map.yaml file, you can find these environment files in the Other sub-tab of the Overall Settings page on the web UI. environments This directory contains additional heat environment files that you can use with your overcloud creation. These environment files enable extra functions for your Red Hat OpenStack Platform environment. For example, you can use the cinder-netapp-config.yaml environment file to enable NetApp back end storage for the Block Storage service (cinder). If you include custom environment files in the environments directory but do not define these files in the capabilities-map.yaml file, you can find these environment files in the Other sub-tab of the Overall Settings page on the web UI. network This directory contains a set of heat templates that you can use to create isolated networks and ports. puppet This directory contains puppet templates. The overcloud-resource-registry-puppet.j2.yaml environment file uses the files in the puppet directory to drive the application of the Puppet configuration on each node. puppet/services This directory contains heat templates for all services in the composable service architecture. extraconfig This directory contains templates that you can use to enable extra functionality. For example, you can use the extraconfig/pre_deploy/rhel-registration directory to register your nodes with the Red Hat Content Delivery network, or with your own Red Hat Satellite server.
[ "heat stack-list --show-nested" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/standalone_deployment_guide/working-with-heat-templates
Chapter 2. CSIDriver [storage.k8s.io/v1]
Chapter 2. CSIDriver [storage.k8s.io/v1] Description CSIDriver captures information about a Container Storage Interface (CSI) volume driver deployed on the cluster. Kubernetes attach detach controller uses this object to determine whether attach is required. Kubelet uses this object to determine whether pod information needs to be passed on mount. CSIDriver objects are non-namespaced. Type object Required spec 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object metadata. metadata.Name indicates the name of the CSI driver that this object refers to; it MUST be the same name returned by the CSI GetPluginName() call for that driver. The driver name must be 63 characters or less, beginning and ending with an alphanumeric character ([a-z0-9A-Z]) with dashes (-), dots (.), and alphanumerics between. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object CSIDriverSpec is the specification of a CSIDriver. 2.1.1. .spec Description CSIDriverSpec is the specification of a CSIDriver. Type object Property Type Description attachRequired boolean attachRequired indicates this CSI volume driver requires an attach operation (because it implements the CSI ControllerPublishVolume() method), and that the Kubernetes attach detach controller should call the attach volume interface which checks the volumeattachment status and waits until the volume is attached before proceeding to mounting. The CSI external-attacher coordinates with CSI volume driver and updates the volumeattachment status when the attach operation is complete. If the CSIDriverRegistry feature gate is enabled and the value is specified to false, the attach operation will be skipped. Otherwise the attach operation will be called. This field is immutable. fsGroupPolicy string fsGroupPolicy defines if the underlying volume supports changing ownership and permission of the volume before being mounted. Refer to the specific FSGroupPolicy values for additional details. This field is immutable. Defaults to ReadWriteOnceWithFSType, which will examine each volume to determine if Kubernetes should modify ownership and permissions of the volume. With the default policy the defined fsGroup will only be applied if a fstype is defined and the volume's access mode contains ReadWriteOnce. podInfoOnMount boolean podInfoOnMount indicates this CSI volume driver requires additional pod information (like podName, podUID, etc.) during mount operations, if set to true. If set to false, pod information will not be passed on mount. Default is false. The CSI driver specifies podInfoOnMount as part of driver deployment. If true, Kubelet will pass pod information as VolumeContext in the CSI NodePublishVolume() calls. The CSI driver is responsible for parsing and validating the information passed in as VolumeContext. 
The following VolumeConext will be passed if podInfoOnMount is set to true. This list might grow, but the prefix will be used. "csi.storage.k8s.io/pod.name": pod.Name "csi.storage.k8s.io/pod.namespace": pod.Namespace "csi.storage.k8s.io/pod.uid": string(pod.UID) "csi.storage.k8s.io/ephemeral": "true" if the volume is an ephemeral inline volume defined by a CSIVolumeSource, otherwise "false" "csi.storage.k8s.io/ephemeral" is a new feature in Kubernetes 1.16. It is only required for drivers which support both the "Persistent" and "Ephemeral" VolumeLifecycleMode. Other drivers can leave pod info disabled and/or ignore this field. As Kubernetes 1.15 doesn't support this field, drivers can only support one mode when deployed on such a cluster and the deployment determines which mode that is, for example via a command line parameter of the driver. This field is immutable. requiresRepublish boolean requiresRepublish indicates the CSI driver wants NodePublishVolume being periodically called to reflect any possible change in the mounted volume. This field defaults to false. Note: After a successful initial NodePublishVolume call, subsequent calls to NodePublishVolume should only update the contents of the volume. New mount points will not be seen by a running container. seLinuxMount boolean seLinuxMount specifies if the CSI driver supports "-o context" mount option. When "true", the CSI driver must ensure that all volumes provided by this CSI driver can be mounted separately with different -o context options. This is typical for storage backends that provide volumes as filesystems on block devices or as independent shared volumes. Kubernetes will call NodeStage / NodePublish with "-o context=xyz" mount option when mounting a ReadWriteOncePod volume used in Pod that has explicitly set SELinux context. In the future, it may be expanded to other volume AccessModes. In any case, Kubernetes will ensure that the volume is mounted only with a single SELinux context. When "false", Kubernetes won't pass any special SELinux mount options to the driver. This is typical for volumes that represent subdirectories of a bigger shared filesystem. Default is "false". storageCapacity boolean storageCapacity indicates that the CSI volume driver wants pod scheduling to consider the storage capacity that the driver deployment will report by creating CSIStorageCapacity objects with capacity information, if set to true. The check can be enabled immediately when deploying a driver. In that case, provisioning new volumes with late binding will pause until the driver deployment has published some suitable CSIStorageCapacity object. Alternatively, the driver can be deployed with the field unset or false and it can be flipped later when storage capacity information has been published. This field was immutable in Kubernetes ⇐ 1.22 and now is mutable. tokenRequests array tokenRequests indicates the CSI driver needs pods' service account tokens it is mounting volume for to do necessary authentication. Kubelet will pass the tokens in VolumeContext in the CSI NodePublishVolume calls. The CSI driver should parse and validate the following VolumeContext: "csi.storage.k8s.io/serviceAccount.tokens": { "<audience>": { "token": <token>, "expirationTimestamp": <expiration timestamp in RFC3339>, }, ... } Note: Audience in each TokenRequest should be different and at most one token is empty string. To receive a new token after expiry, RequiresRepublish can be used to trigger NodePublishVolume periodically. 
tokenRequests[] object TokenRequest contains parameters of a service account token. volumeLifecycleModes array (string) volumeLifecycleModes defines what kind of volumes this CSI volume driver supports. The default if the list is empty is "Persistent", which is the usage defined by the CSI specification and implemented in Kubernetes via the usual PV/PVC mechanism. The other mode is "Ephemeral". In this mode, volumes are defined inline inside the pod spec with CSIVolumeSource and their lifecycle is tied to the lifecycle of that pod. A driver has to be aware of this because it is only going to get a NodePublishVolume call for such a volume. For more information about implementing this mode, see https://kubernetes-csi.github.io/docs/ephemeral-local-volumes.html A driver can support one or more of these modes and more modes may be added in the future. This field is beta. This field is immutable. 2.1.2. .spec.tokenRequests Description tokenRequests indicates the CSI driver needs pods' service account tokens it is mounting volume for to do necessary authentication. Kubelet will pass the tokens in VolumeContext in the CSI NodePublishVolume calls. The CSI driver should parse and validate the following VolumeContext: "csi.storage.k8s.io/serviceAccount.tokens": { "<audience>": { "token": <token>, "expirationTimestamp": <expiration timestamp in RFC3339>, }, ... } Note: Audience in each TokenRequest should be different and at most one token is empty string. To receive a new token after expiry, RequiresRepublish can be used to trigger NodePublishVolume periodically. Type array 2.1.3. .spec.tokenRequests[] Description TokenRequest contains parameters of a service account token. Type object Required audience Property Type Description audience string audience is the intended audience of the token in "TokenRequestSpec". It will default to the audiences of kube apiserver. expirationSeconds integer expirationSeconds is the duration of validity of the token in "TokenRequestSpec". It has the same default value of "ExpirationSeconds" in "TokenRequestSpec". 2.2. API endpoints The following API endpoints are available: /apis/storage.k8s.io/v1/csidrivers DELETE : delete collection of CSIDriver GET : list or watch objects of kind CSIDriver POST : create a CSIDriver /apis/storage.k8s.io/v1/watch/csidrivers GET : watch individual changes to a list of CSIDriver. deprecated: use the 'watch' parameter with a list operation instead. /apis/storage.k8s.io/v1/csidrivers/{name} DELETE : delete a CSIDriver GET : read the specified CSIDriver PATCH : partially update the specified CSIDriver PUT : replace the specified CSIDriver /apis/storage.k8s.io/v1/watch/csidrivers/{name} GET : watch changes to an object of kind CSIDriver. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 2.2.1. /apis/storage.k8s.io/v1/csidrivers HTTP method DELETE Description delete collection of CSIDriver Table 2.1. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 2.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind CSIDriver Table 2.3. 
HTTP responses HTTP code Reponse body 200 - OK CSIDriverList schema 401 - Unauthorized Empty HTTP method POST Description create a CSIDriver Table 2.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.5. Body parameters Parameter Type Description body CSIDriver schema Table 2.6. HTTP responses HTTP code Reponse body 200 - OK CSIDriver schema 201 - Created CSIDriver schema 202 - Accepted CSIDriver schema 401 - Unauthorized Empty 2.2.2. /apis/storage.k8s.io/v1/watch/csidrivers HTTP method GET Description watch individual changes to a list of CSIDriver. deprecated: use the 'watch' parameter with a list operation instead. Table 2.7. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 2.2.3. /apis/storage.k8s.io/v1/csidrivers/{name} Table 2.8. Global path parameters Parameter Type Description name string name of the CSIDriver HTTP method DELETE Description delete a CSIDriver Table 2.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 2.10. HTTP responses HTTP code Reponse body 200 - OK CSIDriver schema 202 - Accepted CSIDriver schema 401 - Unauthorized Empty HTTP method GET Description read the specified CSIDriver Table 2.11. HTTP responses HTTP code Reponse body 200 - OK CSIDriver schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified CSIDriver Table 2.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.13. HTTP responses HTTP code Reponse body 200 - OK CSIDriver schema 201 - Created CSIDriver schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified CSIDriver Table 2.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.15. Body parameters Parameter Type Description body CSIDriver schema Table 2.16. HTTP responses HTTP code Reponse body 200 - OK CSIDriver schema 201 - Created CSIDriver schema 401 - Unauthorized Empty 2.2.4. /apis/storage.k8s.io/v1/watch/csidrivers/{name} Table 2.17. Global path parameters Parameter Type Description name string name of the CSIDriver HTTP method GET Description watch changes to an object of kind CSIDriver. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 2.18. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty
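Tying the fields above together, the following manifest is a sketch of a CSIDriver object; the driver name is a placeholder and must match whatever the actual driver returns from its CSI GetPluginName() call, and the field values shown are simply examples of the defaults and options described in this chapter.

# Sketch: a CSIDriver object using fields described in the spec above.
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: example.csi.vendor.com   # placeholder driver name
spec:
  attachRequired: true
  podInfoOnMount: false
  fsGroupPolicy: ReadWriteOnceWithFSType
  volumeLifecycleModes:
    - Persistent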
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/storage_apis/csidriver-storage-k8s-io-v1
function::sock_fam_str2num
function::sock_fam_str2num Name function::sock_fam_str2num - Given a protocol family name (string), return the corresponding protocol family number. Synopsis Arguments family The family name. Description Given a protocol family name (string), this function returns the corresponding protocol family number.
[ "function sock_fam_str2num:long(family:string)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-sock-fam-str2num
Chapter 3. Using realmd to Connect to an Active Directory Domain
Chapter 3. Using realmd to Connect to an Active Directory Domain The realmd system provides a clear and simple way to discover and join identity domains to achieve direct domain integration. It configures underlying Linux system services, such as SSSD or Winbind, to connect to the domain. Chapter 2, Using Active Directory as an Identity Provider for SSSD describes how to use the System Security Services Daemon (SSSD) on a local system and Active Directory as a back-end identity provider. Ensuring that the system is properly configured for this can be a complex task: there are a number of different configuration parameters for each possible identity provider and for SSSD itself. In addition, all domain information must be available in advance and then properly formatted in the SSSD configuration for SSSD to integrate the local system with AD. The realmd system simplifies that configuration. It can run a discovery search to identify available AD and Identity Management domains and then join the system to the domain, as well as set up the required client services used to connect to the given identity domain and manage user access. Additionally, because SSSD as an underlying service supports multiple domains, realmd can discover and support multiple domains as well. 3.1. Supported Domain Types and Clients The realmd system supports the following domain types: Microsoft Active Directory Red Hat Enterprise Linux Identity Management The following domain clients are supported by realmd : SSSD for both Red Hat Enterprise Linux Identity Management and Microsoft Active Directory Winbind for Microsoft Active Directory
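As a brief illustration of the discovery and join workflow that realmd provides, the following commands are a sketch; the domain name ad.example.com and the Administrator account are placeholders for your own environment.

# Discover the domain, join it, then confirm the configured realm.
realm discover ad.example.com
realm join --user=Administrator ad.example.com
realm list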
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/windows_integration_guide/ch-Configuring_Authentication
Installing on GCP
Installing on GCP OpenShift Container Platform 4.15 Installing OpenShift Container Platform on Google Cloud Platform Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_on_gcp/index
Deploying OpenShift Data Foundation using IBM Power
Deploying OpenShift Data Foundation using IBM Power Red Hat OpenShift Data Foundation 4.18 Instructions on deploying Red Hat OpenShift Data Foundation on IBM Power Red Hat Storage Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/deploying_openshift_data_foundation_using_ibm_power/index
Chapter 2. Integrating OpenStack Identity (keystone) with Red Hat Identity Manager (IdM)
Chapter 2. Integrating OpenStack Identity (keystone) with Red Hat Identity Manager (IdM) When you integrate OpenStack Identity (keystone) with Red Hat Identity Manager (IdM), OpenStack Identity authenticates certain Red Hat Identity Management (IdM) users but retains authorization settings and critical service accounts in the Identity Service database. As a result, Identity Service has read-only access to IdM for user account authentication, while retaining management over the privileges assigned to authenticated accounts. You can also use tripleo-ipa or novajoin to enroll your nodes with IdM. Note The configuration files for this integration are managed by Puppet. Therefore, any custom configuration that you add might be overwritten the time you run the openstack overcloud deploy command. You can use director to configure LDAP authentication instead of manually editing the configuration files. Review the following key terms before you plan and configure the IdM integration: Authentication - The process of using a password to verify that the user is who they claim to be. Authorization - Validating that authenticated users have proper permissions to the systems they're attempting to access. Domain - Refers to the additional back ends configured in Identity Service. For example, Identity Service can be configured to authenticate users from external IdM environments. The resulting collection of users can be thought of as a domain . The process to integrate OpenStack Identity with IdM includes the following stages: Enroll the undercloud and overcloud in IdM with novajoin Implement TLS-e on the undercloud and overcloud with Ansible Configure IdM server credentials and export the LDAPS certificate Install and configure the LDAPS certificate in OpenStack Configure director to use one or more LDAP backends Configure Controller nodes to access the IdM backend Configure IdM user or group access to OpenStack projects Verify that the domain and user lists are created correctly Optional: Create credential files for non-admin users 2.1. Planning the Red Hat Identity Manager (IdM) integration When you plan your OpenStack Identity integration with Red Hat Identity Manager (IdM), ensure that both services are configured and operational and review the impact of the integration on user management and firewall settings. Prerequisites Red Hat Identity Management is configured and operational. Red Hat OpenStack Platform is configured and operational. DNS name resolution is fully functional and all hosts are registered appropriately. Permissions and roles This integration allows IdM users to authenticate to OpenStack and access resources. OpenStack service accounts (such as keystone and glance), and authorization management (permissions and roles) will remain in the Identity Service database. Permissions and roles are assigned to the IdM accounts using Identity Service management tools. High availability options This configuration creates a dependency on the availability of a single IdM server: Project users will be affected if Identity Service is unable to authenticate to the IdM Server. You can configure keystone to query a different IdM server, should one become unavailable, or you can use a load balancer. Do not use a load balancer when you use IdM with SSSD, as this configuration has failover implemented on the client. Outage requirements The Identity Service will need to be restarted in order to add the IdM back end. 
Users will be unable to access the dashboard until their accounts have been created in IdM. To reduce downtime, consider pre-staging the IdM accounts well in advance of this change. Firewall configuration Communication between IdM and OpenStack consists of the following: Authenticating users IdM retrieval of the certificate revocation list (CRL) from the controllers every two hours Certmonger requests for new certificates upon expiration Note A periodic certmonger task will continue to request new certificates if the initial request fails. If firewalls are filtering traffic between IdM and OpenStack, you will need to allow access through the following port: Source Destination Type Port OpenStack Controller Node Red Hat Identity Management LDAPS TCP 636 2.2. Identity Management (IdM) server recommendations for OpenStack Red Hat provides the following information to help you integrate your IdM server and OpenStack environment. For information on preparing Red Hat Enterprise Linux for an IdM installation, see Installing Identity Management . Run the ipa-server-install command to install and configure IdM. You can use command parameters to skip interactive prompts. Use the following recommendations so that your IdM server can integrate with your Red Hat OpenStack Platform environment: Table 2.1. Parameter recommendations Option Recommendation --admin-password Note the value you provide. You will need this password when configuring Red Hat OpenStack Platform to work with IdM. --ip-address Note the value you provide. The undercloud and overcloud nodes require network access to this ip address. --setup-dns Use this option to install an integrated DNS service on the IdM server. The undercloud and overcloud nodes use the IdM server for domain name resolution. --auto-forwarders Use this option to use the addresses in /etc/resolv.conf as DNS forwarders. --auto-reverse Use this option to resolve reverse records and zones for the IdM server IP addresses. If neither reverse records or zones are resolvable, IdM creates the reverse zones. This simplifies the IdM deployment. --ntp-server , --ntp-pool You can use both or either of these options to configure your NTP source. Both the IdM server and your OpenStack environment must have correct and synchronized time. You must open the firewall ports required by IdM to enable communication with Red Hat OpenStack Platform nodes. For more information, see Opening the ports required by IdM . Additional resources Configuring and Managing Identity Management Red Hat Identity Management Documentation 2.3. Implementing TLS-e with Ansible You can use the new tripleo-ipa method to enable SSL/TLS on overcloud endpoints, called TLS everywhere (TLS-e). Due to the number of certificates required, Red Hat OpenStack Platform integrates with Red Hat Identity management (IdM). When you use tripleo-ipa to configure TLS-e, IdM is the certificate authority. Prerequisites Ensure that all configuration steps for the undercloud, such as the creation of the stack user, are complete. For more details, see Director Installation and Usage for more details Procedure Use the following procedure to implement TLS-e on a new installation of Red Hat OpenStack Platform, or an existing deployment that you want to configure with TLS-e. You must use this method if you deploy Red Hat OpenStack Platform with TLS-e on pre-provisioned nodes. 
Note If you are implementing TLS-e for an existing environment, you are required to run commands such as openstack undercloud install , and openstack overcloud deploy . These procedures are idempotent and only adjust your existing deployment configuration to match updated templates and configuration files. Configure the /etc/resolv.conf file: Set the appropriate search domains and the nameserver on the undercloud in /etc/resolv.conf . For example, if the deployment domain is example.com , and the domain of the FreeIPA server is bigcorp.com , then add the following lines to /etc/resolv.conf: Install required software: Export environmental variables with values specific to your environment.: 1 2 The IdM user credentials are an administrative user that can add new hosts and services. 3 The value of the UNDERCLOUD_FQDN parameter matches the first hostname-to-IP address mapping in /etc/hosts . Run the undercloud-ipa-install.yaml ansible playbook on the undercloud: Add the following parameters to undercloud.conf [Optional] If your IPA realm does not match your IPA domain, set the value of the certmonger_krb_realm parameter: Set the value of the certmonger_krb_realm in /home/stack/hiera_override.yaml : Set the value of the custom_env_files parameter in undercloud.conf to /home/stack/hiera_override.yaml : Deploy the undercloud: Verification Verify that the undercloud was enrolled correctly by completing the following steps: List the hosts in IdM: Confirm that /etc/novajoin/krb5.keytab exists on the undercloud. Note The novajoin directory name is for legacy naming purposes only. Configuring TLS-e on the overcloud When you deploy the overcloud with TLS everywhere (TLS-e), IP addresses from the Undercloud and Overcloud will automatically be registered with IdM. Before deploying the overcloud, create a YAML file tls-parameters.yaml with contents similar to the following. The values you select will be specific for your environment: The shown value of the OS::TripleO::Services::IpaClient parameter overrides the default setting in the enable-internal-tls.yaml file. You must ensure the tls-parameters.yaml file follows enable-internal-tls.yaml in the openstack overcloud deploy command. For more information about the parameters that you use to implement TLS-e, see Parameters for tripleo-ipa Deploy the overcloud. You will need to include the tls-parameters.yaml in the deployment command: Confirm each endpoint is using HTTPS by querying keystone for a list of endpoints: 2.4. Enrolling nodes in Red Hat Identity Manager (IdM) with novajoin Novajoin is the default tool that you use to enroll your nodes with Red Hat Identity Manager (IdM) as part of the deployment process. Red Hat recommends the new ansible-based tripleo-ipa solution over the default novajoin solution to configure your undercloud and overcloud with TLS-e. For more information see Implementing TLS-e with Ansible . You must perform the enrollment process before you proceed with the rest of the IdM integration. The enrollment process includes the following steps: Adding the undercloud node to the certificate authority (CA) Adding the undercloud node to IdM Optional: Setting the IdM server as the DNS server for the overcloud Preparing the environment files and deploying the overcloud Testing the overcloud enrollment in IdM and in RHOSP Optional: Adding DNS entries for novajoin in IdM Note IdM enrollment with novajoin is currently only available for the undercloud and overcloud nodes. 
Novajoin integration for overcloud instances is expected to be supported in a later release. 2.4.1. Adding the undercloud node to the certificate authority Before you deploy the overcloud, add the undercloud to the certificate authority (CA) by installing the python3-novajoin package on the undercloud node and running the novajoin-ipa-setup script. Procedure On the undercloud node, install the python3-novajoin package: On the undercloud node, run the novajoin-ipa-setup script, and adjust the values to suit your deployment: Use the resulting One-Time Password (OTP) to enroll the undercloud. 2.4.2. Adding the undercloud node to Red Hat Identity Manager (IdM) After you add the undercloud node to the certificate authority (CA), register the undercloud with IdM and configure novajoin. Configure the following settings in the [DEFAULT] section of the undercloud.conf file. Procedure Enable the novajoin service: Set a One-Time Password (OTP) so that you can register the undercloud node with IdM: Set the overcloud's domain name to be served by neutron's DHCP server: Set the hostname for the undercloud: Set IdM as the nameserver for the undercloud: For larger environments, review the novajoin connection timeout values. In the undercloud.conf file, add a reference to a new file called undercloud-timeout.yaml : Add the following options to undercloud-timeout.yaml . You can specify the timeout value in seconds, for example, 5 : Optional: If you want the local openSSL certificate authority to generate the SSL certificates for the public endpoints in director, set the generate_service_certificate parameter to true : Save the undercloud.conf file. Run the undercloud deployment command to apply the changes to your existing undercloud: 2.4.3. Setting Red Hat Identity Manager (IdM) as the DNS server for the overcloud To enable automatic detection of your IdM environment and easier enrollment, set IdM as your DNS server. This procedure is optional but recommended. Procedure Connect to your undercloud: Configure the control plane subnet to use IdM as the DNS name server: Set the DnsServers parameter in an environment file to use your IdM server: This parameter is usually defined in a custom network-environment.yaml file. 2.4.4. Preparing environment files and deploying the overcloud with novajoin enrollment To deploy the overcloud with IdM integration, you create and edit environment files to configure the overcloud to use the custom domain parameters CloudDomain and CloudName based on the domains that you define in the overcloud. You then deploy the overcloud with all the environment files and any additional environment files that you need for the deployment. Procedure Create a copy of the /usr/share/openstack-tripleo-heat-templates/environments/predictable-placement/custom-domain.yaml environment file: Edit the /home/stack/templates/custom-domain.yaml environment file and set the CloudDomain and CloudName* values to suit your deployment: Choose the implementation of TLS appropriate for your environment: Use the enable-tls.yaml environment file to protect external endpoints with your custom certificate: Copy /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-tls.yaml to /home/stack/templates . Modify the /home/stack/enable-tls.yaml environment file to include your custom certificate and key. 
Include the following environment files in your deployment to protect internal and external endpoints: enable-internal-tls.yaml tls-every-endpoints-dns.yaml custom-domain.yaml enable-tls.yaml Use the haproxy-public-tls-certmonger.yaml environment file to protect external endpoints with an IdM issued certificate. For this implementation, you must create DNS entries for the VIP endpoints used by novajoin: You must create DNS entries for the VIP endpoints used by novajoin. Identify the overcloud networks located in your custom network-environment.yaml file in `/home/stack/templates : Create a list of virtual IP addresses for each overcloud network in a heat template, for example, /home/stack/public_vip.yaml . Add DNS entries to the IdM for each of the VIPs, and zones as needed: Include the following environment files in your deployment to protect internal and external endpoints: enable-internal-tls.yaml tls-everywhere-endpoints-dns.yaml haproxy-public-tls-certmonger.yaml custom-domain.yaml public_vip.yaml Note You cannot use novajoin to implement TLS everywhere (TLS-e) on a pre-existing deployment. Additional resources Implementing TLS-e with Ansible 2.4.5. Testing overcloud enrollment in Red Hat Identity Manager (IdM) After you complete the undercloud and overcloud enrollment in IdM with novajoin, you can test that the enrollment is successful by searching for an overcloud node in IdM and checking that the host entry includes Keytab:True . You can also log in to the overcloud node and confirm that the sssd command can query IdM users. Locate an overcloud node in IdM and confirm that the host entry includes Keytab:True : Log in to the overcloud node and confirm that sssd can query IdM users. For example, to query an IdM user named susan : 2.5. Encrypting memcached traffic under TLS everywhere (TLS-e) This feature is available in this release as a Technology Preview , and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details . You can now encrypt memcached traffic with TLS-e. This feature works with both novajoin and tripleo-ipa: Create an environment file called memcached.yaml with the following contents to add TLS support for memcached: Include the memcached.yaml environment file in the overcloud deployment process: Additional Resources For more information about deploying TLSe with tripleo-ipa, see Implementing TLS-e with Ansible . For more information about deploying TLSe with novajoin, see Enrolling nodes in Red Hat Identity Manager (IdM) with novajoin 2.6. Configuring Red Hat Identity Manager (IdM) server credentials To configure the Red Hat Identity Manager (IdM) to integrate with OpenStack Identity, set up an LDAP account for Identity service to use, create a user group for Red Hat OpenStack users, and set up the password for the lookup account. Prerequisites Red Hat Identity Manager (IdM) is configured and operational. Red Hat OpenStack Platform (RHOSP) is configured and operational. DNS name resolution is fully functional and all hosts are registered appropriately. IdM authentication traffic is encrypted with LDAPS, using port 636. Recommended: Implement IdM with a high availability or load balancing solution to avoid a single point of failure. Procedure Perform this procedure on the IdM server. 
Create the LDAP lookup account to use in OpenStack Identity Service to query the IdM LDAP service: Note Review the password expiration settings of this account, once created. Create a group for RHOSP users, called grp-openstack . Only members of this group can have permissions assigned in OpenStack Identity. Set the svc-ldap account password and add it to the grp-openstack group: Login as svc-ldap user and change the password when prompted: 2.7. Installing the Red Hat Identity Manager (IdM) LDAPS certificate OpenStack Identity (keystone) uses LDAPS queries to validate user accounts. To encrypt this traffic, keystone uses the certificate file defined by keystone.conf . To install the LDAPS certificate, copy the certificate from the Red Hat Identity Manager (IdM) server to a location where keystone will be able to reference it, and convert the certificate from .crt to .pem format. Note When using multiple domains for LDAP authentication, you might receive various errors, such as Unable to retrieve authorized projects , or Peer's Certificate issuer is not recognized . This can arise if keystone uses the incorrect certificate for a certain domain. As a workaround, merge all of the LDAPS public keys into a single .crt bundle, and configure all of your keystone domains to use this file. Prerequisites IdM server credentials are configured. Procedure In your IdM environment, locate the LDAPS certificate. This file can be located using /etc/openldap/ldap.conf : Copy the file to the Controller node that runs the keystone service. For example, the scp command copies the ca.crt file to the node node.lab.local : Copy the ca.crt file to the certificate directory. This is the location that the keystone service will use to access the certificate: Optional: If you need to run diagnostic commands, such as ldapsearch , you also need to add the certificate to the RHEL certificate store: 3. On the Controller node, convert the .crt to .pem format: Install the .pem on the Controller node. For example, in Red Hat Enterprise Linux: 2.8. Configuring director to use domain-specific LDAP backends To configure director to use one or more LDAP backends, set the KeystoneLDAPDomainEnable flag to true in your heat templates, and set up environment files with the information about each LDAP backend. Director then uses a separate LDAP backend for each keystone domain. Note The default directory for domain configuration files is set to /etc/keystone/domains/ . You can override this by setting the required path with the keystone::domain_config_directory hiera key and adding it as an ExtraConfig parameter within an environment file. Procedure In the heat template for your deployment, set the KeystoneLDAPDomainEnable flag to true . This configures the domain_specific_drivers_enabled option in keystone within the identity configuration group. Add a specification of the LDAP backend configuration by setting the KeystoneLDAPBackendConfigs parameter in tripleo-heat-templates , where you can then specify your required LDAP options. Create a copy of the keystone_domain_specific_ldap_backend.yaml environment file: Edit the /home/stack/templates/keystone_domain_specific_ldap_backend.yaml environment file and set the values to suit your deployment. 
For example, this parameter create a LDAP configuration for a keystone domain named testdomain : Note The keystone_domain_specific_ldap_backend.yaml environment file contains the following deprecated write parameters: user_allow_create user_allow_update user_allow_delete The values for these parameters have no effect on the deployment, and can be safely removed. Optional: Add more domains to the environment file. For example: This results in two domains named domain1 and domain2 ; each will have a different LDAP domain with its own configuration. 2.9. Granting the admin user access to the OpenStack Identity domain To allow the admin user to access the OpenStack Identity (keystone) domain and see the Domain tab, get the ID of the domain and the admin user, and then assign the admin role to the user in the domain. Note This does not grant the OpenStack admin account any permissions on the external service domain. In this case, the term domain refers to OpenStack's usage of the keystone domain. Procedure This procedure uses the LAB domain. Replace the domain name with the actual name of the domain that you are configuring. Get the ID of the LAB domain: Get the ID of the admin user from the default domain: Get the ID of the admin role: The output depends on the external service you are integrating with: Active Directory Domain Service (AD DS): Red Hat Identity Manager (IdM): Use the domain and admin IDs to construct the command that adds the admin user to the admin role of the keystone LAB domain: 2.10. Granting external groups access to Red Hat OpenStack Platform projects To grant multiple authenticated users access to Red Hat OpenStack Platform (RHOSP) resources, you can authorize certain groups from the external user management service to grant access to RHOSP projects, instead of requiring OpenStack administrators to manually allocate each user to a role in a project. As a result, all members of these groups can access pre-determined projects. Prerequisites Ensure that the external service administrator completed the following steps: Creating a group named grp-openstack-admin . Creating a group named grp-openstack-demo . Adding your RHOSP users to one of these groups as needed. Adding your users to the grp-openstack group. Create the OpenStack Identity domain. This procedure uses the LAB domain. Create or choose a RHOSP project. This procedure uses a project called demo that was created with the openstack project create --domain default --description "Demo Project" demo command. Procedure Retrieve a list of user groups from the OpenStack Identity domain: The command output depends on the external user management service that you are integrating with: Active Directory Domain Service (AD DS): Red Hat Identity Manager (IdM): Retrieve a list of roles: The command output depends on the external user management service that you are integrating with: Active Directory Domain Service (AD DS): Red Hat Identity Manager (IdM): Grant the user groups access to RHOSP projects by adding them to one or more of these roles. 
For example, if you want users in the grp-openstack-demo group to be general users of the demo project, you must add the group to the member or _member_ role, depending on the external service that you are integrating with: Active Directory Domain Service (AD DS): Red Hat Identity Manager (IdM): Result Members of grp-openstack-demo can log in to the dashboard by entering their username and password and entering LAB in the Domain field: Note If users receive the error Error: Unable to retrieve container list. , and expect to be able to manage containers, then they must be added to the SwiftOperator role. Additional resources Section 2.11, "Granting external users access to Red Hat OpenStack Platform projects" 2.11. Granting external users access to Red Hat OpenStack Platform projects To grant specific authenticated users from the grp-openstack group access to OpenStack resources, you can grant these users direct access to Red Hat OpenStack Platform (RHOSP) projects. Use this process in cases where you want to grant access to individual users instead of granting access to groups. Prerequisites Ensure that the external service administrator completed the following steps: Adding your RHOSP users to the grp-openstack group. Creating the OpenStack Identity domain. This procedure uses the LAB domain. Create or choose a RHOSP project. This procedure uses a project called demo that was created with the openstack project create --domain default --description "Demo Project" demo command. Procedure Retrieve a list of users from the OpenStack Identity domain: Retrieve a list of roles: The command output depends on the external user management service that you are integrating with: Active Directory Domain Service (AD DS): Red Hat Identity Manager (IdM): Grant users access to RHOSP projects by adding them to one or more of these roles. For example, if you want user1 to be a general user of the demo project, you add them to the member or _member_ role, depending on the external service that you are integrating with: Active Directory Domain Service (AD DS): Red Hat Identity Manager (IdM): If you want user1 to be an administrative user of the demo project, add the user to the admin role: Result The user1 user is able to log in to the dashboard by entering their external username and password and entering LAB in the Domain field: Note If users receive the error Error: Unable to retrieve container list. , and expect to be able to manage containers, then they must be added to the SwiftOperator role. Additional resources Section 2.10, "Granting external groups access to Red Hat OpenStack Platform projects" 2.12. Viewing the list of OpenStack Identity domains and users Use the openstack domain list command to list the available entries. Configuring multiple domains in Identity Service enables a new Domain field in the dashboard login page. Users are expected to enter the domain that matches their login credentials. Important After you complete the integration, you need to decide whether to create new projects in the Default domain or in newly created keystone domains. You must consider your workflow and how you administer user accounts. If possible, use the Default domain as an internal domain to manage service accounts and the admin project, and keep your external users in a separate domain. In this example, external accounts need to specify the LAB domain. The built-in keystone accounts, such as admin , must specify Default as their domain. 
Procedure Show the list of domains: Show the list of users in a specific domain. This command example specifies the --domain LAB and returns users in the LAB domain that are members of the grp-openstack group: You can also append --domain Default to show the built-in keystone accounts: 2.13. Creating a credentials file for a non-admin user After you configure users and domains for OpenStack Identity, you might need to create a credentials file for a non-admin user. Procedure Create a credentials (RC) file for a non-admin user. This example uses the user1 user in the file. 2.14. Testing OpenStack Identity integration with an external user management service To test that OpenStack Identity (keystone) successfully integrated with Active Directory Domain Service (AD DS), test user access to dashboard features. Prerequisites Integration with an external user management service, such as Active Directory (AD) or Red Hat Identity Manager (IdM) Procedure Create a test user in the external user management service, and add the user to the grp-openstack group. In Red Hat OpenStack Platform, add the user to the _member_ role of the demo project. Log in to the dashboard with the credentials of the AD test user. Click on each of the tabs to confirm that they are presented successfully without error messages. Use the dashboard to build a test instance. Note If you experience issues with these steps, log in to the dashboard with the admin account and perform the subsequent steps as that user. If the test is successful, it means that OpenStack is still working as expected and that an issue exists somewhere in the integration settings between OpenStack Identity and Active Directory. Additional resources Section 1.10, "Troubleshooting Active Directory integration" 2.15. Troubleshooting Red Hat Identity Manager (IdM) integration If you encounter errors when using the Red Hat Identity Manager (IdM) integration with OpenStack Identity, you might need to test the LDAP connection or test the certificate trust configuration. You might also need to check that the LDAPS port is accessible. Note Depending on the error type and location, perform only the relevant steps in this procedure. Procedure Test the LDAP connection by using the ldapsearch command to remotely perform test queries against the IdM server. A successful result here indicates that network connectivity is working, and the IdM services are up. In this example, a test query is performed against the server idm.lab.local on port 636 : Note ldapsearch is a part of the openldap-clients package. You can install this using # dnf install openldap-clients . Use the nc command to check that LDAPS port 636 is remotely accessible. In this example, a probe is performed against the server idm.lab.local . Press ctrl-c to exit the prompt. Failure to establish a connection could indicate a firewall configuration issue.
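If the port is reachable but LDAPS connections still fail, you can also check the certificate trust configuration. The following is a quick check, assuming the IdM CA certificate was copied to /etc/ipa/ca.crt as described earlier in this chapter; look for Verify return code: 0 (ok) in the output:

openssl s_client -connect idm.lab.local:636 -CAfile /etc/ipa/ca.crt < /dev/null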
[ "search example.com bigcorp.com nameserver USDIDM_SERVER_IP_ADDR", "sudo dnf install -y python3-ipalib python3-ipaclient krb5-devel", "export IPA_DOMAIN=bigcorp.com export IPA_REALM=BIGCORP.COM export IPA_ADMIN_USER=USDIPA_USER 1 export IPA_ADMIN_PASSWORD=USDIPA_PASSWORD 2 export IPA_SERVER_HOSTNAME=ipa.bigcorp.com export UNDERCLOUD_FQDN=undercloud.example.com 3 export USER=stack export CLOUD_DOMAIN=example.com", "ansible-playbook --ssh-extra-args \"-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null\" /usr/share/ansible/tripleo-playbooks/undercloud-ipa-install.yaml", "undercloud_nameservers = USDIDM_SERVER_IP_ADDR overcloud_domain_name = example.com", "parameter_defaults: certmonger_krb_realm = EXAMPLE.COMPANY.COM", "custom_env_files = /home/stack/hiera_override.yaml", "openstack undercloud install", "kinit admin ipa host-find", "ls /etc/novajoin/krb5.keytab", "parameter_defaults: DnsSearchDomains: [\"example.com\"] DnsServers: [\"192.168.1.13\"] CloudDomain: example.com CloudName: overcloud.example.com CloudNameInternal: overcloud.internalapi.example.com CloudNameStorage: overcloud.storage.example.com CloudNameStorageManagement: overcloud.storagemgmt.example.com CloudNameCtlplane: overcloud.ctlplane.example.com IdMServer: freeipa-0.redhat.local IdMDomain: redhat.local IdMInstallClientPackages: False resource_registry: OS::TripleO::Services::IpaClient: /usr/share/openstack-tripleo-heat-templates/deployment/ipa/ipaservices-baremetal-ansible.yaml", "DEFAULT_TEMPLATES=/usr/share/openstack-tripleo-heat-templates/ CUSTOM_TEMPLATES=/home/stack/templates openstack overcloud deploy -e USD{DEFAULT_TEMPLATES}/environments/ssl/tls-everywhere-endpoints-dns.yaml -e USD{DEFAULT_TEMPLATES}/environments/services/haproxy-public-tls-certmonger.yaml -e USD{DEFAULT_TEMPLATES}/environments/ssl/enable-internal-tls.yaml -e USD{CUSTOM_TEMPLATES}/tls-parameters.yaml", "openstack endpoint list", "sudo dnf install python3-novajoin", "sudo /usr/libexec/novajoin-ipa-setup --principal admin --password <IdM admin password> --server <IdM server hostname> --realm <realm> --domain <overcloud cloud domain> --hostname <undercloud hostname> --precreate", "[DEFAULT] enable_novajoin = true", "ipa_otp = <otp>", "overcloud_domain_name = <domain>", "undercloud_hostname = <undercloud FQDN>", "undercloud_nameservers = <IdM IP>", "hieradata_override = /home/stack/undercloud-timeout.yaml", "nova::api::vendordata_dynamic_connect_timeout: <timeout value> nova::api::vendordata_dynamic_read_timeout: <timeout value>", "generate_service_certificate = true", "openstack undercloud install", "source ~/stackrc", "openstack subnet set ctlplane-subnet --dns-nameserver <idm_server_address>", "parameter_defaults: DnsServers: [\"<idm_server_address>\"]", "cp /usr/share/openstack-tripleo-heat-templates/environments/predictable-placement/custom-domain.yaml /home/stack/templates/custom-domain.yaml", "parameter_defaults: CloudDomain: lab.local CloudName: overcloud.lab.local CloudNameInternal: overcloud.internalapi.lab.local CloudNameStorage: overcloud.storage.lab.local CloudNameStorageManagement: overcloud.storagemgmt.lab.local CloudNameCtlplane: overcloud.ctlplane.lab.local", "openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-internal-tls.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-everywhere-endpoints-dns.yaml -e /home/stack/templates/custom-domain.yaml -e /home/stack/templates/enable-tls.yaml", "parameter_defaults: ControlPlaneDefaultRoute: 
192.168.24.1 ExternalAllocationPools: - end: 10.0.0.149 start: 10.0.0.101 InternalApiAllocationPools: - end: 172.17.1.149 start: 172.17.1.10 StorageAllocationPools: - end: 172.17.3.149 start: 172.17.3.10 StorageMgmtAllocationPools: - end: 172.17.4.149 start: 172.17.4.10", "parameter_defaults: ControlFixedIPs: [{'ip_address':'192.168.24.101'}] PublicVirtualFixedIPs: [{'ip_address':'10.0.0.101'}] InternalApiVirtualFixedIPs: [{'ip_address':'172.17.1.101'}] StorageVirtualFixedIPs: [{'ip_address':'172.17.3.101'}] StorageMgmtVirtualFixedIPs: [{'ip_address':'172.17.4.101'}] RedisVirtualFixedIPs: [{'ip_address':'172.17.1.102'}]", "ipa dnsrecord-add lab.local overcloud --a-rec 10.0.0.101 ipa dnszone-add ctlplane.lab.local ipa dnsrecord-add ctlplane.lab.local overcloud --a-rec 192.168.24.101 ipa dnszone-add internalapi.lab.local ipa dnsrecord-add internalapi.lab.local overcloud --a-rec 172.17.1.101 ipa dnszone-add storage.lab.local ipa dnsrecord-add storage.lab.local overcloud --a-rec 172.17.3.101 ipa dnszone-add storagemgmt.lab.local ipa dnsrecord-add storagemgmt.lab.local overcloud --a-rec 172.17.4.101", "openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-internal-tls.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-everywhere-endpoints-dns.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/services/haproxy-public-tls-certmonger.yaml -e /home/stack/templates/custom-domain.yaml -e /home/stack/templates/public-vip.yaml", "ipa host-show overcloud-node-01 Host name: overcloud-node-01.lab.local Principal name: host/[email protected] Principal alias: host/[email protected] SSH public key fingerprint: <snip> Password: False Keytab: True Managed by: overcloud-node-01.lab.local", "getent passwd susan uid=1108400007(susan) gid=1108400007(bob) groups=1108400007(susan)", "parameter_defaults: MemcachedTLS: true MemcachedPort: 11212", "openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-internal-tls.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-everywhere-endpoints-dns.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/services/haproxy-public-tls-certmonger.yaml -e /home/stack/memcached.yaml", "kinit admin ipa user-add First name: OpenStack Last name: LDAP User [radministrator]: svc-ldap", "ipa group-add --desc=\"OpenStack Users\" grp-openstack", "ipa passwd svc-ldap ipa group-add-member --users=svc-ldap grp-openstack", "kinit svc-ldap", "TLS_CACERT /etc/ipa/ca.crt", "scp /etc/ipa/ca.crt [email protected]:/root/", "cp ca.crt /etc/pki/ca-trust/source/anchors", "openssl x509 -in ca.crt -out ca.pem -outform PEM", "cp ca.pem /etc/pki/ca-trust/source/anchors/ update-ca-trust", "cp /usr/share/openstack-tripleo-heat-templates/environments/services/keystone_domain_specific_ldap_backend.yaml /home/stack/templates/", "parameter_defaults: KeystoneLDAPDomainEnable: true KeystoneLDAPBackendConfigs: testdomain: url: ldaps://192.0.2.250 user: cn=openstack,ou=Users,dc=director,dc=example,dc=com password: RedactedComplexPassword suffix: dc=director,dc=example,dc=com user_tree_dn: ou=Users,dc=director,dc=example,dc=com user_filter: \"(memberOf=cn=OSuser,ou=Groups,dc=director,dc=example,dc=com)\" user_objectclass: person user_id_attribute: cn", "KeystoneLDAPBackendConfigs: domain1: url: ldaps://domain1.example.com user: cn=openstack,ou=Users,dc=director,dc=example,dc=com password: RedactedComplexPassword domain2: url: 
ldaps://domain2.example.com user: cn=openstack,ou=Users,dc=director,dc=example,dc=com password: RedactedComplexPassword", "openstack domain show LAB +---------+----------------------------------+ | Field | Value | +---------+----------------------------------+ | enabled | True | | id | 6800b0496429431ab1c4efbb3fe810d4 | | name | LAB | +---------+----------------------------------+", "openstack user list --domain default | grep admin | 3d75388d351846c6a880e53b2508172a | admin |", "openstack role list", "+----------------------------------+-----------------+ | ID | Name | +----------------------------------+-----------------+ | 01d92614cd224a589bdf3b171afc5488 | admin | | 034e4620ed3d45969dfe8992af001514 | member | | 0aa377a807df4149b0a8c69b9560b106 | ResellerAdmin | | 9369f2bf754443f199c6d6b96479b1fa | heat_stack_user | | cfea5760d9c948e7b362abc1d06e557f | reader | | d5cb454559e44b47aaa8821df4e11af1 | swiftoperator | | ef3d3f510a474d6c860b4098ad658a29 | service | +----------------------------------+-----------------+", "+----------------------------------+---------------+ | ID | Name | +----------------------------------+---------------+ | 544d48aaffde48f1b3c31a52c35f01f9 | SwiftOperator | | 6d005d783bf0436e882c55c62457d33d | ResellerAdmin | | 785c70b150ee4c778fe4de088070b4cf | admin | | 9fe2ff9ee4384b1894a90878d3e92bab | _member_ | +----------------------------------+---------------+", "openstack role add --domain 6800b0496429431ab1c4efbb3fe810d4 --user 3d75388d351846c6a880e53b2508172a 785c70b150ee4c778fe4de088070b4cf", "openstack group list --domain LAB", "+------------------------------------------------------------------+---------------------+ | ID | Name | +------------------------------------------------------------------+---------------------+ | 185277be62ae17e498a69f98a59b66934fb1d6b7f745f14f5f68953a665b8851 | grp-openstack | | a8d17f19f464c4548c18b97e4aa331820f9d3be52654aa8094e698a9182cbb88 | grp-openstack-admin | | d971bb3bd5e64a454cbd0cc7af4c0773e78d61b5f81321809f8323216938cae8 | grp-openstack-demo | +------------------------------------------------------------------+---------------------+", "+------------------------------------------------------------------+---------------------+ | ID | Name | +------------------------------------------------------------------+---------------------+ | 185277be62ae17e498a69f98a59b66934fb1d6b7f745f14f5f68953a665b8851 | grp-openstack | | a8d17f19f464c4548c18b97e4aa331820f9d3be52654aa8094e698a9182cbb88 | grp-openstack-admin | | d971bb3bd5e64a454cbd0cc7af4c0773e78d61b5f81321809f8323216938cae8 | grp-openstack-demo | +------------------------------------------------------------------+---------------------+", "openstack role list", "+----------------------------------+-----------------+ | ID | Name | +----------------------------------+-----------------+ | 01d92614cd224a589bdf3b171afc5488 | admin | | 034e4620ed3d45969dfe8992af001514 | member | | 0aa377a807df4149b0a8c69b9560b106 | ResellerAdmin | | 9369f2bf754443f199c6d6b96479b1fa | heat_stack_user | | cfea5760d9c948e7b362abc1d06e557f | reader | | d5cb454559e44b47aaa8821df4e11af1 | swiftoperator | | ef3d3f510a474d6c860b4098ad658a29 | service | +----------------------------------+-----------------+", "+----------------------------------+---------------+ | ID | Name | +----------------------------------+---------------+ | 0969957bce5e4f678ca6cef00e1abf8a | ResellerAdmin | | 1fcb3c9b50aa46ee8196aaaecc2b76b7 | admin | | 9fe2ff9ee4384b1894a90878d3e92bab | _member_ | | d3570730eb4b4780a7fed97eba197e1b | 
SwiftOperator | +----------------------------------+---------------+", "openstack role add --project demo --group d971bb3bd5e64a454cbd0cc7af4c0773e78d61b5f81321809f8323216938cae8 member", "openstack role add --project demo --group d971bb3bd5e64a454cbd0cc7af4c0773e78d61b5f81321809f8323216938cae8 _member_", "openstack user list --domain LAB +------------------------------------------------------------------+----------------+ | ID | Name | +------------------------------------------------------------------+----------------+ | 1f24ec1f11aeb90520079c29f70afa060d22e2ce92b2eba7784c841ac418091e | user1 | | 12c062faddc5f8b065434d9ff6fce03eb9259537c93b411224588686e9a38bf1 | user2 | | afaf48031eb54c3e44e4cb0353f5b612084033ff70f63c22873d181fdae2e73c | user3 | | e47fc21dcf0d9716d2663766023e2d8dc15a6d9b01453854a898cabb2396826e | user4 | +------------------------------------------------------------------+----------------+", "openstack role list", "+----------------------------------+-----------------+ | ID | Name | +----------------------------------+-----------------+ | 01d92614cd224a589bdf3b171afc5488 | admin | | 034e4620ed3d45969dfe8992af001514 | member | | 0aa377a807df4149b0a8c69b9560b106 | ResellerAdmin | | 9369f2bf754443f199c6d6b96479b1fa | heat_stack_user | | cfea5760d9c948e7b362abc1d06e557f | reader | | d5cb454559e44b47aaa8821df4e11af1 | swiftoperator | | ef3d3f510a474d6c860b4098ad658a29 | service | +----------------------------------+-----------------+", "+----------------------------------+---------------+ | ID | Name | +----------------------------------+---------------+ | 0969957bce5e4f678ca6cef00e1abf8a | ResellerAdmin | | 1fcb3c9b50aa46ee8196aaaecc2b76b7 | admin | | 9fe2ff9ee4384b1894a90878d3e92bab | _member_ | | d3570730eb4b4780a7fed97eba197e1b | SwiftOperator | +----------------------------------+---------------+", "openstack role add --project demo --user 1f24ec1f11aeb90520079c29f70afa060d22e2ce92b2eba7784c841ac418091e member", "openstack role add --project demo --user 1f24ec1f11aeb90520079c29f70afa060d22e2ce92b2eba7784c841ac418091e _member_", "openstack role add --project demo --user 1f24ec1f11aeb90520079c29f70afa060d22e2ce92b2eba7784c841ac418091e admin", "openstack domain list +----------------------------------+---------+---------+----------------------------------------------------------------------+ | ID | Name | Enabled | Description | +----------------------------------+---------+---------+----------------------------------------------------------------------+ | 6800b0496429431ab1c4efbb3fe810d4 | LAB | True | | | default | Default | True | Owns users and projects available on Identity API v2. | +----------------------------------+---------+---------+----------------------------------------------------------------------+", "openstack user list --domain LAB", "openstack user list --domain Default", "cat overcloudrc-v3-user1 Clear any old environment that may conflict. 
for key in USD( set | awk '{FS=\"=\"} /^OS_/ {print USD1}' ); do unset USDkey ; done export OS_USERNAME=user1 export NOVA_VERSION=1.1 export OS_PROJECT_NAME=demo export OS_PASSWORD=RedactedComplexPassword export OS_NO_CACHE=True export COMPUTE_API_VERSION=1.1 export no_proxy=,10.0.0.5,192.168.2.11 export OS_CLOUDNAME=overcloud export OS_AUTH_URL=https://10.0.0.5:5000/v3 export OS_AUTH_TYPE=password export PYTHONWARNINGS=\"ignore:Certificate has no, ignore:A true SSLContext object is not available\" export OS_IDENTITY_API_VERSION=3 export OS_PROJECT_DOMAIN_NAME=Default export OS_USER_DOMAIN_NAME=LAB", "ldapsearch -D \"cn=directory manager\" -H ldaps://idm.lab.local:636 -b \"dc=lab,dc=local\" -s sub \"(objectclass=*)\" -w RedactedComplexPassword", "nc -v idm.lab.local 636 Ncat: Version 6.40 ( http://nmap.org/ncat ) Ncat: Connected to 192.168.200.10:636. ^C" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/integrate_openstack_identity_with_external_user_management_services/assembly-integrating-identity-with-idm_rhosp
Chapter 3. Differences between OpenShift Container Platform 3 and 4
Chapter 3. Differences between OpenShift Container Platform 3 and 4 OpenShift Container Platform 4.16 introduces architectural changes and enhancements. The procedures that you used to manage your OpenShift Container Platform 3 cluster might not apply to OpenShift Container Platform 4. For information on configuring your OpenShift Container Platform 4 cluster, review the appropriate sections of the OpenShift Container Platform documentation. For information on new features and other notable technical changes, review the OpenShift Container Platform 4.16 release notes . It is not possible to upgrade your existing OpenShift Container Platform 3 cluster to OpenShift Container Platform 4. You must start with a new OpenShift Container Platform 4 installation. Tools are available to assist in migrating your control plane settings and application workloads. 3.1. Architecture With OpenShift Container Platform 3, administrators individually deployed Red Hat Enterprise Linux (RHEL) hosts, and then installed OpenShift Container Platform on top of these hosts to form a cluster. Administrators were responsible for properly configuring these hosts and performing updates. OpenShift Container Platform 4 represents a significant change in the way that OpenShift Container Platform clusters are deployed and managed. OpenShift Container Platform 4 includes new technologies and functionality, such as Operators, machine sets, and Red Hat Enterprise Linux CoreOS (RHCOS), which are core to the operation of the cluster. This technology shift enables clusters to self-manage some functions previously performed by administrators. This also ensures platform stability and consistency, and simplifies installation and scaling. Beginning with OpenShift Container Platform 4.13, RHCOS now uses Red Hat Enterprise Linux (RHEL) 9.2 packages. This enhancement enables the latest fixes and features as well as the latest hardware support and driver updates. For more information about how this upgrade to RHEL 9.2 might affect your options, configuration, and services as well as driver and container support, see RHCOS now uses RHEL 9.2 in the OpenShift Container Platform 4.13 release notes . For more information, see OpenShift Container Platform architecture . Immutable infrastructure OpenShift Container Platform 4 uses Red Hat Enterprise Linux CoreOS (RHCOS), which is designed to run containerized applications, and provides efficient installation, Operator-based management, and simplified upgrades. RHCOS is an immutable container host, rather than a customizable operating system like RHEL. RHCOS enables OpenShift Container Platform 4 to manage and automate the deployment of the underlying container host. RHCOS is a part of OpenShift Container Platform, which means that everything runs inside a container and is deployed using OpenShift Container Platform. In OpenShift Container Platform 4, control plane nodes must run RHCOS, ensuring that full-stack automation is maintained for the control plane. This makes rolling out updates and upgrades a much easier process than in OpenShift Container Platform 3. For more information, see Red Hat Enterprise Linux CoreOS (RHCOS) . Operators Operators are a method of packaging, deploying, and managing a Kubernetes application. Operators ease the operational complexity of running another piece of software. They watch over your environment and use the current state to make decisions in real time. Advanced Operators are designed to upgrade and react to failures automatically. 
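For example, you can list the cluster Operators and their current health with a single command; this is shown here as a general check and is not specific to any one Operator:

oc get clusteroperators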
For more information, see Understanding Operators . 3.2. Installation and upgrade Installation process To install OpenShift Container Platform 3.11, you prepared your Red Hat Enterprise Linux (RHEL) hosts, set all of the configuration values your cluster needed, and then ran an Ansible playbook to install and set up your cluster. In OpenShift Container Platform 4.16, you use the OpenShift installation program to create a minimum set of resources required for a cluster. After the cluster is running, you use Operators to further configure your cluster and to install new services. After first boot, Red Hat Enterprise Linux CoreOS (RHCOS) systems are managed by the Machine Config Operator (MCO) that runs in the OpenShift Container Platform cluster. For more information, see Installation process . If you want to add Red Hat Enterprise Linux (RHEL) worker machines to your OpenShift Container Platform 4.16 cluster, you use an Ansible playbook to join the RHEL worker machines after the cluster is running. For more information, see Adding RHEL compute machines to an OpenShift Container Platform cluster . Infrastructure options In OpenShift Container Platform 3.11, you installed your cluster on infrastructure that you prepared and maintained. In addition to providing your own infrastructure, OpenShift Container Platform 4 offers an option to deploy a cluster on infrastructure that the OpenShift Container Platform installation program provisions and the cluster maintains. For more information, see OpenShift Container Platform installation overview . Upgrading your cluster In OpenShift Container Platform 3.11, you upgraded your cluster by running Ansible playbooks. In OpenShift Container Platform 4.16, the cluster manages its own updates, including updates to Red Hat Enterprise Linux CoreOS (RHCOS) on cluster nodes. You can easily upgrade your cluster by using the web console or by using the oc adm upgrade command from the OpenShift CLI and the Operators will automatically upgrade themselves. If your OpenShift Container Platform 4.16 cluster has RHEL worker machines, then you will still need to run an Ansible playbook to upgrade those worker machines. For more information, see Updating clusters . 3.3. Migration considerations Review the changes and other considerations that might affect your transition from OpenShift Container Platform 3.11 to OpenShift Container Platform 4. 3.3.1. Storage considerations Review the following storage changes to consider when transitioning from OpenShift Container Platform 3.11 to OpenShift Container Platform 4.16. Local volume persistent storage Local storage is only supported by using the Local Storage Operator in OpenShift Container Platform 4.16. It is not supported to use the local provisioner method from OpenShift Container Platform 3.11. For more information, see Persistent storage using local volumes . FlexVolume persistent storage The FlexVolume plugin location changed from OpenShift Container Platform 3.11. The new location in OpenShift Container Platform 4.16 is /etc/kubernetes/kubelet-plugins/volume/exec . Attachable FlexVolume plugins are no longer supported. For more information, see Persistent storage using FlexVolume . Container Storage Interface (CSI) persistent storage Persistent storage using the Container Storage Interface (CSI) was Technology Preview in OpenShift Container Platform 3.11. OpenShift Container Platform 4.16 ships with several CSI drivers . You can also install your own driver. 
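For example, you can take a quick inventory of the storage classes and CSI drivers available in your cluster; the exact drivers that appear depend on your platform and installed Operators:

oc get storageclass
oc get csidriver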
For more information, see Persistent storage using the Container Storage Interface (CSI) . Red Hat OpenShift Data Foundation OpenShift Container Storage 3, which is available for use with OpenShift Container Platform 3.11, uses Red Hat Gluster Storage as the backing storage. Red Hat OpenShift Data Foundation 4, which is available for use with OpenShift Container Platform 4, uses Red Hat Ceph Storage as the backing storage. For more information, see Persistent storage using Red Hat OpenShift Data Foundation and the interoperability matrix article. Unsupported persistent storage options Support for the following persistent storage options from OpenShift Container Platform 3.11 has changed in OpenShift Container Platform 4.16: GlusterFS is no longer supported. CephFS as a standalone product is no longer supported. Ceph RBD as a standalone product is no longer supported. If you used one of these in OpenShift Container Platform 3.11, you must choose a different persistent storage option for full support in OpenShift Container Platform 4.16. For more information, see Understanding persistent storage . Migration of in-tree volumes to CSI drivers OpenShift Container Platform 4 is migrating in-tree volume plugins to their Container Storage Interface (CSI) counterparts. In OpenShift Container Platform 4.16, CSI drivers are the new default for the following in-tree volume types: Amazon Web Services (AWS) Elastic Block Storage (EBS) Azure Disk Azure File Google Cloud Platform Persistent Disk (GCP PD) OpenStack Cinder VMware vSphere Note As of OpenShift Container Platform 4.13, VMware vSphere is not available by default. However, you can opt into VMware vSphere. All aspects of volume lifecycle, such as creation, deletion, mounting, and unmounting, is handled by the CSI driver. For more information, see CSI automatic migration . 3.3.2. Networking considerations Review the following networking changes to consider when transitioning from OpenShift Container Platform 3.11 to OpenShift Container Platform 4.16. Network isolation mode The default network isolation mode for OpenShift Container Platform 3.11 was ovs-subnet , though users frequently switched to use ovn-multitenant . The default network isolation mode for OpenShift Container Platform 4.16 is controlled by a network policy. If your OpenShift Container Platform 3.11 cluster used the ovs-subnet or ovs-multitenant mode, it is recommended to switch to a network policy for your OpenShift Container Platform 4.16 cluster. Network policies are supported upstream, are more flexible, and they provide the functionality that ovs-multitenant does. If you want to maintain the ovs-multitenant behavior while using a network policy in OpenShift Container Platform 4.16, follow the steps to configure multitenant isolation using network policy . For more information, see About network policy . OVN-Kubernetes as the default networking plugin in Red Hat OpenShift Networking In OpenShift Container Platform 3.11, OpenShift SDN was the default networking plugin in Red Hat OpenShift Networking. In OpenShift Container Platform 4.16, OVN-Kubernetes is now the default networking plugin. For information on migrating to OVN-Kubernetes from OpenShift SDN, see Migrating from the OpenShift SDN network plugin . Warning It is not possible to upgrade a cluster to OpenShift Container Platform 4.17 if it is using the OpenShift SDN network plugin. You must migrate to the OVN-Kubernetes plugin before upgrading to OpenShift Container Platform 4.17. 3.3.3. 
Logging considerations Review the following logging changes to consider when transitioning from OpenShift Container Platform 3.11 to OpenShift Container Platform 4.16. Deploying OpenShift Logging OpenShift Container Platform 4 provides a simple deployment mechanism for OpenShift Logging, by using a Cluster Logging custom resource. For more information, see Installing OpenShift Logging . Aggregated logging data You cannot transition your aggregate logging data from OpenShift Container Platform 3.11 into your new OpenShift Container Platform 4 cluster. For more information, see About OpenShift Logging . Unsupported logging configurations Some logging configurations that were available in OpenShift Container Platform 3.11 are no longer supported in OpenShift Container Platform 4.16. For more information on the explicitly unsupported logging cases, see the logging support documentation . 3.3.4. Security considerations Review the following security changes to consider when transitioning from OpenShift Container Platform 3.11 to OpenShift Container Platform 4.16. Unauthenticated access to discovery endpoints In OpenShift Container Platform 3.11, an unauthenticated user could access the discovery endpoints (for example, /api/* and /apis/* ). For security reasons, unauthenticated access to the discovery endpoints is no longer allowed in OpenShift Container Platform 4.16. If you do need to allow unauthenticated access, you can configure the RBAC settings as necessary; however, be sure to consider the security implications as this can expose internal cluster components to the external network. Identity providers Configuration for identity providers has changed for OpenShift Container Platform 4, including the following notable changes: The request header identity provider in OpenShift Container Platform 4.16 requires mutual TLS, where in OpenShift Container Platform 3.11 it did not. The configuration of the OpenID Connect identity provider was simplified in OpenShift Container Platform 4.16. It now obtains data, which previously had to specified in OpenShift Container Platform 3.11, from the provider's /.well-known/openid-configuration endpoint. For more information, see Understanding identity provider configuration . OAuth token storage format Newly created OAuth HTTP bearer tokens no longer match the names of their OAuth access token objects. The object names are now a hash of the bearer token and are no longer sensitive. This reduces the risk of leaking sensitive information. Default security context constraints The restricted security context constraints (SCC) in OpenShift Container Platform 4 can no longer be accessed by any authenticated user as the restricted SCC in OpenShift Container Platform 3.11. The broad authenticated access is now granted to the restricted-v2 SCC, which is more restrictive than the old restricted SCC. The restricted SCC still exists; users that want to use it must be specifically given permissions to do it. For more information, see Managing security context constraints . 3.3.5. Monitoring considerations Review the following monitoring changes when transitioning from OpenShift Container Platform 3.11 to OpenShift Container Platform 4.16. You cannot migrate Hawkular configurations and metrics to Prometheus. Alert for monitoring infrastructure availability The default alert that triggers to ensure the availability of the monitoring structure was called DeadMansSwitch in OpenShift Container Platform 3.11. This was renamed to Watchdog in OpenShift Container Platform 4. 
If you had PagerDuty integration set up with this alert in OpenShift Container Platform 3.11, you must set up the PagerDuty integration for the Watchdog alert in OpenShift Container Platform 4. For more information, see Configuring alert routing for default platform alerts .
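For example, you can confirm that the Watchdog alert is defined in the platform monitoring rules before you wire it to PagerDuty. This is a quick check only; the rule location can vary between minor versions:

oc -n openshift-monitoring get prometheusrules -o yaml | grep -B2 -A4 "alert: Watchdog"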
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/migrating_from_version_3_to_4/planning-migration-3-4
3.7. Uninstalling a Client
3.7. Uninstalling a Client Uninstalling a client removes the client from the IdM domain, along with all of the IdM-specific configuration for system services, such as SSSD. This restores the client machine's configuration. Run the ipa-client-install --uninstall command: Remove the DNS entries for the client host manually from the server. See Section 33.4.6, "Deleting Records from DNS Zones" .
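For example, on the IdM server you might remove all DNS records for the client host with the ipa dnsrecord-del command. This is a sketch only; example.com and client01 are placeholders for your DNS zone and the client's host name:

ipa dnsrecord-del example.com client01 --del-all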
[ "ipa-client-install --uninstall" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/client-uninstall
probe::tty.unregister
probe::tty.unregister
Name
probe::tty.unregister - Called when a tty device is being unregistered
Synopsis
tty.unregister
Values
driver_name: the driver name
name: the driver .dev_name name
index: the tty index requested
module: the module name
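For example, a one-line SystemTap script can print these values whenever a tty driver is unregistered. This is a minimal sketch; run it as root with the kernel debuginfo installed, and press Ctrl+C to stop it:

stap -e 'probe tty.unregister { printf("%s %s %d %s\n", driver_name, name, index, module) }'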
[ "tty.unregister" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-tty-unregister
Chapter 1. Introducing WSDL Contracts
Chapter 1. Introducing WSDL Contracts Abstract WSDL documents define services using Web Service Description Language and a number of possible extensions. The documents have a logical part and a concrete part. The abstract part of the contract defines the service in terms of implementation neutral data types and messages. The concrete part of the document defines how an endpoint implementing a service will interact with the outside world. The recommended approach to design services is to define your services in WSDL and XML Schema before writing any code. When hand-editing WSDL documents you must make sure that the document is valid, as well as correct. To do this you must have some familiarity with WSDL. You can find the standard on the W3C web site, www.w3.org . 1.1. Structure of a WSDL document Overview A WSDL document is, at its simplest, a collection of elements contained within a root definition element. These elements describe a service and how an endpoint implementing that service is accessed. A WSDL document has two distinct parts: A logical part that defines the service in implementation neutral terms A concrete part that defines how an endpoint implementing the service is exposed on a network The logical part The logical part of a WSDL document contains the types , the message , and the portType elements. It describes the service's interface and the messages exchanged by the service. Within the types element, XML Schema is used to define the structure of the data that makes up the messages. A number of message elements are used to define the structure of the messages used by the service. The portType element contains one or more operation elements that define the messages sent by the operations exposed by the service. The concrete part The concrete part of a WSDL document contains the binding and the service elements. It describes how an endpoint that implements the service connects to the outside world. The binding elements describe how the data units described by the message elements are mapped into a concrete, on-the-wire data format, such as SOAP. The service elements contain one or more port elements which define the endpoints implementing the service. 1.2. WSDL elements A WSDL document is made up of the following elements: definitions - The root element of a WSDL document. The attributes of this element specify the name of the WSDL document, the document's target namespace, and the shorthand definitions for the namespaces referenced in the WSDL document. types - The XML Schema definitions for the data units that form the building blocks of the messages used by a service. For information about defining data types see Chapter 2, Defining Logical Data Units . message - The description of the messages exchanged during invocation of a services operations. These elements define the arguments of the operations making up your service. For information on defining messages see Chapter 3, Defining Logical Messages Used by a Service . portType - A collection of operation elements describing the logical interface of a service. For information about defining port types see Chapter 4, Defining Your Logical Interfaces . operation - The description of an action performed by a service. Operations are defined by the messages passed between two endpoints when the operation is invoked. For information on defining operations see the section called "Operations" . binding - The concrete data format specification for an endpoint. 
A binding element defines how the abstract messages are mapped into the concrete data format used by an endpoint. This element is where specifics such as parameter order and return values are specified. service - A collection of related port elements. These elements are repositories for organizing endpoint definitions. port - The endpoint defined by a binding and a physical address. These elements bring all of the abstract definitions together, combined with the definition of transport details, and they define the physical endpoint on which a service is exposed. 1.3. Designing a contract To design a WSDL contract for your services you must perform the following steps: Define the data types used by your services. Define the messages used by your services. Define the interfaces for your services. Define the bindings between the messages used by each interface and the concrete representation of the data on the wire. Define the transport details for each of the services.
null
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/wsdlintro
Chapter 4. Upgrading Camel K
Chapter 4. Upgrading Camel K You can upgrade the installed Camel K operator automatically, but it does not automatically upgrade the Camel K integrations. You must manually trigger the upgrade for the Camel K integrations. This chapter explains how to upgrade both the Camel K operator and the Camel K integrations. 4.1. Upgrading Camel K operator The subscription of an installed Camel K operator specifies an update channel, for example, the 1.10.x channel, which is used to track and receive updates for the operator. To upgrade the operator to start tracking and receiving updates from a newer channel, you can change the update channel in the subscription. See Upgrading installed operators for more information about changing the update channel for an installed operator. Note Installed Operators cannot change to a channel that is older than the current channel. If the approval strategy in the subscription is set to Automatic, the upgrade process initiates as soon as a new Operator version is available in the selected channel. If the approval strategy is set to Manual, you must manually approve pending upgrades. Prerequisites Camel K operator is installed using Operator Lifecycle Manager (OLM). Procedure In the Administrator perspective of the OpenShift Container Platform web console, navigate to Operators Installed Operators . Click the Camel K Operator . Click the Subscription tab. Click the name of the update channel under Channel . Click the newer update channel that you want to change to. For example, latest . Click Save . This starts the upgrade to the latest Camel K version. For subscriptions with an Automatic approval strategy, the upgrade begins automatically. Navigate back to the Operators Installed Operators page to monitor the progress of the upgrade. When complete, the status changes to Succeeded and Up to date. For subscriptions with a Manual approval strategy, you can manually approve the upgrade from the Subscription tab. 4.2. Upgrading Camel K integrations When you trigger the upgrade for the Camel K operator, the operator prepares the integrations to be upgraded, but does not trigger an upgrade for each one, to avoid service interruptions. When upgrading the operator, integration custom resources are not automatically upgraded to the newer version, so, for example, the operator may be at version 1.10.3 while the integrations report version 1.8.2 in the status.version field of the custom resource. Prerequisites Camel K operator is installed and upgraded using Operator Lifecycle Manager (OLM). Procedure Open the terminal and run the following command to upgrade the Camel K integrations. This clears the status of the integration resource, and the operator starts the deployment of the integration using the artifacts from the upgraded version, for example, version 1.10.3 . 4.3. Downgrading Camel K You can downgrade to an older version of the Camel K operator by installing that specific version of the operator. You must trigger this manually by using the oc CLI. For more information about installing a specific version of the operator by using the CLI, see Installing a specific version of an Operator . Important You must remove the existing Camel K operator and then install the specific version of the operator, because downgrading is not supported in OLM. After you install the older version of the operator, use the kamel rebuild command to downgrade the integrations to the operator version. For example,
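kamel rebuild myintegration
You can then confirm the version that each integration reports after the operator change. This is a quick check, assuming the integrations run in the current namespace; the column definition reads the status.version field mentioned above:

oc get integrations -o custom-columns=NAME:.metadata.name,VERSION:.status.version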
[ "kamel rebuild myintegration", "kamel rebuild myintegration" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.9/html/getting_started_with_camel_k/upgrading-camel-k
6.6. Diagnosing and Correcting Problems in a Cluster
6.6. Diagnosing and Correcting Problems in a Cluster For information about diagnosing and correcting problems in a cluster, contact an authorized Red Hat support representative.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_administration/s1-admin-problems-ca
Chapter 9. Creating the control plane for NFV environments
Chapter 9. Creating the control plane for NFV environments The Red Hat OpenStack Services on OpenShift (RHOSO) control plane contains the RHOSO services that manage the cloud. These control plane services are services that provide APIs and do not run Compute node workloads. The RHOSO control plane services run as a Red Hat OpenShift Container Platform (RHOCP) workload, and you deploy these services using Operators in OpenShift. When you configure these OpenStack control plane services, you use one custom resource (CR) definition called OpenStackControlPlane . Note Creating the control plane also creates an OpenStackClient pod that you can access through a remote shell ( rsh ) to run RHOSO CLI commands. 9.1. Prerequisites The RHOCP cluster is prepared for RHOSO network isolation. For more information, see Preparing RHOCP for RHOSO networks . The OpenStack Operator ( openstack-operator ) is installed. For more information, see Installing and preparing the Operators . The RHOCP cluster is not configured with any network policies that prevent communication between the openstack-operators namespace and the control plane namespace (default openstack ). Use the following command to check the existing network policies on the cluster: You are logged on to a workstation that has access to the RHOCP cluster, as a user with cluster-admin privileges. 9.2. Creating the control plane Define an OpenStackControlPlane custom resource (CR) to perform the following tasks: Create the control plane. Enable the Red Hat OpenStack Services on OpenShift (RHOSO) services. The following procedure creates an initial control plane with the recommended configurations for each service. The procedure helps you create an operational control plane environment. You can use the environment to test and troubleshoot issues before additional required service customization. Services can be added and customized after the initial deployment. To configure a service, you use the CustomServiceConfig field in a service specification to pass OpenStack configuration parameters in INI file format. For more information about the available configuration parameters, see Configuration reference . For more information on how to customize your control plane after deployment, see the Customizing the Red Hat OpenStack Services on OpenShift deployment guide. For more information, see Example OpenStackControlPlane CR . Tip Use the following commands to view the OpenStackControlPlane CRD definition and specification schema: USD oc describe crd openstackcontrolplane USD oc explain openstackcontrolplane.spec For NFV environments, when you add the Networking service (neutron) and OVN service configurations, you must supply the following information: Physical networks where your gateways are located. Path to vhost sockets. VLAN ranges. Number of NUMA nodes. NICs that connect to the gateway networks. Note If you are using SR-IOV, you must also add the sriovnicswitch mechanism driver to the Networking service configuration. 
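Before you write the CR, you can also drill into the schema for an individual service section with oc explain. The following is a sketch; it assumes the Networking service section is named neutron in the specification and follows the same template layout as the other service examples in this procedure:

oc explain openstackcontrolplane.spec.neutron
oc explain openstackcontrolplane.spec.neutron.template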
Procedure Create the openstack project for the deployed RHOSO environment: USD oc new-project openstack Ensure the openstack namespace is labeled to enable privileged pod creation by the OpenStack Operators: USD oc get namespace openstack -ojsonpath='{.metadata.labels}' | jq { "kubernetes.io/metadata.name": "openstack", "pod-security.kubernetes.io/enforce": "privileged", "security.openshift.io/scc.podSecurityLabelSync": "false" } If the security context constraint (SCC) is not "privileged", use the following commands to change it: USD oc label ns openstack security.openshift.io/scc.podSecurityLabelSync=false --overwrite USD oc label ns openstack pod-security.kubernetes.io/enforce=privileged --overwrite Create a file on your workstation named openstack_control_plane.yaml to define the OpenStackControlPlane CR: apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack-control-plane namespace: openstack Specify the Secret CR you created to provide secure access to the RHOSO service pods in Providing secure access to the Red Hat OpenStack Services on OpenShift services : apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack-control-plane spec: secret: osp-secret Specify the storageClass you created for your Red Hat OpenShift Container Platform (RHOCP) cluster storage back end: apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack-control-plane spec: secret: osp-secret storageClass: your-RHOCP-storage-class Note For information about storage classes, see Creating a storage class . Add the following service configurations: Block Storage service (cinder): cinder: apiOverride: route: {} template: databaseInstance: openstack secret: osp-secret cinderAPI: replicas: 3 override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer cinderScheduler: replicas: 1 cinderBackup: networkAttachments: - storage replicas: 0 # backend needs to be configured to activate the service cinderVolumes: volume1: networkAttachments: - storage replicas: 0 # backend needs to be configured to activate the service Important This definition for the Block Storage service is only a sample. You might need to modify it for your NFV environment. For more information, see Planning storage and shared file systems in Planning your deployment . Note For the initial control plane deployment, the cinderBackup and cinderVolumes services are deployed but not activated (replicas: 0). You can configure your control plane post-deployment with a back end for the Block Storage service and the backup service. 
Compute service (nova): nova: apiOverride: route: {} template: apiServiceTemplate: replicas: 3 override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer schedulerServiceTemplate: customServiceConfig: | [filter_scheduler] enabled_filters = AvailabilityZoneFilter, ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter, ServerGroupAntiAffinityFilter, ServerGroupAffinityFilter, PciPassthroughFilter, AggregateInstanceExtraSpecsFilter available_filters = nova.scheduler.filters.all_filters metadataServiceTemplate: replicas: 3 override: service: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer schedulerServiceTemplate: replicas: 3 override: service: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer cellTemplates: cell1: noVNCProxyServiceTemplate: enabled: true networkAttachments: - ctlplane secret: osp-secret Note A full set of Compute services (nova) are deployed by default for each of the default cells, cell0 and cell1 : nova-api , nova-metadata , nova-scheduler , and nova-conductor . The novncproxy service is also enabled for cell1 by default. DNS service for the data plane: dns: template: options: 1 - key: server 2 values: 3 - 192.168.122.1 - key: server values: - 192.168.122.2 override: service: metadata: annotations: metallb.universe.tf/address-pool: ctlplane metallb.universe.tf/allow-shared-ip: ctlplane metallb.universe.tf/loadBalancerIPs: 192.168.122.80 spec: type: LoadBalancer replicas: 2 1 Defines the dnsmasq instances required for each DNS server by using key-value pairs. In this example, there are two key-value pairs defined because there are two DNS servers configured to forward requests to. 2 Specifies the dnsmasq parameter to customize for the deployed dnsmasq instance. Set to one of the following valid values: server rev-server srv-host txt-record ptr-record rebind-domain-ok naptr-record cname host-record caa-record dns-rr auth-zone synth-domain no-negcache local 3 Specifies the values for the dnsmasq parameter. You can specify a generic DNS server as the value, for example, 1.1.1.1 , or a DNS server for a specific domain, for example, /google.com/8.8.8.8 . 
A Galera cluster for use by all RHOSO services ( openstack ), and a Galera cluster for use by the Compute service for cell1 ( openstack-cell1 ): galera: templates: openstack: storageRequest: 5000M secret: osp-secret replicas: 3 openstack-cell1: storageRequest: 5000M secret: osp-secret replicas: 3 Identity service (keystone) keystone: apiOverride: route: {} template: override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer databaseInstance: openstack secret: osp-secret replicas: 3 Image service (glance): glance: apiOverrides: default: route: {} template: databaseInstance: openstack storage: storageRequest: 10G secret: osp-secret keystoneEndpoint: default glanceAPIs: default: replicas: 0 # backend needs to be configured to activate the service override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer networkAttachments: - storage Note For the initial control plane deployment, the Image service is deployed but not activated (replicas: 0). You can configure your control plane post-deployment with a back end for the Image service. Key Management service (barbican): barbican: apiOverride: route: {} template: databaseInstance: openstack secret: osp-secret barbicanAPI: replicas: 3 override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer barbicanWorker: replicas: 3 barbicanKeystoneListener: replicas: 1 Memcached: memcached: templates: memcached: replicas: 3 Networking service (neutron): If you are using SR-IOV, you must also add the sriovnicswitch mechanism driver, for example, mechanism_drivers = ovn,sriovnicswitch . Replace <path> with the absolute path to the vhost sockets, for example, /var/lib/vhost . Replace <network_name1> and <network_name2> with the names of the physical networks that your gateways are on. (This network is set in the neutron network provider:*name field.) Replace <VLAN-ID1> and`<VLAN-ID2>` with the VLAN IDs you are using. Object Storage service (swift): swift: enabled: true proxyOverride: route: {} template: swiftProxy: networkAttachments: - storage override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer replicas: 1 swiftRing: ringReplicas: 1 swiftStorage: networkAttachments: - storage replicas: 1 storageClass: local-storage storageRequest: 10Gi OVN: Replace <network_name> with the name of the physical network your gateway is on. (This network is set in the neutron network provider:*name field.) Replace <nic_name> with the name of the NIC connecting to the gateway network. Optional: Add additional <network_name>:<nic_name> pairs under nicMappings as required. 
Placement service (placement): placement: apiOverride: route: {} template: override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer databaseInstance: openstack replicas: 3 secret: osp-secret RabbitMQ: rabbitmq: templates: rabbitmq: replicas: 3 override: service: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.85 spec: type: LoadBalancer rabbitmq-cell1: replicas: 3 override: service: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.86 spec: type: LoadBalancer Telemetry service (ceilometer, prometheus): telemetry: enabled: true template: metricStorage: enabled: true monitoringStack: alertingEnabled: true scrapeInterval: 30s storage: strategy: persistent retention: 24h persistent: pvcStorageRequest: 20G autoscaling: 1 enabled: false aodh: passwordSelectors: databaseAccount: aodh databaseInstance: openstack memcachedInstance: memcached secret: osp-secret heatInstance: heat ceilometer: enabled: true secret: osp-secret logging: enabled: false ipaddr: 172.17.0.80 1 You must have the autoscaling field present, even if autoscaling is disabled. Create the control plane: USD oc create -f openstack_control_plane.yaml -n openstack Note Creating the control plane also creates an OpenStackClient pod that you can access through a remote shell ( rsh ) to run RHOSO CLI commands. USD oc rsh -n openstack openstackclient Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Check the status of the control plane deployment: USD oc get openstackcontrolplane -n openstack Sample output NAME STATUS MESSAGE openstack-control-plane Unknown Setup started The OpenStackControlPlane resources are created when the status is "Setup complete". Tip Append the -w option to the end of the get command to track deployment progress. Note Creating the control plane also creates an OpenStackClient pod that you can access through a remote shell ( rsh ) to run RHOSO CLI commands. USD oc rsh -n openstack openstackclient Optional: Confirm that the control plane is deployed by reviewing the pods in the openstack namespace: USD oc get pods -n openstack The control plane is deployed when all the pods are either completed or running. Verification Open a remote shell connection to the OpenStackClient pod: USD oc rsh -n openstack openstackclient Confirm that the internal service endpoints are registered with each service: USD openstack endpoint list -c 'Service Name' -c Interface -c URL --service glance Sample output +--------------+-----------+---------------------------------------------------------------+ | Service Name | Interface | URL | +--------------+-----------+---------------------------------------------------------------+ | glance | internal | http://glance-internal.openstack.svc:9292 | | glance | public | http://glance-public-openstack.apps.ostest.test.metalkube.org | +--------------+-----------+---------------------------------------------------------------+ Exit the OpenStackClient pod: USD exit 9.3. Example OpenStackControlPlane CR The following example OpenStackControlPlane CR is a complete control plane configuration that includes all the key services that must always be enabled for a successful deployment. 
1 The storage class that you created for your Red Hat OpenShift Container Platform (RHOCP) cluster storage back end. 2 Service-specific parameters for the Block Storage service (cinder). 3 The Block Storage service back end. For more information on configuring storage services, see the Configuring persistent storage guide. 4 The Block Storage service configuration. For more information on configuring storage services, see the Configuring persistent storage guide. 5 The list of networks that each service pod is directly attached to, specified by using the NetworkAttachmentDefinition resource names. A NIC is configured for the service for each specified network attachment. Note If you do not configure the isolated networks that each service pod is attached to, then the default pod network is used. For example, the Block Storage service uses the storage network to connect to a storage back end; the Identity service (keystone) uses an LDAP or Active Directory (AD) network; the ovnDBCluster service uses the internalapi network; and the ovnController service uses the tenant network. 6 Service-specific parameters for the Compute service (nova). 7 Service API route definition. You can customize the service route by using route-specific annotations. For more information, see Route-specific annotations in the RHOCP Networking guide. Set route: to {} to apply the default route template. 8 The internal service API endpoint registered as a MetalLB service with the IPAddressPool internalapi . 9 The virtual IP (VIP) address for the service. The IP is shared with other services by default. 10 The RabbitMQ instances exposed to an isolated network with distinct IP addresses defined in the loadBalancerIPs annotation, as indicated in 11 and 12 . Note Multiple RabbitMQ instances cannot share the same VIP as they use the same port. If you need to expose multiple RabbitMQ instances to the same network, then you must use distinct IP addresses. 11 The distinct IP address for a RabbitMQ instance that is exposed to an isolated network. 12 The distinct IP address for a RabbitMQ instance that is exposed to an isolated network. 9.4. Removing a service from the control plane You can completely remove a service and the service database from the control plane after deployment by disabling the service. Many services are enabled by default, which means that the OpenStack Operator creates resources such as the service database and Identity service (keystone) users, even if no service pod is created because replicas is set to 0 . Warning Remove a service with caution. Removing a service is not the same as stopping service pods. Removing a service is irreversible. Disabling a service removes the service database and any resources that referenced the service are no longer tracked. Create a backup of the service database before removing a service. Procedure Open the OpenStackControlPlane CR file on your workstation. Locate the service you want to remove from the control plane and disable it: Update the control plane: Wait until RHOCP removes the resource related to the disabled service. Run the following command to check the status: The OpenStackControlPlane resource is updated with the disabled service when the status is "Setup complete". Tip Append the -w option to the end of the get command to track deployment progress. 
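For example, the following is a minimal sketch of the disable-and-update steps, assuming you are removing the Object Storage service (swift), that your control plane CR file is openstack_control_plane.yaml, and that oc apply is used to push the update:

# In openstack_control_plane.yaml, under spec, disable the service:
#   swift:
#     enabled: false
oc apply -f openstack_control_plane.yaml -n openstack
oc get openstackcontrolplane -n openstack -w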
Optional: Confirm that the pods from the disabled service are no longer listed by reviewing the pods in the openstack namespace: Check that the service is removed: This command returns the following message when the service is successfully removed: Check that the API endpoints for the service are removed from the Identity service (keystone): This command returns the following message when the API endpoints for the service are successfully removed: 9.5. Additional resources Kubernetes NMState Operator The Kubernetes NMState project Load balancing with MetalLB MetalLB documentation MetalLB in layer 2 mode Specify network interfaces that LB IP can be announced from Multiple networks Using the Multus CNI in OpenShift macvlan plugin whereabouts IPAM CNI plugin - Extended configuration About advertising for the IP address pools Dynamic provisioning Configuring the Block Storage backup service in Configuring persistent storage . Configuring the Image service (glance) in Configuring persistent storage .
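For the verification steps in the removal procedure above, assuming the disabled service was the Block Storage service (cinder), the checks might resemble the following sketch. The commands and the expected messages that follow each check are taken from the command examples in this chapter:

oc get pods -n openstack

oc get cinder -n openstack
No resources found in openstack namespace.

oc rsh -n openstack openstackclient
openstack endpoint list --service volumev3
No service with a type, name or ID of 'volumev3' exists.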
[ "oc rsh -n openstack openstackclient", "oc get networkpolicy -n openstack", "oc describe crd openstackcontrolplane oc explain openstackcontrolplane.spec", "oc new-project openstack", "oc get namespace openstack -ojsonpath='{.metadata.labels}' | jq { \"kubernetes.io/metadata.name\": \"openstack\", \"pod-security.kubernetes.io/enforce\": \"privileged\", \"security.openshift.io/scc.podSecurityLabelSync\": \"false\" }", "oc label ns openstack security.openshift.io/scc.podSecurityLabelSync=false --overwrite oc label ns openstack pod-security.kubernetes.io/enforce=privileged --overwrite", "apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack-control-plane namespace: openstack", "apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack-control-plane spec: secret: osp-secret", "apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack-control-plane spec: secret: osp-secret storageClass: your-RHOCP-storage-class", "cinder: apiOverride: route: {} template: databaseInstance: openstack secret: osp-secret cinderAPI: replicas: 3 override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer cinderScheduler: replicas: 1 cinderBackup: networkAttachments: - storage replicas: 0 # backend needs to be configured to activate the service cinderVolumes: volume1: networkAttachments: - storage replicas: 0 # backend needs to be configured to activate the service", "nova: apiOverride: route: {} template: apiServiceTemplate: replicas: 3 override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer schedulerServiceTemplate: customServiceConfig: | [filter_scheduler] enabled_filters = AvailabilityZoneFilter, ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter, ServerGroupAntiAffinityFilter, ServerGroupAffinityFilter, PciPassthroughFilter, AggregateInstanceExtraSpecsFilter available_filters = nova.scheduler.filters.all_filters metadataServiceTemplate: replicas: 3 override: service: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer schedulerServiceTemplate: replicas: 3 override: service: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer cellTemplates: cell1: noVNCProxyServiceTemplate: enabled: true networkAttachments: - ctlplane secret: osp-secret", "dns: template: options: 1 - key: server 2 values: 3 - 192.168.122.1 - key: server values: - 192.168.122.2 override: service: metadata: annotations: metallb.universe.tf/address-pool: ctlplane metallb.universe.tf/allow-shared-ip: ctlplane metallb.universe.tf/loadBalancerIPs: 192.168.122.80 spec: type: LoadBalancer replicas: 2", "galera: templates: openstack: storageRequest: 5000M secret: osp-secret replicas: 3 openstack-cell1: storageRequest: 5000M secret: osp-secret replicas: 3", "keystone: apiOverride: route: {} template: override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: 
internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer databaseInstance: openstack secret: osp-secret replicas: 3", "glance: apiOverrides: default: route: {} template: databaseInstance: openstack storage: storageRequest: 10G secret: osp-secret keystoneEndpoint: default glanceAPIs: default: replicas: 0 # backend needs to be configured to activate the service override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer networkAttachments: - storage", "barbican: apiOverride: route: {} template: databaseInstance: openstack secret: osp-secret barbicanAPI: replicas: 3 override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer barbicanWorker: replicas: 3 barbicanKeystoneListener: replicas: 1", "memcached: templates: memcached: replicas: 3", "neutron: apiOverride: route: {} template: replicas: 3 override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer databaseInstance: openstack secret: osp-secret networkAttachments: - internalapi customServiceConfig: | [DEFAULT] global_physnet_mtu = 9000 [ml2] mechanism_drivers = ovn [ovn] vhost_sock_dir = <path> [ml2_type_vlan] network_vlan_ranges = <network_name1>:<VLAN-ID1>:<VLAN-ID2> , <network_name2>:<VLAN-ID1>:<VLAN-ID2>", "swift: enabled: true proxyOverride: route: {} template: swiftProxy: networkAttachments: - storage override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer replicas: 1 swiftRing: ringReplicas: 1 swiftStorage: networkAttachments: - storage replicas: 1 storageClass: local-storage storageRequest: 10Gi", "ovn: template: ovnDBCluster: ovndbcluster-nb: replicas: 3 dbType: NB storageRequest: 10G networkAttachment: internalapi ovndbcluster-sb: replicas: 3 dbType: SB storageRequest: 10G networkAttachment: internalapi ovnNorthd: {} ovnController: networkAttachment: tenant nicMappings: <network_name>: <nic_name>", "placement: apiOverride: route: {} template: override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer databaseInstance: openstack replicas: 3 secret: osp-secret", "rabbitmq: templates: rabbitmq: replicas: 3 override: service: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.85 spec: type: LoadBalancer rabbitmq-cell1: replicas: 3 override: service: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.86 spec: type: LoadBalancer", "telemetry: enabled: true template: metricStorage: enabled: true monitoringStack: alertingEnabled: true scrapeInterval: 30s storage: strategy: persistent retention: 24h persistent: pvcStorageRequest: 20G autoscaling: 1 enabled: false aodh: passwordSelectors: databaseAccount: aodh databaseInstance: openstack memcachedInstance: memcached secret: osp-secret 
heatInstance: heat ceilometer: enabled: true secret: osp-secret logging: enabled: false ipaddr: 172.17.0.80", "oc create -f openstack_control_plane.yaml -n openstack", "oc rsh -n openstack openstackclient", "oc get openstackcontrolplane -n openstack", "NAME STATUS MESSAGE openstack-control-plane Unknown Setup started", "oc rsh -n openstack openstackclient", "oc get pods -n openstack", "oc rsh -n openstack openstackclient", "openstack endpoint list -c 'Service Name' -c Interface -c URL --service glance", "+--------------+-----------+---------------------------------------------------------------+ | Service Name | Interface | URL | +--------------+-----------+---------------------------------------------------------------+ | glance | internal | http://glance-internal.openstack.svc:9292 | | glance | public | http://glance-public-openstack.apps.ostest.test.metalkube.org | +--------------+-----------+---------------------------------------------------------------+", "exit", "apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack-control-plane namespace: openstack spec: secret: osp-secret storageClass: your-RHOCP-storage-class 1 cinder: 2 apiOverride: route: {} template: databaseInstance: openstack secret: osp-secret cinderAPI: replicas: 3 override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer cinderScheduler: replicas: 1 cinderBackup: 3 networkAttachments: - storage replicas: 0 # backend needs to be configured to activate the service cinderVolumes: 4 volume1: networkAttachments: 5 - storage replicas: 0 # backend needs to be configured to activate the service nova: 6 apiOverride: 7 route: {} template: apiServiceTemplate: replicas: 3 override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi 8 metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 9 spec: type: LoadBalancer metadataServiceTemplate: replicas: 3 override: service: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer schedulerServiceTemplate: replicas: 3 cellTemplates: cell0: cellDatabaseAccount: nova-cell0 cellDatabaseInstance: openstack cellMessageBusInstance: rabbitmq hasAPIAccess: true cell1: cellDatabaseAccount: nova-cell1 cellDatabaseInstance: openstack-cell1 cellMessageBusInstance: rabbitmq-cell1 noVNCProxyServiceTemplate: enabled: true networkAttachments: - ctlplane hasAPIAccess: true secret: osp-secret dns: template: options: - key: server values: - 192.168.122.1 - key: server values: - 192.168.122.2 override: service: metadata: annotations: metallb.universe.tf/address-pool: ctlplane metallb.universe.tf/allow-shared-ip: ctlplane metallb.universe.tf/loadBalancerIPs: 192.168.122.80 spec: type: LoadBalancer replicas: 2 galera: templates: openstack: storageRequest: 5000M secret: osp-secret replicas: 3 openstack-cell1: storageRequest: 5000M secret: osp-secret replicas: 3 keystone: apiOverride: route: {} template: override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer databaseInstance: openstack secret: osp-secret replicas: 3 glance: apiOverrides: 
default: route: {} template: databaseInstance: openstack storage: storageRequest: 10G secret: osp-secret keystoneEndpoint: default glanceAPIs: default: replicas: 0 # Configure back end; set to 3 when deploying service override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer networkAttachments: - storage barbican: apiOverride: route: {} template: databaseInstance: openstack secret: osp-secret barbicanAPI: replicas: 3 override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer barbicanWorker: replicas: 3 barbicanKeystoneListener: replicas: 1 memcached: templates: memcached: replicas: 3 neutron: apiOverride: route: {} template: replicas: 3 override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer databaseInstance: openstack secret: osp-secret networkAttachments: - internalapi swift: enabled: true proxyOverride: route: {} template: swiftProxy: networkAttachments: - storage override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer replicas: 1 swiftRing: ringReplicas: 1 swiftStorage: networkAttachments: - storage replicas: 1 storageRequest: 10Gi ovn: template: ovnDBCluster: ovndbcluster-nb: replicas: 3 dbType: NB storageRequest: 10G networkAttachment: internalapi ovndbcluster-sb: replicas: 3 dbType: SB storageRequest: 10G networkAttachment: internalapi ovnNorthd: {} placement: apiOverride: route: {} template: override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer databaseInstance: openstack replicas: 3 secret: osp-secret rabbitmq: 10 templates: rabbitmq: replicas: 3 override: service: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.85 11 spec: type: LoadBalancer rabbitmq-cell1: replicas: 3 override: service: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.86 12 spec: type: LoadBalancer telemetry: enabled: true template: metricStorage: enabled: true dashboardsEnabled: true monitoringStack: alertingEnabled: true scrapeInterval: 30s storage: strategy: persistent retention: 24h persistent: pvcStorageRequest: 20G autoscaling: enabled: false aodh: databaseAccount: aodh databaseInstance: openstack passwordSelector: aodhService: AodhPassword rabbitMqClusterName: rabbitmq serviceUser: aodh secret: osp-secret heatInstance: heat ceilometer: enabled: true secret: osp-secret logging: enabled: false ipaddr: 172.17.0.80", "cinder: enabled: false apiOverride: route: {}", "oc apply -f openstack_control_plane.yaml -n openstack", "oc get openstackcontrolplane -n openstack NAME STATUS MESSAGE openstack-control-plane Unknown Setup started", "oc get pods -n openstack", "oc get cinder -n openstack", "No resources found in openstack namespace.", "oc rsh -n openstack openstackclient 
openstack endpoint list --service volumev3", "No service with a type, name or ID of 'volumev3' exists." ]
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/deploying_a_network_functions_virtualization_environment/create-ctrl-plane-nfv
2.6. Displaying the Full Cluster Configuration
2.6. Displaying the Full Cluster Configuration Use the following command to display the full current cluster configuration.
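For example, run the command as the root user on any node in the cluster. This is a sketch of the invocation only; the output depends entirely on your cluster configuration and typically includes the cluster name, the configured resources and fencing devices, any constraints, and the cluster properties.

pcs config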
[ "pcs config" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/configuring_the_red_hat_high_availability_add-on_with_pacemaker/s1-pcsfullconfig-haar
Chapter 1. Web console
Chapter 1. Web console Learn how to access and use components of the Red Hat Advanced Cluster Management for Kubernetes console from the following documentation: Console overview Search in the console Accessing your console Enabling virtual machine actions (Technology Preview) 1.1. Console overview Learn more about console components that you can use to view, manage, or customize your console. See the following image of the Navigation from the Red Hat Advanced Cluster Management for Kubernetes console, which is described in more detail later in each section. See that the navigation represents major production function. 1.1.1. Console components Home Infrastructure Applications Governance Credentials 1.1.2. Home From the Red Hat Advanced Cluster Management for Kubernetes Home page in the All clusters view, you can access more information and you can search across the product. Click Welcome for more introductory information about each product function. 1.1.2.1. Overview Click Overview to see summary information, or to access clickable Cluster percentage values for policy violations, and more. From the Overview page, you can view the following information: Cluster and node counts across all clusters and for each provider Cluster status Cluster compliance Pod status Cluster add-ons You can also access all APIs from the integrated console. From the local-cluster view, go to Home > API Explorer to explore API groups. You can also use the Fleet view switch from the Overview page header to filter the page data by using cluster labels, and display metrics. The following information is displayed from the Fleet view switch: Number of clusters Application types Number of enabled policies on your cluster Cluster version Total number of nodes on your cluster Number of worker cores The following information from Red Hat Insights is displayed: Cluster recommendations Number of risk predictions Cluster health which includes the status and violations A view of your resources based on your custom query. If observability is enabled, alert and failing operator metrics from across your fleet are also displayed. To learn about Search, see Search in the console . 1.1.2.2. Command line tools From the Home page, you can access Command Line Interface (CLI) downloads by using the following steps: Click the ? icon in the toolbar of the console. Click Command Line Tools from the drop-down menu. Find the Advanced Cluster Management header to find a list of tools that are available for Red Hat Advanced Cluster Management, which is specified with the operating system and architecture. Select the appropriate binary file to download and use on your local system. 1.1.3. Infrastructure From Clusters , you can create new clusters or import existing clusters. From Automation , you can create an Ansible template. For more information about managing clusters, see The multicluster engine operator cluster lifecycle overview . Additionally, see specific information on these cluster types at Configuring Ansible Automation Platform tasks to run on managed clusters . 1.1.4. Applications Create an application and edit a .yaml file. Access an overview or more advanced information about each application. For more information about application resources, see Managing applications . 1.1.5. Governance Create and edit a .yaml file to create a policy. Use the Governance dashboard to manage policies and policy controllers. For more information, see Governance . 1.1.6. 
Credentials The credential stores the access information for a cloud provider. Each provider account requires its own credential, as does each domain on a single provider. Review your credentials or add a credential. See Managing credentials overview for more specific information about providers and credentials. 1.2. Search in the console For Red Hat Advanced Cluster Management for Kubernetes, search provides visibility into your Kubernetes resources across all of your clusters. Search also indexes the Kubernetes resources and the relationships to other resources. Search components Search customization and configurations Search operations and data types 1.2.1. Search components The search architecture is composed of the following components: Table 1.1. Search component table Component name Metrics Metric type Description search-collector Watches the Kubernetes resources, collects the resource metadata, computes relationships for resources across all of your managed clusters, and sends the collected data to the search-indexer . The search-collector on your managed cluster runs as a pod named, klusterlet-addon-search . search-indexer Receives resource metadata from the collectors and writes to PostgreSQL database. The search-indexer also watches resources in the hub cluster to keep track of active managed clusters. search_indexer_request_duration Histogram Time (seconds) the search indexer takes to process a request (from managed cluster). search_indexer_request_size Histogram Total changes (add, update, delete) in the search indexer request (from managed cluster). search_indexer_request_count Counter Total requests received by the search indexer (from managed clusters). search_indexer_requests_in_flight Gauge Total requests the search indexer is processing at a given time. search-api Provides access to all cluster data in the search-indexer through GraphQL and enforces role-based access control (RBAC). search_api_requests Histogram Histogram of HTTP requests duration in seconds. search_dbquery_duration_seconds Histogram Latency of database requests in seconds. search_api_db_connection_failed_total Counter The total number of database connection attempts that failed. search-postgres Stores collected data from all managed clusters in an instance of the PostgreSQL database. Search is configured by default on the hub cluster. When you provision or manually import a managed cluster, the klusterlet-addon-search is enabled. If you want to disable search on your managed cluster, see Modifying the klusterlet add-ons settings of your cluster for more information. 1.2.2. Search customization and configurations You can modify the default values in the search-v2-operator custom resource. To view details of the custom resource, run the following command: oc get search search-v2-operator -o yaml The search operator watches the search-v2-operator custom resource, reconciles the changes and updates active pods. View the following descriptions of the configurations: PostgreSQL database storage: When you install Red Hat Advanced Cluster Management, the PostgreSQL database is configured to save the PostgreSQL data in an empty directory ( emptyDir ) volume. If the empty directory size is limited, you can save the PostgreSQL data on a Persistent Volume Claim (PVC) to improve search performance. You can select a storageclass from your Red Hat Advanced Cluster Management hub cluster to back up your search data. 
For example, if you select the gp2 storageclass your configuration might resemble the following example: apiVersion: search.open-cluster-management.io/v1alpha1 kind: Search metadata: name: search-v2-operator namespace: open-cluster-management labels: cluster.open-cluster-management.io/backup: "" spec: dbStorage: size: 10Gi storageClassName: gp2 This configuration creates a PVC named gp2-search and is mounted to the search-postgres pod. By default, the storage size is 10Gi . You can modify the storage size. For example, 20Gi might be sufficient for about 200 managed clusters. Optimize cost by tuning the pod memory or CPU requirements, replica count, and update log levels for any of the four search pods ( indexer , database , queryapi , or collector pod). Update the deployment section of the search-v2-operator custom resource. There are four deployments managed by the search-v2-operator , which can be updated individually. Your search-v2-operator custom resource might resemble the following file: apiVersion: search.open-cluster-management.io/v1alpha1 kind: Search metadata: name: search-v2-operator namespace: open-cluster-management spec: deployments: collector: resources: 1 limits: cpu: 500m memory: 128Mi requests: cpu: 250m memory: 64Mi indexer: replicaCount: 3 database: 2 envVar: - name: POSTGRESQL_EFFECTIVE_CACHE_SIZE value: 1024MB - name: POSTGRESQL_SHARED_BUFFERS value: 512MB - name: WORK_MEM value: 128MB queryapi: arguments: 3 - -v=3 1 You can apply resources to an indexer , database , queryapi , or collector pod. 2 You can add multiple environment variables in the envVar section to specify a value for each variable that you name. 3 You can control the log level verbosity for any of the four pods by adding the - -v=3 argument. See the following example where memory resources are applied to the indexer pod: indexer: resources: limits: memory: 5Gi requests: memory: 1Gi You can define the node placement for search pods. You can update the Placement resource of search pods by using the nodeSelector parameter, or the tolerations parameter. View the following example configuration: spec: dbStorage: size: 10Gi deployments: collector: {} database: {} indexer: {} queryapi: {} nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra operator: Exists Specify your search query by selecting the Advanced search drop-down button to filter the Column , Operator , and Value options or add a search constraint. 1.2.3. Search operations and data types Specify your search query by using search operations as conditions. Characters such as >, >=, <, <=, != are supported. See the following search operation table: Table 1.2. Search operation table Default operation Data type Description = string, number This is the default operation. ! or != string, number This represents the NOT operation, which means to exclude from the search results. <, ⇐, >, >= number > date Dates matching the last hour, day, week, month, and year. * string Partial string match. 1.2.4. Additional resources For instruction about how to manage search, see Managing search . For more topics about the Red Hat Advanced Cluster Management for Kubernetes console, see Web console . 1.2.5. Managing search Use search to query resource data from your clusters. 
Required access: Cluster administrator Continue reading the following topics: Creating search configurable collection Customizing the search console Querying in the console Updating klusterlet-addon-search deployments on managed clusters 1.2.5.1. Creating search configurable collection To define which Kubernetes resources get collected from the cluster, create the search-collector-config config map. Complete the following steps: Run the following command to create the search-collector-config config map: oc apply -f <your-search-collector-config>.yaml List the resources in the allow ( data.AllowedResources ) and deny list ( data.DeniedResources ) sections within the config map. Your config map might resemble the following YAML file: apiVersion: v1 kind: ConfigMap metadata: name: search-collector-config namespace: <namespace where search-collector add-on is deployed> data: AllowedResources: |- 1 - apiGroups: - "*" resources: - services - pods - apiGroups: - admission.k8s.io - authentication.k8s.io resources: - "*" DeniedResources: |- 2 - apiGroups: - "*" resources: - secrets - apiGroups: - admission.k8s.io resources: - policies - iampolicies - certificatepolicies 1 The config map example displays services and pods to be collected from all apiGroups , while allowing all resources to be collected from the admission.k8s.io and authentication.k8s.io apiGroups . 2 The config map example also prevents the central collection of secrets from all apiGroups while preventing the collection of policies , iampolicies , and certificatepolicies from the apiGroup admission.k8s.io . Note: If you do not provide a config map, all resources are collected by default. If you only provide AllowedResources , all resources not listed in AllowedResources are automatically excluded. Resources listed in AllowedResources and DeniedResources at the same time are also excluded. 1.2.5.2. Customizing the search console Customize your search results and limits. Complete the following tasks to perform the customization: Customize the search result limit from the OpenShift Container Platform console. Update the console-mce-config in the multicluster-engine namespace. These settings apply to all users and might affect performance. View the following performance parameter descriptions: SAVED_SEARCH_LIMIT - The maximum amount of saved searches for each user. By default, there is a limit of ten saved searches for each user. The default value is 10 . To update the limit, add the following key value to the console-config config map: SAVED_SEARCH_LIMIT: x . SEARCH_RESULT_LIMIT - The maximum amount of search results displayed in the console. Default value is 1000 . To remove this limit set to -1 . SEARCH_AUTOCOMPLETE_LIMIT - The maximum number of suggestions retrieved for the search bar typeahead. Default value is 10,000 . To remove this limit set to -1 . Run the following patch command from the OpenShift Container Platform console to change the search result to 100 items: oc patch configmap console-mce-config -n multicluster-engine --type merge -p '{"data":{"SEARCH_RESULT_LIMIT":"100"}}' To add, edit, or remove suggested searches, create a config map named console-search-config and configure the suggestedSearches section. Suggested searches that are listed are also displayed from the console. It is required to have an id, name, and searchText for each search object. 
View the following config map example: kind: ConfigMap apiVersion: v1 metadata: name: console-search-config namespace: <acm-namespace> 1 data: suggestedSearches: |- [ { "id": "search.suggested.workloads.name", "name": "Workloads", "description": "Show workloads running on your fleet", "searchText": "kind:DaemonSet,Deployment,Job,StatefulSet,ReplicaSet" }, { "id": "search.suggested.unhealthy.name", "name": "Unhealthy pods", "description": "Show pods with unhealthy status", "searchText": "kind:Pod status:Pending,Error,Failed,Terminating,ImagePullBackOff,CrashLoopBackOff,RunContainerError,ContainerCreating" }, { "id": "search.suggested.createdLastHour.name", "name": "Created last hour", "description": "Show resources created within the last hour", "searchText": "created:hour" }, { "id": "search.suggested.virtualmachines.name", "name": "Virtual Machines", "description": "Show virtual machine resources", "searchText": "kind:VirtualMachine" } ] 1 Add the namespace where search is enabled. 1.2.5.3. Querying in the console You can type any text value in the Search box and results include anything with that value from any property, such as a name or namespace. Queries that contain an empty space are not supported. For more specific search results, include the property selector in your search. You can combine related values for the property for a more precise scope of your search. For example, search for cluster:dev red to receive results that match the string "red" in the dev cluster. Complete the following steps to make queries with search: Click Search in the navigation menu. Type a word in the Search box , then Search finds your resources that contain that value. As you search for resources, you receive other resources that are related to your original search result, which help you visualize how the resources interact with other resources in the system. Search returns and lists each cluster with the resource that you search. For resources in the hub cluster, the cluster name is displayed as local-cluster . Your search results are grouped by kind , and each resource kind is grouped in a table. Your search options depend on your cluster objects. You can refine your results with specific labels. Search is case-sensitive when you query labels. See the following examples that you can select for filtering: name , namespace , status , and other resource fields. Auto-complete provides suggestions to refine your search. See the following example: Search for a single field, such as kind:pod to find all pod resources. Search for multiple fields, such as kind:pod namespace:default to find the pods in the default namespace. Notes: When you search for more than one property selector with multiple values, the search returns either of the values that were queried. View the following examples: When you search for kind:Pod name:a , any pod named a is returned. When you search for kind:Pod name:a,b , any pod named a or b are returned. Search for kind:pod status:!Running to find all pod resources where the status is not Running . Search for kind:pod restarts:>1 to find all pods that restarted at least twice. If you want to save your search, click the Save search icon. To download your search results, select the Export as CSV button. 1.2.5.4. Updating klusterlet-addon-search deployments on managed clusters To collect the Kubernetes objects from the managed clusters, the klusterlet-addon-search pod is run on all the managed clusters where search is enabled. 
This deployment is run in the open-cluster-management-agent-addon namespace. A managed cluster with a high number of resources might require more memory for the klusterlet-addon-search deployment to function. Resource requirements for the klusterlet-addon-search pod in a managed cluster can be specified in the ManagedClusterAddon custom resource in your Red Hat Advanced Cluster Management hub cluster. There is a namespace for each managed cluster with the managed cluster name. Complete the following steps: Edit the ManagedClusterAddon custom resource from the namespace matching the managed cluster name. Run the following command to update the resource requirement in xyz managed cluster: oc edit managedclusteraddon search-collector -n xyz Append the resource requirements as annotations. View the following example: apiVersion: addon.open-cluster-management.io/v1alpha1 kind: ManagedClusterAddOn metadata: annotations: addon.open-cluster-management.io/search_memory_limit: 2048Mi addon.open-cluster-management.io/search_memory_request: 512Mi The annotation overrides the resource requirements on the managed clusters and automatically restarts the pod with new resource requirements. Note: You can discover all resources defined in your managed cluster by using the API Explorer in the console. Alternatively, you can discover all resources by running the following command: oc api-resources 1.2.5.5. Additional resources See multicluster global hub for more details. See Observing environments introduction . 1.3. Accessing your console The Red Hat Advanced Cluster Management for Kubernetes web console is integrated with the Red Hat OpenShift Container Platform web console as a console plug-in. You can access Red Hat Advanced Cluster Management within the OpenShift Container Platform console from the cluster switcher by selecting All Clusters . The cluster switcher is a drop-down menu that initially displays local-cluster . Select local-cluster when you want to use OpenShift Container Platform console features on the cluster where you installed Red Hat Advanced Cluster Management. Select All Clusters when you want to use Red Hat Advanced Cluster Management features to manage your fleet of clusters. If the cluster switcher is not present, the required console plug-ins might not be enabled. For new installations, the console plug-ins are enabled by default. If you upgraded from a version of Red Hat Advanced Cluster Management and want to enable the plug-ins, or if you want to disable the plug-ins, complete the following steps: To disable the plug-in, be sure you are in the Administrator perspective in the OpenShift Container Platform console. Find Administration in the navigation and click Cluster Settings , then click the Configuration tab. From the list of Configuration resources , click the Console resource with the operator.openshift.io API group, which contains cluster-wide configuration for the web console. Select the Console plug-ins tab. Both the acm and mce plug-ins are listed. Modify plug-in status from the table. In a few moments, you are prompted to refresh the console. Note: To enable and disable the console, see MultiClusterHub advanced for information. To learn more about the Red Hat Advanced Cluster Management for Kubernetes console, see Console overview . 1.4. 
Enabling virtual machine actions (Technology Preview) To view VirtualMachine resources across all the clusters that Red Hat Advanced Cluster Management for Kubernetes manages, use the Search feature to list and filter the VirtualMachine resources created with the Red Hat OpenShift Virtualization. You can also enable the following actions from the Red Hat Advanced Cluster Management console on your VirtualMachine resources: Start Stop Restart Pause Unpause Required access: Cluster administrator 1.4.1. Prerequisites Confirm that the ManagedServiceAccount add-on is enabled. See ManagedServiceAccount add-on . 1.4.2. Enabling virtual machine actions for Red Hat Advanced Cluster Management You can enable the virtual machine actions for Red Hat Advanced Cluster Management by updating the console config map. Complete the following steps: To update the Red Hat Advanced Cluster Management console config map for enabling virtual machine actions, run the following command: oc patch configmap console-mce-config -n multicluster-engine -p '{"data": {"VIRTUAL_MACHINE_ACTIONS": "enabled"}}' To configure Red Hat Advanced Cluster Management to process the actions, create and configure a ManagedServiceAccount resource for each managed cluster. Save the following YAML file: apiVersion: authentication.open-cluster-management.io/v1beta1 kind: ManagedServiceAccount metadata: name: vm-actor labels: app: search spec: rotation: {} --- apiVersion: rbac.open-cluster-management.io/v1alpha1 kind: ClusterPermission metadata: name: vm-actions labels: app: search spec: clusterRole: rules: - apiGroups: - subresources.kubevirt.io resources: - virtualmachines/start - virtualmachines/stop - virtualmachines/restart - virtualmachineinstances/pause - virtualmachineinstances/unpause verbs: - update clusterRoleBinding: subject: kind: ServiceAccount name: vm-actor namespace: open-cluster-management-agent-addon Note: You must repeat this step for each new managed cluster. Apply the ManagedServiceAccount resource to your hub cluster by running the following command: oc apply -n <MANAGED_CLUSTER> -f /path/to/file The virtual machine actions are enabled for Red Hat Advanced Cluster Management. 1.4.3. Disabling virtual machine actions To disable virtual machine actions for Red Hat Advanced Cluster Management, run the following command: oc patch configmap console-mce-config -n multicluster-engine -p '{"data": {"VIRTUAL_MACHINE_ACTIONS": "disabled"}}' The virtual machine actions are disabled for Red Hat Advanced Cluster Management. 1.4.4. Deleting ManagedServiceAccounts and ClusterPermissions resources To delete ManagedServiceAccounts and ClusterPermissions resources that use virtual machine actions, complete the following steps: To delete the resources, run the following command: oc delete managedserviceaccount,clusterpermission -A -l app=search You might receive the following output: managedserviceaccount.authentication.open-cluster-management.io "vm-actor" deleted managedserviceaccount.authentication.open-cluster-management.io "vm-actor" deleted clusterpermission.rbac.open-cluster-management.io "vm-actions" deleted clusterpermission.rbac.open-cluster-management.io "vm-actions" deleted To confirm that the clean up is complete, run the following command: oc get managedserviceaccount,clusterpermission -A -l app=search When the resources are deleted successfully, you receive the following message: "No resources found" The ManagedServiceAccounts and ClusterPermissions resources are deleted.
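As an optional check after enabling the actions, you can confirm the console setting and the per-cluster resources with standard oc commands. The following is a sketch; the config map name and the app=search label come from the steps above, and <MANAGED_CLUSTER> is a placeholder for the namespace of your managed cluster:

oc get configmap console-mce-config -n multicluster-engine -o jsonpath='{.data.VIRTUAL_MACHINE_ACTIONS}'
oc get managedserviceaccount,clusterpermission -n <MANAGED_CLUSTER> -l app=search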
[ "get search search-v2-operator -o yaml", "apiVersion: search.open-cluster-management.io/v1alpha1 kind: Search metadata: name: search-v2-operator namespace: open-cluster-management labels: cluster.open-cluster-management.io/backup: \"\" spec: dbStorage: size: 10Gi storageClassName: gp2", "apiVersion: search.open-cluster-management.io/v1alpha1 kind: Search metadata: name: search-v2-operator namespace: open-cluster-management spec: deployments: collector: resources: 1 limits: cpu: 500m memory: 128Mi requests: cpu: 250m memory: 64Mi indexer: replicaCount: 3 database: 2 envVar: - name: POSTGRESQL_EFFECTIVE_CACHE_SIZE value: 1024MB - name: POSTGRESQL_SHARED_BUFFERS value: 512MB - name: WORK_MEM value: 128MB queryapi: arguments: 3 - -v=3", "indexer: resources: limits: memory: 5Gi requests: memory: 1Gi", "spec: dbStorage: size: 10Gi deployments: collector: {} database: {} indexer: {} queryapi: {} nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra operator: Exists", "apply -f <your-search-collector-config>.yaml", "apiVersion: v1 kind: ConfigMap metadata: name: search-collector-config namespace: <namespace where search-collector add-on is deployed> data: AllowedResources: |- 1 - apiGroups: - \"*\" resources: - services - pods - apiGroups: - admission.k8s.io - authentication.k8s.io resources: - \"*\" DeniedResources: |- 2 - apiGroups: - \"*\" resources: - secrets - apiGroups: - admission.k8s.io resources: - policies - iampolicies - certificatepolicies", "patch configmap console-mce-config -n multicluster-engine --type merge -p '{\"data\":{\"SEARCH_RESULT_LIMIT\":\"100\"}}'", "kind: ConfigMap apiVersion: v1 metadata: name: console-search-config namespace: <acm-namespace> 1 data: suggestedSearches: |- [ { \"id\": \"search.suggested.workloads.name\", \"name\": \"Workloads\", \"description\": \"Show workloads running on your fleet\", \"searchText\": \"kind:DaemonSet,Deployment,Job,StatefulSet,ReplicaSet\" }, { \"id\": \"search.suggested.unhealthy.name\", \"name\": \"Unhealthy pods\", \"description\": \"Show pods with unhealthy status\", \"searchText\": \"kind:Pod status:Pending,Error,Failed,Terminating,ImagePullBackOff,CrashLoopBackOff,RunContainerError,ContainerCreating\" }, { \"id\": \"search.suggested.createdLastHour.name\", \"name\": \"Created last hour\", \"description\": \"Show resources created within the last hour\", \"searchText\": \"created:hour\" }, { \"id\": \"search.suggested.virtualmachines.name\", \"name\": \"Virtual Machines\", \"description\": \"Show virtual machine resources\", \"searchText\": \"kind:VirtualMachine\" } ]", "edit managedclusteraddon search-collector -n xyz", "apiVersion: addon.open-cluster-management.io/v1alpha1 kind: ManagedClusterAddOn metadata: annotations: addon.open-cluster-management.io/search_memory_limit: 2048Mi addon.open-cluster-management.io/search_memory_request: 512Mi", "patch configmap console-mce-config -n multicluster-engine -p '{\"data\": {\"VIRTUAL_MACHINE_ACTIONS\": \"enabled\"}}'", "apiVersion: authentication.open-cluster-management.io/v1beta1 kind: ManagedServiceAccount metadata: name: vm-actor labels: app: search spec: rotation: {} --- apiVersion: rbac.open-cluster-management.io/v1alpha1 kind: ClusterPermission metadata: name: vm-actions labels: app: search spec: clusterRole: rules: - apiGroups: - subresources.kubevirt.io resources: - virtualmachines/start - virtualmachines/stop - virtualmachines/restart - virtualmachineinstances/pause - virtualmachineinstances/unpause verbs: - 
update clusterRoleBinding: subject: kind: ServiceAccount name: vm-actor namespace: open-cluster-management-agent-addon", "apply -n <MANAGED_CLUSTER> -f /path/to/file", "patch configmap console-mce-config -n multicluster-engine -p '{\"data\": {\"VIRTUAL_MACHINE_ACTIONS\": \"disabled\"}}'", "delete managedserviceaccount,clusterpermission -A -l app=search", "managedserviceaccount.authentication.open-cluster-management.io \"vm-actor\" deleted managedserviceaccount.authentication.open-cluster-management.io \"vm-actor\" deleted clusterpermission.rbac.open-cluster-management.io \"vm-actions\" deleted clusterpermission.rbac.open-cluster-management.io \"vm-actions\" deleted", "get managedserviceaccount,clusterpermission -A -l app=search", "\"No resources found\"" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.12/html/web_console/web-console
About
About Red Hat OpenShift Service on AWS 4 OpenShift Service on AWS Documentation. Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/about/index
Installing on OCI
Installing on OCI OpenShift Container Platform 4.16 Installing OpenShift Container Platform on Oracle Cloud Infrastructure Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on_oci/index
Chapter 1. Introduction to Red Hat Virtualization
Chapter 1. Introduction to Red Hat Virtualization Red Hat Virtualization is an enterprise-grade virtualization platform built on Red Hat Enterprise Linux. Virtualization allows users to easily provision new virtual servers and workstations, and provides more efficient use of physical server resources. With Red Hat Virtualization, you can manage your entire virtual infrastructure - including hosts, virtual machines, networks, storage, and users - from a centralized graphical user interface or REST API. Table 1.1. Red Hat Virtualization Key Components Component Name Description Red Hat Virtualization Manager A service that provides a graphical user interface and a REST API to manage the resources in the environment. The Manager is installed on a physical or virtual machine running Red Hat Enterprise Linux. Hosts Red Hat Enterprise Linux hosts (RHEL hosts) and Red Hat Virtualization Hosts (image-based hypervisors) are the two supported types of host. Hosts use Kernel-based Virtual Machine (KVM) technology and provide resources used to run virtual machines. Shared Storage A storage service is used to store the data associated with virtual machines. Data Warehouse A service that collects configuration information and statistical data from the Manager. For detailed technical information about Red Hat Virtualization, see the Technical Reference . 1.1. Red Hat Virtualization Architecture Red Hat Virtualization can be deployed as a self-hosted engine, or as a standalone Manager. A self-hosted engine is the recommended deployment option. 1.1.1. Self-Hosted Engine Architecture The Red Hat Virtualization Manager runs as a virtual machine on self-hosted engine nodes (specialized hosts) in the same environment it manages. A self-hosted engine environment requires one less physical server, but requires more administrative overhead to deploy and manage. The Manager is highly available without external HA management. The minimum setup of a self-hosted engine environment includes: One Red Hat Virtualization Manager virtual machine that is hosted on the self-hosted engine nodes. The RHV-M Appliance is used to automate the installation of a Red Hat Enterprise Linux 7 virtual machine, and the Manager on that virtual machine. A minimum of two self-hosted engine nodes for virtual machine high availability. You can use Red Hat Enterprise Linux hosts or Red Hat Virtualization Hosts (RHVH). VDSM (the host agent) runs on all hosts to facilitate communication with the Red Hat Virtualization Manager. The HA services run on all self-hosted engine nodes to manage the high availability of the Manager virtual machine. One storage service, which can be hosted locally or on a remote server, depending on the storage type used. The storage service must be accessible to all hosts. Figure 1.1. Self-Hosted Engine Red Hat Virtualization Architecture 1.1.2. Standalone Manager Architecture The Red Hat Virtualization Manager runs on a physical server, or a virtual machine hosted in a separate virtualization environment. A standalone Manager is easier to deploy and manage, but requires an additional physical server. The Manager is only highly available when managed externally with a product such as Red Hat's High Availability Add-On. The minimum setup for a standalone Manager environment includes: One Red Hat Virtualization Manager machine. The Manager is typically deployed on a physical server. However, it can also be deployed on a virtual machine, as long as that virtual machine is hosted in a separate environment. 
The Manager must run on Red Hat Enterprise Linux 7. A minimum of two hosts for virtual machine high availability. You can use Red Hat Enterprise Linux hosts or Red Hat Virtualization Hosts (RHVH). VDSM (the host agent) runs on all hosts to facilitate communication with the Red Hat Virtualization Manager. One storage service, which can be hosted locally or on a remote server, depending on the storage type used. The storage service must be accessible to all hosts. Figure 1.2. Standalone Manager Red Hat Virtualization Architecture 1.2. Red Hat Virtualization Terminology Cluster - A cluster is a set of physical hosts that are treated as a resource pool for virtual machines. Hosts in a cluster share the same network infrastructure and storage. They form a migration domain within which virtual machines can be moved from host to host. Data Center - A data center is the highest level container for all physical and logical resources within a managed virtual environment. It is a collection of clusters, virtual machines, storage domains, and networks. Events - Alerts, warnings, and other notices about activities help the administrator to monitor the performance and status of resources. HA Services - The HA services include the ovirt-ha-agent service and the ovirt-ha-broker service. The HA services run on self-hosted engine nodes and manage the high availability of the Manager virtual machine. High Availability - High availability means that a virtual machine is automatically restarted if its process is interrupted, either on its original host or another host in the cluster. Highly available environments involve a small amount of downtime, but have a much lower cost than fault tolerance, which maintains two copies of each resource so that one can replace the other immediately in the event of a failure. Host - A host, or hypervisor, is a physical server that runs one or more virtual machines. Hosts are grouped into clusters. Virtual machines can be migrated from one host to another within a cluster. Host Storage Manager (HSM) - Any non-SPM host in the data center that can be used for data operations, such as moving a disk between storage domains. This prevents a bottleneck at the SPM host, which should be used for shorter metadata operations. Logical Network - A logical network is a logical representation of a physical network. Logical networks group network traffic and communication between the Manager, hosts, storage, and virtual machines. Remote Viewer - A graphical interface to connect to virtual machines over a network connection. Self-Hosted Engine Node - A self-hosted engine node is a host that has self-hosted engine packages installed so that it can host the Manager virtual machine. Regular hosts can also be attached to a self-hosted engine environment, but cannot host the Manager virtual machine. Snapshot - A snapshot is a view of a virtual machine's operating system and all its applications at a point in time. It can be used to save the settings of a virtual machine before an upgrade or before installing new applications. In case of problems, a snapshot can be used to restore the virtual machine to its original state. Storage Domain - A storage domain is a logical entity that contains a standalone image repository. Each storage domain is used to store virtual disks or ISO images, and for the import and export of virtual machine images. Storage Pool Manager (SPM) - The Storage Pool Manager (SPM) is a role assigned to one host in a data center. 
The SPM host has sole authority to make all metadata changes for the data center, such as the creation and removal of virtual disks. Template - A template is a model virtual machine with predefined settings. A virtual machine that is based on a particular template acquires the settings of the template. Using templates is the quickest way of creating a large number of virtual machines in a single step. VDSM - The host agent service running on the hosts, which communicates with the Red Hat Virtualization Manager. The service listens on TCP port 54321. Virtual Machine - A virtual machine is a virtual workstation or virtual server containing an operating system and a set of applications. Multiple identical virtual machines can be created in a Pool . Virtual machines are created, managed, or deleted by power users and accessed by users. Virtual Machine Pool - A virtual machine pool is a group of identical virtual machines that are available on demand by each group member. Virtual machine pools can be set up for different purposes. For example, one pool can be for the Marketing department, another for Research and Development, and so on.
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/product_guide/introduction
probe::signal.check_ignored
probe::signal.check_ignored Name probe::signal.check_ignored - Checks whether a received signal is ignored Synopsis signal.check_ignored Values sig_pid The PID of the process receiving the signal sig The number of the signal sig_name A string representation of the signal pid_name The name of the process receiving the signal
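A minimal usage sketch, assuming the systemtap package and the matching kernel debuginfo are installed and the command is run as root, prints one line each time the probe fires by using the values listed above:

stap -e 'probe signal.check_ignored { printf("%s(%d) checking ignored signal %s(%d)\n", pid_name, sig_pid, sig_name, sig) }'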
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-signal-check-ignored
Chapter 2. Archive JFR recordings
Chapter 2. Archive JFR recordings You can archive active JFR recordings to avoid potential data loss from JFR recordings. You can download or upload the archived JFR recording, so that you can analyze the recording to suits your needs. You can find archived JFR recordings from the Archives menu in chronological order under one of three headings: All Targets , All Archives , and Uploads . Depending on what actions you performed on a JFR recording, the recording might display under each table. 2.1. Archiving JDK Flight Recorder (JFR) recordings You can archive active JFR recordings to avoid potential data loss from JFR recordings. Data loss might occur when Cryostat replaces legacy JFR recording data with new data to save storage space or when a target JVM abruptly stops or restarts. When you create an archived recording, Cryostat copies the active JFR recording's data and stores the data in a persistent storage location on your Cryostat instance. The Red Hat build of Cryostat Operator builds this persistent storage location onto the associated persistent volume claim (PVC) on the Red Hat OpenShift cluster. You can archive any JFR recording, regardless of its configuration. Additionally, you can archive snapshots from a JFR recording. Prerequisites Entered your authentication details for your Cryostat instance. Created a target JVM recording and entered your authenticated details to access the Recordings menu. See Creating a JDK Flight Recorder (JFR) recording (Creating a JFR recording with Cryostat). Procedure On the Active Recordings tab, select the checkbox for your JFR recording. The Archive button is activated in the Active Recordings toolbar. Figure 2.1. Archive button for your JFR recording Click the Archive button. Cryostat creates an archived recording of your JFR recording. You can view your archived recording from under the Archived Recordings tab along with any other recording that relates to your selected target JVM. Alternatively, you can view your archived recording from under the All Targets table. Figure 2.2. Example of a listed target JVM application that is under the All Targets table Tip To remove a target JVM entry that does not have an archived recording, select the Hide targets with zero recordings checkbox. After you click on the twistie ( v ) beside the JVM target entry, you can access a filter function, where you can edit labels to enhance your filter or click the Delete button to remove the filter. From the All Targets table, select the checkbox beside each target JVM application that you want to review. The table lists each archived recording and its source location. Go to the All Archives table. This table looks similar to the All Targets table, but the All Archives table lists target JVM applications from files that Cryostat archived inside Cryostat. Note If an archived file has no recognizable JVM applications, it is still listed on the All Archives table but opens within a nested table under the heading lost . Optional: To delete an archived recording, select the checkbox to the specific archived JFR recording item, and click Delete when prompted. Figure 2.3. Deleting an archived JFR recording Note Cryostat assigns names to archived recordings based on the address of the target JVM's application, the name of the active recording, and the timestamp of the created archived recordings. Additional resources See Persistent storage using local volumes (Red Hat OpenShift) 2.2. 
Downloading an active recording or an archived recording You can use Cryostat to download an active recording or an archived recording to your local system. Prerequisites Entered your authentication details for your Cryostat instance. Created a JFR recording. See Creating a JDK Flight Recorder (JFR) recording (Creating a JFR recording with Cryostat). Optional: Uploaded an SSL certificate or provided your credentials to the target JVM. Optional: Archived your JFR recording. See Archiving JDK Flight Recorder (JFR) recordings (Using Cryostat to manage a JFR recording). Procedure Navigate to the Recordings menu or the Archives menu on your Cryostat instance. Note The remaining steps use the Recordings menu as an example, but you can follow similar steps on the Archives menu. Determine the recording you want by clicking either the Active Recordings tab or the Archived Recordings tab. Locate your listed JFR recording and then select its overflow menu. Figure 2.4. Viewing a JFR recording's overflow menu Choose one of the following options: From the overflow menu, click Download Recording . Depending on how you configured your operating system, a file-save dialog opens. Save the JFR binary file and the JSON file to your preferred location. From the All Targets table, select the overflow menu for your listed JFR recordings. Click Download to save the archived file along with its JSON file, which contains metadata and label information, to your local system. Optional: View the downloaded file with the Java Mission Control (JMC) desktop application. Note If you do not want to download the .jfr file, but instead want to view the data from your recording on the Cryostat application, you can click the View in Grafana option. 2.3. Uploading a JFR recording to the Cryostat archives location You can upload a JFR recording from your local system to the archives location of your Cryostat. To save Cryostat storage space, you might have scaled down or removed your JFR recording. If you downloaded a JFR recording, you can upload it to your Cryostat instance when you scale up or redeploy the instance. Additionally, you can upload a file from a Cryostat instance to a new Cryostat instance. Cryostat analysis tools work on the recording uploaded to the new Cryostat instance. Prerequisites Entered your authentication details for your Cryostat instance. Created a JFR recording. See Creating a JDK Flight Recorder (JFR) recording (Creating a JFR recording with Cryostat). See Downloading an active recording or an archived recording (Using Cryostat to manage a JFR recording). Procedure Go to the Archives menu on your Cryostat instance. Figure 2.5. Archives menu on the Cryostat web console Optional: From the Uploads table, you can view all of your uploaded JFR recordings. The Uploads table also includes a filtering mechanism similar to other tables, such as the All Targets table, and other output. You can also use the filtering mechanism on the Archives menu to find an archived file that might have no recognizable target JVM application. Figure 2.6. The Uploads table in the Archives menu Click the upload icon. A Re-Upload Archived Recording window opens in your Cryostat web console: Figure 2.7. Re-Upload Archived Recording window In the JFR File field, click Upload . Locate the JFR recording files, which are files with a .jfr extension, and then click Submit . Note Alternatively, you can drag and drop .jfr files into the JFR File field. Your JFR recording files open in the Uploads table. Figure 2.8.
Example of a JFR recording that is in the Uploads table
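Section 2.2 notes that a downloaded .jfr file can be opened in the Java Mission Control (JMC) desktop application. As a minimal command-line alternative, the jfr tool bundled with recent JDK releases (JDK 12 or later) can summarize or print events from the downloaded file. This is a hedged sketch that is not part of the Cryostat procedure, and the file name below is a placeholder:
jfr summary my-target_myRecording_20240101T120000Z.jfr
jfr print --events jdk.CPULoad,jdk.GCHeapSummary my-target_myRecording_20240101T120000Z.jfr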
null
https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/2/html/using_cryostat_to_manage_a_jfr_recording/assembly_archive-jfr-recordings_assembly_security-options
8.20. bind
8.20. bind 8.20.1. RHBA-2014:1373 - bind bug fix and enhancement update Updated bind packages that fix several bugs and add two enhancements are now available for Red Hat Enterprise Linux 6. BIND (Berkeley Internet Name Domain) is an implementation of the Domain Name System (DNS) protocols. BIND includes a DNS server ( named ), which resolves host names to IP addresses; a resolver library (routines for applications to use when interfacing with DNS); and tools for verifying that the DNS server is operating correctly. Bug Fixes BZ# 1044545 Previously, the allow-notify configuration option did not take into account the Transaction SIGnature (TSIG) key for authentication. Consequently, this caused a slave server not to accept a NOTIFY message from a non-master server that used the TSIG key for authentication, even though the slave server was configured to accept NOTIFY messages when the specific TSIG key was used. The named source code has been fixed to also check the TSIG key ID when receiving a NOTIFY message from a non-master server, and the slave server now correctly accepts NOTIFY messages in this scenario. BZ# 1036700 Prior to this update, the Response Rate Limiting (RRL) functionality in BIND distributed in Red Hat Enterprise Linux 6 was missing the referrals-per-second and nodata-per-second options. As a consequence, users of BIND that was configured to use the RRL functionality could not explicitly filter empty responses for a valid domain and referrals or delegations to the server for a given domain. With this update, the missing functionality has been backported to BIND , and users can now explicitly filter empty responses for a valid domain and referrals or delegations to the server for a given domain when using the RRL functionality in BIND . BZ# 1008827 Previously, the host utility used the same send buffer for all outgoing queries. As a consequence, under high network load, a race condition occurred when the buffer was used by multiple queries, and the host utility terminated unexpectedly with a segmentation fault when sending of one query finished after another query had been sent. The host utility source code has been modified to use a separate send buffer for all outgoing queries, and the described problem no longer occurs. BZ# 993612 Prior to this update, a bug in the BIND resolver source code caused a race condition, which could lead to prematurely freeing a fetch memory object. As a consequence, BIND could terminate unexpectedly with a segmentation fault when it accessed already freed memory. The BIND resolver source code has been fixed to guarantee that the resolver fetch object is not freed until there is no outstanding reference to that object, and BIND no longer crashes in this scenario. BZ# 1023045 Previously, the manual page for the dig utility contained upstream-specific options for an Internationalized Domain Name (IDN) library. Consequently, these options did not function as expected and users were incapable of disabling IDN support in dig following the steps from the manual page. The dig(1) manual page has been modified to include the options of the IDN library used in Red Hat Enterprise Linux and users can now successfully disable IDN support in dig following the steps from the manual page. BZ# 919545 Prior to this update, due to a regression, the dig utility could access an already freed query when trying multiple origins during domain name resolution. 
Consequently, the dig utility sometimes terminated unexpectedly with a segmentation fault, especially when running on a host that had multiple search domains configured in the /etc/resolv.conf file. The dig source code has been modified to always use a query that is still valid when trying the origin, and the dig utility no longer crashes in this scenario. BZ# 1066876 Prior to this update, the named source code was unable to correctly handle the Internet Control Message Protocol (ICMP) Destination unreachable (Protocol unreachable) responses. Consequently, an error message was logged by named upon receiving such an ICMP response but BIND did not add the address of the name server to a list of unreachable name servers. This bug has been fixed, and no errors are now logged when the ICMP Destination unreachable (Protocol unreachable) response is received. BZ# 902431 Previously, the /var/named/chroot/etc/localtime file was created during the installation of the bind-chroot package, but its SELinux context was not restored. Consequently, /var/named/chroot/etc/localtime had an incorrect SELinux context. With this update, the command to restore the SELinux context of /var/named/chroot/etc/localtime after creation has been added in the post transaction section of the SPEC file, and the correct SELinux context is preserved after installing bind-chroot . BZ# 917356 Previously, the /var/named/named.ca file was outdated and the IP addresses of certain root servers were not valid. Although the named service fetches the current IP addresses of all root servers during its startup, invalid IP addresses can reduce performance just after a restart. Now, /var/named/named.ca has been updated to include the current IP addresses of root servers. BZ# 997743 Prior to this update, the named init script checked the existence of the rndc.key file only during the server startup. Consequently, the init script generated rndc.key even if the user had a custom Remote Name Daemon Control (RNDC) configuration. This bug has been fixed, and the init script no longer generates rndc.key if the user has a custom RNDC configuration. BZ# 919414 Previously, when calling the sqlite commands, the zone2sqlite utility used a formatting option that did not add single quotes around the argument. As a consequence, zone2sqlite was unable to perform operations on tables whose name started with a digit or contained the period ( . ) or dash ( - ) characters. With this update, zone2sqlite has been fixed to use the correct formatting option and the described problem no longer occurs. BZ# 980632 Previously, the named init script did not check whether the PID written in the named.pid file was a PID of a running named server. After an unclean shutdown of the server, the PID written in named.pid could belong to an existing process while the named server was not running. Consequently, the init script could identify the server as running and therefore the user was unable to start the server. With this update, the init script has been enhanced to perform the necessary check, and if the PID written in named.pid is not a PID of the running named server, the init script deletes the named.pid file. The check is performed before starting, stopping, or reloading the server, and before checking its status. As a result, the user is able to start the server without problems in the described scenario. BZ# 1025008 Prior to this update, BIND was not configured with the --enable-filter-aaaa configuration option. 
As a consequence, the filter-aaaa-on-v4 option could not be used in the BIND configuration. The --enable-filter-aaaa option has been added, and users can now configure the filter-aaaa-on-v4 option in BIND . BZ# 851123 Prior to this update, the named init script command configtest did not check if BIND was already running, and mounted or unmounted the file system into a chroot environment. As a consequence, the named chroot file system was damaged by executing the configtest command while the named service was running in a chroot environment. This bug has been fixed, and using the init script configtest command no longer damages the file system if named is running in a chroot environment. BZ# 848033 Previously, due to a missing statement in the named init script, the init script could return an incorrect exit status when calling certain commands (namely, checkconfig , configtest , check , and test ) if the named configuration included an error. Consequently, for example, when the service named configtest command was run, the init script returned a zero value meaning success, regardless of the errors in the configuration. With this update, the init script has been fixed to correctly return a non-zero value in case of an error in the named configuration. BZ# 1051283 Previously, ownership of some documentation files installed by the bind package was not correctly set. Consequently, the files were incorrectly owned by named instead of the root user. A patch has been applied, and the ownership of documentation files installed by the bind package has been corrected. BZ# 951255 Prior to this update, the /dev/random device, which is a source of random data, did not have a sufficient amount of entropy when booting a newly installed virtual machine (VM). Consequently, generating the /etc/rndc.key file took excessively long when the named service was started for the first time. The init script has been changed to use /dev/urandom instead of /dev/random as the source of random data, and the generation of /etc/rndc.key now consumes a more reasonable amount of time in this scenario. BZ# 1064045 Previously, the nsupdate utility was unable to correctly handle an extra argument after the -r option, which sets the number of User Datagram Protocol (UDP) retries. As a consequence, when an argument followed the -r option, nsupdate terminated unexpectedly with a segmentation fault. A patch has been applied, and nsupdate now handles the -r option with an argument as expected. BZ# 948743 Previously, when the named service was running in a chroot environment, the init script checked whether the server was already running after it had mounted the chroot file system. As a consequence, if some directories were empty in the chroot environment, they were mounted again when the service named start command was used. With this update, the init script has been fixed to check whether named is running before mounting file system into the chroot environment and no directories are mounted multiple times in this scenario. BZ# 846065 Previously, BIND was not configured with the --with-dlopen=yes option. As a consequence, external Dynamically Loadable Zones (DLZ) drivers could not be dynamically loaded. A patch has been applied, and external DLZ drivers are now dynamically loadable as expected. Enhancements BZ# 1092035 Previously, the number of workers and client-objects was hard-coded in the Lightweight Resolver Daemon ( lwresd ) source, and it was insufficient. 
This update adds two new options: the lwres-tasks option, which can be used for modifying the number of workers created, and the lwres-clients option, which can be used for specifying the number of client objects created per worker. The options can be used inside the lwres statement in the named/lwresd configuration file. BZ# 956685 This update adds support for the TLSA resource record type in input zone files, as specified in RFC 6698. TLSA records together with Domain Name System Security Extensions (DNSSEC) are used for DNS-Based Authentication of Named Entities (DANE). Users of bind are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. After installing the update, the BIND daemon ( named ) will be restarted automatically.
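The RRL options added by BZ#1036700 and the filter-aaaa-on-v4 option enabled by BZ#1025008 are set in the options statement of /etc/named.conf. The excerpt below is an illustrative sketch rather than configuration shipped with this update: the rate values are arbitrary examples.
options {
    filter-aaaa-on-v4 yes;
    rate-limit {
        responses-per-second 10;
        referrals-per-second 5;
        nodata-per-second 5;
    };
};
Validate the file before restarting or reloading the server:
named-checkconf /etc/named.conf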
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/bind
Operating
Operating Red Hat Advanced Cluster Security for Kubernetes 4.7 Operating Red Hat Advanced Cluster Security for Kubernetes Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.7/html/operating/index
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/deploying_openshift_data_foundation_on_vmware_vsphere/providing-feedback-on-red-hat-documentation_rhodf
Support
Support OpenShift Container Platform 4.18 Getting support for OpenShift Container Platform Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/support/index
Chapter 9. Security
Chapter 9. Security SSH connections using libica AES-GCM now work correctly Previously, unmodified data could be tagged as modified when using decryption with the AES-GCM cipher suite. As a consequence, SSH connections could not be established when using AES-GCM , and with some applications, data encrypted using AES-GCM could not be decrypted. With this update, the tag is computed from the ciphertext when decrypting and from the plaintext when encrypting. As a result, SSH connections using AES-GCM are now successfully established, and it is possible to decrypt data encrypted with AES-GCM . (BZ#1490894)
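One hedged way to confirm the fix is to request an AES-GCM cipher explicitly for a test connection and check the negotiated cipher in the verbose output. This sketch assumes an OpenSSH client that offers the AES-GCM ciphers (OpenSSH 6.2 or later); the user and host names are placeholders.
ssh -vv -c aes256-gcm@openssh.com user@example-host true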
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.10_technical_notes/bug_fixes_security
function::pstrace
function::pstrace Name function::pstrace - Chain of processes and pids back to init(1) Synopsis Arguments task Pointer to task struct of process Description This function returns a string listing execname and pid for each process starting from task back to the process ancestor that init(1) spawned.
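A brief usage sketch, not part of the tapset reference itself: the one-liner below assumes SystemTap is installed and run with sufficient privileges. It prints the ancestry chain returned by pstrace each time a process calls exec, passing task_current() as the task argument.
stap -e 'probe kprocess.exec { printf("%s\n", pstrace(task_current())) }'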
[ "pstrace:string(task:long)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-pstrace
Chapter 3. Monitoring Camel Spring Boot integrations
Chapter 3. Monitoring Camel Spring Boot integrations This chapter explains how to monitor integrations on Red Hat build of Camel Spring Boot at runtime. You can use the Prometheus Operator that is already deployed as part of OpenShift Monitoring to monitor your own applications. 3.1. Enabling user workload monitoring in OpenShift You can enable the monitoring for user-defined projects by setting the enableUserWorkload: true field in the cluster monitoring ConfigMap object. Important In OpenShift Container Platform 4.13 you must remove any custom Prometheus instances before enabling monitoring for user-defined projects. Prerequisites You must have access to the cluster as a user with the cluster-admin cluster role access to enable monitoring for user-defined projects in OpenShift Container Platform. Cluster administrators can then optionally grant users permission to configure the components that are responsible for monitoring user-defined projects. You have cluster admin access to the OpenShift cluster. You have installed the OpenShift CLI (oc). You have created the cluster-monitoring-config ConfigMap object. Optional: You have created and configured the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project. You can add configuration options to this ConfigMap object for the components that monitor user-defined projects. Note Every time you save configuration changes to the user-workload-monitoring-config ConfigMap object, the pods in the openshift-user-workload-monitoring project are redeployed. It can sometimes take a while for these components to redeploy. You can create and configure the ConfigMap object before you first enable monitoring for user-defined projects, to prevent having to redeploy the pods often. Procedure Login to OpenShift with administrator permissions. Edit the cluster-monitoring-config ConfigMap object. Add enableUserWorkload: true in the data/config.yaml section. When it is set to true, the enableUserWorkload parameter enables monitoring for user-defined projects in a cluster. Save the file to apply the changes. The monitoring for the user-defined projects is then enabled automatically. Note When the changes are saved to the cluster-monitoring-config ConfigMap object, the pods and other resources in the openshift-monitoring project might be redeployed. The running monitoring processes in that project might also be restarted. Verify that the prometheus-operator , prometheus-user-workload and thanos-ruler-user-workload pods are running in the openshift-user-workload-monitoring project. 3.2. Deploying a Camel Spring Boot application After you enable the monitoring for your project, you can deploy and monitor the Camel Spring Boot application. This section uses the monitoring-micrometrics-grafana-prometheus example listed in the Camel Spring Boot Examples . Procedure Add the openshift-maven-plugin to the pom.xml file of the monitoring-micrometrics-grafana-prometheus example. In the pom.xml , add an openshift profile to allow deployment to openshift through the openshift-maven-plugin. <profiles> <profile> <id>openshift</id> <build> <plugins> <plugin> <groupId>org.eclipse.jkube</groupId> <artifactId>openshift-maven-plugin</artifactId> <version>1.13.1</version> <executions> <execution> <goals> <goal>resource</goal> <goal>build</goal> </goals> </execution> </executions> </plugin> </plugins> </build> </profile> </profiles> Add the Prometheus support. 
In order to add the Prometheus support to your Camel Spring Boot application, expose the Prometheus statistics on an actuator endpoint. Edit your src/main/resources/application.properties file. If you have a management.endpoints.web.exposure.include entry, add prometheus, metrics, and health. If you do not have a management.endpoints.web.exposure.include entry, please add one. Add the following to the <dependencies/> section of your pom.xml to add some starter support to your application. Add the following to the Application.java of your Camel Spring Boot application. import org.springframework.context.annotation.Bean; import org.apache.camel.component.micrometer.MicrometerConstants; import org.apache.camel.component.micrometer.eventnotifier.MicrometerExchangeEventNotifier; import org.apache.camel.component.micrometer.eventnotifier.MicrometerRouteEventNotifier; import org.apache.camel.component.micrometer.messagehistory.MicrometerMessageHistoryFactory; import org.apache.camel.component.micrometer.routepolicy.MicrometerRoutePolicyFactory; The updated Application.java is shown below. @SpringBootApplication public class SampleCamelApplication { @Bean(name = {MicrometerConstants.METRICS_REGISTRY_NAME, "prometheusMeterRegistry"}) public PrometheusMeterRegistry prometheusMeterRegistry( PrometheusConfig prometheusConfig, CollectorRegistry collectorRegistry, Clock clock) throws MalformedObjectNameException, IOException { InputStream resource = new ClassPathResource("config/prometheus_exporter_config.yml").getInputStream(); new JmxCollector(resource).register(collectorRegistry); new BuildInfoCollector().register(collectorRegistry); return new PrometheusMeterRegistry(prometheusConfig, collectorRegistry, clock); } @Bean public CamelContextConfiguration camelContextConfiguration(@Autowired PrometheusMeterRegistry registry) { return new CamelContextConfiguration() { @Override public void beforeApplicationStart(CamelContext camelContext) { MicrometerRoutePolicyFactory micrometerRoutePolicyFactory = new MicrometerRoutePolicyFactory(); micrometerRoutePolicyFactory.setMeterRegistry(registry); camelContext.addRoutePolicyFactory(micrometerRoutePolicyFactory); MicrometerMessageHistoryFactory micrometerMessageHistoryFactory = new MicrometerMessageHistoryFactory(); micrometerMessageHistoryFactory.setMeterRegistry(registry); camelContext.setMessageHistoryFactory(micrometerMessageHistoryFactory); MicrometerExchangeEventNotifier micrometerExchangeEventNotifier = new MicrometerExchangeEventNotifier(); micrometerExchangeEventNotifier.setMeterRegistry(registry); camelContext.getManagementStrategy().addEventNotifier(micrometerExchangeEventNotifier); MicrometerRouteEventNotifier micrometerRouteEventNotifier = new MicrometerRouteEventNotifier(); micrometerRouteEventNotifier.setMeterRegistry(registry); camelContext.getManagementStrategy().addEventNotifier(micrometerRouteEventNotifier); } @Override public void afterApplicationStart(CamelContext camelContext) { } }; } Deploy the application to OpenShift. Verify if your application is deployed. Add the Service Monitor for this application so that OpenShift's Prometheus instance can start scraping from the /actuator/prometheus endpoint. Create the following YAML manifest for a Service monitor. In this example, the file is named servicemonitor.yaml . Add a Service Monitor for this application. Verify that the service monitor was successfully deployed. Verify that you can see the service monitor in the list of scrape targets. 
In the Administrator view, navigate to Observe → Targets. You can find csb-demo-monitor within the list of scrape targets. Wait about ten minutes after deploying the servicemonitor. Then navigate to Observe → Metrics in the Developer view. Select Custom query in the drop-down menu and type camel to view the Camel metrics that are exposed through the /actuator/prometheus endpoint. Note Red Hat does not offer support for installing and configuring Prometheus and Grafana on non-OCP environments.
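You can also verify locally that the application exposes Camel metrics before checking the console. The following sketch makes assumptions that might not match your deployment: the Service name camel-example-spring-boot-xml is inferred from the example output above, and port 8080 is assumed to be the service's HTTP port.
oc port-forward svc/camel-example-spring-boot-xml 8080:8080 &
curl -s http://localhost:8080/actuator/prometheus | grep -i camel | head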
[ "login --user system:admin --token=my-token --server=https://my-cluster.example.com:6443", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | enableUserWorkload: true", "oc -n openshift-user-workload-monitoring get pod Example output NAME READY STATUS RESTARTS AGE prometheus-operator-6f7b748d5b-t7nbg 2/2 Running 0 3h prometheus-user-workload-0 4/4 Running 1 3h prometheus-user-workload-1 4/4 Running 1 3h thanos-ruler-user-workload-0 3/3 Running 0 3h thanos-ruler-user-workload-1 3/3 Running 0 3h", "<profiles> <profile> <id>openshift</id> <build> <plugins> <plugin> <groupId>org.eclipse.jkube</groupId> <artifactId>openshift-maven-plugin</artifactId> <version>1.13.1</version> <executions> <execution> <goals> <goal>resource</goal> <goal>build</goal> </goals> </execution> </executions> </plugin> </plugins> </build> </profile> </profiles>", "expose actuator endpoint via HTTP management.endpoints.web.exposure.include=mappings,metrics,health,shutdown,jolokia,prometheus", "<dependency> <groupId>io.micrometer</groupId> <artifactId>micrometer-registry-prometheus</artifactId> </dependency> <dependency> <groupId>org.jolokia</groupId> <artifactId>jolokia-core</artifactId> <version>USD{jolokia-version}</version> </dependency> <dependency> <groupId>io.prometheus.jmx</groupId> <artifactId>collector</artifactId> <version>USD{prometheus-version}</version> </dependency>", "import org.springframework.context.annonation.Bean; import org.apache.camel.component.micrometer.MicrometerConstants; import org.apache.camel.component.micrometer.eventnotifier.MicrometerExchangeEventNotifier; import org.apache.camel.component.micrometer.eventnotifier.MicrometerRouteEventNotifier; import org.apache.camel.component.micrometer.messagehistory.MicrometerMessageHistoryFactory; import org.apache.camel.component.micrometer.routepolicy.MicrometerRoutePolicyFactory;", "@SpringBootApplication public class SampleCamelApplication { @Bean(name = {MicrometerConstants.METRICS_REGISTRY_NAME, \"prometheusMeterRegistry\"}) public PrometheusMeterRegistry prometheusMeterRegistry( PrometheusConfig prometheusConfig, CollectorRegistry collectorRegistry, Clock clock) throws MalformedObjectNameException, IOException { InputStream resource = new ClassPathResource(\"config/prometheus_exporter_config.yml\").getInputStream(); new JmxCollector(resource).register(collectorRegistry); new BuildInfoCollector().register(collectorRegistry); return new PrometheusMeterRegistry(prometheusConfig, collectorRegistry, clock); } @Bean public CamelContextConfiguration camelContextConfiguration(@Autowired PrometheusMeterRegistry registry) { return new CamelContextConfiguration() { @Override public void beforeApplicationStart(CamelContext camelContext) { MicrometerRoutePolicyFactory micrometerRoutePolicyFactory = new MicrometerRoutePolicyFactory(); micrometerRoutePolicyFactory.setMeterRegistry(registry); camelContext.addRoutePolicyFactory(micrometerRoutePolicyFactory); MicrometerMessageHistoryFactory micrometerMessageHistoryFactory = new MicrometerMessageHistoryFactory(); micrometerMessageHistoryFactory.setMeterRegistry(registry); camelContext.setMessageHistoryFactory(micrometerMessageHistoryFactory); MicrometerExchangeEventNotifier micrometerExchangeEventNotifier = new MicrometerExchangeEventNotifier(); micrometerExchangeEventNotifier.setMeterRegistry(registry); 
camelContext.getManagementStrategy().addEventNotifier(micrometerExchangeEventNotifier); MicrometerRouteEventNotifier micrometerRouteEventNotifier = new MicrometerRouteEventNotifier(); micrometerRouteEventNotifier.setMeterRegistry(registry); camelContext.getManagementStrategy().addEventNotifier(micrometerRouteEventNotifier); } @Override public void afterApplicationStart(CamelContext camelContext) { } }; }", "mvn -Popenshift oc:deploy", "get pods -n myapp NAME READY STATUS RESTARTS AGE camel-example-spring-boot-xml-2-deploy 0/1 Completed 0 13m camel-example-spring-boot-xml-2-x78rk 1/1 Running 0 13m camel-example-spring-boot-xml-s2i-2-build 0/1 Completed 0 14m", "apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: labels: k8s-app: csb-demo-monitor name: csb-demo-monitor spec: endpoints: - interval: 30s port: http scheme: http path: /actuator/prometheus selector: matchLabels: app: camel-example-spring-boot-xml", "apply -f servicemonitor.yml servicemonitor.monitoring.coreos.com/csb-demo-monitor \"myapp\" created", "get servicemonitor NAME AGE csb-demo-monitor 9m17s" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/getting_started_with_red_hat_build_of_apache_camel_for_spring_boot/monitoring-csb-integrations
Chapter 10. The mClock OSD scheduler
Chapter 10. The mClock OSD scheduler As a storage administrator, you can implement the Red Hat Ceph Storage's quality of service (QoS) using mClock queueing scheduler. This is based on an adaptation of the mClock algorithm called dmClock. The mClock OSD scheduler provides the desired QoS using configuration profiles to allocate proper reservation, weight, and limit tags to the service types. The mClock OSD scheduler performs the QoS calculations for the different device types, that is SSD or HDD, by using the OSD's IOPS capability (determined automatically) and maximum sequential bandwidth capability (See osd_mclock_max_sequential_bandwidth_hdd and osd_mclock_max_sequential_bandwidth_ssd in The mclock configuration options section). 10.1. Comparison of mClock OSD scheduler with WPQ OSD scheduler The mClock OSD scheduler replaces the Weighted Priority Queue (WPQ) OSD scheduler as a default scheduler in Red Hat Ceph Storage 6.1. Important The mClock scheduler is supported for BlueStore OSDs. The WPQ OSD scheduler features a strict sub-queue, which is de-queued before the normal queue. The WPQ removes operations from a queue in relation to their priorities to prevent depletion of any queue. This helps in cases where some Ceph OSDs are more overloaded than others. The mClock OSD scheduler currently features an immediate queue, into which operations that require immediate response are queued. The immediate queue is not handled by mClock and functions as a simple first in, first out queue and is given the first priority. Operations, such as OSD replication operations, OSD operation replies, peering, recoveries marked with the highest priority, and so forth, are queued into the immediate queue. All other operations are enqueued into the mClock queue that works according to the mClock algorithm. The mClock queue, mclock_scheduler , prioritizes operations based on which bucket they belong to, that is pg recovery , pg scrub , snap trim , client op , and pg deletion . With background operations in progress, the average client throughputs, that is the input and output operations per second (IOPS), are significantly higher and latencies are lower with the mClock profiles when compared to the WPQ scheduler. That is because of mClock's effective allocation of the QoS parameters. Additional Resources See the mClock profiles section for more information. 10.2. The allocation of input and output resources This section describes how the QoS controls work internally with reservation, limit, and weight allocation. The user is not expected to set these controls as the mClock profiles automatically set them. Tuning these controls can only be performed using the available mClock profiles. The dmClock algorithm allocates the input and output (I/O) resources of the Ceph cluster in proportion to weights. It implements the constraints of minimum reservation and maximum limitation to ensure the services can compete for the resources fairly. Currently, the mclock_scheduler operation queue divides Ceph services involving I/O resources into following buckets: client op : the input and output operations per second (IOPS) issued by a client. pg deletion : the IOPS issued by primary Ceph OSD. snap trim : the snapshot trimming-related requests. pg recovery : the recovery-related requests. pg scrub : the scrub-related requests. 
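As a quick check that is not part of the chapter's procedures, you can confirm which operation queue scheduler a running OSD uses; osd.0 is an example OSD ID.
ceph config show osd.0 osd_op_queue
On Red Hat Ceph Storage 6.1 and later this is expected to report mclock_scheduler.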
The resources are partitioned using the following three sets of tags, meaning that the share of each type of service is controlled by these three tags: Reservation Limit Weight Reservation The minimum IOPS allocated for the service. The more reservation a service has, the more resources it is guaranteed to possess, as long as it requires so. For example, a service with the reservation set to 0.1 (or 10%) always has 10% of the OSD's IOPS capacity allocated for itself. Therefore, even if the clients start to issue large amounts of I/O requests, they do not exhaust all the I/O resources and the service's operations are not depleted even in a cluster with high load. Limit The maximum IOPS allocated for the service. The service does not get more than the set number of requests per second serviced, even if it requires so and no other services are competing with it. If a service crosses the enforced limit, the operation remains in the operation queue until the limit is restored. Note If the value is set to 0 (disabled), the service is not restricted by the limit setting and it can use all the resources if there is no other competing operation. This is represented as "MAX" in the mClock profiles. Note The reservation and limit parameter allocations are per-shard, based on the type of backing device, that is HDD or SSD, under the Ceph OSD. See OSD Object storage daemon configuration options for more details about osd_op_num_shards_hdd and osd_op_num_shards_ssd parameters. Weight The proportional share of capacity if extra capacity or system is not enough. The service can use a larger portion of the I/O resource, if its weight is higher than its competitor's. Note The reservation and limit values for a service are specified in terms of a proportion of the total IOPS capacity of the OSD. The proportion is represented as a percentage in the mClock profiles. The weight does not have a unit. The weights are relative to one another, so if one class of requests has a weight of 9 and another a weight of 1, then the requests are performed at a 9 to 1 ratio. However, that only happens once the reservations are met and those values include the operations performed under the reservation phase. Important If the weight is set to W , then for a given class of requests the one that enters has a weight tag of 1/W and the weight tag, or the current time, whichever is larger. That means, if W is too large and thus 1/W is too small, the calculated tag might never be assigned as it gets a value of the current time. Therefore, values for weight should be always under the number of requests expected to be serviced each second. 10.3. Factors that impact mClock operation queues There are three factors that can reduce the impact of the mClock operation queues within Red Hat Ceph Storage: The number of shards for client operations. The number of operations in the operation sequencer. The usage of distributed system for Ceph OSDs The number of shards for client operations Requests to a Ceph OSD are sharded by their placement group identifier. Each shard has its own mClock queue and these queues neither interact, nor share information amongst them. The number of shards can be controlled with these configuration options: osd_op_num_shards osd_op_num_shards_hdd osd_op_num_shards_ssd A lower number of shards increase the impact of the mClock queues, but might have other damaging effects. 
Note Use the default number of shards as defined by the configuration options osd_op_num_shards , osd_op_num_shards_hdd , and osd_op_num_shards_ssd . The number of operations in the operation sequencer Requests are transferred from the operation queue to the operation sequencer, in which they are processed. The mClock scheduler is located in the operation queue. It determines which operation to transfer to the operation sequencer. The number of operations allowed in the operation sequencer is a complex issue. The aim is to keep enough operations in the operation sequencer so it always works on some, while it waits for disk and network access to complete other operations. However, mClock no longer has control over an operation that is transferred to the operation sequencer. Therefore, to maximize the impact of mClock, the goal is also to keep as few operations in the operation sequencer as possible. The configuration options that influence the number of operations in the operation sequencer are: bluestore_throttle_bytes bluestore_throttle_deferred_bytes bluestore_throttle_cost_per_io bluestore_throttle_cost_per_io_hdd bluestore_throttle_cost_per_io_ssd Note Use the default values as defined by the bluestore_throttle_bytes and bluestore_throttle_deferred_bytes options. However, these options can be determined during the benchmarking phase. The usage of distributed system for Ceph OSDs The third factor that affects the impact of the mClock algorithm is the usage of a distributed system, where requests are made to multiple Ceph OSDs, and each Ceph OSD can have multiple shards. However, Red Hat Ceph Storage currently uses the mClock algorithm, which is not a distributed version of mClock. Note dmClock is the distributed version of mClock. Additional Resources See Object Storage Daemon (OSD) configuration options for more details about osd_op_num_shards_hdd and osd_op_num_shards_ssd parameters. See BlueStore configuration options for more details about BlueStore throttle parameters. See Manually benchmarking OSDs for more information. 10.4. The mClock configuration To make the mClock more user-friendly and intuitive, the mClock configuration profiles are introduced in Red Hat Ceph Storage 6. The mClock profiles hide the low-level details from users, making it easier to configure and use mClock. The following input parameters are required for an mClock profile to configure the quality of service (QoS) related parameters: The total capacity of input and output operations per second (IOPS) for each Ceph OSD. This is determined automatically. The maximum sequential bandwidth capacity (MiB/s) of each OSD. See osd_mclock_max_sequential_bandwidth_[hdd/ssd] option An mClock profile type to be enabled. The default is balanced . Using the settings in the specified profile, a Ceph OSD determines and applies the lower-level mClock and Ceph parameters. The parameters applied by the mClock profile make it possible to tune the QoS between the client I/O and background operations in the OSD. Additional Resources See The Ceph OSD capacity determination for more information about the automated OSD capacity determination. 10.5. mClock clients The mClock scheduler handles requests from different types of Ceph services. Each service is considered by mClock as a type of client. Depending on the type of requests handled, mClock clients are classified into the buckets: Client - Handles input and output (I/O) requests issued by external clients of Ceph. Background recovery - Handles internal recovery requests. 
Background best-effort - Handles internal backfill, scrub, snap trim, and placement group (PG) deletion requests. The mClock scheduler derives the cost of an operation used in the QoS calculations from osd_mclock_max_capacity_iops_hdd | osd_mclock_max_capacity_iops_ssd , osd_mclock_max_sequential_bandwidth_hdd | osd_mclock_max_sequential_bandwidth_ssd and osd_op_num_shards_hdd | osd_op_num_shards_ssd parameters. 10.6. mClock profiles An mClock profile is a configuration setting. When applied to a running Red Hat Ceph Storage cluster, it enables the throttling of the IOPS operations belonging to different client classes, such as background recovery, scrub , snap trim , client op , and pg deletion . The mClock profile uses the capacity limits and the mClock profile type selected by the user to determine the low-level mClock resource control configuration parameters and applies them transparently. Other Red Hat Ceph Storage configuration parameters are also applied. The low-level mClock resource control parameters are the reservation, limit, and weight that provide control of the resource shares. The mClock profiles allocate these parameters differently for each client type. 10.6.1. mClock profile types mClock profiles can be classified into built-in and custom profiles. If any mClock profile is active, the following Red Hat Ceph Storage configuration sleep options get disabled, which means they are set to 0 : osd_recovery_sleep osd_recovery_sleep_hdd osd_recovery_sleep_ssd osd_recovery_sleep_hybrid osd_scrub_sleep osd_delete_sleep osd_delete_sleep_hdd osd_delete_sleep_ssd osd_delete_sleep_hybrid osd_snap_trim_sleep osd_snap_trim_sleep_hdd osd_snap_trim_sleep_ssd osd_snap_trim_sleep_hybrid It is to ensure that mClock scheduler is able to determine when to pick the operation from its operation queue and transfer it to the operation sequencer. This results in the desired QoS being provided across all its clients. Custom profile This profile gives users complete control over all the mClock configuration parameters. It should be used with caution and is meant for advanced users, who understand mClock and Red Hat Ceph Storage related configuration options. Built-in profiles When a built-in profile is enabled, the mClock scheduler calculates the low-level mClock parameters, that is, reservation, weight, and limit, based on the profile enabled for each client type. The mClock parameters are calculated based on the maximum Ceph OSD capacity provided beforehand. Therefore, the following mClock configuration options cannot be modified when using any of the built-in profiles: osd_mclock_scheduler_client_res osd_mclock_scheduler_client_wgt osd_mclock_scheduler_client_lim osd_mclock_scheduler_background_recovery_res osd_mclock_scheduler_background_recovery_wgt osd_mclock_scheduler_background_recovery_lim osd_mclock_scheduler_background_best_effort_res osd_mclock_scheduler_background_best_effort_wgt osd_mclock_scheduler_background_best_effort_lim Note These defaults cannot be changed using any of the config subsystem commands like config set , config daemon or config tell commands. Although the above command(s) report success, the mclock QoS parameters are reverted to their respective built-in profile defaults. The following recovery and backfill related Ceph options are overridden to mClock defaults: Warning Do not change these options as the built-in profiles are optimized based on them. Changing these defaults can result in unexpected performance outcomes. 
osd_max_backfills osd_recovery_max_active osd_recovery_max_active_hdd osd_recovery_max_active_ssd The following options show the mClock defaults, which are the same as the current defaults, to maximize the performance of the foreground client operations: osd_max_backfills Original default 1 mClock default 1 osd_recovery_max_active Original default 0 mClock default 0 osd_recovery_max_active_hdd Original default 3 mClock default 3 osd_recovery_max_active_ssd Original default 10 mClock default 10 Note The above mClock defaults can be modified, only if necessary, by enabling osd_mclock_override_recovery_settings , which is set to false by default. See Modifying backfill and recovery options to modify these parameters. Built-in profile types Users can choose from the following built-in profile types: balanced (default) high_client_ops high_recovery_ops Note The values mentioned in the list below represent the proportion of the total IOPS capacity of the Ceph OSD allocated for the service type. balanced : The default mClock profile is set to balanced because it represents a compromise between prioritizing client IO or recovery IO. It allocates equal reservation or priority to client operations and background recovery operations. Background best-effort operations are given lower reservation and therefore take longer to complete when there are competing operations. This profile meets the normal or steady state requirements of the cluster, which is the case when external client performance requirements are not critical and there are other background operations that still need attention within the OSD. There might be instances that necessitate giving higher priority to either client operations or recovery operations. To meet such requirements you can choose either the high_client_ops profile to prioritize client IO or the high_recovery_ops profile to prioritize recovery IO. These profiles are discussed further below. Service type: client Reservation 50% Limit MAX Weight 1 Service type: background recovery Reservation 50% Limit MAX Weight 1 Service type: background best-effort Reservation MIN Limit 90% Weight 1 high_client_ops This profile optimizes client performance over background activities by allocating more reservation and limit to client operations as compared to background operations in the Ceph OSD. This profile, for example, can be enabled to provide the needed performance for I/O intensive applications for a sustained period of time at the cost of slower recoveries. The list below shows the resource control parameters set by the profile: Service type: client Reservation 60% Limit MAX Weight 2 Service type: background recovery Reservation 40% Limit MAX Weight 1 Service type: background best-effort Reservation MIN Limit 70% Weight 1 high_recovery_ops This profile optimizes background recovery performance as compared to external clients and other background operations within the Ceph OSD. For example, it could be temporarily enabled by an administrator to accelerate background recoveries during non-peak hours. The list below shows the resource control parameters set by the profile: Service type: client Reservation 30% Limit MAX Weight 1 Service type: background recovery Reservation 70% Limit MAX Weight 2 Service type: background best-effort Reservation MIN Limit MAX Weight 1 Additional Resources See The mClock configuration options for more information about mClock configuration options. 10.6.2. Changing an mClock profile The default mClock profile is set to balanced . 
The other types of the built-in profile are high_client_ops and high_recovery_ops . Note The custom profile is not recommended unless you are an advanced user. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the Ceph Monitor host. Procedure Log into the Cephadm shell: Example Set the osd_mclock_profile option: Syntax Example This example changes the profile to allow faster recoveries on osd.0 . Note For optimal performance the profile must be set on all Ceph OSDs by using the following command: Syntax 10.6.3. Switching between built-in and custom profiles The following steps describe switching from built-in profile to custom profile and vice-versa. You might want to switch to the custom profile if you want complete control over all the mClock configuration options. However, it is recommended not to use the custom profile unless you are an advanced user. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the Ceph Monitor host. Switch from built-in profile to custom profile Log into the Cephadm shell: Example Switch to the custom profile: Syntax Example Note For optimal performance the profile must be set on all Ceph OSDs by using the following command: Example Optional: After switching to the custom profile, modify the desired mClock configuration options: Syntax Example This example changes the client reservation IOPS ratio for a specific OSD osd.0 to 0.5 (50%) Important Change the reservations of other services, such as background recovery and background best-effort accordingly to ensure that the sum of the reservations does not exceed the maximum proportion (1.0) of the IOPS capacity of the OSD. Switch from custom profile to built-in profile Log into the cephadm shell: Example Set the desired built-in profile: Syntax Example This example sets the built-in profile to high_client_ops on all Ceph OSDs. Determine the existing custom mClock configuration settings in the database: Example Remove the custom mClock configuration settings determined earlier: Syntax Example This example removes the configuration option osd_mclock_scheduler_client_res that was set on all Ceph OSDs. After all existing custom mClock configuration settings are removed from the central configuration database, the configuration settings related to high_client_ops are applied. Verify the settings on Ceph OSDs: Syntax Example Additional Resources See mClock profile types for the list of the mClock configuration options that cannot be modified with built-in profiles. 10.6.4. Switching temporarily between mClock profiles This section contains steps to temporarily switch between mClock profiles. Warning This section is for advanced users or for experimental testing. Do not use the below commands on a running storage cluster as it could have unexpected outcomes. Note The configuration changes on a Ceph OSD using the below commands are temporary and are lost when the Ceph OSD is restarted. Important The configuration options that are overridden using the commands described in this section cannot be modified further using the ceph config set osd. OSD_ID command. The changes do not take effect until a given Ceph OSD is restarted. This is intentional, as per the configuration subsystem design. However, any further modifications can still be made temporarily using these commands. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the Ceph Monitor host. 
Procedure Log into the Cephadm shell: Example Run the following command to override the mClock settings: Syntax Example This example overrides the osd_mclock_profile option on osd.0 . Optional: You can use the alternative to the ceph tell osd. OSD_ID injectargs command: Syntax Example Note The individual QoS related configuration options for the custom profile can also be modified temporarily using the above commands. 10.6.5. Degraded and Misplaced Object Recovery Rate With mClock Profiles Degraded object recovery is categorized into the background recovery bucket. Across all mClock profiles, degraded object recovery is given higher priority when compared to misplaced object recovery because degraded objects present a data safety issue not present with objects that are merely misplaced. Backfill or the misplaced object recovery operation is categorized into the background best-effort bucket. According to the balanced and high_client_ops mClock profiles, background best-effort client is not constrained by reservation (set to zero) but is limited to use a fraction of the participating OSD's capacity if there are no other competing services. Therefore, with the balanced or high_client_ops profile and with other background competing services active, backfilling rates are expected to be slower when compared to the WeightedPriorityQueue (WPQ) scheduler. If higher backfill rates are desired, please follow the steps mentioned in the section below. Improving backfilling rates For faster backfilling rate when using either balanced or high_client_ops profile, follow the below steps: Switch to the 'high_recovery_ops' mClock profile for the duration of the backfills. See Changing an mClock profile to achieve this. Once the backfilling phase is complete, switch the mClock profile to the previously active profile. In case there is no significant improvement in the backfilling rate with the 'high_recovery_ops' profile, continue to the next step. Switch the mClock profile back to the previously active profile. Modify 'osd_max_backfills' to a higher value, for example, 3 . See Modifying backfills and recovery options to achieve this. Once the backfilling is complete, 'osd_max_backfills' can be reset to the default value of 1 by following the same procedure mentioned in step 3. Warning Please note that modifying osd_max_backfills may affect other operations; for example, client operations may experience higher latency during the backfilling phase. Therefore, users are recommended to increase osd_max_backfills in small increments to minimize performance impact to other operations in the cluster. 10.6.6. Modifying backfills and recovery options Modify the backfills and recovery options with the ceph config set command. The backfill or recovery options that can be modified are listed in mClock profile types . Warning This section is for advanced users or for experimental testing. Do not use the below commands on a running storage cluster as it could have unexpected outcomes. Modify the values only for experimental testing, or if the cluster is unable to handle the values or it shows poor performance with the default settings. Important The modification of the mClock default backfill or recovery options is restricted by the osd_mclock_override_recovery_settings option, which is set to false by default. 
If you attempt to modify any default backfill or recovery options without setting osd_mclock_override_recovery_settings to true , it resets the options back to the mClock defaults along with a warning message logged in the cluster log. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the Ceph Monitor host. Procedure Log into the Cephadm shell: Example Set the osd_mclock_override_recovery_settings configuration option to true on all Ceph OSDs: Example Set the desired backfills or recovery option: Syntax Example Wait a few seconds and verify the configuration for the specific OSD: Syntax Example Reset the osd_mclock_override_recovery_settings configuration option to false on all OSDs: Example 10.7. The Ceph OSD capacity determination The Ceph OSD capacity in terms of total IOPS is determined automatically during the Ceph OSD initialization. This is achieved by running the Ceph OSD bench tool and overriding the default value of osd_mclock_max_capacity_iops_[hdd, ssd] option depending on the device type. No other action or input is expected from the user to set the Ceph OSD capacity. Mitigation of unrealistic Ceph OSD capacity from the automated procedure In certain conditions, the Ceph OSD bench tool might show unrealistic or inflated results depending on the drive configuration and other environment related conditions. To mitigate the performance impact due to this unrealistic capacity, a couple of threshold configuration options depending on the OSD device type are defined and used: osd_mclock_iops_capacity_threshold_hdd = 500 osd_mclock_iops_capacity_threshold_ssd = 80000 The following automated step is performed: Fallback to using default OSD capacity If the Ceph OSD bench tool reports a measurement that exceeds the above threshold values, the fallback mechanism reverts to the default value of osd_mclock_max_capacity_iops_hdd or osd_mclock_max_capacity_iops_ssd . The threshold configuration options can be reconfigured based on the type of drive used. A cluster warning is logged in case the measurement exceeds the threshold: Example Important If the default capacity does not accurately represent the Ceph OSD capacity, it is highly recommended to run a custom benchmark using the preferred tool, for example Fio, on the drive and then override the osd_mclock_max_capacity_iops_[hdd, ssd] option as described in Specifying maximum OSD capacity . Additional Resources See Manually benchmarking OSDs to manually benchmark Ceph OSDs or manually tune the BlueStore throttle parameters. See The mClock configuration options for more information about the osd_mclock_max_capacity_iops_[hdd, ssd] and osd_mclock_iops_capacity_threshold_[hdd, ssd] options. 10.7.1. Verifying the capacity of an OSD You can verify the capacity of a Ceph OSD after setting up the storage cluster. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the Ceph Monitor host. Procedure Log into the Cephadm shell: Example Verify the capacity of a Ceph OSD: Syntax Example 10.7.2. Manually benchmarking OSDs To manually benchmark a Ceph OSD, any existing benchmarking tool, for example Fio, can be used. Regardless of the tool or command used, the steps below remain the same. Important The number of shards and BlueStore throttle parameters have an impact on the mClock operation queues. Therefore, it is critical to set these values carefully in order to maximize the impact of the mclock scheduler. See Factors that impact mClock operation queues for more information about these values. 
Note The steps in this section are only necessary if you want to override the Ceph OSD capacity determined automatically during the OSD initialization. Note If you have already determined the benchmark data and wish to manually override the maximum OSD capacity for a Ceph OSD, skip to the Specifying maximum OSD capacity section. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the Ceph Monitor host. Procedure Log into the Cephadm shell: Example Benchmark a Ceph OSD: Syntax where: TOTAL_BYTES : Total number of bytes to write. BYTES_PER_WRITE : Block size per write. OBJ_SIZE : Bytes per object. NUM_OBJS : Number of objects to write. Example 10.7.3. Determining the correct BlueStore throttle values This optional section details the steps used to determine the correct BlueStore throttle values. The steps use the default shards. Important Before running the test, clear the caches to get an accurate measurement. Clear the OSD caches between each benchmark run using the following command: Syntax Example Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the Ceph Monitor node hosting the OSDs that you wish to benchmark. Procedure Log into the Cephadm shell: Example Run a simple 4KiB random write workload on an OSD: Syntax Example 1 The overall throughput obtained from the output of the osd bench command. This value is the baseline throughput, when the default BlueStore throttle options are in effect. Note the overall throughput, that is IOPS, obtained from the output of the command. If the intent is to determine the BlueStore throttle values for your environment, set bluestore_throttle_bytes and bluestore_throttle_deferred_bytes options to 32 KiB, that is, 32768 Bytes: Syntax Example Otherwise, you can skip to the section Specifying maximum OSD capacity . Run the 4KiB random write test as before using an OSD bench command: Example Notice the overall throughput from the output and compare the value against the baseline throughput recorded earlier. If the throughput does not match with the baseline, increase the BlueStore throttle options by multiplying by 2. Repeat the steps by running the 4KiB random write test, comparing the value against the baseline throughput, and increasing the BlueStore throttle options by multiplying by 2, until the obtained throughput is very close to the baseline value. Note For example, during benchmarking on a machine with NVMe SSDs, a value of 256 KiB for both BlueStore throttle and deferred bytes was determined to maximize the impact of mClock. For HDDs, the corresponding value was 40 MiB, where the overall throughput was roughly equal to the baseline throughput. In general for HDDs, the BlueStore throttle values are expected to be higher when compared to SSDs. 10.7.4. Specifying maximum OSD capacity You can override the maximum Ceph OSD capacity automatically set during OSD initialization. These steps are optional. Perform the following steps if the default capacity does not accurately represent the Ceph OSD capacity. Note Ensure that you determine the benchmark data first, as described in Manually benchmarking OSDs . Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the Ceph Monitor host. Procedure Log into the Cephadm shell: Example Set osd_mclock_max_capacity_iops_[hdd, ssd] option for an OSD: Syntax Example This example sets the maximum capacity for osd.0 , where an underlying device type is HDD, to 350 IOPS.
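Putting the benchmarking and override steps together, the following is a minimal end-to-end sketch. It assumes osd.0 is backed by an SSD, reuses the example bench output shown earlier (roughly 2186 IOPS), and omits the cache-drop and BlueStore throttle tuning covered in the previous sections:
# Benchmark the OSD with the same parameters as the earlier example
ceph tell osd.0 bench 12288000 4096 4194304 100
# Override the automatically determined capacity with the measured IOPS (rounded)
ceph config set osd.0 osd_mclock_max_capacity_iops_ssd 2186
# Confirm the value now in effect
ceph config show osd.0 osd_mclock_max_capacity_iops_ssd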
[ "cephadm shell", "ceph config set osd. OSD_ID osd_mclock_profile VALUE", "ceph config set osd.0 osd_mclock_profile high_recovery_ops", "ceph config set osd osd_mclock_profile VALUE", "cephadm shell", "ceph config set osd. OSD_ID osd_mclock_profile custom", "ceph config set osd.0 osd_mclock_profile custom", "ceph config set osd osd_mclock_profile custom", "ceph config set osd. OSD_ID MCLOCK_CONFIGURATION_OPTION VALUE", "ceph config set osd.0 osd_mclock_scheduler_client_res 0.5", "cephadm shell", "ceph config set osd osd_mclock_profile MCLOCK_PROFILE", "ceph config set osd osd_mclock_profile high_client_ops", "ceph config dump", "ceph config rm osd MCLOCK_CONFIGURATION_OPTION", "ceph config rm osd osd_mclock_scheduler_client_res", "ceph config show osd. OSD_ID", "ceph config show osd.0", "cephadm shell", "ceph tell osd. OSD_ID injectargs '-- MCLOCK_CONFIGURATION_OPTION = VALUE '", "ceph tell osd.0 injectargs '--osd_mclock_profile=high_recovery_ops'", "ceph daemon osd. OSD_ID config set MCLOCK_CONFIGURATION_OPTION VALUE", "ceph daemon osd.0 config set osd_mclock_profile high_recovery_ops", "cephadm shell", "ceph config set osd osd_mclock_override_recovery_settings true", "ceph config set osd OPTION VALUE", "ceph config set osd osd_max_backfills_ 5", "ceph config show osd. OSD_ID_ | grep OPTION", "ceph config show osd.0 | grep osd_max_backfills", "ceph config set osd osd_mclock_override_recovery_settings false", "2022-10-27T15:30:23.270+0000 7f9b5dbe95c0 0 log_channel(cluster) log [WRN] : OSD bench result of 39546.479392 IOPS exceeded the threshold limit of 25000.000000 IOPS for osd.1. IOPS capacity is unchanged at 21500.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].", "cephadm shell", "ceph config show osd. OSD_ID osd_mclock_max_capacity_iops_[hdd, ssd]", "ceph config show osd.0 osd_mclock_max_capacity_iops_ssd 21500.000000", "cephadm shell", "ceph tell osd. OSD_ID bench [ TOTAL_BYTES ] [ BYTES_PER_WRITE ] [ OBJ_SIZE ] [ NUM_OBJS ]", "ceph tell osd.0 bench 12288000 4096 4194304 100 { \"bytes_written\": 12288000, \"blocksize\": 4096, \"elapsed_sec\": 1.3718913019999999, \"bytes_per_sec\": 8956977.8466311768, \"iops\": 2186.7621695876896 }", "ceph tell osd. OSD_ID cache drop", "ceph tell osd.0 cache drop", "cephadm shell", "ceph tell osd. OSD_ID bench 12288000 4096 4194304 100", "ceph tell osd.0 bench 12288000 4096 4194304 100 { \"bytes_written\": 12288000, \"blocksize\": 4096, \"elapsed_sec\": 1.3718913019999999, \"bytes_per_sec\": 8956977.8466311768, \"iops\": 2186.7621695876896 1 }", "ceph config set osd. OSD_ID bluestore_throttle_bytes 32768 ceph config set osd. OSD_ID bluestore_throttle_deferred_bytes 32768", "ceph config set osd.0 bluestore_throttle_bytes 32768 ceph config set osd.0 bluestore_throttle_deferred_bytes 32768", "ceph tell osd.0 bench 12288000 4096 4194304 100", "cephadm shell", "ceph config set osd. OSD_ID osd_mclock_max_capacity_iops_[hdd,ssd] VALUE", "ceph config set osd.0 osd_mclock_max_capacity_iops_hdd 350" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/administration_guide/the-mclock-osd-scheduler
12.2. About Host Entry Configuration Properties
12.2. About Host Entry Configuration Properties A host entry can contain information about the host that is outside its system configuration, such as its physical location, MAC address, keys, and certificates. This information can be set when the host entry is created if it is created manually; otherwise, most of this information needs to be added to the host entry after the host is enrolled in the domain. Table 12.1. Host Configuration Properties UI Field Command-Line Option Description Description --desc = description A description of the host. Locality --locality = locality The geographic location of the host. Location --location = location The physical location of the host, such as its data center rack. Platform --platform = string The host hardware or architecture. Operating system --os = string The operating system and version for the host. MAC address --macaddress = address The MAC address for the host. This is a multi-valued attribute. The MAC address is used by the NIS plug-in to create a NIS ethers map for the host. SSH public keys --sshpubkey = string The full SSH public key for the host. This is a multi-valued attribute, so multiple keys can be set. Principal name (not editable) --principalname = principal The Kerberos principal name for the host. This defaults to the host name during the client installation, unless a different principal is explicitly set with the -p option. This can be changed using the command-line tools, but cannot be changed in the UI. Set One-Time Password --password = string Sets a password for the host which can be used in bulk enrollment. - --random Generates a random password to be used in bulk enrollment. - --certificate = string A certificate blob for the host. - --updatedns This sets whether the host can dynamically update its DNS entries if its IP address changes.
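Most of these properties can also be set from the command line after enrollment, for example with the ipa host-mod command and the options listed above. This is a minimal sketch; client1.example.com and the attribute values are hypothetical:
# Record the rack location, operating system, and MAC address for an enrolled host
ipa host-mod client1.example.com --location="Rack 2, Lab 3" --os="RHEL 7.9" --macaddress=00:1B:44:11:3A:B7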
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/host-attr
Chapter 12. Prometheus and Grafana metrics under Red Hat Quay
Chapter 12. Prometheus and Grafana metrics under Red Hat Quay Red Hat Quay exports a Prometheus - and Grafana-compatible endpoint on each instance to allow for easy monitoring and alerting. 12.1. Exposing the Prometheus endpoint 12.1.1. Standalone Red Hat Quay When using podman run to start the Quay container, expose the metrics port 9091 : The metrics will now be available: USD curl quay.example.com:9091/metrics See Monitoring Quay with Prometheus and Grafana for details on configuring Prometheus and Grafana to monitor Quay repository counts. 12.1.2. Red Hat Quay Operator Determine the cluster IP for the quay-metrics service: USD oc get services -n quay-enterprise NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE example-registry-clair-app ClusterIP 172.30.61.161 <none> 80/TCP,8089/TCP 18h example-registry-clair-postgres ClusterIP 172.30.122.136 <none> 5432/TCP 18h example-registry-quay-app ClusterIP 172.30.72.79 <none> 443/TCP,80/TCP,8081/TCP,55443/TCP 18h example-registry-quay-config-editor ClusterIP 172.30.185.61 <none> 80/TCP 18h example-registry-quay-database ClusterIP 172.30.114.192 <none> 5432/TCP 18h example-registry-quay-metrics ClusterIP 172.30.37.76 <none> 9091/TCP 18h example-registry-quay-redis ClusterIP 172.30.157.248 <none> 6379/TCP 18h Connect to your cluster and access the metrics using the cluster IP and port for the quay-metrics service: USD oc debug node/master-0 sh-4.4# curl 172.30.37.76:9091/metrics # HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles. # TYPE go_gc_duration_seconds summary go_gc_duration_seconds{quantile="0"} 4.0447e-05 go_gc_duration_seconds{quantile="0.25"} 6.2203e-05 ... 12.1.3. Setting up Prometheus to consume metrics Prometheus needs a way to access all Red Hat Quay instances running in a cluster. In the typical setup, this is done by listing all the Red Hat Quay instances in a single named DNS entry, which is then given to Prometheus. 12.1.4. DNS configuration under Kubernetes A simple Kubernetes service can be configured to provide the DNS entry for Prometheus. 12.1.5. DNS configuration for a manual cluster SkyDNS is a simple solution for managing this DNS record when not using Kubernetes. SkyDNS can run on an etcd cluster. Entries for each Red Hat Quay instance in the cluster can be added and removed in the etcd store. SkyDNS will regularly read them from there and update the list of Quay instances in the DNS record accordingly. 12.2. Introduction to metrics Red Hat Quay provides metrics to help monitor the registry, including metrics for general registry usage, uploads, downloads, garbage collection, and authentication. 12.2.1. General registry statistics General registry statistics can indicate how large the registry has grown. 
Metric name Description quay_user_rows Number of users in the database quay_robot_rows Number of robot accounts in the database quay_org_rows Number of organizations in the database quay_repository_rows Number of repositories in the database quay_security_scanning_unscanned_images_remaining_total Number of images that are not scanned by the latest security scanner Sample metrics output # HELP quay_user_rows number of users in the database # TYPE quay_user_rows gauge quay_user_rows{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="65",process_name="globalpromstats.py"} 3 # HELP quay_robot_rows number of robot accounts in the database # TYPE quay_robot_rows gauge quay_robot_rows{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="65",process_name="globalpromstats.py"} 2 # HELP quay_org_rows number of organizations in the database # TYPE quay_org_rows gauge quay_org_rows{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="65",process_name="globalpromstats.py"} 2 # HELP quay_repository_rows number of repositories in the database # TYPE quay_repository_rows gauge quay_repository_rows{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="65",process_name="globalpromstats.py"} 4 # HELP quay_security_scanning_unscanned_images_remaining number of images that are not scanned by the latest security scanner # TYPE quay_security_scanning_unscanned_images_remaining gauge quay_security_scanning_unscanned_images_remaining{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 5 12.2.2. Queue items The queue items metrics provide information on the multiple queues used by Quay for managing work. Metric name Description quay_queue_items_available Number of items in a specific queue quay_queue_items_locked Number of items that are running quay_queue_items_available_unlocked Number of items that are waiting to be processed Metric labels queue_name: The name of the queue. One of: exportactionlogs: Queued requests to export action logs. These logs are then processed and put in storage. A link is then sent to the requester via email. namespacegc: Queued namespaces to be garbage collected notification: Queue for repository notifications to be sent out repositorygc: Queued repositories to be garbage collected secscanv4: Notification queue specific for Clair V4 dockerfilebuild: Queue for Quay docker builds imagestoragereplication: Queued blob to be replicated across multiple storages chunk_cleanup: Queued blob segments that needs to be deleted. This is only used by some storage implementations, for example, Swift. For example, the queue labelled repositorygc contains the repositories marked for deletion by the repository garbage collection worker. For metrics with a queue_name label of repositorygc : quay_queue_items_locked is the number of repositories currently being deleted. quay_queue_items_available_unlocked is the number of repositories waiting to get processed by the worker. Sample metrics output # HELP quay_queue_items_available number of queue items that have not expired # TYPE quay_queue_items_available gauge quay_queue_items_available{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="63",process_name="exportactionlogsworker.py",queue_name="exportactionlogs"} 0 ... 
# HELP quay_queue_items_available_unlocked number of queue items that have not expired and are not locked # TYPE quay_queue_items_available_unlocked gauge quay_queue_items_available_unlocked{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="63",process_name="exportactionlogsworker.py",queue_name="exportactionlogs"} 0 ... # HELP quay_queue_items_locked number of queue items that have been acquired # TYPE quay_queue_items_locked gauge quay_queue_items_locked{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="63",process_name="exportactionlogsworker.py",queue_name="exportactionlogs"} 0 12.2.3. Garbage collection metrics These metrics show you how many resources have been removed from garbage collection (gc). They show many times the gc workers have run and how many namespaces, repositories, and blobs were removed. Metric name Description quay_gc_iterations_total Number of iterations by the GCWorker quay_gc_namespaces_purged_total Number of namespaces purged by the NamespaceGCWorker quay_gc_repos_purged_total Number of repositories purged by the RepositoryGCWorker or NamespaceGCWorker quay_gc_storage_blobs_deleted_total Number of storage blobs deleted Sample metrics output # TYPE quay_gc_iterations_created gauge quay_gc_iterations_created{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 1.6317823190189714e+09 ... # HELP quay_gc_iterations_total number of iterations by the GCWorker # TYPE quay_gc_iterations_total counter quay_gc_iterations_total{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 0 ... # TYPE quay_gc_namespaces_purged_created gauge quay_gc_namespaces_purged_created{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 1.6317823190189433e+09 ... # HELP quay_gc_namespaces_purged_total number of namespaces purged by the NamespaceGCWorker # TYPE quay_gc_namespaces_purged_total counter quay_gc_namespaces_purged_total{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 0 .... # TYPE quay_gc_repos_purged_created gauge quay_gc_repos_purged_created{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 1.631782319018925e+09 ... # HELP quay_gc_repos_purged_total number of repositories purged by the RepositoryGCWorker or NamespaceGCWorker # TYPE quay_gc_repos_purged_total counter quay_gc_repos_purged_total{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 0 ... # TYPE quay_gc_storage_blobs_deleted_created gauge quay_gc_storage_blobs_deleted_created{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 1.6317823190189059e+09 ... # HELP quay_gc_storage_blobs_deleted_total number of storage blobs deleted # TYPE quay_gc_storage_blobs_deleted_total counter quay_gc_storage_blobs_deleted_total{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 0 ... 12.2.3.1. Multipart uploads metrics The multipart uploads metrics show the number of blobs uploads to storage (S3, Rados, GoogleCloudStorage, RHOCS). These can help identify issues when Quay is unable to correctly upload blobs to storage. 
Metric name Description quay_multipart_uploads_started_total Number of multipart uploads to Quay storage that started quay_multipart_uploads_completed_total Number of multipart uploads to Quay storage that completed Sample metrics output # TYPE quay_multipart_uploads_completed_created gauge quay_multipart_uploads_completed_created{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 1.6317823308284895e+09 ... # HELP quay_multipart_uploads_completed_total number of multipart uploads to Quay storage that completed # TYPE quay_multipart_uploads_completed_total counter quay_multipart_uploads_completed_total{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 0 # TYPE quay_multipart_uploads_started_created gauge quay_multipart_uploads_started_created{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 1.6317823308284352e+09 ... # HELP quay_multipart_uploads_started_total number of multipart uploads to Quay storage that started # TYPE quay_multipart_uploads_started_total counter quay_multipart_uploads_started_total{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 0 ... 12.2.4. Image push / pull metrics A number of metrics are available related to pushing and pulling images. 12.2.4.1. Image pulls total Metric name Description quay_registry_image_pulls_total The number of images downloaded from the registry. Metric labels protocol: the registry protocol used (should always be v2) ref: ref used to pull - tag, manifest status: http return code of the request 12.2.4.2. Image bytes pulled Metric name Description quay_registry_image_pulled_estimated_bytes_total The number of bytes downloaded from the registry Metric labels protocol: the registry protocol used (should always be v2) 12.2.4.3. Image pushes total Metric name Description quay_registry_image_pushes_total The number of images uploaded from the registry. Metric labels protocol: the registry protocol used (should always be v2) pstatus: http return code of the request pmedia_type: the uploaded manifest type 12.2.4.4. Image bytes pushed Metric name Description quay_registry_image_pushed_bytes_total The number of bytes uploaded to the registry Sample metrics output # HELP quay_registry_image_pushed_bytes_total number of bytes pushed to the registry # TYPE quay_registry_image_pushed_bytes_total counter quay_registry_image_pushed_bytes_total{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="221",process_name="registry:application"} 0 ... 12.2.5. Authentication metrics The authentication metrics provide the number of authentication requests, labeled by type and whether it succeeded or not. For example, this metric could be used to monitor failed basic authentication requests. Metric name Description quay_authentication_attempts_total Number of authentication attempts across the registry and API Metric labels auth_kind: The type of auth used, including: basic oauth credentials success: true or false Sample metrics output # TYPE quay_authentication_attempts_created gauge quay_authentication_attempts_created{auth_kind="basic",host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="221",process_name="registry:application",success="True"} 1.6317843039374158e+09 ... 
# HELP quay_authentication_attempts_total number of authentication attempts across the registry and API # TYPE quay_authentication_attempts_total counter quay_authentication_attempts_total{auth_kind="basic",host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="221",process_name="registry:application",success="True"} 2 ...
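For a quick spot check outside of Prometheus, you can filter the raw endpoint output for a single metric family. This is a minimal sketch that reuses the example hostname and port from above:
# Show only the authentication attempt counters exposed by this Quay instance
curl -s quay.example.com:9091/metrics | grep quay_authentication_attempts_total
Within Prometheus itself, a rate() over this counter, filtered on the success label, is a natural starting point for alerting on failed authentication requests.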
[ "sudo podman run -d --rm -p 80:8080 -p 443:8443 -p 9091:9091 --name=quay -v USDQUAY/config:/conf/stack:Z -v USDQUAY/storage:/datastorage:Z registry.redhat.io/quay/quay-rhel8:v3.9.10", "curl quay.example.com:9091/metrics", "oc get services -n quay-enterprise NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE example-registry-clair-app ClusterIP 172.30.61.161 <none> 80/TCP,8089/TCP 18h example-registry-clair-postgres ClusterIP 172.30.122.136 <none> 5432/TCP 18h example-registry-quay-app ClusterIP 172.30.72.79 <none> 443/TCP,80/TCP,8081/TCP,55443/TCP 18h example-registry-quay-config-editor ClusterIP 172.30.185.61 <none> 80/TCP 18h example-registry-quay-database ClusterIP 172.30.114.192 <none> 5432/TCP 18h example-registry-quay-metrics ClusterIP 172.30.37.76 <none> 9091/TCP 18h example-registry-quay-redis ClusterIP 172.30.157.248 <none> 6379/TCP 18h", "oc debug node/master-0 sh-4.4# curl 172.30.37.76:9091/metrics HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles. TYPE go_gc_duration_seconds summary go_gc_duration_seconds{quantile=\"0\"} 4.0447e-05 go_gc_duration_seconds{quantile=\"0.25\"} 6.2203e-05", "HELP quay_user_rows number of users in the database TYPE quay_user_rows gauge quay_user_rows{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"65\",process_name=\"globalpromstats.py\"} 3 HELP quay_robot_rows number of robot accounts in the database TYPE quay_robot_rows gauge quay_robot_rows{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"65\",process_name=\"globalpromstats.py\"} 2 HELP quay_org_rows number of organizations in the database TYPE quay_org_rows gauge quay_org_rows{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"65\",process_name=\"globalpromstats.py\"} 2 HELP quay_repository_rows number of repositories in the database TYPE quay_repository_rows gauge quay_repository_rows{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"65\",process_name=\"globalpromstats.py\"} 4 HELP quay_security_scanning_unscanned_images_remaining number of images that are not scanned by the latest security scanner TYPE quay_security_scanning_unscanned_images_remaining gauge quay_security_scanning_unscanned_images_remaining{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 5", "HELP quay_queue_items_available number of queue items that have not expired TYPE quay_queue_items_available gauge quay_queue_items_available{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"63\",process_name=\"exportactionlogsworker.py\",queue_name=\"exportactionlogs\"} 0 HELP quay_queue_items_available_unlocked number of queue items that have not expired and are not locked TYPE quay_queue_items_available_unlocked gauge quay_queue_items_available_unlocked{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"63\",process_name=\"exportactionlogsworker.py\",queue_name=\"exportactionlogs\"} 0 HELP quay_queue_items_locked number of queue items that have been acquired TYPE quay_queue_items_locked gauge quay_queue_items_locked{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"63\",process_name=\"exportactionlogsworker.py\",queue_name=\"exportactionlogs\"} 0", "TYPE quay_gc_iterations_created gauge 
quay_gc_iterations_created{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 1.6317823190189714e+09 HELP quay_gc_iterations_total number of iterations by the GCWorker TYPE quay_gc_iterations_total counter quay_gc_iterations_total{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 0 TYPE quay_gc_namespaces_purged_created gauge quay_gc_namespaces_purged_created{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 1.6317823190189433e+09 HELP quay_gc_namespaces_purged_total number of namespaces purged by the NamespaceGCWorker TYPE quay_gc_namespaces_purged_total counter quay_gc_namespaces_purged_total{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 0 . TYPE quay_gc_repos_purged_created gauge quay_gc_repos_purged_created{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 1.631782319018925e+09 HELP quay_gc_repos_purged_total number of repositories purged by the RepositoryGCWorker or NamespaceGCWorker TYPE quay_gc_repos_purged_total counter quay_gc_repos_purged_total{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 0 TYPE quay_gc_storage_blobs_deleted_created gauge quay_gc_storage_blobs_deleted_created{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 1.6317823190189059e+09 HELP quay_gc_storage_blobs_deleted_total number of storage blobs deleted TYPE quay_gc_storage_blobs_deleted_total counter quay_gc_storage_blobs_deleted_total{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 0", "TYPE quay_multipart_uploads_completed_created gauge quay_multipart_uploads_completed_created{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 1.6317823308284895e+09 HELP quay_multipart_uploads_completed_total number of multipart uploads to Quay storage that completed TYPE quay_multipart_uploads_completed_total counter quay_multipart_uploads_completed_total{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 0 TYPE quay_multipart_uploads_started_created gauge quay_multipart_uploads_started_created{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 1.6317823308284352e+09 HELP quay_multipart_uploads_started_total number of multipart uploads to Quay storage that started TYPE quay_multipart_uploads_started_total counter quay_multipart_uploads_started_total{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 0", "HELP quay_registry_image_pushed_bytes_total number of bytes pushed to the registry TYPE quay_registry_image_pushed_bytes_total counter quay_registry_image_pushed_bytes_total{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"221\",process_name=\"registry:application\"} 0", "TYPE quay_authentication_attempts_created gauge 
quay_authentication_attempts_created{auth_kind=\"basic\",host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"221\",process_name=\"registry:application\",success=\"True\"} 1.6317843039374158e+09 HELP quay_authentication_attempts_total number of authentication attempts across the registry and API TYPE quay_authentication_attempts_total counter quay_authentication_attempts_total{auth_kind=\"basic\",host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"221\",process_name=\"registry:application\",success=\"True\"} 2" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.9/html/manage_red_hat_quay/prometheus-metrics-under-quay-enterprise
Preface
Preface Red Hat OpenShift Data Foundation supports deployment on existing Red Hat OpenShift Container Platform (RHOCP) using Red Hat OpenStack Platform clusters. Note Both internal and external OpenShift Data Foundation clusters are supported on Red Hat OpenStack Platform. See Planning your deployment for more information about deployment requirements. To deploy OpenShift Data Foundation, start with the requirements in Preparing to deploy OpenShift Data Foundation chapter and then follow the appropriate deployment process based on your requirement: Internal mode Deploying OpenShift Data Foundation on Red Hat OpenStack Platform in internal mode Deploy standalone Multicloud Object Gateway component External mode Deploying OpenShift Data Foundation on Red Hat OpenStack Platform in external mode
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/preface-ocs-osp
19.3. The Virtual Hardware Details Window
19.3. The Virtual Hardware Details Window The virtual hardware details window displays information about the virtual hardware configured for the guest. Virtual hardware resources can be added, removed, and modified in this window. To access the virtual hardware details window, click the icon in the toolbar. Figure 19.3. The virtual hardware details icon Clicking the icon displays the virtual hardware details window. Figure 19.4. The virtual hardware details window 19.3.1. Applying Boot Options to Guest Virtual Machines Using virt-manager you can select how the guest virtual machine will act on boot. The boot options will not take effect until the guest virtual machine reboots. You can either power down the virtual machine before making any changes, or you can reboot the machine afterwards. If you do not do either of these, the changes take effect the next time the guest reboots. Procedure 19.1. Configuring boot options From the Virtual Machine Manager Edit menu, select Virtual Machine Details . From the side panel, select Boot Options and then complete any or all of the following optional steps: To indicate that this guest virtual machine should start each time the host physical machine boots, select the Autostart check box. To indicate the order in which the guest virtual machine should boot, click the Enable boot menu check box. After this is checked, you can then check the devices you want to boot from and, using the arrow keys, change the order that the guest virtual machine will use when booting. If you want to boot directly from the Linux kernel, expand the Direct kernel boot menu. Fill in the Kernel path , Initrd path , and the Kernel arguments that you want to use. Click Apply . Figure 19.5. Configuring boot options 19.3.2. Attaching USB Devices to a Guest Virtual Machine Note In order to attach the USB device to the guest virtual machine, you first must attach it to the host physical machine and confirm that the device is working. If the guest is running, you need to shut it down before proceeding. Procedure 19.2. Attaching USB devices using Virt-Manager Open the guest virtual machine's Virtual Machine Details screen. Click Add Hardware . In the Add New Virtual Hardware popup, select USB Host Device , select the device you want to attach from the list, and click Finish . Figure 19.6. Add USB Device To use the USB device in the guest virtual machine, start the guest virtual machine. 19.3.3. USB Redirection USB redirection is best used in cases where there is a host physical machine that is running in a data center. The user connects to his/her guest virtual machine from a local machine or thin client. On this local machine there is a SPICE client. The user can attach any USB device to the thin client and the SPICE client will redirect the device to the host physical machine in the data center so it can be used by the VM running there. Procedure 19.3. Redirecting USB devices Open the guest virtual machine's Virtual Machine Details screen. Click Add Hardware . In the Add New Virtual Hardware popup, select USB Redirection . Make sure to select Spice channel from the Type drop-down menu and click Finish . Figure 19.7. Add New Virtual Hardware window Open the Virtual Machine menu and select Redirect USB device . A pop-up window opens with a list of USB devices. Figure 19.8. Select a USB device Select a USB device for redirection by checking its check box and click OK .
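If you manage the same guest from the command line, the Autostart behavior described above can also be toggled with virsh. This is a hedged sketch; guest1 is a hypothetical domain name:
# Start the domain automatically whenever the host boots
virsh autostart guest1
# Disable automatic start again
virsh autostart --disable guest1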
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-Managing_guests_with_the_Virtual_Machine_Manager_virt_manager-The_virtual_hardware_details_window
Chapter 5. Managing policies
Chapter 5. Managing policies As mentioned previously, policies define the conditions that must be satisfied before granting access to an object. Procedure Click the Policy tab to view all policies associated with a resource server. Policies On this tab, you can view the list of previously created policies as well as create and edit a policy. To create a new policy, select a policy type from the Create policy item list in the upper right corner. Details about each policy type are described in this section. 5.1. User-based policy You can use this type of policy to define conditions for your permissions where a set of one or more users is permitted to access an object. To create a new user-based policy, select User in the item list in the upper right corner of the policy listing. Add a User Policy 5.1.1. Configuration Name A human-readable and unique string identifying the policy. A best practice is to use names that are closely related to your business and security requirements, so you can identify them more easily. Description A string containing details about this policy. Users Specifies which users are given access by this policy. Logic The logic of this policy to apply after the other conditions have been evaluated. Additional resources Positive and negative logic 5.2. Role-based policy You can use this type of policy to define conditions for your permissions where a set of one or more roles is permitted to access an object. By default, roles added to this policy are not specified as required and the policy will grant access if the user requesting access has been granted any of these roles. However, you can specify a specific role as required if you want to enforce a specific role. You can also combine required and non-required roles, regardless of whether they are realm or client roles. Role policies can be useful when you need more restricted role-based access control (RBAC), where specific roles must be enforced to grant access to an object. For instance, you can enforce that a user must consent to allowing a client application (which is acting on the user's behalf) to access the user's resources. You can use Red Hat Single Sign-On Client Scope Mapping to enable consent pages or even enforce clients to explicitly provide a scope when obtaining access tokens from a Red Hat Single Sign-On server. To create a new role-based policy, select Role in the item list in the upper right corner of the policy listing. Add Role Policy 5.2.1. Configuration Name A human-readable and unique string describing the policy. A best practice is to use names that are closely related to your business and security requirements, so you can identify them more easily. Description A string containing details about this policy. Realm Roles Specifies which realm roles are permitted by this policy. Client Roles Specifies which client roles are permitted by this policy. To enable this field must first select a Client . Logic The logic of this policy to apply after the other conditions have been evaluated. Additional resources Positive and negative logic 5.2.2. Defining a role as required When creating a role-based policy, you can specify a specific role as Required . When you do that, the policy will grant access only if the user requesting access has been granted all the required roles. Both realm and client roles can be configured as such. Example of a required role To specify a role as required, select the Required checkbox for the role you want to configure as required. 
Required roles can be useful when your policy defines multiple roles but only a subset of them are mandatory. In this case, you can combine realm and client roles to enable an even more fine-grained role-based access control (RBAC) model for your application. For example, you can have policies specific for a client and require a specific client role associated with that client. Or you can enforce that access is granted only in the presence of a specific realm role. You can also combine both approaches within the same policy. 5.3. JavaScript-based policy Warning If your policy implementation is using Attribute based access control (ABAC) as in the examples below, then please make sure that users are not able to edit the protected attributes and the corresponding attributes are read-only. See the details in the Threat model mitigation chapter . You can use this type of policy to define conditions for your permissions using JavaScript. It is one of the rule-based policy types supported by Red Hat Single Sign-On, and provides flexibility to write any policy based on the Evaluation API . To create a new JavaScript-based policy, select JavaScript in the item list in the upper right corner of the policy listing. Note By default, JavaScript Policies can not be uploaded to the server. You should prefer deploying your JS Policies directly to the server as described in JavaScript Providers . 5.3.1. Creating a JS policy from a deployed JAR file Red Hat Single Sign-On allows you to deploy a JAR file in order to deploy scripts to the server. Please, take a look at JavaScript Providers for more details. Once you have your scripts deployed, you should be able to select the scripts you deployed from the list of available policy providers. 5.3.2. Examples 5.3.2.1. Checking for attributes from the evaluation context Here is a simple example of a JavaScript-based policy that uses attribute-based access control (ABAC) to define a condition based on an attribute obtained from the execution context: const context = USDevaluation.getContext(); const contextAttributes = context.getAttributes(); if (contextAttributes.containsValue('kc.client.network.ip_address', '127.0.0.1')) { USDevaluation.grant(); } 5.3.2.2. Checking for attributes from the current identity Here is a simple example of a JavaScript-based policy that uses attribute-based access control (ABAC) to define a condition based on an attribute obtained associated with the current identity: const context = USDevaluation.getContext(); const identity = context.getIdentity(); const attributes = identity.getAttributes(); const email = attributes.getValue('email').asString(0); if (email.endsWith('@keycloak.org')) { USDevaluation.grant(); } Where these attributes are mapped from whatever claim is defined in the token that was used in the authorization request. 5.3.2.3. Checking for roles granted to the current identity You can also use Role-Based Access Control (RBAC) in your policies. In the example below, we check if a user is granted with a keycloak_user realm role: const context = USDevaluation.getContext(); const identity = context.getIdentity(); if (identity.hasRealmRole('keycloak_user')) { USDevaluation.grant(); } Or you can check if a user is granted with a my-client-role client role, where my-client is the client id of the client application: const context = USDevaluation.getContext(); const identity = context.getIdentity(); if (identity.hasClientRole('my-client', 'my-client-role')) { USDevaluation.grant(); } 5.3.2.4. 
Checking for roles granted to an user To check for realm roles granted to an user: const realm = USDevaluation.getRealm(); if (realm.isUserInRealmRole('marta', 'role-a')) { USDevaluation.grant(); } Or for client roles granted to an user: const realm = USDevaluation.getRealm(); if (realm.isUserInClientRole('marta', 'my-client', 'some-client-role')) { USDevaluation.grant(); } 5.3.2.5. Checking for roles granted to a group To check for realm roles granted to a group: const realm = USDevaluation.getRealm(); if (realm.isGroupInRole('/Group A/Group D', 'role-a')) { USDevaluation.grant(); } 5.3.2.6. Pushing arbitrary claims to the resource server To push arbitrary claims to the resource server in order to provide additional information on how permissions should be enforced: const permission = USDevaluation.getPermission(); // decide if permission should be granted if (granted) { permission.addClaim('claim-a', 'claim-a'); permission.addClaim('claim-a', 'claim-a1'); permission.addClaim('claim-b', 'claim-b'); } 5.3.2.7. Checking for group membership const realm = USDevaluation.getRealm(); if (realm.isUserInGroup('marta', '/Group A/Group B')) { USDevaluation.grant(); } 5.3.2.8. Mixing different access control mechanisms You can also use a combination of several access control mechanisms. The example below shows how roles(RBAC) and claims/attributes(ABAC) checks can be used within the same policy. In this case we check if user is granted with admin role or has an e-mail from keycloak.org domain: const context = USDevaluation.getContext(); const identity = context.getIdentity(); const attributes = identity.getAttributes(); const email = attributes.getValue('email').asString(0); if (identity.hasRealmRole('admin') || email.endsWith('@keycloak.org')) { USDevaluation.grant(); } Note When writing your own rules, keep in mind that the USDevaluation object is an object implementing org.keycloak.authorization.policy.evaluation.Evaluation . For more information about what you can access from this interface, see the Evaluation API . 5.4. Time-based policy You can use this type of policy to define time conditions for your permissions. To create a new time-based policy, select Time in the item list in the upper right corner of the policy listing. Add Time Policy 5.4.1. Configuration Name A human-readable and unique string describing the policy. A best practice is to use names that are closely related to your business and security requirements, so you can identify them more easily. Description A string containing details about this policy. Not Before Defines the time before which access must not be granted. Permission is granted only if the current date/time is later than or equal to this value. Not On or After Defines the time after which access must not be granted. Permission is granted only if the current date/time is earlier than or equal to this value. Day of Month Defines the day of month that access must be granted. You can also specify a range of dates. In this case, permission is granted only if the current day of the month is between or equal to the two values specified. Month Defines the month that access must be granted. You can also specify a range of months. In this case, permission is granted only if the current month is between or equal to the two values specified. Year Defines the year that access must be granted. You can also specify a range of years. In this case, permission is granted only if the current year is between or equal to the two values specified. 
Hour Defines the hour that access must be granted. You can also specify a range of hours. In this case, permission is granted only if current hour is between or equal to the two values specified. Minute Defines the minute that access must be granted. You can also specify a range of minutes. In this case, permission is granted only if the current minute is between or equal to the two values specified. Logic The logic of this policy to apply after the other conditions have been evaluated. Access is only granted if all conditions are satisfied. Red Hat Single Sign-On will perform an AND based on the outcome of each condition. Additional resources Positive and negative logic 5.5. Aggregated policy As mentioned previously, Red Hat Single Sign-On allows you to build a policy of policies, a concept referred to as policy aggregation. You can use policy aggregation to reuse existing policies to build more complex ones and keep your permissions even more decoupled from the policies that are evaluated during the processing of authorization requests. To create a new aggregated policy, select Aggregated in the item list located in the right upper corner of the policy listing. Add an aggregated policy Let's suppose you have a resource called Confidential Resource that can be accessed only by users from the keycloak.org domain and from a certain range of IP addresses. You can create a single policy with both conditions. However, you want to reuse the domain part of this policy to apply to permissions that operates regardless of the originating network. You can create separate policies for both domain and network conditions and create a third policy based on the combination of these two policies. With an aggregated policy, you can freely combine other policies and then apply the new aggregated policy to any permission you want. Note When creating aggregated policies, be mindful that you are not introducing a circular reference or dependency between policies. If a circular dependency is detected, you cannot create or update the policy. 5.5.1. Configuration Name A human-readable and unique string describing the policy. We strongly suggest that you use names that are closely related with your business and security requirements, so you can identify them more easily and also know what they mean. Description A string with more details about this policy. Apply Policy Defines a set of one or more policies to associate with the aggregated policy. To associate a policy you can either select an existing policy or create a new one by selecting the type of the policy you want to create. Decision Strategy The decision strategy for this permission. Logic The logic of this policy to apply after the other conditions have been evaluated. Additional resources Positive and negative logic 5.5.2. Decision strategy for aggregated policies When creating aggregated policies, you can also define the decision strategy that will be used to determine the final decision based on the outcome from each policy. Unanimous The default strategy if none is provided. In this case, all policies must evaluate to a positive decision for the final decision to be also positive. Affirmative In this case, at least one policy must evaluate to a positive decision in order for the final decision to be also positive. Consensus In this case, the number of positive decisions must be greater than the number of negative decisions. If the number of positive and negative decisions is the same, the final decision will be negative. 5.6. 
Client-based policy You can use this type of policy to define conditions for your permissions where a set of one or more clients is permitted to access an object. To create a new client-based policy, select Client in the item list in the upper right corner of the policy listing. Add a Client Policy 5.6.1. Configuration Name A human-readable and unique string identifying the policy. A best practice is to use names that are closely related to your business and security requirements, so you can identify them more easily. Description A string containing details about this policy. Clients Specifies which clients are given access by this policy. Logic The logic of this policy to apply after the other conditions have been evaluated. Additional resources Positive and negative logic 5.7. Group-based policy You can use this type of policy to define conditions for your permissions where a set of one or more groups (and their hierarchies) is permitted to access an object. To create a new group-based policy, select Group in the item list in the upper right corner of the policy listing. Group Policy 5.7.1. Configuration Name A human-readable and unique string describing the policy. A best practice is to use names that are closely related to your business and security requirements, so you can identify them more easily. Description A string containing details about this policy. Groups Claim Specifies the name of the claim in the token holding the group names and/or paths. Usually, authorization requests are processed based on an ID Token or Access Token previously issued to a client acting on behalf of some user. If defined, the token must include a claim from where this policy is going to obtain the groups the user is a member of. If not defined, user's groups are obtained from your realm configuration. Groups Allows you to select the groups that should be enforced by this policy when evaluating permissions. After adding a group, you can extend access to children of the group by marking the checkbox Extend to Children . If left unmarked, access restrictions only applies to the selected group. Logic The logic of this policy to apply after the other conditions have been evaluated. Additional resources Positive and negative logic 5.7.2. Extending access to child groups By default, when you add a group to this policy, access restrictions will only apply to members of the selected group. Under some circumstances, it might be necessary to allow access not only to the group itself but to any child group in the hierarchy. For any group added you can mark a checkbox Extend to Children in order to extend access to child groups. Extending access to child groups In the example above, the policy is granting access for any user member of IT or any of its children. 5.8. Client scope-based policy You can use this type of policy to define conditions for your permissions where a set of one or more client scopes is permitted to access an object. By default, client scopes added to this policy are not specified as required and the policy will grant access if the client requesting access has been granted any of these client scopes. However, you can specify a specific client scope as required if you want to enforce a specific client scope. To create a new client scope-based policy, select Client Scope in the item list in the upper right corner of the policy listing. Add Client Scope Policy 5.8.1. Configuration Name A human-readable and unique string describing the policy. 
A best practice is to use names that are closely related to your business and security requirements, so you can identify them more easily. Description A string containing details about this policy. Client Scopes Specifies which client scopes are permitted by this policy. Logic The logic of this policy to apply after the other conditions have been evaluated. Additional resources Positive and negative logic 5.8.2. Defining a client scope as required When creating a client scope-based policy, you can specify a specific client scope as Required . When you do that, the policy will grant access only if the client requesting access has been granted all the required client scopes. Example of required client scope To specify a client scope as required, select the Required checkbox for the client scope you want to configure as required. Required client scopes can be useful when your policy defines multiple client scopes but only a subset of them are mandatory. 5.9. Regex-Based Policy You can use this type of policy to define regex conditions for your permissions. To create a new regex-based policy, select Regex in the item list in the upper right corner of the policy listing. Add Regex Policy 5.9.1. Configuration Name A human-readable and unique string describing the policy. A best practice is to use names that are closely related to your business and security requirements, so you can identify them more easily. Description A string containing details about this policy. Target Claim Specifies the name of the target claim in the token. Regex Pattern Specifies the regex pattern. Logic The Logic of this policy to apply after the other conditions have been evaluated. 5.10. Positive and negative logic Policies can be configured with positive or negative logic. Briefly, you can use this option to define whether the policy result should be kept as it is or be negated. For example, suppose you want to create a policy where only users not granted with a specific role should be given access. In this case, you can create a role-based policy using that role and set its Logic field to Negative . If you keep Positive , which is the default behavior, the policy result will be kept as it is. 5.11. Policy evaluation API When writing rule-based policies using JavaScript, Red Hat Single Sign-On provides an Evaluation API that provides useful information to help determine whether a permission should be granted. This API consists of a few interfaces that provide you access to information, such as The permission being evaluated, representing both the resource and scopes being requested. The attributes associated with the resource being requested Runtime environment and any other attribute associated with the execution context Information about users such as group membership and roles The main interface is org.keycloak.authorization.policy.evaluation.Evaluation , which defines the following contract: public interface Evaluation { /** * Returns the {@link ResourcePermission} to be evaluated. * * @return the permission to be evaluated */ ResourcePermission getPermission(); /** * Returns the {@link EvaluationContext}. Which provides access to the whole evaluation runtime context. * * @return the evaluation context */ EvaluationContext getContext(); /** * Returns a {@link Realm} that can be used by policies to query information. * * @return a {@link Realm} instance */ Realm getRealm(); /** * Grants the requested permission to the caller. */ void grant(); /** * Denies the requested permission. 
*/ void deny(); } When processing an authorization request, Red Hat Single Sign-On creates an Evaluation instance before evaluating any policy. This instance is then passed to each policy to determine whether access is GRANT or DENY . Policies determine this by invoking the grant() or deny() methods on an Evaluation instance. By default, the state of the Evaluation instance is denied, which means that your policies must explicitly invoke the grant() method to indicate to the policy evaluation engine that permission should be granted. Additional resources JavaDocs Documentation . 5.11.1. The evaluation context The evaluation context provides useful information to policies during their evaluation. public interface EvaluationContext { /** * Returns the {@link Identity} that represents an entity (person or non-person) to which the permissions must be granted, or not. * * @return the identity to which the permissions must be granted, or not */ Identity getIdentity(); /** * Returns all attributes within the current execution and runtime environment. * * @return the attributes within the current execution and runtime environment */ Attributes getAttributes(); } From this interface, policies can obtain: The authenticated Identity Information about the execution context and runtime environment The Identity is built based on the OAuth2 Access Token that was sent along with the authorization request, and this construct has access to all claims extracted from the original token. For example, if you are using a Protocol Mapper to include a custom claim in an OAuth2 Access Token you can also access this claim from a policy and use it to build your conditions. The EvaluationContext also gives you access to attributes related to both the execution and runtime environments. For now, there only a few built-in attributes. Table 5.1. Execution and Runtime Attributes Name Description Type kc.time.date_time Current date and time String. Format MM/dd/yyyy hh:mm:ss kc.client.network.ip_address IPv4 address of the client String kc.client.network.host Client's host name String kc.client.id The client id String kc.client.user_agent The value of the 'User-Agent' HTTP header String[] kc.realm.name The name of the realm String
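To see where this evaluation is actually triggered, the following is a hedged sketch of an authorization request sent to the token endpoint; submitting it causes the policies described in this chapter to be evaluated for the requested resource. The server URL, realm name ( myrealm ), resource server client ID ( my-resource-server ), resource and scope names, and the bearer token are all hypothetical placeholders, and the request assumes the standard UMA grant type supported by Red Hat Single Sign-On:
# Request permissions for a resource; the server evaluates the associated policies
# and, if they grant access, returns a token carrying the permissions.
curl -X POST https://sso.example.com/auth/realms/myrealm/protocol/openid-connect/token \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  --data "grant_type=urn:ietf:params:oauth:grant-type:uma-ticket" \
  --data "audience=my-resource-server" \
  --data-urlencode "permission=Confidential Resource#view"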
[ "const context = USDevaluation.getContext(); const contextAttributes = context.getAttributes(); if (contextAttributes.containsValue('kc.client.network.ip_address', '127.0.0.1')) { USDevaluation.grant(); }", "const context = USDevaluation.getContext(); const identity = context.getIdentity(); const attributes = identity.getAttributes(); const email = attributes.getValue('email').asString(0); if (email.endsWith('@keycloak.org')) { USDevaluation.grant(); }", "const context = USDevaluation.getContext(); const identity = context.getIdentity(); if (identity.hasRealmRole('keycloak_user')) { USDevaluation.grant(); }", "const context = USDevaluation.getContext(); const identity = context.getIdentity(); if (identity.hasClientRole('my-client', 'my-client-role')) { USDevaluation.grant(); }", "const realm = USDevaluation.getRealm(); if (realm.isUserInRealmRole('marta', 'role-a')) { USDevaluation.grant(); }", "const realm = USDevaluation.getRealm(); if (realm.isUserInClientRole('marta', 'my-client', 'some-client-role')) { USDevaluation.grant(); }", "const realm = USDevaluation.getRealm(); if (realm.isGroupInRole('/Group A/Group D', 'role-a')) { USDevaluation.grant(); }", "const permission = USDevaluation.getPermission(); // decide if permission should be granted if (granted) { permission.addClaim('claim-a', 'claim-a'); permission.addClaim('claim-a', 'claim-a1'); permission.addClaim('claim-b', 'claim-b'); }", "const realm = USDevaluation.getRealm(); if (realm.isUserInGroup('marta', '/Group A/Group B')) { USDevaluation.grant(); }", "const context = USDevaluation.getContext(); const identity = context.getIdentity(); const attributes = identity.getAttributes(); const email = attributes.getValue('email').asString(0); if (identity.hasRealmRole('admin') || email.endsWith('@keycloak.org')) { USDevaluation.grant(); }", "public interface Evaluation { /** * Returns the {@link ResourcePermission} to be evaluated. * * @return the permission to be evaluated */ ResourcePermission getPermission(); /** * Returns the {@link EvaluationContext}. Which provides access to the whole evaluation runtime context. * * @return the evaluation context */ EvaluationContext getContext(); /** * Returns a {@link Realm} that can be used by policies to query information. * * @return a {@link Realm} instance */ Realm getRealm(); /** * Grants the requested permission to the caller. */ void grant(); /** * Denies the requested permission. */ void deny(); }", "public interface EvaluationContext { /** * Returns the {@link Identity} that represents an entity (person or non-person) to which the permissions must be granted, or not. * * @return the identity to which the permissions must be granted, or not */ Identity getIdentity(); /** * Returns all attributes within the current execution and runtime environment. * * @return the attributes within the current execution and runtime environment */ Attributes getAttributes(); }" ]
https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.6/html/authorization_services_guide/policy_overview
Chapter 21. Ceph Sink
Chapter 21. Ceph Sink Upload data to a Ceph Bucket managed by an Object Storage Gateway. In the header, you can optionally set the file / ce-file property to specify the name of the file to upload. If you do not set the property in the header, the Kamelet uses the exchange ID for the file name. 21.1. Configuration Options The following table summarizes the configuration options available for the ceph-sink Kamelet: Property Name Description Type Default Example accessKey * Access Key The access key. string bucketName * Bucket Name The Ceph Bucket name. string cephUrl * Ceph Url Address Set the Ceph Object Storage Address Url. string "http://ceph-storage-address.com" secretKey * Secret Key The secret key. string zoneGroup * Bucket Zone Group The bucket zone group. string autoCreateBucket Autocreate Bucket Specifies whether to automatically create the bucket. boolean false keyName Key Name The key name for saving an element in the bucket. string Note Fields marked with an asterisk (*) are mandatory. 21.2. Dependencies At runtime, the ceph-sink Kamelet relies upon the presence of the following dependencies: camel:core camel:aws2-s3 camel:kamelet 21.3. Usage This section describes how you can use the ceph-sink . 21.3.1. Knative Sink You can use the ceph-sink Kamelet as a Knative sink by binding it to a Knative object. ceph-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: ceph-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: ceph-sink properties: accessKey: "The Access Key" bucketName: "The Bucket Name" cephUrl: "http://ceph-storage-address.com" secretKey: "The Secret Key" zoneGroup: "The Bucket Zone Group" 21.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 21.3.1.2. Procedure for using the cluster CLI Save the ceph-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f ceph-sink-binding.yaml 21.3.1.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind channel:mychannel ceph-sink -p "sink.accessKey=The Access Key" -p "sink.bucketName=The Bucket Name" -p "sink.cephUrl=http://ceph-storage-address.com" -p "sink.secretKey=The Secret Key" -p "sink.zoneGroup=The Bucket Zone Group" This command creates the KameletBinding in the current namespace on the cluster. 21.3.2. Kafka Sink You can use the ceph-sink Kamelet as a Kafka sink by binding it to a Kafka topic. ceph-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: ceph-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: ceph-sink properties: accessKey: "The Access Key" bucketName: "The Bucket Name" cephUrl: "http://ceph-storage-address.com" secretKey: "The Secret Key" zoneGroup: "The Bucket Zone Group" 21.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure that you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 21.3.2.2.
Procedure for using the cluster CLI Save the ceph-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f ceph-sink-binding.yaml 21.3.2.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic ceph-sink -p "sink.accessKey=The Access Key" -p "sink.bucketName=The Bucket Name" -p "sink.cephUrl=http://ceph-storage-address.com" -p "sink.secretKey=The Secret Key" -p "sink.zoneGroup=The Bucket Zone Group" This command creates the KameletBinding in the current namespace on the cluster. 21.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/ceph-sink.kamelet.yaml
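If you need to control the object key instead of relying on the exchange ID, one option is to set the file header in a Camel K integration before calling the Kamelet, as mentioned at the start of this chapter. The following integration is only a sketch, not taken from this guide: the timer source, the file name, and the connection values are placeholders, and in practice you would supply the credentials from a secret rather than inline.

apiVersion: camel.apache.org/v1
kind: Integration
metadata:
  name: ceph-sink-header-example
spec:
  flows:
    - from:
        # Illustrative source: emit one message per minute.
        uri: timer:tick?period=60000
        steps:
          - setBody:
              constant: "hello from camel"
          # Set the object key that the ceph-sink Kamelet uses for the upload.
          - setHeader:
              name: file
              constant: "greeting.txt"
          - to: "kamelet:ceph-sink?accessKey=myAccessKey&secretKey=mySecretKey&bucketName=my-bucket&cephUrl=http://ceph-storage-address.com&zoneGroup=default"

You can apply such an integration with oc apply -f, in the same way as the bindings above.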
[ "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: ceph-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: ceph-sink properties: accessKey: \"The Access Key\" bucketName: \"The Bucket Name\" cephUrl: \"http://ceph-storage-address.com\" secretKey: \"The Secret Key\" zoneGroup: \"The Bucket Zone Group\"", "apply -f ceph-sink-binding.yaml", "kamel bind channel:mychannel ceph-sink -p \"sink.accessKey=The Access Key\" -p \"sink.bucketName=The Bucket Name\" -p \"sink.cephUrl=http://ceph-storage-address.com\" -p \"sink.secretKey=The Secret Key\" -p \"sink.zoneGroup=The Bucket Zone Group\"", "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: ceph-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: ceph-sink properties: accessKey: \"The Access Key\" bucketName: \"The Bucket Name\" cephUrl: \"http://ceph-storage-address.com\" secretKey: \"The Secret Key\" zoneGroup: \"The Bucket Zone Group\"", "apply -f ceph-sink-binding.yaml", "kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic ceph-sink -p \"sink.accessKey=The Access Key\" -p \"sink.bucketName=The Bucket Name\" -p \"sink.cephUrl=http://ceph-storage-address.com\" -p \"sink.secretKey=The Secret Key\" -p \"sink.zoneGroup=The Bucket Zone Group\"" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.5/html/kamelets_reference/ceph-sink
Chapter 1. Red Hat Process Automation Manager components
Chapter 1. Red Hat Process Automation Manager components The product is made up of Business Central and KIE Server. Business Central is the graphical user interface where you create and manage business rules. You can install Business Central in a Red Hat JBoss EAP instance or on the Red Hat OpenShift Container Platform (OpenShift). Business Central is also available as a standalone JAR file. You can use the Business Central standalone JAR file to run Business Central without deploying it to an application server. KIE Server is the server where rules and other artifacts are executed. It is used to instantiate and execute rules and solve planning problems. You can install KIE Server in a Red Hat JBoss EAP instance, in a Red Hat JBoss EAP cluster, on OpenShift, in an Oracle WebLogic server instance, in an IBM WebSphere Application Server instance, or as a part of Spring Boot application. You can configure KIE Server to run in managed or unmanaged mode. If KIE Server is unmanaged, you must manually create and maintain KIE containers (deployment units). A KIE container is a specific version of a project. If KIE Server is managed, the Process Automation Manager controller manages the KIE Server configuration and you interact with the Process Automation Manager controller to create and maintain KIE containers.
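For example, on an unmanaged KIE Server you create and maintain KIE containers yourself, typically through the KIE Server REST API. The following curl call is a sketch only: the host, credentials, container ID, and Maven coordinates are placeholders, and the exact URL depends on how KIE Server is deployed.

# Deploy a KIE container on an unmanaged KIE Server (all values are illustrative placeholders).
curl -u 'kieserver:password1!' -X PUT \
  -H 'Content-Type: application/json' \
  -d '{
        "container-id": "my-container",
        "release-id": {
          "group-id": "com.example",
          "artifact-id": "my-project",
          "version": "1.0.0"
        }
      }' \
  'http://localhost:8080/kie-server/services/rest/server/containers/my-container'

On a managed KIE Server, you instead define the container in Business Central or through the Process Automation Manager controller, which pushes the configuration to the server.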
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/managing_red_hat_process_automation_manager_and_kie_server_settings/components-con_execution-server
Chapter 6. Support
Chapter 6. Support Red Hat and Microsoft are committed to providing excellent support for .NET and are working together to resolve any problems that occur on Red Hat supported platforms. At a high level, Red Hat supports the installation, configuration, and running of the .NET component in Red Hat Enterprise Linux (RHEL). Red Hat can also provide "commercially reasonable" support for issues we can help with, for example, NuGet access problems, permissions issues, firewalls, and application questions. If the issue is a defect or vulnerability in .NET, we actively work with Microsoft to resolve it. .NET 8.0 is supported on RHEL 8.9 and later, RHEL 9.3 and later, and Red Hat OpenShift Container Platform versions 3.3 and later. See .NET Core Life Cycle for information about the .NET support policy 6.1. Contact options There are a couple of ways you can get support, depending on how you are using .NET. If you are using .NET on-premises, you can contact either Red Hat Support or Microsoft directly. If you are using .NET in Microsoft Azure, you can contact either Red Hat Support or Azure Support to receive Integrated Support. Integrated Support is a collaborative support agreement between Red Hat and Microsoft. Customers using Red Hat products in Microsoft Azure are mutual customers, so both companies are united to provide the best troubleshooting and support experience possible. If you are using .NET on IBM Z, IBM LinuxONE, or IBM Power, you can contact Red Hat Support . If the Red Hat Support Engineer assigned to your case needs assistance from IBM, the Red Hat Support Engineer will collaborate with IBM directly without any action required from you. 6.2. Frequently asked questions Here are four of the most common support questions for Integrated Support. When do I access Integrated Support? You can engage Red Hat Support directly. If the Red Hat Support Engineer assigned to your case needs assistance from Microsoft, the Red Hat Support Engineer will collaborate with Microsoft directly without any action required from you. Likewise on the Microsoft side, they have a process for directly collaborating with Red Hat Support Engineers. What happens after I file a support case? Once the Red Hat support case has been created, a Red Hat Support Engineer will be assigned to the case and begin collaborating on the issue with you and your Microsoft Support Engineer. You should expect a response to the issue based on Red Hat's Production Support Service Level Agreement . What if I need further assistance? Contact Red Hat Support for assistance in creating your case or with any questions related to this process. You can view any of your open cases here. How do I engage Microsoft for support for an Azure platform issue? If you have support from Microsoft, you can open a case using whatever process you typically would follow. If you do not have support with Microsoft, you can always get support from Microsoft Support . 6.3. Additional support resources The Resources page at Red Hat Developers provides a wealth of information, including: Getting started documents Knowledgebase articles and solutions Blog posts .NET documentation is hosted on a Microsoft website. Here are some additional topics to explore: .NET ASP.NET Core C# F# Visual Basic You can also see more support policy information at Red Hat and Microsoft Azure Certified Cloud & Service Provider Support Policies .
null
https://docs.redhat.com/en/documentation/net/8.0/html/release_notes_for_.net_8.0_rpm_packages/support_release-notes-for-dotnet-rpms
Chapter 113. KafkaUserTemplate schema reference
Chapter 113. KafkaUserTemplate schema reference Used in: KafkaUserSpec Full list of KafkaUserTemplate schema properties Specify additional labels and annotations for the secret created by the User Operator. An example showing the KafkaUserTemplate apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: authentication: type: tls template: secret: metadata: labels: label1: value1 annotations: anno1: value1 # ... 113.1. KafkaUserTemplate schema properties Property Description secret Template for KafkaUser resources. The template allows users to specify how the Secret with password or TLS certificates is generated. ResourceTemplate
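To make the effect of the template concrete, the Secret that the User Operator generates for the user above would be expected to carry the extra metadata, roughly along the following lines. This is a sketch only: operator-managed labels and most of the generated data keys are omitted, and the exact contents depend on the authentication type.

apiVersion: v1
kind: Secret
metadata:
  name: my-user
  labels:
    label1: value1      # from template.secret.metadata.labels
  annotations:
    anno1: value1       # from template.secret.metadata.annotations
data:
  # TLS credentials generated for the user (abbreviated placeholders)
  user.crt: <base64-encoded client certificate>
  user.key: <base64-encoded client key>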
[ "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: authentication: type: tls template: secret: metadata: labels: label1: value1 annotations: anno1: value1 #" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-kafkausertemplate-reference
Chapter 12. Installing a cluster on Azure in a restricted network
Chapter 12. Installing a cluster on Azure in a restricted network In OpenShift Container Platform version 4.16, you can install a cluster on Microsoft Azure in a restricted network by creating an internal mirror of the installation release content on an existing Azure Virtual Network (VNet). Important You can install an OpenShift Container Platform cluster by using mirrored installation release content, but your cluster requires internet access to use the Azure APIs. 12.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an Azure account to host the cluster and determined the tested and validated region to deploy the cluster. You mirrored the images for a disconnected installation to your registry and obtained the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. You have an existing VNet in Azure. While installing a cluster in a restricted network that uses installer-provisioned infrastructure, you cannot use the installer-provisioned VNet. You must use a user-provisioned VNet that satisfies one of the following requirements: The VNet contains the mirror registry The VNet has firewall rules or a peering connection to access the mirror registry hosted elsewhere If you use a firewall, you configured it to allow the sites that your cluster requires access to. If you use customer-managed encryption keys, you prepared your Azure environment for encryption . 12.2. About installations in restricted networks In OpenShift Container Platform 4.16, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. 12.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 12.2.2. User-defined outbound routing In OpenShift Container Platform, you can choose your own outbound routing for a cluster to connect to the internet. This allows you to skip the creation of public IP addresses and the public load balancer. You can configure user-defined routing by modifying parameters in the install-config.yaml file before installing your cluster. 
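For orientation, the fields involved in user-defined routing look roughly like the following install-config.yaml fragment; the region, resource group, network, and subnet names are placeholders, and the fully annotated sample later in this chapter shows the same fields in context.

platform:
  azure:
    region: centralus
    networkResourceGroupName: vnet_resource_group
    virtualNetwork: vnet
    controlPlaneSubnet: control_plane_subnet
    computeSubnet: compute_subnet
    outboundType: UserDefinedRouting
publish: Internal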
A pre-existing VNet is required to use outbound routing when installing a cluster; the installation program is not responsible for configuring this. When configuring a cluster to use user-defined routing, the installation program does not create the following resources: Outbound rules for access to the internet. Public IPs for the public load balancer. Kubernetes Service object to add the cluster machines to the public load balancer for outbound requests. You must ensure the following items are available before setting user-defined routing: Egress to the internet is possible to pull container images, unless using an OpenShift image registry mirror. The cluster can access Azure APIs. Various allowlist endpoints are configured. You can reference these endpoints in the Configuring your firewall section. There are several pre-existing networking setups that are supported for internet access using user-defined routing. Restricted cluster with Azure Firewall You can use Azure Firewall to restrict the outbound routing for the Virtual Network (VNet) that is used to install the OpenShift Container Platform cluster. For more information, see providing user-defined routing with Azure Firewall . You can create an OpenShift Container Platform cluster in a restricted network by using a VNet with Azure Firewall and configuring user-defined routing. Important If you are using Azure Firewall for restricting internet access, you must set the publish field to Internal in the install-config.yaml file. This is because Azure Firewall does not work properly with Azure public load balancers . 12.3. About reusing a VNet for your OpenShift Container Platform cluster In OpenShift Container Platform 4.16, you can deploy a cluster into an existing Azure Virtual Network (VNet) in Microsoft Azure. If you do, you must also use existing subnets within the VNet and routing rules. By deploying OpenShift Container Platform into an existing Azure VNet, you might be able to avoid service limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. This is a good option to use if you cannot obtain the infrastructure creation permissions that are required to create the VNet. 12.3.1. Requirements for using your VNet When you deploy a cluster by using an existing VNet, you must perform additional network configuration before you install the cluster. In installer-provisioned infrastructure clusters, the installer usually creates the following components, but it does not create them when you install into an existing VNet: Subnets Route tables VNets Network Security Groups Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. If you use a custom VNet, you must correctly configure it and its subnets for the installation program and the cluster to use. The installation program cannot subdivide network ranges for the cluster to use, set route tables for the subnets, or set VNet options like DHCP, so you must do so before you install the cluster. The cluster must be able to access the resource group that contains the existing VNet and subnets. While all of the resources that the cluster creates are placed in a separate resource group that it creates, some network resources are used from a separate group. Some cluster Operators must be able to access resources in both resource groups.
For example, the Machine API controller attaches NICS for the virtual machines that it creates to subnets from the networking resource group. Your VNet must meet the following characteristics: The VNet's CIDR block must contain the Networking.MachineCIDR range, which is the IP address pool for cluster machines. The VNet and its subnets must belong to the same resource group, and the subnets must be configured to use Azure-assigned DHCP IP addresses instead of static IP addresses. You must provide two subnets within your VNet, one for the control plane machines and one for the compute machines. Because Azure distributes machines in different availability zones within the region that you specify, your cluster will have high availability by default. Note By default, if you specify availability zones in the install-config.yaml file, the installation program distributes the control plane machines and the compute machines across these availability zones within a region . To ensure high availability for your cluster, select a region with at least three availability zones. If your region contains fewer than three availability zones, the installation program places more than one control plane machine in the available zones. To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the specified subnets exist. There are two private subnets, one for the control plane machines and one for the compute machines. The subnet CIDRs belong to the machine CIDR that you specified. Machines are not provisioned in availability zones that you do not provide private subnets for. If required, the installation program creates public load balancers that manage the control plane and worker nodes, and Azure allocates a public IP address to them. Note If you destroy a cluster that uses an existing VNet, the VNet is not deleted. 12.3.1.1. Network security group requirements The network security groups for the subnets that host the compute and control plane machines require specific access to ensure that the cluster communication is correct. You must create rules to allow access to the required cluster communication ports. Important The network security group rules must be in place before you install the cluster. If you attempt to install a cluster without the required access, the installation program cannot reach the Azure APIs, and installation fails. Table 12.1. Required ports Port Description Control plane Compute 80 Allows HTTP traffic x 443 Allows HTTPS traffic x 6443 Allows communication to the control plane machines x 22623 Allows internal communication to the machine config server for provisioning machines x * Allows connections to Azure APIs. You must set a Destination Service Tag to AzureCloud . [1] x x * Denies connections to the internet. You must set a Destination Service Tag to Internet . [1] x x If you are using Azure Firewall to restrict the internet access, then you can configure Azure Firewall to allow the Azure APIs . A network security group rule is not needed. Important Currently, there is no supported way to block or restrict the machine config server endpoint. The machine config server must be exposed to the network so that newly-provisioned machines, which have no existing configuration or state, are able to fetch their configuration. In this model, the root of trust is the certificate signing requests (CSR) endpoint, which is where the kubelet sends its certificate signing request for approval to join the cluster. 
Because of this, machine configs should not be used to distribute sensitive information, such as secrets and certificates. To ensure that the machine config server endpoints, ports 22623 and 22624, are secured in bare metal scenarios, customers must configure proper network policies. Because cluster components do not modify the user-provided network security groups, which the Kubernetes controllers update, a pseudo-network security group is created for the Kubernetes controller to modify without impacting the rest of the environment. Table 12.2. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If you configure an external NTP time server, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 12.3. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports Additional resources About the OpenShift SDN network plugin Configuring your firewall 12.3.2. Division of permissions Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, storage, and load balancers, but not networking-related components such as VNets, subnet, or ingress rules. The Azure credentials that you use when you create your cluster do not need the networking permissions that are required to make VNets and core networking components within the VNet, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as load balancers, security groups, storage accounts, and nodes. 12.3.3. Isolation between clusters Because the cluster is unable to modify network security groups in an existing subnet, there is no way to isolate clusters from each other on the VNet. 12.4. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.16, you require access to the internet to obtain the images that are necessary to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. 12.5. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. 
The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 12.6. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Microsoft Azure. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. You have the imageContentSources values that were generated during mirror registry creation. You have obtained the contents of the certificate for your mirror registry. 
You have retrieved a Red Hat Enterprise Linux CoreOS (RHCOS) image and uploaded it to an accessible location. You have an Azure subscription ID and tenant ID. If you are installing the cluster using a service principal, you have its application ID and password. If you are installing the cluster using a system-assigned managed identity, you have enabled it on the virtual machine that you will run the installation program from. If you are installing the cluster using a user-assigned managed identity, you have met these prerequisites: You have its client ID. You have assigned it to the virtual machine that you will run the installation program from. Procedure Optional: If you have run the installation program on this computer before, and want to use an alternative service principal or managed identity, go to the ~/.azure/ directory and delete the osServicePrincipal.json configuration file. Deleting this file prevents the installation program from automatically reusing subscription and authentication values from a installation. Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select azure as the platform to target. If the installation program cannot locate the osServicePrincipal.json configuration file from a installation, you are prompted for Azure subscription and authentication values. Enter the following Azure parameter values for your subscription: azure subscription id : Enter the subscription ID to use for the cluster. azure tenant id : Enter the tenant ID. Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client id : If you are using a service principal, enter its application ID. If you are using a system-assigned managed identity, leave this value blank. If you are using a user-assigned managed identity, specify its client ID. Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client secret : If you are using a service principal, enter its password. If you are using a system-assigned managed identity, leave this value blank. If you are using a user-assigned managed identity, leave this value blank. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. 
The base domain corresponds to the Azure DNS Zone that you created for your cluster. Enter a descriptive name for your cluster. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. Paste the pull secret from Red Hat OpenShift Cluster Manager . Edit the install-config.yaml file to give the additional information that is required for an installation in a restricted network. Update the pullSecret value to contain the authentication information for your registry: pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "[email protected]"}}}' For <mirror_host_name> , specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials> , specify the base64-encoded user name and password for your mirror registry. Add the additionalTrustBundle parameter and value. additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry. Define the network and subnets for the VNet to install the cluster under the platform.azure field: networkResourceGroupName: <vnet_resource_group> 1 virtualNetwork: <vnet> 2 controlPlaneSubnet: <control_plane_subnet> 3 computeSubnet: <compute_subnet> 4 1 Replace <vnet_resource_group> with the resource group name that contains the existing virtual network (VNet). 2 Replace <vnet> with the existing virtual network name. 3 Replace <control_plane_subnet> with the existing subnet name to deploy the control plane machines. 4 Replace <compute_subnet> with the existing subnet name to deploy compute machines. Add the image content resources, which resemble the following YAML excerpt: imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release For these values, use the imageContentSources that you recorded during mirror registry creation. Optional: Set the publishing strategy to Internal : publish: Internal By setting this option, you create an internal Ingress Controller and a private load balancer. Important Azure Firewall does not work seamlessly with Azure Public Load balancers. Thus, when using Azure Firewall for restricting internet access, the publish field in install-config.yaml should be set to Internal . Make any other modifications to the install-config.yaml file that you require. For more information about the parameters, see "Installation configuration parameters". Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. If previously not detected, the installation program creates an osServicePrincipal.json configuration file and stores this file in the ~/.azure/ directory on your computer. 
This ensures that the installation program can load the profile when it is creating an OpenShift Container Platform cluster on the target platform. Additional resources Installation configuration parameters for Azure 12.6.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 12.4. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). Important You are required to use Azure virtual machines that have the premiumIO parameter set to true . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. 12.6.2. Tested instance types for Azure The following Microsoft Azure instance types have been tested with OpenShift Container Platform. Example 12.1. 
Machine types based on 64-bit x86 architecture standardBSFamily standardBsv2Family standardDADSv5Family standardDASv4Family standardDASv5Family standardDCACCV5Family standardDCADCCV5Family standardDCADSv5Family standardDCASv5Family standardDCSv3Family standardDCSv2Family standardDDCSv3Family standardDDSv4Family standardDDSv5Family standardDLDSv5Family standardDLSv5Family standardDSFamily standardDSv2Family standardDSv2PromoFamily standardDSv3Family standardDSv4Family standardDSv5Family standardEADSv5Family standardEASv4Family standardEASv5Family standardEBDSv5Family standardEBSv5Family standardECACCV5Family standardECADCCV5Family standardECADSv5Family standardECASv5Family standardEDSv4Family standardEDSv5Family standardEIADSv5Family standardEIASv4Family standardEIASv5Family standardEIBDSv5Family standardEIBSv5Family standardEIDSv5Family standardEISv3Family standardEISv5Family standardESv3Family standardESv4Family standardESv5Family standardFXMDVSFamily standardFSFamily standardFSv2Family standardGSFamily standardHBrsv2Family standardHBSFamily standardHBv4Family standardHCSFamily standardHXFamily standardLASv3Family standardLSFamily standardLSv2Family standardLSv3Family standardMDSMediumMemoryv2Family standardMDSMediumMemoryv3Family standardMIDSMediumMemoryv2Family standardMISMediumMemoryv2Family standardMSFamily standardMSMediumMemoryv2Family standardMSMediumMemoryv3Family StandardNCADSA100v4Family Standard NCASv3_T4 Family standardNCSv3Family standardNDSv2Family standardNPSFamily StandardNVADSA10v5Family standardNVSv3Family standardXEISv4Family 12.6.3. Tested instance types for Azure on 64-bit ARM infrastructures The following Microsoft Azure ARM64 instance types have been tested with OpenShift Container Platform. Example 12.2. Machine types based on 64-bit ARM architecture standardBpsv2Family standardDPSv5Family standardDPDSv5Family standardDPLDSv5Family standardDPLSv5Family standardEPSv5Family standardEPDSv5Family 12.6.4. Enabling trusted launch for Azure VMs You can enable two trusted launch features when installing your cluster on Azure: secure boot and virtualized Trusted Platform Modules . See the Azure documentation about virtual machine sizes to learn what sizes of virtual machines support these features. Important Trusted launch is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Prerequisites You have created an install-config.yaml file. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add the following stanza: controlPlane: 1 platform: azure: settings: securityType: TrustedLaunch 2 trustedLaunch: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 1 Specify controlPlane.platform.azure or compute.platform.azure to enable trusted launch on only control plane or compute nodes respectively. Specify platform.azure.defaultMachinePlatform to enable trusted launch on all nodes. 2 Enable trusted launch features. 3 Enable secure boot. For more information, see the Azure documentation about secure boot . 
4 Enable the virtualized Trusted Platform Module. For more information, see the Azure documentation about virtualized Trusted Platform Modules . 12.6.5. Enabling confidential VMs You can enable confidential VMs when installing your cluster. You can enable confidential VMs for compute nodes, control plane nodes, or all nodes. Important Using confidential VMs is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can use confidential VMs with the following VM sizes: DCasv5-series DCadsv5-series ECasv5-series ECadsv5-series Important Confidential VMs are currently not supported on 64-bit ARM architectures. Prerequisites You have created an install-config.yaml file. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add the following stanza: controlPlane: 1 platform: azure: settings: securityType: ConfidentialVM 2 confidentialVM: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 osDisk: securityProfile: securityEncryptionType: VMGuestStateOnly 5 1 Specify controlPlane.platform.azure or compute.platform.azure to deploy confidential VMs on only control plane or compute nodes respectively. Specify platform.azure.defaultMachinePlatform to deploy confidential VMs on all nodes. 2 Enable confidential VMs. 3 Enable secure boot. For more information, see the Azure documentation about secure boot . 4 Enable the virtualized Trusted Platform Module. For more information, see the Azure documentation about virtualized Trusted Platform Modules . 5 Specify VMGuestStateOnly to encrypt the VM guest state. 12.6.6. Sample customized install-config.yaml file for Azure You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. 
apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version zones: 9 - "1" - "2" - "3" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: osImage: 12 publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 13 region: centralus 14 resourceGroupName: existing_resource_group 15 networkResourceGroupName: vnet_resource_group 16 virtualNetwork: vnet 17 controlPlaneSubnet: control_plane_subnet 18 computeSubnet: compute_subnet 19 outboundType: UserDefinedRouting 20 cloudName: AzurePublicCloud pullSecret: '{"auths": ...}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 23 additionalTrustBundle: | 24 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- imageContentSources: 25 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev publish: Internal 26 1 10 14 21 Required. The installation program prompts you for this value. 2 6 If you do not provide these parameters and values, the installation program provides the default value. 3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3 , for your machines if you disable simultaneous multithreading. 5 8 You can specify the size of the disk to use in GB. Minimum recommendation for control plane nodes is 1024 GB. 9 Specify a list of zones to deploy your machines to. For high availability, specify at least two zones. 
11 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 12 Optional: A custom Red Hat Enterprise Linux CoreOS (RHCOS) image that should be used to boot control plane and compute machines. The publisher , offer , sku , and version parameters under platform.azure.defaultMachinePlatform.osImage apply to both control plane and compute machines. If the parameters under controlPlane.platform.azure.osImage or compute.platform.azure.osImage are set, they override the platform.azure.defaultMachinePlatform.osImage parameters. 13 Specify the name of the resource group that contains the DNS zone for your base domain. 15 Specify the name of an already existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster. 16 If you use an existing VNet, specify the name of the resource group that contains it. 17 If you use an existing VNet, specify its name. 18 If you use an existing VNet, specify the name of the subnet to host the control plane machines. 19 If you use an existing VNet, specify the name of the subnet to host the compute machines. 20 When using Azure Firewall to restrict Internet access, you must configure outbound routing to send traffic through the Azure Firewall. Configuring user-defined routing prevents exposing external endpoints in your cluster. 22 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 23 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 24 Provide the contents of the certificate file that you used for your mirror registry. 25 Provide the imageContentSources section from the output of the command to mirror the repository. 26 How to publish the user-facing endpoints of your cluster. When using Azure Firewall to restrict Internet access, set publish to Internal to deploy a private cluster. The user-facing endpoints then cannot be accessed from the internet. The default value is External . 12.6.7. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. 
By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 12.7. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. 
Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.16. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.16 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 12.8. Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring an Azure cluster to use short-term credentials . 12.8.1. Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... 
If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 12.8.2. Configuring an Azure cluster to use short-term credentials To install a cluster that uses Microsoft Entra Workload ID, you must configure the Cloud Credential Operator utility and create the required Azure resources for your cluster. 12.8.2.1. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. 
Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). You have created a global Microsoft Azure account for the ccoctl utility to use with the following permissions: Example 12.3. Required Azure permissions Microsoft.Resources/subscriptions/resourceGroups/read Microsoft.Resources/subscriptions/resourceGroups/write Microsoft.Resources/subscriptions/resourceGroups/delete Microsoft.Authorization/roleAssignments/read Microsoft.Authorization/roleAssignments/delete Microsoft.Authorization/roleAssignments/write Microsoft.Authorization/roleDefinitions/read Microsoft.Authorization/roleDefinitions/write Microsoft.Authorization/roleDefinitions/delete Microsoft.Storage/storageAccounts/listkeys/action Microsoft.Storage/storageAccounts/delete Microsoft.Storage/storageAccounts/read Microsoft.Storage/storageAccounts/write Microsoft.Storage/storageAccounts/blobServices/containers/write Microsoft.Storage/storageAccounts/blobServices/containers/delete Microsoft.Storage/storageAccounts/blobServices/containers/read Microsoft.ManagedIdentity/userAssignedIdentities/delete Microsoft.ManagedIdentity/userAssignedIdentities/read Microsoft.ManagedIdentity/userAssignedIdentities/write Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/read Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/write Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/delete Microsoft.Storage/register/action Microsoft.ManagedIdentity/register/action Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE \ --file="/usr/bin/ccoctl.<rhel_version>" \ 1 -a ~/.pull-secret 1 For <rhel_version> , specify the value that corresponds to the version of Red Hat Enterprise Linux (RHEL) that the host uses. If no value is specified, ccoctl.rhel8 is used by default. The following values are valid: rhel8 : Specify this value for hosts that use RHEL 8. rhel9 : Specify this value for hosts that use RHEL 9. Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl.<rhel_version> Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 12.8.2.2. 
Creating Azure resources with the Cloud Credential Operator utility You can use the ccoctl azure create-all command to automate the creation of Azure resources. Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. Access to your Microsoft Azure account by using the Azure CLI. Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. To enable the ccoctl utility to detect your Azure credentials automatically, log in to the Azure CLI by running the following command: USD az login Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl azure create-all \ --name=<azure_infra_name> \ 1 --output-dir=<ccoctl_output_dir> \ 2 --region=<azure_region> \ 3 --subscription-id=<azure_subscription_id> \ 4 --credentials-requests-dir=<path_to_credentials_requests_directory> \ 5 --dnszone-resource-group-name=<azure_dns_zone_resource_group_name> \ 6 --tenant-id=<azure_tenant_id> 7 1 Specify the user-defined name for all created Azure resources used for tracking. 2 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 3 Specify the Azure region in which cloud resources will be created. 4 Specify the Azure subscription ID to use. 5 Specify the directory containing the files for the component CredentialsRequest objects. 6 Specify the name of the resource group containing the cluster's base domain Azure DNS zone. 7 Specify the Azure tenant ID to use. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. To see additional optional parameters and explanations of how to use them, run the azure create-all --help command. 
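For reference, you can list the available parameters directly from the binary that you extracted earlier, for example (assuming the RHEL 9 build of the ccoctl binary): ./ccoctl.rhel9 azure create-all --help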
Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output azure-ad-pod-identity-webhook-config.yaml cluster-authentication-02-config.yaml openshift-cloud-controller-manager-azure-cloud-credentials-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capz-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-disk-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-file-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-azure-cloud-credentials-credentials.yaml You can verify that the Microsoft Entra ID service accounts are created by querying Azure. For more information, refer to Azure documentation on listing Entra ID service accounts. 12.8.2.3. Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you used the ccoctl utility to create a new Azure resource group instead of using an existing resource group, modify the resourceGroupName parameter in the install-config.yaml as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com # ... platform: azure: resourceGroupName: <azure_infra_name> 1 # ... 1 This value must match the user-defined name for Azure resources that was specified with the --name argument of the ccoctl azure create-all command. If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . 12.9. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have an Azure subscription ID and tenant ID. 
Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 12.10. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 12.11. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.16, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . 
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 12.12. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting .
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "./openshift-install create install-config --dir <installation_directory> 1", "pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'", "additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----", "networkResourceGroupName: <vnet_resource_group> 1 virtualNetwork: <vnet> 2 controlPlaneSubnet: <control_plane_subnet> 3 computeSubnet: <compute_subnet> 4", "imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release", "publish: Internal", "controlPlane: 1 platform: azure: settings: securityType: TrustedLaunch 2 trustedLaunch: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4", "controlPlane: 1 platform: azure: settings: securityType: ConfidentialVM 2 confidentialVM: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 osDisk: securityProfile: securityEncryptionType: VMGuestStateOnly 5", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version zones: 9 - \"1\" - \"2\" - \"3\" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: osImage: 12 publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 13 region: centralus 14 resourceGroupName: existing_resource_group 15 networkResourceGroupName: vnet_resource_group 16 virtualNetwork: vnet 17 controlPlaneSubnet: control_plane_subnet 18 computeSubnet: compute_subnet 19 outboundType: UserDefinedRouting 20 cloudName: AzurePublicCloud pullSecret: '{\"auths\": ...}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 
23 additionalTrustBundle: | 24 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- imageContentSources: 25 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev publish: Internal 26", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor secretRef: name: <component_secret> namespace: <component_namespace>", "apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl.<rhel_version>\" \\ 1 -a ~/.pull-secret", "chmod 775 ccoctl.<rhel_version>", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 
--install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "az login", "ccoctl azure create-all --name=<azure_infra_name> \\ 1 --output-dir=<ccoctl_output_dir> \\ 2 --region=<azure_region> \\ 3 --subscription-id=<azure_subscription_id> \\ 4 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 5 --dnszone-resource-group-name=<azure_dns_zone_resource_group_name> \\ 6 --tenant-id=<azure_tenant_id> 7", "ls <path_to_ccoctl_output_dir>/manifests", "azure-ad-pod-identity-webhook-config.yaml cluster-authentication-02-config.yaml openshift-cloud-controller-manager-azure-cloud-credentials-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capz-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-disk-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-file-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-azure-cloud-credentials-credentials.yaml", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "apiVersion: v1 baseDomain: example.com platform: azure: resourceGroupName: <azure_infra_name> 1", "openshift-install create manifests --dir <installation_directory>", "cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/", "cp -a /<path_to_ccoctl_output_dir>/tls .", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on_azure/installing-restricted-networks-azure-installer-provisioned
23.11. Initializing the Hard Disk
23.11. Initializing the Hard Disk If no readable partition tables are found on existing hard disks, the installation program asks to initialize the hard disk. This operation makes any existing data on the hard disk unreadable. If your system has a brand new hard disk with no operating system installed, or if you have removed all partitions on the hard disk, click Re-initialize drive . The installation program presents you with a separate dialog for each disk on which it cannot read a valid partition table. Click the Ignore all button or the Re-initialize all button to apply the same answer to all devices. Figure 23.33. Warning screen - initializing DASD Figure 23.34. Warning screen - initializing FCP LUN Certain RAID systems or other nonstandard configurations may be unreadable to the installation program, and the prompt to initialize the hard disk may appear. The installation program responds to the physical disk structures it is able to detect. To enable automatic initialization of hard disks where it turns out to be necessary, use the kickstart command zerombr (refer to Chapter 32, Kickstart Installations ). This command is required when performing an unattended installation on a system with previously initialized disks. Warning If you have a nonstandard disk configuration that can be detached during installation and detected and configured afterward, power off the system, detach it, and restart the installation.
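As a rough illustration of the zerombr usage mentioned above, a kickstart file for an unattended installation might contain lines similar to the following; the clearpart options are only an example and must match your own partitioning plan:
zerombr
clearpart --all --initlabel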
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/sn-initialize-hdd-s390
3.13. Software Collection Configuration Files Support
3.13. Software Collection Configuration Files Support By default, configuration files in a Software Collection are stored within the /opt/ provider /%{scl} file system hierarchy. To make configuration files more accessible and easier to manage, you are advised to use the nfsmountable macro that redefines the _sysconfdir macro. This results in configuration files being created underneath the /etc/opt/ provider /%{scl}/ directory, outside of the /opt/ provider /%{scl} file system hierarchy. For example, a configuration file example.conf is normally stored in the /etc directory in the base system installation. If the configuration file is a part of a software_collection Software Collection and the nfsmountable macro is defined, the path to the configuration file in software_collection is as follows: For more information about using the nfsmountable macro, see Section 3.1, "Using Software Collections over NFS" .
[ "/etc/opt/ provider / software_collection / example.conf" ]
https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/packaging_guide/sect-software_collection_configuration_files_support
Chapter 1. Red Hat build of Keycloak 26.0
Chapter 1. Red Hat build of Keycloak 26.0 1.1. Overview Red Hat build of Keycloak is based on the Keycloak project, which enables you to secure your web applications by providing Web SSO capabilities based on popular standards such as OpenID Connect, OAuth 2.0, and SAML 2.0. The Red Hat build of Keycloak server acts as an OpenID Connect or SAML-based identity provider (IdP), allowing your enterprise user directory or third-party IdP to secure your applications by using standards-based security tokens. While preserving the power and functionality of Red Hat Single Sign-on, Red Hat build of Keycloak is faster, more flexible, and efficient. Red Hat build of Keycloak is an application built with Quarkus, which provides developers with flexibility and modularity. Quarkus provides a framework that is optimized for a container-first approach and provides many features for developing cloud-native applications. 1.2. Updates for 26.0.10 This release contains several fixed issues and CVE fixes. 1.2.1. CVE fixes CVE-2025-0604 Authentication Bypass Due to Missing LDAP Bind After Password Reset in Keycloak CVE-2025-1391 Keycloak could allow incorrect assignment of an organization to a user 1.3. Updates for 26.0.9 This release contains several fixed issues . 1.4. Updates for 26.0.8 This release contains several fixed issues and CVE fixes. 1.4.1. CVE fixes CVE-2024-11734 Unrestricted admin use of system and environment variables A security vulnerability has been identified that could enable administrative users to access sensitive data if certain configurations are set, potentially revealing internal details from the server environment. CVE-2024-11736 Denial of Service in Red Hat build of Keycloak Server via Security Headers A potential Denial of Service (DoS) vulnerability has been identified in Red Hat build of Keycloak. An administrative user with the ability to modify realm settings could exploit this issue to cause service disruptions. If triggered, it may prevent users from accessing applications that rely on Keycloak or using its management consoles within the affected realm. 1.4.2. Deprecating using system variables in the realm configuration To favor a more secure server runtime and avoid to accidentally expose system variables, you are now forced to specify which system variables you want to expose by using the spi-admin-allowed-system-variables configuration option when starting the server. In future releases, this capability will be removed in favor of preventing any usage of system variables in the realm configuration. 1.5. Updates for 26.0.7 This release contains several fixed issues and these additional updates. 1.5.1. Container images use OpenJDK 21 With this release, the container images use OpenJDK 21, which provides better performance than OpenJDK 17. 1.5.2. getAll() methods deprecated in certain APIs getAll() methods in Organizations and OrganizationMembers APIs are now deprecated and will be removed in the major release. Instead, use corresponding list(first, max) methods in Organizations and OrganizationMembers APIs. 1.6. Updates for 26.0.6 This release contains several fixed issues and the following additional changes. 1.6.1. CVE fixes CVE-2024-10451 Sensitive Data Exposure in Keycloak Build Process CVE-2024-10270 Keycloak Denial of Service CVE-2024-10492 Keycloak path traversal CVE-2024-9666 Keycloak proxy header handling Denial-of-Service [DoS] vulnerability CVE-2024-10039 Keycloak TLS passthrough 1.6.2. 
Updated documentation for X.509 client certificate lookup by proxy Potential vulnerable configurations have been identified in the X.509 client certificate lookup when using a reverse proxy. If you have configured the client certificate lookup by a proxy header, additional configuration steps might be required. For more detail, see Enabling client certificate lookup . 1.6.3. Admin events might include more details In this release, admin events might hold additional details about the context when the event is fired. After the upgrade, you find the database schema has a new column DETAILS_JSON in the ADMIN_EVENT_ENTITY table. 1.6.4. Security improvements for the key resolvers While using the REALM_FILESEPARATOR_KEY key resolver, Red Hat build of Keycloak now restricts access to FileVault secrets outside of its realm. Characters that could cause path traversal when specifying the expression placeholder in the Administration Console are now prohibited. Additionally, the KEY_ONLY key resolver now escapes the _ character to prevent reading secrets that would otherwise be linked to another realm when the REALM_UNDERSCORE_KEY resolver is used. The escaping simply replaces _ with __ , so, for example, USD{vault.my_secret} now looks for a file named my__secret . Because this is a breaking change, a warning is logged to ease the transition. 1.7. New features and enhancements The following release notes apply to Red Hat build of Keycloak 26.0.5, the first 26.0 release of the product. 1.7.1. Java support Red Hat build of Keycloak now supports OpenJDK 21. OpenJDK 17 support is deprecated and will be removed in a following release in favor of OpenJDK 21. 1.7.2. Keycloak JavaScript adapter now standalone Keycloak JavaScript adapter is now a standalone library and is therefore no longer served statically from the Red Hat build of Keycloak server. The goal is to de-couple the library from the Red Hat build of Keycloak server, so that it can be refactored independently, simplifying the code and making it easier to maintain in the future. Additionally, the library is now free of third-party dependencies, which makes it more lightweight and easier to use in different environments. For a complete breakdown of the changes consult the Upgrading Guide . 1.7.3. Organizations, multi-tenancy, and Customer Identity and Access Management This release introduces organizations. The feature leverages the existing Identity and Access Management (IAM) capabilities of Red Hat build of Keycloak to address Customer Identity and Access Management (CIAM) use cases like Business-to-Business (B2B) and Business-to-Business-to-Consumer (B2B2C) integrations. By using the existing capabilities from a realm, the first release of this feature provides the very core capabilities to allow a realm to integrate with business partners and customers: Manage Organizations Manage Organization Member Onboard members using different strategies such as invitation links and brokering Decorate tokens with additional metadata about the organization that the subject belongs to For more details, see Managing organizations . 1.7.4. Using organizations for multiple instances of a social broker in a realm You can now have multiple instances of the same social broker in a realm. Normally, a realm does not need multiple instances of the same social broker. However, with the organization feature, you can link different instances of the same social broker to different organizations. 
When creating a social broker, provide an Alias and optionally a Display name just like any other broker. 1.7.5. Identity Providers no longer available from the realm representation To help with scaling realms and organizations with many identity providers, the realm representation no longer holds the list of identity providers. However, they are still available from the realm representation when exporting a realm. For more details, see Identity providers . 1.7.6. User sessions persisted by default versions of Red Hat build of Keycloak stored only offline user and offline client sessions in the databases. The new feature, persistent-user-sessions, stores online user sessions and online client sessions in both memory and the database. As a result, a user can stay logged in even if all instances of Red Hat build of Keycloak are restarted or upgraded. For more details, see Persistent user sessions . This feature is enabled by default. If you want to disable it, see the Volatile user sessions procedure in the Configuring distributed caches chapter for more details. 1.7.7. Account Console and Admin Console changes 1.7.7.1. New Admin Console default login theme A new version of the keycloak login theme exists. The v2 version provides an improved look and feel, including support for switching automatically to a dark theme based on user preferences. The version ( v1 ) is deprecated, and will be removed in a future release. The default login theme for all new realms will be keycloak.v2 . Also, existing realms that never explicitly set a login theme will be switched to keycloak.v2 . 1.7.7.2. PatternFly 5 for Admin and Account Consoles In Red Hat build of Keycloak 24, the Welcome page wss updated to use PatternFly 5 , the latest version of the design system that underpins the user interface of Red Hat build of Keycloak. In this release, the Admin Console and Account Console are also updated to use PatternFly 5. If you want to extend and customize the Admin Console and Account Console, review the changes in PatternFly 5 and update your customizations accordingly. 1.7.8. Customizable Footer in login Themes This release introduced the capability to easily add a custom footer to the login pages for the base/login and keycloak.v2/login theme. The new footer.ftl template provides a content macro that is rendered at the bottom of the "login box". To use a custom footer, create a footer.ftl file in your custom login theme with the desired content. For more details, see Adding a custom footer to a login theme . 1.7.9. You are already logged in message The Red Hat build of Keycloak 24 release provided improvements for when a user is authenticated in parallel in multiple browser tabs. However, this improvement did not address the case when an authentication session expired. In this release, when a user is already logged in to one browser tab and an authentication session expired in another browser tab, Red Hat build of Keycloak redirects back to the client application with an OIDC/SAML error. As a result, the client application can immediately retry authentication, which should usually automatically log in the application because of the SSO session. Note that the message You are already logged in does not appear to the end user when an authentication session expires and user is already logged-in. You may consider updating your applications to handle this error. For more details, see authentication sessions . 1.7.10. 
Searching by user attribute is now case sensitive When searching for users by user attribute, Red Hat build of Keycloak no longer searches for user attribute names forcing lower case comparisons. The goal of this change was to speed up searches by using the Red Hat build of Keycloak native index on the user attribute table. If your database collation is case-insensitive, your search results will stay the same. If your database collation is case-sensitive, you might see fewer search results than before. 1.7.11. Password policy for check if password contains Username Red Hat build of Keycloak supports a new password policy that allows you to deny user passwords which contains the user username. 1.7.12. Required actions improvements In the Admin Console, you can now configure some actions in the Required actions tab of a particular realm. Currently, the Update password is the only built-in configurable required action. It supports setting Maximum Age of Authentication , which is the maximum time users can update their password by the kc_action parameter (used for instance for a password update in the Account Console) without re-authentication. The sorting of required actions is also improved. When multiple actions are required during authentication, all actions are sorted together regardless of whether those are actions set during authentication (for instance by the kc_action parameter) or actions added to the user account manually by an administrator. 1.7.13. Default client profile for SAML The default client profile for secured SAML clients was added. When browsing through client policies of a realm in the Admin Console, you see a new client profile saml-security-profile . When it is used, there are security best practices applied for SAML clients. For example, signatures are enforced, SAML Redirect binding is disabled, and wildcard redirect URLs are prohibited. 1.7.14. Improving performance for deletion of user consents When a client scope or the full realm is deleted, the associated user consents should also be removed. A new index over the table USER_CONSENT_CLIENT_SCOPE has been added to increase the performance. Note that, if the table contains more than 300,000 entries, Red Hat build of Keycloak skips the creation of the indexes during the automatic schema migration and logs the SQL statements to the console instead. The statements must be run manually in the database after Red Hat build of Keycloak startup. 1.7.15. New generalized event types for credentials Generalized events now exist for updating ( UPDATE_CREDENTIAL ) and removing ( REMOVE_CREDENTIAL ) a credential. The credential type is described in the credential_type attribute of the events. The new event types are supported by the Email Event Listener. The following event types are now deprecated and will be removed in a future version: UPDATE_PASSWORD , UPDATE_PASSWORD_ERROR , UPDATE_TOTP , UPDATE_TOTP_ERROR , REMOVE_TOTP , REMOVE_TOTP_ERROR . 1.7.16. Management port for metrics and health endpoints Metrics and health checks endpoints are no longer accessible through the standard Red Hat build of Keycloak server port. As these endpoints should be hidden from the outside world, they can be accessed on a separate default management port 9000 . Using a separate port prevents exposure to the users as standard Red Hat build of Keycloak endpoints in Kubernetes environments. The new management interface provides a new set of options, which you can configure. 
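For example, a minimal sketch of enabling these endpoints on a bare-metal installation, assuming default management interface settings and the documented health-enabled and metrics-enabled options: bin/kc.sh start-dev --health-enabled=true --metrics-enabled=true . With the defaults, the endpoints are then served on the management port rather than the main server port, for example http://localhost:9000/health/ready and http://localhost:9000/metrics .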
The Red Hat build of Keycloak Operator assumes the management interface is enabled by default. For more details, see Configuring the Management Interface . 1.7.16.1. Metrics for embedded caches enabled by default Metrics for the embedded caches are now enabled by default. To enable histograms for latencies, set the option cache-metrics-histograms-enabled to true . 1.7.16.2. Metrics for HTTP endpoints enabled by default The metrics provided by Red Hat build of Keycloak now include HTTP server metrics starting with http_server . See below for some examples. Use the new options http-metrics-histograms-enabled and http-metrics-slos to enable default histogram buckets or specific buckets for service level objectives (SLOs). Read more about histograms in the Prometheus documentation about histograms on how to use the additional metrics series provided in http_server_requests_seconds_bucket . 1.7.17. Client libraries updates 1.7.17.1. Dedicated release cycle for the client libraries Starting at this release, some Red Hat build of Keycloak client libraries will have release cycles independent of the Red Hat build of Keycloak server release cycle. The 26.0 release may be the last one when the client libraries are released together with the Red Hat build of Keycloak server. From now on, the client libraries may be released at a different time than the Red Hat build of Keycloak server. The client libraries are these artifacts: Java admin client - Maven artifact org.keycloak:keycloak-admin-client Java authorization client - Maven artifact org.keycloak:keycloak-authz-client Java policy enforcer - Maven artifact org.keycloak:keycloak-policy-enforcer 1.7.17.2. Compatibility of the client libraries with the server Starting at this release, client libraries are tested and supported with the same server version and a few major server versions. For details about supported versions of client libraries with server versions, see the Upgrading Guide . 1.7.18. Cookie updates 1.7.18.1. SameSite attribute set for all cookies Previously, the following cookies did not set the SameSite attribute, which in recent browser versions results in these cookies defaulting to SameSite=Lax : KC_STATE_CHECKER now sets SameSite=Strict KC_RESTART now sets SameSite=None KEYCLOAK_LOCALE now sets SameSite=None KEYCLOAK_REMEMBER_ME now sets SameSite=None The default value SameSite=Lax causes issues with POST based bindings, which is mostly applicable to SAML, but it is also used in some OpenID Connect / OAuth 2.0 flows. 1.7.18.2. Removing KC_AUTH_STATE cookie The cookie KC_AUTH_STATE is removed and it is no longer set by the Red Hat build of Keycloak server as this server no longer needs this cookie. 1.7.19. Lightweight access tokens 1.7.19.1. Lightweight access token to be even more lightweight In releases, the support for lightweight access token was added. In this release, the even more built-in claims are removed from the lightweight access token. The claims are added by protocol mappers. Some affect even the regular access tokens or ID tokens as they were not strictly required by the OIDC specification. Claims sub and auth_time are added by protocol mappers now, which are configured by default on the new client scope basic , which is added automatically to all the clients. The claims are still added to the ID token and access token as before, but not to the lightweight access token. Claim nonce is added only to the ID token now. It is not added to a regular access token or lightweight access token. 
For backwards compatibility, you can add this claim to an access token by protocol mapper, which needs to be explicitly configured. Claim session_state is not added to any token now. You can still add it by protocol mapper. The other dedicated claim sid is still supported by the specification, which was available in versions and has exactly the same value. For more details, see New default client scope basic . 1.7.19.2. Lightweight access tokens for Admin REST API Lightweight access tokens can now be used on the admin REST API. The security-admin-console and admin-cli clients are now using lightweight access tokens by default, so "Always Use Lightweight Access Token" and "Full Scope Allowed" are now enabled on these two clients. However, the behavior in the Admin Console should effectively remain the same. Be cautious if you have made changes to these two clients and if you are using them for other purposes. 1.7.20. Support for application/jwt media-type in token introspection endpoint You can use the HTTP Header Accept: application/jwt when invoking a token introspection endpoint. When enabled for a particular client, it returns a claim jwt from the token introspection endpoint with the full JWT access token, which can be useful especially when the client calling introspection endpoint used a lightweight access token. 1.7.21. Password changes 1.7.21.1. Argon2 password hashing Argon2 is now the default password hashing algorithm used by Red Hat build of Keycloak in a non-FIPS environment. Argon2 was the winner of the 2015 password hashing competition and is the recommended hashing algorithm by OWASP . In Red Hat build of Keycloak 24, the default hashing iterations for PBKDF2 were increased from 27.5K to 210K, resulting in a more than 10 times increase in the amount of CPU time required to generate a password hash. With Argon2, you can achieve better security with almost the same CPU time as with releases of Red Hat build of Keycloak. One downside is Argon2 requires more memory, which is a requirement to be resistant against GPU attacks. The defaults for Argon2 in Red Hat build of Keycloak require 7MB per-hashing request. To prevent excessive memory and CPU usage, the parallel computation of hashes by Argon2 is by default limited to the number of cores available to the JVM. To support the memory intensive nature of Argon2, the default GC is updated from ParallelGC to G1GC for a better heap utilization. Note that Argon2 is not compliant with FIPS 140-2. So if you are in the FIPS environment, the default algorithm will be still PBKDF2. Also note that if you are on non-FIPS environment and you plan to migrate to the FIPS environment, consider changing the password policy to a FIPS compliant algorithm such as pbkdf2-sha512 at the outset. Otherwise, users will not be able to log in after they switch to the FIPS environment. 1.7.21.2. Password policy for check if password contains Username Red Hat build of Keycloak supports a new password policy that allows you to deny a user password that contains the user username. 1.7.22. Passkeys improvements (preview) The support for Passkeys conditional UI was added. When this preview feature is enabled, a dedicated authenticator is available. You can select from a list of available passkeys accounts and authenticate a user based on that. 1.7.23. Authenticator for override existing IDP link during first-broker-login A new authenticator Confirm override existing link exists. 
This authenticator allows you to override the linked IDP username for the Red Hat build of Keycloak user, which was previously linked to different IDP identity. For more details, see override existing broker link . 1.7.24. Authorization changes 1.7.24.1. Breaking fix in authorization client library For users of the keycloak-authz-client library, calling AuthorizationResource.getPermissions(... ) now correctly returns a List<Permission> . Previously, it would return a List<Map> at runtime, even though the method declaration advertised List<Permission> . This fix will break code that relied on casting the List or its contents to List<Map> . 1.7.24.2. IDs are no longer set when exporting authorization settings for a client When exporting the authorization settings for a client, the IDs for resources, scopes, and policies are no longer set. As a result, you can now import the settings from one client to another client. 1.7.25. Group-related events no longer fired when removing a realm With the goal of improving the scalability of groups, groups are now removed directly from the database when removing a realm. As a result, group-related events, such as the GroupRemovedEvent , are no longer fired when removing a realm. For information, see Group-related events . 1.7.26. Configuring the LDAP Connection Pool In this release, the LDAP connection pool configuration relies solely on system properties. For more details, see Configuring the connection pool . 1.7.27. New LDAP users are enabled by default when using Microsoft Active Directory If you are using Microsoft AD and creating users through the administrative interfaces, the user will created as enabled by default. In versions, it was only possible to update the user status after setting a (non-temporary) password to the user. This behavior was not consistent with other built-in user storages as well as not consistent with others LDAP vendors supported by the LDAP provider. 1.7.28. The java-keystore key provider supports more algorithms and vault secrets The java-keystore key provider, which allows loading a realm key from an external java keystore file, has been modified to manage all Red Hat build of Keycloak algorithms. Also, the keystore and key secrets, which are needed to retrieve the actual key from the store, can be configured using the vault . Therefore, a Red Hat build of Keycloak realm can externalize any key to the encrypted file without sensitive data stored in the database. For more information about this subject, see Configuring realm keys . 1.7.29. Small changes in session lifespan and idle calculations In versions, the session max lifespan and idle timeout calculation was slightly different when validating if a session was still valid. Now that validation uses the same code as the rest of the project. If the session is using the remember me feature, the idle timeout and max lifespan are the maximum value between the common SSO and the remember me configuration values. 1.7.30. New Hostname options Due to the complexity of the hostname configuration settings, this release includes Hostname v2 options. The original host name options have been removed. If you have custom hostname settings, you should migrate to the new options. Note that the behavior behind these options has also changed. For more details, see New hostname options . 1.7.31. Logging enhancements 1.7.31.1. Syslog for remote logging Red Hat build of Keycloak now supports the Syslog protocol for remote logging based on the protocol defined in RFC 5424 . 
If you enable the syslog handler, it sends all log events to a remote syslog server. For more information, see Configuring logging . 1.7.31.2. Different log levels for log handlers You can now specify log levels for all available log handlers, such as console , file , or syslog . This more fine-grained approach means that you can control logging over the whole application to match your specific needs. For more information, see Configuring logging . 1.7.32. All cache options are runtime You can now specify the cache , cache-stack , and cache-config-file options during runtime. This change eliminates the need to execute the build phase and rebuild your image with these options. For more details, see Specify cache options at runtime . 1.7.33. Improvements for highly available multi-site deployments Red Hat build of Keycloak 26 introduces significant improvements to the recommended high availability multi-site architecture, most notably: Red Hat build of Keycloak deployments are now able to handle user requests simultaneously in both sites. Active monitoring of the connectivity between the sites is now required to update the replication between the sites in case of a failure. The loadbalancer blueprint has been updated to use the AWS Global Accelerator as this avoids prolonged fail-over times caused by DNS caching by clients. Persistent user sessions are now a requirement of the architecture. Consequently, user sessions will be kept on Red Hat build of Keycloak or Data Grid upgrades. For details on implementation, see Highly available multi-site deployments . 1.7.34. Method getExp added to SingleUseObjectKeyModel As a consequence of the removal of deprecated methods from AccessToken , IDToken , and JsonWebToken , the SingleUseObjectKeyModel also changed to keep consistency with the method names related to expiration values. For more details, see SingleUseObjectKeyModel . 1.7.35. Support for PostgreSQL 16 The supported and tested databases now include PostgreSQL 16. 1.7.36. Infinispan marshalling changes to Infinispan Protostream Marshalling is the process of converting Java objects into bytes to send them across the network between Red Hat build of Keycloak servers. With Red Hat build of Keycloak 26, the marshalling format is changed from JBoss Marshalling to Infinispan Protostream. Warning JBoss Marshalling and Infinispan Protostream are not compatible with each other and incorrect usage may lead to data loss. Consequently, all caches are cleared when upgrading to this version. Infinispan Protostream is based on Protocol Buffers (proto 3), which has the advantage of backwards/forwards compatibility. 1.7.37. Keycloak CR changes 1.7.37.1. Keycloak CR supports standard scheduling options The Keycloak CR now exposes first class properties for controlling the scheduling of your Keycloak Pods. The scheduling stanza exposes optional standard Kubernetes affinity, tolerations, topology spread constraints, and the priority class name to fine tune the scheduling and placement of your server Pods. For more details, see Scheduling . 1.7.37.2. KeycloakRealmImport CR supports placeholder replacement The KeycloakRealmImport CR now exposes spec.placeholders to create environment variables for placeholder replacement in the import. For more details, see Realm import . 1.7.38. Admin Bootstrapping and Recovery In releases, regaining access to a Red Hat build of Keycloak instance when all admin users were locked out was a challenging process. 
However, Red Hat build of Keycloak now offers new methods to bootstrap a temporary admin account and recover lost admin access. You can now run the start or start-dev commands with specific options to create a temporary admin account. Also, a new dedicated command is introduced, allowing users to quickly regain admin access. Consequently, the environment variables KEYCLOAK_ADMIN and KEYCLOAK_ADMIN_PASSWORD have been deprecated. You should use KC_BOOTSTRAP_ADMIN_USERNAME and KC_BOOTSTRAP_ADMIN_PASSWORD instead. These are also general options, so they may be specified via the CLI or other config sources, for example --bootstrap-admin-username=admin ; a short sketch is included at the end of these release notes. For more information, see the temporary admin account . 1.7.39. OpenTelemetry Tracing Preview feature The underlying Quarkus support for OpenTelemetry Tracing has been exposed to Red Hat build of Keycloak and allows you to obtain application traces for better observability. This feature includes the ability to locate performance bottlenecks, determine the cause of application failures, and trace a request through the distributed system. For more information, see Enabling tracing . 1.7.40. DPoP improvements The DPoP (OAuth 2.0 Demonstrating Proof-of-Possession) preview feature has improvements. DPoP is now supported for all grant types. In previous releases, this feature was supported only for the authorization_code grant type. Support also exists for the DPoP token type on the UserInfo endpoint. 1.7.41. Adding support for ECDH-ES encryption key management algorithms Red Hat build of Keycloak now allows configuring ECDH-ES, ECDH-ES+A128KW, ECDH-ES+A192KW or ECDH-ES+A256KW as the encryption key management algorithm for clients. The Key Agreement with Elliptic Curve Diffie-Hellman Ephemeral Static (ECDH-ES) specification introduces three new header parameters for the JWT: epk , apu and apv . Currently, the Red Hat build of Keycloak implementation only manages the compulsory epk while the other two (which are optional) are never added to the header. For more information about those algorithms, refer to the JSON Web Algorithms (JWA) . Also, a new key provider, ecdh-generated , is available to generate realm keys and support for ECDH algorithms is added into the Java KeyStore provider. 1.7.42. Persisting revoked access tokens across restarts In this release, revoked access tokens are written to the database and reloaded when the cluster is restarted by default when using the embedded caches. For more details, see Persisting removed access tokens . 1.7.43. Client Attribute condition in Client Policies The condition based on the client-attribute was added into Client Policies. You can use this condition to apply a policy to clients that have a specified client attribute with a specified value. Use either an AND or OR condition when evaluating this condition, as described in the documentation for client policies. 1.7.44. Adding support for ECDH-ES encryption key management algorithms Red Hat build of Keycloak now allows configuring ECDH-ES, ECDH-ES+A128KW, ECDH-ES+A192KW or ECDH-ES+A256KW as the encryption key management algorithm for clients. The Key Agreement with Elliptic Curve Diffie-Hellman Ephemeral Static (ECDH-ES) specification introduces three new header parameters for the JWT: epk , apu and apv . Currently, the Red Hat build of Keycloak implementation only manages the compulsory epk while the other two (which are optional) are never added to the header. For more information about those algorithms, see the JSON Web Algorithms (JWA) .
Also, a new key provider, ecdh-generated , is available to generate realm keys and support for ECDH algorithms is added into the Java KeyStore provider. 1.7.45. Option proxy-trusted-addresses added The proxy-trusted-addresses option can be used when the proxy-headers option is set to specify an allowlist of trusted proxy addresses. If the proxy address for a given request is not trusted, then the respective proxy header values will not be used. For details, see Trusted Proxies . 1.7.46. Option proxy-protocol-enabled added The proxy-protocol-enabled option controls whether the server should use the HA PROXY protocol when serving requests from behind a proxy. When set to true, the remote address returned will be the one from the actual connecting client. For details, see Proxy Protocol . 1.7.47. Option to reload trust and key material added The https-certificates-reload-period option can be set to define the reloading period of key store, trust store, and certificate files referenced by https-* options. Use -1 to disable reloading. Defaults to 1h (one hour). For details, see Certificate and Key Reloading . 1.7.48. Options to configure cache max-count added The --cache-embedded-${CACHE_NAME}-max-count option can be set to define an upper bound on the number of cache entries in the specified cache. 1.7.49. The https-trust-store-* options have been undeprecated Based on the community feedback, we decided to undeprecate the https-trust-store-* options to allow better granularity in trusted certificates. 1.8. Deprecated features In previous sections, some features have already been mentioned as deprecated. The following sections provide details on other deprecated features. 1.8.1. Resteasy util class is deprecated org.keycloak.common.util.Resteasy has been deprecated. You should use org.keycloak.util.KeycloakSessionUtil to obtain the KeycloakSession instead. It is highly recommended to avoid obtaining the KeycloakSession by means other than when creating your custom provider. 1.8.2. Property origin in the UserRepresentation is deprecated The origin property in the UserRepresentation is deprecated and planned to be removed in a future release. Instead, prefer using the federationLink property to obtain the provider to which a user is linked. 1.8.3. Deprecations in keycloak-common module The following items have been deprecated for removal in upcoming Red Hat build of Keycloak versions with no replacement: org.keycloak.common.util.reflections.Reflections.newInstance(java.lang.Class<T>) org.keycloak.common.util.reflections.Reflections.newInstance(java.lang.Class<?>, java.lang.String) org.keycloak.common.util.reflections.SetAccessiblePrivilegedAction org.keycloak.common.util.reflections.UnSetAccessiblePrivilegedAction 1.8.4. Deprecations in keycloak-services module The class UserSessionCrossDCManager is deprecated and planned to be removed in a future version of Red Hat build of Keycloak. Read the UserSessionCrossDCManager Javadoc for the alternative methods to use. 1.8.5. Deprecated Account REST endpoint for removing credential The Account REST endpoint for removing the credential of the user is deprecated. Starting at this version, the Account Console no longer uses this endpoint. It is replaced by the Delete Credential application-initiated action. 1.8.6. Deprecated keycloak login Theme The keycloak login theme has been deprecated in favor of the new keycloak.v2 theme and will be removed in a future version.
While it remains the default for new realms for compatibility reasons, it is strongly recommended to switch all the realm themes to keycloak.v2 . 1.8.7. Method encode deprecated on PasswordHashProvider Method String encode(String rawPassword, int iterations) on the interface org.keycloak.credential.hash.PasswordHashProvider is deprecated. The method will be removed in one of the future Red Hat build of Keycloak releases. 1.8.8. Deprecated theme variables The following variables were deprecated in the Admin theme and will be removed in a future version: authServerUrl . Use serverBaseUrl instead. authUrl . Use adminBaseUrl instead. The following variables were deprecated in the Account theme and will be removed in a future version: authServerUrl . Use serverBaseUrl instead; note that serverBaseUrl does not include a trailing slash. authUrl . Use serverBaseUrl instead; note that serverBaseUrl does not include a trailing slash. 1.8.9. Methods to get and set current refresh token in client session are now deprecated The methods String getCurrentRefreshToken() , void setCurrentRefreshToken(String currentRefreshToken) , int getCurrentRefreshTokenUseCount() , and void setCurrentRefreshTokenUseCount(int currentRefreshTokenUseCount) in the interface org.keycloak.models.AuthenticatedClientSessionModel are deprecated. They have been replaced by similar methods that require an identifier as a parameter, such as getRefreshToken(String reuseId) , to manage multiple refresh tokens within a client session. The methods will be removed in one of the future Red Hat build of Keycloak releases. 1.9. Removed features In previous sections, some features have already been mentioned as removed. The following sections provide details on other removed features. 1.9.1. Support for the UMD distribution removed The UMD (Universal Module Definition) distribution of the Keycloak JS library has been removed. This means that the library is no longer exposed as a global variable, and instead must be imported as a module . This change is in line with modern JavaScript development practices, allows for a more consistent experience between browsers and build tooling, and generally results in more predictable code with fewer side effects. If you are using a bundler such as Vite or Webpack, nothing changes; you'll have the same experience as before. If you are using the library directly in the browser, you'll need to update your code to import the library as a module: <!-- Before --> <script src="/path/to/keycloak.js"></script> <script> const keycloak = new Keycloak(); </script> <!-- After --> <script type="module"> import Keycloak from '/path/to/keycloak.js'; const keycloak = new Keycloak(); </script> You can also opt to use an import map to make the import of the library less verbose: <script type="importmap"> { "imports": { "keycloak-js": "/path/to/keycloak.js" } } </script> <script type="module"> // The library can now be imported without specifying the full path, providing a similar experience as with a bundler. import Keycloak from 'keycloak-js'; const keycloak = new Keycloak(); </script> If you are using TypeScript, you may need to update your tsconfig.json to be able to resolve the library: { "compilerOptions": { "moduleResolution": "Bundler" } } 1.9.2. CollectionUtil intersection method removed The method org.keycloak.common.util.CollectionUtil.intersection has been removed. You should use java.util.Collection.retainAll on an existing collection instead. 1.9.3.
Account Console v2 theme removed The Account Console v2 theme has been removed from Red Hat build of Keycloak. This theme was deprecated in Red Hat build of Keycloak 24 and replaced by the Account Console v3 theme. If you are still using this theme, you should migrate to the Account Console v3 theme. 1.9.4. Original host name options were removed These options were replaced by new options referred to as Hostname v2. For more details, see Configuring the hostname (v2) and New hostname options . 1.9.5. Proxy option removed The proxy option was deprecated in Red Hat build of Keycloak 24 and was replaced by the proxy-headers option in combination with hostname options as needed. For more details, see Using a reverse proxy . 1.9.6. Most Java adapters removed Most Java adapters are now removed from the Red Hat build of Keycloak codebase and downloads pages. For OAuth 2.0/OIDC, this includes removal of the EAP adapter, Servlet Filter adapter, KeycloakInstalled desktop adapter, the jaxrs-oauth-client adapter, JAAS login modules, Spring adapter and SpringBoot adapters. For SAML, this includes removal of the Servlet filter adapter. SAML adapters are still supported with JBoss EAP. The generic Authorization Client library is still supported. You can use it in combination with any other OAuth 2.0 or OpenID Connect libraries. You can check the quickstarts for some examples where this authorization client library is used together with third-party Java adapters like Elytron OIDC or SpringBoot. You can also check the quickstarts for an example of the SAML adapter used with WildFly. 1.9.7. OSGi metadata removed Since all of the Java adapters that used OSGi metadata have been removed, OSGi metadata are no longer generated for our jars. 1.9.8. Legacy cookies removed Red Hat build of Keycloak no longer sends _LEGACY cookies, which were introduced as a workaround for older browsers that did not support the SameSite flag on cookies. The _LEGACY cookies also served another purpose, which was to allow login from an insecure context. Although an insecure context is never recommended in production deployments of Red Hat build of Keycloak, it is fairly frequent to access Red Hat build of Keycloak over http outside of localhost . As an alternative to the _LEGACY cookies, Red Hat build of Keycloak no longer sets the secure flag; but it does set SameSite=Lax instead of SameSite=None when it detects that an insecure context is used. 1.9.9. EnvironmentDependentProviderFactory removed The method EnvironmentDependentProviderFactory.isSupported() was deprecated for several releases and has now been removed. Instead, implement isSupported(Config.Scope config) . 1.9.10. Support for legacy redirect_uri parameter and SPI options are removed Previous versions of Red Hat build of Keycloak supported automatic logout of the user and redirecting to the application by opening a logout endpoint URL such as http(s)://example-host/auth/realms/my-realm-name/protocol/openid-connect/logout?redirect_uri=encodedRedirectUri . This functionality was deprecated in Red Hat build of Keycloak 18 and has been removed in this version in favor of following the OpenID Connect specification.
As part of this change, the following related configuration options for the SPI have been removed: --spi-login-protocol-openid-connect-legacy-logout-redirect-uri --spi-login-protocol-openid-connect-suppress-logout-confirmation-screen If you were still making use of these options or the redirect_uri parameter for logout, you should implement the OpenID Connect RP-Initiated Logout specification instead. 1.9.11. org.keycloak:keycloak-model-legacy removed The org.keycloak:keycloak-model-legacy module was deprecated in a previous release and is removed in this release. Use the org.keycloak:keycloak-model-storage module instead. 1.9.12. Offline session preloading removed The old behavior to preload offline sessions at startup is now removed after being deprecated in a previous release. 1.9.13. setOrCreateChild() method removed from JavaScript Admin Client The groups.setOrCreateChild() method has been removed from the JavaScript-based Admin Client. If you are still using this method, start using the createChildGroup() or updateChildGroup() methods instead. 1.9.14. Removed session_state claim The session_state claim, which contains the same value as the sid claim, is now removed from all tokens as it is not required according to the OpenID Connect Front-Channel Logout and OpenID Connect Back-Channel Logout specifications. The session_state claim remains present in the Access Token Response in accordance with the OpenID Connect Session Management specification. Note that the setSessionState() method is also removed from the IDToken class in favor of the setSessionId() method, and the getSessionState() method is now deprecated. A new Session State (session_state) mapper is also included and can be assigned to client scopes (for instance, the basic client scope) to revert to the old behavior. If an old version of the JS adapter is used, the Session State (session_state) mapper should also be used by using client scopes as described above. 1.9.15. Grace period for idle sessions removed when persistent sessions are enabled Previous versions of Red Hat build of Keycloak added a grace period of two minutes to idle times of user and client sessions. This was added due to an architecture where session refresh times were replicated asynchronously in a cluster. With persistent user sessions, this is no longer necessary, and therefore the grace period is now removed. To keep the old behavior, update your realm configuration and extend the session and client idle times by two minutes. 1.9.16. Adapter and misc BOM files removed The org.keycloak.bom:keycloak-adapter-bom and org.keycloak.bom:keycloak-misc-bom BOM files are removed. The adapter BOM was no longer useful because most of the Java adapters are removed. The misc BOM contained only one artifact, keycloak-test-helper , and that artifact is also removed in this release. 1.9.17. keycloak-test-helper removed The Maven artifact org.keycloak:keycloak-test-helper is removed in this release. The artifact provided a few helper methods for dealing with a Java admin client. If you use the helper methods, it is possible to fork them into your application if needed. 1.9.18. JEE admin-client removed The JEE admin-client is removed in this release; however, the Jakarta admin-client is still supported. 1.9.19. Deprecated methods from certain classes removed The following methods were removed from the AccessToken class: expiration . Use the exp method instead. notBefore . Use the nbf method instead. issuedAt . Use the iat method instead.
The following methods were removed from the IDToken class: getAuthTime and setAuthTime . Use the getAuth_time and setAuth_time methods, respectively. notBefore . Use the nbf method instead. issuedAt . Use the iat method instead. setSessionState . Use the setSessionId method instead (see the details above in the section about the session_state claim). The following methods were removed from the JsonWebToken class: expiration . Use the exp method instead. notBefore . Use the nbf method instead. issuedAt . Use the iat method instead. You should also expect that the exp and nbf claims may not be set in tokens, as they are optional. Previously, these claims were being set with a value of 0, which does not make much sense because their value should be a valid NumericDate . 1.9.20. Deprecated cookie methods removed The following APIs for setting custom cookies have been removed: ServerCookie - replaced by NewCookie.Builder LocaleSelectorProvider.KEYCLOAK_LOCALE - replaced by CookieType.LOCALE HttpCookie - replaced by NewCookie.Builder HttpResponse.setCookieIfAbsent(HttpCookie cookie) - replaced by HttpResponse.setCookieIfAbsent(NewCookie cookie) 1.9.21. Deprecated LinkedIn provider removed In version 22.0, the OAuth 2.0 social provider for LinkedIn was replaced by a new OpenID Connect implementation. The legacy provider was deprecated but not removed, just in case it was still functional in some existing realms. Red Hat build of Keycloak 26 is removing the old provider and its associated linkedin-oauth feature. From now on, the default LinkedIn social provider is the only option available. 1.10. Fixed issues Each release includes fixed issues: Red Hat build of Keycloak 26.0.10 fixed Issues Red Hat build of Keycloak 26.0.9 fixed Issues Red Hat build of Keycloak 26.0.8 fixed Issues Red Hat build of Keycloak 26.0.7 fixed Issues Red Hat build of Keycloak 26.0.6 fixed Issues Red Hat build of Keycloak 26.0.x fixed issues . 1.11. Supported configurations For the supported configurations for Red Hat build of Keycloak 26.0, see Supported configurations . 1.12. Component details For the list of supported component versions for Red Hat build of Keycloak 26.0, see Component details .
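As noted in the Admin Bootstrapping and Recovery section above, here is a minimal sketch of bootstrapping a temporary admin account on startup. The environment variable names and the --bootstrap-admin-username option come from that section; the username and password values are illustrative, and the --bootstrap-admin-password option is assumed to follow the same naming convention as its environment variable:
export KC_BOOTSTRAP_ADMIN_USERNAME=tmpadmin
export KC_BOOTSTRAP_ADMIN_PASSWORD='change-me'
bin/kc.sh start-dev
# or, equivalently, pass the values as CLI options:
bin/kc.sh start-dev --bootstrap-admin-username=tmpadmin --bootstrap-admin-password='change-me'
Because the account is intended to be temporary, it is typically removed once permanent admin access has been restored.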
[ "http_server_active_requests 1.0 http_server_requests_seconds_count{method=\"GET\",outcome=\"SUCCESS\",status=\"200\",uri=\"/realms/{realm}/protocol/{protocol}/auth\"} 1.0 http_server_requests_seconds_sum{method=\"GET\",outcome=\"SUCCESS\",status=\"200\",uri=\"/realms/{realm}/protocol/{protocol}/auth\"} 0.048717142", "<!-- Before --> <script src=\"/path/to/keycloak.js\"></script> <script> const keycloak = new Keycloak(); </script> <!-- After --> <script type=\"module\"> import Keycloak from '/path/to/keycloak.js'; const keycloak = new Keycloak(); </script>", "<script type=\"importmap\"> { \"imports\": { \"keycloak-js\": \"/path/to/keycloak.js\" } } </script> <script type=\"module\"> // The library can now be imported without specifying the full path, providing a similar experience as with a bundler. import Keycloak from 'keycloak-js'; const keycloak = new Keycloak(); </script>", "{ \"compilerOptions\": { \"moduleResolution\": \"Bundler\" } }" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/26.0/html/release_notes/red_hat_build_of_keycloak_26_0
4.2. Controlling Root Access
4.2. Controlling Root Access When administering a home machine, the user must perform some tasks as the root user or by acquiring effective root privileges using a setuid program, such as sudo or su . A setuid program is one that operates with the user ID ( UID ) of the program's owner rather than the user operating the program. Such programs are denoted by an s in the owner section of a long format listing, as in the following example: Note The s may be upper case or lower case. If it appears as upper case, it means that the underlying permission bit has not been set. For the system administrator of an organization, however, choices must be made as to how much administrative access users within the organization should have to their machines. Through a PAM module called pam_console.so , some activities normally reserved only for the root user, such as rebooting and mounting removable media, are allowed for the first user that logs in at the physical console. However, other important system administration tasks, such as altering network settings, configuring a new mouse, or mounting network devices, are not possible without administrative privileges. As a result, system administrators must decide how much access the users on their network should receive. 4.2.1. Disallowing Root Access If an administrator is uncomfortable allowing users to log in as root for these or other reasons, the root password should be kept secret, and access to runlevel one or single user mode should be disallowed through boot loader password protection (see Section 4.2.5, "Securing the Boot Loader" for more information on this topic.) The following are four different ways that an administrator can further ensure that root logins are disallowed: Changing the root shell To prevent users from logging in directly as root , the system administrator can set the root account's shell to /sbin/nologin in the /etc/passwd file. Table 4.2. Disabling the Root Shell Effects Does Not Affect Prevents access to a root shell and logs any such attempts. The following programs are prevented from accessing the root account: login gdm kdm xdm su ssh scp sftp Programs that do not require a shell, such as FTP clients, mail clients, and many setuid programs. The following programs are not prevented from accessing the root account: sudo FTP clients Email clients Disabling root access using any console device (tty) To further limit access to the root account, administrators can disable root logins at the console by editing the /etc/securetty file. This file lists all devices the root user is allowed to log into. If the file does not exist at all, the root user can log in through any communication device on the system, whether through the console or a raw network interface. This is dangerous, because a user can log in to their machine as root using Telnet, which transmits the password in plain text over the network. By default, Red Hat Enterprise Linux 7's /etc/securetty file only allows the root user to log in at the console physically attached to the machine. 
To prevent the root user from logging in, remove the contents of this file by typing the following command at a shell prompt as root : To enable securetty support in the KDM, GDM, and XDM login managers, add the following line: to the files listed below: /etc/pam.d/gdm /etc/pam.d/gdm-autologin /etc/pam.d/gdm-fingerprint /etc/pam.d/gdm-password /etc/pam.d/gdm-smartcard /etc/pam.d/kdm /etc/pam.d/kdm-np /etc/pam.d/xdm Warning A blank /etc/securetty file does not prevent the root user from logging in remotely using the OpenSSH suite of tools because the console is not opened until after authentication. Table 4.3. Disabling Root Logins Effects Does Not Affect Prevents access to the root account using the console or the network. The following programs are prevented from accessing the root account: login gdm kdm xdm Other network services that open a tty Programs that do not log in as root , but perform administrative tasks through setuid or other mechanisms. The following programs are not prevented from accessing the root account: su sudo ssh scp sftp Disabling root SSH logins To prevent root logins through the SSH protocol, edit the SSH daemon's configuration file, /etc/ssh/sshd_config , and change the line that reads: to read as follows: Table 4.4. Disabling Root SSH Logins Effects Does Not Affect Prevents root access using the OpenSSH suite of tools. The following programs are prevented from accessing the root account: ssh scp sftp Programs that are not part of the OpenSSH suite of tools. Using PAM to limit root access to services PAM, through the /lib/security/pam_listfile.so module, allows great flexibility in denying specific accounts. The administrator can use this module to reference a list of users who are not allowed to log in. To limit root access to a system service, edit the file for the target service in the /etc/pam.d/ directory and make sure the pam_listfile.so module is required for authentication. The following is an example of how the module is used for the vsftpd FTP server in the /etc/pam.d/vsftpd PAM configuration file (the \ character at the end of the first line is not necessary if the directive is on a single line): This instructs PAM to consult the /etc/vsftpd.ftpusers file and deny access to the service for any listed user. The administrator can change the name of this file, and can keep separate lists for each service or use one central list to deny access to multiple services. If the administrator wants to deny access to multiple services, a similar line can be added to the PAM configuration files, such as /etc/pam.d/pop and /etc/pam.d/imap for mail clients, or /etc/pam.d/ssh for SSH clients. For more information about PAM, see The Linux-PAM System Administrator's Guide , located in the /usr/share/doc/pam-<version>/html/ directory. Table 4.5. Disabling Root Using PAM Effects Does Not Affect Prevents root access to network services that are PAM aware. The following services are prevented from accessing the root account: login gdm kdm xdm ssh scp sftp FTP clients Email clients Any PAM aware services Programs and services that are not PAM aware. 4.2.2. Allowing Root Access If the users within an organization are trusted and computer-literate, then allowing them root access may not be an issue. Allowing root access by users means that minor activities, like adding devices or configuring network interfaces, can be handled by the individual users, leaving system administrators free to deal with network security and other important issues. 
On the other hand, giving root access to individual users can lead to the following issues: Machine Misconfiguration - Users with root access can misconfigure their machines and require assistance to resolve issues. Even worse, they might open up security holes without knowing it. Running Insecure Services - Users with root access might run insecure servers on their machine, such as FTP or Telnet, potentially putting usernames and passwords at risk. These services transmit this information over the network in plain text. Running Email Attachments As Root - Although rare, email viruses that affect Linux do exist. A malicious program poses the greatest threat when run by the root user. Keeping the audit trail intact - Because the root account is often shared by multiple users, so that multiple system administrators can maintain the system, it is impossible to figure out which of those users was root at a given time. When using separate logins, the account a user logs in with, as well as a unique number for session tracking purposes, is put into the task structure, which is inherited by every process that the user starts. When using concurrent logins, the unique number can be used to trace actions to specific logins. When an action generates an audit event, it is recorded with the login account and the session associated with that unique number. Use the aulast command to view these logins and sessions. The --proof option of the aulast command can be used suggest a specific ausearch query to isolate auditable events generated by a particular session. For more information about the Audit system, see Chapter 7, System Auditing . 4.2.3. Limiting Root Access Rather than completely denying access to the root user, the administrator may want to allow access only through setuid programs, such as su or sudo . For more information on su and sudo , see the Gaining Privileges chapter in Red Hat Enterprise Linux 7 System Administrator's Guide, and the su(1) and sudo(8) man pages. 4.2.4. Enabling Automatic Logouts When the user is logged in as root , an unattended login session may pose a significant security risk. To reduce this risk, you can configure the system to automatically log out idle users after a fixed period of time. As root , add the following line at the beginning of the /etc/profile file to make sure the processing of this file cannot be interrupted: trap "" 1 2 3 15 As root , insert the following lines to the /etc/profile file to automatically log out after 120 seconds: export TMOUT=120 readonly TMOUT The TMOUT variable terminates the shell if there is no activity for the specified number of seconds (set to 120 in the above example). You can change the limit according to the needs of the particular installation. 4.2.5. Securing the Boot Loader The primary reasons for password protecting a Linux boot loader are as follows: Preventing Access to Single User Mode - If attackers can boot the system into single user mode, they are logged in automatically as root without being prompted for the root password. Warning Protecting access to single user mode with a password by editing the SINGLE parameter in the /etc/sysconfig/init file is not recommended. An attacker can bypass the password by specifying a custom initial command (using the init= parameter) on the kernel command line in GRUB 2. It is recommended to password-protect the GRUB 2 boot loader, as described in the Protecting GRUB 2 with a Password chapter in Red Hat Enterprise Linux 7 System Administrator's Guide. 
Preventing Access to the GRUB 2 Console - If the machine uses GRUB 2 as its boot loader, an attacker can use the GRUB 2 editor interface to change its configuration or to gather information using the cat command. Preventing Access to Insecure Operating Systems - If it is a dual-boot system, an attacker can select an operating system at boot time, for example DOS, which ignores access controls and file permissions. Red Hat Enterprise Linux 7 includes the GRUB 2 boot loader on the Intel 64 and AMD64 platform. For a detailed look at GRUB 2, see the Working With the GRUB 2 Boot Loader chapter in Red Hat Enterprise Linux 7 System Administrator's Guide. 4.2.5.1. Disabling Interactive Startup Pressing the I key at the beginning of the boot sequence allows you to start up your system interactively. During an interactive startup, the system prompts you to start up each service one by one. However, this may allow an attacker who gains physical access to your system to disable the security-related services and gain access to the system. To prevent users from starting up the system interactively, as root , disable the PROMPT parameter in the /etc/sysconfig/init file: 4.2.6. Protecting Hard and Symbolic Links To prevent malicious users from exploiting potential vulnerabilities caused by unprotected hard and symbolic links, Red Hat Enterprise Linux 7 includes a feature that only allows links to be created or followed provided certain conditions are met. In case of hard links, one of the following needs to be true: The user owns the file to which they link. The user already has read and write access to the file to which they link. In case of symbolic links, processes are only permitted to follow links when outside of world-writeable directories with sticky bits, or one of the following needs to be true: The process following the symbolic link is the owner of the symbolic link. The owner of the directory is the same as the owner of the symbolic link. This protection is turned on by default. It is controlled by the following options in the /usr/lib/sysctl.d/50-default.conf file: fs.protected_hardlinks = 1 fs.protected_symlinks = 1 To override the default settings and disable the protection, create a new configuration file called, for example, 51-no-protect-links.conf in the /etc/sysctl.d/ directory with the following content: fs.protected_hardlinks = 0 fs.protected_symlinks = 0 Note Note that in order to override the default system settings, the new configuration file needs to have the .conf extension, and it needs to be read after the default system file (the files are read in lexicographic order, therefore settings contained in a file with a higher number at the beginning of the file name take precedence). See the sysctl.d (5) manual page for more detailed information about the configuration of kernel parameters at boot using the sysctl mechanism.
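To load such an override and verify the result without rebooting, you can use the sysctl utility. The following is a brief sketch, run as root; the file name matches the 51-no-protect-links.conf example above:
sysctl --system    # re-reads all configuration files under /etc/sysctl.d/ in lexicographic order
sysctl fs.protected_hardlinks fs.protected_symlinks    # prints the values currently in effect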
[ "~]USD ls -l /bin/su -rwsr-xr-x. 1 root root 34904 Mar 10 2011 /bin/su", "echo > /etc/securetty", "auth [user_unknown=ignore success=ok ignore=ignore default=bad] pam_securetty.so", "#PermitRootLogin yes", "PermitRootLogin no", "auth required /lib/security/pam_listfile.so item=user sense=deny file=/etc/vsftpd.ftpusers onerr=succeed", "trap \"\" 1 2 3 15", "export TMOUT=120 readonly TMOUT", "PROMPT=no", "fs.protected_hardlinks = 1 fs.protected_symlinks = 1", "fs.protected_hardlinks = 0 fs.protected_symlinks = 0" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/security_guide/sec-Controlling_Root_Access
Chapter 6. Managing DNS using Capsule
Chapter 6. Managing DNS using Capsule Satellite can manage DNS records using your Capsule. DNS management contains updating and removing DNS records from existing DNS zones. A Capsule has multiple DNS providers that you can use to integrate Satellite with your existing DNS infrastructure or deploy a new one. After you have enabled DNS, your Capsule can manipulate any DNS server that complies with RFC 2136 using the dns_nsupdate provider. Other providers provide more direct integration, such as dns_infoblox for Infoblox . Available DNS providers dns_infoblox - For more information, see Using Infoblox as DHCP and DNS Providers in Provisioning hosts . dns_nsupdate - Dynamic DNS update using nsupdate. For more information, see Using Infoblox as DHCP and DNS Providers in Provisioning hosts . dns_nsupdate_gss - Dynamic DNS update with GSS-TSIG. For more information, see Section 4.4.1, "Configuring dynamic DNS update with GSS-TSIG authentication" .
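To illustrate the kind of RFC 2136 transaction that the dns_nsupdate provider performs on your behalf, the following is a sketch of an equivalent manual update using the nsupdate utility. The DNS server name, zone, record data, and TSIG key path are illustrative only:
nsupdate -k /etc/rndc.key <<'EOF'
server dns.example.com
zone example.com
update delete host1.example.com A
update add host1.example.com 3600 A 192.0.2.10
send
EOF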
null
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/installing_capsule_server/managing_dns_using_smart_proxy_capsule
Upgrading connected Red Hat Satellite to 6.16
Upgrading connected Red Hat Satellite to 6.16 Red Hat Satellite 6.16 Upgrade Satellite Server and Capsule Red Hat Satellite Documentation Team [email protected]
[ "satellite-maintain service stop", "satellite-maintain service start", "satellite-installer --foreman-proxy-dhcp-managed=false --foreman-proxy-dns-managed=false", "satellite-maintain self-upgrade", "satellite-maintain upgrade check", "satellite-maintain upgrade run", "reboot", "yum clean metadata", "satellite-maintain self-upgrade", "grep foreman_url /etc/foreman-proxy/settings.yml", "satellite-maintain upgrade check", "satellite-maintain upgrade run", "reboot", "runuser -l postgres -c \"psql -d foreman -c \\\"UPDATE pg_extension SET extowner = (SELECT oid FROM pg_authid WHERE rolname='foreman') WHERE extname='evr';\\\"\"", "satellite-installer --foreman-db-host newpostgres.example.com --katello-candlepin-db-host newpostgres.example.com --foreman-proxy-content-pulpcore-postgresql-host newpostgres.example.com", "satellite-maintain packages install leapp leapp-upgrade-el8toel9", "leapp preupgrade", "leapp upgrade", "journalctl -u leapp_resume -f", "satellite-maintain packages unlock", "satellite-maintain packages lock", "subscription-manager release --unset", "2024-01-29T20:50:09 [W|app|] Could not create role 'Ansible Roles Manager': ERF73-0602 [Foreman::PermissionMissingException]: some permissions were not found:", "satellite-maintain health check --label duplicate_permissions", "foreman-rake db:seed", "satellite-maintain health check --label duplicate_permissions" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html-single/upgrading_connected_red_hat_satellite_to_6.16/index
Chapter 4. Rebooting the overcloud
Chapter 4. Rebooting the overcloud After you perform a minor Red Hat OpenStack Platform (RHOSP) update to the latest 17.0 version, reboot your overcloud. The reboot refreshes the nodes with any associated kernel, system-level, and container component updates. These updates provide performance and security benefits. Plan downtime to perform the reboot procedures. Use the following guidance to understand how to reboot different node types: If you reboot all nodes in one role, reboot each node individually. If you reboot all nodes in a role simultaneously, service downtime can occur during the reboot operation. Complete the reboot procedures on the nodes in the following order: Section 4.1, "Rebooting Controller and composable nodes" Section 4.2, "Rebooting a Ceph Storage (OSD) cluster" Section 4.3, "Rebooting Compute nodes" 4.1. Rebooting Controller and composable nodes Reboot Controller nodes and standalone nodes based on composable roles, and exclude Compute nodes and Ceph Storage nodes. Procedure Log in to the node that you want to reboot. Optional: If the node uses Pacemaker resources, stop the cluster: [tripleo-admin@overcloud-controller-0 ~]USD sudo pcs cluster stop Reboot the node: [tripleo-admin@overcloud-controller-0 ~]USD sudo reboot Wait until the node boots. Verification Verify that the services are enabled. If the node uses Pacemaker services, check that the node has rejoined the cluster: [tripleo-admin@overcloud-controller-0 ~]USD sudo pcs status If the node uses Systemd services, check that all services are enabled: [tripleo-admin@overcloud-controller-0 ~]USD sudo systemctl status If the node uses containerized services, check that all containers on the node are active: [tripleo-admin@overcloud-controller-0 ~]USD sudo podman ps 4.2. Rebooting a Ceph Storage (OSD) cluster Complete the following steps to reboot a cluster of Ceph Storage (OSD) nodes. Prerequisites On a Ceph Monitor or Controller node that is running the ceph-mon service, check that the Red Hat Ceph Storage cluster status is healthy and the pg status is active+clean : USD sudo cephadm -- shell ceph status If the Ceph cluster is healthy, it returns a status of HEALTH_OK . If the Ceph cluster status is unhealthy, it returns a status of HEALTH_WARN or HEALTH_ERR . For troubleshooting guidance, see the Red Hat Ceph Storage 5 Troubleshooting Guide . Procedure Log in to a Ceph Monitor or Controller node that is running the ceph-mon service, and disable Ceph Storage cluster rebalancing temporarily: USD sudo cephadm shell -- ceph osd set noout USD sudo cephadm shell -- ceph osd set norebalance Note If you have a multistack or distributed compute node (DCN) architecture, you must specify the Ceph cluster name when you set the noout and norebalance flags. For example: sudo cephadm shell -c /etc/ceph/<cluster>.conf -k /etc/ceph/<cluster>.client.keyring . Select the first Ceph Storage node that you want to reboot and log in to the node. Reboot the node: Wait until the node boots. Log in to the node and check the Ceph cluster status: USD sudo cephadm -- shell ceph status Check that the pgmap reports all pgs as normal ( active+clean ). Log out of the node, reboot the node, and check its status. Repeat this process until you have rebooted all Ceph Storage nodes. 
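Between node reboots, one quick way to confirm that the placement groups have settled before moving on to the next node is the ceph pg stat command, run the same way as the other cephadm shell commands in this procedure; expect every placement group to report active+clean:
sudo cephadm shell -- ceph pg stat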
When complete, log in to a Ceph Monitor or Controller node that is running the ceph-mon service and enable Ceph cluster rebalancing: USD sudo cephadm shell -- ceph osd unset noout USD sudo cephadm shell -- ceph osd unset norebalance Note If you have a multistack or distributed compute node (DCN) architecture, you must specify the Ceph cluster name when you unset the noout and norebalance flags. For example: sudo cephadm shell -c /etc/ceph/<cluster>.conf -k /etc/ceph/<cluster>.client.keyring Perform a final status check to verify that the cluster reports HEALTH_OK : USD sudo cephadm shell ceph status 4.3. Rebooting Compute nodes To ensure minimal downtime of instances in your Red Hat OpenStack Platform environment, the Migrating instances workflow outlines the steps you must complete to migrate instances from the Compute node that you want to reboot. Migrating instances workflow Decide whether to migrate instances to another Compute node before rebooting the node. Select and disable the Compute node that you want to reboot so that it does not provision new instances. Migrate the instances to another Compute node. Reboot the empty Compute node. Enable the empty Compute node. Prerequisites Before you reboot the Compute node, you must decide whether to migrate instances to another Compute node while the node is rebooting. Review the list of migration constraints that you might encounter when you migrate virtual machine instances between Compute nodes. For more information, see Migration constraints in Configuring the Compute Service for Instance Creation . If you cannot migrate the instances, you can set the following core template parameters to control the state of the instances after the Compute node reboots: NovaResumeGuestsStateOnHostBoot Determines whether to return instances to the same state on the Compute node after reboot. When set to False , the instances remain down and you must start them manually. The default value is False . NovaResumeGuestsShutdownTimeout Number of seconds to wait for an instance to shut down before rebooting. It is not recommended to set this value to 0 . The default value is 300 . For more information about overcloud parameters and their usage, see Overcloud Parameters . Procedure Log in to the undercloud as the stack user. List all Compute nodes and their UUIDs: USD source ~/stackrc (undercloud) USD metalsmith list | grep compute Identify the UUID of the Compute node that you want to reboot. From the overcloud, select a Compute node and disable it: USD source ~/overcloudrc (overcloud)USD openstack compute service list (overcloud)USD openstack compute service set <hostname> nova-compute --disable Replace <hostname> with the hostname of your Compute node. List all instances on the Compute node: (overcloud)USD openstack server list --host <hostname> --all-projects Optional: To migrate the instances to another Compute node, complete the following steps: If you decide to migrate the instances to another Compute node, use one of the following commands: To migrate the instance to a different host, run the following command: (overcloud) USD openstack server migrate <instance_id> --live <target_host> --wait Replace <instance_id> with your instance ID. Replace <target_host> with the host that you are migrating the instance to. 
Let nova-scheduler automatically select the target host: (overcloud) USD nova live-migration <instance_id> Live migrate all instances at once: USD nova host-evacuate-live <hostname> Note The nova command might cause some deprecation warnings, which are safe to ignore. Wait until migration completes. Confirm that the migration was successful: (overcloud) USD openstack server list --host <hostname> --all-projects Continue to migrate instances until none remain on the Compute node. Log in to the Compute node and reboot the node: [tripleo-admin@overcloud-compute-0 ~]USD sudo reboot Wait until the node boots. Re-enable the Compute node: USD source ~/overcloudrc (overcloud) USD openstack compute service set <hostname> nova-compute --enable Check that the Compute node is enabled: (overcloud) USD openstack compute service list
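As a follow-up to the NovaResumeGuestsStateOnHostBoot and NovaResumeGuestsShutdownTimeout parameters described in the prerequisites above, the following is a minimal sketch of an environment file that sets them. The file path and values are illustrative; include the file with the -e option the next time you run your overcloud deployment command:
cat > /home/stack/templates/reboot-guests.yaml <<'EOF'
parameter_defaults:
  NovaResumeGuestsStateOnHostBoot: true
  NovaResumeGuestsShutdownTimeout: 300
EOF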
[ "[tripleo-admin@overcloud-controller-0 ~]USD sudo pcs cluster stop", "[tripleo-admin@overcloud-controller-0 ~]USD sudo reboot", "[tripleo-admin@overcloud-controller-0 ~]USD sudo pcs status", "[tripleo-admin@overcloud-controller-0 ~]USD sudo systemctl status", "[tripleo-admin@overcloud-controller-0 ~]USD sudo podman ps", "sudo cephadm -- shell ceph status", "sudo cephadm shell -- ceph osd set noout sudo cephadm shell -- ceph osd set norebalance", "sudo reboot", "sudo cephadm -- shell ceph status", "sudo cephadm shell -- ceph osd unset noout sudo cephadm shell -- ceph osd unset norebalance", "sudo cephadm shell ceph status", "source ~/stackrc (undercloud) USD metalsmith list | grep compute", "source ~/overcloudrc (overcloud)USD openstack compute service list (overcloud)USD openstack compute service set <hostname> nova-compute --disable", "(overcloud)USD openstack server list --host <hostname> --all-projects", "(overcloud) USD openstack server migrate <instance_id> --live <target_host> --wait", "(overcloud) USD nova live-migration <instance_id>", "nova host-evacuate-live <hostname>", "(overcloud) USD openstack server list --host <hostname> --all-projects", "[tripleo-admin@overcloud-compute-0 ~]USD sudo reboot", "source ~/overcloudrc (overcloud) USD openstack compute service set <hostname> nova-compute --enable", "(overcloud) USD openstack compute service list" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/keeping_red_hat_openstack_platform_updated/assembly_rebooting-the-overcloud_keeping-updated
Chapter 1. Preparing to deploy OpenShift Data Foundation
Chapter 1. Preparing to deploy OpenShift Data Foundation When you deploy OpenShift Data Foundation on OpenShift Container Platform using the local storage devices on any platform, you can create internal cluster resources. This approach internally provisions base services so that all the applications can access additional storage classes. You can also deploy OpenShift Data Foundation to use an external Red Hat Ceph Storage cluster and IBM FlashSystem. For instructions, see Deploying OpenShift Data Foundation in external mode . External mode deployment works on clusters that are detected as non-cloud. If your cluster is not detected correctly, open up a bug in Bugzilla . Before you begin the deployment of Red Hat OpenShift Data Foundation using a local storage, ensure that you meet the resource requirements. See Requirements for installing OpenShift Data Foundation using local storage devices . After completing the preparatory steps, perform the following procedures: Install the Local Storage Operator . Install the Red Hat OpenShift Data Foundation Operator . Create the OpenShift Data Foundation cluster on any platform . 1.1. Requirements for installing OpenShift Data Foundation using local storage devices Node requirements The cluster must consist of at least three OpenShift Container Platform worker or infrastructure nodes with locally attached-storage devices on each of them. Each of the three selected nodes must have at least one raw block device available. OpenShift Data Foundation uses the one or more available raw block devices. The devices you use must be empty, the disks must not include Physical Volumes (PVs), Volume Groups (VGs), or Logical Volumes (LVs) remaining on the disk. For more information, see the Resource requirements section in the Planning guide . Disaster recovery requirements Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites to successfully implement a disaster recovery solution: A valid Red Hat OpenShift Data Foundation Advanced subscription. A valid Red Hat Advanced Cluster Management (RHACM) for Kubernetes subscription. To know in detail how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . For detailed disaster recovery solution requirements, see Configuring OpenShift Data Foundation Disaster Recovery for OpenShift Workloads guide, and Requirements and recommendations section of the Install guide in Red Hat Advanced Cluster Management for Kubernetes documentation. Minimum starting node requirements An OpenShift Data Foundation cluster is deployed with a minimum configuration when the resource requirement for a standard deployment is not met. For more information, see the Resource requirements section in the Planning guide .
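One way to confirm that a local device is really empty before you use it for OpenShift Data Foundation is to inspect it from a debug shell on the node. The following is a sketch; the node and device names are illustrative:
oc debug node/worker-0 -- chroot /host lsblk -f /dev/sdb
oc debug node/worker-0 -- chroot /host wipefs /dev/sdb
An empty device shows no file system, LVM, or RAID signatures in either output. If old signatures remain, clear them before using the device.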
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_openshift_data_foundation_on_any_platform/preparing_to_deploy_openshift_data_foundation
Chapter 3. Authentication [config.openshift.io/v1]
Chapter 3. Authentication [config.openshift.io/v1] Description Authentication specifies cluster-wide settings for authentication (like OAuth and webhook token authenticators). The canonical name of an instance is cluster . Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration status object status holds observed values from the cluster. They may not be overridden. 3.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description oauthMetadata object oauthMetadata contains the discovery endpoint data for OAuth 2.0 Authorization Server Metadata for an external OAuth server. This discovery document can be viewed from its served location: oc get --raw '/.well-known/oauth-authorization-server' For further details, see the IETF Draft: https://tools.ietf.org/html/draft-ietf-oauth-discovery-04#section-2 If oauthMetadata.name is non-empty, this value has precedence over any metadata reference stored in status. The key "oauthMetadata" is used to locate the data. If specified and the config map or expected key is not found, no metadata is served. If the specified metadata is not valid, no metadata is served. The namespace for this config map is openshift-config. serviceAccountIssuer string serviceAccountIssuer is the identifier of the bound service account token issuer. The default is https://kubernetes.default.svc WARNING: Updating this field will not result in immediate invalidation of all bound tokens with the issuer value. Instead, the tokens issued by service account issuer will continue to be trusted for a time period chosen by the platform (currently set to 24h). This time period is subject to change over time. This allows internal components to transition to use new service account issuer without service distruption. type string type identifies the cluster managed, user facing authentication mode in use. Specifically, it manages the component that responds to login attempts. The default is IntegratedOAuth. webhookTokenAuthenticator object webhookTokenAuthenticator configures a remote token reviewer. These remote authentication webhooks can be used to verify bearer tokens via the tokenreviews.authentication.k8s.io REST API. This is required to honor bearer tokens that are provisioned by an external authentication service. Can only be set if "Type" is set to "None". webhookTokenAuthenticators array webhookTokenAuthenticators is DEPRECATED, setting it has no effect. 
webhookTokenAuthenticators[] object deprecatedWebhookTokenAuthenticator holds the necessary configuration options for a remote token authenticator. It's the same as WebhookTokenAuthenticator but it's missing the 'required' validation on KubeConfig field. 3.1.2. .spec.oauthMetadata Description oauthMetadata contains the discovery endpoint data for OAuth 2.0 Authorization Server Metadata for an external OAuth server. This discovery document can be viewed from its served location: oc get --raw '/.well-known/oauth-authorization-server' For further details, see the IETF Draft: https://tools.ietf.org/html/draft-ietf-oauth-discovery-04#section-2 If oauthMetadata.name is non-empty, this value has precedence over any metadata reference stored in status. The key "oauthMetadata" is used to locate the data. If specified and the config map or expected key is not found, no metadata is served. If the specified metadata is not valid, no metadata is served. The namespace for this config map is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 3.1.3. .spec.webhookTokenAuthenticator Description webhookTokenAuthenticator configures a remote token reviewer. These remote authentication webhooks can be used to verify bearer tokens via the tokenreviews.authentication.k8s.io REST API. This is required to honor bearer tokens that are provisioned by an external authentication service. Can only be set if "Type" is set to "None". Type object Required kubeConfig Property Type Description kubeConfig object kubeConfig references a secret that contains kube config file data which describes how to access the remote webhook service. The namespace for the referenced secret is openshift-config. For further details, see: https://kubernetes.io/docs/reference/access-authn-authz/authentication/#webhook-token-authentication The key "kubeConfig" is used to locate the data. If the secret or expected key is not found, the webhook is not honored. If the specified kube config data is not valid, the webhook is not honored. 3.1.4. .spec.webhookTokenAuthenticator.kubeConfig Description kubeConfig references a secret that contains kube config file data which describes how to access the remote webhook service. The namespace for the referenced secret is openshift-config. For further details, see: https://kubernetes.io/docs/reference/access-authn-authz/authentication/#webhook-token-authentication The key "kubeConfig" is used to locate the data. If the secret or expected key is not found, the webhook is not honored. If the specified kube config data is not valid, the webhook is not honored. Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 3.1.5. .spec.webhookTokenAuthenticators Description webhookTokenAuthenticators is DEPRECATED, setting it has no effect. Type array 3.1.6. .spec.webhookTokenAuthenticators[] Description deprecatedWebhookTokenAuthenticator holds the necessary configuration options for a remote token authenticator. It's the same as WebhookTokenAuthenticator but it's missing the 'required' validation on KubeConfig field. Type object Property Type Description kubeConfig object kubeConfig contains kube config file data which describes how to access the remote webhook service. For further details, see: https://kubernetes.io/docs/reference/access-authn-authz/authentication/#webhook-token-authentication The key "kubeConfig" is used to locate the data. 
If the secret or expected key is not found, the webhook is not honored. If the specified kube config data is not valid, the webhook is not honored. The namespace for this secret is determined by the point of use. 3.1.7. .spec.webhookTokenAuthenticators[].kubeConfig Description kubeConfig contains kube config file data which describes how to access the remote webhook service. For further details, see: https://kubernetes.io/docs/reference/access-authn-authz/authentication/#webhook-token-authentication The key "kubeConfig" is used to locate the data. If the secret or expected key is not found, the webhook is not honored. If the specified kube config data is not valid, the webhook is not honored. The namespace for this secret is determined by the point of use. Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 3.1.8. .status Description status holds observed values from the cluster. They may not be overridden. Type object Property Type Description integratedOAuthMetadata object integratedOAuthMetadata contains the discovery endpoint data for OAuth 2.0 Authorization Server Metadata for the in-cluster integrated OAuth server. This discovery document can be viewed from its served location: oc get --raw '/.well-known/oauth-authorization-server' For further details, see the IETF Draft: https://tools.ietf.org/html/draft-ietf-oauth-discovery-04#section-2 This contains the observed value based on cluster state. An explicitly set value in spec.oauthMetadata has precedence over this field. This field has no meaning if authentication spec.type is not set to IntegratedOAuth. The key "oauthMetadata" is used to locate the data. If the config map or expected key is not found, no metadata is served. If the specified metadata is not valid, no metadata is served. The namespace for this config map is openshift-config-managed. 3.1.9. .status.integratedOAuthMetadata Description integratedOAuthMetadata contains the discovery endpoint data for OAuth 2.0 Authorization Server Metadata for the in-cluster integrated OAuth server. This discovery document can be viewed from its served location: oc get --raw '/.well-known/oauth-authorization-server' For further details, see the IETF Draft: https://tools.ietf.org/html/draft-ietf-oauth-discovery-04#section-2 This contains the observed value based on cluster state. An explicitly set value in spec.oauthMetadata has precedence over this field. This field has no meaning if authentication spec.type is not set to IntegratedOAuth. The key "oauthMetadata" is used to locate the data. If the config map or expected key is not found, no metadata is served. If the specified metadata is not valid, no metadata is served. The namespace for this config map is openshift-config-managed. Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 3.2. 
API endpoints The following API endpoints are available: /apis/config.openshift.io/v1/authentications DELETE : delete collection of Authentication GET : list objects of kind Authentication POST : create an Authentication /apis/config.openshift.io/v1/authentications/{name} DELETE : delete an Authentication GET : read the specified Authentication PATCH : partially update the specified Authentication PUT : replace the specified Authentication /apis/config.openshift.io/v1/authentications/{name}/status GET : read status of the specified Authentication PATCH : partially update status of the specified Authentication PUT : replace status of the specified Authentication 3.2.1. /apis/config.openshift.io/v1/authentications HTTP method DELETE Description delete collection of Authentication Table 3.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Authentication Table 3.2. HTTP responses HTTP code Reponse body 200 - OK AuthenticationList schema 401 - Unauthorized Empty HTTP method POST Description create an Authentication Table 3.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.4. Body parameters Parameter Type Description body Authentication schema Table 3.5. HTTP responses HTTP code Reponse body 200 - OK Authentication schema 201 - Created Authentication schema 202 - Accepted Authentication schema 401 - Unauthorized Empty 3.2.2. /apis/config.openshift.io/v1/authentications/{name} Table 3.6. Global path parameters Parameter Type Description name string name of the Authentication HTTP method DELETE Description delete an Authentication Table 3.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 3.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Authentication Table 3.9. 
HTTP responses HTTP code Reponse body 200 - OK Authentication schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Authentication Table 3.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.11. HTTP responses HTTP code Reponse body 200 - OK Authentication schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Authentication Table 3.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.13. Body parameters Parameter Type Description body Authentication schema Table 3.14. HTTP responses HTTP code Reponse body 200 - OK Authentication schema 201 - Created Authentication schema 401 - Unauthorized Empty 3.2.3. /apis/config.openshift.io/v1/authentications/{name}/status Table 3.15. Global path parameters Parameter Type Description name string name of the Authentication HTTP method GET Description read status of the specified Authentication Table 3.16. 
HTTP responses HTTP code Reponse body 200 - OK Authentication schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Authentication Table 3.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.18. HTTP responses HTTP code Reponse body 200 - OK Authentication schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Authentication Table 3.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.20. Body parameters Parameter Type Description body Authentication schema Table 3.21. HTTP responses HTTP code Reponse body 200 - OK Authentication schema 201 - Created Authentication schema 401 - Unauthorized Empty
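As a minimal sketch of how the spec fields described above fit together, assuming a webhook kubeconfig file at ./webhook-kubeconfig and a secret named webhook-token-authenticator (both names are illustrative, not taken from this reference): the secret is created in the openshift-config namespace under the "kubeConfig" key and then referenced from the cluster Authentication resource. The constraint on spec.type described above for webhookTokenAuthenticator still applies.
# Secret name and kubeconfig path are illustrative placeholders.
oc create secret generic webhook-token-authenticator \
    --from-file=kubeConfig=./webhook-kubeconfig \
    -n openshift-config
oc patch authentication.config.openshift.io cluster --type=merge \
    -p '{"spec":{"webhookTokenAuthenticator":{"kubeConfig":{"name":"webhook-token-authenticator"}}}}'
# The served OAuth discovery document can then be checked as described above:
oc get --raw '/.well-known/oauth-authorization-server'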
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/config_apis/authentication-config-openshift-io-v1
Chapter 4. Adding security notices
Chapter 4. Adding security notices With Red Hat Advanced Cluster Security for Kubernetes, you can add security notices that users see when they log in. You can also set up an organization-wide message or disclaimers on the top or bottom of the RHACS portal. This message can serve as a reminder of corporate policies and notify employees of the appropriate policies. Alternatively, you might want to display these messages for legal reasons, for example, to warn users that their actions are audited. 4.1. Adding a custom login message The display of a warning message before login warns malicious or uninformed users about the consequences of their actions. Prerequisites You must have the Administration role with read permission to view the login message configuration options. You must have the Administration role with write permission to modify, enable or disable the login message. Procedure In the RHACS portal, go to Platform Configuration System Configuration . On the System Configuration view header, click Edit . Enter your login message in the Login Configuration section. To enable the login message, turn on the toggle in the Login Configuration section. Click Save . 4.2. Adding a custom header and footer You can place custom text in a header and footer and configure the text and its background color. Prerequisites You must have the Administration role with read permission to view the custom header and footer configuration options. You must have the Administration role with write permission to modify, enable or disable the custom header and footer. Procedure In the RHACS portal, go to Platform Configuration System Configuration . On the System Configuration view header, click Edit . Under the Header Configuration and Footer Configuration sections, enter the header and footer text. Customize the header and footer Text Color , Size , and Background Color . To enable the header, turn on the toggle in the Header Configuration section. To enable the footer, turn on the toggle in the Footer Configuration section. Click Save .
null
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/configuring/add-security-notices
Chapter 5. Fencing: Configuring STONITH
Chapter 5. Fencing: Configuring STONITH STONITH is an acronym for "Shoot The Other Node In The Head" and it protects your data from being corrupted by rogue nodes or concurrent access. Just because a node is unresponsive does not mean it has stopped accessing your data. The only way to be 100% sure that your data is safe is to fence the node using STONITH so that the node is known to be truly offline before the data is accessed from another node. STONITH also has a role to play in the event that a clustered service cannot be stopped. In this case, the cluster uses STONITH to force the whole node offline, thereby making it safe to start the service elsewhere. For more complete general information on fencing and its importance in a Red Hat High Availability cluster, see Fencing in a Red Hat High Availability Cluster . 5.1. Available STONITH (Fencing) Agents Use the following command to view a list of all available STONITH agents. If you specify a filter, this command displays only the STONITH agents that match the filter.
[ "pcs stonith list [ filter ]" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/ch-fencing-haar
Preface
Preface Important Deploying OpenShift Data Foundation using Red Hat OpenShift Service on AWS with hosted control planes is a technology preview feature. Technology Preview features provide early access to upcoming product innovations, enabling you to test functionality and provide feedback during the development process. However, these features are not fully supported under Red Hat Service Level Agreements, may not be functionally complete, and are not intended for production use. As Red Hat considers making future iterations of Technology Preview features generally available, we will attempt to resolve any issues that customers experience when using these features. See Technology Preview Features Support Scope for more information.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_openshift_data_foundation_using_red_hat_openshift_service_on_aws_with_hosted_control_planes/pr01
7.35. curl
7.35. curl 7.35.1. RHSA-2015:1254 - Moderate: curl security, bug fix, and enhancement update Updated curl packages that fix multiple security issues, several bugs, and add two enhancements are now available for Red Hat Enterprise Linux 6. Red Hat Product Security has rated this update as having Moderate security impact. Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE links in the References section. The curl packages provide the libcurl library and the curl utility for downloading files from servers using various protocols, including HTTP, FTP, and LDAP. Security Fixes CVE-2014-3613 It was found that the libcurl library did not correctly handle partial literal IP addresses when parsing received HTTP cookies. An attacker able to trick a user into connecting to a malicious server could use this flaw to set the user's cookie to a crafted domain, making other cookie-related issues easier to exploit. CVE-2014-3707 A flaw was found in the way the libcurl library performed the duplication of connection handles. If an application set the CURLOPT_COPYPOSTFIELDS option for a handle, using the handle's duplicate could cause the application to crash or disclose a portion of its memory. CVE-2014-8150 It was discovered that the libcurl library failed to properly handle URLs with embedded end-of-line characters. An attacker able to make an application using libcurl to access a specially crafted URL via an HTTP proxy could use this flaw to inject additional headers to the request or construct additional requests. CVE-2015-3143 , CVE-2015-3148 It was discovered that libcurl implemented aspects of the NTLM and Negotiate authentication incorrectly. If an application uses libcurl and the affected mechanisms in a specific way, certain requests to a previously NTLM-authenticated server could appear as sent by the wrong authenticated user. Additionally, the initial set of credentials for HTTP Negotiate-authenticated requests could be reused in subsequent requests, although a different set of credentials was specified. Red Hat would like to thank the cURL project for reporting these issues. Bug Fixes BZ# 1154059 An out-of-protocol fallback to SSL version 3.0 (SSLv3.0) was available with libcurl. Attackers could abuse the fallback to force downgrade of the SSL version. The fallback has been removed from libcurl. Users requiring this functionality can explicitly enable SSLv3.0 through the libcurl API. BZ# 883002 A single upload transfer through the FILE protocol opened the destination file twice. If the inotify kernel subsystem monitored the file, two events were produced unnecessarily. The file is now opened only once per upload. BZ# 1008178 Utilities using libcurl for SCP/SFTP transfers could terminate unexpectedly when the system was running in FIPS mode. BZ# 1009455 Using the "--retry" option with the curl utility could cause curl to terminate unexpectedly with a segmentation fault. Now, adding "--retry" no longer causes curl to crash. BZ# 1120196 The "curl --trace-time" command did not use the correct local time when printing timestamps. Now, "curl --trace-time" works as expected. BZ# 1146528 The valgrind utility could report dynamically allocated memory leaks on curl exit. Now, curl performs a global shutdown of the NetScape Portable Runtime (NSPR) library on exit, and valgrind no longer reports the memory leaks.
BZ# 1161163 Previously, libcurl returned an incorrect value of the CURLINFO_HEADER_SIZE field when a proxy server appended its own headers to the HTTP response. Now, the returned value is valid. Enhancements BZ# 1012136 The "--tlsv1.0", "--tlsv1.1", and "--tlsv1.2" options are available for specifying the minor version of the TLS protocol to be negotiated by NSS. The "--tlsv1" option now negotiates the highest version of the TLS protocol supported by both the client and the server. BZ# 1058767 , BZ# 1156422 It is now possible to explicitly enable or disable the ECC and the new AES cipher suites to be used for TLS. All curl users are advised to upgrade to these updated packages, which contain backported patches to correct these issues and add these enhancements.
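For illustration, the options named in this erratum can be exercised as follows; the host name is a placeholder:
# Negotiate at least TLS 1.2 (option added by BZ#1012136):
curl --tlsv1.2 https://www.example.com/
# Negotiate the highest TLS version supported by both the client and the server:
curl --tlsv1 https://www.example.com/
# Retry a transient failure a few times (the crash described in BZ#1009455 is fixed):
curl --retry 3 -O https://www.example.com/archive.tar.gz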
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-curl
Chapter 1. Red Hat OpenStack Platform identity providers
Chapter 1. Red Hat OpenStack Platform identity providers You can deploy one of two methods for a user to authenticate to an identity provider (IdP): You can connect the Red Hat OpenStack Platform (RHOSP) Identity service (keystone) to an IdP using LDAP (Lightweight Directory Access Protocol) You can use federation, in which the IdP sends an assertion to the Identity service granting the user access to the cloud. While LDAP is used as a central authority for identity management and authentication, federation can be used to build single sign-on solutions. For information on connecting RHOSP services to an LDAP directory service by using either Active Directory (AD) or Red Hat Identity Manager (IdM), see the following resources: Integrating OpenStack Identity (keystone) with Active Directory Integrating OpenStack Identity (keystone) with Red Hat Identity Manager (IdM) For information on connecting RHOSP to IdM using Red Hat Single Sign-On for a federated solution, see the following resources: Federation using Red Hat OpenStack Platform and Red Hat Single Sign-On Federation using Red Hat OpenStack Platform and Active Directory Federation Services
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/integrating_openstack_identity_with_external_user_management_services/assembly_identity-providers
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_amq_streams_on_openshift/making-open-source-more-inclusive
Chapter 32. event
Chapter 32. event This chapter describes the commands under the event command. 32.1. event trigger create Create new trigger. Usage: Table 32.1. Positional arguments Value Summary name Event trigger name workflow_id Workflow id exchange Event trigger exchange topic Event trigger topic event Event trigger event name workflow_input Workflow input Table 32.2. Command arguments Value Summary -h, --help Show this help message and exit --params PARAMS Workflow params Table 32.3. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 32.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 32.5. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 32.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 32.2. event trigger delete Delete trigger. Usage: Table 32.7. Positional arguments Value Summary event_trigger_id Id of event trigger(s). Table 32.8. Command arguments Value Summary -h, --help Show this help message and exit 32.3. event trigger list List all event triggers. Usage: Table 32.9. Command arguments Value Summary -h, --help Show this help message and exit --marker [MARKER] The last execution uuid of the page, displays list of executions after "marker". --limit [LIMIT] Maximum number of entries to return in a single result. --sort_keys [SORT_KEYS] Comma-separated list of sort keys to sort results by. Default: created_at. Example: mistral execution-list --sort_keys=id,description --sort_dirs [SORT_DIRS] Comma-separated list of sort directions. default: asc. Example: mistral execution-list --sort_keys=id,description --sort_dirs=asc,desc --filter FILTERS Filters. can be repeated. Table 32.10. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 32.11. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 32.12. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 32.13. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 32.4. event trigger show Show specific event trigger. Usage: Table 32.14. 
Positional arguments Value Summary event_trigger Event trigger id Table 32.15. Command arguments Value Summary -h, --help Show this help message and exit Table 32.16. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 32.17. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 32.18. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 32.19. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
[ "openstack event trigger create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--params PARAMS] name workflow_id exchange topic event [workflow_input]", "openstack event trigger delete [-h] event_trigger_id [event_trigger_id ...]", "openstack event trigger list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--marker [MARKER]] [--limit [LIMIT]] [--sort_keys [SORT_KEYS]] [--sort_dirs [SORT_DIRS]] [--filter FILTERS]", "openstack event trigger show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] event_trigger" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/command_line_interface_reference/event
Chapter 1. Planning a GFS2 file system deployment
Chapter 1. Planning a GFS2 file system deployment The Red Hat Global File System 2 (GFS2) file system is a 64-bit symmetric cluster file system which provides a shared name space and manages coherency between multiple nodes sharing a common block device. A GFS2 file system is intended to provide a feature set which is as close as possible to a local file system, while at the same time enforcing full cluster coherency between nodes. To achieve this, the nodes employ a cluster-wide locking scheme for file system resources. This locking scheme uses communication protocols such as TCP/IP to exchange locking information. In a few cases, the Linux file system API does not allow the clustered nature of GFS2 to be totally transparent; for example, programs using POSIX locks in GFS2 should avoid using the GETLK function since, in a clustered environment, the process ID may be for a different node in the cluster. In most cases however, the functionality of a GFS2 file system is identical to that of a local file system. The Red Hat Enterprise Linux (RHEL) Resilient Storage Add-On provides GFS2, and it depends on the RHEL High Availability Add-On to provide the cluster management required by GFS2. The gfs2.ko kernel module implements the GFS2 file system and is loaded on GFS2 cluster nodes. To get the best performance from GFS2, it is important to take into account the performance considerations which stem from the underlying design. Just like a local file system, GFS2 relies on the page cache in order to improve performance by local caching of frequently used data. In order to maintain coherency across the nodes in the cluster, cache control is provided by the glock state machine. Important Make sure that your deployment of the Red Hat High Availability Add-On meets your needs and can be supported. Consult with an authorized Red Hat representative to verify your configuration prior to deployment. 1.1. Key GFS2 parameters to determine There are a number of key GFS2 parameters you should plan for before you install and configure a GFS2 file system. GFS2 nodes Determine which nodes in the cluster will mount the GFS2 file systems. Number of file systems Determine how many GFS2 file systems to create initially. More file systems can be added later. File system name Each GFS2 file system should have a unique name. This name is usually the same as the LVM logical volume name and is used as the DLM lock table name when a GFS2 file system is mounted. For example, this guide uses file system names mydata1 and mydata2 in some example procedures. Journals Determine the number of journals for your GFS2 file systems. GFS2 requires one journal for each node in the cluster that needs to mount the file system. For example, if you have a 16-node cluster but need to mount only the file system from two nodes, you need only two journals. GFS2 allows you to add journals dynamically at a later point with the gfs2_jadd utility as additional servers mount a file system. Storage devices and partitions Determine the storage devices and partitions to be used for creating logical volumes (using lvmlockd ) in the file systems. Time protocol Make sure that the clocks on the GFS2 nodes are synchronized. It is recommended that you use the Precision Time Protocol (PTP) or, if necessary for your configuration, the Network Time Protocol (NTP) software provided with your Red Hat Enterprise Linux distribution. The system clocks in GFS2 nodes must be within a few minutes of each other to prevent unnecessary inode time stamp updating. 
Unnecessary inode time stamp updating severely impacts cluster performance. Note You may see performance problems with GFS2 when many create and delete operations are issued from more than one node in the same directory at the same time. If this causes performance problems in your system, you should localize file creation and deletions by a node to directories specific to that node as much as possible. 1.2. GFS2 support considerations To be eligible for support from Red Hat for a cluster running a GFS2 file system, you must take into account the support policies for GFS2 file systems. Note For full information about Red Hat's support policies, requirements, and limitations for RHEL High Availability clusters, see Support Policies for RHEL High Availability Clusters . 1.2.1. Maximum file system and cluster size The following table summarizes the current maximum file system size and number of nodes that GFS2 supports. Table 1.1. GFS2 Support Limits Parameter Maximum Number of nodes 16 (x86, Power8 on PowerVM) 4 (s390x under z/VM) File system size 100TB on all supported architectures GFS2 is based on a 64-bit architecture, which can theoretically accommodate an 8 EB file system. If your system requires larger GFS2 file systems than are currently supported, contact your Red Hat service representative. When determining the size of your file system, you should consider your recovery needs. Running the fsck.gfs2 command on a very large file system can take a long time and consume a large amount of memory. Additionally, in the event of a disk or disk subsystem failure, recovery time is limited by the speed of your backup media. For information about the amount of memory the fsck.gfs2 command requires, see Determining required memory for running fsck.gfs2 . 1.2.2. Minimum cluster size Although a GFS2 file system can be implemented in a standalone system or as part of a cluster configuration, Red Hat does not support the use of GFS2 as a single-node file system, with the following exceptions: Red Hat supports single-node GFS2 file systems for mounting snapshots of cluster file systems as might be needed, for example, for backup purposes. A single-node cluster mounting GFS2 file systems (which uses DLM) is supported for the purposes of a secondary-site Disaster Recovery (DR) node. This exception is for DR purposes only and not for transferring the main cluster workload to the secondary site. For example, copying off the data from the filesystem mounted on the secondary site while the primary site is offline is supported. However, migrating a workload from the primary site directly to a single-node cluster secondary site is unsupported. If the full work load needs to be migrated to the single-node secondary site then the secondary site must be the same size as the primary site. Red Hat recommends that when you mount a GFS2 file system in a single-node cluster you specify the errors=panic mount option so that the single-node cluster will panic when a GFS2 withdraw occurs since the single-node cluster will not be able to fence itself when encountering file system errors. Red Hat supports a number of high-performance single-node file systems that are optimized for single node and thus have generally lower overhead than a cluster file system. Red Hat recommends using these file systems in preference to GFS2 in cases where only a single node needs to mount the file system. For information about the file systems that Red Hat Enterprise Linux 8 supports, see Managing file systems . 1.2.3. 
Shared storage considerations While a GFS2 file system may be used outside of LVM, Red Hat supports only GFS2 file systems that are created on a shared LVM logical volume. When you configure a GFS2 file system as a cluster file system, you must ensure that all nodes in the cluster have access to the shared storage. Asymmetric cluster configurations in which some nodes have access to the shared storage and others do not are not supported. This does not require that all nodes actually mount the GFS2 file system itself. 1.3. GFS2 formatting considerations To format your GFS2 file system to optimize performance, you should take these recommendations into account. Important Make sure that your deployment of the Red Hat High Availability Add-On meets your needs and can be supported. Consult with an authorized Red Hat representative to verify your configuration prior to deployment. File System Size: Smaller Is Better GFS2 is based on a 64-bit architecture, which can theoretically accommodate an 8 EB file system. However, the current supported maximum size of a GFS2 file system for 64-bit hardware is 100TB. Note that even though GFS2 large file systems are possible, that does not mean they are recommended. The rule of thumb with GFS2 is that smaller is better: it is better to have 10 1TB file systems than one 10TB file system. There are several reasons why you should keep your GFS2 file systems small: Less time is required to back up each file system. Less time is required if you need to check the file system with the fsck.gfs2 command. Less memory is required if you need to check the file system with the fsck.gfs2 command. In addition, fewer resource groups to maintain mean better performance. Of course, if you make your GFS2 file system too small, you might run out of space, and that has its own consequences. You should consider your own use cases before deciding on a size. Block Size: Default (4K) Blocks Are Preferred The mkfs.gfs2 command attempts to estimate an optimal block size based on device topology. In general, 4K blocks are the preferred block size because 4K is the default page size (memory) for Red Hat Enterprise Linux. Unlike some other file systems, GFS2 does most of its operations using 4K kernel buffers. If your block size is 4K, the kernel has to do less work to manipulate the buffers. It is recommended that you use the default block size, which should yield the highest performance. You may need to use a different block size only if you require efficient storage of many very small files. Journal Size: Default (128MB) Is Usually Optimal When you run the mkfs.gfs2 command to create a GFS2 file system, you may specify the size of the journals. If you do not specify a size, it will default to 128MB, which should be optimal for most applications. Some system administrators might think that 128MB is excessive and be tempted to reduce the size of the journal to the minimum of 8MB or a more conservative 32MB. While that might work, it can severely impact performance. Like many journaling file systems, every time GFS2 writes metadata, the metadata is committed to the journal before it is put into place. This ensures that if the system crashes or loses power, you will recover all of the metadata when the journal is automatically replayed at mount time. However, it does not take much file system activity to fill an 8MB journal, and when the journal is full, performance slows because GFS2 has to wait for writes to the storage. 
It is generally recommended to use the default journal size of 128MB. If your file system is very small (for example, 5GB), having a 128MB journal might be impractical. If you have a larger file system and can afford the space, using 256MB journals might improve performance. Size and Number of Resource Groups When a GFS2 file system is created with the mkfs.gfs2 command, it divides the storage into uniform slices known as resource groups. It attempts to estimate an optimal resource group size (ranging from 32MB to 2GB). You can override the default with the -r option of the mkfs.gfs2 command. Your optimal resource group size depends on how you will use the file system. Consider how full it will be and whether or not it will be severely fragmented. You should experiment with different resource group sizes to see which results in optimal performance. It is a best practice to experiment with a test cluster before deploying GFS2 into full production. If your file system has too many resource groups, each of which is too small, block allocations can waste too much time searching tens of thousands of resource groups for a free block. The more full your file system, the more resource groups that will be searched, and every one of them requires a cluster-wide lock. This leads to slow performance. If, however, your file system has too few resource groups, each of which is too big, block allocations might contend more often for the same resource group lock, which also impacts performance. For example, if you have a 10GB file system that is carved up into five resource groups of 2GB, the nodes in your cluster will fight over those five resource groups more often than if the same file system were carved into 320 resource groups of 32MB. The problem is exacerbated if your file system is nearly full because every block allocation might have to look through several resource groups before it finds one with a free block. GFS2 tries to mitigate this problem in two ways: First, when a resource group is completely full, it remembers that and tries to avoid checking it for future allocations until a block is freed from it. If you never delete files, contention will be less severe. However, if your application is constantly deleting blocks and allocating new blocks on a file system that is mostly full, contention will be very high and this will severely impact performance. Second, when new blocks are added to an existing file (for example, by appending) GFS2 will attempt to group the new blocks together in the same resource group as the file. This is done to increase performance: on a spinning disk, seek operations take less time when they are physically close together. The worst case scenario is when there is a central directory in which all the nodes create files because all of the nodes will constantly fight to lock the same resource group. 1.4. Considerations for GFS2 in a cluster When determining the number of nodes that your system will contain, note that there is a trade-off between high availability and performance. With a larger number of nodes, it becomes increasingly difficult to make workloads scale. For that reason, Red Hat does not support using GFS2 for cluster file system deployments greater than 16 nodes. Deploying a cluster file system is not a "drop in" replacement for a single node deployment. Red Hat recommends that you allow a period of around 8-12 weeks of testing on new installations in order to test the system and ensure that it is working at the required performance level. 
During this period, any performance or functional issues can be worked out and any queries should be directed to the Red Hat support team. Red Hat recommends that customers considering deploying clusters have their configurations reviewed by Red Hat support before deployment to avoid any possible support issues later on. 1.5. Hardware considerations Take the following hardware considerations into account when deploying a GFS2 file system. Use higher quality storage options GFS2 can operate on cheaper shared storage options, such as iSCSI or Fibre Channel over Ethernet (FCoE), but you will get better performance if you buy higher quality storage with larger caching capacity. Red Hat performs most quality, sanity, and performance tests on SAN storage with Fibre Channel interconnect. As a general rule, it is always better to deploy something that has been tested first. Test network equipment before deploying Higher quality, faster network equipment makes cluster communications and GFS2 run faster with better reliability. However, you do not have to purchase the most expensive hardware. Some of the most expensive network switches have problems passing multicast packets, which are used for passing fcntl locks (flocks), whereas cheaper commodity network switches are sometimes faster and more reliable. Red Hat recommends trying equipment before deploying it into full production.
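As a rough sketch of the formatting considerations above, the following commands keep the default 4K block size and 128MB journals, create one journal per mounting node, and show the -r override for resource group size; the cluster name, logical volume path, and mount point are placeholders rather than values taken from this guide:
# Format a shared logical volume for a two-node cluster with the defaults discussed above:
mkfs.gfs2 -p lock_dlm -t mycluster:mydata1 -j 2 /dev/mapper/shared_vg-shared_lv
# Alternative: override the resource group size (in MB) if testing shows the estimated default is not optimal:
mkfs.gfs2 -p lock_dlm -t mycluster:mydata1 -j 2 -r 512 /dev/mapper/shared_vg-shared_lv
# Add a journal later, on the mounted file system, when an additional node needs to mount it:
gfs2_jadd -j 1 /mnt/mydata1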
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_gfs2_file_systems/assembly_planning-gfs2-deployment-configuring-gfs2-file-systems
Satellite Overview, Concepts, and Deployment Considerations
Satellite Overview, Concepts, and Deployment Considerations Red Hat Satellite 6.11 Planning Satellite Deployment Red Hat Satellite Documentation Team [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/satellite_overview_concepts_and_deployment_considerations/index
Chapter 88. Plugin schema reference
Chapter 88. Plugin schema reference Used in: Build Property Description name The unique name of the connector plugin. Will be used to generate the path where the connector artifacts will be stored. The name has to be unique within the KafkaConnect resource. The name must match the following pattern: ^[a-z][-_a-z0-9]*[a-z]$ . Required. string artifacts List of artifacts which belong to this connector plugin. Required. JarArtifact , TgzArtifact , ZipArtifact , MavenArtifact , OtherArtifact array
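As a minimal sketch of how a Plugin entry is typically used inside the build section of a KafkaConnect resource: the resource name, bootstrap address, registry image, push secret, and artifact URL below are illustrative placeholders, and only a single jar artifact is shown.
cat <<'EOF' | oc apply -f -
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
spec:
  replicas: 1
  bootstrapServers: my-cluster-kafka-bootstrap:9092
  build:
    output:
      type: docker
      image: my-registry.example.com/my-org/my-connect-build:latest
      pushSecret: my-registry-credentials
    plugins:
      - name: my-source-connector        # must be unique and match ^[a-z][-_a-z0-9]*[a-z]$
        artifacts:
          - type: jar                    # a JarArtifact; tgz, zip, maven, and other types are also allowed
            url: https://artifacts.example.com/my-source-connector-1.0.0.jar
            # sha512sum: <checksum of the artifact>   (optional but recommended)
EOF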
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-plugin-reference
Web console
Web console OpenShift Container Platform 4.15 Getting started with the web console in OpenShift Container Platform Red Hat OpenShift Documentation Team
[ "INFO Install complete! INFO Run 'export KUBECONFIG=<your working directory>/auth/kubeconfig' to manage the cluster with 'oc', the OpenShift CLI. INFO The cluster is ready when 'oc login -u kubeadmin -p <provided>' succeeds (wait a few minutes). INFO Access the OpenShift web-console here: https://console-openshift-console.apps.demo1.openshift4-beta-abcorp.com INFO Login to the console with user: kubeadmin, password: <provided>", "oc edit console.config.openshift.io cluster", "apiVersion: config.openshift.io/v1 kind: Console metadata: name: cluster spec: authentication: logoutRedirect: \"\" 1 status: consoleURL: \"\" 2", "oc create configmap console-custom-logo --from-file /path/to/console-custom-logo.png -n openshift-config", "apiVersion: v1 kind: ConfigMap metadata: name: console-custom-logo namespace: openshift-config binaryData: console-custom-logo.png: <base64-encoded_logo> ... 1", "oc edit consoles.operator.openshift.io cluster", "apiVersion: operator.openshift.io/v1 kind: Console metadata: name: cluster spec: customization: customLogoFile: key: console-custom-logo.png name: console-custom-logo customProductName: My Console", "oc get clusteroperator console -o yaml", "oc get consoles.operator.openshift.io -o yaml", "apiVersion: console.openshift.io/v1 kind: ConsoleLink metadata: name: example spec: href: 'https://www.example.com' location: HelpMenu 1 text: Link 1", "apiVersion: console.openshift.io/v1 kind: ConsoleLink metadata: name: namespaced-dashboard-link-for-all-namespaces spec: href: 'https://www.example.com' location: NamespaceDashboard text: This appears in all namespaces", "apiVersion: console.openshift.io/v1 kind: ConsoleLink metadata: name: namespaced-dashboard-for-some-namespaces spec: href: 'https://www.example.com' location: NamespaceDashboard # This text will appear in a box called \"Launcher\" under \"namespace\" or \"project\" in the web console text: Custom Link Text namespaceDashboard: namespaces: # for these specific namespaces - my-namespace - your-namespace - other-namespace", "apiVersion: console.openshift.io/v1 kind: ConsoleLink metadata: name: application-menu-link-1 spec: href: 'https://www.example.com' location: ApplicationMenu text: Link 1 applicationMenu: section: My New Section # image that is 24x24 in size imageURL: https://via.placeholder.com/24", "oc edit ingress.config.openshift.io cluster", "apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: componentRoutes: - name: console namespace: openshift-console hostname: <custom_hostname> 1 servingCertKeyPairSecret: name: <secret_name> 2", "oc edit ingress.config.openshift.io cluster", "apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: componentRoutes: - name: downloads namespace: openshift-console hostname: <custom_hostname> 1 servingCertKeyPairSecret: name: <secret_name> 2", "oc adm create-login-template > login.html", "oc adm create-provider-selection-template > providers.html", "oc adm create-error-template > errors.html", "oc create secret generic login-template --from-file=login.html -n openshift-config", "oc create secret generic providers-template --from-file=providers.html -n openshift-config", "oc create secret generic error-template --from-file=errors.html -n openshift-config", "oc edit oauths cluster", "apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: templates: error: name: error-template login: name: login-template providerSelection: name: providers-template", "apiVersion: console.openshift.io/v1 kind: 
ConsoleExternalLogLink metadata: name: example spec: hrefTemplate: >- https://example.com/logs?resourceName=USD{resourceName}&containerName=USD{containerName}&resourceNamespace=USD{resourceNamespace}&podLabels=USD{podLabels} text: Example Logs", "apiVersion: console.openshift.io/v1 kind: ConsoleNotification metadata: name: example spec: text: This is an example notification message with an optional link. location: BannerTop 1 link: href: 'https://www.example.com' text: Optional link text color: '#fff' backgroundColor: '#0088ce'", "apiVersion: console.openshift.io/v1 kind: ConsoleCLIDownload metadata: name: example-cli-download-links spec: description: | This is an example of download links displayName: example links: - href: 'https://www.example.com/public/example.tar' text: example for linux - href: 'https://www.example.com/public/example.mac.zip' text: example for mac - href: 'https://www.example.com/public/example.win.zip' text: example for windows", "apiVersion: console.openshift.io/v1 kind: ConsoleYAMLSample metadata: name: example spec: targetResource: apiVersion: batch/v1 kind: Job title: Example Job description: An example Job YAML sample yaml: | apiVersion: batch/v1 kind: Job metadata: name: countdown spec: template: metadata: name: countdown spec: containers: - name: counter image: centos:7 command: - \"bin/bash\" - \"-c\" - \"for i in 9 8 7 6 5 4 3 2 1 ; do echo USDi ; done\" restartPolicy: Never", "apiVersion: operator.openshift.io/v1 kind: Console metadata: name: cluster spec: customization: perspectives: - id: admin visibility: state: Enabled - id: dev visibility: state: Enabled", "apiVersion: operator.openshift.io/v1 kind: Console metadata: name: cluster spec: customization: perspectives: - id: admin requiresAccessReview: - group: rbac.authorization.k8s.io resource: clusterroles verb: list - id: dev state: Enabled", "apiVersion: operator.openshift.io/v1 kind: Console metadata: name: cluster spec: customization: perspectives: - id: admin visibility: state: AccessReview accessReview: missing: - resource: deployment verb: list required: - resource: namespaces verb: list - id: dev visibility: state: Enabled", "apiVersion: operator.openshift.io/v1 kind: Console metadata: name: cluster spec: customization: developerCatalog: categories: types: state: Disabled disabled: - BuilderImage - Devfile - HelmChart", "apiVersion: operator.openshift.io/v1 kind: Console metadata: name: cluster spec: customization: developerCatalog: categories: types: state: Enabled", "apiVersion: operator.openshift.io/v1 kind: Console metadata: name: cluster spec: customization: developerCatalog: categories: types: state: Disabled", "apiVersion: operator.openshift.io/v1 kind: Console metadata: name: cluster spec: customization: developerCatalog: categories: types: state: Enabled enabled: - BuilderImage - Devfile - HelmChart -", "conster Header: React.FC = () => { const { t } = useTranslation('plugin__console-demo-plugin'); return <h1>{t('Hello, World!')}</h1>; };", "yarn install", "yarn run start", "oc login", "yarn run start-console", "docker build -t quay.io/my-repositroy/my-plugin:latest .", "docker run -it --rm -d -p 9001:80 quay.io/my-repository/my-plugin:latest", "docker push quay.io/my-repository/my-plugin:latest", "helm upgrade -i my-plugin charts/openshift-console-plugin -n my-plugin-namespace --create-namespace --set plugin.image=my-plugin-image-location", "plugin: name: \"\" description: \"\" image: \"\" imagePullPolicy: IfNotPresent replicas: 2 port: 9443 securityContext: enabled: true 
podSecurityContext: enabled: true runAsNonRoot: true seccompProfile: type: RuntimeDefault containerSecurityContext: enabled: true allowPrivilegeEscalation: false capabilities: drop: - ALL resources: requests: cpu: 10m memory: 50Mi basePath: / certificateSecretName: \"\" serviceAccount: create: true annotations: {} name: \"\" patcherServiceAccount: create: true annotations: {} name: \"\" jobs: patchConsoles: enabled: true image: \"registry.redhat.io/openshift4/ose-tools-rhel8@sha256:e44074f21e0cca6464e50cb6ff934747e0bd11162ea01d522433a1a1ae116103\" podSecurityContext: enabled: true runAsNonRoot: true seccompProfile: type: RuntimeDefault containerSecurityContext: enabled: true allowPrivilegeEscalation: false capabilities: drop: - ALL resources: requests: cpu: 10m memory: 50Mi", "apiVersion: console.openshift.io/v1 kind: ConsolePlugin metadata: name:<plugin-name> spec: proxy: - alias: helm-charts 1 authorization: UserToken 2 caCertificate: '-----BEGIN CERTIFICATE-----\\nMIID....'en 3 endpoint: 4 service: name: <service-name> namespace: <service-namespace> port: <service-port> type: Service", "\"consolePlugin\": { \"name\": \"my-plugin\", 1 \"version\": \"0.0.1\", 2 \"displayName\": \"My Plugin\", 3 \"description\": \"Enjoy this shiny, new console plugin!\", 4 \"exposedModules\": { \"ExamplePage\": \"./components/ExamplePage\" }, \"dependencies\": { \"@console/pluginAPI\": \"/*\" } }", "{ \"type\": \"console.tab/horizontalNav\", \"properties\": { \"page\": { \"name\": \"Example Tab\", \"href\": \"example\" }, \"model\": { \"group\": \"core\", \"version\": \"v1\", \"kind\": \"Pod\" }, \"component\": { \"USDcodeRef\": \"ExampleTab\" } } }", "\"exposedModules\": { \"ExamplePage\": \"./components/ExamplePage\", \"ExampleTab\": \"./components/ExampleTab\" }", "import * as React from 'react'; export default function ExampleTab() { return ( <p>This is a custom tab added to a resource using a dynamic plugin.</p> ); }", "helm upgrade -i my-plugin charts/openshift-console-plugin -n my-plugin-namespace --create-namespace --set plugin.image=my-plugin-image-location", "const Component: React.FC = (props) => { const [activePerspective, setActivePerspective] = useActivePerspective(); return <select value={activePerspective} onChange={(e) => setActivePerspective(e.target.value)} > { // ...perspective options } </select> }", "<GreenCheckCircleIcon title=\"Healthy\" />", "<RedExclamationCircleIcon title=\"Failed\" />", "<YellowExclamationTriangleIcon title=\"Warning\" />", "<BlueInfoCircleIcon title=\"Info\" />", "<ErrorStatus title={errorMsg} />", "<InfoStatus title={infoMsg} />", "<ProgressStatus title={progressMsg} />", "<SuccessStatus title={successMsg} />", "const [navItemExtensions, navItemsResolved] = useResolvedExtensions<NavItem>(isNavItem); // process adapted extensions and render your component", "const HomePage: React.FC = (props) => { const page = { href: '/home', name: 'Home', component: () => <>Home</> } return <HorizontalNav match={props.match} pages={[page]} /> }", "const MachineList: React.FC<MachineListProps> = (props) => { return ( <VirtualizedTable<MachineKind> {...props} aria-label='Machines' columns={getMachineColumns} Row={getMachineTableRow} /> ); }", "const PodRow: React.FC<RowProps<K8sResourceCommon>> = ({ obj, activeColumnIDs }) => { return ( <> <TableData id={columns[0].id} activeColumnIDs={activeColumnIDs}> <ResourceLink kind=\"Pod\" name={obj.metadata.name} namespace={obj.metadata.namespace} /> </TableData> <TableData id={columns[1].id} activeColumnIDs={activeColumnIDs}> 
<ResourceLink kind=\"Namespace\" name={obj.metadata.namespace} /> </TableData> </> ); };", "// See implementation for more details on TableColumn type const [activeColumns, userSettingsLoaded] = useActiveColumns({ columns, showNamespaceOverride: false, columnManagementID, }); return userSettingsAreLoaded ? <VirtualizedTable columns={activeColumns} {...otherProps} /> : null", "const exampleList: React.FC = () => { return ( <> <ListPageHeader title=\"Example List Page\"/> </> ); };", "const exampleList: React.FC<MyProps> = () => { return ( <> <ListPageHeader title=\"Example Pod List Page\"/> <ListPageCreate groupVersionKind=\"Pod\">Create Pod</ListPageCreate> </ListPageHeader> </> ); };", "const exampleList: React.FC<MyProps> = () => { return ( <> <ListPageHeader title=\"Example Pod List Page\"/> <ListPageCreateLink to={'/link/to/my/page'}>Create Item</ListPageCreateLink> </ListPageHeader> </> ); };", "const exampleList: React.FC<MyProps> = () => { return ( <> <ListPageHeader title=\"Example Pod List Page\"/> <ListPageCreateButton createAccessReview={access}>Create Pod</ListPageCreateButton> </ListPageHeader> </> ); };", "const exampleList: React.FC<MyProps> = () => { const items = { SAVE: 'Save', DELETE: 'Delete', } return ( <> <ListPageHeader title=\"Example Pod List Page\"/> <ListPageCreateDropdown createAccessReview={access} items={items}>Actions</ListPageCreateDropdown> </ListPageHeader> </> ); };", "// See implementation for more details on RowFilter and FilterValue types const [staticData, filteredData, onFilterChange] = useListPageFilter( data, rowFilters, staticFilters, ); // ListPageFilter updates filter state based on user interaction and resulting filtered data can be rendered in an independent component. return ( <> <ListPageHeader .../> <ListPagBody> <ListPageFilter data={staticData} onFilterChange={onFilterChange} /> <List data={filteredData} /> </ListPageBody> </> )", "// See implementation for more details on RowFilter and FilterValue types const [staticData, filteredData, onFilterChange] = useListPageFilter( data, rowFilters, staticFilters, ); // ListPageFilter updates filter state based on user interaction and resulting filtered data can be rendered in an independent component. 
return ( <> <ListPageHeader .../> <ListPagBody> <ListPageFilter data={staticData} onFilterChange={onFilterChange} /> <List data={filteredData} /> </ListPageBody> </> )", "<ResourceLink kind=\"Pod\" name=\"testPod\" title={metadata.uid} />", "<ResourceIcon kind=\"Pod\"/>", "const Component: React.FC = () => { const [model, inFlight] = useK8sModel({ group: 'app'; version: 'v1'; kind: 'Deployment' }); return }", "const Component: React.FC = () => { const [models, inFlight] = UseK8sModels(); return }", "const Component: React.FC = () => { const watchRes = { } const [data, loaded, error] = useK8sWatchResource(watchRes) return }", "const Component: React.FC = () => { const watchResources = { 'deployment': {...}, 'pod': {...} } const {deployment, pod} = useK8sWatchResources(watchResources) return }", "<StatusPopupSection firstColumn={ <> <span>{title}</span> <span className=\"text-secondary\"> My Example Item </span> </> } secondColumn='Status' >", "<StatusPopupSection firstColumn='Example' secondColumn='Status' > <StatusPopupItem icon={healthStateMapping[MCGMetrics.state]?.icon}> Complete </StatusPopupItem> <StatusPopupItem icon={healthStateMapping[RGWMetrics.state]?.icon}> Pending </StatusPopupItem> </StatusPopupSection>", "<Overview> <OverviewGrid mainCards={mainCards} leftCards={leftCards} rightCards={rightCards} /> </Overview>", "<Overview> <OverviewGrid mainCards={mainCards} leftCards={leftCards} rightCards={rightCards} /> </Overview>", "return ( <InventoryItem> <InventoryItemTitle>{title}</InventoryItemTitle> <InventoryItemBody error={loadError}> {loaded && <InventoryItemStatus count={workerNodes.length} icon={<MonitoringIcon />} />} </InventoryItemBody> </InventoryItem> )", "return ( <InventoryItem> <InventoryItemTitle>{title}</InventoryItemTitle> <InventoryItemBody error={loadError}> {loaded && <InventoryItemStatus count={workerNodes.length} icon={<MonitoringIcon />} />} </InventoryItemBody> </InventoryItem> )", "return ( <InventoryItem> <InventoryItemTitle>{title}</InventoryItemTitle> <InventoryItemBody error={loadError}> {loaded && <InventoryItemStatus count={workerNodes.length} icon={<MonitoringIcon />} />} </InventoryItemBody> </InventoryItem> )", "return ( <InventoryItem> <InventoryItemTitle>{title}</InventoryItemTitle> <InventoryItemBody error={loadError}> {loaded && <InventoryItemStatus count={workerNodes.length} icon={<MonitoringIcon />} />} </InventoryItemBody> </InventoryItem> )", "if (loadError) { title = <Link to={workerNodesLink}>{t('Worker Nodes')}</Link>; } else if (!loaded) { title = <><InventoryItemLoading /><Link to={workerNodesLink}>{t('Worker Nodes')}</Link></>; } return ( <InventoryItem> <InventoryItemTitle>{title}</InventoryItemTitle> </InventoryItem> )", "<React.Suspense fallback={<LoadingBox />}> <CodeEditor value={code} language=\"yaml\" /> </React.Suspense>", "<React.Suspense fallback={<LoadingBox />}> <ResourceYAMLEditor initialResource={resource} header=\"Create resource\" onSave={(content) => updateResource(content)} /> </React.Suspense>", "const [resource, loaded, loadError] = useK8sWatchResource(clusterResource); return <ResourceEventStream resource={resource} />", "const context: AppPage: React.FC = () => {<br/> const [launchModal] = useModal();<br/> const onClick = () => launchModal(ModalComponent);<br/> return (<br/> <Button onClick={onClick}>Launch a Modal</Button><br/> )<br/>}<br/>`", "const context: ActionContext = { 'a-context-id': { dataFromDynamicPlugin } }; <ActionServiceProvider context={context}> {({ actions, options, loaded }) => loaded && ( 
<ActionMenu actions={actions} options={options} variant={ActionMenuVariant.DROPDOWN} /> ) } </ActionServiceProvider>", "const logNamespaceChange = (namespace) => console.log(`New namespace: USD{namespace}`); <NamespaceBar onNamespaceChange={logNamespaceChange}> <NamespaceBarApplicationSelector /> </NamespaceBar> <Page>", "//in ErrorBoundary component return ( if (this.state.hasError) { return <ErrorBoundaryFallbackPage errorMessage={errorString} componentStack={componentStackString} stack={stackTraceString} title={errorString}/>; } return this.props.children; )", "<QueryBrowser defaultTimespan={15 * 60 * 1000} namespace={namespace} pollInterval={30 * 1000} queries={[ 'process_resident_memory_bytes{job=\"console\"}', 'sum(irate(container_network_receive_bytes_total[6h:5m])) by (pod)', ]} />", "const PodAnnotationsButton = ({ pod }) => { const { t } = useTranslation(); const launchAnnotationsModal = useAnnotationsModal<PodKind>(pod); return <button onClick={launchAnnotationsModal}>{t('Edit Pod Annotations')}</button> }", "const DeletePodButton = ({ pod }) => { const { t } = useTranslation(); const launchDeleteModal = useDeleteModal<PodKind>(pod); return <button onClick={launchDeleteModal}>{t('Delete Pod')}</button> }", "const PodLabelsButton = ({ pod }) => { const { t } = useTranslation(); const launchLabelsModal = useLabelsModal<PodKind>(pod); return <button onClick={launchLabelsModal}>{t('Edit Pod Labels')}</button> }", "const Component: React.FC = (props) => { const [activeNamespace, setActiveNamespace] = useActiveNamespace(); return <select value={activeNamespace} onChange={(e) => setActiveNamespace(e.target.value)} > { // ...namespace options } </select> }", "<React.Suspense fallback={<LoadingBox />}> <YAMLEditor value={code} /> </React.Suspense>", "oc get console.operator.openshift.io cluster -o jsonpath='{.spec.plugins}'", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-console spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-console podSelector: {} policyTypes: - Ingress", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-operators spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-operators podSelector: {} policyTypes: - Ingress", "oc delete devworkspaces.workspace.devfile.io --all-namespaces --all --wait", "oc delete devworkspaceroutings.controller.devfile.io --all-namespaces --all --wait", "oc delete customresourcedefinitions.apiextensions.k8s.io devworkspaceroutings.controller.devfile.io", "oc delete customresourcedefinitions.apiextensions.k8s.io devworkspaces.workspace.devfile.io", "oc delete customresourcedefinitions.apiextensions.k8s.io devworkspacetemplates.workspace.devfile.io", "oc delete customresourcedefinitions.apiextensions.k8s.io devworkspaceoperatorconfigs.controller.devfile.io", "oc get customresourcedefinitions.apiextensions.k8s.io | grep \"devfile.io\"", "oc delete deployment/devworkspace-webhook-server -n openshift-operators", "oc delete mutatingwebhookconfigurations controller.devfile.io", "oc delete validatingwebhookconfigurations controller.devfile.io", "oc delete all --selector app.kubernetes.io/part-of=devworkspace-operator,app.kubernetes.io/name=devworkspace-webhook-server -n openshift-operators", "oc delete serviceaccounts devworkspace-webhook-server -n openshift-operators", "oc delete clusterrole devworkspace-webhook-server", "oc delete clusterrolebinding 
devworkspace-webhook-server", "oc edit consoles.operator.openshift.io cluster", "apiVersion: operator.openshift.io/v1 kind: Console metadata: name: cluster spec: managementState: Removed 1", "oc get -o yaml consolequickstart spring-with-s2i > my-quick-start.yaml", "oc create -f my-quick-start.yaml", "oc explain consolequickstarts", "summary: failed: Try the steps again. success: Your Spring application is running. title: Run the Spring application conclusion: >- Your Spring application is deployed and ready. 1", "apiVersion: console.openshift.io/v1 kind: ConsoleQuickStart metadata: name: spring-with-s2i spec: description: 'Import a Spring Application from git, build, and deploy it onto OpenShift.' 1", "apiVersion: console.openshift.io/v1 kind: ConsoleQuickStart metadata: name: spring-with-s2i spec: description: 'Import a Spring Application from git, build, and deploy it onto OpenShift.' displayName: Get started with Spring 1 durationMinutes: 10", "apiVersion: console.openshift.io/v1 kind: ConsoleQuickStart metadata: name: spring-with-s2i spec: description: 'Import a Spring Application from git, build, and deploy it onto OpenShift.' displayName: Get started with Spring durationMinutes: 10 1", "spec: description: 'Import a Spring Application from git, build, and deploy it onto OpenShift.' displayName: Get started with Spring durationMinutes: 10 icon: >- 1 data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIGlkPSJMYXllcl8xIiBkYXRhLW5hbWU9IkxheWVyIDEiIHZpZXdCb3g9IjAgMCAxMDI0IDEwMjQiPjxkZWZzPjxzdHlsZT4uY2xzLTF7ZmlsbDojMTUzZDNjO30uY2xzLTJ7ZmlsbDojZDhkYTlkO30uY2xzLTN7ZmlsbDojNThjMGE4O30uY2xzLTR7ZmlsbDojZmZmO30uY2xzLTV7ZmlsbDojM2Q5MTkxO308L3N0eWxlPjwvZGVmcz48dGl0bGU+c25vd2Ryb3BfaWNvbl9yZ2JfZGVmYXVsdDwvdGl0bGU+PHBhdGggY2xhc3M9ImNscy0xIiBkPSJNMTAxMi42OSw1OTNjLTExLjEyLTM4LjA3LTMxLTczLTU5LjIxLTEwMy44LTkuNS0xMS4zLTIzLjIxLTI4LjI5LTM5LjA2LTQ3Ljk0QzgzMy41MywzNDEsNzQ1LjM3LDIzNC4xOCw2NzQsMTY4Ljk0Yy01LTUuMjYtMTAuMjYtMTAuMzEtMTUuNjUtMTUuMDdhMjQ2LjQ5LDI0Ni40OSwwLDAsMC0zNi41NS0yNi44LDE4Mi41LDE4Mi41LDAsMCwwLTIwLjMtMTEuNzcsMjAxLjUzLDIwMS41MywwLDAsMC00My4xOS0xNUExNTUuMjQsMTU1LjI0LDAsMCwwLDUyOCw5NS4yYy02Ljc2LS42OC0xMS43NC0uODEtMTQuMzktLjgxaDBsLTEuNjIsMC0xLjYyLDBhMTc3LjMsMTc3LjMsMCwwLDAtMzEuNzcsMy4zNSwyMDguMjMsMjA4LjIzLDAsMCwwLTU2LjEyLDE3LjU2LDE4MSwxODEsMCwwLDAtMjAuMjcsMTEuNzUsMjQ3LjQzLDI0Ny40MywwLDAsMC0zNi41NywyNi44MUMzNjAuMjUsMTU4LjYyLDM1NSwxNjMuNjgsMzUwLDE2OWMtNzEuMzUsNjUuMjUtMTU5LjUsMTcyLTI0MC4zOSwyNzIuMjhDOTMuNzMsNDYwLjg4LDgwLDQ3Ny44Nyw3MC41Miw0ODkuMTcsNDIuMzUsNTIwLDIyLjQzLDU1NC45LDExLjMxLDU5MywuNzIsNjI5LjIyLTEuNzMsNjY3LjY5LDQsNzA3LjMxLDE1LDc4Mi40OSw1NS43OCw4NTkuMTIsMTE4LjkzLDkyMy4wOWEyMiwyMiwwLDAsMCwxNS41OSw2LjUyaDEuODNsMS44Ny0uMzJjODEuMDYtMTMuOTEsMTEwLTc5LjU3LDE0My40OC0xNTUuNiwzLjkxLTguODgsNy45NS0xOC4wNSwxMi4yLTI3LjQzcTUuNDIsOC41NCwxMS4zOSwxNi4yM2MzMS44NSw0MC45MSw3NS4xMiw2NC42NywxMzIuMzIsNzIuNjNsMTguOCwyLjYyLDQuOTUtMTguMzNjMTMuMjYtNDkuMDcsMzUuMy05MC44NSw1MC42NC0xMTYuMTksMTUuMzQsMjUuMzQsMzcuMzgsNjcuMTIsNTAuNjQsMTE2LjE5bDUsMTguMzMsMTguOC0yLjYyYzU3LjItOCwxMDAuNDctMzEuNzIsMTMyLjMyLTcyLjYzcTYtNy42OCwxMS4zOS0xNi4yM2M0LjI1LDkuMzgsOC4yOSwxOC41NSwxMi4yLDI3LjQzLDMzLjQ5LDc2LDYyLjQyLDE0MS42OSwxNDMuNDgsMTU1LjZsMS44MS4zMWgxLjg5YTIyLDIyLDAsMCwwLDE1LjU5LTYuNTJjNjMuMTUtNjQsMTAzLjk1LTE0MC42LDExNC44OS0yMTUuNzhDMTAyNS43Myw2NjcuNjksMTAyMy4yOCw2MjkuMjIsMTAxMi42OSw1OTNaIi8+PHBhdGggY2xhc3M9ImNscy0yIiBkPSJNMzY0LjE1LDE4NS4yM2MxNy44OS0xNi40LDM0LjctMzAuMTUsNDkuNzctNDAuMTFhMjEyLDIxMiwwLDAsMSw2NS45My0yNS43M0ExOTgsMTk4LDAsMCwxLDUxMiwxMTYuMjdhMTk2LjExLDE5Ni4xMSwwLDAsMSwzMiwzLjFjNC41LjkxLDkuMzYsMi4wNiwxNC41MywzLjUyL
DYwLjQxLDIwLjQ4LDg0LjkyLDkxLjA1LTQ3LjQ0LDI0OC4wNi0yOC43NSwzNC4xMi0xNDAuNywxOTQuODQtMTg0LjY2LDI2OC40MmE2MzAuODYsNjMwLjg2LDAsMCwwLTMzLjIyLDU4LjMyQzI3Niw2NTUuMzQsMjY1LjQsNTk4LDI2NS40LDUyMC4yOSwyNjUuNCwzNDAuNjEsMzExLjY5LDI0MC43NCwzNjQuMTUsMTg1LjIzWiIvPjxwYXRoIGNsYXNzPSJjbHMtMyIgZD0iTTUyNy41NCwzODQuODNjODQuMDYtOTkuNywxMTYuMDYtMTc3LjI4LDk1LjIyLTIzMC43NCwxMS42Miw4LjY5LDI0LDE5LjIsMzcuMDYsMzEuMTMsNTIuNDgsNTUuNSw5OC43OCwxNTUuMzgsOTguNzgsMzM1LjA3LDAsNzcuNzEtMTAuNiwxMzUuMDUtMjcuNzcsMTc3LjRhNjI4LjczLDYyOC43MywwLDAsMC0zMy4yMy01OC4zMmMtMzktNjUuMjYtMTMxLjQ1LTE5OS0xNzEuOTMtMjUyLjI3QzUyNi4zMywzODYuMjksNTI3LDM4NS41Miw1MjcuNTQsMzg0LjgzWiIvPjxwYXRoIGNsYXNzPSJjbHMtNCIgZD0iTTEzNC41OCw5MDguMDdoLS4wNmEuMzkuMzksMCwwLDEtLjI3LS4xMWMtMTE5LjUyLTEyMS4wNy0xNTUtMjg3LjQtNDcuNTQtNDA0LjU4LDM0LjYzLTQxLjE0LDEyMC0xNTEuNiwyMDIuNzUtMjQyLjE5LTMuMTMsNy02LjEyLDE0LjI1LTguOTIsMjEuNjktMjQuMzQsNjQuNDUtMzYuNjcsMTQ0LjMyLTM2LjY3LDIzNy40MSwwLDU2LjUzLDUuNTgsMTA2LDE2LjU5LDE0Ny4xNEEzMDcuNDksMzA3LjQ5LDAsMCwwLDI4MC45MSw3MjNDMjM3LDgxNi44OCwyMTYuOTMsODkzLjkzLDEzNC41OCw5MDguMDdaIi8+PHBhdGggY2xhc3M9ImNscy01IiBkPSJNNTgzLjQzLDgxMy43OUM1NjAuMTgsNzI3LjcyLDUxMiw2NjQuMTUsNTEyLDY2NC4xNXMtNDguMTcsNjMuNTctNzEuNDMsMTQ5LjY0Yy00OC40NS02Ljc0LTEwMC45MS0yNy41Mi0xMzUuNjYtOTEuMThhNjQ1LjY4LDY0NS42OCwwLDAsMSwzOS41Ny03MS41NGwuMjEtLjMyLjE5LS4zM2MzOC02My42MywxMjYuNC0xOTEuMzcsMTY3LjEyLTI0NS42Niw0MC43MSw1NC4yOCwxMjkuMSwxODIsMTY3LjEyLDI0NS42NmwuMTkuMzMuMjEuMzJhNjQ1LjY4LDY0NS42OCwwLDAsMSwzOS41Nyw3MS41NEM2ODQuMzQsNzg2LjI3LDYzMS44OCw4MDcuMDUsNTgzLjQzLDgxMy43OVoiLz48cGF0aCBjbGFzcz0iY2xzLTQiIGQ9Ik04ODkuNzUsOTA4YS4zOS4zOSwwLDAsMS0uMjcuMTFoLS4wNkM4MDcuMDcsODkzLjkzLDc4Nyw4MTYuODgsNzQzLjA5LDcyM2EzMDcuNDksMzA3LjQ5LDAsMCwwLDIwLjQ1LTU1LjU0YzExLTQxLjExLDE2LjU5LTkwLjYxLDE2LjU5LTE0Ny4xNCwwLTkzLjA4LTEyLjMzLTE3My0zNi42Ni0yMzcuNHEtNC4yMi0xMS4xNi04LjkzLTIxLjdjODIuNzUsOTAuNTksMTY4LjEyLDIwMS4wNSwyMDIuNzUsMjQyLjE5QzEwNDQuNzksNjIwLjU2LDEwMDkuMjcsNzg2Ljg5LDg4OS43NSw5MDhaIi8+PC9zdmc+Cg==", "introduction: >- 1 **Spring** is a Java framework for building applications based on a distributed microservices architecture. - Spring enables easy packaging and configuration of Spring applications into a self-contained executable application which can be easily deployed as a container to OpenShift. - Spring applications can integrate OpenShift capabilities to provide a natural \"Spring on OpenShift\" developer experience for both existing and net-new Spring applications. 
For example: - Externalized configuration using Kubernetes ConfigMaps and integration with Spring Cloud Kubernetes - Service discovery using Kubernetes Services - Load balancing with Replication Controllers - Kubernetes health probes and integration with Spring Actuator - Metrics: Prometheus, Grafana, and integration with Spring Cloud Sleuth - Distributed tracing with Istio & Jaeger tracing - Developer tooling through Red Hat OpenShift and Red Hat CodeReady developer tooling to quickly scaffold new Spring projects, gain access to familiar Spring APIs in your favorite IDE, and deploy to Red Hat OpenShift", "icon: >- data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHJvbGU9ImltZyIgdmlld.", "accessReviewResources: - group: helm.openshift.io resource: helmchartrepositories verb: create", "accessReviewResources: - group: operators.coreos.com resource: operatorgroups verb: list - group: packages.operators.coreos.com resource: packagemanifests verb: list", "nextQuickStart: - add-healthchecks", "[Perspective switcher]{{highlight qs-perspective-switcher}}", "[Home]{{highlight qs-nav-home}} [Operators]{{highlight qs-nav-operators}} [Workloads]{{highlight qs-nav-workloads}} [Serverless]{{highlight qs-nav-serverless}} [Networking]{{highlight qs-nav-networking}} [Storage]{{highlight qs-nav-storage}} [Service catalog]{{highlight qs-nav-servicecatalog}} [Compute]{{highlight qs-nav-compute}} [User management]{{highlight qs-nav-usermanagement}} [Administration]{{highlight qs-nav-administration}}", "[Add]{{highlight qs-nav-add}} [Topology]{{highlight qs-nav-topology}} [Search]{{highlight qs-nav-search}} [Project]{{highlight qs-nav-project}} [Helm]{{highlight qs-nav-helm}}", "[Builds]{{highlight qs-nav-builds}} [Pipelines]{{highlight qs-nav-pipelines}} [Monitoring]{{highlight qs-nav-monitoring}}", "[CloudShell]{{highlight qs-masthead-cloudshell}} [Utility Menu]{{highlight qs-masthead-utilitymenu}} [User Menu]{{highlight qs-masthead-usermenu}} [Applications]{{highlight qs-masthead-applications}} [Import]{{highlight qs-masthead-import}} [Help]{{highlight qs-masthead-help}} [Notifications]{{highlight qs-masthead-notifications}}", "`code block`{{copy}} `code block`{{execute}}", "``` multi line code block ```{{copy}} ``` multi line code block ```{{execute}}", "Create a serverless application.", "In this quick start, you will deploy a sample application to {product-title}.", "This quick start shows you how to deploy a sample application to {product-title}.", "Tasks to complete: Create a serverless application; Connect an event source; Force a new revision", "You will complete these 3 tasks: Creating a serverless application; Connecting an event source; Forcing a new revision", "Click OK.", "Click on the OK button.", "Enter the Developer perspective: In the main navigation, click the dropdown menu and select Developer. Enter the Administrator perspective: In the main navigation, click the dropdown menu and select Admin.", "In the node.js deployment, hover over the icon.", "Hover over the icon in the node.js deployment.", "Change the time range of the dashboard by clicking the dropdown menu and selecting time range.", "To look at data in a specific time frame, you can change the time range of the dashboard.", "In the navigation menu, click Settings.", "In the left-hand menu, click Settings.", "The success message indicates a connection.", "The message with a green icon indicates a connection.", "Set up your environment.", "Let's set up our environment." ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html-single/web_console/index
Chapter 6. GenericKafkaListener schema reference
Chapter 6. GenericKafkaListener schema reference Used in: KafkaClusterSpec Full list of GenericKafkaListener schema properties Configures listeners to connect to Kafka brokers within and outside OpenShift. You configure the listeners in the Kafka resource. Example Kafka resource showing listener configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: #... listeners: - name: plain port: 9092 type: internal tls: false - name: tls port: 9093 type: internal tls: true authentication: type: tls - name: external1 port: 9094 type: route tls: true - name: external2 port: 9095 type: ingress tls: true authentication: type: tls configuration: bootstrap: host: bootstrap.myingress.com brokers: - broker: 0 host: broker-0.myingress.com - broker: 1 host: broker-1.myingress.com - broker: 2 host: broker-2.myingress.com #... 6.1. listeners You configure Kafka broker listeners using the listeners property in the Kafka resource. Listeners are defined as an array. Example listener configuration listeners: - name: plain port: 9092 type: internal tls: false The name and port must be unique within the Kafka cluster. By specifying a unique name and port for each listener, you can configure multiple listeners. The name can be up to 25 characters long, comprising lower-case letters and numbers. 6.2. port The port number is the port used in the Kafka cluster, which might not be the same port used for access by a client. loadbalancer listeners use the specified port number, as do internal and cluster-ip listeners ingress and route listeners use port 443 for access nodeport listeners use the port number assigned by OpenShift For client connection, use the address and port for the bootstrap service of the listener. You can retrieve this from the status of the Kafka resource. Example command to retrieve the address and port for client connection oc get kafka <kafka_cluster_name> -o=jsonpath='{.status.listeners[?(@.name==" <listener_name> ")].bootstrapServers}{"\n"}' Important When configuring listeners for client access to brokers, you can use port 9092 or higher (9093, 9094, and so on), but with a few exceptions. The listeners cannot be configured to use the ports reserved for interbroker communication (9090 and 9091), Prometheus metrics (9404), and JMX (Java Management Extensions) monitoring (9999). 6.3. type The type is set as internal , or for external listeners, as route , loadbalancer , nodeport , ingress or cluster-ip . You can also configure a cluster-ip listener, a type of internal listener you can use to build custom access mechanisms. internal You can configure internal listeners with or without encryption using the tls property. Example internal listener configuration #... spec: kafka: #... listeners: #... - name: plain port: 9092 type: internal tls: false - name: tls port: 9093 type: internal tls: true authentication: type: tls #... route Configures an external listener to expose Kafka using OpenShift Routes and the HAProxy router. A dedicated Route is created for every Kafka broker pod. An additional Route is created to serve as a Kafka bootstrap address. Kafka clients can use these Routes to connect to Kafka on port 443. The client connects on port 443, the default router port, but traffic is then routed to the port you configure, which is 9094 in this example. Example route listener configuration #... spec: kafka: #... listeners: #... - name: external1 port: 9094 type: route tls: true #... 
ingress Configures an external listener to expose Kafka using Kubernetes Ingress and the Ingress NGINX Controller for Kubernetes . A dedicated Ingress resource is created for every Kafka broker pod. An additional Ingress resource is created to serve as a Kafka bootstrap address. Kafka clients can use these Ingress resources to connect to Kafka on port 443. The client connects on port 443, the default controller port, but traffic is then routed to the port you configure, which is 9095 in the following example. You must specify the hostnames used by the bootstrap and per-broker services using GenericKafkaListenerConfigurationBootstrap and GenericKafkaListenerConfigurationBroker properties. Example ingress listener configuration #... spec: kafka: #... listeners: #... - name: external2 port: 9095 type: ingress tls: true authentication: type: tls configuration: bootstrap: host: bootstrap.myingress.com brokers: - broker: 0 host: broker-0.myingress.com - broker: 1 host: broker-1.myingress.com - broker: 2 host: broker-2.myingress.com #... Note External listeners using Ingress are currently only tested with the Ingress NGINX Controller for Kubernetes . loadbalancer Configures an external listener to expose Kafka using a Loadbalancer type Service . A new loadbalancer service is created for every Kafka broker pod. An additional loadbalancer is created to serve as a Kafka bootstrap address. Loadbalancers listen to the specified port number, which is port 9094 in the following example. You can use the loadBalancerSourceRanges property to configure source ranges to restrict access to the specified IP addresses. Example loadbalancer listener configuration #... spec: kafka: #... listeners: - name: external3 port: 9094 type: loadbalancer tls: true configuration: loadBalancerSourceRanges: - 10.0.0.0/8 - 88.208.76.87/32 #... nodeport Configures an external listener to expose Kafka using a NodePort type Service . Kafka clients connect directly to the nodes of OpenShift. An additional NodePort type of service is created to serve as a Kafka bootstrap address. When configuring the advertised addresses for the Kafka broker pods, Streams for Apache Kafka uses the address of the node on which the given pod is running. You can use preferredNodePortAddressType property to configure the first address type checked as the node address . Example nodeport listener configuration #... spec: kafka: #... listeners: #... - name: external4 port: 9095 type: nodeport tls: false configuration: preferredNodePortAddressType: InternalDNS #... Note TLS hostname verification is not currently supported when exposing Kafka clusters using node ports. cluster-ip Configures an internal listener to expose Kafka using a per-broker ClusterIP type Service . The listener does not use a headless service and its DNS names to route traffic to Kafka brokers. You can use this type of listener to expose a Kafka cluster when using the headless service is unsuitable. You might use it with a custom access mechanism, such as one that uses a specific Ingress controller or the OpenShift Gateway API. A new ClusterIP service is created for each Kafka broker pod. The service is assigned a ClusterIP address to serve as a Kafka bootstrap address with a per-broker port number. For example, you can configure the listener to expose a Kafka cluster over an Nginx Ingress Controller with TCP port configuration. Example cluster-ip listener configuration #... spec: kafka: #... listeners: - name: clusterip type: cluster-ip tls: false port: 9096 #... 6.4. 
tls The TLS property is required. To enable TLS encryption, set the tls property to true . For route and ingress type listeners, TLS encryption must be always enabled. 6.5. authentication Authentication for the listener can be specified as: mTLS ( tls ) SCRAM-SHA-512 ( scram-sha-512 ) Token-based OAuth 2.0 ( oauth ) Custom ( custom ) 6.6. networkPolicyPeers Use networkPolicyPeers to configure network policies that restrict access to a listener at the network level. The following example shows a networkPolicyPeers configuration for a plain and a tls listener. In the following example: Only application pods matching the labels app: kafka-sasl-consumer and app: kafka-sasl-producer can connect to the plain listener. The application pods must be running in the same namespace as the Kafka broker. Only application pods running in namespaces matching the labels project: myproject and project: myproject2 can connect to the tls listener. The syntax of the networkPolicyPeers property is the same as the from property in NetworkPolicy resources. Example network policy configuration listeners: #... - name: plain port: 9092 type: internal tls: true authentication: type: scram-sha-512 networkPolicyPeers: - podSelector: matchLabels: app: kafka-sasl-consumer - podSelector: matchLabels: app: kafka-sasl-producer - name: tls port: 9093 type: internal tls: true authentication: type: tls networkPolicyPeers: - namespaceSelector: matchLabels: project: myproject - namespaceSelector: matchLabels: project: myproject2 # ... 6.7. GenericKafkaListener schema properties Property Property type Description name string Name of the listener. The name will be used to identify the listener and the related OpenShift objects. The name has to be unique within given a Kafka cluster. The name can consist of lowercase characters and numbers and be up to 11 characters long. port integer Port number used by the listener inside Kafka. The port number has to be unique within a given Kafka cluster. Allowed port numbers are 9092 and higher with the exception of ports 9404 and 9999, which are already used for Prometheus and JMX. Depending on the listener type, the port number might not be the same as the port number that connects Kafka clients. type string (one of [ingress, internal, route, loadbalancer, cluster-ip, nodeport]) Type of the listener. The supported types are as follows: internal type exposes Kafka internally only within the OpenShift cluster. route type uses OpenShift Routes to expose Kafka. loadbalancer type uses LoadBalancer type services to expose Kafka. nodeport type uses NodePort type services to expose Kafka. ingress type uses OpenShift Nginx Ingress to expose Kafka with TLS passthrough. cluster-ip type uses a per-broker ClusterIP service. tls boolean Enables TLS encryption on the listener. This is a required property. authentication KafkaListenerAuthenticationTls , KafkaListenerAuthenticationScramSha512 , KafkaListenerAuthenticationOAuth , KafkaListenerAuthenticationCustom Authentication configuration for this listener. configuration GenericKafkaListenerConfiguration Additional listener configuration. networkPolicyPeers NetworkPolicyPeer array List of peers which should be able to connect to this listener. Peers in this list are combined using a logical OR operation. If this field is empty or missing, all connections will be allowed for this listener. If this field is present and contains at least one item, the listener only allows the traffic which matches at least one item in this list.
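As a concrete illustration of the client connection flow described in the port section above, the following sketch retrieves the bootstrap address of a listener named plain on a cluster named my-cluster and passes it to the standard Kafka console producer. The cluster name, listener name, topic name, and the sample address shown in the comment are placeholder assumptions for illustration only, not values defined by this schema reference.
# Retrieve the bootstrap address advertised in the Kafka resource status
oc get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.name=="plain")].bootstrapServers}{"\n"}'
# Example output (placeholder): my-cluster-kafka-bootstrap.myproject.svc:9092
# Use the returned address and port as the bootstrap server for a Kafka client
bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap.myproject.svc:9092 --topic my-topic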
[ "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # listeners: - name: plain port: 9092 type: internal tls: false - name: tls port: 9093 type: internal tls: true authentication: type: tls - name: external1 port: 9094 type: route tls: true - name: external2 port: 9095 type: ingress tls: true authentication: type: tls configuration: bootstrap: host: bootstrap.myingress.com brokers: - broker: 0 host: broker-0.myingress.com - broker: 1 host: broker-1.myingress.com - broker: 2 host: broker-2.myingress.com #", "listeners: - name: plain port: 9092 type: internal tls: false", "get kafka <kafka_cluster_name> -o=jsonpath='{.status.listeners[?(@.name==\" <listener_name> \")].bootstrapServers}{\"\\n\"}'", "# spec: kafka: # listeners: # - name: plain port: 9092 type: internal tls: false - name: tls port: 9093 type: internal tls: true authentication: type: tls #", "# spec: kafka: # listeners: # - name: external1 port: 9094 type: route tls: true #", "# spec: kafka: # listeners: # - name: external2 port: 9095 type: ingress tls: true authentication: type: tls configuration: bootstrap: host: bootstrap.myingress.com brokers: - broker: 0 host: broker-0.myingress.com - broker: 1 host: broker-1.myingress.com - broker: 2 host: broker-2.myingress.com #", "# spec: kafka: # listeners: - name: external3 port: 9094 type: loadbalancer tls: true configuration: loadBalancerSourceRanges: - 10.0.0.0/8 - 88.208.76.87/32 #", "# spec: kafka: # listeners: # - name: external4 port: 9095 type: nodeport tls: false configuration: preferredNodePortAddressType: InternalDNS #", "# spec: kafka: # listeners: - name: clusterip type: cluster-ip tls: false port: 9096 #", "listeners: # - name: plain port: 9092 type: internal tls: true authentication: type: scram-sha-512 networkPolicyPeers: - podSelector: matchLabels: app: kafka-sasl-consumer - podSelector: matchLabels: app: kafka-sasl-producer - name: tls port: 9093 type: internal tls: true authentication: type: tls networkPolicyPeers: - namespaceSelector: matchLabels: project: myproject - namespaceSelector: matchLabels: project: myproject2" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-GenericKafkaListener-reference
4.3. Clustering and High Availability
4.3. Clustering and High Availability clufter The clufter package, available as a Technology Preview in Red Hat Enterprise Linux 6, provides a tool for transforming and analyzing cluster configuration formats. It can be used to assist with migration from an older stack configuration to a newer configuration that leverages Pacemaker. For information on the capabilities of clufter , see the clufter(1) man page or the output of the clufter -h command. Package: clufter-0.11.2-1 luci support for fence_sanlock The luci tool now supports the sanlock fence agent as a Technology Preview. The agent is available in luci's list of agents. Package: luci-0.26.0-67 Recovering a node via a hardware watchdog device The new fence_sanlock agent and checkquorum.wdmd, included in Red Hat Enterprise Linux 6.4 as a Technology Preview, provide new mechanisms to trigger the recovery of a node via a hardware watchdog device. Tutorials on how to enable this Technology Preview will be available at https://fedorahosted.org/cluster/wiki/HomePage . Note that SELinux in enforcing mode is currently not supported. Package: cluster-3.0.12.1-73
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/clustering_tp
9.3. Booleans
9.3. Booleans SELinux is based on the lowest level of access required for a service to run. Services can be run in a variety of ways; therefore, you need to specify how you run your services. Use the following Booleans to set up SELinux: allow_user_mysql_connect When enabled, this Boolean allows users to connect to MySQL. exim_can_connect_db When enabled, this Boolean allows the exim mailer to initiate connections to a database server. ftpd_connect_db When enabled, this Boolean allows FTP daemons to initiate connections to a database server. httpd_can_network_connect_db Enabling this Boolean is required for a web server to communicate with a database server. Note Due to the continuous development of the SELinux policy, the list above might not always contain all Booleans related to the service. To list them, run the following command as root:
[ "~]# semanage boolean -l | grep service_name" ]
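For example, to permit a web server to communicate with a database server as described above, you can enable the corresponding Boolean. The following minimal sketch, run as root, uses the -P option so that the change persists across reboots; the same pattern applies to the other Booleans listed.
# Enable the Boolean persistently
setsebool -P httpd_can_network_connect_db on
# Verify the new value
getsebool httpd_can_network_connect_db
# Expected output: httpd_can_network_connect_db --> on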
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/managing_confined_services/sect-managing_confined_services-mysql-booleans
Chapter 2. Hibernate Configuration
Chapter 2. Hibernate Configuration 2.1. Hibernate Configuration The configuration for entity managers both inside an application server and in a standalone application reside in a persistence archive. A persistence archive is a JAR file which must define a persistence.xml file that resides in the META-INF/ folder. You can connect to the database using the persistence.xml file. There are two ways of doing this: Specifying a data source which is configured in the datasources subsystem in JBoss EAP. The jta-data-source points to the Java Naming and Directory Interface name of the data source this persistence unit maps to. The java:jboss/datasources/ExampleDS here points to the H2 DB embedded in the JBoss EAP. Example of object-relational-mapping in the persistence.xml File <persistence> <persistence-unit name="myapp"> <provider>org.hibernate.jpa.HibernatePersistenceProvider</provider> <jta-data-source>java:jboss/datasources/ExampleDS</jta-data-source> <properties> ... ... </properties> </persistence-unit> </persistence> Explicitly configuring the persistence.xml file by specifying the connection properties. Example of Specifying Connection Properties in the persistence.xml file <property name="javax.persistence.jdbc.driver" value="org.hsqldb.jdbcDriver"/> <property name="javax.persistence.jdbc.user" value="sa"/> <property name="javax.persistence.jdbc.password" value=""/> <property name="javax.persistence.jdbc.url" value="jdbc:hsqldb:."/> For the complete list of connection properties, see Connection Properties Configurable in the persistence.xml File . There are a number of properties that control the behavior of Hibernate at runtime. All are optional and have reasonable default values. These Hibernate properties are all used in the persistence.xml file. For the complete list of all configurable Hibernate properties, see Hibernate Properties . 2.2. Second-Level Caches 2.2.1. About Second-level Caches A second-level cache is a local data store that holds information persisted outside the application session. The cache is managed by the persistence provider, improving runtime by keeping the data separate from the application. JBoss EAP supports caching for the following purposes: Web Session Clustering Stateful Session Bean Clustering SSO Clustering Hibernate Second-level Cache Jakarta Persistence Second-level Cache Warning Each cache container defines a repl and a dist cache. These caches should not be used directly by user applications. 2.2.2. Configure a Second-level Cache for Hibernate The configuration of Infinispan to act as the second-level cache for Hibernate can be done in two ways: It is recommended to configure the second-level cache through Jakarta Persistence applications, using the persistence.xml file , as explained in the JBoss EAP Development Guide . Alternatively, you can configure the second-level cache through Hibernate native applications, using the hibernate.cfg.xml file, as explained below. Configuring a Second-level Cache for Hibernate Using Hibernate Native Applications Create the hibernate.cfg.xml file in the deployment's class path. Add the following XML to the hibernate.cfg.xml file. 
The XML needs to be within the <session-factory> tag: <property name="hibernate.cache.use_second_level_cache">true</property> <property name="hibernate.cache.use_query_cache">true</property> <property name="hibernate.cache.region.factory_class">org.jboss.as.jpa.hibernate5.infinispan.InfinispanRegionFactory</property> In order to use the Hibernate native APIs within your application, you must add the following dependencies to the MANIFEST.MF file:
[ "<persistence> <persistence-unit name=\"myapp\"> <provider>org.hibernate.jpa.HibernatePersistenceProvider</provider> <jta-data-source>java:jboss/datasources/ExampleDS</jta-data-source> <properties> ... </properties> </persistence-unit> </persistence>", "<property name=\"javax.persistence.jdbc.driver\" value=\"org.hsqldb.jdbcDriver\"/> <property name=\"javax.persistence.jdbc.user\" value=\"sa\"/> <property name=\"javax.persistence.jdbc.password\" value=\"\"/> <property name=\"javax.persistence.jdbc.url\" value=\"jdbc:hsqldb:.\"/>", "<property name=\"hibernate.cache.use_second_level_cache\">true</property> <property name=\"hibernate.cache.use_query_cache\">true</property> <property name=\"hibernate.cache.region.factory_class\">org.jboss.as.jpa.hibernate5.infinispan.InfinispanRegionFactory</property>", "Dependencies: org.infinispan,org.hibernate" ]
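To make the placement described above explicit, the following is a minimal sketch of a hibernate.cfg.xml file with the three cache properties inside the session-factory element. The XML declaration and the standard Hibernate 3.0 configuration DTD shown here are assumed boilerplate rather than requirements stated in this guide, and a real configuration would typically contain additional settings.
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE hibernate-configuration PUBLIC
    "-//Hibernate/Hibernate Configuration DTD 3.0//EN"
    "http://www.hibernate.org/dtd/hibernate-configuration-3.0.dtd">
<hibernate-configuration>
    <!-- The second-level cache properties must appear inside the session-factory element -->
    <session-factory>
        <property name="hibernate.cache.use_second_level_cache">true</property>
        <property name="hibernate.cache.use_query_cache">true</property>
        <property name="hibernate.cache.region.factory_class">org.jboss.as.jpa.hibernate5.infinispan.InfinispanRegionFactory</property>
    </session-factory>
</hibernate-configuration>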
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/developing_hibernate_applications/hibernate_configuration
Chapter 3. Customizing the Cryostat dashboard
Chapter 3. Customizing the Cryostat dashboard The Cryostat Dashboard displays information about target Java Virtual Machines (JVMs) in the form of cards on the user interface. You can configure the cards and customize different dashboard layouts according to your requirements. 3.1. Creating a custom dashboard layout Create customized layouts to organize the display of dashboard cards, according to your requirements. You can organize the cards in different configurations and create custom views to display the data and specific metrics that are most relevant to your current requirements. You can add, remove, and arrange the cards and switch between different layouts. You can also create layout templates that you can download, reuse, or share with other users so that they can access the same information and metrics. By using dashboard layouts, you do not need to modify your dashboard manually each time you want to view different information. Prerequisites Created a Cryostat instance in your project. Logged in to your Cryostat web console. Created a target JVM to monitor. Procedure On the Cryostat web console, click Dashboard . On the toolbar, click the layout selector dropdown menu. Click New Layout . Figure 3.1. Creating a new dashboard layout The new layout is assigned a default name. To specify a different name, click the pencil icon beside the name. (Optional): To select an existing template or upload a new one, click the expandable menu on the New Layout button. Figure 3.2. Creating a new dashboard layout by using a template (Optional): To set or download a layout as a template or to clear the layout, click the more options icon ( ... ): Figure 3.3. Setting or downloading a layout as a template or clearing the layout To set the current layout as a template, select Set as template . To download the current layout as a template, select Download as template . The template is downloaded as a .json file. To clear the current layout, select Clear layout . A confirmation dialog then opens. To confirm that you want to permanently clear the current dashboard layout, click Clear . Figure 3.4. Clearing a dashboard layout 3.2. Adding cards to a dashboard layout You can select and configure the cards you want to add to the Cryostat Dashboard . Each card displays a different set of information or metrics about the target JVM you select. Prerequisites Created a Cryostat instance in your project. Logged in to your Cryostat web console. Created a target JVM to monitor. Procedure On the Cryostat web console, click Dashboard . From the Target dropdown menu, select the target JVM whose information you want to view. To add a dashboard card, click the Add card icon. The Dashboard card catalog window opens. From the available cards types, select a card to add to your dashboard layout and click Finish . Repeat this step for each card that you want to add. Note Some cards require additional configuration, for example, the MBeans Metrics Chart card. In this instance, click to access the configuration wizard, specify the values you require, then click Finish . Revised on 2023-12-12 18:03:30 UTC
null
https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/2/html/using_the_cryostat_dashboard/assembly_customizing-dashboard_con_dashboard-cards
Chapter 10. Predictive Model Markup Language (PMML)
Chapter 10. Predictive Model Markup Language (PMML) Predictive Model Markup Language (PMML) is an XML-based standard established by the Data Mining Group (DMG) for defining statistical and data-mining models. PMML models can be shared between PMML-compliant platforms and across organizations so that business analysts and developers are unified in designing, analyzing, and implementing PMML-based assets and services. For more information about the background and applications of PMML, see the DMG PMML specification . 10.1. PMML conformance levels The PMML specification defines producer and consumer conformance levels in a software implementation to ensure that PMML models are created and integrated reliably. For the formal definitions of each conformance level, see the DMG PMML conformance page. The following list summarizes the PMML conformance levels: Producer conformance A tool or application is producer conforming if it generates valid PMML documents for at least one type of model. Satisfying PMML producer conformance requirements ensures that a model definition document is syntactically correct and defines a model instance that is consistent with semantic criteria that are defined in model specifications. Consumer conformance An application is consumer conforming if it accepts valid PMML documents for at least one type of model. Satisfying consumer conformance requirements ensures that a PMML model created according to producer conformance can be integrated and used as defined. For example, if an application is consumer conforming for Regression model types, then valid PMML documents defining models of this type produced by different conforming producers would be interchangeable in the application. Red Hat Decision Manager includes consumer conformance support for the following PMML model types: Regression models Scorecard models Tree models Mining models (with sub-types modelChain , selectAll , and selectFirst ) Clustering models For a list of all PMML model types, including those not supported in Red Hat Decision Manager, see the DMG PMML specification .
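For orientation, the following is a minimal sketch of what a PMML document for a Regression model, one of the supported types listed above, can look like. The field names, coefficients, and the PMML 4.2 version shown are illustrative assumptions rather than values taken from this chapter; see the DMG specification for the authoritative schema.
<PMML xmlns="http://www.dmg.org/PMML-4_2" version="4.2">
  <Header copyright="example"/>
  <DataDictionary numberOfFields="2">
    <DataField name="x" optype="continuous" dataType="double"/>
    <DataField name="y" optype="continuous" dataType="double"/>
  </DataDictionary>
  <!-- A simple linear model: y = 2.0 * x + 1.0 -->
  <RegressionModel modelName="ExampleRegression" functionName="regression">
    <MiningSchema>
      <MiningField name="x"/>
      <MiningField name="y" usageType="target"/>
    </MiningSchema>
    <RegressionTable intercept="1.0">
      <NumericPredictor name="x" coefficient="2.0"/>
    </RegressionTable>
  </RegressionModel>
</PMML>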
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/developing_decision_services_in_red_hat_decision_manager/pmml-con_pmml-models
4.12. RHEA-2012:0812 - new package: libqb
4.12. RHEA-2012:0812 - new package: libqb A new libqb package is now available for Red Hat Enterprise Linux 6. The libqb package provides a library with the primary purpose of providing high performance client server reusable features, such as high performance logging, tracing, inter-process communication, and polling. This enhancement update adds the libqb package to Red Hat Enterprise Linux 6. This package is introduced as a dependency of the pacemaker package, and is considered a Technology Preview in Red Hat Enterprise Linux 6.3. (BZ# 782240 ) All users who require libqb are advised to install this new package.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/rhea-2012-0812
Appendix C. Glossary of terms
Appendix C. Glossary of terms C.1. Virtualization terms Administration Portal A web user interface provided by Red Hat Virtualization Manager, based on the oVirt engine web user interface. It allows administrators to manage and monitor cluster resources like networks, storage domains, and virtual machine templates. Hosted Engine The instance of Red Hat Virtualization Manager that manages RHHI for Virtualization. Hosted Engine virtual machine The virtual machine that acts as Red Hat Virtualization Manager. The Hosted Engine virtual machine runs on a virtualization host that is managed by the instance of Red Hat Virtualization Manager that is running on the Hosted Engine virtual machine. Manager node A virtualization host that runs Red Hat Virtualization Manager directly, rather than running it in a Hosted Engine virtual machine. Red Hat Enterprise Linux host A physical machine installed with Red Hat Enterprise Linux plus additional packages to provide the same capabilities as a Red Hat Virtualization host. This type of host is not supported for use with RHHI for Virtualization. Red Hat Virtualization An operating system and management interface for virtualizing resources, processes, and applications for Linux and Microsoft Windows workloads. Red Hat Virtualization host A physical machine installed with Red Hat Virtualization that provides the physical resources to support the virtualization of resources, processes, and applications for Linux and Microsoft Windows workloads. This is the only type of host supported with RHHI for Virtualization. Red Hat Virtualization Manager A server that runs the management and monitoring capabilities of Red Hat Virtualization. Self-Hosted Engine node A virtualization host that contains the Hosted Engine virtual machine. All hosts in a RHHI for Virtualization deployment are capable of becoming Self-Hosted Engine nodes, but there is only one Self-Hosted Engine node at a time. storage domain A named collection of images, templates, snapshots, and metadata. A storage domain can be comprised of block devices or file systems. Storage domains are attached to data centers in order to provide access to the collection of images, templates, and so on to hosts in the data center. virtualization host A physical machine with the ability to virtualize physical resources, processes, and applications for client access. VM Portal A web user interface provided by Red Hat Virtualization Manager. It allows users to manage and monitor virtual machines. C.2. Storage terms brick An exported directory on a server in a trusted storage pool. cache logical volume A small, fast logical volume used to improve the performance of a large, slow logical volume. geo-replication One way asynchronous replication of data from a source Gluster volume to a target volume. Geo-replication works across local and wide area networks as well as the Internet. The target volume can be a Gluster volume in a different trusted storage pool, or another type of storage. gluster volume A logical group of bricks that can be configured to distribute, replicate, or disperse data according to workload requirements. logical volume management (LVM) A method of combining physical disks into larger virtual partitions. Physical volumes are placed in volume groups to form a pool of storage that can be divided into logical volumes as needed. Red Hat Gluster Storage An operating system based on Red Hat Enterprise Linux with additional packages that provide support for distributed, software-defined storage. 
source volume The Gluster volume that data is being copied from during geo-replication. storage host A physical machine that provides storage for client access. target volume The Gluster volume or other storage volume that data is being copied to during geo-replication. thin provisioning Provisioning storage such that only the space that is required is allocated at creation time, with further space being allocated dynamically according to need over time. thick provisioning Provisioning storage such that all space is allocated at creation time, regardless of whether that space is required immediately. trusted storage pool A group of Red Hat Gluster Storage servers that recognise each other as trusted peers. C.3. Hyperconverged Infrastructure terms Red Hat Hyperconverged Infrastructure (RHHI) for Virtualization RHHI for Virtualization is a single product that provides both virtual compute and virtual storage resources. Red Hat Virtualization and Red Hat Gluster Storage are installed in a converged configuration, where the services of both products are available on each physical machine in a cluster. hyperconverged host A physical machine that provides physical storage, which is virtualized and consumed by virtualized processes and applications run on the same host. All hosts installed with RHHI for Virtualization are hyperconverged hosts. Web Console The web user interface for deploying, managing, and monitoring RHHI for Virtualization. The Web Console is provided by the Web Console service and plugins for Red Hat Virtualization Manager.
null
https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/deploying_red_hat_hyperconverged_infrastructure_for_virtualization_on_a_single_node/glossary-of-terms
Preface
Preface This guide contains information about installing, configuring, and managing the OpenStack Integration Test Suite in a Red Hat OpenStack Platform environment.
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/openstack_integration_test_suite_guide/pr01
Chapter 4. Targeted Policy
Chapter 4. Targeted Policy Targeted policy is the default SELinux policy used in Red Hat Enterprise Linux. When using targeted policy, processes that are targeted run in a confined domain, and processes that are not targeted run in an unconfined domain. For example, by default, logged-in users run in the unconfined_t domain, and system processes started by init run in the initrc_t domain; both of these domains are unconfined. Executable and writable memory checks may apply to both confined and unconfined domains. However, by default, subjects running in an unconfined domain cannot allocate writable memory and execute it. This reduces vulnerability to buffer overflow attacks. These memory checks are disabled by setting Booleans, which allow the SELinux policy to be modified at runtime. Boolean configuration is discussed later. 4.1. Confined Processes Almost every service that listens on a network, such as sshd or httpd , is confined in Red Hat Enterprise Linux. Also, most processes that run as the Linux root user and perform tasks for users, such as the passwd application, are confined. When a process is confined, it runs in its own domain, such as the httpd process running in the httpd_t domain. If a confined process is compromised by an attacker, depending on SELinux policy configuration, an attacker's access to resources and the possible damage they can do is limited. Complete this procedure to ensure that SELinux is enabled and the system is prepared to perform the following example: Procedure 4.1. How to Verify SELinux Status Run the sestatus command to confirm that SELinux is enabled, is running in enforcing mode, and that targeted policy is being used. The correct output should look similar to the output below. Refer to Section 5.4, "Permanent Changes in SELinux States and Modes" for detailed information about enabling and disabling SELinux. As the Linux root user, run the touch /var/www/html/testfile command to create a file. Run the ls -Z /var/www/html/testfile command to view the SELinux context: By default, Linux users run unconfined in Red Hat Enterprise Linux, which is why the testfile file is labeled with the SELinux unconfined_u user. RBAC is used for processes, not files. Roles do not have a meaning for files; the object_r role is a generic role used for files (on persistent storage and network file systems). Under the /proc/ directory, files related to processes may use the system_r role. The httpd_sys_content_t type allows the httpd process to access this file. The following example demonstrates how SELinux prevents the Apache HTTP Server ( httpd ) from reading files that are not correctly labeled, such as files intended for use by Samba. This is an example, and should not be used in production. It assumes that the httpd and wget packages are installed, the SELinux targeted policy is used, and that SELinux is running in enforcing mode. Procedure 4.2. An Example of a Confined Process As the Linux root user, run the service httpd start command to start the httpd process. The output is as follows if httpd starts successfully: Change into a directory that your Linux user has write access to, and run the wget http://localhost/testfile command. Unless there are changes to the default configuration, this command succeeds: The chcon command relabels files; however, such label changes do not survive when the file system is relabeled. For permanent changes that survive a file system relabel, use the semanage command, which is discussed later.
As the Linux root user, run the following command to change the type to a type used by Samba: Run the ls -Z /var/www/html/testfile command to view the changes: Note: The current DAC permissions allow the httpd process access to testfile . Change into a directory that your Linux user has write access to, and run the wget http://localhost/testfile command. Unless there are changes to the default configuration, this command fails: As the Linux root user, run the rm -i /var/www/html/testfile command to remove testfile . If you do not require httpd to be running, as the Linux root user, run the service httpd stop command to stop httpd : This example demonstrates the additional security added by SELinux. Although DAC rules allowed the httpd process access to testfile in step 2, because the file was labeled with a type that the httpd process does not have access to, SELinux denied access. If the auditd daemon is running, an error similar to the following is logged to /var/log/audit/audit.log : Also, an error similar to the following is logged to /var/log/httpd/error_log :
[ "~]USD sestatus SELinux status: enabled SELinuxfs mount: /selinux Current mode: enforcing Mode from config file: enforcing Policy version: 24 Policy from config file: targeted", "-rw-r--r-- root root unconfined_u:object_r:httpd_sys_content_t:s0 /var/www/html/testfile", "~]# service httpd start Starting httpd: [ OK ]", "~]USD wget http://localhost/testfile --2009-11-06 17:43:01-- http://localhost/testfile Resolving localhost... 127.0.0.1 Connecting to localhost|127.0.0.1|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 0 [text/plain] Saving to: `testfile' [ <=> ] 0 --.-K/s in 0s 2009-11-06 17:43:01 (0.00 B/s) - `testfile' saved [0/0]", "~]# chcon -t samba_share_t /var/www/html/testfile", "-rw-r--r-- root root unconfined_u:object_r:samba_share_t:s0 /var/www/html/testfile", "~]USD wget http://localhost/testfile --2009-11-06 14:11:23-- http://localhost/testfile Resolving localhost... 127.0.0.1 Connecting to localhost|127.0.0.1|:80... connected. HTTP request sent, awaiting response... 403 Forbidden 2009-11-06 14:11:23 ERROR 403: Forbidden.", "~]# service httpd stop Stopping httpd: [ OK ]", "type=AVC msg=audit(1220706212.937:70): avc: denied { getattr } for pid=1904 comm=\"httpd\" path=\"/var/www/html/testfile\" dev=sda5 ino=247576 scontext=unconfined_u:system_r:httpd_t:s0 tcontext=unconfined_u:object_r:samba_share_t:s0 tclass=file type=SYSCALL msg=audit(1220706212.937:70): arch=40000003 syscall=196 success=no exit=-13 a0=b9e21da0 a1=bf9581dc a2=555ff4 a3=2008171 items=0 ppid=1902 pid=1904 auid=500 uid=48 gid=48 euid=48 suid=48 fsuid=48 egid=48 sgid=48 fsgid=48 tty=(none) ses=1 comm=\"httpd\" exe=\"/usr/sbin/httpd\" subj=unconfined_u:system_r:httpd_t:s0 key=(null)", "[Wed May 06 23:00:54 2009] [error] [client 127.0.0.1 ] (13)Permission denied: access to /testfile denied" ]
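As a brief illustration of the distinction drawn above between chcon and semanage, the following sketch records a permanent file context rule and applies it with restorecon, so the label survives a file system relabel. The /srv/myweb directory is a hypothetical path used only for this example and is not part of the procedure above; the commands are run as the Linux root user.
# Record a permanent context rule for a hypothetical content directory
semanage fcontext -a -t httpd_sys_content_t "/srv/myweb(/.*)?"
# Apply the recorded rule to the existing files
restorecon -R -v /srv/myweb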
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security-enhanced_linux/chap-security-enhanced_linux-targeted_policy
Chapter 4. Glossary
Chapter 4. Glossary Common terms and definitions for Red Hat's Trusted Profile Analyzer service. Exhort The backend endpoint of Trusted Profile Analyzer where all the API requests get sent, to retrieve the necessary data to analyze, including package dependencies and vulnerabilities. The Red Hat Dependency Analytics (RHDA) integrated development environment (IDE) plug-in uses this endpoint to generate vulnerability reports within the IDE framework. Software Bill of Materials Also known by the acronym, SBOM. A manifest of dependent software packages needed for a particular application. Single Pane of Glass Also known by the acronym, SPOG. The RESTful application programming interface (API) for the Trusted Profile Analyzer web dashboard, and notifications. Vulnerability Exploitability eXchange Also known by the acronym, VEX. A security advisory issued by a software provider for specific vulnerabilities within a product. Common Vulnerability and Exposures Also known by the acronym, CVE. A CVE indicates a product's exposure to attacks and malicious activities by giving it a score 1-10, where 1 is the lowest exposure level and 10 is the highest exposure level. Common Vulnerability Score System Also known by the acronym CVSS. The CVSS calculates CVE scores according to specific formulas when trying to calculate CVEs in a broad range of products and networks.
null
https://docs.redhat.com/en/documentation/red_hat_trusted_profile_analyzer/1/html/reference_guide/glossary_ref
Chapter 2. Assigning virtual GPUs
Chapter 2. Assigning virtual GPUs To set up NVIDIA vGPU devices, you need to: Obtain and install the correct NVIDIA vGPU driver for your GPU device Create mediated devices Assign each mediated device to a virtual machine Install guest drivers on each virtual machine. The following procedures explain this process. 2.1. Setting up NVIDIA vGPU devices on the host Note Before installing the NVIDIA vGPU driver on the guest operating system, you need to understand the licensing requirements and obtain the correct license credentials. Prerequisites Your GPU device supports virtual GPU (vGPU) functionality. Your system is listed as a validated server hardware platform. For more information about supported GPUs and validated platforms, see NVIDIA vGPU CERTIFIED SERVERS on www.nvidia.com. Procedure Download and install the NVIDIA-vGPU driver. For information on getting the driver, see vGPU drivers page on the NVIDIA website . An Nvidia enterprise account is required to download the drivers. Contact the hardware vendor if this is not available. Unzip the downloaded file from the Nvidia website and copy it to the host to install the driver. If the NVIDIA software installer did not create the /etc/modprobe.d/nvidia-installer-disable-nouveau.conf file, create it manually. Open /etc/modprobe.d/nvidia-installer-disable-nouveau.conf file in a text editor and add the following lines to the end of the file: blacklist nouveau options nouveau modeset=0 Regenerate the initial ramdisk for the current kernel, then reboot: # dracut --force # reboot Alternatively, if you need to use a prior supported kernel version with mediated devices, regenerate the initial ramdisk for all installed kernel versions: # dracut --regenerate-all --force # reboot Check that the kernel loaded the nvidia_vgpu_vfio module: # lsmod | grep nvidia_vgpu_vfio Check that the nvidia-vgpu-mgr.service service is running: # systemctl status nvidia-vgpu-mgr.service For example: # lsmod | grep nvidia_vgpu_vfio nvidia_vgpu_vfio 45011 0 nvidia 14333621 10 nvidia_vgpu_vfio mdev 20414 2 vfio_mdev,nvidia_vgpu_vfio vfio 32695 3 vfio_mdev,nvidia_vgpu_vfio,vfio_iommu_type1 # systemctl status nvidia-vgpu-mgr.service nvidia-vgpu-mgr.service - NVIDIA vGPU Manager Daemon Loaded: loaded (/usr/lib/systemd/system/nvidia-vgpu-mgr.service; enabled; vendor preset: disabled) Active: active (running) since Fri 2018-03-16 10:17:36 CET; 5h 8min ago Main PID: 1553 (nvidia-vgpu-mgr) [...] In the Administration Portal, click Compute Virtual Machines . Click the name of the virtual machine to go to the details view. Click the Host Devices tab. Click Manage vGPU . The Manage vGPU dialog box opens. Select a vGPU type and the number of instances that you would like to use with this virtual machine. Select On for Secondary display adapter for VNC to add a second emulated QXL or VGA graphics adapter as the primary graphics adapter for the console in addition to the vGPU. Note On cluster levels 4.5 and later, when a vGPU is used and the Secondary display adapter for VNC is set to On , an additional framebuffer display device is automatically added to the virtual machine. This allows the virtual machine console to be displayed before the vGPU is initialized, instead of a blank screen. Click Save . 2.2. Installing the vGPU driver on the virtual machine Procedure Run the virtual machine and connect to it using the VNC console. Note SPICE is not supported on vGPU. Download the driver to the virtual machine. 
For information on getting the driver, see the Drivers page on the NVIDIA website . Install the vGPU driver, following the instructions in Installing the NVIDIA vGPU Software Graphics Driver in the NVIDIA Virtual GPU software documentation . Important Linux only: When installing the driver on a Linux guest operating system, you are prompted to update xorg.conf. If you do not update xorg.conf during the installation, you need to update it manually. After the driver finishes installing, reboot the machine. For Windows virtual machines, fully power off the guest from the Administration portal or the VM portal, not from within the guest operating system. Important Windows only: Powering off the virtual machine from within the Windows guest operating system sometimes sends the virtual machine into hibernate mode, which does not completely clear the memory, possibly leading to subsequent problems. Using the Administration portal or the VM portal to power off the virtual machine forces it to fully clean the memory. Run the virtual machine and connect to it using one of the supported remote desktop protocols, such as Mechdyne TGX, and verify that the vGPU is recognized by opening the NVIDIA Control Panel. On Windows, you can alternatively open the Windows Device Manager. The vGPU should appear under Display adapters . For more information, see the NVIDIA vGPU Software Graphics Driver in the NVIDIA Virtual GPU software documentation . Set up NVIDIA vGPU guest software licensing for each vGPU and add the license credentials in the NVIDIA control panel. For more information, see How NVIDIA vGPU Software Licensing Is Enforced in the NVIDIA Virtual GPU Software Documentation . 2.3. Removing NVIDIA vGPU devices To change the configuration of assigned vGPU mediated devices, the existing devices have to be removed from the assigned guests. Procedure In the Administration Portal, click Compute Virtual Machines . Click the name of the virtual machine to go to the details view. Click the Host Devices tab. Click Manage vGPU . The Manage vGPU dialog box opens. Click the x button to Selected vGPU Type Instances to detach the vGPU from the virtual machine. Click SAVE . 2.4. Monitoring NVIDIA vGPUs For NVIDIA vGPUS, to get info on the physical GPU and vGPU, you can use the NVIDIA System Management Interface by entering the nvidia-smi command on the host. For more information, see NVIDIA System Management Interface nvidia-smi in the NVIDIA Virtual GPU Software Documentation . For example: 2.5. Remote desktop streaming services for NVIDIA vGPU The following remote desktop streaming services have been successfully tested for use with the NVIDIA vGPU feature in RHEL 8: HP-RGS Mechdyne TGX - It is currently not possible to use Mechdyne TGX with Windows Server 2016 guests. NICE DCV - When using this streaming service, use fixed resolution settings, because using dynamic resolution in some cases results in a black screen.
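Before assigning a vGPU type in the Manage vGPU dialog box, it can help to check which mediated device types the physical GPU exposes on the host. The following sketch reads the standard sysfs locations for mediated devices; the PCI address mirrors the first GPU in the sample nvidia-smi output above and is a placeholder that you would replace with your own device address.
# List the vGPU (mediated device) types offered by the GPU at this PCI address
ls /sys/class/mdev_bus/0000:84:00.0/mdev_supported_types
# Show the human-readable name and remaining capacity of each type
cat /sys/class/mdev_bus/0000:84:00.0/mdev_supported_types/*/name
cat /sys/class/mdev_bus/0000:84:00.0/mdev_supported_types/*/available_instances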
[ "blacklist nouveau options nouveau modeset=0", "dracut --force reboot", "dracut --regenerate-all --force reboot", "lsmod | grep nvidia_vgpu_vfio", "systemctl status nvidia-vgpu-mgr.service", "lsmod | grep nvidia_vgpu_vfio nvidia_vgpu_vfio 45011 0 nvidia 14333621 10 nvidia_vgpu_vfio mdev 20414 2 vfio_mdev,nvidia_vgpu_vfio vfio 32695 3 vfio_mdev,nvidia_vgpu_vfio,vfio_iommu_type1 systemctl status nvidia-vgpu-mgr.service nvidia-vgpu-mgr.service - NVIDIA vGPU Manager Daemon Loaded: loaded (/usr/lib/systemd/system/nvidia-vgpu-mgr.service; enabled; vendor preset: disabled) Active: active (running) since Fri 2018-03-16 10:17:36 CET; 5h 8min ago Main PID: 1553 (nvidia-vgpu-mgr) [...]", "nvidia-smi Thu Nov 1 17:40:09 2018 +-----------------------------------------------------------------------------+ | NVIDIA-SMI 410.62 Driver Version: 410.62 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | |===============================+======================+======================| | 0 Tesla M60 On | 00000000:84:00.0 Off | Off | | N/A 40C P8 24W / 150W | 1034MiB / 8191MiB | 0% Default | +-------------------------------+----------------------+----------------------+ | 1 Tesla M60 On | 00000000:85:00.0 Off | Off | | N/A 33C P8 23W / 150W | 8146MiB / 8191MiB | 0% Default | +-------------------------------+----------------------+----------------------+ | 2 Tesla M60 On | 00000000:8B:00.0 Off | Off | | N/A 34C P8 24W / 150W | 8146MiB / 8191MiB | 0% Default | +-------------------------------+----------------------+----------------------+ | 3 Tesla M60 On | 00000000:8C:00.0 Off | Off | | N/A 45C P8 24W / 150W | 18MiB / 8191MiB | 0% Default | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: GPU Memory | | GPU PID Type Process name Usage | |=============================================================================| | 0 34432 C+G vgpu 508MiB | | 0 34718 C+G vgpu 508MiB | | 1 35032 C+G vgpu 8128MiB | | 2 35032 C+G vgpu 8128MiB | +-----------------------------------------------------------------------------+" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/setting_up_an_nvidia_gpu_for_a_virtual_machine_in_red_hat_virtualization/assembly_managing-nvidia-vgpu-devices
Preface
Preface Thank you for your interest in Red Hat Ansible Automation Platform. Ansible Automation Platform is a commercial offering that helps teams manage complex multi-tier deployments by adding control, knowledge, and delegation to Ansible-powered environments. This guide describes how to install Ansible plug-ins for Red Hat Developer Hub. This document has been updated to include information for the latest release of Ansible Automation Platform.
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/installing_ansible_plug-ins_for_red_hat_developer_hub/pr01
Chapter 99. Ref
Chapter 99. Ref Both producer and consumer are supported The Ref component is used for lookup of existing endpoints bound in the Registry. 99.1. Dependencies When using ref with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-ref-starter</artifactId> </dependency> 99.2. URI format Where someName is the name of an endpoint in the Registry (usually, but not always, the Spring registry). If you are using the Spring registry, someName would be the bean ID of an endpoint in the Spring registry. 99.3. Configuring Options Camel components are configured on two levels: Component level Endpoint level 99.3.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 99.3.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allows you to externalize the configuration from your code, giving you more flexible and reusable code. 99.4. Component Options The Ref component supports 3 options, which are listed below. Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean 99.5. Endpoint Options The Ref endpoint is configured using URI syntax: with the following path and query parameters: 99.5.1. Path Parameters (1 parameters) Name Description Default Type name (common) Required Name of endpoint to lookup in the registry. String 99.5.2. Query Parameters (4 parameters) Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean 99.6. Runtime lookup This component can be used when you need dynamic discovery of endpoints in the Registry where you can compute the URI at runtime. Then you can look up the endpoint using the following code: // lookup the endpoint String myEndpointRef = "bigspenderOrder"; Endpoint endpoint = context.getEndpoint("ref:" + myEndpointRef); Producer producer = endpoint.createProducer(); Exchange exchange = producer.createExchange(); exchange.getIn().setBody(payloadToSend); // send the exchange producer.process(exchange); And you could have a list of endpoints defined in the Registry such as: <camelContext id="camel" xmlns="http://activemq.apache.org/camel/schema/spring"> <endpoint id="normalOrder" uri="activemq:order.slow"/> <endpoint id="bigspenderOrder" uri="activemq:order.high"/> </camelContext> 99.7. Sample In the sample below we use the ref: in the URI to reference the endpoint with the spring ID, endpoint2 : You could, of course, have used the ref attribute instead: <to uri="ref:endpoint2"/> Which is the more common way to write it. 99.8. Spring Boot Auto-Configuration The component supports 4 options, which are listed below. Name Description Default Type camel.component.ref.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. 
true Boolean camel.component.ref.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.ref.enabled Whether to enable auto configuration of the ref component. This is enabled by default. Boolean camel.component.ref.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-ref-starter</artifactId> </dependency>", "ref:someName[?options]", "ref:name", "// lookup the endpoint String myEndpointRef = \"bigspenderOrder\"; Endpoint endpoint = context.getEndpoint(\"ref:\" + myEndpointRef); Producer producer = endpoint.createProducer(); Exchange exchange = producer.createExchange(); exchange.getIn().setBody(payloadToSend); // send the exchange producer.process(exchange);", "<camelContext id=\"camel\" xmlns=\"http://activemq.apache.org/camel/schema/spring\"> <endpoint id=\"normalOrder\" uri=\"activemq:order.slow\"/> <endpoint id=\"bigspenderOrder\" uri=\"activemq:order.high\"/> </camelContext>", "<to uri=\"ref:endpoint2\"/>" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-ref-component-starter
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/deploying_and_managing_amq_streams_on_openshift/making-open-source-more-inclusive
function::gettimeofday_s
function::gettimeofday_s Name function::gettimeofday_s - Number of seconds since UNIX epoch Synopsis Arguments None Description This function returns the number of seconds since the UNIX epoch.
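As a quick, hypothetical usage sketch (not part of the reference entry itself), the function can be exercised from the command line with a one-line script that prints the current epoch time and exits; comparing the output with date +%s is an easy sanity check:
# Sketch: print the number of seconds since the UNIX epoch and exit.
stap -e 'probe begin { printf("epoch seconds: %d\n", gettimeofday_s()); exit() }'
date +%s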
[ "gettimeofday_s:long()" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-gettimeofday-s
5.9. Maintaining SELinux Labels
5.9. Maintaining SELinux Labels These sections describe what happens to SELinux contexts when copying, moving, and archiving files and directories. They also explain how to preserve contexts when copying and archiving. 5.9.1. Copying Files and Directories When a file or directory is copied, a new file or directory is created if it does not exist. That new file or directory's context is based on default-labeling rules, not the original file or directory's context (unless options were used to preserve the original context). For example, files created in user home directories are labeled with the user_home_t type: If such a file is copied to another directory, such as /etc/ , the new file is created in accordance with default-labeling rules for the /etc/ directory. Copying a file (without additional options) may not preserve the original context: When file1 is copied to /etc/ , if /etc/file1 does not exist, /etc/file1 is created as a new file. As shown in the example above, /etc/file1 is labeled with the etc_t type, in accordance with default-labeling rules. When a file is copied over an existing file, the existing file's context is preserved, unless the user specifies cp options to preserve the context of the original file, such as --preserve=context . SELinux policy may prevent contexts from being preserved during copies. Copying Without Preserving SELinux Contexts When copying a file with the cp command, if no options are given, the type is inherited from the target parent directory: In this example, file1 is created in a user's home directory, and is labeled with the user_home_t type. The /var/www/html/ directory is labeled with the httpd_sys_content_t type, as shown with the ls -dZ /var/www/html/ command. When file1 is copied to /var/www/html/ , it inherits the httpd_sys_content_t type, as shown with the ls -Z /var/www/html/file1 command. Preserving SELinux Contexts When Copying Use the cp --preserve=context command to preserve contexts when copying: In this example, file1 is created in a user's home directory, and is labeled with the user_home_t type. The /var/www/html/ directory is labeled with the httpd_sys_content_t type, as shown with the ls -dZ /var/www/html/ command. Using the --preserve=context option preserves SELinux contexts during copy operations. As shown with the ls -Z /var/www/html/file1 command, the file1 user_home_t type was preserved when the file was copied to /var/www/html/ . Copying and Changing the Context Use the cp -Z command to change the destination copy's context. The following example was performed in the user's home directory: In this example, the context is defined with the -Z option. Without the -Z option, file2 would be labeled with the unconfined_u:object_r:user_home_t context. Copying a File Over an Existing File When a file is copied over an existing file, the existing file's context is preserved (unless an option is used to preserve contexts). For example: In this example, two files are created: /etc/file1 , labeled with the etc_t type, and /tmp/file2 , labeled with the user_tmp_t type. The cp /tmp/file2 /etc/file1 command overwrites file1 with file2 . After copying, the ls -Z /etc/file1 command shows file1 labeled with the etc_t type, not the user_tmp_t type from /tmp/file2 that replaced /etc/file1 . Important Copy files and directories, rather than moving them. This helps ensure they are labeled with the correct SELinux contexts. Incorrect SELinux contexts can prevent processes from accessing such files and directories.
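As a complement to the cp examples above, the following minimal sketch (not part of this section's examples) shows how to recover from the situation the Important note warns about: a file that was moved, and therefore kept its old label, can be relabeled to the policy default for its new location with the restorecon utility. The file name and target directory are illustrative:
# Sketch: fix the label of a file that was moved instead of copied.
mv ~/file1 /var/www/html/
ls -Z /var/www/html/file1           # still shows the user_home_t type
restorecon -v /var/www/html/file1   # relabel to the default for /var/www/html/
ls -Z /var/www/html/file1           # now shows the httpd_sys_content_t type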
[ "~]USD touch file1 ~]USD ls -Z file1 -rw-rw-r-- user1 group1 unconfined_u:object_r:user_home_t:s0 file1", "~]USD ls -Z file1 -rw-rw-r-- user1 group1 unconfined_u:object_r:user_home_t:s0 file1 ~]# cp file1 /etc/ ~]USD ls -Z /etc/file1 -rw-r--r-- root root unconfined_u:object_r:etc_t:s0 /etc/file1", "~]USD touch file1 ~]USD ls -Z file1 -rw-rw-r-- user1 group1 unconfined_u:object_r:user_home_t:s0 file1 ~]USD ls -dZ /var/www/html/ drwxr-xr-x root root system_u:object_r:httpd_sys_content_t:s0 /var/www/html/ ~]# cp file1 /var/www/html/ ~]USD ls -Z /var/www/html/file1 -rw-r--r-- root root unconfined_u:object_r:httpd_sys_content_t:s0 /var/www/html/file1", "~]USD touch file1 ~]USD ls -Z file1 -rw-rw-r-- user1 group1 unconfined_u:object_r:user_home_t:s0 file1 ~]USD ls -dZ /var/www/html/ drwxr-xr-x root root system_u:object_r:httpd_sys_content_t:s0 /var/www/html/ ~]# cp --preserve=context file1 /var/www/html/ ~]USD ls -Z /var/www/html/file1 -rw-r--r-- root root unconfined_u:object_r:user_home_t:s0 /var/www/html/file1", "~]USD touch file1 ~]USD cp -Z system_u:object_r:samba_share_t:s0 file1 file2 ~]USD ls -Z file1 file2 -rw-rw-r-- user1 group1 unconfined_u:object_r:user_home_t:s0 file1 -rw-rw-r-- user1 group1 system_u:object_r:samba_share_t:s0 file2 ~]USD rm file1 file2", "~]# touch /etc/file1 ~]# ls -Z /etc/file1 -rw-r--r-- root root unconfined_u:object_r:etc_t:s0 /etc/file1 ~]# touch /tmp/file2 ~]# ls -Z /tmp/file2 -rw-r--r-- root root unconfined_u:object_r:user_tmp_t:s0 /tmp/file2 ~]# cp /tmp/file2 /etc/file1 ~]# ls -Z /etc/file1 -rw-r--r-- root root unconfined_u:object_r:etc_t:s0 /etc/file1" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security-enhanced_linux/sect-security-enhanced_linux-working_with_selinux-maintaining_selinux_labels_
Chapter 4. Bug fixes
Chapter 4. Bug fixes This section describes bugs with significant impact on users that were fixed in this release of Red Hat Ceph Storage. In addition, the section includes descriptions of fixed known issues found in previous versions. 4.1. The Cephadm utility Bootstrap no longer fails if a comma-separated list of quoted IPs is passed in as the public network in the initial Ceph configuration Previously, cephadm bootstrap would improperly parse comma-delimited lists of IP addresses if the list was quoted. Due to this, the bootstrap would fail if a comma-separated list of quoted IP addresses, for example, '172.120.3.0/24,172.117.3.0/24,172.118.3.0/24,172.119.3.0/24', was provided as the public_network in the initial Ceph configuration passed to bootstrap with the --config parameter. With this fix, you can enter comma-separated lists of quoted IPs into the initial Ceph configuration passed to bootstrap for the public_network or cluster_network , and it works as expected (an illustrative bootstrap invocation is sketched at the end of this chapter). Bugzilla:2111680 cephadm no longer attempts to parse the provided yaml files more than necessary Previously, cephadm bootstrap would attempt to manually parse the provided yaml files more than necessary. Due to this, sometimes, even if the user had provided a valid yaml file to cephadm bootstrap, the manual parsing would fail, depending on the individual specification, causing the entire specification to be discarded. With this fix, cephadm no longer attempts to parse the yaml more than necessary. The host specification is searched only for the purpose of spreading SSH keys. Otherwise, the specification is just passed up to the manager module. The cephadm bootstrap --apply-spec command now works as expected with any valid specification. Bugzilla:2112309 host.containers.internal entry is no longer added to the /etc/hosts file of deployed containers Previously, certain podman versions would, by default, add a host.containers.internal entry to the /etc/hosts file of deployed containers. Due to this, issues arose in some services with respect to this entry, as it was misunderstood to represent the FQDN of a real node. With this fix, Cephadm mounts the host's /etc/hosts file when deploying containers. The host.containers.internal entry in the /etc/hosts file in the containers is no longer present, avoiding all bugs related to the entry, although users can still see the host's /etc/hosts for name resolution within the container. Bugzilla:2133549 Cephadm now logs device information only when an actual change occurs Previously, cephadm would compare all fields reported for OSDs to check for new or changed devices. But one of these fields included a timestamp that would differ every time. Due to this, cephadm would log that it 'Detected new or changed devices' every time it refreshed a host's devices, regardless of whether anything actually changed or not. With this fix, the comparison of device information against previously gathered information no longer takes into account the timestamp fields that are expected to constantly change. Cephadm now logs only when there is an actual change in the devices. Bugzilla:2136336 The generated Prometheus URL is now accessible Previously, if a host did not have an FQDN, the generated Prometheus URL would use the host's short name, in the form http://host-shortname:9095 , and it would be inaccessible. With this fix, if no FQDN is available, the host IP is used over the shortname. The URL generated for Prometheus is now in a format that is accessible, even if the host that Prometheus is deployed on has no FQDN available.
Bugzilla:2153726 cephadm no longer has permission issues while writing files to the host Previously, cephadm would first create files within the /tmp directory, and then move them to their final location. Due to this, in certain setups, a permission issue would arise when writing files, making cephadm effectively unable to operate until permissions were modified. With this fix, cephadm uses a subdirectory within /tmp to write files to the host that does not have the same permission issues. Bugzilla:2182035 4.2. Ceph Dashboard The default option in the OSD creation step of the Expand Cluster wizard works as expected Previously, the default option in the OSD creation step of the Expand Cluster wizard was not working on the dashboard, causing the user to be misled by showing the option as "selected". With this fix, the default option works as expected. Additionally, a "Skip" button is added if the user decides to skip the step. Bugzilla:2111751 Users can create normal or mirror snapshots Previously, even though users were supposed to be able to create both a normal image snapshot and a mirror image snapshot, it was not possible to create a normal image snapshot. With this fix, the user can choose from two options to select either normal or mirror image snapshot modes. Bugzilla:2145104 Flicker no longer occurs on the Host page Previously, the host page would flicker after 5 seconds if there was more than one host, causing a bad user experience. With this fix, the API is optimized to load the page normally and the flicker no longer occurs. Bugzilla:2164327 4.3. Ceph Metrics The metric names produced by the Ceph exporter and the Prometheus manager module are the same Previously, the metrics coming from the Ceph daemons (performance counters) were produced by the Prometheus manager module. The new Ceph exporter, which replaces the Prometheus manager module, did not produce metric names that followed the same rules applied in the Prometheus manager module. Due to this, the names of the metrics for the same performance counters were different depending on the provider of the metric (Prometheus manager module or Ceph exporter). With this fix, the Ceph exporter uses the same rules as the ones in the Prometheus manager module to generate metric names from Ceph performance counters. The metrics produced by the Ceph exporter and the Prometheus manager module are exactly the same. Bugzilla:2186557 4.4. Ceph File System mtime and change_attr are now updated for the snapshot directory when snapshots are created Previously, libcephfs clients would not update mtime and change_attr when snaps were created or deleted. Due to this, NFS clients could not list CephFS snapshots within a CephFS NFS-Ganesha export correctly. With this fix, mtime and change_attr are updated for the snapshot directory, .snap , when snapshots are created, deleted, and renamed. Correct mtime and change_attr ensure that listing snapshots does not return stale snapshot entries. Bugzilla:1975689 cephfs-top -d [--delay] option accepts only integer values ranging between 1 and 25 Previously, the cephfs-top -d [--delay] option would not work properly, due to the addition of a few new curses methods. The new curses methods would accept only integer values, due to which an exception was thrown on getting the float values from a helper function. With this fix, the cephfs-top -d [--delay] option accepts only integer values ranging between 1 and 25, and the cephfs-top utility works as expected.
Bugzilla:2136031 Creating the same dentries after the unlink finishes does not crash the MDS daemons Previously, there was a race condition between unlink and create operations. Due to this, if the unlink request was delayed for any reason, and creating the same dentries was attempted during this time, it would either fail by crashing the MDS daemons, or the new creation would succeed but the written content would be lost. With this fix, users need to ensure that they wait until the unlink finishes, to avoid conflict when creating the same dentries. Bugzilla:2140784 Non-existing cluster no longer shows up when running the ceph nfs cluster info CLUSTER_ID command. Previously, the existence of a cluster would not be checked when the ceph nfs cluster info CLUSTER_ID command was run, due to which information about the non-existing cluster would be shown, such as virtual_ip and backend , null and empty respectively. With this fix, the ceph nfs cluster info CLUSTER_ID command checks the cluster existence and an Error ENOENT: cluster does not exist is thrown in case a non-existing cluster is queried. Bugzilla:2149415 The snap-schedule module no longer incorrectly refers to the volumes module Previously, the snap-schedule module would incorrectly refer to the volumes module when attempting to fetch the subvolume path. Due to using the incorrect name of the volumes module and remote method name, the ImportError traceback would be seen. With this fix, the untested and incorrect code is rectified, and the method is implemented and correctly invoked from the snap-schedule CLI interface methods. The snap-schedule module now correctly resolves the subvolume path when trying to add a subvolume level schedule. Bugzilla:2153196 Integer overflow and ops_in_flight value overflow no longer happen Previously, _calculate_ops would rely on a configuration option, filer_max_purge_ops , which could also be modified on the fly. Due to this, if the value of ops_in_flight was set to more than uint64's capability, there would be an integer overflow, and this would make ops_in_flight far greater than max_purge_ops and it would not be able to go back to a reasonable value. With this fix, the usage of filer_max_purge_ops in ops_in_flight is ignored, since it is already used in Filer::_do_purge_range() . Integer overflow and ops_in_flight value overflow no longer happen. Bugzilla:2159307 Invalid OSD requests are no longer submitted to RADOS Previously, when the first dentry had enough metadata and the size was larger than max_write_size , an invalid OSD request would be submitted to RADOS. Due to this, RADOS would fail the invalid request, causing CephFS to become read-only. With this fix, all the OSD requests are filled with validated information before sending them to RADOS, and no invalid OSD requests cause CephFS to become read-only. Bugzilla:2160598 MDS now processes all stray directory entries. Previously, a bug in the MDS stray directory processing logic caused the MDS to skip processing a few stray directory entries. Due to this, the MDS would not process all stray directory entries, causing deleted files to not free up space. With this fix, the stray index pointer is corrected, so that the MDS processes all stray directories.
Bugzilla:2161479 Pool-level snaps for pools attached to a Ceph File System are disabled Previously, the pool-level snaps and mon-managed snaps had their own snap ID namespace and this caused a clash between the IDs, and the Ceph Monitor was unable to uniquely identify a snap as to whether it is a pool-level snap or a mon-managed snap. Due to this, there were chances for the wrong snap to get deleted when referring to an ID, which is present in the set of pool-level snaps and mon-managed snaps. With this fix, the pool-level snaps for the pools attached to a Ceph File System are disabled and no clash of pool IDs occurs. Hence, no unintentional data loss happens when a CephFS snap is removed. Bugzilla:2168541 Client requests no longer bounce indefinitely between MDS and clients Previously, there was a mismatch between the Ceph protocols for client requests between CephFS client and MDS. Due to this, the corresponding information would be truncated or lost when communicating between CephFS clients and MDS, and the client requests would indefinitely bounce between MDS and clients. With this fix, the type of the corresponding members in the protocol for the client requests is corrected by making them the same type and the new code is made to be compatible with the old Cephs. The client request does not bounce between MDS and clients indefinitely, and stops after being well retried. Bugzilla:2172791 A code assert is added to the Ceph Manager daemon service to detect metadata corruption Previously, a type of snapshot-related metadata corruption would be introduced by the manager daemon service for workloads running Postgres, and possibly others. With this fix, a code assert is added to the manager daemon service which is triggered if a new corruption is detected. This reduces the proliferation of the damage, and allows the collection of logs to ascertain the cause. Note If daemons crash after the cluster is upgraded to Red Hat Ceph Storage 6.1, contact Red Hat support for analysis and corrective action. Bugzilla:2175307 MDS daemons no longer crash due to sessionmap version mismatch issue Previously, MDS sessionmap journal log would not correctly persist when MDS failover occurred. Due to this, when a new MDS was trying to replay the journal logs, the sessionmap journal logs would mismatch with the information in the MDCache or the information from other journal logs, causing the MDS daemons to trigger an assert to crash themselves. With this fix, trying to force replay the sessionmap version instead of crashing the MDS daemons results in no MDS daemon crashes due to sessionmap version mismatch issue. Bugzilla:2182564 MDS no longer gets indefinitely stuck while waiting for the cap revocation acknowledgement Previously, if __setattrx() failed, the _write() would retain the CEPH_CAP_FILE_WR caps reference, the MDS would be indefinitely stuck waiting for the cap revocation acknowledgment. It would also cause other clients' requests to be stuck indefinitely. With this fix, the CEPH_CAP_FILE_WR caps reference is released if the __setattrx() fails and MDS' caps revoke request is not stuck. Bugzilla:2182613 4.5. The Ceph Volume utility The correct size is calculated for each database device in ceph-volume Previously, as of RHCS 4.3, ceph-volume would not make a single VG with all database devices inside, since each database device had its own VG. Due to this, the database size was calculated differently for each LV. 
With this release, the logic is updated to take into account the new database devices with LVM layout. The correct size is calculated for each database device. Bugzilla:2185588 4.6. Ceph Object Gateway Topic creation is now allowed with or without trailing slash Previously, HTTP endpoints with a trailing slash in the push-endpoint URL failed to create a topic. With this fix, topic creation is allowed with or without a trailing slash, and topics are created successfully. Bugzilla:2082666 Blocksize is changed to 4K Previously, Ceph Object Gateway GC processing would consume excessive time due to the use of a 1K blocksize when consuming the GC queue. This caused slower processing of large GC queues. With this fix, blocksize is changed to 4K, which has accelerated the processing of large GC queues. Bugzilla:2142167 Timestamp is sent in the multipart upload bucket notification event to the receiver Previously, no timestamp was sent on the multipart upload bucket notification event. Due to this, the receiver of the event would not know when the multipart upload ended. With this fix, the timestamp when the multipart upload ends is sent in the notification event to the receiver. Bugzilla:2149259 Object size and etag values are no longer sent as 0 / empty Previously, some object metadata would not be decoded before dispatching bucket notifications from the lifecycle. Due to this, object size and etag values were sent as 0 / empty in notifications from lifecycle events. With this fix, object metadata is fetched and values are now correctly sent with notifications. Bugzilla:2153533 Ceph Object Gateway recovers from Kafka broker disconnections Previously, if the Kafka broker was down for more than 30 seconds, there would be no reconnect after the broker was up again. Due to this, bucket notifications would not be sent, and eventually, after the queue filled up, S3 operations that require notifications would be rejected. With this fix, the broker reconnect happens regardless of how long the broker is down, and the Ceph Object Gateway is able to recover from Kafka broker disconnects. Bugzilla:2184268 S3 PUT requests with chunked Transfer-Encoding do not require content-length Previously, S3 clients that PUT objects with Transfer-Encoding:chunked , without providing the x-amz-decoded-content-length field, would fail. As a result, the S3 PUT requests would fail with the 411 Length Required HTTP status code. With this fix, S3 PUT requests with chunked Transfer-Encoding need not specify a content-length , and S3 clients can perform S3 PUT requests as expected. Bugzilla:2186760 Users can now configure the remote S3 service with the right credentials Previously, while configuring the remote cloud S3 object store service to transition objects, access keys starting with a digit were incorrectly parsed. Due to this, there were chances for the object transition to fail. With this fix, the keys are parsed correctly. Users can now configure the remote S3 service with the right credentials for transition. Bugzilla:2187394 4.7. Multi-site Ceph Object Gateway Bucket attributes are no longer overwritten in the archive sync module Previously, bucket attributes were overwritten in the archive sync module. Due to this, bucket policy or any other attributes would be reset when archive zone sync_object() was executed. With this fix, bucket attributes are not reset. Any bucket attribute set on the source replicates to the archive zone without being reset.
Bugzilla:1937618 Zonegroup is added to the bucket ARN in the notification event Previously, the zonegroup was missing from the bucket ARN in the notification event. Due to this, when the notification events handler received events from multiple zone groups, it caused confusion in the identification of the source bucket of the event. With this fix, the zonegroup is added to the bucket ARN, and the notification events handler receiving events from multiple zone groups has all the required information. Bugzilla:2004175 bucket read_sync_status() command no longer returns a negative ret value Previously, bucket read_sync_status() would always return a negative ret value. Due to this, the bucket sync marker command would fail with: ERROR: sync.read_sync_status() returned error=0 . With this fix, the actual ret value from the bucket read_sync_status() operation is returned and the bucket sync marker command runs successfully. Bugzilla:2127926 New bucket instance information is stored in the newly created bucket Previously, in the archive zone, a new bucket would be created when a source bucket was deleted, in order to preserve the archived versions of objects. The new bucket instance information would be stored in the old instance, rendering the new bucket in the archive zone inaccessible. With this fix, the bucket instance information is stored in the newly created bucket. Deleted buckets on the source are still accessible in the archive zone. Bugzilla:2186774 Segmentation fault no longer occurs when bucket has a num_shards value of 0 Previously, multi-site sync would result in segmentation faults when a bucket had a num_shards value of 0 . This resulted in inconsistent sync behavior and segmentation faults. With this fix, num_shards=0 is properly represented in data sync, and buckets with a shard value of 0 do not have any issues with syncing. Bugzilla:2187617 4.8. RADOS Upon querying the IOPS capacity for an OSD, only the configuration option that matches the underlying device type shows the measured/default value Previously, the osd_mclock_max_capacity_iops_[ssd|hdd] values were set depending on the OSD's underlying device type. The configuration options also had default values that were displayed when queried. For example, if the underlying device type for an OSD was SSD, the default value for the HDD option, osd_mclock_max_capacity_iops_hdd , was also displayed with a non-zero value. Due to this, displaying values for both HDD and SSD options of an OSD when queried caused confusion regarding the correct option to interpret. With this fix, the IOPS capacity-related configuration option of the OSD that matches the underlying device type is set and the alternate/inactive configuration option is set to 0 . When a user queries the IOPS capacity for an OSD, only the configuration option that matches the underlying device type shows the measured/default value. The alternative/inactive option is set to 0 to clearly indicate that it is disabled.
Bugzilla:2024444 Snapshot mirroring no longer halts permanently Previously, if a primary snapshot creation request was forwarded to rbd-mirror daemon when the rbd-mirror daemon was axed for some practical reason before marking the snapshot as complete, the primary snapshot would be permanently incomplete. This is because, upon retrying that primary snapshot creation request, librbd would notice that such a snapshot already existed. It would not check whether this "pre-existing" snapshot was complete or not. Due to this, the mirroring of snapshots was permanently halted. With this fix, as part of the mirror snapshot creation, including being triggered by a scheduler, checks are made to ensure that any incomplete snapshots are deleted accordingly to resume the mirroring. Bugzilla:2120624
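To make the first Cephadm fix in this chapter concrete, the following hypothetical bootstrap invocation shows a quoted, comma-separated public_network list being passed through the --config parameter. The monitor IP and file name are illustrative and not taken from these release notes; the network list reuses the example values from the fix description above:
# Sketch: bootstrap with an initial configuration listing several public networks.
cat > initial-ceph.conf <<'EOF'
[global]
public_network = "172.120.3.0/24,172.117.3.0/24,172.118.3.0/24,172.119.3.0/24"
EOF
cephadm bootstrap --mon-ip 172.120.3.10 --config initial-ceph.conf
With the fix in place, public_network and cluster_network entries written this way are parsed as expected.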
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/6.1_release_notes/bug-fixes