Chapter 34. AWS Simple Email Service Component
Chapter 34. AWS Simple Email Service Component Available as of Camel version 2.9 The ses component supports sending emails with Amazon's SES service. Prerequisites You must have a valid Amazon Web Services developer account, and be signed up to use Amazon SES. More information is available at Amazon SES. 34.1. URI Format aws-ses://from[?options] You can append query options to the URI in the following format, ?options=value&option2=value&... 34.2. URI Options The AWS Simple Email Service component supports 5 options, which are listed below. Name Description Default Type configuration (advanced) The AWS SES default configuration SesConfiguration accessKey (producer) Amazon AWS Access Key String secretKey (producer) Amazon AWS Secret Key String region (producer) The region in which the SES client needs to work String resolvePropertyPlaceholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The AWS Simple Email Service endpoint is configured using URI syntax: aws-ses:from with the following path and query parameters: 34.2.1. Path Parameters (1 parameter): Name Description Default Type from Required The sender's email address. String 34.2.2. Query Parameters (11 parameters): Name Description Default Type amazonSESClient (producer) To use the AmazonSimpleEmailService as the client AmazonSimpleEmailService proxyHost (producer) To define a proxy host when instantiating the SES client String proxyPort (producer) To define a proxy port when instantiating the SES client Integer region (producer) The region in which the SES client needs to work String replyToAddresses (producer) List of reply-to email address(es) for the message, override it using the 'CamelAwsSesReplyToAddresses' header. List returnPath (producer) The email address to which bounce notifications are to be forwarded, override it using the 'CamelAwsSesReturnPath' header. String subject (producer) The subject which is used if the message header 'CamelAwsSesSubject' is not present. String to (producer) List of destination email addresses. Can be overridden with the 'CamelAwsSesTo' header. List synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean accessKey (security) Amazon AWS Access Key String secretKey (security) Amazon AWS Secret Key String 34.3. Spring Boot Auto-Configuration The component supports 16 options, which are listed below. Name Description Default Type camel.component.aws-ses.access-key Amazon AWS Access Key String camel.component.aws-ses.configuration.access-key Amazon AWS Access Key String camel.component.aws-ses.configuration.amazon-s-e-s-client To use the AmazonSimpleEmailService as the client AmazonSimpleEmailService camel.component.aws-ses.configuration.from The sender's email address. String camel.component.aws-ses.configuration.proxy-host To define a proxy host when instantiating the SES client String camel.component.aws-ses.configuration.proxy-port To define a proxy port when instantiating the SES client Integer camel.component.aws-ses.configuration.region The region in which the SES client needs to work String camel.component.aws-ses.configuration.reply-to-addresses List of reply-to email address(es) for the message, override it using the 'CamelAwsSesReplyToAddresses' header.
List camel.component.aws-ses.configuration.return-path The email address to which bounce notifications are to be forwarded, override it using the 'CamelAwsSesReturnPath' header. String camel.component.aws-ses.configuration.secret-key Amazon AWS Secret Key String camel.component.aws-ses.configuration.subject The subject which is used if the message header 'CamelAwsSesSubject' is not present. String camel.component.aws-ses.configuration.to List of destination email addresses. Can be overridden with the 'CamelAwsSesTo' header. List camel.component.aws-ses.enabled Enable the aws-ses component true Boolean camel.component.aws-ses.region The region in which the SES client needs to work String camel.component.aws-ses.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean camel.component.aws-ses.secret-key Amazon AWS Secret Key String Required SES component options You have to provide the amazonSESClient in the Registry or your accessKey and secretKey to access Amazon's SES. 34.4. Usage 34.4.1. Message headers evaluated by the SES producer Header Type Description CamelAwsSesFrom String The sender's email address. CamelAwsSesTo List<String> The destination(s) for this email. CamelAwsSesSubject String The subject of the message. CamelAwsSesReplyToAddresses List<String> The reply-to email address(es) for the message. CamelAwsSesReturnPath String The email address to which bounce notifications are to be forwarded. CamelAwsSesHtmlEmail Boolean Since Camel 2.12.3 The flag to show if the email content is HTML. 34.4.2. Message headers set by the SES producer Header Type Description CamelAwsSesMessageId String The Amazon SES message ID. 34.4.3. Advanced AmazonSimpleEmailService configuration If you need more control over the AmazonSimpleEmailService instance configuration, you can create your own instance and refer to it from the URI: from("direct:start") .to("aws-ses://[email protected]?amazonSESClient=#client"); The #client refers to an AmazonSimpleEmailService in the Registry. For example, if your Camel application is running behind a firewall: AWSCredentials awsCredentials = new BasicAWSCredentials("myAccessKey", "mySecretKey"); ClientConfiguration clientConfiguration = new ClientConfiguration(); clientConfiguration.setProxyHost("http://myProxyHost"); clientConfiguration.setProxyPort(8080); AmazonSimpleEmailService client = new AmazonSimpleEmailServiceClient(awsCredentials, clientConfiguration); registry.bind("client", client); 34.5. Dependencies Maven users will need to add the following dependency to their pom.xml. pom.xml <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-aws</artifactId> <version>${camel-version}</version> </dependency> where ${camel-version} must be replaced by the actual version of Camel (2.8.4 or higher). 34.6. See Also Configuring Camel Component Endpoint Getting Started AWS Component
[ "aws-ses://from[?options]", "aws-ses:from", "from(\"direct:start\") .to(\"aws-ses://[email protected]?amazonSESClient=#client\");", "AWSCredentials awsCredentials = new BasicAWSCredentials(\"myAccessKey\", \"mySecretKey\"); ClientConfiguration clientConfiguration = new ClientConfiguration(); clientConfiguration.setProxyHost(\"http://myProxyHost\"); clientConfiguration.setProxyPort(8080); AmazonSimpleEmailService client = new AmazonSimpleEmailServiceClient(awsCredentials, clientConfiguration); registry.bind(\"client\", client);", "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-aws</artifactId> <version>USD{camel-version}</version> </dependency>" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/aws-ses-component
Chapter 19. Red Hat Quay garbage collection
Chapter 19. Red Hat Quay garbage collection Red Hat Quay includes automatic and continuous image garbage collection. Garbage collection ensures efficient use of resources for active objects by removing objects that occupy sizeable amounts of disk space, such as dangling or untagged images, repositories, and blobs, including layers and manifests. Garbage collection performed by Red Hat Quay can reduce downtime in your organization's environment. 19.1. Red Hat Quay garbage collection in practice Currently, all garbage collection happens discreetly, and there are no commands to manually run garbage collection. Red Hat Quay provides metrics that track the status of the different garbage collection workers. For namespace and repository garbage collection, the progress is tracked based on the size of their respective queues. Namespace and repository garbage collection workers require a global lock to work. As a result, and for performance reasons, only one worker runs at a time. Note Red Hat Quay shares blobs between namespaces and repositories in order to conserve disk space. For example, if the same image is pushed 10 times, only one copy of that image will be stored. It is possible that tags can share their layers with different images already stored somewhere in Red Hat Quay. In that case, blobs will stay in storage, because deleting shared blobs would make other images unusable. Blob expiration is independent of the time machine. If you push a tag to Red Hat Quay and the time machine is set to 0 seconds, and then you delete a tag immediately, garbage collection deletes the tag and everything related to that tag, but will not delete the blob storage until the blob expiration time is reached. Garbage collecting tagged images works differently than garbage collection on namespaces or repositories. Rather than having a queue of items to work with, the garbage collection workers for tagged images actively search for a repository with inactive or expired tags to clean up. Each instance of garbage collection workers will grab a repository lock, which results in one worker per repository. Note In Red Hat Quay, inactive or expired tags are manifests without tags because the last tag was deleted or it expired. The manifest stores information about how the image is composed and stored in the database for each individual tag. When a tag is deleted and the allotted time from Time Machine has been met, Red Hat Quay garbage collects the blobs that are not connected to any other manifests in the registry. If a particular blob is connected to a manifest, then it is preserved in storage and only its connection to the manifest that is being deleted is removed. Expired images will disappear after the allotted time, but are still stored in Red Hat Quay. The time in which an image is completely deleted, or collected, depends on the Time Machine setting of your organization. The default time for garbage collection is 14 days unless otherwise specified. Until that time, tags can be pointed to an expired or deleted images. For each type of garbage collection, Red Hat Quay provides metrics for the number of rows per table deleted by each garbage collection worker. The following image shows an example of how Red Hat Quay monitors garbage collection with the same metrics: 19.1.1. Measuring storage reclamation Red Hat Quay does not have a way to track how much space is freed up by garbage collection. Currently, the best indicator of this is by checking how many blobs have been deleted in the provided metrics. 
Note The UploadedBlob table in the Red Hat Quay metrics tracks the various blobs that are associated with a repository. When a blob is uploaded, it will not be garbage collected before the time designated by the PUSH_TEMP_TAG_EXPIRATION_SEC parameter. This is to avoid prematurely deleting blobs that are part of an ongoing push. For example, if garbage collection is set to run often, and a tag is deleted in the span of less than one hour, then it is possible that the associated blobs will not get cleaned up immediately. Instead, and assuming that the time designated by the PUSH_TEMP_TAG_EXPIRATION_SEC parameter has passed, the associated blobs will be removed the time garbage collection is triggered to run by another expired tag on the same repository. 19.2. Garbage collection configuration fields The following configuration fields are available to customize what is garbage collected, and the frequency at which garbage collection occurs: Name Description Schema FEATURE_GARBAGE_COLLECTION Whether garbage collection is enabled for image tags. Defaults to true . Boolean FEATURE_NAMESPACE_GARBAGE_COLLECTION Whether garbage collection is enabled for namespaces. Defaults to true . Boolean FEATURE_REPOSITORY_GARBAGE_COLLECTION Whether garbage collection is enabled for repositories. Defaults to true . Boolean GARBAGE_COLLECTION_FREQUENCY The frequency, in seconds, at which the garbage collection worker runs. Affects only garbage collection workers. Defaults to 30 seconds. String PUSH_TEMP_TAG_EXPIRATION_SEC The number of seconds that blobs will not be garbage collected after being uploaded. This feature prevents garbage collection from cleaning up blobs that are not referenced yet, but still used as part of an ongoing push. String TAG_EXPIRATION_OPTIONS List of valid tag expiration values. String DEFAULT_TAG_EXPIRATION Tag expiration time for time machine. String CLEAN_BLOB_UPLOAD_FOLDER Automatically cleans stale blobs left over from an S3 multipart upload. By default, blob files older than two days are cleaned up every hour. Boolean + Default: true 19.3. Disabling garbage collection The garbage collection features for image tags, namespaces, and repositories are stored in the config.yaml file. These features default to true . In rare cases, you might want to disable garbage collection, for example, to control when garbage collection is performed. You can disable garbage collection by setting the GARBAGE_COLLECTION features to false . When disabled, dangling or untagged images, repositories, namespaces, layers, and manifests are not removed. This might increase the downtime of your environment. Note There is no command to manually run garbage collection. Instead, you would disable, and then re-enable, the garbage collection feature. 19.4. Garbage collection and quota management Red Hat Quay introduced quota management in 3.7. With quota management, users have the ability to report storage consumption and to contain registry growth by establishing configured storage quota limits. As of Red Hat Quay 3.7, garbage collection reclaims memory that was allocated to images, repositories, and blobs after deletion. Because the garbage collection feature reclaims memory after deletion, there is a discrepancy between what is stored in an environment's disk space and what quota management is reporting as the total consumption. There is currently no workaround for this issue. 19.5. 
Garbage collection in practice Use the following procedure to check your Red Hat Quay logs to ensure that garbage collection is working. Procedure Enter the following command to ensure that garbage collection is properly working: USD sudo podman logs <container_id> Example output: gcworker stdout | 2022-11-14 18:46:52,458 [63] [INFO] [apscheduler.executors.default] Job "GarbageCollectionWorker._garbage_collection_repos (trigger: interval[0:00:30], run at: 2022-11-14 18:47:22 UTC)" executed successfully Delete an image tag. Enter the following command to ensure that the tag was deleted: USD podman logs quay-app Example output: gunicorn-web stdout | 2022-11-14 19:23:44,574 [233] [INFO] [gunicorn.access] 192.168.0.38 - - [14/Nov/2022:19:23:44 +0000] "DELETE /api/v1/repository/quayadmin/busybox/tag/test HTTP/1.0" 204 0 "http://quay-server.example.com/repository/quayadmin/busybox?tab=tags" "Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0" 19.6. Red Hat Quay garbage collection metrics The following metrics show how many resources have been removed by garbage collection. These metrics show how many times the garbage collection workers have run and how many namespaces, repositories, and blobs were removed. Metric name Description quay_gc_iterations_total Number of iterations by the GCWorker quay_gc_namespaces_purged_total Number of namespaces purged by the NamespaceGCWorker quay_gc_repos_purged_total Number of repositories purged by the RepositoryGCWorker or NamespaceGCWorker quay_gc_storage_blobs_deleted_total Number of storage blobs deleted Sample metrics output # TYPE quay_gc_iterations_created gauge quay_gc_iterations_created{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 1.6317823190189714e+09 ... # HELP quay_gc_iterations_total number of iterations by the GCWorker # TYPE quay_gc_iterations_total counter quay_gc_iterations_total{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 0 ... # TYPE quay_gc_namespaces_purged_created gauge quay_gc_namespaces_purged_created{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 1.6317823190189433e+09 ... # HELP quay_gc_namespaces_purged_total number of namespaces purged by the NamespaceGCWorker # TYPE quay_gc_namespaces_purged_total counter quay_gc_namespaces_purged_total{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 0 .... # TYPE quay_gc_repos_purged_created gauge quay_gc_repos_purged_created{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 1.631782319018925e+09 ... # HELP quay_gc_repos_purged_total number of repositories purged by the RepositoryGCWorker or NamespaceGCWorker # TYPE quay_gc_repos_purged_total counter quay_gc_repos_purged_total{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 0 ... # TYPE quay_gc_storage_blobs_deleted_created gauge quay_gc_storage_blobs_deleted_created{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 1.6317823190189059e+09 ... 
# HELP quay_gc_storage_blobs_deleted_total number of storage blobs deleted # TYPE quay_gc_storage_blobs_deleted_total counter quay_gc_storage_blobs_deleted_total{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 0 ...
[ "sudo podman logs <container_id>", "gcworker stdout | 2022-11-14 18:46:52,458 [63] [INFO] [apscheduler.executors.default] Job \"GarbageCollectionWorker._garbage_collection_repos (trigger: interval[0:00:30], next run at: 2022-11-14 18:47:22 UTC)\" executed successfully", "podman logs quay-app", "gunicorn-web stdout | 2022-11-14 19:23:44,574 [233] [INFO] [gunicorn.access] 192.168.0.38 - - [14/Nov/2022:19:23:44 +0000] \"DELETE /api/v1/repository/quayadmin/busybox/tag/test HTTP/1.0\" 204 0 \"http://quay-server.example.com/repository/quayadmin/busybox?tab=tags\" \"Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0\"", "TYPE quay_gc_iterations_created gauge quay_gc_iterations_created{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 1.6317823190189714e+09 HELP quay_gc_iterations_total number of iterations by the GCWorker TYPE quay_gc_iterations_total counter quay_gc_iterations_total{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 0 TYPE quay_gc_namespaces_purged_created gauge quay_gc_namespaces_purged_created{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 1.6317823190189433e+09 HELP quay_gc_namespaces_purged_total number of namespaces purged by the NamespaceGCWorker TYPE quay_gc_namespaces_purged_total counter quay_gc_namespaces_purged_total{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 0 . TYPE quay_gc_repos_purged_created gauge quay_gc_repos_purged_created{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 1.631782319018925e+09 HELP quay_gc_repos_purged_total number of repositories purged by the RepositoryGCWorker or NamespaceGCWorker TYPE quay_gc_repos_purged_total counter quay_gc_repos_purged_total{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 0 TYPE quay_gc_storage_blobs_deleted_created gauge quay_gc_storage_blobs_deleted_created{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 1.6317823190189059e+09 HELP quay_gc_storage_blobs_deleted_total number of storage blobs deleted TYPE quay_gc_storage_blobs_deleted_total counter quay_gc_storage_blobs_deleted_total{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 0" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/manage_red_hat_quay/garbage-collection
6.2. Disabling vhost-net
6.2. Disabling vhost-net The vhost-net module is a kernel-level back end for virtio networking that reduces virtualization overhead by moving virtio packet processing tasks out of user space (the QEMU process) and into the kernel (the vhost-net driver). vhost-net is only available for virtio network interfaces. If the vhost-net kernel module is loaded, it is enabled by default for all virtio interfaces, but can be disabled in the interface configuration if a particular workload experiences a degradation in performance when vhost-net is in use. Specifically, when UDP traffic is sent from a host machine to a guest virtual machine on that host, performance degradation can occur if the guest virtual machine processes incoming data at a rate slower than the host machine sends it. In this situation, enabling vhost-net causes the UDP socket's receive buffer to overflow more quickly, which results in greater packet loss. It is therefore better to disable vhost-net in this situation to slow the traffic, and improve overall performance. To disable vhost-net , edit the <interface> sub-element in the guest virtual machine's XML configuration file and define the network as follows: Setting the driver name to qemu forces packet processing into QEMU user space, effectively disabling vhost-net for that interface.
[ "<interface type=\"network\"> <model type=\"virtio\"/> <driver name=\"qemu\"/> </interface>" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-network_configuration-disabling_vhost_net
20.3. Sending Commands with echo
20.3. Sending Commands with echo The virsh echo [--shell][--xml] arguments command displays the specified arguments in the specified format. The formats you can use are --shell and --xml . Each argument queried is displayed separated by a space. The --shell option generates output that is formatted in single quotes where needed, so it is suitable for copying and pasting into a bash shell as a command. If the --xml argument is used, the output is formatted for use in an XML file, which can then be saved or used for a guest's configuration.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-generic_commands-argument_display
5.2.28. /proc/swaps
5.2.28. /proc/swaps This file reports swap space and its utilization. For a system with only one swap partition, the output of /proc/swaps may look similar to the following: While some of this information can be found in other files in the /proc/ directory, /proc/swaps provides a snapshot of every swap file name, the type of swap space, the total size, and the amount of space in use (in kilobytes). The priority column is useful when multiple swap files are in use. The higher the priority, the more likely the swap file is to be used.
[ "Filename Type Size Used Priority /dev/mapper/VolGroup00-LogVol01 partition 524280 0 -1" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-proc-swaps
Installing on IBM Z and IBM LinuxONE
Installing on IBM Z and IBM LinuxONE OpenShift Container Platform 4.12 Installing OpenShift Container Platform on IBM Z and IBM LinuxONE Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_on_ibm_z_and_ibm_linuxone/index
Chapter 11. Consistent Network Device Naming
Chapter 11. Consistent Network Device Naming Red Hat Enterprise Linux provides methods for consistent and predictable network device naming for network interfaces. These features change the name of network interfaces on a system in order to make locating and differentiating the interfaces easier. Traditionally, network interfaces in Linux are enumerated as eth[0123...], but these names do not necessarily correspond to actual labels on the chassis. Modern server platforms with multiple network adapters can encounter non-deterministic and counter-intuitive naming of these interfaces. This affects both network adapters embedded on the motherboard ( LAN-on-Motherboard , or LOM ) and add-in (single and multiport) adapters. In Red Hat Enterprise Linux, udev supports a number of different naming schemes. The default is to assign fixed names based on firmware, topology, and location information. This has the advantage that the names are fully automatic, fully predictable, that they stay fixed even if hardware is added or removed (no re-enumeration takes place), and that broken hardware can be replaced seamlessly. The disadvantage is that they are sometimes harder to read than the eth or wlan names traditionally used. For example: enp5s0 . Warning Red Hat does not support systems with consistent device naming disabled. For further details, see Is it safe to set net.ifnames=0? 11.1. Naming Schemes Hierarchy By default, systemd will name interfaces using the following policy to apply the supported naming schemes: Scheme 1: Names incorporating Firmware or BIOS provided index numbers for on-board devices (example: eno1 ) are applied if that information from the firmware or BIOS is applicable and available, else falling back to scheme 2. Scheme 2: Names incorporating Firmware or BIOS provided PCI Express hotplug slot index numbers (example: ens1 ) are applied if that information from the firmware or BIOS is applicable and available, else falling back to scheme 3. Scheme 3: Names incorporating the physical location of the connector of the hardware (example: enp2s0 ) are applied if applicable, else falling directly back to scheme 5 in all other cases. Scheme 4: Names incorporating the interface's MAC address (example: enx78e7d1ea46da ) are not used by default, but are available if the user chooses. Scheme 5: The traditional unpredictable kernel naming scheme is used if all other methods fail (example: eth0 ). This policy, the procedure outlined above, is the default. If the system has biosdevname enabled, it will be used. Note that enabling biosdevname requires passing biosdevname=1 as a kernel command-line parameter, except in the case of a Dell system, where biosdevname will be used by default as long as it is installed. If the user has added udev rules which change the name of the kernel devices, those rules will take precedence.
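To see which names the naming policy produced on a given system, you can inspect /sys/class/net directly. The following Python sketch lists each interface together with its MAC address, which makes it easier to correlate a predictable name such as enp5s0 with a physical port; it only reads standard sysfs files and is provided as an illustration.

import os

SYS_NET = "/sys/class/net"

def list_interfaces():
    """Return a mapping of interface name to MAC address, read from sysfs."""
    interfaces = {}
    for name in sorted(os.listdir(SYS_NET)):
        addr_file = os.path.join(SYS_NET, name, "address")
        try:
            with open(addr_file) as f:
                interfaces[name] = f.read().strip()
        except OSError:
            interfaces[name] = "unknown"
    return interfaces

if __name__ == "__main__":
    for name, mac in list_interfaces().items():
        print(f"{name}\t{mac}")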
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/networking_guide/ch-consistent_network_device_naming
Chapter 2. Creating a subscription allocation for a disconnected Satellite Server
Chapter 2. Creating a subscription allocation for a disconnected Satellite Server Users on a connected Satellite Server create subscription manifests in the Manifests section of the Red Hat Hybrid Cloud Console. For information about how to create a manifest for a connected Satellite Server, see Creating a manifest for a connected Satellite Server . Users of a disconnected Satellite Server can still create a new subscription allocation to set aside subscriptions and entitlements for a system that is offline or air-gapped. This is necessary before you can download the allocation's manifest and upload it to a system. Procedure To create a manifest for a disconnected or air-gapped Satellite Server, complete the following steps: From the Subscription Allocations page, click Create Manifest . Click New Subscription Allocation . Enter a Name for the allocation so that you can find it later. Select the Type of subscription management application you plan to use on the system. Click Create .
null
https://docs.redhat.com/en/documentation/subscription_central/1-latest/html/creating_and_managing_manifests_for_a_disconnected_satellite_server/sub_allocation_new_proc
Chapter 5. Configuring the Nagios Plugins for Ceph
Chapter 5. Configuring the Nagios Plugins for Ceph Configure the Nagios plug-ins for Red Hat Ceph Storage cluster. Prerequisites Root-level access to the Ceph Monitor host and Nagios Core Server. A running Red Hat Ceph Storage cluster. Procedure Log in to the Ceph monitor host and create a Ceph key and keyring for Nagios. Example Each plug-in will require authentication. Repeat this procedure for each host that contains a plug-in. Add a command for the check_ceph_health plug-in: Example Enable and restart the nrpe service: Example Repeat this procedure for each Ceph plug-in applicable to the host. Return to the Nagios Core server and define a check_nrpe command for the NRPE plug-in: Example Syntax On the Nagios Core server, edit the configuration file for the node and add a service for the Ceph plug-in. Example Syntax Replace HOSTNAME with the hostname of the Ceph host you want to monitor. Example Note The check_command setting uses check_nrpe! before the Ceph plug-in name. This tells NRPE to execute the check_ceph_health command on the remote node. Repeat this procedure for each plug-in applicable to the host. Restart the Nagios Core server: Example Before proceeding with additional configuration, ensure that the plug-ins are working on the Ceph host: Syntax Example Note The check_ceph_health plug-in performs the equivalent of the ceph health command. Additional Resources See Nagios plugins for Ceph for more information about Ceph Nagios plug-ins usage.
[ "ssh user@host01 [user@host01 ~]USD sudo su - cd /etc/ceph ceph auth get-or-create client.nagios mon 'allow r' > client.nagios.keyring", "vi /usr/local/nagios/etc/nrpe.cfg", "command[check_ceph_health]=/usr/lib/nagios/plugins/check_ceph_health --id nagios --keyring /etc/ceph/client.nagios.keyring", "systemctl enable nrpe systemctl restart nrpe", "cd /usr/local/nagios/etc/objects vi commands.cfg", "define command{ command_name check_nrpe command_line USDUSER1USD/check_nrpe -H USDHOSTADDRESSUSD -c USDARG1USD }", "vi /usr/local/nagios/etc/objects/host01.cfg", "define service { use generic-service host_name HOSTNAME service_description Ceph Health Check check_command check_nrpe!check_ceph_health }", "define service { use generic-service host_name host01 service_description Ceph Health Check check_command check_nrpe!check_ceph_health }", "systemctl restart nagios", "/usr/lib/nagios/plugins/check_ceph_health --id NAGIOS_USER --keyring /etc/ceph/client.nagios.keyring", "/usr/lib/nagios/plugins/check_ceph_health --id nagios --keyring /etc/ceph/client.nagios.keyring HEALTH OK" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/monitoring_ceph_with_nagios_guide/configuring-the-nagios-plugins-for-ceph_nagios
Chapter 10. Using ldapmodify to manage IdM users externally
Chapter 10. Using ldapmodify to manage IdM users externally As an IdM administrator, you can use the ipa commands to manage your directory content. Alternatively, you can use the ldapmodify command to achieve similar goals. You can use this command interactively and provide all the data directly in the command line. You can also provide the data to the ldapmodify command in a file in the LDAP Data Interchange Format (LDIF). 10.1. Templates for managing IdM user accounts externally The following templates can be used for various user management operations in IdM. The templates show which attributes you must modify using ldapmodify to achieve the following goals: Adding a new stage user Modifying a user's attribute Enabling a user Disabling a user Preserving a user The templates are formatted in the LDAP Data Interchange Format (LDIF). LDIF is a standard plain text data interchange format for representing LDAP directory content and update requests. Using the templates, you can configure the LDAP provider of your provisioning system to manage IdM user accounts. For detailed example procedures, see the following sections: Adding an IdM stage user defined in an LDIF file Adding an IdM stage user directly from the CLI using ldapmodify Preserving an IdM user with ldapmodify Templates for adding a new stage user A template for adding a user with UID and GID assigned automatically . The distinguished name (DN) of the created entry must start with uid=user_login : A template for adding a user with UID and GID assigned statically : You are not required to specify any IdM object classes when adding stage users. IdM adds these classes automatically after the users are activated. Templates for modifying existing users Modifying a user's attribute : Disabling a user : Enabling a user : Updating the nsAccountLock attribute has no effect on stage and preserved users. Even though the update operation completes successfully, the attribute value remains nsAccountLock: TRUE . Preserving a user : Note Before modifying a user, obtain the user's distinguished name (DN) by searching using the user's login. In the following example, the user_allowed_to_modify_user_entries user is a user allowed to modify user and group information, for example an activator or an IdM administrator. The password in the example is this user's password: 10.2. Templates for managing IdM group accounts externally The following templates can be used for various user group management operations in IdM. The templates show which attributes you must modify using ldapmodify to achieve the following aims: Creating a new group Deleting an existing group Adding a member to a group Removing a member from a group The templates are formatted in the LDAP Data Interchange Format (LDIF). LDIF is a standard plain text data interchange format for representing LDAP directory content and update requests. Using the templates, you can configure the LDAP provider of your provisioning system to manage IdM group accounts. Creating a new group Modifying groups Deleting an existing group : Adding a member to a group : Do not add stage or preserved users to groups. Even though the update operation completes successfully, the users will not be updated as members of the group. Only active users can belong to groups. Removing a member from a group : Note Before modifying a group, obtain the group's distinguished name (DN) by searching using the group's name. 10.3.
Using the ldapmodify command interactively You can modify Lightweight Directory Access Protocol (LDAP) entries in the interactive mode. Procedure In a command line, enter the LDAP Data Interchange Format (LDIF) statement after the ldapmodify command. Example 10.1. Changing the telephone number for a testuser Note that you need to obtain a Kerberos ticket for using the -Y option. Press Ctrl+D to exit the interactive mode. Alternatively, provide an LDIF file after the ldapmodify command: Example 10.2. The ldapmodify command reads modification data from an LDIF file Additional resources For more information about how to use the ldapmodify command, see the ldapmodify(1) man page on your system. For more information about the LDIF structure, see the ldif(5) man page on your system. 10.4. Preserving an IdM user with ldapmodify Follow this procedure to use ldapmodify to preserve an IdM user; that is, how to deactivate a user account after the employee has left the company. Prerequisites You can authenticate as an IdM user with a role to preserve users. Procedure Log in as an IdM user with a role to preserve users: Enter the ldapmodify command and specify the Generic Security Services API (GSSAPI) as the Simple Authentication and Security Layer (SASL) mechanism to be used for authentication: Enter the dn of the user you want to preserve: Enter modrdn as the type of change you want to perform: Specify the newrdn for the user: Indicate that you want to preserve the user: Specify the new superior DN : Preserving a user moves the entry to a new location in the directory information tree (DIT). For this reason, you must specify the DN of the new parent entry as the new superior DN. Press Enter again to confirm that this is the end of the entry: Exit the connection using Ctrl + C . Verification Verify that the user has been preserved by listing all preserved users:
[ "dn: uid=user_login ,cn=staged users,cn=accounts,cn=provisioning,dc=idm,dc=example,dc=com changetype: add objectClass: top objectClass: inetorgperson uid: user_login sn: surname givenName: first_name cn: full_name", "dn: uid=user_login,cn=staged users,cn=accounts,cn=provisioning,dc=idm,dc=example,dc=com changetype: add objectClass: top objectClass: person objectClass: inetorgperson objectClass: organizationalperson objectClass: posixaccount uid: user_login uidNumber: UID_number gidNumber: GID_number sn: surname givenName: first_name cn: full_name homeDirectory: /home/user_login", "dn: distinguished_name changetype: modify replace: attribute_to_modify attribute_to_modify: new_value", "dn: distinguished_name changetype: modify replace: nsAccountLock nsAccountLock: TRUE", "dn: distinguished_name changetype: modify replace: nsAccountLock nsAccountLock: FALSE", "dn: distinguished_name changetype: modrdn newrdn: uid=user_login deleteoldrdn: 0 newsuperior: cn=deleted users,cn=accounts,cn=provisioning,dc=idm,dc=example,dc=com", "ldapsearch -LLL -x -D \"uid= user_allowed_to_modify_user_entries ,cn=users,cn=accounts,dc=idm,dc=example,dc=com\" -w \"Secret123\" -H ldap://r8server.idm.example.com -b \"cn=users,cn=accounts,dc=idm,dc=example,dc=com\" uid=test_user dn: uid=test_user,cn=users,cn=accounts,dc=idm,dc=example,dc=com memberOf: cn=ipausers,cn=groups,cn=accounts,dc=idm,dc=example,dc=com", "dn: cn=group_name,cn=groups,cn=accounts,dc=idm,dc=example,dc=com changetype: add objectClass: top objectClass: ipaobject objectClass: ipausergroup objectClass: groupofnames objectClass: nestedgroup objectClass: posixgroup uid: group_name cn: group_name gidNumber: GID_number", "dn: group_distinguished_name changetype: delete", "dn: group_distinguished_name changetype: modify add: member member: uid=user_login,cn=users,cn=accounts,dc=idm,dc=example,dc=com", "dn: distinguished_name changetype: modify delete: member member: uid=user_login,cn=users,cn=accounts,dc=idm,dc=example,dc=com", "ldapsearch -YGSSAPI -H ldap://server.idm.example.com -b \"cn=groups,cn=accounts,dc=idm,dc=example,dc=com\" \"cn=group_name\" dn: cn=group_name,cn=groups,cn=accounts,dc=idm,dc=example,dc=com ipaNTSecurityIdentifier: S-1-5-21-1650388524-2605035987-2578146103-11017 cn: testgroup objectClass: top objectClass: groupofnames objectClass: nestedgroup objectClass: ipausergroup objectClass: ipaobject objectClass: posixgroup objectClass: ipantgroupattrs ipaUniqueID: 569bf864-9d45-11ea-bea3-525400f6f085 gidNumber: 1997010017", "ldapmodify -Y GSSAPI -H ldap://server.example.com dn: uid=testuser,cn=users,cn=accounts,dc=example,dc=com changetype: modify replace: telephoneNumber telephonenumber: 88888888", "ldapmodify -Y GSSAPI -H ldap://server.example.com -f ~/example.ldif", "kinit admin", "ldapmodify -Y GSSAPI SASL/GSSAPI authentication started SASL username: [email protected] SASL SSF: 256 SASL data security layer installed.", "dn: uid=user1,cn=users,cn=accounts,dc=idm,dc=example,dc=com", "changetype: modrdn", "newrdn: uid=user1", "deleteoldrdn: 0", "newsuperior: cn=deleted users,cn=accounts,cn=provisioning,dc=idm,dc=example,dc=com", "[Enter] modifying rdn of entry \"uid=user1,cn=users,cn=accounts,dc=idm,dc=example,dc=com\"", "ipa user-find --preserved=true -------------- 1 user matched -------------- User login: user1 First name: First 1 Last name: Last 1 Home directory: /home/user1 Login shell: /bin/sh Principal name: [email protected] Principal alias: [email protected] Email address: [email protected] UID: 1997010003 GID: 1997010003 Account 
disabled: True Preserved user: True ---------------------------- Number of entries returned 1 ----------------------------" ]
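The same preserve operation can also be scripted instead of typed interactively. The following is a minimal sketch using the third-party Python ldap3 library, which is not part of IdM; it assumes the ldap3 and gssapi packages are installed, a valid Kerberos ticket is present, and it reuses the example server and DNs from this chapter.

from ldap3 import Server, Connection, SASL, KERBEROS

# Example server and DNs taken from this chapter; adjust for your environment.
server = Server("ldap://server.idm.example.com")
conn = Connection(server, authentication=SASL, sasl_mechanism=KERBEROS, auto_bind=True)

# Preserving a user is a modrdn operation that moves the entry under
# cn=deleted users,cn=accounts,cn=provisioning.
conn.modify_dn(
    "uid=user1,cn=users,cn=accounts,dc=idm,dc=example,dc=com",
    "uid=user1",                     # newrdn
    delete_old_dn=False,             # deleteoldrdn: 0
    new_superior="cn=deleted users,cn=accounts,cn=provisioning,dc=idm,dc=example,dc=com",
)
print(conn.result["description"])    # "success" if the user was preserved

conn.unbind()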
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_idm_users_groups_hosts_and_access_control_rules/using-ldapmodify-to-manage-IdM-users-externally_managing-users-groups-hosts
Chapter 3. Enable the Required Repositories
Chapter 3. Enable the Required Repositories Once the system is registered to either the RHEL for SAP Applications or the RHEL for SAP Solutions subscriptions as described in the chapter, the appropriate repositories can be enabled so that all required packages can be installed. 3.1. SAP NetWeaver/SAP ABAP Application Platform For SAP NetWeaver/SAP ABAP Application Platform on RHEL 8, enable one of the following sets of repos: Platform Repo ID (normal) Repo ID (eus) Repo ID (e4s) x86_64 rhel-8-for-x86_64-baseos-rpms rhel-8-for-x86_64-appstream-rpms rhel-8-for-x86_64-sap-netweaver-rpms rhel-8-for-x86_64-highavailability-rpms (#) rhel-8-for-x86_64-baseos-eus-rpms rhel-8-for-x86_64-appstream-eus-rpms rhel-8-for-x86_64-sap-netweaver-eus-rpms rhel-8-for-x86_64-highavailability-eus-rpms (#) rhel-8-for-x86_64-baseos-e4s-rpms rhel-8-for-x86_64-appstream-e4s-rpms rhel-8-for-x86_64-sap-netweaver-e4s-rpms rhel-8-for-x86_64-highavailability-e4s-rpms (#) ppc64le rhel-8-for-ppc64le-baseos-rpms rhel-8-for-ppc64le-appstream-rpms rhel-8-for-ppc64le-sap-netweaver-rpms rhel-8-for-ppc64le-highavailability-rpms (#) rhel-8-for-ppc64le-baseos-eus-rpms rhel-8-for-ppc64le-appstream-eus-rpms rhel-8-for-ppc64le-sap-netweaver-eus-rpms rhel-8-for-ppc64le-highavailability-eus-rpms (#) rhel-8-for-ppc64le-baseos-e4s-rpms rhel-8-for-ppc64le-appstream-e4s-rpms rhel-8-for-ppc64le-sap-netweaver-e4s-rpms rhel-8-for-ppc64le-highavailability-e4s-rpms (#) s390x rhel-8-for-s390x-baseos-rpms rhel-8-for-s390x-appstream-rpms rhel-8-for-s390x-sap-netweaver-rpms rhel-8-for-s390x-highavailability-rpms (#) rhel-8-for-s390x-baseos-eus-rpms rhel-8-for-s390x-appstream-eus-rpms rhel-8-for-s390x-sap-netweaver-eus-rpms rhel-8-for-s390x-highavailability-eus-rpms (#) - (#) This repo is only needed if one of the Red Hat HA solutions for SAP is going to be used. Note RHEL 8 is not supported on the ppc64 (IBM POWER, Big Endian) platform. To use the EUS or E4S variants of the repos, the RHEL 8 minor release must be set via subscription-manager. There are no RHEL 8 E4S repos for the s390x platform. The "normal", "EUS" and "E4S" variants of the repos must never be enabled at the same time, since they provide different versions of the same packages which will lead to package version conflicts when trying to install or update packages. To enable the normal repos for SAP NetWeaver/SAP ABAP Application Platform on RHEL 8, run the following command: # subscription-manager repos \ --disable="*" \ --enable="rhel-8-for-$(uname -m)-baseos-rpms" \ --enable="rhel-8-for-$(uname -m)-appstream-rpms" \ --enable="rhel-8-for-$(uname -m)-sap-netweaver-rpms" To enable the EUS repos for SAP NetWeaver/SAP ABAP Application Platform on RHEL 8 (on RHEL 8 minor releases where EUS repos are available) if one of the Red Hat HA solutions for SAP is going to be used, run the following command: # subscription-manager repos \ --disable="*" \ --enable="rhel-8-for-$(uname -m)-baseos-eus-rpms" \ --enable="rhel-8-for-$(uname -m)-appstream-eus-rpms" \ --enable="rhel-8-for-$(uname -m)-sap-netweaver-eus-rpms" \ --enable="rhel-8-for-$(uname -m)-highavailability-eus-rpms" 3.2.
SAP HANA (with or without SAP NetWeaver/SAP ABAP Application Platform) on RHEL 8 up to and including RHEL 8.8 For SAP HANA on RHEL 8 up to and including RHEL 8.8, enable the following repos (enabling the sap-netweaver repos would not be necessary for SAP HANA only systems but for simplicity and greater flexibility, it is recommended to enable these in all cases): Platform Repo ID x86_64 rhel-8-for-x86_64-baseos-e4s-rpms rhel-8-for-x86_64-appstream-e4s-rpms rhel-8-for-x86_64-sap-solutions-e4s-rpms rhel-8-for-x86_64-sap-netweaver-e4s-rpms rhel-8-for-x86_64-highavailability-e4s-rpms (#) ppc64le rhel-8-for-ppc64le-baseos-e4s-rpms rhel-8-for-ppc64le-appstream-e4s-rpms rhel-8-for-ppc64le-sap-solutions-e4s-rpms rhel-8-for-ppc64le-sap-netweaver-e4s-rpms rhel-8-for-ppc64le-highavailability-e4s-rpms (#) (#) This repo is only needed if one of the Red Hat HA solutions for SAP is going to be used. Note SAP HANA is not supported on the s390x (IBM System Z) platform. To use the "e4s" variant of the repos, the RHEL 8 minor release must be set via subscription-manager to a RHEL 8 minor release for which "Update Services for SAP Solutions" (E4S) is available. Please check Update Services for SAP Solutions for the list of RHEL 8 minor releases for which "Update Services for SAP Solutions" (E4S) is available. For example, to set the release lock on a RHEL 8.8 system, run the following command: # subscription-manager release --set=8.8 To enable the correct repos for SAP HANA on a RHEL 8 system (on RHEL 8 minor releases where E4S repos are available), run the following command: # subscription-manager repos \ --disable="*" \ --enable="rhel-8-for-$(uname -m)-baseos-e4s-rpms" \ --enable="rhel-8-for-$(uname -m)-appstream-e4s-rpms" \ --enable="rhel-8-for-$(uname -m)-sap-solutions-e4s-rpms" \ --enable="rhel-8-for-$(uname -m)-sap-netweaver-e4s-rpms" To enable the correct repos for SAP HANA on a RHEL 8 system (on RHEL 8 minor releases where E4S repos are available) if one of the Red Hat HA solutions for SAP is going to be used, run the following command: # subscription-manager repos \ --disable="*" \ --enable="rhel-8-for-$(uname -m)-baseos-e4s-rpms" \ --enable="rhel-8-for-$(uname -m)-appstream-e4s-rpms" \ --enable="rhel-8-for-$(uname -m)-sap-solutions-e4s-rpms" \ --enable="rhel-8-for-$(uname -m)-sap-netweaver-e4s-rpms" \ --enable="rhel-8-for-$(uname -m)-highavailability-e4s-rpms" 3.3. SAP HANA (with or without SAP NetWeaver/SAP ABAP Application Platform) on RHEL 8.10 For SAP HANA on RHEL 8.10, do not set a release lock. Also, enable the normal repos instead of the E4S or EUS repos. This is because: RHEL 8.10 is the last RHEL minor release for RHEL 8, so the command yum update never updates the system to any release later than that of RHEL 8.10. The normal RHEL 8.10 repos will receive updates for more than 6 months after GA. See this chapter and this table in the Red Hat Enterprise Linux Life Cycle page for more details.
For SAP HANA on RHEL 8.10, enable the following repos (enabling the sap-netweaver repos would not be necessary for SAP HANA only systems, but for simplicity and greater flexibility, it is recommended to enable these in all cases): Platform Repo ID x86_64 rhel-8-for-x86_64-baseos-rpms rhel-8-for-x86_64-appstream-rpms rhel-8-for-x86_64-sap-solutions-rpms rhel-8-for-x86_64-sap-netweaver-rpms rhel-8-for-x86_64-highavailability-rpms (#) ppc64le rhel-8-for-ppc64le-baseos-rpms rhel-8-for-ppc64le-appstream-rpms rhel-8-for-ppc64le-sap-solutions-rpms rhel-8-for-ppc64le-sap-netweaver-rpms rhel-8-for-ppc64le-highavailability-rpms (#) (#) This repo is only needed if one of the Red Hat HA solutions for SAP is going to be used. Note SAP HANA is not supported on the s390x (IBM System Z) platform. A RHEL 8.10 system must not have a RHEL minor release lock set. You can verify by checking if the output of the following command is Release not set : # subscription-manager release If a minor release lock has been set, disable it with: # subscription-manager release --unset To enable the correct repos for SAP HANA on a RHEL 8.10 system, run the following command: # subscription-manager repos \ --disable="*" \ --enable="rhel-8-for-$(uname -m)-baseos-rpms" \ --enable="rhel-8-for-$(uname -m)-appstream-rpms" \ --enable="rhel-8-for-$(uname -m)-sap-solutions-rpms" \ --enable="rhel-8-for-$(uname -m)-sap-netweaver-rpms" To enable the correct repos for SAP HANA on a RHEL 8.10 system, if one of the Red Hat HA solutions for SAP is going to be used, run the following command: # subscription-manager repos \ --disable="*" \ --enable="rhel-8-for-$(uname -m)-baseos-rpms" \ --enable="rhel-8-for-$(uname -m)-appstream-rpms" \ --enable="rhel-8-for-$(uname -m)-sap-solutions-rpms" \ --enable="rhel-8-for-$(uname -m)-sap-netweaver-rpms" \ --enable="rhel-8-for-$(uname -m)-highavailability-rpms"
[ "subscription-manager repos --disable=\"*\" --enable=\"rhel-8-for-USD(uname -m)-baseos-rpms\" --enable=\"rhel-8-for-USD(uname -m)-appstream-rpms\" --enable=\"rhel-8-for-USD(uname -m)-sap-netweaver-rpms\"", "subscription-manager repos --disable=\"*\" --enable=\"rhel-8-for-USD(uname -m)-baseos-eus-rpms\" --enable=\"rhel-8-for-USD(uname -m)-appstream-eus-rpms\" --enable=\"rhel-8-for-USD(uname -m)-sap-netweaver-eus-rpms\" --enable=\"rhel-8-for-USD(uname -m)-highavailability-eus-rpms\"", "subscription-manager release --set=8.8", "subscription-manager repos --disable=\"*\" --enable=\"rhel-8-for-USD(uname -m)-baseos-e4s-rpms\" --enable=\"rhel-8-for-USD(uname -m)-appstream-e4s-rpms\" --enable=\"rhel-8-for-USD(uname -m)-sap-solutions-e4s-rpms\" --enable=\"rhel-8-for-USD(uname -m)-sap-netweaver-e4s-rpms\"", "subscription-manager repos --disable=\"*\" --enable=\"rhel-8-for-USD(uname -m)-baseos-e4s-rpms\" --enable=\"rhel-8-for-USD(uname -m)-appstream-e4s-rpms\" --enable=\"rhel-8-for-USD(uname -m)-sap-solutions-e4s-rpms\" --enable=\"rhel-8-for-USD(uname -m)-sap-netweaver-e4s-rpms\" --enable=\"rhel-8-for-USD(uname -m)-highavailability-e4s-rpms\"", "subscription-manager release", "subscription-manager release --unset", "subscription-manager repos --disable=\"*\" --enable=\"rhel-8-for-USD(uname -m)-baseos-rpms\" --enable=\"rhel-8-for-USD(uname -m)-appstream-rpms\" --enable=\"rhel-8-for-USD(uname -m)-sap-solutions-rpms\" --enable=\"rhel-8-for-USD(uname -m)-sap-netweaver-rpms\"", "subscription-manager repos --disable=\"*\" --enable=\"rhel-8-for-USD(uname -m)-baseos-rpms\" --enable=\"rhel-8-for-USD(uname -m)-appstream-rpms\" --enable=\"rhel-8-for-USD(uname -m)-sap-solutions-rpms\" --enable=\"rhel-8-for-USD(uname -m)-sap-netweaver-rpms\" --enable=\"rhel-8-for-USD(uname -m)-highavailability-rpms\"" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/8/html/rhel_for_sap_subscriptions_and_repositories/asmb_enable_repo_rhel-for-sap-subscriptions-and-repositories-8
Chapter 3. Identity [user.openshift.io/v1]
Chapter 3. Identity [user.openshift.io/v1] Description Identity records a successful authentication of a user with an identity provider. The information about the source of authentication is stored on the identity, and the identity is then associated with a single user object. Multiple identities can reference a single user. Information retrieved from the authentication provider is stored in the extra field using a schema determined by the provider. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required providerName providerUserName user 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources extra object (string) Extra holds extra information about this identity kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata providerName string ProviderName is the source of identity information providerUserName string ProviderUserName uniquely represents this identity in the scope of the provider user ObjectReference User is a reference to the user this identity is associated with Both Name and UID must be set 3.2. API endpoints The following API endpoints are available: /apis/user.openshift.io/v1/identities DELETE : delete collection of Identity GET : list or watch objects of kind Identity POST : create an Identity /apis/user.openshift.io/v1/watch/identities GET : watch individual changes to a list of Identity. deprecated: use the 'watch' parameter with a list operation instead. /apis/user.openshift.io/v1/identities/{name} DELETE : delete an Identity GET : read the specified Identity PATCH : partially update the specified Identity PUT : replace the specified Identity /apis/user.openshift.io/v1/watch/identities/{name} GET : watch changes to an object of kind Identity. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 3.2.1. /apis/user.openshift.io/v1/identities Table 3.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Identity Table 3.2. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. 
If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. 
Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 3.3. Body parameters Parameter Type Description body DeleteOptions schema Table 3.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind Identity Table 3.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. 
If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. 
The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 3.6. HTTP responses HTTP code Reponse body 200 - OK IdentityList schema 401 - Unauthorized Empty HTTP method POST Description create an Identity Table 3.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.8. Body parameters Parameter Type Description body Identity schema Table 3.9. HTTP responses HTTP code Reponse body 200 - OK Identity schema 201 - Created Identity schema 202 - Accepted Identity schema 401 - Unauthorized Empty 3.2.2. /apis/user.openshift.io/v1/watch/identities Table 3.10. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. 
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. 
Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of Identity. deprecated: use the 'watch' parameter with a list operation instead. Table 3.11. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 3.2.3. /apis/user.openshift.io/v1/identities/{name} Table 3.12. Global path parameters Parameter Type Description name string name of the Identity Table 3.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an Identity Table 3.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 3.15. 
Body parameters Parameter Type Description body DeleteOptions schema Table 3.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Identity Table 3.17. HTTP responses HTTP code Reponse body 200 - OK Identity schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Identity Table 3.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 3.19. Body parameters Parameter Type Description body Patch schema Table 3.20. HTTP responses HTTP code Reponse body 200 - OK Identity schema 201 - Created Identity schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Identity Table 3.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.22. Body parameters Parameter Type Description body Identity schema Table 3.23. HTTP responses HTTP code Reponse body 200 - OK Identity schema 201 - Created Identity schema 401 - Unauthorized Empty 3.2.4. /apis/user.openshift.io/v1/watch/identities/{name} Table 3.24. Global path parameters Parameter Type Description name string name of the Identity Table 3.25. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. 
This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind Identity. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 3.26. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty
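For illustration, the following is a minimal Identity manifest that could be submitted to the create ( POST ) endpoint above. The provider name, user name, and UID are hypothetical placeholders; in practice, Identity objects are usually created by the configured identity providers during login rather than by hand, and the referenced User object must already exist so that both its name and UID can be set, as the specification requires.

apiVersion: user.openshift.io/v1
kind: Identity
metadata:
  name: example_idp:alice        # conventionally <providerName>:<providerUserName>
providerName: example_idp        # hypothetical identity provider name
providerUserName: alice          # the user name within that provider
user:
  name: alice                    # an existing User object
  uid: <uid-of-the-user>         # replace with the UID of that User object

Assuming the manifest is saved as identity.yaml, it could be created with:

USD oc create -f identity.yaml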
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/user_and_group_apis/identity-user-openshift-io-v1
Chapter 3. Deploy using local storage devices
Chapter 3. Deploy using local storage devices Deploying OpenShift Data Foundation on OpenShift Container Platform using local storage devices provides you with the option to create internal cluster resources. This will result in the internal provisioning of the base services, which helps to make additional storage classes available to applications. Use this section to deploy OpenShift Data Foundation on VMware where OpenShift Container Platform is already installed. Also, ensure that you have addressed the requirements in Preparing to deploy OpenShift Data Foundation chapter before proceeding with the steps. Installing Local Storage Operator Install the Red Hat OpenShift Data Foundation Operator . Create the OpenShift Data Foundation Cluster . 3.1. Installing Local Storage Operator Install the Local Storage Operator from the Operator Hub before creating Red Hat OpenShift Data Foundation clusters on local storage devices. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Type local storage in the Filter by keyword box to find the Local Storage Operator from the list of operators, and click on it. Set the following options on the Install Operator page: Update channel as stable . Installation mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-local-storage . Update approval as Automatic . Click Install . Verification steps Verify that the Local Storage Operator shows a green tick indicating successful installation. 3.2. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.16 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. 
Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 3.3. Creating OpenShift Data Foundation cluster on VMware vSphere VMware vSphere supports the following three types of local storage: Virtual machine disk (VMDK) Raw device mapping (RDM) VMDirectPath I/O Prerequisites Ensure that all the requirements in the Requirements for installing OpenShift Data Foundation using local storage devices section are met. You must have a minimum of three worker nodes with the same storage type and size attached to each node to use local storage devices on VMware. For VMs on VMware vSphere, ensure the disk.EnableUUID option is set to TRUE . You need to have vCenter account privileges to configure the VMs. For more information, see Required vCenter account privileges . To set the disk.EnableUUID option, use the Advanced option of the VM Options in the Customize hardware tab. For more information, see Installing on vSphere . Procedure In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click on the OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, perform the following: Select Full Deployment for the Deployment type option. Select the Create a new StorageClass using the local storage devices option. Optional: Select Use external PostgreSQL checkbox to use an external PostgreSQL [Technology preview] . This provides high availability solution for Multicloud Object Gateway where the PostgreSQL pod is a single point of failure. Provide the following connection details: Username Password Server name and Port Database name Select Enable TLS/SSL checkbox to enable encryption for the Postgres server. Click . Note You are prompted to install the Local Storage Operator if it is not already installed. Click Install and follow the procedure as described in Installing Local Storage Operator . In the Create local volume set page, provide the following information: Enter a name for the LocalVolumeSet and the StorageClass . By default, the local volume set name appears for the storage class name. You can change the name. Select one of the following: Disks on all nodes to use the available disks that match the selected filters on all nodes. Disks on selected nodes to use the available disks that match the selected filters only on selected nodes. Important The flexible scaling feature is enabled only when the storage cluster that you created with 3 or more nodes is spread across fewer than the minimum requirement of 3 availability zones. For information about flexible scaling, see knowledgebase article on Scaling OpenShift Data Foundation cluster using YAML when flexible scaling is enabled . Flexible scaling features get enabled at the time of deployment and can not be enabled or disabled later on. If the nodes selected do not match the OpenShift Data Foundation cluster requirement of an aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster is deployed. 
For minimum starting node requirements, see the Resource requirements section in the Planning guide. From the available list of Disk Type , select SSD/NVMe . Expand the Advanced section and set the following options: Volume Mode Block is selected by default. Device Type Select one or more device types from the dropdown list. Disk Size Set a minimum size of 100GB for the device and maximum available size of the device that needs to be included. Maximum Disks Limit This indicates the maximum number of PVs that can be created on a node. If this field is left empty, then PVs are created for all the available disks on the matching nodes. Click . A pop-up to confirm the creation of LocalVolumeSet is displayed. Click Yes to continue. In the Capacity and nodes page, configure the following: Available raw capacity is populated with the capacity value based on all the attached disks associated with the storage class. This takes some time to show up. The Selected nodes list shows the nodes based on the storage class. In the Configure performance section, select one of the following performance profiles: Lean Use this in a resource constrained environment with minimum resources that are lower than the recommended. This profile minimizes resource consumption by allocating fewer CPUs and less memory. Balanced (default) Use this when recommended resources are available. This profile provides a balance between resource consumption and performance for diverse workloads. Performance Use this in an environment with sufficient resources to get the best performance. This profile is tailored for high performance by allocating ample memory and CPUs to ensure optimal execution of demanding workloads. Note You have the option to configure the performance profile even after the deployment using the Configure performance option from the options menu of the StorageSystems tab. Important Before selecting a resource profile, make sure to check the current availability of resources within the cluster. Opting for a higher resource profile in a cluster with insufficient resources might lead to installation failures. For more information about resource requirements, see Resource requirement for performance profiles . Optional: Select the Taint nodes checkbox to dedicate the selected nodes for OpenShift Data Foundation. Click . Optional: In the Security and network page, configure the following based on your requirement: To enable encryption, select Enable data encryption for block and file storage . Select one of the following Encryption level : Cluster-wide encryption to encrypt the entire cluster (block and file). StorageClass encryption to create encrypted persistent volume (block only) using encryption enabled storage class. Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, either select Vault or Thales CipherTrust Manager (using KMIP) . If you selected Vault , go to the step. If you selected Thales CipherTrust Manager (using KMIP) , go to step iii. Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in the Backend Path that is dedicated and unique to OpenShift Data Foundation. 
Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in the Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below: Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Select a Network . Click . In the Data Protection page, if you are configuring Regional-DR solution for Openshift Data Foundation then select the Prepare cluster for disaster recovery (Regional-DR only) checkbox, else click . In the Review and create page, review the configuration details. To modify any configuration settings, click Back to go back to the configuration page. Click Create StorageSystem . Note When your deployment has five or more nodes, racks, or rooms, and when there are five or more number of failure domains present in the deployment, you can configure Ceph monitor counts based on the number of racks or zones. An alert is displayed in the notification panel or Alert Center of the OpenShift Web Console to indicate the option to increase the number of Ceph monitor counts. You can use the Configure option in the alert to configure the Ceph monitor counts. For more information, see Resolving low Ceph monitor count alert . Verification steps To verify the final Status of the installed storage cluster: In the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-storagecluster-storagesystem Resources . Verify that the Status of StorageCluster is Ready and has a green tick mark to it. To verify if flexible scaling is enabled on your storage cluster, perform the following steps (for arbiter mode, flexible scaling is disabled): In the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-storagecluster-storagesystem Resources . In the YAML tab, search for the keys flexibleScaling in spec section and failureDomain in status section. If flexible scaling is true and failureDomain is set to host, the flexible scaling feature is enabled. To verify that all components for OpenShift Data Foundation are successfully installed, see Verifying your OpenShift Data Foundation deployment . Additional resources To expand the capacity of the initial cluster, see the Scaling Storage guide.
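The flexible scaling check in the verification steps can also be performed from the command line. The following sketch assumes the default storage cluster name ocs-storagecluster and the openshift-storage namespace; adjust both if your deployment differs.

USD oc get storagecluster ocs-storagecluster -n openshift-storage -o yaml | grep -E 'flexibleScaling|failureDomain'

If the output shows flexibleScaling: true in the spec section and failureDomain: host in the status section, the flexible scaling feature is enabled, matching the keys shown in the YAML tab of the web console.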
[ "oc annotate namespace openshift-storage openshift.io/node-selector=", "spec: flexibleScaling: true [...] status: failureDomain: host" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_openshift_data_foundation_on_vmware_vsphere/deploy-using-local-storage-devices-vmware
Chapter 10. Configuring the node port service range
Chapter 10. Configuring the node port service range As a cluster administrator, you can expand the available node port range. If your cluster uses a large number of node ports, you might need to increase the number of available ports. The default port range is 30000-32767 . You can never reduce the port range, even if you first expand it beyond the default range. 10.1. Prerequisites Your cluster infrastructure must allow access to the ports that you specify within the expanded range. For example, if you expand the node port range to 30000-32900 , the inclusive port range of 32768-32900 must be allowed by your firewall or packet filtering configuration. 10.2. Expanding the node port range You can expand the node port range for the cluster. Important You can expand the node port range into the protected port range, which is between 0 and 32767. However, after expansion, you cannot change the range. Attempting to change the range returns the following error: The Network "cluster" is invalid: spec.serviceNodePortRange: Invalid value: "30000-32767": new service node port range 30000-32767 does not completely cover the range 0-32767 . Before making changes, ensure that the new range you set is appropriate for your cluster. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster as a user with cluster-admin privileges. Procedure To expand the node port range, enter the following command. Replace <port> with the largest port number in the new range. USD oc patch network.config.openshift.io cluster --type=merge -p \ '{ "spec": { "serviceNodePortRange": "30000-<port>" } }' Tip You can alternatively apply the following YAML to update the node port range: apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: serviceNodePortRange: "30000-<port>" Example output network.config.openshift.io/cluster patched To confirm that the configuration is active, enter the following command. It can take several minutes for the update to apply. USD oc get configmaps -n openshift-kube-apiserver config \ -o jsonpath="{.data['config\.yaml']}" | \ grep -Eo '"service-node-port-range":["[[:digit:]]+-[[:digit:]]+"]' Example output "service-node-port-range":["30000-33000"] 10.3. Additional resources Configuring ingress cluster traffic using a NodePort Network [config.openshift.io/v1 ] Service [core/v1 ]
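After the range is expanded, a Service can request a specific node port from the new range. The following manifest is an illustrative sketch with a hypothetical application name, assuming the range was expanded to 30000-32900 as in the prerequisites example; the nodePort value must fall inside the configured serviceNodePortRange or the API server rejects the Service.

apiVersion: v1
kind: Service
metadata:
  name: example-nodeport    # hypothetical Service name
spec:
  type: NodePort
  selector:
    app: example            # hypothetical Pod label
  ports:
  - port: 8080              # port exposed inside the cluster
    targetPort: 8080        # container port the traffic is forwarded to
    nodePort: 32900         # must be within the expanded range

USD oc apply -f service.yaml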
[ "oc patch network.config.openshift.io cluster --type=merge -p '{ \"spec\": { \"serviceNodePortRange\": \"30000-<port>\" } }'", "apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: serviceNodePortRange: \"30000-<port>\"", "network.config.openshift.io/cluster patched", "oc get configmaps -n openshift-kube-apiserver config -o jsonpath=\"{.data['config\\.yaml']}\" | grep -Eo '\"service-node-port-range\":[\"[[:digit:]]+-[[:digit:]]+\"]'", "\"service-node-port-range\":[\"30000-33000\"]" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/networking/configuring-node-port-service-range
Appendix C. Ceph Monitor configuration options
Appendix C. Ceph Monitor configuration options The following are Ceph monitor configuration options that can be set up during deployment. You can set these configuration options with the ceph config set mon CONFIGURATION_OPTION VALUE command. mon_initial_members Description The IDs of initial monitors in a cluster during startup. If specified, Ceph requires an odd number of monitors to form an initial quorum (for example, 3). Type String Default None mon_force_quorum_join Description Force monitor to join quorum even if it has been previously removed from the map Type Boolean Default False mon_dns_srv_name Description The service name used for querying the DNS for the monitor hosts/addresses. Type String Default ceph-mon fsid Description The cluster ID. One per cluster. Type UUID Required Yes. Default N/A. May be generated by a deployment tool if not specified. mon_data Description The monitor's data location. Type String Default /var/lib/ceph/mon/USDcluster-USDid mon_data_size_warn Description Ceph issues a HEALTH_WARN status in the cluster log when the monitor's data store reaches this threshold. The default value is 15GB. Type Integer Default 15*1024*1024*1024* mon_data_avail_warn Description Ceph issues a HEALTH_WARN status in the cluster log when the available disk space of the monitor's data store is lower than or equal to this percentage. Type Integer Default 30 mon_data_avail_crit Description Ceph issues a HEALTH_ERR status in the cluster log when the available disk space of the monitor's data store is lower or equal to this percentage. Type Integer Default 5 mon_warn_on_cache_pools_without_hit_sets Description Ceph issues a HEALTH_WARN status in the cluster log if a cache pool does not have the hit_set_type parameter set. Type Boolean Default True mon_warn_on_crush_straw_calc_version_zero Description Ceph issues a HEALTH_WARN status in the cluster log if the CRUSH's straw_calc_version is zero. See CRUSH tunables for details. Type Boolean Default True mon_warn_on_legacy_crush_tunables Description Ceph issues a HEALTH_WARN status in the cluster log if CRUSH tunables are too old (older than mon_min_crush_required_version ). Type Boolean Default True mon_crush_min_required_version Description This setting defines the minimum tunable profile version required by the cluster. Type String Default hammer mon_warn_on_osd_down_out_interval_zero Description Ceph issues a HEALTH_WARN status in the cluster log if the mon_osd_down_out_interval setting is zero, because the Leader behaves in a similar manner when the noout flag is set. Administrators find it easier to troubleshoot a cluster by setting the noout flag. Ceph issues the warning to ensure administrators know that the setting is zero. Type Boolean Default True mon_cache_target_full_warn_ratio Description Ceph issues a warning when between the ratio of cache_target_full and target_max_object . Type Float Default 0.66 mon_health_data_update_interval Description How often (in seconds) a monitor in the quorum shares its health status with its peers. A negative number disables health updates. Type Float Default 60 mon_health_to_clog Description This setting enables Ceph to send a health summary to the cluster log periodically. Type Boolean Default True mon_health_detail_to_clog Description This setting enable Ceph to send a health details to the cluster log periodically. Type Boolean Default True mon_op_complaint_time Description Number of seconds after which the Ceph Monitor operation is considered blocked after no updates. 
Type Integer Default 30 mon_health_to_clog_tick_interval Description How often (in seconds) the monitor sends a health summary to the cluster log. A non-positive number disables it. If the current health summary is empty or identical to the last time, the monitor will not send the status to the cluster log. Type Integer Default 60.000000 mon_health_to_clog_interval Description How often (in seconds) the monitor sends a health summary to the cluster log. A non-positive number disables it. The monitor will always send the summary to the cluster log. Type Integer Default 600 mon_osd_full_ratio Description The percentage of disk space used before an OSD is considered full . Type Float: Default .95 mon_osd_nearfull_ratio Description The percentage of disk space used before an OSD is considered nearfull . Type Float Default .85 mon_sync_trim_timeout Description, Type Double Default 30.0 mon_sync_heartbeat_timeout Description, Type Double Default 30.0 mon_sync_heartbeat_interval Description, Type Double Default 5.0 mon_sync_backoff_timeout Description, Type Double Default 30.0 mon_sync_timeout Description The number of seconds the monitor will wait for the update message from its sync provider before it gives up and bootstraps again. Type Double Default 60.000000 mon_sync_max_retries Description, Type Integer Default 5 mon_sync_max_payload_size Description The maximum size for a sync payload (in bytes). Type 32-bit Integer Default 1045676 paxos_max_join_drift Description The maximum Paxos iterations before we must first sync the monitor data stores. When a monitor finds that its peer is too far ahead of it, it will first sync with data stores before moving on. Type Integer Default 10 paxos_stash_full_interval Description How often (in commits) to stash a full copy of the PaxosService state. Currently this setting only affects mds , mon , auth and mgr PaxosServices. Type Integer Default 25 paxos_propose_interval Description Gather updates for this time interval before proposing a map update. Type Double Default 1.0 paxos_min Description The minimum number of paxos states to keep around Type Integer Default 500 paxos_min_wait Description The minimum amount of time to gather updates after a period of inactivity. Type Double Default 0.05 paxos_trim_min Description Number of extra proposals tolerated before trimming Type Integer Default 250 paxos_trim_max Description The maximum number of extra proposals to trim at a time Type Integer Default 500 paxos_service_trim_min Description The minimum amount of versions to trigger a trim (0 disables it) Type Integer Default 250 paxos_service_trim_max Description The maximum amount of versions to trim during a single proposal (0 disables it) Type Integer Default 500 mon_max_log_epochs Description The maximum amount of log epochs to trim during a single proposal Type Integer Default 500 mon_max_pgmap_epochs Description The maximum amount of pgmap epochs to trim during a single proposal Type Integer Default 500 mon_mds_force_trim_to Description Force monitor to trim mdsmaps to this point (0 disables it. dangerous, use with care) Type Integer Default 0 mon_osd_force_trim_to Description Force monitor to trim osdmaps to this point, even if there is PGs not clean at the specified epoch (0 disables it. 
dangerous, use with care) Type Integer Default 0 mon_osd_cache_size Description The size of osdmaps cache, not to rely on underlying store's cache Type Integer Default 500 mon_election_timeout Description On election proposer, maximum waiting time for all ACKs in seconds. Type Float Default 5 mon_lease Description The length (in seconds) of the lease on the monitor's versions. Type Float Default 5 mon_lease_renew_interval_factor Description mon lease * mon lease renew interval factor will be the interval for the Leader to renew the other monitor's leases. The factor should be less than 1.0 . Type Float Default 0.6 mon_lease_ack_timeout_factor Description The Leader will wait mon lease * mon lease ack timeout factor for the Providers to acknowledge the lease extension. Type Float Default 2.0 mon_accept_timeout_factor Description The Leader will wait mon lease * mon accept timeout factor for the Requesters to accept a Paxos update. It is also used during the Paxos recovery phase for similar purposes. Type Float Default 2.0 mon_min_osdmap_epochs Description Minimum number of OSD map epochs to keep at all times. Type 32-bit Integer Default 500 mon_max_pgmap_epochs Description Maximum number of PG map epochs the monitor should keep. Type 32-bit Integer Default 500 mon_max_log_epochs Description Maximum number of Log epochs the monitor should keep. Type 32-bit Integer Default 500 clock_offset Description How much to offset the system clock. See Clock.cc for details. Type Double Default 0 mon_tick_interval Description A monitor's tick interval in seconds. Type 32-bit Integer Default 5 mon_clock_drift_allowed Description The clock drift in seconds allowed between monitors. Type Float Default .050 mon_clock_drift_warn_backoff Description Exponential backoff for clock drift warnings. Type Float Default 5 mon_timecheck_interval Description The time check interval (clock drift check) in seconds for the leader. Type Float Default 300.0 mon_timecheck_skew_interval Description The time check interval (clock drift check) in seconds when in the presence of a skew in seconds for the Leader. Type Float Default 30.0 mon_max_osd Description The maximum number of OSDs allowed in the cluster. Type 32-bit Integer Default 10000 mon_globalid_prealloc Description The number of global IDs to pre-allocate for clients and daemons in the cluster. Type 32-bit Integer Default 10000 mon_sync_fs_threshold Description Synchronize with the filesystem when writing the specified number of objects. Set it to 0 to disable it. Type 32-bit Integer Default 5 mon_subscribe_interval Description The refresh interval, in seconds, for subscriptions. The subscription mechanism enables obtaining the cluster maps and log information. Type Double Default 86400.000000 mon_stat_smooth_intervals Description Ceph will smooth statistics over the last N PG maps. Type Integer Default 6 mon_probe_timeout Description Number of seconds the monitor will wait to find peers before bootstrapping. Type Double Default 2.0 mon_daemon_bytes Description The message memory cap for metadata server and OSD messages (in bytes). Type 64-bit Integer Unsigned Default 400ul << 20 mon_max_log_entries_per_event Description The maximum number of log entries per event. Type Integer Default 4096 mon_osd_prime_pg_temp Description Enables or disable priming the PGMap with the OSDs when an out OSD comes back into the cluster. With the true setting, the clients will continue to use the OSDs until the newly in OSDs as that PG peered. 
Type Boolean Default true mon_osd_prime_pg_temp_max_time Description How much time in seconds the monitor should spend trying to prime the PGMap when an out OSD comes back into the cluster. Type Float Default 0.5 mon_osd_prime_pg_temp_max_time_estimate Description Maximum estimate of time spent on each PG before we prime all PGs in parallel. Type Float Default 0.25 mon_osd_allow_primary_affinity Description Allow primary_affinity to be set in the osdmap. Type Boolean Default False mon_osd_pool_ec_fast_read Description Whether turn on fast read on the pool or not. It will be used as the default setting of newly created erasure pools if fast_read is not specified at create time. Type Boolean Default False mon_mds_skip_sanity Description Skip safety assertions on FSMap, in case of bugs where we want to continue anyway. Monitor terminates if the FSMap sanity check fails, but we can disable it by enabling this option. Type Boolean Default False mon_max_mdsmap_epochs Description The maximum amount of mdsmap epochs to trim during a single proposal. Type Integer Default 500 mon_config_key_max_entry_size Description The maximum size of config-key entry (in bytes). Type Integer Default 65536 mon_warn_pg_not_scrubbed_ratio Description The percentage of the scrub max interval past the scrub max interval to warn. Type float Default 0.5 mon_warn_pg_not_deep_scrubbed_ratio Description The percentage of the deep scrub interval past the deep scrub interval to warn Type float Default 0.75 mon_scrub_interval Description How often, in seconds, the monitor scrub its store by comparing the stored checksums with the computed ones of all the stored keys. Type Integer Default 3600*24 mon_scrub_timeout Description The timeout to restart scrub of mon quorum participant does not respond for the latest chunk. Type Integer Default 5 min mon_scrub_max_keys Description The maximum number of keys to scrub each time. Type Integer Default 100 mon_scrub_inject_crc_mismatch Description The probability of injecting CRC mismatches into Ceph Monitor scrub. Type Integer Default 3600*24 mon_scrub_inject_missing_keys Description The probability of injecting missing keys into mon scrub. Type float Default 0 mon_compact_on_start Description Compact the database used as Ceph Monitor store on ceph-mon start. A manual compaction helps to shrink the monitor database and improve its performance if the regular compaction fails to work. Type Boolean Default False mon_compact_on_bootstrap Description Compact the database used as Ceph Monitor store on bootstrap. The monitor starts probing each other for creating a quorum after bootstrap. If it times out before joining the quorum, it will start over and bootstrap itself again. Type Boolean Default False mon_compact_on_trim Description Compact a certain prefix (including paxos) when we trim its old states. Type Boolean Default True mon_cpu_threads Description Number of threads for performing CPU intensive work on monitor. Type Integer Default 4 mon_osd_mapping_pgs_per_chunk Description We calculate the mapping from the placement group to OSDs in chunks. This option specifies the number of placement groups per chunk. Type Integer Default 4096 mon_osd_max_split_count Description Largest number of PGs per "involved" OSD to let split create. When we increase the pg_num of a pool, the placement groups will be split on all OSDs serving that pool. We want to avoid extreme multipliers on PG splits. 
Type Integer Default 300 rados_mon_op_timeout Description Number of seconds to wait for a response from the monitor before returning an error from a rados operation. 0 means no limit on the wait time. Type Double Default 0 Additional Resources Pool Values CRUSH tunables
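These monitor options are typically set in the [mon] section of the Ceph configuration file or stored in the cluster configuration database. A minimal sketch using the option names listed above; the values shown are illustrative only, not tuning recommendations:
# ceph.conf fragment (illustrative values)
[mon]
mon_clock_drift_allowed = .050
mon_osd_cache_size = 500
# Or set an option at runtime through the cluster configuration database
ceph config set mon mon_clock_drift_allowed 0.050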
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/configuration_guide/ceph-monitor-configuration-options_conf
Chapter 17. Annotating encrypted RBD storage classes
Chapter 17. Annotating encrypted RBD storage classes Starting with OpenShift Data Foundation 4.14, when the OpenShift console creates a RADOS block device (RBD) storage class with encryption enabled, the annotation is set automatically. However, you need to add the annotation, cdi.kubevirt.io/clone-strategy=copy for any of the encrypted RBD storage classes that were previously created before updating to the OpenShift Data Foundation version 4.14. This enables customer data integration (CDI) to use host-assisted cloning instead of the default smart cloning. The keys used to access an encrypted volume are tied to the namespace where the volume was created. When cloning an encrypted volume to a new namespace, such as, provisioning a new OpenShift Virtualization virtual machine, a new volume must be created and the content of the source volume must then be copied into the new volume. This behavior is triggered automatically if the storage class is properly annotated.
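For example, the annotation can be added to an existing encrypted RBD storage class with the OpenShift CLI. A minimal sketch; the storage class name used here is a hypothetical placeholder, not one taken from this document:
# add the clone-strategy annotation to a previously created encrypted RBD storage class
oc annotate storageclass ocs-storagecluster-ceph-rbd-encrypted cdi.kubevirt.io/clone-strategy=copy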
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/troubleshooting_openshift_data_foundation/annotating-the-existing-encrypted-rbd-storageclasses_rhodf
Chapter 7. Log File Reference
Chapter 7. Log File Reference Red Hat Directory Server (Directory Server) provides logs to help monitor directory activity. Monitoring helps to quickly detect and remedy failures and, where done proactively, to anticipate and resolve potential problems before they result in failure or poor performance. Part of monitoring the directory effectively is understanding the structure and content of the log files. This chapter does not provide an exhaustive list of log messages. However, the information presented in this chapter serves as a good starting point for common problems and for better understanding the information in the access, error, and audit logs. Logs are kept per Directory Server instance and are located in the /var/log/dirsrv/slapd- instance directory. 7.1. Access Log Reference The Directory Server access log contains detailed information about client connections to the directory. A connection is a sequence of requests from the same client with the following structure: Connection record, which provides the connection index and the IP address of the client Bind record Bind result record Sequence of operation request and operation result pairs of records, or individual records in the case of connection, closed, and abandon records Unbind record Closed record The following is an example access log entry: Apart from connection, closed, and abandon records, which appear individually, all records appear in pairs, consisting of a request for service record followed by a RESULT record: The RESULT message contains the following performance-related entries: wtime : The amount of time the operation was waiting in the work queue before a worker thread picked up the operation optime : The amount of time it took for the actual operation to perform the task etime : The elapsed time, which covers the time the operation was received by the server to when the server sent back the result to the client Note The wtime and optime values provide useful information about how the server handles load and processes operations. Due to the timing of when Directory Server gathers these statistics, the sum of the wtime and optime values is slightly greater than the etime value. However, this very small difference is negligible. The access logs have different levels of logging, set in the nsslapd-accesslog-level attribute. The following sections provide an overview of the default access logging content, log levels, and the content logged at different logging levels: Section 7.1.1, "Access Logging Levels" Section 7.1.2, "Default Access Logging Content" Section 7.1.3, "Access Log Content for Additional Access Logging Levels" Note that you cannot change the format of the access log. 7.1.1. Access Logging Levels Different levels of access logging generate different amounts of detail and record different kinds of operations. The log level is set in the instance's Section 3.1.1.2, "nsslapd-accesslog-level (Access Log Level)" configuration attribute. The default level of logging is level 256, which logs access to an entry, but there are four different log levels available: 0 = No access logging. 4 = Logging for internal access operations. 256 = Logging for access to an entry. 512 = Logging for access to an entry and referrals. These levels are additive, so to enable several different kinds of logging, add the values of those levels together. For example, to log internal access operations, entry access, and referrals, set the value of nsslapd-accesslog-level to 516 ( 512 + 4 ). 7.1.2.
Default Access Logging Content This section describes the access log content in detail based on the default access logging level extract shown below. Example 7.1. Example Access Log Connection Number Every external LDAP request is listed with an incremental connection number, in this case conn=11 , starting at conn=0 immediately after server startup. Internal LDAP requests are not recorded in the access log by default. To activate the logging of internal access operations, specify access logging level 4 on the Section 3.1.1.2, "nsslapd-accesslog-level (Access Log Level)" configuration attribute. File Descriptor Every connection from an external LDAP client to Directory Server requires a file descriptor or socket descriptor from the operating system, in this case fd=608 . fd=608 indicates that it was file descriptor number 608 out of the total pool of available file descriptors which was used. Slot Number The slot number, in this case slot=608 , is a legacy part of the access log which has the same meaning as file descriptor. Ignore this part of the access log. Operation Number To process a given LDAP request, Directory Server will perform the required series of operations. For a given connection, all operation request and operation result pairs are given incremental operation numbers beginning with op=0 to identify the distinct operations being performed. In Section 7.1.2, "Default Access Logging Content" , we have op=0 for the bind operation request and result pair, then op=1 for the LDAP search request and result pair, and so on. The entry op=-1 in the access log generally means that the LDAP request for this connection was not issued by an external LDAP client but, instead, initiated internally. Method Type The method number, in this case method=128 , indicates which LDAPv3 bind method was used by the client. There are three possible bind method values: 0 for authentication 128 for simple bind with user password sasl for SASL bind using external authentication mechanism Version Number The version number, in this case version=3 , indicates the LDAP version number (either LDAPv2 or LDAPv3) that the LDAP client used to communicate with the LDAP server. Error Number The error number, in this case err=0 , provides the LDAP result code returned from the LDAP operation performed. The LDAP error number 0 means that the operation was successful. For a more comprehensive list of LDAP result codes, see Section 7.4, "LDAP Result Codes" . Tag Number The tag number, in this case tag=97 , indicates the type of result returned, which is almost always a reflection of the type of operation performed. The tags used are the BER tags from the LDAP protocol. Table 7.1. Commonly-Used Tags Tag Description tag=97 Result from a client bind operation. tag=100 The actual entry being searched for. tag=101 Result from a search operation. tag=103 Result from a modify operation. tag=105 Result from an add operation. tag=107 Result from a delete operation. tag=109 Result from a moddn operation. tag=111 Result from a compare operation. tag=115 Search reference when the entry on which the search was performed holds a referral to the required entry. Search references are expressed in terms of a referral. tag=120 Result from an extended operation. tag=121 Result from an intermediate operation. Note tag=100 and tag=115 are not result tags as such, and so it is unlikely that they will be recorded in the access log. 
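Because all request and result records for a client session share the same connection number, filtering the access log on that number is a quick way to reconstruct a session. A minimal sketch, assuming the conn=11 example above; slapd-instance_name is a placeholder for the actual instance directory, and the trailing space keeps conn=110 , conn=111 , and so on from matching:
grep "conn=11 " /var/log/dirsrv/slapd-instance_name/access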
Number of Entries nentries shows the number of entries, in this case nentries=0 , that were found matching the LDAP client's request. Elapsed Time etime shows the elapsed time, in this case etime=3 , or the amount of time (in seconds) that it took the Directory Server to perform the LDAP operation. An etime value of 0 means that the operation actually took 0 nanoseconds to perform. LDAP Request Type The LDAP request type indicates the type of LDAP request being issued by the LDAP client. Possible values are: SRCH for search MOD for modify DEL for delete ADD for add MODDN for moddn EXT for extended operation ABANDON for abandon operation If the LDAP request resulted in sorting of entries, then the message SORT serialno will be recorded in the log, followed by the number of candidate entries that were sorted. For example: The number enclosed in parentheses specifies the number of candidate entries that were sorted, which in this case is 1 . LDAP Response Type The LDAP response type indicates the LDAP response being issued to the LDAP client. There are three possible values: RESULT ENTRY REFERRAL , an LDAP referral or search reference Search Indicators Directory Server provides additional information on searches in the notes field of log entries. For example: The following search indicators exist: Paged Search Indicator: notes=P LDAP clients with limited resources can control the rate at which an LDAP server returns the results of a search operation. When the search performed used the LDAP control extension for simple paging of search results, Directory Server logs the notes=P paged search indicator. This indicator is informational and no further actions are required. For more details, see RFC 2696 . Unindexed Search Indicators: notes=A and notes=U When attributes are not indexed, Directory Server must search them in the database directly. This procedure is more resource-intensive than searching the index file. The following unindexed search indicators can be logged: notes=A All candidate attributes in the filter were unindexed and a full table scan was required. This can exceed the value set in the nsslapd-lookthroughlimit parameter. notes=U This state is set in the following situations: At least one of the search terms is unindexed. The limit set in the nsslapd-idlistscanlimit parameter was reached during the search operation. For details, see Section 4.4.1.9, "nsslapd-idlistscanlimit" . Unindexed searches occur in the following scenarios: The nsslapd-idlistscanlimit parameter's value was reached within the index file used for the search. No index file existed. The index file was not configured in the way required by the search. To optimize future searches, add frequently searched unindexed attributes to the index. For details, see the corresponding section in the Directory Server Administration Guide . Note An unindexed search indicator is often accompanied by a large etime value, as unindexed searches are generally more time consuming. Besides a single value, the notes field can have the following value combinations: notes=P,A and notes=U,P . VLV-Related Entries When a search involves virtual list views (VLVs), appropriate entries are logged in the access log file. Similar to the other entries, VLV-specific entries show the request and response information side by side: RequestInformation has the following form: If the client uses a position-by-value VLV request, the format of the first part, the request information, would be beforeCount: afterCount: value .
ResponseInformation has the following form: The example below highlights the VLV-specific entries: In the above example, the first part, 0:5:0210 , is the VLV request information: The beforeCount is 0 . The afterCount is 5 . The value is 0210 . The second part, 10:5397 (0) , is the VLV response information: The targetPosition is 10 . The contentCount is 5397 . The (resultCode) is (0) . Search Scope The entry scope=n defines the scope of the search performed, and n can have a value of 0 , 1 , or 2 . 0 for base search 1 for one-level search 2 for subtree search Extended Operation OID An extended operation OID, such as EXT oid="2.16.840.1.113730.3.5.3" or EXT oid="2.16.840.1.113730.3.5.5" in Example 7.1, "Example Access Log" , provides the OID of the extended operation being performed. Table 7.2, "LDAPv3 Extended Operations Supported by Directory Server" provides a partial list of LDAPv3 extended operations and their OIDs supported in Directory Server. Table 7.2. LDAPv3 Extended Operations Supported by Directory Server Extended Operation Name Description OID Directory Server Start Replication Request Sent by a replication initiator to indicate that a replication session is requested. 2.16.840.1.113730.3.5.3 Directory Server Replication Response Sent by a replication responder in response to a Start Replication Request Extended Operation or an End Replication Request Extended Operation. 2.16.840.1.113730.3.5.4 Directory Server End Replication Request Sent to indicate that a replication session is to be terminated. 2.16.840.1.113730.3.5.5 Directory Server Replication Entry Request Carries an entry, along with its state information ( csn and UniqueIdentifier ) and is used to perform a replica initialization. 2.16.840.1.113730.3.5.6 Directory Server Bulk Import Start Sent by the client to request a bulk import together with the suffix being imported to and sent by the server to indicate that the bulk import may begin. 2.16.840.1.113730.3.5.7 Directory Server Bulk Import Finished Sent by the client to signal the end of a bulk import and sent by the server to acknowledge it. 2.16.840.1.113730.3.5.8 Change Sequence Number The change sequence number, in this case csn=3b4c8cfb000000030000 , is the replication change sequence number, indicating that replication is enabled on this particular naming context. Abandon Message The abandon message indicates that an operation has been aborted. nentries=0 indicates the number of entries sent before the operation was aborted, etime=0 value indicates how much time (in seconds) had elapsed, and targetop=1 corresponds to an operation value from a previously initiated operation (that appears earlier in the access log). There are two possible log ABANDON messages, depending on whether the message ID succeeds in locating which operation was to be aborted. If the message ID succeeds in locating the operation (the targetop ) then the log will read as above. However, if the message ID does not succeed in locating the operation or if the operation had already finished prior to the ABANDON request being sent, then the log will read as follows: targetop=NOTFOUND indicates the operation to be aborted was either an unknown operation or already complete. Message ID The message ID, in this case msgid=2 , is the LDAP operation identifier, as generated by the LDAP SDK client. The message ID may have a different value than the operation number but identifies the same operation. 
The message ID is used with an ABANDON operation and tells the user which client operation is being abandoned. Note The Directory Server operation number starts counting at 0, and, in the majority of LDAP SDK/client implementations, the message ID number starts counting at 1, which explains why the message ID is frequently equal to the Directory Server operation number plus 1. SASL Multi-Stage Bind Logging In Directory Server, logging for multi-stage binds is explicit. Each stage in the bind process is logged. The error codes for these SASL connections are really return codes. In Example 7.1, "Example Access Log" , the SASL bind is currently in progress so it has a return code of err=14 , meaning the connection is still open, and there is a corresponding progress statement, SASL bind in progress . In logging a SASL bind, the sasl method is followed by the LDAP Version Number and the SASL mechanism used, as shown below with the GSS-API mechanism. Note The authenticated DN (the DN used for access control decisions) is now logged in the BIND result line as opposed to the bind request line, as was previously the case: For SASL binds, the DN value displayed in the bind request line is not used by the server and, as a consequence, is not relevant. However, given that the authenticated DN is the DN which, for SASL binds, must be used for audit purposes, it is essential that this be clearly logged. Having this authenticated DN logged in the bind result line avoids any confusion as to which DN is which. 7.1.3. Access Log Content for Additional Access Logging Levels This section presents the additional access logging levels available in the Directory Server access log. In Example 7.2, "Access Log Extract with Internal Access Operations Level (Level 4)" , access logging level 4 , which logs internal operations, is enabled. Example 7.2. Access Log Extract with Internal Access Operations Level (Level 4) Access log level 4 enables logging for internal operations, which log search base, scope, filter, and requested search attributes, in addition to the details of the search being performed. In the following example, access logging level 768 is enabled (512 + 256), which logs access to entries and referrals. In this extract, six entries and one referral are returned in response to the search request, which is shown on the first line. Connection Description The connection description, in this case conn=Internal , indicates that the connection is an internal connection. The operation number op=-1 also indicates that the operation was initiated internally. Options Description The options description ( options=persistent ) indicates that a persistent search is being performed, as distinguished from a regular search operation. Persistent searches can be used as a form of monitoring and configured to return changes to given configurations as changes occur. Both log levels 512 and 4 are enabled for this example, so both internal access operations and entry access and referrals are logged. 7.1.4. Common Connection Codes A connection code is a code that is added to the closed log message to provide additional information related to the connection closure. Table 7.3. Common Connection Codes Connection Code Description A1 Client aborts the connection. B1 Corrupt BER tag encountered. If BER tags, which encapsulate data being sent over the wire, are corrupt when they are received, a B1 connection code is logged to the access log.
BER tags can be corrupted due to physical layer network problems or bad LDAP client operations, such as an LDAP client aborting before receiving all request results. B2 BER tag is longer than the nsslapd-maxbersize attribute value. For further information about this configuration attribute, see Section 3.1.1.118, "nsslapd-maxbersize (Maximum Message Size)" . B3 Corrupt BER tag encountered. B4 Server failed to flush data response back to client. P2 Closed or corrupt connection has been detected. T1 Client does not receive a result within the specified idletimeout period. For further information about this configuration attribute, see Section 3.1.1.97, "nsslapd-idletimeout (Default Idle Timeout)" . T2 Server closed connection after ioblocktimeout period was exceeded. For further information about this configuration attribute, see Section 3.1.1.100, "nsslapd-ioblocktimeout (IO Block Time Out)" . U1 Connection closed by server after client sends an unbind request. The server will always close the connection when it sees an unbind request. 7.2. Error Log Reference The Directory Server error log records messages for Directory Server transactions and operations. These may be error messages for failed operations, but the log also contains general information about the processes of Directory Server and LDAP tasks, such as server startup messages, logins and searches of the directory, and connection information. 7.2.1. Error Log Logging Levels The error log can record different amounts of detail for operations, as well as different kinds of information depending on the type of error logging enabled. The logging level is set in the Section 3.1.1.79, "nsslapd-errorlog-level (Error Log Level)" configuration attribute. The default log level is 16384 , which includes critical error messages and standard logged messages, like LDAP result codes and startup messages. As with access logging, error logging levels are additive. To enable both replication logging ( 8192 ) and plug-in logging ( 65536 ), set the log level to 73728 ( 8192 + 65536 ). Note Enabling high levels of debug logging can significantly erode server performance. Debug log levels, such as replication ( 8192 ), should only be enabled for troubleshooting, not for daily operations. Table 7.4. Error Log Levels Setting Console Name Description 1 Trace function calls Logs a message when the server enters and exits a function. 2 Packet handling Logs debug information for packets processed by the server. 4 Heavy trace output Logs when the server enters and exits a function, with additional debugging messages. 8 Connection management Logs the current connection status, including the connection methods used for a SASL bind. 16 Packets sent/received Prints out the number of packets sent and received by the server. 32 Search filter processing Logs all of the functions called by a search operation. 64 Config file processing Prints any .conf configuration files used with the server, line by line, when the server is started. By default, only slapd-collations.conf is available and processed. 128 Access control list processing Provides very detailed access control list processing information. 2048 Log entry parsing Logs schema parsing debugging information. 4096 Housekeeping Housekeeping thread debugging. 8192 Replication Logs detailed information about every replication-related operation, including updates and errors, which is important for debugging replication problems.
16384 Default Default level of logging used for critical errors and other messages that are always written to the error log, such as server startup messages. Messages at this level are always included in the error log, regardless of the log level setting. 32768 Entry cache Database entry cache debugging. 65536 Plug-ins Writes an entry to the log file when a server plug-in calls slapi-log-error , so this is used for server plug-in debugging. 262144 Access control summary Summarizes information about access to the server, much less verbose than level 128 . This value is recommended for use when a summary of access control processing is needed. Use 128 for very detailed processing messages. 7.2.2. Error Log Content The format of the error log differs from that of the access log: Log entries written by the server Entries that the server writes to the file use the following format: For example: Log entries written by plug-ins Entries that plug-ins write to the file use the following format: For example: Error log entries contain the following columns: Time stamp: The format can differ depending on your local settings. If high-resolution time stamps are enabled in the nsslapd-logging-hr-timestamps-enabled attribute in the cn=config entry (default), the time stamp is exact to the nanosecond. Severity level: The following severity levels are used: EMERG : This level is logged when the server fails to start. ALERT : The server is in a critical state and possible action must be taken. CRIT : Severe error. ERR : General error. WARNING : A warning message that is not necessarily an error. NOTICE : A normal, but significant condition occurred. For example, this is logged for expected behavior. INFO : Informational messages, such as startup, shutdown, import, export, backup, restore. DEBUG : Debug-level messages. This level is also used by default when using a verbose logging level, such as Trace function calls (1), Access control list processing (128), and Replication (8192). For a list of error log levels, see Table 7.4, "Error Log Levels" . You can use the severity levels to filter your log entries. For example, to display only log entries using the ERR severity: Plug-in name: If a plug-in logged the entry, this column displays the name of the plug-in. If the server logged the entry, this column does not appear. Function name: Functions that the operation or the plug-in called. Message: The output that the operation or plug-in returned. This message contains additional information, such as LDAP error codes and connection information. 7.2.3. Error Log Content for Other Log Levels The different log levels return not only different levels of detail, but also information about different types of server operations. Some of these are summarized here, but there are many more combinations of logging levels possible. Replication logging is one of the most important diagnostic levels to implement. This logging level records all operations related to replication and Windows synchronization, including processing modifications on a supplier and writing them to the changelog, sending updates, and changing replication agreements. Whenever a replication update is prepared or sent, the error log identifies the replication or synchronization agreement being specified, the consumer host and port, and the current replication task. For example: {replicageneration} means that the new information is being sent, and 4949df6e000000010000 is the change sequence number of the entry being replicated.
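Replication logging can be enabled temporarily on a running server by raising the error log level in the cn=config entry. A minimal sketch using ldapmodify, in the same style as the other examples in this chapter (host name and bind DN are illustrative); remember to set the level back to the default 16384 afterwards, because verbose logging degrades performance:
ldapmodify -D "cn=Directory Manager" -W -p 389 -h server.example.com -x
dn: cn=config
changetype: modify
replace: nsslapd-errorlog-level
nsslapd-errorlog-level: 8192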
Example 7.3, "Replication Error Log Entry" shows the complete process of sending a single entry to a consumer, from adding the entry to the changelog to releasing the consumer after replication is complete. Example 7.3. Replication Error Log Entry Plug-in logging records the name of the plug-in and all of the functions called by the plug-in. This has a simple format: The information returned can be hundreds of lines long as every step is processed. The precise information recorded depends on the plug-in itself. For example, the ACL Plug-in includes a connection and operation number, as shown in Example 7.4, "Example ACL Plug-in Error Log Entry with Plug-in Logging" . Example 7.4. Example ACL Plug-in Error Log Entry with Plug-in Logging Note Example 7.4, "Example ACL Plug-in Error Log Entry with Plug-in Logging" shows both plug-in logging and search filter processing (log level 65696). Many other kinds of logging have similar output to the plug-in logging level, only for different kinds of internal operations. Heavy trace output ( 4 ), access control list processing ( 128 ), schema parsing ( 2048 ), and housekeeping ( 4096 ) all record the functions called by the different operations being performed. In this case, the difference is not in the format of what is being recorded, but what operations it is being recorded for. The configuration file processing goes through any .conf file, printing every line, whenever the server starts up. This can be used to debug any problems with files outside of the server's normal configuration. By default, only the slapd-collations.conf file, which contains configurations for international language sets, is available. Example 7.5. Config File Processing Log Entry There are two levels of ACI logging, one for debug information and one for summary. Both of these ACI logging levels record some extra information that is not included with other types of plug-in or error logging, including Connection Number and Operation Number information. Both show the name of the plug-in, the bind DN of the user, the operation performed or attempted, and the ACI which was applied. The debug level shows the series of functions called in the course of the bind and any other operations, as well. Example 7.6, "Access Control Summary Logging" shows the summary access control log entry. Example 7.6. Access Control Summary Logging 7.3. Audit Log Reference The audit log records changes made to the server instance. Unlike the error and access log, the audit log does not record access to the server instance, so searches against the database are not logged. The audit log is formatted differently than the access and error logs and is like a time-stamped LDIF file. The operations recorded in the audit log are formatted as LDIF statements: LDIF files and formats are described in more detail in the "LDAP Data Interchange Format" appendix of the Administration Guide . Several different kinds of audit entries are shown in Example 7.7, "Audit Log Content" . Example 7.7. Audit Log Content Note that you cannot change the format or set a log level for the audit log. 7.4. LDAP Result Codes Directory Server uses the following LDAP result codes: Table 7.5.
LDAP Result Codes Decimal Values Hex Values Constants 0 0x00 LDAP_SUCCESS 1 0x01 LDAP_OPERATIONS_ERROR 2 0x02 LDAP_PROTOCOL_ERROR 3 0x03 LDAP_TIMELIMIT_EXCEEDED 4 0x04 LDAP_SIZELIMIT_EXCEEDED 5 0x05 LDAP_COMPARE_FALSE 6 0x06 LDAP_COMPARE_TRUE 7 0x07 LDAP_AUTH_METHOD_NOT_SUPPORTED LDAP_STRONG_AUTH_NOT_SUPPORTED 8 0x08 LDAP_STRONG_AUTH_REQUIRED 9 0x09 LDAP_PARTIAL_RESULTS 10 0x0a LDAP_REFERRAL [a] 11 0x0b LDAP_ADMINLIMIT_EXCEEDED 12 0x0c LDAP_UNAVAILABLE_CRITICAL_EXTENSION 13 0x0d LDAP_CONFIDENTIALITY_REQUIRED 14 0x0e LDAP_SASL_BIND_IN_PROGRESS 16 0x10 LDAP_NO_SUCH_ATTRIBUTE 17 0x11 LDAP_UNDEFINED_TYPE 18 0x12 LDAP_INAPPROPRIATE_MATCHING 19 0x13 LDAP_CONSTRAINT_VIOLATION 20 0x14 LDAP_TYPE_OR_VALUE_EXISTS 21 0x15 LDAP_INVALID_SYNTAX 32 0x20 LDAP_NO_SUCH_OBJECT 33 0x21 LDAP_ALIAS_PROBLEM 34 0x22 LDAP_INVALID_DN_SYNTAX 35 0x23 LDAP_IS_LEAF [b] 36 0x24 LDAP_ALIAS_DEREF_PROBLEM 48 0x30 LDAP_INAPPROPRIATE_AUTH 49 0x31 LDAP_INVALID_CREDENTIALS 50 0x32 LDAP_INSUFFICIENT_ACCESS 51 0x33 LDAP_BUSY 52 0x34 LDAP_UNAVAILABLE 53 0x35 LDAP_UNWILLING_TO_PERFORM 54 0x36 LDAP_LOOP_DETECT 60 0x3c LDAP_SORT_CONTROL_MISSING 61 0x3d LDAP_INDEX_RANGE_ERROR 64 0x40 LDAP_NAMING_VIOLATION 65 0x41 LDAP_OBJECT_CLASS_VIOLATION 66 0x42 LDAP_NOT_ALLOWED_ON_NONLEAF 67 0x43 LDAP_NOT_ALLOWED_ON_RDN 68 0x44 LDAP_ALREADY_EXISTS 69 0x45 LDAP_NO_OBJECT_CLASS_MODS 70 0x46 LDAP_RESULTS_TOO_LARGE [c] 71 0x47 LDAP_AFFECTS_MULTIPLE_DSAS 76 0x4C LDAP_VIRTUAL_LIST_VIEW_ERROR 80 0x50 LDAP_OTHER 81 0x51 LDAP_SERVER_DOWN 82 0x52 LDAP_LOCAL_ERROR 83 0x53 LDAP_ENCODING_ERROR 84 0x54 LDAP_DECODING_ERROR 85 0x55 LDAP_TIMEOUT 86 0x56 LDAP_AUTH_UNKNOWN 87 0x57 LDAP_FILTER_ERROR 88 0x58 LDAP_USER_CANCELLED 89 0x59 LDAP_PARAM_ERROR 90 0x5A LDAP_NO_MEMORY 91 0x5B LDAP_CONNECT_ERROR 92 0x5C LDAP_NOT_SUPPORTED 93 0x5D LDAP_CONTROL_NOT_FOUND 94 0x5E LDAP_NO_RESULTS_RETURNED 95 0x5F LDAP_MORE_RESULTS_TO_RETURN 96 0x60 LDAP_CLIENT_LOOP 97 0x61 LDAP_REFERRAL_LIMIT_EXCEEDED 118 0x76 LDAP_CANCELLED [a] LDAPv3 [b] Not used in LDAPv3 [c] Reserved for CLDAP 7.5. Replacing Log Files with a Named Pipe Many administrators want to do some special configuration or operation with logging data, like configuring an access log to record only certain events. This is not possible using the standard Directory Server log file configuration attributes, but it is possible by sending the log data to a named pipe, and then using another script to process the data. Using a named pipe for the log simplifies these special tasks, like: Logging certain events, like failed bind attempts or connections from specific users or IP addresses Logging entries which match a specific regular expression pattern Keeping the log to a certain length (logging only the last number of lines) Sending a notification, such as an email, when an event occurs Replacing a log file with a pipe improves performance, especially on servers with a high rate of operations. The named pipe is different from using a script to extract data from the logs because of how data are handled in the log buffer. If a log is buffered, server performance is good, but important data are not written to disk (the log file) as soon as the event occurs. If the server is having a problem with crashing, it may crash before the data is written to disk - and there is no data for the script to extract. If a log is not buffered [1] , the writes are flushed to disk with each operation, causing a lot of disk I/O and performance degradation.
Replacing the log disk file with a pipe has the benefits of buffering, since the script that reads from the pipe can buffer the incoming log data in memory (which is not possible with a simple script). The usage and option details for the script are covered in Section 9.4, "ds-logpipe.py" . The basic format is: ds-logpipe.py /path/to/named_pipe --user pipe_user --maxlines number --serverpidfile file.pid --serverpid PID --servertimeout seconds --plugin= /path/to/plugin.py pluginfile.arg = value 7.5.1. Using the Named Pipe for Logging The Directory Server instance can use a named pipe for its logging simply by running the named pipe log script and giving the name of the pipe. (If the server is already running, then the log has to be reopened, but there is no configuration required otherwise.) Running ds-logpipe.py in this way has the advantage of being simple to implement and not requiring any Directory Server configuration changes. This is useful for fast debugging or monitoring, especially if you are looking for a specific type of event. If the Directory Server instance will frequently or permanently use the named pipe rather than a real file for logging, then it is possible to reconfigure the instance to create the named pipe and use it for logging (as it does by default for the log files). Three things need to be configured in the instance's log configuration: The log file to use has to be changed to the pipe ( nsslapd-*log , where the * can be access, error, or audit [2] , depending on the log type being configured) Buffering should be disabled because the script already buffers the log entries ( nsslapd-*log-logbuffering ) Log rotation should be disabled so that the server does not attempt to rotate the named pipe ( nsslapd-*log-maxlogsperdir , nsslapd-*log-logexpirationtime , and nsslapd-*log-logrotationtime ) These configuration changes can be made in the Directory Server Console or using ldapmodify . For example, this switches the access log to access.pipe : Note Making these changes causes the server to close the current log file and switch to the named pipe immediately. This can be very helpful for debugging a running server and sifting the log output for specific messages. 7.5.2. Starting the Named Pipe with the Server The named pipe can be started and shut down along with the Directory Server instance by editing the instance's init script configuration file. Note The named pipe script has to be specifically configured in the instance's dse.ldif file before it can be called at server startup. Open the instance configuration file for the server system. Warning Do not edit the /etc/sysconfig/dirsrv file. At the end of the file, there will be a line that reads: Below that line, insert the ds-logpipe.py command to launch when the server starts. For example: Note The -s option both specifies the .pid file for the server to write its PID to and sets the script to start and stop with the server process. 7.5.3. Using Plug-ins with the Named Pipe Log A plug-in can be called to read the log data from the named pipe and perform some operation on it. There are some considerations with using plug-ins with the named pipe log script: The plug-in function is called for every line read from the named pipe. The plug-in must be a Python script, and the file name must end in .py . Any plug-in arguments are passed in the command line to the named pipe log script. A pre-operation function can be specified for when the plug-in is loaded.
A post-operation function can be called for when the script exits. 7.5.3.1. Loading Plug-ins with the Named Pipe Log Script There are two options with ds-logpipe.py to use for plug-ins: The --plugin option gives the path to the plug-in file (which must be a Python script and must end in .py ). The plugin.arg option passes plug-in arguments to the named pipe log script. The plug-in file name (without the .py extension) is plugin and any argument allowed in that plug-in can be arg . For example: If more than one value is passed for the same argument, then the values are converted into a list in the plug-in dict. For example, this script gives two values for arg1 : In the plug-in, this is converted to: This is a Python dict object with two keys. The first key is the string arg1 , and its value is a Python list object with two elements, the strings foo and bar . The second key is the string arg2 , and its value is the string baz . If an argument has only a single value, it is left as a simple string. Multiple values for a single argument name are converted into a list of strings. 7.5.3.2. Writing Plug-ins to Use with the Named Pipe Log Script The ds-logpipe.py command expects up to three functions in any plug-in: plugin () , pre () , and post () . Any plug-in used with the ds-logpipe.py command must specify the plugin function. The plugin () function is performed against every line in the log data, while the pre () and post () functions are run when the script is started and stopped, respectively. Each function can have any arguments defined for it, and these arguments can then be passed to the script using the plugin.arg option. Additionally, each function can have its own return values and actions defined for it. Example 7.8. Simple Named Pipe Log Plug-in [1] Server performance suffers when log buffering is disabled on the access log, when the log level is changed on the error log, or with audit logging. [2] The audit log is not enabled by default, so this log has to be enabled before a named pipe can be used to replace it.
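After a log has been switched to a named pipe, you can confirm that the target is a FIFO rather than a regular file. A minimal sketch, assuming the example instance and pipe name used in the configuration example above:
# a leading "p" in the mode string (for example, prw-------) confirms a named pipe
ls -l /var/log/dirsrv/slapd-example/access.pipe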
[ "[23/Jun/2020:16:30:27.388006333 -0400] conn=20 op=5 SRCH base=\"dc=example,dc=com\" scope=2 filter=\"(&(objectClass=top)(objectClass=ldapsubentry)(objectClass=passwordpolicy))\" attrs=\"distinguishedName\"", "[23/Jun/2020:16:30:27.390881301 -0400] conn=20 op=5 RESULT err=0 tag=101 nentries=0 wtime=0.000035342 optime=0.002877749 etime=0.002911121", "[21/Apr/2020:11:39:51 -0700] conn=11 fd=608 slot=608 connection from 207.1.153.51 to 192.18.122.139 [21/Apr/2020:11:39:51 -0700] conn=11 op=0 BIND dn=\"cn=Directory Manager\" method=128 version=3 [21/Apr/2020:11:39:51 -0700] conn=11 op=0 RESULT err=0 tag=97 nentries=0 etime=0 [21/Apr/2020:11:39:51 -0700] conn=11 op=1 SRCH base=\"dc=example,dc=com\" scope=2 filter=\"(mobile=+1 123 456-7890)\" [21/Apr/2020:11:39:51 -0700] conn=11 op=1 RESULT err=0 tag=101 nentries=1 etime=3 notes=U [21/Apr/2020:11:39:51 -0700] conn=11 op=2 UNBIND [21/Apr/2020:11:39:51 -0700] conn=11 op=2 fd=608 closed - U1 [21/Apr/2020:11:39:52 -0700] conn=12 fd=634 slot=634 connection from 207.1.153.51 to 192.18.122.139 [21/Apr/2020:11:39:52 -0700] conn=12 op=0 BIND dn=\"cn=Directory Manager\" method=128 version=3 [21/Apr/2020:11:39:52 -0700] conn=12 op=0 RESULT err=0 tag=97 nentries=0 etime=0 [21/Apr/2020:11:39:52 -0700] conn=12 op=1 SRCH base=\"dc=example,dc=com\" scope=2 filter=\"(uid=bjensen)\" [21/Apr/2020:11:39:52 -0700] conn=12 op=2 ABANDON targetop=1 msgid=2 nentries=0 etime=0 [21/Apr/2020:11:39:52 -0700] conn=12 op=3 UNBIND [21/Apr/2020:11:39:52 -0700] conn=12 op=3 fd=634 closed - U1 [21/Apr/2020:11:39:53 -0700] conn=13 fd=659 slot=659 connection from 207.1.153.51 to 192.18.122.139 [21/Apr/2020:11:39:53 -0700] conn=13 op=0 BIND dn=\"cn=Directory Manager\" method=128 version=3 [21/Apr/2020:11:39:53 -0700] conn=13 op=0 RESULT err=0 tag=97 nentries=0 etime=0 [21/Apr/2020:11:39:53 -0700] conn=13 op=1 EXT oid=\"2.16.840.1.113730.3.5.3\" [21/Apr/2020:11:39:53 -0700] conn=13 op=1 RESULT err=0 tag=120 nentries=0 etime=0 [21/Apr/2020:11:39:53 -0700] conn=13 op=2 ADD dn=\"cn=Sat Apr 21 11:39:51 MET DST 2020,dc=example,dc=com\" [21/Apr/2020:11:39:53 -0700] conn=13 op=2 RESULT err=0 tag=105 nentries=0 etime=0 csn=3b4c8cfb000000030000 [21/Apr/2020:11:39:53 -0700] conn=13 op=3 EXT oid=\"2.16.840.1.113730.3.5.5\" [21/Apr/2020:11:39:53 -0700] conn=13 op=3 RESULT err=0 tag=120 nentries=0 etime=0 [21/Apr/2020:11:39:53 -0700] conn=13 op=4 UNBIND [21/Apr/2020:11:39:53 -0700] conn=13 op=4 fd=659 closed - U1 [21/Apr/2020:11:39:55 -0700] conn=14 fd=700 slot=700 connection from 207.1.153.51 to 192.18.122.139 [21/Apr/2020:11:39:55 -0700] conn=14 op=0 BIND dn=\"\" method=sasl version=3 mech=DIGEST-MD5 [21/Apr/2020:11:39:55 -0700] conn=14 op=0 RESULT err=14 tag=97 nentries=0 etime=0, SASL bind in progress [21/Apr/2020:11:39:55 -0700] conn=14 op=1 BIND dn=\"uid=jdoe,dc=example,dc=com\" method=sasl version=3 mech=DIGEST-MD5 [21/Apr/2020:11:39:55 -0700] conn=14 op=1 RESULT err=0 tag=97nentries=0 etime=0 dn=\"uid=jdoe,dc=example,dc=com\" [21/Apr/2020:11:39:55 -0700] conn=14 op=2 UNBIND [21/Apr/2020:11:39:53 -0700] conn=14 op=2 fd=700 closed - U1", "[21/Apr/2020:11:39:51 -0700] conn=11 fd=608 slot=608 connection from 207.1.153.51 to 192.18.122.139", "[21/Apr/2020:11:39:51 -0700] conn=11 fd=608 slot=608 connection from 207.1.153.51 to 192.18.122.139", "[21/Apr/2020:11:39:51 -0700] conn=11 fd=608 slot=608 connection from 207.1.153.51 to 192.18.122.139", "[21/Apr/2020:11:39:51 -0700] conn=11 op=0 RESULT err=0 tag=97 nentries=0 etime=0", "[21/Apr/2020:11:39:51 -0700] conn=11 op=0 BIND dn=\"cn=Directory 
Manager\" method=128 version=3", "[21/Apr/2020:11:39:51 -0700] conn=11 op=0 BIND dn=\"cn=Directory Manager\" method=128 version=3", "[21/Apr/2020:11:39:51 -0700] conn=11 op=0 RESULT err=0 tag=97 nentries=0 etime=0", "[21/Apr/2020:11:39:51 -0700] conn=11 op=0 RESULT err=0 tag=97 nentries=0 etime=0", "[21/Apr/2020:11:39:51 -0700] conn=11 op=0 RESULT err=0 tag=97 nentries=0 etime=0", "[21/Apr/2020:11:39:51 -0700] conn=11 op=1 RESULT err=0 tag=101 nentries=1 etime=3 notes=U", "[04/May/2020:15:51:46 -0700] conn=114 op=68 SORT serialno (1)", "[21/Apr/2016:11:39:51 -0700] conn=11 op=1 RESULT err=0 tag=101 nentries=1 etime=3 notes=U", "VLV RequestInformation ResponseInformation", "beforeCount:afterCount:index:contentCount", "targetPosition:contentCount ( resultCode )", "[07/May/2020:11:43:29 -0700] conn=877 op=8530 SRCH base=\"(ou=People)\" scope=2 filter=\"(uid=*)\" [07/May/2020:11:43:29 -0700] conn=877 op=8530 SORT uid [07/May/2020:11:43:29 -0700] conn=877 op=8530 VLV 0:5:0210 10:5397 (0 ) [07/May/2020:11:43:29 -0700] conn=877 op=8530 RESULT err=0 tag=101 nentries=1 etime=0", "[21/Apr/2020:11:39:52 -0700] conn=12 op=2 ABANDON targetop=1 msgid=2 nentries=0 etime=0", "[21/Apr/2020:11:39:52 -0700] conn=12 op=2 ABANDON targetop=NOTFOUND msgid=2", "[21/Apr/2020:11:39:52 -0700] conn=12 op=2 ABANDON targetop=NOTFOUND msgid=2", "[21/Apr/2020:11:39:55 -0700] conn=14 op=0 BIND dn=\"\" method=sasl version=3 mech=DIGEST-MD5 [21/Apr/2020:11:39:55 -0700] conn=14 op=0 RESULT err=14 tag=97 nentries=0 etime=0, SASL bind in progress", "[21/Apr/2020:12:57:14 -0700] conn=32 op=0 BIND dn=\"\" method=sasl version=3 mech=GSSAPI", "[21/Apr/2020:11:39:55 -0700] conn=14 op=1 RESULT err=0 tag=97 nentries=0 etime=0 dn=\"uid=jdoe,dc=example,dc=com\"", "[12/Jul/2020:16:45:46 +0200] conn=Internal op=-1 SRCH base=\"cn=\\22dc=example,dc=com\\22,cn=mapping tree,cn=config\"scope=0 filter=\"objectclass=nsMappingTree\"attrs=\"nsslapd-referral\" options=persistent [12/Jul/2020:16:45:46 +0200] conn=Internal op=-1 RESULT err=0 tag=48 nentries=1etime=0 [12/Jul/2020:16:45:46 +0200] conn=Internal op=-1 SRCH base=\"cn=\\22dc=example,dc=com\\22,cn=mapping tree,cn=config\"scope=0 filter=\"objectclass=nsMappingTree\" attrs=\"nsslapd-state\" [12/Jul/2020:16:45:46 +0200] conn=Internal op=-1 RESULT err=0 tag=48 nentries=1etime=0", "[12/Jul/2020:16:43:02 +0200] conn=306 fd=60 slot=60 connection from 127.0.0.1 to 127.0.0.1 [12/Jul/2020:16:43:02 +0200] conn=306 op=0 SRCH base=\"dc=example,dc=com\" scope=2 filter=\"(description=*)\" attrs=ALL [12/Jul/2020:16:43:02 +0200] conn=306 op=0 ENTRY dn=\"ou=Special [12/Jul/2020:16:43:02 +0200] conn=306 op=0 ENTRY dn=\"cn=Accounting Managers,ou=groups,dc=example,dc=com\" [12/Jul/2020:16:43:02 +0200] conn=306 op=0 ENTRY dn=\"cn=HR Managers,ou=groups,dc=example,dc=com\" [12/Jul/2020:16:43:02 +0200] conn=306 op=0 ENTRY dn=\"cn=QA Managers,ou=groups,dc=example,dc=com\" [12/Jul/2020:16:43:02 +0200] conn=306 op=0 ENTRY dn=\"cn=PD Managers,ou=groups,dc=example,dc=com\" [12/Jul/2020:16:43:02 +0200] conn=306 op=0 ENTRY dn=\"ou=Red Hat Servers,dc=example,dc=com\" [12/Jul/2020:16:43:02 +0200] conn=306 op=0 REFERRAL", "[12/Jul/2020:16:45:46 +0200] conn=Internal op=-1 ENTRY dn=\"cn=\\22dc=example,dc=com\\22,cn=mapping tree,cn=config\"", "[12/Jul/2020:16:45:46 +0200] conn=Internal op=-1 SRCH base=\"cn=\\22dc=example,dc=com\\22,cn=mapping tree,cn=config\"scope=0 filter=\"objectclass=nsMappingTree\"attrs=\"nsslapd-referral\" options=persistent", "time_stamp - severity_level - function_name - message", 
"[24/Mar/2017:11:31:38.781466443 +0100] - ERR - no_diskspace - No enough space left on device (/var/lib/dirsrv/slapd- instance_name /db) (40009728 bytes); at least 145819238 bytes space is needed for db region files", "time_stamp - severity_level - plug-in_name - function_name - message", "[24/Mar/2017:11:42:17.628363848 +0100] - ERR - NSMMReplicationPlugin - multimaster_extop_StartNSDS50ReplicationRequest - conn=19 op=3 repl=\"o= example.com \": Excessive clock skew from supplier RUV", "grep ERR /var/log/dirsrv/slapd- instance_name /errors [24/Mar/2017:11:31:38.781466443 +0100] - ERR - no_diskspace - No enough space left on device (/var/lib/dirsrv/slapd- instance_name /db) (40009728 bytes); at least 145819238 bytes space is needed for db region files [24/Mar/2017:11:31:38.815623298 +0100] - ERR - ldbm_back_start - Failed to init database, err=28 No space left on device [24/Mar/2017:11:31:38.828591835 +0100] - ERR - plugin_dependency_startall - Failed to start database plugin ldbm database", "[ timestamp ] NSMMReplicationPlugin - agmt=\" name \" ( consumer_host:consumer_port ): current_task", "[09/Jan/2020:13:44:48 -0500] NSMMReplicationPlugin - agmt=\"cn=example2\" (alt:13864): {replicageneration} 4949df6e000000010000", "[29/May/2017:14:15:30.539817639 +0200] - DEBUG - _csngen_adjust_local_time - gen state before 592c103d0000:1496059964:0:1 [29/May/2017:14:15:30.562983285 +0200] - DEBUG - _csngen_adjust_local_time - gen state after 592c10e20000:1496060129:0:1 [29/May/2017:14:15:30.578828393 +0200] - DEBUG - NSMMReplicationPlugin - ruv_add_csn_inprogress - Successfully inserted csn 592c10e2000000020000 into pending list [29/May/2017:14:15:30.589917123 +0200] - DEBUG - NSMMReplicationPlugin - changelog program - _cl5GetDBFileByReplicaName - found DB object 0x558ddfe1f720 for database /var/lib/dirsrv/slapd-supplier_2/changelogdb/d3de3e8d-446611e7-a89886da-6a37442d_592c0e0b000000010000.db [29/May/2017:14:15:30.600044236 +0200] - DEBUG - NSMMReplicationPlugin - changelog program - cl5WriteOperationTxn - Successfully written entry with csn (592c10e2000000020000) [29/May/2017:14:15:30.615923352 +0200] - DEBUG - NSMMReplicationPlugin - changelog program - _cl5GetDBFileByReplicaName - found DB object 0x558ddfe1f720 for database /var/lib/dirsrv/slapd-supplier_2/changelogdb/d3de3e8d-446611e7-a89886da-6a37442d_592c0e0b000000010000.db [29/May/2017:14:15:30.627443305 +0200] - DEBUG - NSMMReplicationPlugin - csnplCommitALL: committing all csns for csn 592c10e2000000020000 [29/May/2017:14:15:30.632713657 +0200] - DEBUG - NSMMReplicationPlugin - csnplCommitALL: processing data csn 592c10e2000000020000 [29/May/2017:14:15:30.652621188 +0200] - DEBUG - NSMMReplicationPlugin - ruv_update_ruv - Successfully committed csn 592c10e2000000020000 [29/May/2017:14:15:30.669666453 +0200] - DEBUG - NSMMReplicationPlugin - repl5_inc_run - agmt=\"cn=meTo_localhost:39001\" (localhost:39001): State: wait_for_changes -> wait_for_changes [29/May/2017:14:15:30.685259483 +0200] - DEBUG - NSMMReplicationPlugin - repl5_inc_run - agmt=\"cn=meTo_localhost:39001\" (localhost:39001): State: wait_for_changes -> ready_to_acquire_replica [29/May/2017:14:15:30.689906327 +0200] - DEBUG - NSMMReplicationPlugin - conn_connect - agmt=\"cn=meTo_localhost:39001\" (localhost:39001) - Trying non-secure slapi_ldap_init_ext [29/May/2017:14:15:30.700259799 +0200] - DEBUG - NSMMReplicationPlugin - conn_connect - agmt=\"cn=meTo_localhost:39001\" (localhost:39001) - binddn = cn=replrepl,cn=config, passwd = 
{AES-TUhNR0NTcUdTSWIzRFFFRkRUQm1NRVVHQ1NxR1NJYjNEUUVGRERBNEJDUmlZVFUzTnpRMk55MDBaR1ZtTXpobQ0KTWkxaE9XTTRPREpoTlMwME1EaGpabVUxWmdBQ0FRSUNBU0F3Q2dZSUtvWklodmNOQWdjd0hRWUpZSVpJQVdVRA0KQkFFcUJCRGhwMnNLcEZ2ZWE2RzEwWG10OU41Tg==}+36owaI7oTmvWhxRzUqX5w== [29/May/2017:14:15:30.712287531 +0200] - DEBUG - NSMMReplicationPlugin - conn_cancel_linger - agmt=\"cn=meTo_localhost:39001\" (localhost:39001) - No linger to cancel on the connection [29/May/2017:14:15:30.736779494 +0200] - DEBUG - _csngen_adjust_local_time - gen state before 592c10e20001:1496060129:0:1 [29/May/2017:14:15:30.741909244 +0200] - DEBUG - _csngen_adjust_local_time - gen state after 592c10e30000:1496060130:0:1 [29/May/2017:14:15:30.880287041 +0200] - DEBUG - NSMMReplicationPlugin - acquire_replica - agmt=\"cn=meTo_localhost:39001\" (localhost:39001): Replica was successfully acquired. [29/May/2017:14:15:30.897500049 +0200] - DEBUG - NSMMReplicationPlugin - repl5_inc_run - agmt=\"cn=meTo_localhost:39001\" (localhost:39001): State: ready_to_acquire_replica -> sending_updates [29/May/2017:14:15:30.914417773 +0200] - DEBUG - csngen_adjust_time - gen state before 592c10e30001:1496060130:0:1 [29/May/2017:14:15:30.926341721 +0200] - DEBUG - NSMMReplicationPlugin - changelog program - _cl5GetDBFile - found DB object 0x558ddfe1f720 for database /var/lib/dirsrv/slapd-supplier_2/changelogdb/d3de3e8d-446611e7-a89886da-6a37442d_592c0e0b000000010000.db [29/May/2017:14:15:30.943094471 +0200] - DEBUG - NSMMReplicationPlugin - changelog program - _cl5PositionCursorForReplay - (agmt=\"cn=meTo_localhost:39001\" (localhost:39001)): Consumer RUV: [29/May/2017:14:15:30.949395331 +0200] - DEBUG - NSMMReplicationPlugin - agmt=\"cn=meTo_localhost:39001\" (localhost:39001): {replicageneration} 592c0e0b000000010000 [29/May/2017:14:15:30.961118175 +0200] - DEBUG - NSMMReplicationPlugin - agmt=\"cn=meTo_localhost:39001\" (localhost:39001): {replica 1 ldap://localhost:39001} 592c0e17000000010000 592c0e1a000100010000 00000000 [29/May/2017:14:15:30.976680025 +0200] - DEBUG - NSMMReplicationPlugin - agmt=\"cn=meTo_localhost:39001\" (localhost:39001): {replica 2 ldap://localhost:39002} 592c103c000000020000 592c103c000000020000 00000000 [29/May/2017:14:15:30.990404183 +0200] - DEBUG - NSMMReplicationPlugin - changelog program - _cl5PositionCursorForReplay - (agmt=\"cn=meTo_localhost:39001\" (localhost:39001)): Supplier RUV: [29/May/2017:14:15:31.001242624 +0200] - DEBUG - NSMMReplicationPlugin - agmt=\"cn=meTo_localhost:39001\" (localhost:39001): {replicageneration} 592c0e0b000000010000 [29/May/2017:14:15:31.017406105 +0200] - DEBUG - NSMMReplicationPlugin - agmt=\"cn=meTo_localhost:39001\" (localhost:39001): {replica 2 ldap://localhost:39002} 592c103c000000020000 592c10e2000000020000 592c10e1 [29/May/2017:14:15:31.028803190 +0200] - DEBUG - NSMMReplicationPlugin - agmt=\"cn=meTo_localhost:39001\" (localhost:39001): {replica 1 ldap://localhost:39001} 592c0e1a000100010000 592c0e1a000100010000 00000000 [29/May/2017:14:15:31.040172464 +0200] - DEBUG - agmt=\"cn=meTo_localhost:39001\" (localhost:39001) - clcache_get_buffer - found thread private buffer cache 0x558ddf870f00 [29/May/2017:14:15:31.057495165 +0200] - DEBUG - agmt=\"cn=meTo_localhost:39001\" (localhost:39001) - clcache_get_buffer - _pool is 0x558ddfe294d0 _pool->pl_busy_lists is 0x558ddfab84c0 _pool->pl_busy_lists->bl_buffers is 0x558ddf870f00 [29/May/2017:14:15:31.063015498 +0200] - DEBUG - agmt=\"cn=meTo_localhost:39001\" (localhost:39001) - clcache_initial_anchorcsn - agmt=\"cn=meTo_localhost:39001\" 
(localhost:39001) - (cscb 0 - state 0) - csnPrevMax () csnMax (592c10e2000000020000) csnBuf (592c103c000000020000) csnConsumerMax (592c103c000000020000) [29/May/2017:14:15:31.073252305 +0200] - DEBUG - clcache_initial_anchorcsn - anchor is now: 592c103c000000020000 [29/May/2017:14:15:31.089915209 +0200] - DEBUG - NSMMReplicationPlugin - changelog program - agmt=\"cn=meTo_localhost:39001\" (localhost:39001): CSN 592c103c000000020000 found, position set for replay [29/May/2017:14:15:31.095825439 +0200] - DEBUG - agmt=\"cn=meTo_localhost:39001\" (localhost:39001) - clcache_get_next_change - load=1 rec=1 csn=592c10e2000000020000 [29/May/2017:14:15:31.100123762 +0200] - DEBUG - NSMMReplicationPlugin - repl5_inc_result_threadmain - Starting [29/May/2017:14:15:31.115749709 +0200] - DEBUG - NSMMReplicationPlugin - repl5_inc_result_threadmain - Read result for message_id 0 [29/May/2017:14:15:31.125866330 +0200] - DEBUG - NSMMReplicationPlugin - replay_update - agmt=\"cn=meTo_localhost:39001\" (localhost:39001): Sending add operation (dn=\"cn=user,ou=People,dc=example,dc=com\" csn=592c10e2000000020000) [29/May/2017:14:15:31.142339398 +0200] - DEBUG - NSMMReplicationPlugin - repl5_inc_result_threadmain - Read result for message_id 0 [29/May/2017:14:15:31.160456597 +0200] - DEBUG - NSMMReplicationPlugin - replay_update - agmt=\"cn=meTo_localhost:39001\" (localhost:39001): Consumer successfully sent operation with csn 592c10e2000000020000 [29/May/2017:14:15:31.172399536 +0200] - DEBUG - NSMMReplicationPlugin - repl5_inc_result_threadmain - Read result for message_id 0 [29/May/2017:14:15:31.188857336 +0200] - DEBUG - agmt=\"cn=meTo_localhost:39001\" (localhost:39001) - clcache_adjust_anchorcsn - agmt=\"cn=meTo_localhost:39001\" (localhost:39001) - (cscb 0 - state 1) - csnPrevMax (592c10e2000000020000) csnMax (592c10e2000000020000) csnBuf (592c10e2000000020000) csnConsumerMax (592c10e2000000020000) [29/May/2017:14:15:31.199605024 +0200] - DEBUG - agmt=\"cn=meTo_localhost:39001\" (localhost:39001) - clcache_load_buffer - rc=-30988 [29/May/2017:14:15:31.210800816 +0200] - DEBUG - NSMMReplicationPlugin - send_updates - agmt=\"cn=meTo_localhost:39001\" (localhost:39001): No more updates to send (cl5GetNextOperationToReplay) [29/May/2017:14:15:31.236214134 +0200] - DEBUG - NSMMReplicationPlugin - repl5_inc_waitfor_async_results - 0 5 [29/May/2017:14:15:31.246755544 +0200] - DEBUG - NSMMReplicationPlugin - repl5_inc_result_threadmain - Read result for message_id 0 [29/May/2017:14:15:31.277705986 +0200] - DEBUG - NSMMReplicationPlugin - repl5_inc_result_threadmain - Read result for message_id 0 [29/May/2017:14:15:31.303530336 +0200] - DEBUG - NSMMReplicationPlugin - repl5_inc_result_threadmain - Read result for message_id 5 [29/May/2017:14:15:31.318259308 +0200] - DEBUG - NSMMReplicationPlugin - repl5_inc_result_threadmain - Result 1, 0, 0, 5, (null) [29/May/2017:14:15:31.335263462 +0200] - DEBUG - NSMMReplicationPlugin - repl5_inc_result_threadmain - Read result for message_id 5 [29/May/2017:14:15:31.364551307 +0200] - DEBUG - NSMMReplicationPlugin - repl5_inc_waitfor_async_results - 5 5 [29/May/2017:14:15:31.376301820 +0200] - DEBUG - NSMMReplicationPlugin - repl5_inc_result_threadmain exiting [29/May/2017:14:15:31.393707037 +0200] - DEBUG - agmt=\"cn=meTo_localhost:39001\" (localhost:39001) - clcache_return_buffer - session end: state=5 load=1 sent=1 skipped=0 skipped_new_rid=0 skipped_csn_gt_cons_maxcsn=0 skipped_up_to_date=0 skipped_csn_gt_ruv=0 skipped_csn_covered=0 [29/May/2017:14:15:31.398134114 +0200] 
- DEBUG - NSMMReplicationPlugin - consumer_connection_extension_acquire_exclusive_access - conn=4 op=3 Acquired consumer connection extension [29/May/2017:14:15:31.423099625 +0200] - DEBUG - NSMMReplicationPlugin - multimaster_extop_StartNSDS50ReplicationRequest - conn=4 op=3 repl=\"dc=example,dc=com\": Begin incremental protocol [29/May/2017:14:15:31.438899389 +0200] - DEBUG - csngen_adjust_time - gen state before 592c10e30001:1496060130:0:1 [29/May/2017:14:15:31.443800884 +0200] - DEBUG - csngen_adjust_time - gen state after 592c10e40001:1496060130:1:1 [29/May/2017:14:15:31.454123488 +0200] - DEBUG - NSMMReplicationPlugin - replica_get_exclusive_access - conn=4 op=3 repl=\"dc=example,dc=com\": Acquired replica [29/May/2017:14:15:31.469698781 +0200] - DEBUG - NSMMReplicationPlugin - release_replica - agmt=\"cn=meTo_localhost:39001\" (localhost:39001): Successfully released consumer [29/May/2017:14:15:31.475096195 +0200] - DEBUG - NSMMReplicationPlugin - conn_start_linger -agmt=\"cn=meTo_localhost:39001\" (localhost:39001) - Beginning linger on the connection [29/May/2017:14:15:31.485281588 +0200] - DEBUG - NSMMReplicationPlugin - repl5_inc_run - agmt=\"cn=meTo_localhost:39001\" (localhost:39001): State: sending_updates -> wait_for_changes [29/May/2017:14:15:31.495865065 +0200] - DEBUG - NSMMReplicationPlugin - multimaster_extop_StartNSDS50ReplicationRequest - conn=4 op=3 repl=\"dc=example,dc=com\": StartNSDS90ReplicationRequest: response=0 rc=0 [29/May/2017:14:15:31.501617765 +0200] - DEBUG - NSMMReplicationPlugin - consumer_connection_extension_relinquish_exclusive_access - conn=4 op=3 Relinquishing consumer connection extension [29/May/2017:14:15:31.716627741 +0200] - DEBUG - NSMMReplicationPlugin - consumer_connection_extension_acquire_exclusive_access - conn=4 op=4 Acquired consumer connection extension [29/May/2017:14:15:31.735431913 +0200] - DEBUG - NSMMReplicationPlugin - replica_relinquish_exclusive_access - conn=4 op=4 repl=\"dc=example,dc=com\": Released replica held by locking_purl=conn=4 id=3 [29/May/2017:14:15:31.745841821 +0200] - DEBUG - NSMMReplicationPlugin - consumer_connection_extension_relinquish_exclusive_access - conn=4 op=4 Relinquishing consumer connection extension", "[ timestamp ] Plugin_name - message [ timestamp ] - function - message", "[29/May/2017:14:38:19.133878244 +0200] - DEBUG - get_filter_internal - ==> [29/May/2017:14:38:19.153942547 +0200] - DEBUG - get_filter_internal - PRESENT [29/May/2017:14:38:19.177908064 +0200] - DEBUG - get_filter_internal - <= 0 [29/May/2017:14:38:19.193547449 +0200] - DEBUG - slapi_vattr_filter_test_ext_internal - => [29/May/2017:14:38:19.198121765 +0200] - DEBUG - slapi_vattr_filter_test_ext_internal - <= [29/May/2017:14:38:19.214342752 +0200] - DEBUG - slapi_vattr_filter_test_ext_internal - PRESENT [29/May/2017:14:38:19.219886104 +0200] - DEBUG - NSACLPlugin - acl_access_allowed - conn=15 op=1 (main): Allow search on entry(cn=replication,cn=config): root user [29/May/2017:14:38:19.230152526 +0200] - DEBUG - slapi_vattr_filter_test_ext_internal - <= 0 [29/May/2017:14:38:19.240971955 +0200] - DEBUG - NSACLPlugin - acl_read_access_allowed_on_entry - Root access (read) allowed on entry(cn=replication,cn=config) [29/May/2017:14:38:19.246456160 +0200] - DEBUG - cos-plugin - cos_cache_vattr_types - Failed to get class of service reference [29/May/2017:14:38:19.257200851 +0200] - DEBUG - NSACLPlugin - Root access (read) allowed on entry(cn=replication,cn=config) [29/May/2017:14:38:19.273534025 +0200] - DEBUG - NSACLPlugin - Root 
access (read) allowed on entry(cn=replication,cn=config) [29/May/2017:14:38:19.289474926 +0200] - DEBUG - slapi_filter_free - type 0x87", "[29/May/2017:15:26:48.897935879 +0200] - DEBUG - collation_read_config - Reading config file /etc/dirsrv/slapd-supplier_1/slapd-collations.conf [29/May/2017:15:26:48.902606586 +0200] - DEBUG - collation-plugin - collation_read_config - line 16: collation \"\" \"\" \"\" 1 3 2.16.840.1.113730.3.3.2.0.1 default [29/May/2017:15:26:48.918493657 +0200] - DEBUG - collation-plugin - collation_read_config - line 17: collation ar \"\" \"\" 1 3 2.16.840.1.113730.3.3.2.1.1 ar [29/May/2017:15:26:48.932550086 +0200] - DEBUG - collation-plugin - collation_read_config - line 18: collation be \"\" \"\" 1 3 2.16.840.1.113730.3.3.2.2.1 be be-BY", "[29/May/2017:15:34:52.742034888 +0200] - DEBUG - NSACLPlugin - acllist_init_scan - Failed to find root for base: cn=features,cn=config [29/May/2017:15:34:52.761702767 +0200] - DEBUG - NSACLPlugin - acllist_init_scan - Failed to find root for base: cn=config [29/May/2017:15:34:52.771907825 +0200] - DEBUG - NSACLPlugin - acl_access_allowed - #### conn=6 op=1 binddn=\"cn=user,ou=people,dc=example,dc=com\" [29/May/2017:15:34:52.776327012 +0200] - DEBUG - NSACLPlugin - ************ RESOURCE INFO STARTS ********* [29/May/2017:15:34:52.786397852 +0200] - DEBUG - NSACLPlugin - Client DN: cn=user,ou=people,dc=example,dc=com [29/May/2017:15:34:52.797004451 +0200] - DEBUG - NSACLPlugin - resource type:256(search target_DN ) [29/May/2017:15:34:52.807135945 +0200] - DEBUG - NSACLPlugin - Slapi_Entry DN: cn=features,cn=config [29/May/2017:15:34:52.822877838 +0200] - DEBUG - NSACLPlugin - ATTR: objectClass [29/May/2017:15:34:52.827250828 +0200] - DEBUG - NSACLPlugin - rights:search [29/May/2017:15:34:52.831603634 +0200] - DEBUG - NSACLPlugin - ************ RESOURCE INFO ENDS ********* [29/May/2017:15:34:52.847183276 +0200] - DEBUG - NSACLPlugin - acl__scan_for_acis - Num of ALLOW Handles:0, DENY handles:0 [29/May/2017:15:34:52.857857195 +0200] - DEBUG - NSACLPlugin - print_access_control_summary - conn=6 op=1 (main): Deny search on entry(cn=features,cn=config).attr(objectClass) to cn=user,ou=people,dc=example,dc=com: no aci matched the resource", "timestamp: date dn: modified_entry changetype: action action : attribute attribute : new_value - replace: modifiersname modifiersname: dn - replace: modifytimestamp modifytimestamp: date -", "... modifying an entry time: 20200108181429 dn: uid=scarter,ou=people,dc=example,dc=com changetype: modify replace: userPassword userPassword: {SSHA}8EcJhJoIgBgY/E5j8JiVoj6W3BLyj9Za/rCPOw== - replace: modifiersname modifiersname: cn=Directory Manager - replace: modifytimestamp modifytimestamp: 20200108231429Z - ... 
sending a replication update time: 20200109131811 dn: cn=example2,cn=replica,cn=\"dc=example,dc=com\",cn=mapping tree,cn=config changetype: modify replace: nsds5BeginReplicaRefresh nsds5BeginReplicaRefresh: start - replace: modifiersname modifiersname: cn=Directory Manager - replace: modifytimestamp modifytimestamp: 20200109181810Z -", "ds-logpipe.py /var/log/dirsrv/slapd-example/access", "ldapmodify -D \"cn=Directory Manager\" -W -p 389 -h server.example.com -x dn: cn=config changetype: modify replace: nsslapd-accesslog nsslapd-accesslog: /var/log/dirsrv/slapd- instance /access.pipe - replace: nsslapd-accesslog-logbuffering nsslapd-accesslog-logbuffering: off - replace: nsslapd-accesslog-maxlogsperdir nsslapd-accesslog-maxlogsperdir: 1 - replace: nsslapd-accesslog-logexpirationtime nsslapd-accesslog-logexpirationtime: -1 - replace: nsslapd-accesslog-logrotationtime nsslapd-accesslog-logrotationtime: -1", "/etc/sysconfig/dirsrv- instance_name", "Put custom instance specific settings below here.", "only keep the last 1000 lines of the error log python /usr/bin/ds-logpipe.py /var/log/dirsrv/slapd-example/errors.pipe -m 1000 -u dirsrv -s /var/run/dirsrv/slapd-example.pid > /var/log/dirsrv/slapd-example/errors & only log failed binds python /usr/bin/ds-logpipe.py /var/log/dirsrv/slapd-example/access.pipe -u dirsrv -s /var/run/dirsrv/slapd-example.pid --plugin=/usr/share/dirsrv/data/failedbinds.py failedbinds.logfile=/var/log/dirsrv/slapd-example/access.failedbinds &", "ds-logpipe.py /var/log/dirsrc/slapd-example/errors.pipe --plugin=/usr/share/dirsrv/data/example-funct.py example-funct.regex=\"warning\" > warnings.txt", "--plugin=/path/to/pluginname.py pluginname.arg1=foo pluginname.arg1=bar pluginname.arg2=baz", "{'arg1': ['foo', 'bar'], 'arg2': 'baz'}", "def pre(myargs): retval = True myarg = myargs['argname'] if isinstance(myarg, list): # handle list of values else: # handle single value if bad_problem: retval = False return retval def plugin(line): retval = True # do something with line if something_is_bogus: retval = False return retval def post(): # no arguments # do something # no return value" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/configuration_command_and_file_reference/logs-reference
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation If you have a suggestion to improve this documentation, or find an error, you can contact technical support at https://access.redhat.com to open a request.
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/red_hat_ansible_automation_platform_installation_guide/providing-feedback
Chapter 2. Maintaining the indexes of a specific database
Chapter 2. Maintaining the indexes of a specific database Each database in Directory Server has its own index. You can create, update, and delete indexes using the dsconf utility or the web console. 2.1. The different index types Directory Server stores the indexes of each indexed attribute in a separate database file in the instance's database directory. For example, the indexes of the sn attribute are stored in the /var/lib/dirsrv/slapd- instance_name /db/ database_name /sn.db file. Each index file can contain multiple index types if Directory Server maintains different indexes for an attribute. Directory Server supports the following index types: The presence index ( pres ) is a list of the entries that contain a particular attribute. For example, use this type when clients frequently perform searches, such as attribute=mail . The equality index ( eq ) improves searches for entries containing a specific attribute value. For example, an equality index on the cn attribute enables faster searches for cn= first_name last_name . The approximate index ( approx ) enables efficient approximate or sounds-like searches. For example, searches for cn~= first_name last_name , cn~= first_name , or cn~= first_nam (note the misspelling) would return an entry cn= first_name X last_name . Note that the metaphone phonetic algorithm in Directory Server supports only US-ASCII letters. Therefore, use approximate indexing only with English values. The substring index ( sub ) is a costly index to maintain, but it enables efficient searching against substrings within entries. Substring indexes are limited to a minimum of three characters for each entry. For example, searches for telephoneNumber=* 555 * return all entries in the directory with a value that contains 555 in the telephoneNumber attribute. International index speeds up searches for information in international directories. The process for creating an international index is similar to the process for creating regular indexes, except that it applies a matching rule by associating an object identifier (OID) with the attributes to be indexed. 2.2. Balancing the benefits of indexing Before you create new indexes, balance the benefits of maintaining indexes against the costs: Approximate indexes are not efficient for attributes commonly containing numbers, such as phone numbers. Substring indexes do not work for binary attributes. Avoid equality indexes on attributes that contain big values, such as an image. Maintaining indexes for attributes that are not commonly used in searches increases the overhead without improving the search performance. Attributes that are not indexed can still be used in search requests, although the search performance can be degraded significantly, depending on the type of search. Indexes can become very time-consuming. For example, if Directory Server receives an add operation, the server examines the indexing attributes to determine whether an index is maintained for the attribute values. If the created attribute values are indexed, Directory Server adds the new attribute values to the index, and then the actual attribute values are created in the entry. Example 2.1. Indexing steps Directory Server performs when a user adds an entry Assume that Directory Server maintains the following indexes: Equality, approximate, and substring indexes for the cn and sn attributes. Equality and substring indexes for the telephoneNumber attribute. Substring indexes for the description attribute. 
For example, a user adds the following entry: dn: cn=John Doe,ou=People,dc=example,dc=com objectclass: top objectClass: person objectClass: orgperson objectClass: inetorgperson cn: John Doe cn: John sn: Doe ou: Manufacturing ou: people telephoneNumber: 408 555 8834 description: Manufacturing lead When the user adds the entry, Directory Server performs the following steps: Create the cn equality index entry for John and John Doe . Create the cn approximate index entries for John and John Doe . Create the cn substring index entries for John and John Doe . Create the sn equality index entry for Doe . Create the sn approximate index entry for Doe . Create the sn substring index entry for Doe . Create the telephoneNumber equality index entry for 408 555 8834 . Create the telephoneNumber substring index entry for 408 555 8834 . Create the description substring index entry for Manufacturing lead . This example illustrates that the number of actions required to create and maintain databases for a large directory can be very resource-intensive. Important Do not define a substring index for membership attributes (for example, member , uniquemember ) because it can impact Directory Server performance. When adding or removing members, for example a uniquemember value in a group with many members, the computation of the uniquemember substring index requires evaluating all uniquemember values, not only the added or removed values. 2.3. Default index attributes Directory Server stores the default index attributes in the cn=default indexes,cn=config,cn=ldbm database,cn=plugins,cn=config entry. To display them, including their index types, enter: # ldapsearch -D " cn=Directory Manager " -W -H ldap://server.example.com -b "cn=default indexes,cn=config,cn=ldbm database,cn=plugins,cn=config" -s one -o ldif-wrap=no Table 2.1. Directory Server default index attributes aci cn entryUSN entryUUID givenName mail mailAlternateAddress mailHost member memberOf nsUniqueId nsCertSubjectDN nsTombstoneCSN ntUniqueId ntUserDomainId numSubordinates objectClass owner parentId seeAlso sn targetUniqueId telephoneNumber uid uniqueMember Warning Removing the attributes listed in the table (system indexes) from the index of databases can significantly affect the Directory Server performance. 2.4. Maintaining the indexes of a specific database using the command line You can use the dsconf utility to maintain index settings using the command line. Procedure For example, to add the roomNumber attribute to the index of the userRoot database with the index types eq and sub , enter: # dsconf -D " cn=Directory Manager " ldap://server.example.com backend index add --attr roomNumber --index-type eq --index-type sub --reindex userRoot The --reindex option causes Directory Server to automatically re-index the database. 
For example, to add the pres index type to the index settings of the roomNumber attribute in the userRoot database, enter: # dsconf -D " cn=Directory Manager " ldap://server.example.com backend index set --attr roomNumber --add-type pres userRoot For example, to remove the pres index type from the index settings of the roomNumber attribute in the userRoot database, enter: # dsconf -D " cn=Directory Manager " ldap://server.example.com backend index set --attr roomNumber --del-type pres userRoot For example, to remove the roomNumber attribute from the index in the userRoot database, enter: # dsconf -D " cn=Directory Manager " ldap://server.example.com backend index delete --attr roomNumber userRoot Verification List the index settings of the userRoot database: # dsconf -D " cn=Directory Manager " ldap://server.example.com backend index list userRoot 2.5. Recreating an index while the instance is offline You can use the dsctl db2index utility for reindexing the whole database while the instance is offline. Prerequisites You created an indexing entry or added additional index types to the existing userRoot database. Procedure Shut down the instance: Recreate the index: For all indexes in the database, run: For specific attribute indexes, run: The following command recreates indexes for the aci , cn , and givenname attributes. For more information regarding the dsctl db2index (offline) command, run: Start the instance: Verification List the index settings of the userRoot database: 2.6. Maintaining the indexes of a specific database using the web console You can use the web console to maintain index settings in Directory Server. Prerequisites You are logged in to the instance in the web console. Procedure Navigate to Database Suffixes suffix_name Indexes Database Indexes . To add an attribute to the index: Click Add Index . Enter the attribute name in the Select An Attribute field. Select the index types. Select Index attribute after creation . Click Create Index . To update the index settings of an attribute: Click the overflow menu next to the attribute, and select Edit Index . Update the index settings to your needs. Select Index attribute after creation . Click Save Index . To delete an attribute from the index: Click the overflow menu next to the attribute, and select Delete Index . Select Yes, I am sure , and click Delete . In the Suffix Tasks menu, select Reindex Suffix . Verification Navigate to Database Suffixes suffix_name Indexes Database Indexes , and verify that the index settings reflect the changes you made.
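For reference, the dsconf commands in this chapter manage index configuration entries that Directory Server stores under the cn=index subtree of the backend configuration. The following ldapsearch command and entry are a hedged sketch rather than verbatim server output: the entry shown is what a typical user-defined index for the roomNumber attribute from the examples above is expected to look like, and the exact attributes can vary between versions.

# ldapsearch -D " cn=Directory Manager " -W -H ldap://server.example.com -b "cn=roomNumber,cn=index,cn=userRoot,cn=ldbm database,cn=plugins,cn=config"

dn: cn=roomNumber,cn=index,cn=userRoot,cn=ldbm database,cn=plugins,cn=config
objectClass: top
objectClass: nsIndex
cn: roomNumber
nsSystemIndex: false
nsIndexType: eq
nsIndexType: sub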
[ "dn: cn=John Doe,ou=People,dc=example,dc=com objectclass: top objectClass: person objectClass: orgperson objectClass: inetorgperson cn: John Doe cn: John sn: Doe ou: Manufacturing ou: people telephoneNumber: 408 555 8834 description: Manufacturing lead", "ldapsearch -D \" cn=Directory Manager \" -W -H ldap://server.example.com -b \"cn=default indexes,cn=config,cn=ldbm database,cn=plugins,cn=config\" -s one -o ldif-wrap=no", "dsconf -D \" cn=Directory Manager \" ldap://server.example.com backend index add --attr roomNumber --index-type eq --index-type sub --reindex userRoot", "dsconf -D \" cn=Directory Manager \" ldap://server.example.com backend index set --attr roomNumber --add-type pres userRoot", "dsconf -D \" cn=Directory Manager \" ldap://server.example.com backend index set --attr roomNumber --del-type pres userRoot", "dsconf -D \" cn=Directory Manager \" ldap://server.example.com backend index delete --attr roomNumber userRoot", "dsconf -D \" cn=Directory Manager \" ldap://server.example.com backend index list userRoot", "dsctl instance_name stop", "dsctl instance_name db2index [23/Feb/2023:05:38:28.034826108 -0500] - INFO - check_and_set_import_cache - pagesize: 4096, available bytes 1384095744, process usage 27467776 [23/Feb/2023:05:38:28.037952026 -0500] - INFO - check_and_set_import_cache - Import allocates 540662KB import cache. [23/Feb/2023:05:38:28.055104135 -0500] - INFO - bdb_db2index - userroot: Indexing attribute: aci [23/Feb/2023:05:38:28.134350191 -0500] - INFO - bdb_db2index - userroot: Finished indexing. [23/Feb/2023:05:38:28.151907852 -0500] - INFO - bdb_pre_close - All database threads now stopped db2index successful", "dsctl instance_name db2index userRoot --attr aci cn givenname", "dsctl instance_name db2index --help", "dsctl instance_name start", "dsconf -D \"cn=Directory Manager\" ldap:// server.example.com backend index list userRoot" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/managing_indexes/assembly_maintaining-the-indexes-of-a-specific-database_managing-indexes
9.10. Assign Storage Devices
9.10. Assign Storage Devices If you selected more than one storage device on the storage devices selection screen (refer to Section 9.6, "Storage Devices" ), anaconda asks you to select which of these devices should be available for installation of the operating system, and which should only be attached to the file system for data storage. If you selected only one storage device, anaconda does not present you with this screen. During installation, the devices that you identify here as being for data storage only are mounted as part of the file system, but are not partitioned or formatted. Figure 9.33. Assign storage devices The screen is split into two panes. The left pane contains a list of devices to be used for data storage only. The right pane contains a list of devices that are to be available for installation of the operating system. Each list contains information about the devices to help you to identify them. A small drop-down menu marked with an icon is located to the right of the column headings. This menu allows you to select the types of data presented on each device. Reducing or expanding the amount of information presented might help you to identify particular devices. Move a device from one list to the other by clicking on the device, then clicking either the button labeled with a left-pointing arrow to move it to the list of data storage devices or the button labeled with a right-pointing arrow to move it to the list of devices available for installation of the operating system. The list of devices available as installation targets also includes a radio button beside each device. Use this radio button to specify the device that you want to use as the boot device for the system. Important If any storage device contains a boot loader that will chain load the Red Hat Enterprise Linux boot loader, include that storage device among the Install Target Devices . Storage devices that you identify as Install Target Devices remain visible to anaconda during boot loader configuration. Storage devices that you identify as Install Target Devices on this screen are not automatically erased by the installation process unless you selected the Use All Space option on the partitioning screen (refer to Section 9.13, "Disk Partitioning Setup" ). When you have finished identifying devices to be used for installation, click to continue.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/assign_storage_devices-x86
Chapter 4. Installation configuration parameters for IBM Power
Chapter 4. Installation configuration parameters for IBM Power Before you deploy an OpenShift Container Platform cluster, you provide a customized install-config.yaml installation configuration file that describes the details for your environment. 4.1. Available installation configuration parameters for IBM Power The following tables specify the required, optional, and IBM Power-specific installation configuration parameters that you can set as part of the installation process. Note After installation, you cannot modify these parameters in the install-config.yaml file. 4.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 4.1. Required parameters Parameter Description Values The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object Get a pull secret from Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 4.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. If you use the Red Hat OpenShift Networking OVN-Kubernetes network plugin, both IPv4 and IPv6 address families are supported. If you configure your cluster to use both IP address families, review the following requirements: Both IP families must use the same network interface for the default gateway. Both IP families must have the default gateway. You must specify IPv4 and IPv6 addresses in the same order for all network configuration parameters. For example, in the following configuration IPv4 addresses are listed before IPv6 addresses. networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112 Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 4.2. Network parameters Parameter Description Values The configuration for the cluster network. 
Object Note You cannot modify parameters specified by the networking object after installation. The Red Hat OpenShift Networking network plugin to install. OVNKubernetes . OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . The IP address block for services. The default value is 172.30.0.0/16 . The OVN-Kubernetes network plugin supports only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. If you specify multiple IP kernel arguments, the machineNetwork.cidr value must be the CIDR of the primary network. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power(R) Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power(R) Virtual Server, the default value is 192.168.0.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 4.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 4.3. Optional parameters Parameter Description Values A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . 
None is the default value. The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are ppc64le (the default). String Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use compute . The name of the machine pool. worker Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . The configuration for the machines that comprise the control plane. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are ppc64le (the default). String Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use controlPlane . The name of the machine pool. master Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of control plane machines to provision. Supported values are 3 , or 1 when deploying single-node OpenShift. The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Mint , Passthrough , Manual or an empty string ( "" ). [1] Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. 
For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String Specify one or more repositories that may also contain the same images. Array of strings How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content.
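To see how the required, network, and optional parameters described above fit together, the following is a minimal, hedged install-config.yaml sketch for an IBM Power cluster. It is an illustration only: the domain, cluster name, network ranges, pull secret, and SSH key are placeholders; platform is shown as none: {} and compute replicas as 0 on the assumption of user-provisioned infrastructure where worker machines are created separately; your values will differ.

apiVersion: v1
baseDomain: example.com
metadata:
  name: power-cluster
compute:
- name: worker
  architecture: ppc64le
  hyperthreading: Enabled
  replicas: 0
controlPlane:
  name: master
  architecture: ppc64le
  hyperthreading: Enabled
  replicas: 3
networking:
  networkType: OVNKubernetes
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  machineNetwork:
  - cidr: 10.0.0.0/16
platform:
  none: {}
fips: false
pullSecret: '{"auths": ...}'
sshKey: 'ssh-ed25519 AAAA...'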
[ "apiVersion:", "baseDomain:", "metadata:", "metadata: name:", "platform:", "pullSecret:", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112", "networking:", "networking: networkType:", "networking: clusterNetwork:", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: clusterNetwork: cidr:", "networking: clusterNetwork: hostPrefix:", "networking: serviceNetwork:", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork:", "networking: machineNetwork: - cidr: 10.0.0.0/16", "networking: machineNetwork: cidr:", "additionalTrustBundle:", "capabilities:", "capabilities: baselineCapabilitySet:", "capabilities: additionalEnabledCapabilities:", "cpuPartitioningMode:", "compute:", "compute: architecture:", "compute: hyperthreading:", "compute: name:", "compute: platform:", "compute: replicas:", "featureSet:", "controlPlane:", "controlPlane: architecture:", "controlPlane: hyperthreading:", "controlPlane: name:", "controlPlane: platform:", "controlPlane: replicas:", "credentialsMode:", "fips:", "imageContentSources:", "imageContentSources: source:", "imageContentSources: mirrors:", "publish:", "sshKey:" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_ibm_power/installation-config-parameters-ibm-power
Chapter 7. Creating and managing host aggregates
Chapter 7. Creating and managing host aggregates As a cloud administrator, you can partition a Compute deployment into logical groups for performance or administrative purposes. Red Hat OpenStack Platform (RHOSP) provides the following mechanisms for partitioning logical groups: Host aggregate A host aggregate is a grouping of Compute nodes into a logical unit based on attributes such as the hardware or performance characteristics. You can assign a Compute node to one or more host aggregates. You can map flavors and images to host aggregates by setting metadata on the host aggregate, and then matching flavor extra specs or image metadata properties to the host aggregate metadata. The Compute scheduler can use this metadata to schedule instances when the required filters are enabled. Metadata that you specify in a host aggregate limits the use of that host to any instance that has the same metadata specified in its flavor or image. You can configure weight multipliers for each host aggregate by setting the xxx_weight_multiplier configuration option in the host aggregate metadata. You can use host aggregates to handle load balancing, enforce physical isolation or redundancy, group servers with common attributes, or separate classes of hardware. When you create a host aggregate, you can specify a zone name. This name is presented to cloud users as an availability zone that they can select. Availability zones An availability zone is the cloud user view of a host aggregate. A cloud user cannot view the Compute nodes in the availability zone, or view the metadata of the availability zone. The cloud user can only see the name of the availability zone. You can assign each Compute node to only one availability zone. You can configure a default availability zone where instances will be scheduled when the cloud user does not specify a zone. You can direct cloud users to use availability zones that have specific capabilities. 7.1. Enabling scheduling on host aggregates To schedule instances on host aggregates that have specific attributes, update the configuration of the Compute scheduler to enable filtering based on the host aggregate metadata. Procedure Open your Compute environment file. Add the following values to the NovaSchedulerEnabledFilters parameter, if they are not already present: AggregateInstanceExtraSpecsFilter : Add this value to filter Compute nodes by host aggregate metadata that match flavor extra specs. Note For this filter to perform as expected, you must scope the flavor extra specs by prefixing the extra_specs key with the aggregate_instance_extra_specs: namespace. AggregateImagePropertiesIsolation : Add this value to filter Compute nodes by host aggregate metadata that match image metadata properties. Note To filter host aggregate metadata by using image metadata properties, the host aggregate metadata key must match a valid image metadata property. For information about valid image metadata properties, see Image configuration parameters . AvailabilityZoneFilter : Add this value to filter by availability zone when launching an instance. Note Instead of using the AvailabilityZoneFilter Compute scheduler service filter, you can use the Placement service to process availability zone requests. For more information, see Filtering by availability zone using the Placement service . Save the updates to your Compute environment file. Add your Compute environment file to the stack with your other environment files and deploy the overcloud: 7.2. 
Creating a host aggregate As a cloud administrator, you can create as many host aggregates as you require. Procedure To create a host aggregate, enter the following command: Replace <aggregate_name> with the name you want to assign to the host aggregate. Add metadata to the host aggregate: Replace <key=value> with the metadata key-value pair. If you are using the AggregateInstanceExtraSpecsFilter filter, the key can be any arbitrary string, for example, ssd=true . If you are using the AggregateImagePropertiesIsolation filter, the key must match a valid image metadata property. For more information about valid image metadata properties, see Image configuration parameters . Replace <aggregate_name> with the name of the host aggregate. Add the Compute nodes to the host aggregate: Replace <aggregate_name> with the name of the host aggregate to add the Compute node to. Replace <host_name> with the name of the Compute node to add to the host aggregate. Create a flavor or image for the host aggregate: Create a flavor: Create an image: Set one or more key-value pairs on the flavor or image that match the key-value pairs on the host aggregate. To set the key-value pairs on a flavor, use the scope aggregate_instance_extra_specs : To set the key-value pairs on an image, use valid image metadata properties as the key: 7.3. Creating an availability zone As a cloud administrator, you can create an availability zone that cloud users can select when they create an instance. Procedure To create an availability zone, you can create a new availability zone host aggregate, or make an existing host aggregate an availability zone: To create a new availability zone host aggregate, enter the following command: Replace <availability_zone> with the name you want to assign to the availability zone. Replace <aggregate_name> with the name you want to assign to the host aggregate. To make an existing host aggregate an availability zone, enter the following command: Replace <availability_zone> with the name you want to assign to the availability zone. Replace <aggregate_name> with the name of the host aggregate. Optional: Add metadata to the availability zone: Replace <key=value> with your metadata key-value pair. You can add as many key-value properties as required. Replace <aggregate_name> with the name of the availability zone host aggregate. Add Compute nodes to the availability zone host aggregate: Replace <aggregate_name> with the name of the availability zone host aggregate to add the Compute node to. Replace <host_name> with the name of the Compute node to add to the availability zone. 7.4. Deleting a host aggregate To delete a host aggregate, you first remove all the Compute nodes from the host aggregate. Procedure To view a list of all the Compute nodes assigned to the host aggregate, enter the following command: To remove all assigned Compute nodes from the host aggregate, enter the following command for each Compute node: Replace <aggregate_name> with the name of the host aggregate to remove the Compute node from. Replace <host_name> with the name of the Compute node to remove from the host aggregate. After you remove all the Compute nodes from the host aggregate, enter the following command to delete the host aggregate: 7.5. Creating a project-isolated host aggregate You can create a host aggregate that is available only to specific projects. Only the projects that you assign to the host aggregate can launch instances on the host aggregate. 
Note Project isolation uses the Placement service to filter host aggregates for each project. This process supersedes the functionality of the AggregateMultiTenancyIsolation filter. You therefore do not need to use the AggregateMultiTenancyIsolation filter. Procedure Open your Compute environment file. To schedule project instances on the project-isolated host aggregate, set the NovaSchedulerLimitTenantsToPlacementAggregate parameter to True in the Compute environment file. Optional: To ensure that only the projects that you assign to a host aggregate can create instances on your cloud, set the NovaSchedulerPlacementAggregateRequiredForTenants parameter to True . Note NovaSchedulerPlacementAggregateRequiredForTenants is False by default. When this parameter is False , projects that are not assigned to a host aggregate can create instances on any host aggregate. Save the updates to your Compute environment file. Add your Compute environment file to the stack with your other environment files and deploy the overcloud: Create the host aggregate. Retrieve the list of project IDs: Use the filter_tenant_id<suffix> metadata key to assign projects to the host aggregate: Replace <ID0> , <ID1> , and all IDs up to <IDn> with unique values for each project filter that you want to create. Replace <project_id0> , <project_id1> , and all project IDs up to <project_idn> with the ID of each project that you want to assign to the host aggregate. Replace <aggregate_name> with the name of the project-isolated host aggregate. For example, use the following syntax to assign projects 78f1 , 9d3t , and aa29 to the host aggregate project-isolated-aggregate : Tip You can create a host aggregate that is available only to a single specific project by omitting the suffix from the filter_tenant_id metadata key: Additional resources For more information on creating a host aggregate, see Creating and managing host aggregates .
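The procedures in this chapter reference a Compute environment file without showing one. As a rough, hedged sketch (the file name is arbitrary, the filter list is only an example and must be adjusted to the filters your deployment actually needs, and the exact parameter syntax should be checked against your Red Hat OpenStack Platform version), such an environment file could combine the scheduler settings from sections 7.1 and 7.5 as follows:

parameter_defaults:
  NovaSchedulerEnabledFilters:
    - AvailabilityZoneFilter
    - ComputeFilter
    - ComputeCapabilitiesFilter
    - ImagePropertiesFilter
    - AggregateInstanceExtraSpecsFilter
    - AggregateImagePropertiesIsolation
  NovaSchedulerLimitTenantsToPlacementAggregate: true
  NovaSchedulerPlacementAggregateRequiredForTenants: false

After editing the file, include it in the overcloud deployment command shown in the examples for this chapter so that the updated scheduler configuration is applied.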
[ "(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/<compute_environment_file>.yaml", "(overcloud)# openstack aggregate create <aggregate_name>", "(overcloud)# openstack aggregate set --property <key=value> --property <key=value> <aggregate_name>", "(overcloud)# openstack aggregate add host <aggregate_name> <host_name>", "(overcloud)USD openstack flavor create --ram <size_mb> --disk <size_gb> --vcpus <no_reserved_vcpus> host-agg-flavor", "(overcloud)USD openstack image create host-agg-image", "(overcloud)# openstack flavor set --property aggregate_instance_extra_specs:ssd=true host-agg-flavor", "(overcloud)# openstack image set --property os_type=linux host-agg-image", "(overcloud)# openstack aggregate create --zone <availability_zone> <aggregate_name>", "(overcloud)# openstack aggregate set --zone <availability_zone> <aggregate_name>", "(overcloud)# openstack aggregate set --property <key=value> <aggregate_name>", "(overcloud)# openstack aggregate add host <aggregate_name> <host_name>", "(overcloud)# openstack aggregate show <aggregate_name>", "(overcloud)# openstack aggregate remove host <aggregate_name> <host_name>", "(overcloud)# openstack aggregate delete <aggregate_name>", "(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/<compute_environment_file>.yaml \\", "(overcloud)# openstack project list", "(overcloud)# openstack aggregate set --property filter_tenant_id<ID0>=<project_id0> --property filter_tenant_id<ID1>=<project_id1> --property filter_tenant_id<IDn>=<project_idn> <aggregate_name>", "(overcloud)# openstack aggregate set --property filter_tenant_id0=78f1 --property filter_tenant_id1=9d3t --property filter_tenant_id2=aa29 project-isolated-aggregate", "(overcloud)# openstack aggregate set --property filter_tenant_id=78f1 single-project-isolated-aggregate" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/configuring_the_compute_service_for_instance_creation/assembly_creating-and-managing-host-aggregates_host-aggregates
Chapter 6. Directory Services
Chapter 6. Directory Services 6.1. Directory Services The Red Hat Virtualization platform relies on directory services for user authentication and authorization. Interactions with all Manager interfaces, including the VM Portal, Administration Portal, and REST API are limited to authenticated, authorized users. Virtual machines within the Red Hat Virtualization environment can use the same directory services to provide authentication and authorization, however they must be configured to do so. The currently supported providers of directory services for use with the Red Hat Virtualization Manager are Identity Management (IdM), Red Hat Directory Server 9 (RHDS), Active Directory (AD), and OpenLDAP. The Red Hat Virtualization Manager interfaces with the directory server for: Portal logins (User, Power User, Administrator, REST API). Queries to display user information. Adding the Manager to a domain. Authentication is the verification and identification of a party who generated some data, and of the integrity of the generated data. A principal is the party whose identity is verified. The verifier is the party who demands assurance of the principal's identity. In the case of Red Hat Virtualization, the Manager is the verifier and a user is a principal. Data integrity is the assurance that the data received is the same as the data generated by the principal. Confidentiality and authorization are closely related to authentication. Confidentiality protects data from disclosure to those not intended to receive it. Strong authentication methods can optionally provide confidentiality. Authorization determines whether a principal is allowed to perform an operation. Red Hat Virtualization uses directory services to associate users with roles and provide authorization accordingly. Authorization is usually performed after the principal has been authenticated, and may be based on information local or remote to the verifier. During installation, a local, internal domain is automatically configured for administration of the Red Hat Virtualization environment. After the installation is complete, more domains can be added.
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/technical_reference/chap-directory_services
Chapter 291. SAP NetWeaver Component
Chapter 291. SAP NetWeaver Component Available as of Camel version 2.12 The sap-netweaver component integrates with the SAP NetWeaver Gateway using HTTP transports. This Camel component supports only producer endpoints. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-sap-netweaver</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 291.1. URI format The URI scheme for a SAP NetWeaver gateway component is as follows: sap-netweaver:https://host:8080/path?username=foo&password=secret You can append query options to the URI in the following format, ?option=value&option=value&... 291.2. Prerequisites You need an account on the SAP NetWeaver system to be able to use this component. SAP provides a demo setup where you can request an account. This component uses the basic authentication scheme for logging into SAP NetWeaver. 291.3. SAP NetWeaver options The SAP NetWeaver component has no options. The SAP NetWeaver endpoint is configured using URI syntax: with the following path and query parameters: 291.3.1. Path Parameters (1 parameters): Name Description Default Type url Required URL to the SAP NetWeaver gateway server. String 291.3.2. Query Parameters (6 parameters): Name Description Default Type flatternMap (producer) If the JSON Map contains only a single entry, then flatten it by storing that single entry value as the message body. true boolean json (producer) Whether to return data in JSON format. If this option is false, then XML is returned in Atom format. true boolean jsonAsMap (producer) To transform the JSON from a String to a Map in the message body. true boolean password (producer) Required Password for account. String username (producer) Required Username for account. String synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean 291.4. Spring Boot Auto-Configuration The component supports 2 options, which are listed below. Name Description Default Type camel.component.sap-netweaver.enabled Enable sap-netweaver component true Boolean camel.component.sap-netweaver.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean 291.5. Message Headers The following headers can be used by the producer. Name Type Description CamelNetWeaverCommand String Mandatory : The command to execute in MS ADO.Net Data Service format. 291.6. Examples This example uses the flight demo example from SAP, which is available online here . In the route below we request the SAP NetWeaver demo server using the following url https://sapes1.sapdevcenter.com/sap/opu/odata/IWBEP/RMTSAMPLEFLIGHT_2/ and we want to execute the following command FlightCollection(AirLineID='AA',FlightConnectionID='0017',FlightDate=datetime'2012-08-29T00%3A00%3A00') to get flight details for the given flight. The command syntax is in MS ADO.Net Data Service format. 
We have the following Camel route from("direct:start") .setHeader(NetWeaverConstants.COMMAND, constant(command)) .toF("sap-netweaver:%s?username=%s&password=%s", url, username, password) .to("log:response") .to("velocity:flight-info.vm") Where url, username, password, and command are defined as: private String username = "P1909969254"; private String password = "TODO"; private String url = "https://sapes1.sapdevcenter.com/sap/opu/odata/IWBEP/RMTSAMPLEFLIGHT_2/"; private String command = "FlightCollection(AirLineID='AA',FlightConnectionID='0017',FlightDate=datetime'2012-08-29T00%3A00%3A00')"; The password is invalid. You would need to create an account at SAP first to run the demo. The Velocity template formats the response as a basic HTML page: <html> <body> Flight information: <p/> <br/>Airline ID: $body["AirLineID"] <br/>Aircraft Type: $body["AirCraftType"] <br/>Departure city: $body["FlightDetails"]["DepartureCity"] <br/>Departure airport: $body["FlightDetails"]["DepartureAirPort"] <br/>Destination city: $body["FlightDetails"]["DestinationCity"] <br/>Destination airport: $body["FlightDetails"]["DestinationAirPort"] </body> </html> When running the application, you get sample output: Flight information: Airline ID: AA Aircraft Type: 747-400 Departure city: new york Departure airport: JFK Destination city: SAN FRANCISCO Destination airport: SFO 291.7. See Also Configuring Camel Component Endpoint Getting Started HTTP
[ "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-sap-netweaver</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>", "sap-netweaver:https://host:8080/path?username=foo&password=secret", "sap-netweaver:url", "https://sapes1.sapdevcenter.com/sap/opu/odata/IWBEP/RMTSAMPLEFLIGHT_2/", "FlightCollection(AirLineID='AA',FlightConnectionID='0017',FlightDate=datetime'2012-08-29T00%3A00%3A00')", "from(\"direct:start\") .setHeader(NetWeaverConstants.COMMAND, constant(command)) .toF(\"sap-netweaver:%s?username=%s&password=%s\", url, username, password) .to(\"log:response\") .to(\"velocity:flight-info.vm\")", "private String username = \"P1909969254\"; private String password = \"TODO\"; private String url = \"https://sapes1.sapdevcenter.com/sap/opu/odata/IWBEP/RMTSAMPLEFLIGHT_2/\"; private String command = \"FlightCollection(AirLineID='AA',FlightConnectionID='0017',FlightDate=datetime'2012-08-29T00%3A00%3A00')\";", "<html> <body> Flight information: <p/> <br/>Airline ID: USDbody[\"AirLineID\"] <br/>Aircraft Type: USDbody[\"AirCraftType\"] <br/>Departure city: USDbody[\"FlightDetails\"][\"DepartureCity\"] <br/>Departure airport: USDbody[\"FlightDetails\"][\"DepartureAirPort\"] <br/>Destination city: USDbody[\"FlightDetails\"][\"DestinationCity\"] <br/>Destination airport: USDbody[\"FlightDetails\"][\"DestinationAirPort\"] </body> </html>", "Flight information: Airline ID: AA Aircraft Type: 747-400 Departure city: new york Departure airport: JFK Destination city: SAN FRANCISCO Destination airport: SFO" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/sap-netweaver-component
4.3.2. Backing Store -- the Central Tenet of Virtual Memory
4.3.2. Backing Store -- the Central Tenet of Virtual Memory The short answer to this question is that the rest of the application remains on disk. In other words, disk acts as the backing store for RAM; a slower, larger storage medium acting as a "backup" for a much faster, smaller storage medium. This might at first seem to be a very large performance problem in the making -- after all, disk drives are so much slower than RAM. While this is true, it is possible to take advantage of the sequential and localized access behavior of applications and eliminate most of the performance implications of using disk drives as backing store for RAM. This is done by structuring the virtual memory subsystem so that it attempts to ensure that those parts of the application currently needed -- or likely to be needed in the near future -- are kept in RAM only for as long as they are actually needed. In many respects this is similar to the relationship between cache and RAM: making a small amount of fast storage combined with a large amount of slow storage act just like a large amount of fast storage. With this in mind, let us explore the process in more detail.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s2-memory-virt-where
Release notes
Release notes Red Hat Advanced Cluster Security for Kubernetes 4.6 Highlights what is new and what has changed with Red Hat Advanced Cluster Security for Kubernetes releases Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/release_notes/index
3. Feedback
3. Feedback If you spot a typo, or if you have thought of a way to make this manual better, we would love to hear from you. Please submit a report in Bugzilla ( http://bugzilla.redhat.com/bugzilla/ ) against the component rh-cs . Be sure to mention the manual's identifier: By mentioning this manual's identifier, we know exactly which version of the guide you have. If you have a suggestion for improving the documentation, try to be as specific as possible. If you have found an error, please include the section number and some of the surrounding text so we can find it easily.
[ "rh-MPIO(EN)-4.9 (2011-02-16T16:48)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/dm_multipath/s1-intro-feedback-CA
Chapter 4. Planning your environment according to object maximums
Chapter 4. Planning your environment according to object maximums Consider the following tested object maximums when you plan your OpenShift Container Platform cluster. These guidelines are based on the largest possible cluster. For smaller clusters, the maximums are lower. There are many factors that influence the stated thresholds, including the etcd version or storage data format. In most cases, exceeding these numbers results in lower overall performance. It does not necessarily mean that the cluster will fail. Warning Clusters that experience rapid change, such as those with many starting and stopping pods, can have a lower practical maximum size than documented. 4.1. OpenShift Container Platform tested cluster maximums for major releases Note Red Hat does not provide direct guidance on sizing your OpenShift Container Platform cluster. This is because determining whether your cluster is within the supported bounds of OpenShift Container Platform requires careful consideration of all the multidimensional factors that limit the cluster scale. OpenShift Container Platform supports tested cluster maximums rather than absolute cluster maximums. Not every combination of OpenShift Container Platform version, control plane workload, and network plugin are tested, so the following table does not represent an absolute expectation of scale for all deployments. It might not be possible to scale to a maximum on all dimensions simultaneously. The table contains tested maximums for specific workload and deployment configurations, and serves as a scale guide as to what can be expected with similar deployments. Maximum type 4.x tested maximum Number of nodes 2,000 [1] Number of pods [2] 150,000 Number of pods per node 2,500 [3][4] Number of pods per core There is no default value. Number of namespaces [5] 10,000 Number of builds 10,000 (Default pod RAM 512 Mi) - Source-to-Image (S2I) build strategy Number of pods per namespace [6] 25,000 Number of routes and back ends per Ingress Controller 2,000 per router Number of secrets 80,000 Number of config maps 90,000 Number of services [7] 10,000 Number of services per namespace 5,000 Number of back-ends per service 5,000 Number of deployments per namespace [6] 2,000 Number of build configs 12,000 Number of custom resource definitions (CRD) 1,024 [8] Pause pods were deployed to stress the control plane components of OpenShift Container Platform at 2000 node scale. The ability to scale to similar numbers will vary depending upon specific deployment and workload parameters. The pod count displayed here is the number of test pods. The actual number of pods depends on the application's memory, CPU, and storage requirements. This was tested on a cluster with 31 servers: 3 control planes, 2 infrastructure nodes, and 26 worker nodes. If you need 2,500 user pods, you need both a hostPrefix of 20 , which allocates a network large enough for each node to contain more than 2000 pods, and a custom kubelet config with maxPods set to 2500 . For more information, see Running 2500 pods per node on OCP 4.13 . The maximum tested pods per node is 2,500 for clusters using the OVNKubernetes network plugin. The maximum tested pods per node for the OpenShiftSDN network plugin is 500 pods. When there are a large number of active projects, etcd might suffer from poor performance if the keyspace grows excessively large and exceeds the space quota. Periodic maintenance of etcd, including defragmentation, is highly recommended to free etcd storage. 
There are several control loops in the system that must iterate over all objects in a given namespace as a reaction to some changes in state. Having a large number of objects of a given type in a single namespace can make those loops expensive and slow down processing given state changes. The limit assumes that the system has enough CPU, memory, and disk to satisfy the application requirements. Each service port and each service back-end has a corresponding entry in iptables . The number of back-ends of a given service impact the size of the Endpoints objects, which impacts the size of data that is being sent all over the system. Tested on a cluster with 29 servers: 3 control planes, 2 infrastructure nodes, and 24 worker nodes. The cluster had 500 namespaces. OpenShift Container Platform has a limit of 1,024 total custom resource definitions (CRD), including those installed by OpenShift Container Platform, products integrating with OpenShift Container Platform and user-created CRDs. If there are more than 1,024 CRDs created, then there is a possibility that oc command requests might be throttled. 4.1.1. Example scenario As an example, 500 worker nodes (m5.2xl) were tested, and are supported, using OpenShift Container Platform 4.16, the OVN-Kubernetes network plugin, and the following workload objects: 200 namespaces, in addition to the defaults 60 pods per node; 30 server and 30 client pods (30k total) 57 image streams/ns (11.4k total) 15 services/ns backed by the server pods (3k total) 15 routes/ns backed by the services (3k total) 20 secrets/ns (4k total) 10 config maps/ns (2k total) 6 network policies/ns, including deny-all, allow-from ingress and intra-namespace rules 57 builds/ns The following factors are known to affect cluster workload scaling, positively or negatively, and should be factored into the scale numbers when planning a deployment. For additional information and guidance, contact your sales representative or Red Hat support . Number of pods per node Number of containers per pod Type of probes used (for example, liveness/readiness, exec/http) Number of network policies Number of projects, or namespaces Number of image streams per project Number of builds per project Number of services/endpoints and type Number of routes Number of shards Number of secrets Number of config maps Rate of API calls, or the cluster "churn", which is an estimation of how quickly things change in the cluster configuration. Prometheus query for pod creation requests per second over 5 minute windows: sum(irate(apiserver_request_count{resource="pods",verb="POST"}[5m])) Prometheus query for all API requests per second over 5 minute windows: sum(irate(apiserver_request_count{}[5m])) Cluster node resource consumption of CPU Cluster node resource consumption of memory 4.2. OpenShift Container Platform environment and configuration on which the cluster maximums are tested 4.2.1. AWS cloud platform Node Flavor vCPU RAM(GiB) Disk type Disk size(GiB)/IOS Count Region Control plane/etcd [1] r5.4xlarge 16 128 gp3 220 3 us-west-2 Infra [2] m5.12xlarge 48 192 gp3 100 3 us-west-2 Workload [3] m5.4xlarge 16 64 gp3 500 [4] 1 us-west-2 Compute m5.2xlarge 8 32 gp3 100 3/25/250/500 [5] us-west-2 gp3 disks with a baseline performance of 3000 IOPS and 125 MiB per second are used for control plane/etcd nodes because etcd is latency sensitive. gp3 volumes do not use burst performance. Infra nodes are used to host Monitoring, Ingress, and Registry components to ensure they have enough resources to run at large scale. 
Workload node is dedicated to run performance and scalability workload generators. Larger disk size is used so that there is enough space to store the large amounts of data that is collected during the performance and scalability test run. Cluster is scaled in iterations and performance and scalability tests are executed at the specified node counts. 4.2.2. IBM Power platform Node vCPU RAM(GiB) Disk type Disk size(GiB)/IOS Count Control plane/etcd [1] 16 32 io1 120 / 10 IOPS per GiB 3 Infra [2] 16 64 gp2 120 2 Workload [3] 16 256 gp2 120 [4] 1 Compute 16 64 gp2 120 2 to 100 [5] io1 disks with 120 / 10 IOPS per GiB are used for control plane/etcd nodes as etcd is I/O intensive and latency sensitive. Infra nodes are used to host Monitoring, Ingress, and Registry components to ensure they have enough resources to run at large scale. Workload node is dedicated to run performance and scalability workload generators. Larger disk size is used so that there is enough space to store the large amounts of data that is collected during the performance and scalability test run. Cluster is scaled in iterations. 4.2.3. IBM Z platform Node vCPU [4] RAM(GiB) [5] Disk type Disk size(GiB)/IOS Count Control plane/etcd [1,2] 8 32 ds8k 300 / LCU 1 3 Compute [1,3] 8 32 ds8k 150 / LCU 2 4 nodes (scaled to 100/250/500 pods per node) Nodes are distributed between two logical control units (LCUs) to optimize disk I/O load of the control plane/etcd nodes as etcd is I/O intensive and latency sensitive. Etcd I/O demand should not interfere with other workloads. Four compute nodes are used for the tests running several iterations with 100/250/500 pods at the same time. First, idling pods were used to evaluate if pods can be instantiated. Next, a network and CPU demanding client/server workload was used to evaluate the stability of the system under stress. Client and server pods were pairwise deployed and each pair was spread over two compute nodes. No separate workload node was used. The workload simulates a microservice workload between two compute nodes. Physical number of processors used is six Integrated Facilities for Linux (IFLs). Total physical memory used is 512 GiB. 4.3. How to plan your environment according to tested cluster maximums Important Oversubscribing the physical resources on a node affects resource guarantees the Kubernetes scheduler makes during pod placement. Learn what measures you can take to avoid memory swapping. Some of the tested maximums are stretched only in a single dimension. They will vary when many objects are running on the cluster. The numbers noted in this documentation are based on Red Hat's test methodology, setup, configuration, and tunings. These numbers can vary based on your own individual setup and environments. While planning your environment, determine how many pods are expected to fit per node: The default maximum number of pods per node is 250. However, the number of pods that fit on a node is dependent on the application itself. Consider the application's memory, CPU, and storage requirements, as described in "How to plan your environment according to application requirements".
Example scenario If you want to scope your cluster for 2200 pods per cluster, you would need at least five nodes, assuming that there are 500 maximum pods per node: If you increase the number of nodes to 20, then the pod distribution changes to 110 pods per node: Where: OpenShift Container Platform comes with several system pods, such as SDN, DNS, Operators, and others, which run across every worker node by default. Therefore, the result of the above formula can vary. 4.4. How to plan your environment according to application requirements Consider an example application environment: Pod type Pod quantity Max memory CPU cores Persistent storage apache 100 500 MB 0.5 1 GB node.js 200 1 GB 1 1 GB postgresql 100 1 GB 2 10 GB JBoss EAP 100 1 GB 1 1 GB Extrapolated requirements: 550 CPU cores, 450GB RAM, and 1.4TB storage. Instance size for nodes can be modulated up or down, depending on your preference. Nodes are often resource overcommitted. In this deployment scenario, you can choose to run additional smaller nodes or fewer larger nodes to provide the same amount of resources. Factors such as operational agility and cost-per-instance should be considered. Node type Quantity CPUs RAM (GB) Nodes (option 1) 100 4 16 Nodes (option 2) 50 8 32 Nodes (option 3) 25 16 64 Some applications lend themselves well to overcommitted environments, and some do not. Most Java applications and applications that use huge pages are examples of applications that would not allow for overcommitment. That memory can not be used for other applications. In the example above, the environment would be roughly 30 percent overcommitted, a common ratio. The application pods can access a service either by using environment variables or DNS. If using environment variables, for each active service the variables are injected by the kubelet when a pod is run on a node. A cluster-aware DNS server watches the Kubernetes API for new services and creates a set of DNS records for each one. If DNS is enabled throughout your cluster, then all pods should automatically be able to resolve services by their DNS name. Service discovery using DNS can be used in case you must go beyond 5000 services. When using environment variables for service discovery, the argument list exceeds the allowed length after 5000 services in a namespace, then the pods and deployments will start failing. Disable the service links in the deployment's service specification file to overcome this: --- apiVersion: template.openshift.io/v1 kind: Template metadata: name: deployment-config-template creationTimestamp: annotations: description: This template will create a deploymentConfig with 1 replica, 4 env vars and a service. 
tags: '' objects: - apiVersion: apps.openshift.io/v1 kind: DeploymentConfig metadata: name: deploymentconfigUSD{IDENTIFIER} spec: template: metadata: labels: name: replicationcontrollerUSD{IDENTIFIER} spec: enableServiceLinks: false containers: - name: pauseUSD{IDENTIFIER} image: "USD{IMAGE}" ports: - containerPort: 8080 protocol: TCP env: - name: ENVVAR1_USD{IDENTIFIER} value: "USD{ENV_VALUE}" - name: ENVVAR2_USD{IDENTIFIER} value: "USD{ENV_VALUE}" - name: ENVVAR3_USD{IDENTIFIER} value: "USD{ENV_VALUE}" - name: ENVVAR4_USD{IDENTIFIER} value: "USD{ENV_VALUE}" resources: {} imagePullPolicy: IfNotPresent capabilities: {} securityContext: capabilities: {} privileged: false restartPolicy: Always serviceAccount: '' replicas: 1 selector: name: replicationcontrollerUSD{IDENTIFIER} triggers: - type: ConfigChange strategy: type: Rolling - apiVersion: v1 kind: Service metadata: name: serviceUSD{IDENTIFIER} spec: selector: name: replicationcontrollerUSD{IDENTIFIER} ports: - name: serviceportUSD{IDENTIFIER} protocol: TCP port: 80 targetPort: 8080 clusterIP: '' type: ClusterIP sessionAffinity: None status: loadBalancer: {} parameters: - name: IDENTIFIER description: Number to append to the name of resources value: '1' required: true - name: IMAGE description: Image to use for deploymentConfig value: gcr.io/google-containers/pause-amd64:3.0 required: false - name: ENV_VALUE description: Value to use for environment variables generate: expression from: "[A-Za-z0-9]{255}" required: false labels: template: deployment-config-template The number of application pods that can run in a namespace is dependent on the number of services and the length of the service name when the environment variables are used for service discovery. ARG_MAX on the system defines the maximum argument length for a new process and it is set to 2097152 bytes (2 MiB) by default. The Kubelet injects environment variables in to each pod scheduled to run in the namespace including: <SERVICE_NAME>_SERVICE_HOST=<IP> <SERVICE_NAME>_SERVICE_PORT=<PORT> <SERVICE_NAME>_PORT=tcp://<IP>:<PORT> <SERVICE_NAME>_PORT_<PORT>_TCP=tcp://<IP>:<PORT> <SERVICE_NAME>_PORT_<PORT>_TCP_PROTO=tcp <SERVICE_NAME>_PORT_<PORT>_TCP_PORT=<PORT> <SERVICE_NAME>_PORT_<PORT>_TCP_ADDR=<ADDR> The pods in the namespace will start to fail if the argument length exceeds the allowed value and the number of characters in a service name impacts it. For example, in a namespace with 5000 services, the limit on the service name is 33 characters, which enables you to run 5000 pods in the namespace.
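The two checks described above can be performed quickly from a shell. This is a minimal sketch rather than part of the original procedure: the deployment name is a placeholder, and the patch assumes a standard Kubernetes Deployment rather than the DeploymentConfig shown in the template.
# Check the maximum argument length on a node (2097152 bytes, 2 MiB, by default)
getconf ARG_MAX
# Disable service-link environment variable injection for an existing Deployment
oc patch deployment <deployment_name> --type merge \
  -p '{"spec":{"template":{"spec":{"enableServiceLinks":false}}}}'
Disabling enableServiceLinks only stops the kubelet from injecting the per-service variables; applications that rely on DNS-based service discovery are unaffected.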
[ "required pods per cluster / pods per node = total number of nodes needed", "2200 / 500 = 4.4", "2200 / 20 = 110", "required pods per cluster / total number of nodes = expected pods per node", "--- apiVersion: template.openshift.io/v1 kind: Template metadata: name: deployment-config-template creationTimestamp: annotations: description: This template will create a deploymentConfig with 1 replica, 4 env vars and a service. tags: '' objects: - apiVersion: apps.openshift.io/v1 kind: DeploymentConfig metadata: name: deploymentconfigUSD{IDENTIFIER} spec: template: metadata: labels: name: replicationcontrollerUSD{IDENTIFIER} spec: enableServiceLinks: false containers: - name: pauseUSD{IDENTIFIER} image: \"USD{IMAGE}\" ports: - containerPort: 8080 protocol: TCP env: - name: ENVVAR1_USD{IDENTIFIER} value: \"USD{ENV_VALUE}\" - name: ENVVAR2_USD{IDENTIFIER} value: \"USD{ENV_VALUE}\" - name: ENVVAR3_USD{IDENTIFIER} value: \"USD{ENV_VALUE}\" - name: ENVVAR4_USD{IDENTIFIER} value: \"USD{ENV_VALUE}\" resources: {} imagePullPolicy: IfNotPresent capabilities: {} securityContext: capabilities: {} privileged: false restartPolicy: Always serviceAccount: '' replicas: 1 selector: name: replicationcontrollerUSD{IDENTIFIER} triggers: - type: ConfigChange strategy: type: Rolling - apiVersion: v1 kind: Service metadata: name: serviceUSD{IDENTIFIER} spec: selector: name: replicationcontrollerUSD{IDENTIFIER} ports: - name: serviceportUSD{IDENTIFIER} protocol: TCP port: 80 targetPort: 8080 clusterIP: '' type: ClusterIP sessionAffinity: None status: loadBalancer: {} parameters: - name: IDENTIFIER description: Number to append to the name of resources value: '1' required: true - name: IMAGE description: Image to use for deploymentConfig value: gcr.io/google-containers/pause-amd64:3.0 required: false - name: ENV_VALUE description: Value to use for environment variables generate: expression from: \"[A-Za-z0-9]{255}\" required: false labels: template: deployment-config-template" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/scalability_and_performance/planning-your-environment-according-to-object-maximums
Chapter 7. Upgrading the Migration Toolkit for Virtualization
Chapter 7. Upgrading the Migration Toolkit for Virtualization You can upgrade the MTV Operator by using the Red Hat OpenShift web console to install the new version. Procedure In the Red Hat OpenShift web console, click Operators Installed Operators Migration Toolkit for Virtualization Operator Subscription . Change the update channel to the correct release. See Changing update channel in the Red Hat OpenShift documentation. Confirm that Upgrade status changes from Up to date to Upgrade available . If it does not, restart the CatalogSource pod: Note the catalog source, for example, redhat-operators . From the command line, retrieve the catalog source pod: USD oc get pod -n openshift-marketplace | grep <catalog_source> Delete the pod: USD oc delete pod -n openshift-marketplace <catalog_source_pod> Upgrade status changes from Up to date to Upgrade available . If you set Update approval on the Subscriptions tab to Automatic , the upgrade starts automatically. If you set Update approval on the Subscriptions tab to Manual , approve the upgrade. See Manually approving a pending upgrade in the Red Hat OpenShift documentation. If you are upgrading from MTV 2.2 and have defined VMware source providers, edit the VMware provider by adding a VDDK init image. Otherwise, the update will change the state of any VMware providers to Critical . For more information, see Adding a VMware vSphere source provider . If you mapped to NFS on the Red Hat OpenShift destination provider in MTV 2.2, edit the AccessModes and VolumeMode parameters in the NFS storage profile. Otherwise, the upgrade will invalidate the NFS mapping. For more information, see Customizing the storage profile .
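If you use Manual update approval, the pending install plan can also be approved from the command line instead of the web console. The following is a sketch, not part of the original procedure; the openshift-mtv namespace and the install plan name are assumptions that you replace with your own values.
# List install plans in the MTV namespace and note the one awaiting approval
oc get installplan -n openshift-mtv
# Approve the pending install plan
oc patch installplan <install_plan_name> -n openshift-mtv \
  --type merge -p '{"spec":{"approved":true}}'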
[ "oc get pod -n openshift-marketplace | grep <catalog_source>", "oc delete pod -n openshift-marketplace <catalog_source_pod>" ]
https://docs.redhat.com/en/documentation/migration_toolkit_for_virtualization/2.6/html/installing_and_using_the_migration_toolkit_for_virtualization/upgrading-mtv-ui_mtv
A.15. taskset
A.15. taskset The taskset tool is provided by the util-linux package. It allows administrators to retrieve and set the processor affinity of a running process, or launch a process with a specified processor affinity. Important taskset does not guarantee local memory allocation. If you require the additional performance benefits of local memory allocation, Red Hat recommends using numactl instead of taskset. To set the CPU affinity of a running process, run the following command: Replace processors with a comma-delimited list of processors or ranges of processors (for example, 1,3,5-7 ). Replace pid with the process identifier of the process that you want to reconfigure. To launch a process with a specified affinity, run the following command: Replace processors with a comma-delimited list of processors or ranges of processors. Replace application with the command, options, and arguments of the application you want to run. For more information about taskset , see the man page:
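As a concrete illustration of the command forms above, the PID, CPU list, and application name below are placeholders chosen for the example:
# Pin the running process with PID 7013 to CPUs 0, 2, and 4-6
taskset -pc 0,2,4-6 7013
# Show the current affinity mask of the same process
taskset -p 7013
# Launch an application bound to CPUs 1 and 3
taskset -c 1,3 myapp --some-option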
[ "taskset -pc processors pid", "taskset -c processors -- application", "man taskset" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/performance_tuning_guide/sect-Red_Hat_Enterprise_Linux-Performance_Tuning_Guide-Tool_Reference-taskset
Chapter 14. 4.11 Release Notes
Chapter 14. 4.11 Release Notes Important Disk space requirements have changed. More disk space is now required for the partition or volume where the PostgreSQL database is stored. See Technical configuration required for installing RHUI . 14.1. New Features The following major enhancements have been introduced in Red Hat Update Infrastructure 4.11. New Pulp version This update introduces a newer version of Pulp, 3.49. Along with this update, a number of CVEs were addressed in underlying libraries. For more information, check the errata problem description. Persistent Custom Configuration for rhui-tools.conf Previously, any modifications made to the rhui-tools.conf file were lost after upgrading RHUI. With this update, you can now preserve custom changes by creating the /root/.rhui/rhui-tools-custom.conf file. Any configurations specified in this file will override the default settings after an upgrade. 14.2. Bug Fixes The following bug has been fixed in Red Hat Update Infrastructure 4.11. Deprecation warnings addressed in installer playbook To avoid deprecation warnings, the Red Hat Update Infrastructure installer playbook has been updated to use dictionaries instead of lists of dictionaries. As a result, users will no longer see deprecation warnings when running or rerunning rhui-installer .
null
https://docs.redhat.com/en/documentation/red_hat_update_infrastructure/4/html/release_notes/assembly_4-11-release-notes_release-notes
Chapter 9. Removing the kubeadmin user
Chapter 9. Removing the kubeadmin user 9.1. The kubeadmin user OpenShift Container Platform creates a cluster administrator, kubeadmin , after the installation process completes. This user has the cluster-admin role automatically applied and is treated as the root user for the cluster. The password is dynamically generated and unique to your OpenShift Container Platform environment. After installation completes the password is provided in the installation program's output. For example: INFO Install complete! INFO Run 'export KUBECONFIG=<your working directory>/auth/kubeconfig' to manage the cluster with 'oc', the OpenShift CLI. INFO The cluster is ready when 'oc login -u kubeadmin -p <provided>' succeeds (wait a few minutes). INFO Access the OpenShift web-console here: https://console-openshift-console.apps.demo1.openshift4-beta-abcorp.com INFO Login to the console with user: kubeadmin, password: <provided> 9.2. Removing the kubeadmin user After you define an identity provider and create a new cluster-admin user, you can remove the kubeadmin to improve cluster security. Warning If you follow this procedure before another user is a cluster-admin , then OpenShift Container Platform must be reinstalled. It is not possible to undo this command. Prerequisites You must have configured at least one identity provider. You must have added the cluster-admin role to a user. You must be logged in as an administrator. Procedure Remove the kubeadmin secrets: USD oc delete secrets kubeadmin -n kube-system
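Before removing the kubeadmin secret, confirm that another user really does hold the cluster-admin role. This is a minimal sketch; the username is a placeholder, and your identity provider must already authenticate that user.
# Grant cluster-admin to a user from your identity provider
oc adm policy add-cluster-role-to-user cluster-admin <username>
# Log in as that user and confirm an administrative action succeeds before deleting kubeadmin
oc login -u <username>
oc get nodes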
[ "INFO Install complete! INFO Run 'export KUBECONFIG=<your working directory>/auth/kubeconfig' to manage the cluster with 'oc', the OpenShift CLI. INFO The cluster is ready when 'oc login -u kubeadmin -p <provided>' succeeds (wait a few minutes). INFO Access the OpenShift web-console here: https://console-openshift-console.apps.demo1.openshift4-beta-abcorp.com INFO Login to the console with user: kubeadmin, password: <provided>", "oc delete secrets kubeadmin -n kube-system" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/authentication_and_authorization/removing-kubeadmin
Chapter 4. cn=monitor
Chapter 4. cn=monitor Information used to monitor the server is stored under cn=monitor . This entry and its children are read-only; clients cannot directly modify them. The server updates this information automatically. This section describes the cn=monitor attributes. The only attribute that can be changed by a user to set access control is the aci attribute. If the nsslapd-counters attribute in cn=config is set to on (the default setting), then all of the counters kept by Directory Server instance increment using 64-bit integers, even on 32-bit machines or with a 32-bit version of Directory Server. For the cn=monitor entry, the 64-bit integers are used with the opsinitiated , opscompleted , entriessent , and bytessent counters. Note The nsslapd-counters attribute enables 64-bit support for these specific database and server counters. The counters which use 64-bit integers are not configurable; the 64-bit integers are either enabled for all the allowed counters or disabled for all allowed counters. 4.1. backendMonitorDN This attribute shows the DN for each Directory Server database backend. For further information on monitoring the database, see the following sections: Section 6.4.11, "Database attributes under cn= attribute_name ,cn=encrypted attributes,cn= database_name ,cn=ldbm database,cn=plugins,cn=config" Section 6.4.6, "Database attributes under cn=database,cn=monitor,cn=ldbm database,cn=plugins,cn=config" Section 6.5.4, "Database link attributes under cn=monitoring,cn= database_link_name ,cn=chaining database,cn=plugins,cn=config" 4.2. bytesSent This attribute shows the number of bytes sent by Directory Server. 4.3. connection This attribute lists open connections and associated status and performance related information and values. These are given in the following format: connection: pass:quotes[ A:YYYYMMDDhhmmssZ:B:C:D:E:F:G:H:I:IP_address ] For example: connection: pass:quotes[ 69:20200604081953Z:6086:6086:-:cn=proxy,ou=special_users,dc=example,dc=test:0:11:27:7448846:ip=192.0.2.1 ] A is the connection number, which is the number of the slot in the connection table associated with this connection. This is the number logged as slot= A in the access log message when this connection was opened, and usually corresponds to the file descriptor associated with the connection. The attribute dTableSize shows the total size of the connection table. YYYYMMDDhhmmssZ is the date and time, in GeneralizedTime form, at which the connection was opened. This value gives the time in relation to Greenwich Mean Time. B is the number of operations received on this connection. C is the number of completed operations. D is r if the server is in the process of reading BER from the network, empty otherwise. This value is usually empty (as in the example). E this is the bind DN. This may be empty or have value of NULLDN for anonymous connections. F is the connection maximum threads state: 1 is in max threads, 0 is not. G is the number of times this thread has hit the maximum threads value. H is the number of operations attempted that were blocked by the maximum number of threads. I is the connection ID as reported in the logs as conn= connection_ID . IP_address is the IP address of the LDAP client. Note B and C for the initiated and completed operations should ideally be equal. 4.4. currentConnections This attribute shows the number of currently open and active Directory Server connections 4.5. 
currentTime This attribute shows the current time, given in Greenwich Mean Time (indicated by generalizedTime syntax Z notation; for example, 20220202131102Z ). 4.6. dTableSize The dTableSize attribute shows the size of the Directory Server connection table. Each connection is associated with a slot in this table and usually corresponds to the file descriptor used by this connection. For more information, see nsslapd-maxdescriptors and nsslapd-reservedescriptors . 4.7. entriesSent This attribute shows the number of entries sent by Directory Server. 4.8. nbackEnds This attribute shows the number of Directory Server database back ends. 4.9. opsInitiated This attribute shows the number of Directory Server operations initiated. 4.10. readWaiters This attribute shows the number of connections where some requests are pending and not currently being serviced by a thread in Directory Server. 4.11. startTime This attribute shows Directory Server start time given in Greenwich Mean Time, indicated by generalizedTime syntax Z notation. For example, 20220202131102Z . 4.12. threads This attribute shows the number of threads used by Directory Server. This should correspond to nsslapd-threadnumber in cn=config . 4.13. totalConnections This attribute shows the total number of Directory Server connections. This number includes connections that have been opened and closed since the server was last started in addition to the currentConnections . 4.14. version This attribute shows Directory Server vendor, version, and build number. For example, 389-Directory/2.0.14 B2022.082.0000 .
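A common way to read these attributes is an LDAP search against the cn=monitor entry. The following sketch assumes a server reachable at server.example.com and a bind as cn=Directory Manager; adjust the host, port, and bind DN for your deployment.
ldapsearch -H ldap://server.example.com:389 \
  -D "cn=Directory Manager" -W \
  -b "cn=monitor" -s base \
  currentconnections totalconnections opsinitiated opscompleted version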
[ "connection: pass:quotes[ A:YYYYMMDDhhmmssZ:B:C:D:E:F:G:H:I:IP_address ]", "connection: pass:quotes[ 69:20200604081953Z:6086:6086:-:cn=proxy,ou=special_users,dc=example,dc=test:0:11:27:7448846:ip=192.0.2.1 ]" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/configuration_and_schema_reference/assembly_cn-monitor_config-schema-reference-title
Chapter 20. Using SSSD component from IdM to cache the autofs maps
Chapter 20. Using SSSD component from IdM to cache the autofs maps The System Security Services Daemon (SSSD) is a system service to access remote service directories and authentication mechanisms. Data caching is useful in the case of a slow network connection. To configure the SSSD service to cache the autofs map, follow the procedures below in this section. 20.1. Configuring autofs manually to use IdM server as an LDAP server Configure autofs to use IdM server as an LDAP server. Procedure Edit the /etc/autofs.conf file to specify the schema attributes that autofs searches for: Note Users can write the attributes in both lowercase and uppercase in the /etc/autofs.conf file. Optional: Specify the LDAP configuration. There are two ways to do this. The simplest is to let the automount service discover the LDAP server and locations on its own: This option requires DNS to contain SRV records for the discoverable servers. Alternatively, explicitly set which LDAP server to use and the base DN for LDAP searches: Edit the /etc/autofs_ldap_auth.conf file so that autofs allows client authentication with the IdM LDAP server. Change authrequired to yes. Set the principal to the Kerberos host principal for the IdM LDAP server, host/FQDN@REALM . The principal name is used to connect to the IdM directory as part of GSS client authentication. For more information about host principal, see Using canonicalized DNS host names in IdM . If necessary, run klist -k to get the exact host principal information. 20.2. Configuring SSSD to cache autofs maps The SSSD service can be used to cache autofs maps stored on an IdM server without having to configure autofs to use the IdM server at all. Prerequisites The sssd package is installed. Procedure Open the SSSD configuration file: Add the autofs service to the list of services handled by SSSD. Create a new [autofs] section. You can leave this blank, because the default settings for an autofs service work with most infrastructures. For more information, see the sssd.conf man page on your system. Optional: Set a search base for the autofs entries. By default, this is the LDAP search base, but a subtree can be specified in the ldap_autofs_search_base parameter. Restart the SSSD service: Check the /etc/nsswitch.conf file, so that SSSD is listed as a source for automount configuration: Restart the autofs service: Test the configuration by listing a user's /home directory, assuming there is a master map entry for /home : If this does not mount the remote file system, check the /var/log/messages file for errors. If necessary, increase the debug level in the /etc/sysconfig/autofs file by setting the logging parameter to debug .
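If mounts fail, the debug step mentioned at the end of the procedure can be scripted roughly as follows. This is a sketch; it assumes the LOGGING variable used by the autofs sysconfig file, and you can edit the file manually instead.
# Raise autofs verbosity, restart the service, and watch the logs
sed -i 's/^#\?LOGGING=.*/LOGGING="debug"/' /etc/sysconfig/autofs
systemctl restart autofs.service
journalctl -u autofs.service -f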
[ "# Other common LDAP naming # map_object_class = \"automountMap\" entry_object_class = \"automount\" map_attribute = \"automountMapName\" entry_attribute = \"automountKey\" value_attribute = \"automountInformation\"", "ldap_uri = \"ldap:///dc=example,dc=com\"", "ldap_uri = \"ldap://ipa.example.com\" search_base = \"cn= location ,cn=automount,dc=example,dc=com\"", "<autofs_ldap_sasl_conf usetls=\"no\" tlsrequired=\"no\" authrequired=\"yes\" authtype=\"GSSAPI\" clientprinc=\"host/[email protected]\" />", "vim /etc/sssd/sssd.conf", "[sssd] domains = ldap services = nss,pam, autofs", "[nss] [pam] [sudo] [autofs] [ssh] [pac]", "[domain/EXAMPLE] ldap_search_base = \"dc=example,dc=com\" ldap_autofs_search_base = \"ou=automount,dc=example,dc=com\"", "systemctl restart sssd.service", "automount: sss files", "systemctl restart autofs.service", "ls /home/ userName" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_file_systems/using-sssd-component-from-idm-to-cache-the-autofs-map_managing-file-systems
2.3. Migration
2.3. Migration Migration describes the process of moving a guest virtual machine from one host to another. This is possible because the virtual machines are running in a virtualized environment instead of directly on the hardware. There are two ways to migrate a virtual machine: live and offline. Migration Types Offline migration An offline migration suspends the guest virtual machine, and then moves an image of the virtual machine's memory to the destination host. The virtual machine is then resumed on the destination host and the memory used by the virtual machine on the source host is freed. Live migration Live migration is the process of migrating an active virtual machine from one physical host to another. Note that this is not possible between all Red Hat Enterprise Linux releases. Consult the Virtualization Deployment and Administration Guide for details. 2.3.1. Benefits of Migrating Virtual Machines Migration is useful for: Load balancing When a host machine is overloaded, one or more of its virtual machines could be migrated to other hosts using live migration. Similarly, machines that are not running and tend to overload can be migrated using offline migration. Upgrading or making changes to the host When the need arises to upgrade, add, or remove hardware devices on a host, virtual machines can be safely relocated to other hosts. This means that guests do not experience any downtime due to changes that are made to hosts. Energy saving Virtual machines can be redistributed to other hosts and the unloaded host systems can be powered off to save energy and cut costs in low usage periods. Geographic migration Virtual machines can be moved to other physical locations for lower latency or for other reasons. When the migration process moves a virtual machine's memory, the disk volume associated with the virtual machine is also migrated. This process is performed using live block migration. Note For more information on migration, see the Red Hat Enterprise Linux 7 Virtualization Deployment and Administration Guide . 2.3.2. Virtualized to Virtualized Migration (V2V) As a special type of migration, Red Hat Enterprise Linux 7 provides tools for converting virtual machines from other types of hypervisors to KVM. The virt-v2v tool converts and imports virtual machines from Xen, other versions of KVM, and VMware ESX. Note For more information on V2V, see the V2V Knowledgebase articles . In addition, Red Hat Enterprise Linux 7.3 and later support physical-to-virtual (P2V) conversion using the virt-p2v tool. For details, see the P2V Knowledgebase article .
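As a rough illustration of what a live migration looks like with libvirt on the command line (not part of the original text), the guest name and destination URI below are placeholders, and both hosts need access to the guest's storage or a block migration option:
# Live-migrate a running guest to another KVM host over SSH
virsh migrate --live --verbose <guest_name> qemu+ssh://destination.example.com/system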
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_getting_started_guide/sec-migration
5.49. dhcp
5.49. dhcp 5.49.1. RHSA-2012:1141 - Moderate: dhcp security update Updated dhcp packages that fix three security issues are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having moderate security impact. Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. The Dynamic Host Configuration Protocol (DHCP) is a protocol that allows individual devices on an IP network to get their own network configuration information, including an IP address, a subnet mask, and a broadcast address. Security Fixes CVE-2012-3571 A denial of service flaw was found in the way the dhcpd daemon handled zero-length client identifiers. A remote attacker could use this flaw to send a specially-crafted request to dhcpd, possibly causing it to enter an infinite loop and consume an excessive amount of CPU time. CVE-2012-3954 Two memory leak flaws were found in the dhcpd daemon. A remote attacker could use these flaws to cause dhcpd to exhaust all available memory by sending a large number of DHCP requests. Upstream acknowledges Markus Hietava of the Codenomicon CROSS project as the original reporter of CVE-2012-3571, and Glen Eustace of Massey University, New Zealand, as the original reporter of CVE-2012-3954. Users of DHCP should upgrade to these updated packages, which contain backported patches to correct these issues. After installing this update, all DHCP servers will be restarted automatically. 5.49.2. RHBA-2012:0793 - dhcp bug fix and enhancement update Updated dhcp packages that fix several bugs and add two enhancements are now available for Red Hat Enterprise Linux 6. The dhcp package provides software to support the Dynamic Host Configuration Protocol (DHCP) and DHCPv6 protocol. The Dynamic Host Configuration Protocol (DHCP) is a protocol that allows individual devices on an IP network to obtain their own network configuration information, including an IP address, a subnet mask, and a broadcast address. Bug Fixes BZ# 656339 Previously, when dhclient was unsuccessful in obtaining or renewing an address, it restored the resolv.conf file from backup even when there were other dhclient processes running. Consequently, network traffic could be unnecessarily interrupted. The bug in dhclient-script has been fixed and dhclient now restores resolv.conf from backup only if there are no other dhclient processes running. BZ# 747017 A bug caused an infinite loop in a dhcpd process when dhcpd tried to parse the slp-service-scope option in dhcpd.conf. As a consequence, dhcpd entered an infinite loop on startup consuming 100% of the CPU cycles. This update improves the code and the problem no longer occurs. BZ# 752116 Previously, the DHCPv4 client did not check whether the address received in a DHCPACK message was already in use. As a consequence, it was possible that after a reboot two clients could have the same, conflicting, IP address. With this update, the bug has been fixed and DHCPv4 client now performs duplicate address detection (DAD) and sends a DHCPDECLINE message if the address received in DHCPACK is already in use, as per RFC 2131. BZ# 756759 When dhclient is invoked with the "-1" command-line option, it should try to get a lease once and on failure exit with status code 2. 
Previously, when dhclient was invoked with the "-1" command-line option, and then issued a DHCPDECLINE message, it continued in trying to obtain a lease. With this update, the dhclient code has been fixed. As a result, dhclient stops trying to obtain a lease and exits after sending DHCPDECLINE when started with the "-1" option. BZ# 789719 Previously, dhclient kept sending DHCPDISCOVER messages in an infinite loop when started with the "-timeout" option having a value of 3 or less (seconds). With this update, the problem has been fixed and the "-timeout" option works as expected with all values. Enhancements BZ# 790686 The DHCP server daemon now uses portreserve for reserving ports 647 and 847 to prevent other programs from occupying them. BZ# 798735 All DHCPv6 options defined in RFC5970, except for the Boot File Parameters Option, were implemented. This allows the DHCPv6 server to pass boot file URLs back to IPv6-based netbooting clients (UEFI) based on the system's architecture. Users are advised to upgrade to these updated dhcp packages, which fix these bugs and add these enhancements.
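For reference, the dhclient options discussed in these bug fixes are used as shown below; eth0 is a placeholder interface name and the timeout value is arbitrary.
# Try to obtain a lease once and exit with status 2 on failure
dhclient -1 eth0
# Limit the time spent trying to contact a DHCP server to 10 seconds
dhclient -1 -timeout 10 eth0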
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/dhcp
6.2. Overcommitting Virtualized CPUs
6.2. Overcommitting Virtualized CPUs The KVM hypervisor supports overcommitting virtualized CPUs. Virtualized CPUs can be overcommitted as far as load limits of guest virtual machines allow. Use caution when overcommitting vCPUs, as loads near 100% may cause dropped requests or unusable response times. Virtualized CPUs (vCPUs) are overcommitted best when a single host physical machine has multiple guest virtual machines that do not share the same vCPU. KVM should safely support guest virtual machines with loads under 100% at a ratio of five vCPUs (on 5 virtual machines) to one physical CPU on one single host physical machine. KVM will switch between all of the machines, making sure that the load is balanced. Do not overcommit guest virtual machines on more than the physical number of processing cores. For example, a guest virtual machine with four vCPUs should not be run on a host physical machine with a dual core processor, but on a quad core host. In addition, it is not recommended to have more than 10 total allocated vCPUs per physical processor core. Important Do not overcommit CPUs in a production environment without extensive testing. Applications which use 100% of processing resources may become unstable in overcommitted environments. Test before deploying. For more information on how to get the best performance out of your virtual machine, refer to the Red Hat Enterprise Linux 6 Virtualization Tuning and Optimization Guide .
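To sanity-check the ratio described above on a given host, compare the physical CPU topology with the vCPUs allocated to running guests. The loop below is a sketch; guest names vary per host.
# Physical CPU topology of the host
virsh nodeinfo
# Maximum vCPUs currently allocated to each running guest
for dom in $(virsh list --name); do
    echo -n "$dom: "
    virsh vcpucount "$dom" --live --maximum
done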
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/form-virtualization-overcommitting_with_kvm-overcommitting_virtualized_cpus
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.6/html/server_installation_and_configuration_guide/making-open-source-more-inclusive
7.52. evolution-exchange
7.52. evolution-exchange 7.52.1. RHBA-2015:1265 - evolution-exchange bug fix update Updated evolution-exchange packages that fix one bug are now available for Red Hat Enterprise Linux 6. The evolution-exchange packages enable added functionality to Evolution when used with a Microsoft Exchange Server 2003. The packages also contain Exchange Web Services (EWS) connector, which can connect to Microsoft Exchange 2007 and later servers. Bug Fix BZ# 1160279 When the Exchange Web Services (EWS) connector was used, the UI part of the connector failed to load due to a missing external symbol. Consequently, the user could neither change the settings nor configure a new mail account for the EWS part of the evolution-exchange packages. This update corrects the library link options during build time to have the missing symbol available. Now, the UI part of the EWS connector loads properly, and the mail account can be added and configured. Users of evolution-exchange are advised to upgrade to these updated packages, which fix this bug.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-evolution-exchange
Chapter 3. Customizing the dashboard
Chapter 3. Customizing the dashboard The OpenShift AI dashboard provides features that are designed to work for most scenarios. These features are configured in the OdhDashboardConfig custom resource (CR) file. To see a description of the options in the OpenShift AI dashboard configuration file, see Dashboard configuration options . As an administrator, you can customize the interface of the dashboard, for example to show or hide some of the dashboard navigation menu options. To change the default settings of the dashboard, edit the OdhDashboardConfig custom resource (CR) file as described in Editing the dashboard configuration file . 3.1. Editing the dashboard configuration file As an administrator, you can customize the interface of the dashboard by editing the dashboard configuration file. Prerequisites You have cluster administrator privileges for your OpenShift cluster. Procedure Log in to the OpenShift console as a cluster administrator. In the Administrator perspective, click Home API Explorer . In the search bar, enter OdhDashboardConfig to filter by kind. Click the OdhDashboardConfig custom resource (CR) to open the resource details page. Select the redhat-ods-applications project from the Project list. Click the Instances tab. Click the odh-dashboard-config instance to open the details page. Click the YAML tab. Edit the values of the options that you want to change. Click Save to apply your changes and then click Reload to synchronize your changes to the cluster. Verification Log in to OpenShift AI and verify that your dashboard configurations apply. 3.2. Dashboard configuration options The OpenShift AI dashboard includes a set of core features enabled by default that are designed to work for most scenarios. Administrators can configure the OpenShift AI dashboard from the OdhDashboardConfig custom resource (CR) in OpenShift. Table 3.1. Dashboard feature configuration options Feature Default Description dashboardConfig: disableAcceleratorProfiles false Shows the Settings Accelerator profiles option in the dashboard navigation menu. To hide this menu option, set the value to true . dashboardConfig: disableBYONImageStream false Shows the Settings Notebook images option in the dashboard navigation menu. To hide this menu option, set the value to true . dashboardConfig: disableClusterManager false Shows the Settings Cluster settings option in the dashboard navigation menu. To hide this menu option, set the value to true . dashboardConfig: disableCustomServingRuntimes false Shows the Settings Serving runtimes option in the dashboard navigation menu. To hide this menu option, set the value to true . dashboardConfig: disableDistributedWorkloads false Shows the Distributed Workload Metrics option in the dashboard navigation menu. To hide this menu option, set the value to true . dashboardConfig: disableHome false Shows the Home option in the dashboard navigation menu. To hide this menu option, set the value to true . dashboardConfig: disableInfo false On the Applications Explore page, when a user clicks on an application tile, an information panel opens with more details about the application. To disable the information panel for all applications on the Applications Explore page , set the value to true . dashboardConfig: disableISVBadges false Shows the label on a tile that indicates whether the application is "Red Hat managed", "Partner managed", or "Self-managed". To hide these labels, set the value to true . 
dashboardConfig: disableKServe false Enables the ability to select KServe as a model-serving platform. To disable this ability, set the value to true . dashboardConfig: disableKServeAuth false Enables the ability to use authentication with KServe. To disable this ability, set the value to true . dashboardConfig: disableKServeMetrics false Enables the ability to view KServe metrics. To disable this ability, set the value to true . dashboardConfig: disableModelMesh false Enables the ability to select ModelMesh as a model-serving platform. To disable this ability, set the value to true . dashboardConfig: disableModelRegistry false Shows the Model Registry option and the Settings Model registry settings option in the dashboard navigation menu. To hide these menu options, set the value to true . dashboardConfig: disableModelRegistrySecureDB false Shows the Add CA certificate to secure database connection section in the Create model registry dialog and the Edit model registry dialog. To hide this section, set the value to true . dashboardConfig: disableModelServing false Shows the Model Serving option in the dashboard navigation menu and in the list of components for the data science projects. To hide Model Serving from the dashboard navigation menu and from the list of components for data science projects, set the value to true . dashboardConfig: disableNIMModelServing false Enables the ability to select NVIDIA NIM as a model-serving platform. To disable this ability, set the value to true . dashboardConfig: disablePerformanceMetrics false Shows the Endpoint Performance tab on the Model Serving page. To hide this tab, set the value to true . dashboardConfig: disablePipelines false Shows the Data Science Pipelines option in the dashboard navigation menu. To hide this menu option, set the value to true . dashboardConfig: disableProjects false Shows the Data Science Projects option in the dashboard navigation menu. To hide this menu option, set the value to true . dashboardConfig: disableProjectSharing false Allows users to share access to their data science projects with other users. To prevent users from sharing data science projects, set the value to true . dashboardConfig: disableServingRuntimeParams false Shows the Configuration parameters section in the Deploy model dialog and the Edit model dialog when using the single-model serving platform. To hide this section, set the value to true . dashboardConfig: disableStorageClasses false Shows the Settings Storage classes option in the dashboard navigation menu. To hide this menu option, set the value to true . dashboardConfig: disableSupport false Shows the Support menu option when a user clicks the Help icon in the dashboard toolbar. To hide this menu option, set the value to true . dashboardConfig: disableTracking false Allows Red Hat to collect data about OpenShift AI usage in your cluster. To disable data collection, set the value to true . You can also set this option in the OpenShift AI dashboard interface from the Settings Cluster settings navigation menu. dashboardConfig: disableTrustyBiasMetrics false Shows the Model Bias tab on the Model Serving page. To hide this tab, set the value to true . dashboardConfig: disableUserManagement false Shows the Settings User management option in the dashboard navigation menu. To hide this menu option, set the value to true . dashboardConfig: enablement true Enables OpenShift AI administrators to add applications to the OpenShift AI dashboard Applications Enabled page. 
To disable this ability, set the value to false . notebookController: enabled true Controls the Notebook Controller options, such as whether it is enabled in the dashboard and which parts are visible. notebookSizes Allows you to customize names and resources for notebooks. The Kubernetes-style sizes are shown in the drop-down menu that appears when launching a workbench with the Notebook Controller. Note: These sizes must follow conventions. For example, requests must be smaller than limits. modelServerSizes Allows you to customize names and resources for model servers. groupsConfig Read-only. To configure access to the OpenShift AI dashboard, use the spec.adminGroups and spec.allowedGroups options in the OpenShift Auth resource in the services.platform.opendatahub.io API group. templateOrder Specifies the order of custom Serving Runtime templates. When the user creates a new template, it is added to this list.
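The same options can be changed from the command line instead of the web console YAML editor. This is a minimal sketch: it assumes the odh-dashboard-config instance and redhat-ods-applications namespace named in the procedure above, uses disableSupport purely as an example option, and may need the full odhdashboardconfigs resource name if the short form does not resolve on your cluster.
oc patch odhdashboardconfig odh-dashboard-config \
  -n redhat-ods-applications --type merge \
  -p '{"spec":{"dashboardConfig":{"disableSupport":true}}}'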
null
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1//html/managing_openshift_ai/customizing-the-dashboard
Chapter 2. Configure User Access to manage notifications
Chapter 2. Configure User Access to manage notifications To configure notifications settings, you must be a member of a group with the Notifications administrator role. This group must be configured in User Access by an Organization Administrator. In the Red Hat Hybrid Cloud Console > Settings > Identity & Access Management > User Access > Groups , an Organization Administrator performs the following high-level steps: Create a User Access group for Notifications administrators. Add the Notifications administrator role to the group. Add members (users with account access) to the group. Organization Administrator The Organization Administrator configures the User Access group for Notifications administrators, then adds the Notifications administrator role and users to the group. Notifications administrator Notifications administrators configure how services interact with notifications. Notifications administrators configure behavior groups to define how services notify users about events. Administrators can configure additional integrations as they become available, as well as edit, disable, and remove existing integrations. Notifications viewer The Notifications viewer role is automatically granted to everyone on the account and limits how a user can interact with notifications service views and configurations. A viewer can view notification configurations, but cannot modify or remove them. A viewer cannot configure, modify, or remove integrations. Additional resources To learn more about User Access on the Red Hat Hybrid Cloud Console, see the User Access Configuration Guide for Role-based Access Control (RBAC) . 2.1. Creating and configuring a notifications group in the Hybrid Cloud Console An Organization Administrator of a Hybrid Cloud Console account creates a group with the Notifications administrator role and adds members to the group. Prerequisites You are logged in to the Red Hat Hybrid Cloud Console as an Organization Administrator. Procedure Click Settings . Under Identity & Access Management , click User Access . In the left navigation panel, expand User Access if necessary and then click Groups . Click Create group . Enter a group name, for example, Notifications administrators , and a description, and then click . Select the role to add to this group, in this case Notifications administrator , and then click . Add members to the group: Search for individual users or filter by username, email, or status. Check the box to each intended member's name, and then click . On the Review details screen, click Submit to finish creating the group. 2.2. Editing or removing a User Access group You can make changes to an existing User Access group in the Red Hat Hybrid Cloud Console and you can delete groups that are no longer needed. Prerequisites You are logged in to the Red Hat Hybrid Cloud Console and meet one of the following criteria: You are a user with Organization Administrator permissions. You are a member of a group that has the User Access administrator role assigned to it. Procedure Navigate to Red Hat Hybrid Cloud Console > Settings > Identity & Access Management > User Access > Groups . Click the options icon (...) on the far right of the group name row, and then click Edit or Delete . Make and save changes or delete the group.
null
https://docs.redhat.com/en/documentation/red_hat_hybrid_cloud_console/1-latest/html/configuring_notifications_on_the_red_hat_hybrid_cloud_console/assembly-config-user-access_notifications
Chapter 3. Deploy using local storage devices
Chapter 3. Deploy using local storage devices Deploying OpenShift Data Foundation on OpenShift Container Platform using local storage devices provides you with the option to create internal cluster resources. This will result in the internal provisioning of the base services, which helps to make additional storage classes available to applications. Use this section to deploy OpenShift Data Foundation on VMware where OpenShift Container Platform is already installed. Also, ensure that you have addressed the requirements in Preparing to deploy OpenShift Data Foundation chapter before proceeding with the steps. Installing Local Storage Operator Install the Red Hat OpenShift Data Foundation Operator . Create the OpenShift Data Foundation Cluster . 3.1. Installing Local Storage Operator Install the Local Storage Operator from the Operator Hub before creating Red Hat OpenShift Data Foundation clusters on local storage devices. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Type local storage in the Filter by keyword box to find the Local Storage Operator from the list of operators and click on it. Set the following options on the Install Operator page: Update channel as either 4.9 or stable . Installation mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-local-storage . Update approval as Automatic . Click Install . Verification steps Verify that the Local Storage Operator shows a green tick indicating successful installation. 3.2. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and Operator installation permissions. You must have at least three worker nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command in the command line interface to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see How to use dedicated worker nodes for Red Hat OpenShift Data Foundation chapter in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.9 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. 
As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Note We recommend using all default settings. Changing it may result in unexpected behavior. Alter only if you are aware of its result. Verification steps Verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console, navigate to Operators and verify if OpenShift Data Foundation is available. Important In case the console plugin option was not automatically enabled after you installed the OpenShift Data Foundation Operator, you need to enable it. For more information on how to enable the console plugin, see Enabling the Red Hat OpenShift Data Foundation console plugin . 3.3. Creating Multus networks [Technology Preview] OpenShift Container Platform uses the Multus CNI plug-in to allow chaining of CNI plug-ins. During cluster installation, you can configure your default pod network. The default network handles all ordinary network traffic for the cluster. You can define an additional network based on the available CNI plug-ins and attach one or more of these networks to your pods. To attach additional network interfaces to a pod, you must create configurations that define how the interfaces are attached. You specify each interface by using a NetworkAttachmentDefinition custom resource (CR). A CNI configuration inside each of the NetworkAttachmentDefinition defines how that interface is created. OpenShift Data Foundation uses the CNI plug-in called macvlan. Creating a macvlan-based additional network allows pods on a host to communicate with other hosts and pods on those hosts by using a physical network interface. Each pod that is attached to a macvlan-based additional network is provided a unique MAC address. Important Multus support is a Technology Preview feature that is only supported and has been tested on bare metal and VMWare deployments. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information, see Technology Preview Features Support Scope . 3.3.1. Creating network attachment definitions To utilize Multus, an already working cluster with the correct networking configuration is required, see Recommended network configuration and requirements for a Multus configuration . The newly created NetworkAttachmentDefinition (NAD) can be selected during the Storage Cluster installation. This is the reason they must be created before the Storage Cluster. As detailed in the Planning Guide, the Multus networks you create depend on the number of available network interfaces you have for OpenShift Data Foundation traffic. It is possible to separate all of the storage traffic onto one of two interfaces (one interface used for default OpenShift SDN) or to further segregate storage traffic into client storage traffic (public) and storage replication traffic (private or cluster). 
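The example NetworkAttachmentDefinition resources referenced in the following paragraphs are not reproduced in this excerpt. As a rough sketch of the shape such a macvlan-based NAD takes, the interface name ens2, the Whereabouts IPAM type, and the address range below are assumptions to adapt to your own network:
cat <<'EOF' | oc apply -f -
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: ocs-public-cluster
  namespace: openshift-storage
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "macvlan",
    "master": "ens2",
    "mode": "bridge",
    "ipam": {
      "type": "whereabouts",
      "range": "192.168.20.0/24"
    }
  }'
EOF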
The following is an example NetworkAttachmentDefinition for all storage traffic, public and cluster, on the same interface. It requires one additional interface on all schedulable nodes (OpenShift default SDN on separate network interface). Note All network interface names must be the same on all the nodes attached to the Multus network (that is, ens2 for ocs-public-cluster ). The following is an example NetworkAttachmentDefinition for storage traffic on separate Multus networks, public, for client storage traffic, and cluster, for replication traffic. It requires two additional interfaces on OpenShift nodes hosting OSD pods and one additional interface on all other schedulable nodes (OpenShift default SDN on separate network interface). Example NetworkAttachmentDefinition : Note All network interface names must be the same on all the nodes attached to the Multus networks (that is, ens2 for ocs-public , and ens3 for ocs-cluster ). 3.4. Creating OpenShift Data Foundation cluster on VMware vSphere VMware vSphere supports the following three types of local storage: Virtual machine disk (VMDK) Raw device mapping (RDM) VMDirectPath I/O Prerequisites Ensure that all the requirements in the Requirements for installing OpenShift Data Foundation using local storage devices section are met. You must have a minimum of three worker nodes with the same storage type and size attached to each node to use local storage devices on VMware. For VMs on VMware vSphere, ensure the disk.EnableUUID option is set to TRUE . You need to have vCenter account privileges to configure the VMs. For more information, see Required vCenter account privileges . To set the disk.EnableUUID option, use the Advanced option of the VM Options in the Customize hardware tab. For more information, see Creating Red Hat Enterprise Linux CoreOS (RHCOS) machines in vSphere . If you want to use the Technology Preview feature of Multus support, before deployment you must create network attachment definitions (NADs) that will later be attached to the cluster. For more information, see Multi network plug-in (Multus) support and Creating network attachment definitions . Procedure In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click on the OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, perform the following: Select the Create a new StorageClass using the local storage devices option. Expand Advanced and select Full Deployment for the Deployment type option. Click . Note You are prompted to install the Local Storage Operator if it is not already installed. Click Install and follow the procedure as described in Installing Local Storage Operator . In the Create local volume set page, provide the following information: Enter a name for the LocalVolumeSet and the StorageClass . By default, the local volume set name appears for the storage class name. You can change the name. Choose one of the following: Disks on all nodes to use the available disks that match the selected filters on all nodes. Disks on selected nodes to use the available disks that match the selected filters only on selected nodes. Important The flexible scaling feature is enabled only when the storage cluster that you create with 3 or more nodes is spread across fewer than the minimum requirement of 3 availability zones.
For information about flexible scaling, see Add capacity using YAML section in Scaling Storage guide. If the nodes selected do not match the OpenShift Data Foundation cluster requirement of an aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster is deployed. For minimum starting node requirements, see Resource requirements section in Planning guide. From the available list of Disk Type , select SSD/NVMe . Expand the Advanced section and set the following options: Volume Mode Block is selected by default. Device Type Select one or more device type from the dropdown list. Disk Size Set a minimum size of 100GB for the device and maximum available size of the device that needs to be included. Maximum Disks Limit This indicates the maximum number of PVs that can be created on a node. If this field is left empty, then PVs are created for all the available disks on the matching nodes. Click . A pop-up to confirm the creation of LocalVolumeSet is displayed. Click Yes to continue. In the Capacity and nodes page, configure the following: Available raw capacity is populated with the capacity value based on all the attached disks associated with the storage class. This takes some time to show up. The Selected nodes list shows the nodes based on the storage class. Click . Optional: In the Security and network page, configure the following based on your requirement: To enable encryption, select Enable data encryption for block and file storage . Choose one of the following Encryption level : Cluster-wide encryption to encrypt the entire cluster (block and file). StorageClass encryption to create encrypted persistent volume (block only) using encryption enabled storage class. Select Connect to an external key management service checkbox. This is optional for cluster-wide encryption. Key Management Service Provider is set to Vault by default. Enter Vault Service Name , host Address of Vault server ('https://<hostname or ip>''), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Provide CA Certificate , Client Certificate and Client Private Key by uploading the respective PEM encoded certificate file. Click Save . Choose one of the following: Select Default (SDN) if you are using a single network. Select Custom (Multus) if you are using multiple network interfaces. Select a Public Network Interface from the dropdown. Select a Cluster Network Interface from the dropdown. Note If you are using only one additional network interface, select the single NetworkAttachementDefinition , that is, ocs-public-cluster for the Public Network Interface and leave the Cluster Network Interface blank. Click . In the Review and create page, review the configuration details. To modify any configuration settings, click Back to go back to the configuration page. Click Create StorageSystem . Verification steps To verify the final Status of the installed storage cluster: In the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-storagecluster-storagesystem Resources . Verify that Status of StorageCluster is Ready and has a green tick mark to it. 
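You can also confirm the same state from the command line. The following commands are a sketch; they assume the default openshift-storage namespace and the resource names used in this procedure:
oc get storagesystem -n openshift-storage
oc get storagecluster -n openshift-storage
The StorageCluster resource should report a Ready phase once the deployment completes.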
To verify whether flexible scaling is enabled on your storage cluster, perform the following steps (for arbiter mode, flexible scaling is disabled): In the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-storagecluster-storagesystem Resources . In the YAML tab, search for the flexibleScaling key in the spec section and the failureDomain key in the status section. If flexibleScaling is set to true and failureDomain is set to host, the flexible scaling feature is enabled. To verify that all components for OpenShift Data Foundation are successfully installed, see Verifying your OpenShift Data Foundation deployment . To verify the multi networking (Multus), see Verifying the Multus networking . Additional resources To expand the capacity of the initial cluster, see the Scaling Storage guide.
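For the flexible scaling check described above, an equivalent command-line query is shown below as a sketch; it assumes the default ocs-storagecluster resource name in the openshift-storage namespace:
oc get storagecluster ocs-storagecluster -n openshift-storage -o jsonpath='{.spec.flexibleScaling}{"\n"}{.status.failureDomain}{"\n"}'
Output of true followed by host indicates that flexible scaling is enabled.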
[ "oc annotate namespace openshift-storage openshift.io/node-selector=", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: ocs-public-cluster namespace: openshift-storage spec: config: '{ \"cniVersion\": \"0.3.1\", \"type\": \"macvlan\", \"master\": \"ens2\", \"mode\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.168.1.0/24\" } }'", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: ocs-public namespace: openshift-storage spec: config: '{ \"cniVersion\": \"0.3.1\", \"type\": \"macvlan\", \"master\": \"ens2\", \"mode\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.168.1.0/24\" } }'", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: ocs-cluster namespace: openshift-storage spec: config: '{ \"cniVersion\": \"0.3.1\", \"type\": \"macvlan\", \"master\": \"ens3\", \"mode\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.168.2.0/24\" } }'", "spec: flexibleScaling: true [...] status: failureDomain: host" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/deploying_openshift_data_foundation_on_vmware_vsphere/deploy-using-local-storage-devices-vmware
Appendix C. Revision History
Appendix C. Revision History Note that revision numbers relate to the edition of this manual, not to version numbers of Red Hat Enterprise Linux. Revision History Revision 6.7-4 Mon Apr 10 2017 Aneta Steflova Petrova Updated Smart Cards. Revision 6.7-3 Wed Mar 8 2017 Aneta Steflova Petrova Version for 6.9 GA publication. Revision 6.7-2 Wed May 4 2016 Marc Muehlfeld Preparing document for 6.8 GA publication. Revision 6.7-1 Thu Feb 18 2016 Aneta Petrova Minor updates to trust and sudo chapters, added a warning to renewing CA certificates issued by external CAs. Revision 6.7-0 Tue Jul 14 2015 Tomas Capek Version for 6.7 GA release. Revision 6.6-2 Tue Mar 31 2015 Tomas Capek Improved sections on setting a Kerberized NFS server and client. Revision 6.6-1 Fri Dec 19 2014 Tomas Capek Rebuilt to update the sort order on the splash page. Revision 6.6-0 Fri Oct 10 2014 Tomas Capek Version for 6.6 GA release. Revision 6.5-5 July 9, 2014 Ella Deon Ballard Fixed bugs. Revision 6.5-4 February 3, 2014 Ella Deon Ballard Fixed bugs. Revision 6.5-1 November 20, 2013 Ella Deon Ballard Fixed bugs. Revision 6.4-3 August 20, 2013 Ella Deon Lackey Fixed bugs, reorganized some chapters. Revision 6.4-1 March 1, 2013 Ella Deon Lackey Added trusts. Revision 6.3-1 October 18, 2012 Ella Deon Lackey Removed sudo configuration example, group sync information, CRL generation section. Revision 6.2-8 December 16, 2011 Ella Deon Lackey Updated sudoers_debug example. Fixed migration command example. Revision 6.2-7 December 6, 2011 Ella Deon Lackey Release for GA of Red Hat Enterprise Linux 6.2.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/doc-history
Chapter 27. Designing the case definition
Chapter 27. Designing the case definition You design cases using the process designer in Business Central. Case design is the basis of case management and sets the specific goals and tasks for each case. The case flow can be modified dynamically during run time by adding dynamic tasks or processes. In this procedure, you will create this same case definition to familiarize yourself with the case definition design process. The IT_Orders sample project in Business Central includes the following orderhardware business process case definition. Figure 27.1. orderhardware business process case definition Prerequisites You have created a new case in Business Central. For more information, see Chapter 25, Creating a new IT_Orders case project . You have created the data objects. For more information, see Chapter 26, Data objects . Procedure In Business Central, go to Menu Design Projects and click IT_Orders_New . Click Add Asset Case Definition . In the Create new Case definition window, add the following required information: Case Definition : Input orderhardware . This is usually the subject of the case or project that is being case managed. Package : Select com.myspace.it_orders_new to specify the location that the case file is created in. Click Ok to open the process designer. Define values for the case file variables that are accessible to the sub-processes, subcases, and business rules used in the case. In the upper-right corner, click the Properties icon. Scroll down and expand Case Management , click in the Case File Variables section, and enter the following: Figure 27.2. orderhardware case file variables Note The following case file variables are custom data types: hwSpec : org.jbpm.document.Document (type in this value) survey : Survey [com.myspace.it_orders_new] (select this value) Click Save . Define the roles involved in the case. In the upper-right corner, click the Properties icon. Scroll down and expand Case Management , click in the Case Roles section, and enter the following: Figure 27.3. orderhardware case roles owner : The employee who is making the hardware order request. The role cardinality is set to 1 , which means that only one person or group can be assigned to this role. manager : The employee's manager; the person who will approve or deny the requested hardware. The role cardinality is set to 1 , which means that only one person or group can be assigned to this role. supplier : The available suppliers of IT hardware in the system. The role cardinality is set to 2 , which means that more than one supplier can be assigned to this role. Click Save . 27.1. Creating the Place order sub-process Create the Place order sub-process, which is a separate business process that is carried out by the supplier. This is a reusable process that occurs during the course of case execution as described in Chapter 27, Designing the case definition . Prerequisites You have created a new case in Business Central. For more information, see Chapter 25, Creating a new IT_Orders case project . You have created the data objects. For more information, see Chapter 26, Data objects . Procedure In Business Central, go to Menu Design Projects IT_Orders_New . From the project menu, click Add Asset Business Process . In the Create new Business Process wizard, enter the following values: Business Process : place-order Package : Select com.myspace.it_orders_new Click Ok . The diagram editor opens. Click an empty space in the canvas, and in the upper-right corner, click the Properties icon. 
Scroll down, expand Process Data , click in the Process Variables section, and enter the following values under Process Variables : Table 27.1. Process variables Name Data Type CaseID String Requestor String _hwSpec org.jbpm.document.Document ordered_ Boolean info_ String caseFile_hwSpec org.jbpm.document.Document caseFile_ordered Boolean caseFile_orderInfo String Figure 27.4. Completed process variables Click Save . Drag a start event onto the canvas and create an outgoing connection from the start event to a task and convert the new task to a user task. Click the user task and in the Properties panel, input Place order in the Name field. Expand Implementation/Execution , click Add below the Groups menu, click Select New , and input supplier . Click in the Assignments field and add the following data inputs and outputs in the Place order Data I/O dialog box: Table 27.2. Data inputs and assignments Name Data Type Source _hwSpec org.jbpm.document caseFile_hwSpec orderNumber String CaseId Requestor String Requestor Table 27.3. Data outputs and assignments Name Data Type Target ordered_ Boolean caseFile_ordered info_ String caseFile_orderInfo For the first input assignment, select Custom for the Data Type and input org.jbpm.document.Document . Click OK . Select the Skippable check box and enter the following text in the Description field: Approved order #{CaseId} to be placed Create an outgoing connection from the Place order user task and connect it to an end event. Click Save to confirm your changes. You can open the sub-process in a new editor in Business Central by clicking the Place order task in the main process and then clicking the Open Sub-process task icon. 27.2. Creating the Manager approval business process The manager approval process determines whether the order is placed or rejected. Procedure In Business Central, go to Menu Design Projects IT_Orders_New orderhardware Business Processes . Create and configure the Prepare hardware spec user task: Expand Tasks in the Object Library and drag a user task onto the canvas and convert the new task to a user task. Click the new user task and click the Properties icon in the upper-right corner. Input Prepare hardware spec in the Name field. Expand Implementation/Execution , click Add below the Groups menu, click Select New , and input supplier . Input PrepareHardwareSpec in the Task Name field. Select the Skippable check box and enter the following text in the Description field: Prepare hardware specification for #{initiator} (order number #{CaseId}) Click in the Assignments field and add the following: Click OK . Create and configure the manager approval user task: Click the Prepare hardware spec user task and create a new user task. Click the new user task and click the Properties icon in the upper-right corner. Click the user task and in the Properties panel input Manager approval in the Name field. Expand Implementation/Execution , click Add below the Actors menu, click Select New , and input manager . Input ManagerApproval in the Task Name field. Click in the Assignments field and add the following: Click OK . Select the Skippable check box and enter the following text in the Description field: Approval request for new hardware for #{initiator} (order number #{CaseId}) Enter the following Java expression in the On Exit Action field: kcontext.setVariable("caseFile_managerDecision", approved); Click Save . Click the Manager approval user task and create a Data-based Exclusive (XOR) gateway.
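The On Exit Action above is a jBPM on-exit script. The following is the same expression with explanatory comments, shown only as a sketch: approved is assumed here to be the Boolean output variable of the ManagerApproval form, and kcontext is the org.kie.api.runtime.process.ProcessContext instance that jBPM exposes inside on-entry and on-exit scripts.
// Copy the Boolean output of the ManagerApproval task ("approved") into the
// caseFile_managerDecision variable, where the gateway conditions that follow can read it.
kcontext.setVariable("caseFile_managerDecision", approved);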
Create and configure the Place order reusable sub-process: From the Object Library , expand sub-processes , click Reusable , and drag the new element to the canvas on the right side of the Data-based Exclusive (XOR) gateway. Connect the Data-based Exclusive (XOR) gateway to the sub-process. Click the new sub task and click the Properties icon in the upper-right corner. Input Place order in the Name field. Expand Data Assignments and click in the Assignments field and add the following: Click OK . Click the connection from the Data-based Exclusive (XOR) gateway to the sub-process and click the Properties icon. Expand Implementation/Execution , select Condition , and set the following condition expressions. Click the Place order user task and create an end event. Create and configure the order rejected user task: Click the Data-based Exclusive (XOR) gateway and create a new user task. Drag the new task to align it below the Place order task. Click the new user task and click the Properties icon in the upper-right corner. Input Order rejected in the Name field. Expand Implementation/Execution and input OrderRejected in the Task Name field. Click Add below the Actors menu, click Select New , and input owner . click in the Assignments field and add the following: Click OK . Select the Skippable check box and enter the following text in the Description field: Order #{CaseId} has been rejected by manager Click the Order rejected user task and create an end event. Click Save . Click the connection from the Data-based Exclusive (XOR) gateway to the Order rejected user task and click the Properties icon. Expand Implementation/Execution , select Condition , and set the following condition expressions. Click Save . Figure 27.5. Manager approval business process
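The condition expressions set on the gateway's two outgoing connections in the steps above are not reproduced in this text. As an illustration only, and assuming the caseFile_managerDecision variable set by the Manager approval task is a Boolean, the connection to the Place order sub-process can use a Java condition such as:
return Boolean.TRUE.equals(caseFile_managerDecision);
and the connection to the Order rejected task the corresponding check:
return Boolean.FALSE.equals(caseFile_managerDecision);
These expressions are a sketch, not necessarily the exact expressions used in the IT_Orders sample project.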
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/getting_started_with_red_hat_process_automation_manager/case-management-designing-IT-hardware-proc
Data Grid documentation
Data Grid documentation Documentation for Data Grid is available on the Red Hat customer portal. Data Grid 8.4 Documentation Data Grid 8.4 Component Details Supported Configurations for Data Grid 8.4 Data Grid 8 Feature Support Data Grid Deprecated Features and Functionality
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/upgrading_data_grid/rhdg-docs_datagrid
Chapter 2. Network policies
Chapter 2. Network policies A cluster hosts two types of projects: Projects associated with managed services. These projects support inbound and outbound connections. User projects. These projects support communication from managed services. In OpenShift Dedicated, there are two approaches to enabling communications: Using network policies Using the join-project option of the oc command In OpenShift API Management, you can use network policies to enable communication and allow 3scale to communicate directly with the service endpoint, instead of the external URL. You cannot use the join-projects option of the oc command with managed services projects. 2.1. Enabling communication between managed services and customer applications You can create NetworkPolicy objects to define granular rules describing the Ingress network traffic that is allowed for projects in your cluster. By default, when you create projects in a cluster, communication between the projects is disabled. This procedure describes how to enable communication for a project so that managed services, such as 3scale, can access customer applications. Prerequisites You have installed the OpenShift command-line interface (CLI), commonly known as oc . Procedure Log in to the cluster using the oc login command. Use the following command to change the project: where <project_name> is the name of a project that you want to accept communications from other projects. Create a NetworkPolicy object: Create a allow-from-middleware-namespaces.yaml file. Define a policy in the file you just created, such as in the following example: Run the following command to create the policy object: 2.2. Enabling communication between managed services and projects By default, when you create projects in a cluster, communication between the projects is disabled. Use this procedure to enable communication in a project. Prerequisites You have installed the OpenShift command-line interface (CLI), commonly known as oc . Procedure Log in to the cluster using the oc login command. Use the following command to change the project: where <project_name> is the name of a project that you want to accept communications from other projects. Create a NetworkPolicy object: Create a NetworkPolicy.yaml file. Define a policy in the file you just created, such as in the following example. This policy enables incoming communication for all projects in the cluster: Note This policy configuration enables this project to communicate with all projects in the cluster. Run the following command to create the policy object: 2.3. Enabling communication between customer applications You can enable communication between user applications. Prerequisites You have installed the OpenShift command-line interface (CLI), commonly known as oc . Procedure Log in to the cluster using the oc login command. Use the following command to change the project: <project_name> is the name of a project that you want to accept communications from. Create a NetworkPolicy object: Create a allow-from-myproject-namespace.yaml file. Define a policy in the file you just created, such as in the following example. This policy enables incoming communication for a specific project ( myproject ): Run the following commands to create the policy object: 2.4. Disabling communication from a managed service to a project By default, projects are created with a template that allows communication from a managed service. For example, 3scale can communicate with all of your projects. 
You can disable the communication from a managed service to a project. Prerequisites You have installed the OpenShift command-line interface (CLI), commonly known as oc . You have a project you want to isolate from the managed services. Procedure Log in to the cluster using the oc login command. Use the following command to change the project: where <project_name> is the name of a project that you want to isolate from the managed services. Create a NetworkPolicy object: Create a deny-all.yaml file. Define a policy in the file you just created, such as in the following example: Run the following command to create the policy object: 2.5. Additional resources Networking in OpenShift Dedicated
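After you create or change a policy with any of the procedures in this chapter, you can confirm what is currently in effect for a project. The following commands are a sketch; substitute your own project and policy names:
oc get networkpolicy -n <project_name>
oc describe networkpolicy <policy_name> -n <project_name>
The describe output lists the pod selector and the ingress rules that the policy applies.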
[ "oc project <project_name>", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-middleware-namespaces spec: podSelector: ingress: - from: - namespaceSelector: matchLabels: integreatly-middleware-service: 'true'", "oc create -f allow-from-middleware-namespaces.yaml -n <project> networkpolicy \"allow-from-middleware-namespaces\" created", "oc project <project_name>", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-all spec: podSelector: ingress: - {}", "oc create -f <policy-name>.yaml -n <project>", "oc project <project_name>", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-myproject-namespace spec: podSelector: ingress: - from: - namespaceSelector: matchLabels: project: myproject", "oc create -f allow-from-myproject-namespace.yaml -n <project> networkpolicy \"allow-from-myproject-namespace\" created", "oc project <project_name>", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-all spec: podSelector: {} ingress: - from: - namespaceSelector: matchLabels: integreatly-middleware-service: 'true'", "oc create -f <policy-name>.yaml -n <project>" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_api_management/1/html/administering_red_hat_openshift_api_management/network-policies_network-policies
Chapter 7. Jakarta Contexts and Dependency Injection
Chapter 7. Jakarta Contexts and Dependency Injection 7.1. Introduction to Jakarta Contexts and Dependency Injection 7.1.1. About Jakarta Contexts and Dependency Injection Jakarta Contexts and Dependency Injection 2.0 is a specification designed to enable Jakarta Enterprise Beans 3 components to be used as Jakarta Server Faces managed beans. Jakarta Contexts and Dependency Injection unifies the two component models and enables a considerable simplification to the programming model for web-based applications in Java. Details about Jakarta Contexts and Dependency Injection 2.0 can be found in Jakarta Contexts and Dependency Injection 2.0 Specification . JBoss EAP includes Weld, which is a Jakarta Contexts and Dependency Injection 2.0 compatible implementation. Note Weld is a compatible implementation of Jakarta Contexts and Dependency Injection for the Jakarta EE Platform. Jakarta Contexts and Dependency Injection is a Jakarta EE standard for dependency injection and contextual lifecycle management. Further, Jakarta Contexts and Dependency Injection is one of the most important parts of Jakarta EE. Benefits of Jakarta Contexts and Dependency Injection The benefits of Jakarta Contexts and Dependency Injection include: Simplifying and shrinking your code base by replacing big chunks of code with annotations. Flexibility, allowing you to disable and enable injections and events, use alternative beans, and inject non-Contexts and Dependency Injection objects easily. Optionally, allowing you to include a beans.xml file in your META-INF/ or WEB-INF/ directory if you need to customize the configuration to differ from the default. The file can be empty. Simplifying packaging and deployments and reducing the amount of XML you need to add to your deployments. Providing lifecycle management via contexts. You can tie injections to requests, sessions, conversations, or custom contexts. Providing type-safe dependency injection, which is safer and easier to debug than string-based injection. Decoupling interceptors from beans. Providing complex event notification. 7.2. Use Jakarta Contexts and Dependency Injection to develop an application Jakarta Contexts and Dependency Injection gives you tremendous flexibility in developing applications, reusing code, adapting your code at deployment or runtime, and unit testing. Weld comes with a special mode for application development. When enabled, certain built-in tools, which facilitate the development of Jakarta Contexts and Dependency Injection applications, are available. Note The development mode should not be used in production as it can have a negative impact on the performance of the application. Make sure to disable the development mode before deploying to production. Enabling the Development Mode for a Web Application: For a web application, set the servlet initialization parameter org.jboss.weld.development to true : <web-app> <context-param> <param-name>org.jboss.weld.development</param-name> <param-value>true</param-value> </context-param> </web-app> Enabling Development Mode for JBoss EAP Using the Management CLI: It is possible to enable the Weld development mode globally for all deployed applications by setting the development-mode attribute to true : 7.2.1. Default Bean Discovery Mode The default bean discovery mode for a bean archive is annotated . Such a bean archive is said to be an implicit bean archive .
If the bean discovery mode is annotated , then: Bean classes that do not have bean defining annotation and are not bean classes of sessions beans are not discovered. Producer methods that are not on a session bean and whose bean class does not have a bean defining annotation are not discovered. Producer fields that are not on a session bean and whose bean class does not have a bean defining annotation are not discovered. Disposer methods that are not on a session bean and whose bean class does not have a bean defining annotation are not discovered. Observer methods that are not on a session bean and whose bean class does not have a bean defining annotation are not discovered. Important All examples in the Contexts and Dependency Injection section are valid only when you have a discovery mode set to all . Bean Defining Annotations A bean class can have a bean defining annotation , allowing it to be placed anywhere in an application, as defined in bean archives. A bean class with a bean defining annotation is said to be an implicit bean. The set of bean defining annotations contains: @ApplicationScoped , @SessionScoped , @ConversationScoped and @RequestScoped annotations. All other normal scope types. @Interceptor and @Decorator annotations. All stereotype annotations, i.e. annotations annotated with @Stereotype . The @Dependent scope annotation. If one of these annotations is declared on a bean class, then the bean class is said to have a bean defining annotation. Example: Bean Defining Annotation @Dependent public class BookShop extends Business implements Shop<Book> { ... } Note To ensure compatibility with other JSR-330 implementations and the Jakarta Contexts and Dependency Injection specification, all pseudo-scope annotations, except @Dependent , are not bean defining annotations. However, a stereotype annotation, including a pseudo-scope annotation, is a bean defining annotation. 7.2.2. Exclude Beans From the Scanning Process Exclude filters are defined by <exclude> elements in the beans.xml file for the bean archive as children of the <scan> element. By default an exclude filter is active. The exclude filter becomes inactive, if its definition contains: A child element named <if-class-available> with a name attribute, and the class loader for the bean archive can not load a class for that name, or A child element named <if-class-not-available> with a name attribute, and the class loader for the bean archive can load a class for that name, or A child element named <if-system-property> with a name attribute, and there is no system property defined for that name, or A child element named <if-system-property> with a name attribute and a value attribute, and there is no system property defined for that name with that value. The type is excluded from discovery, if the filter is active, and: The fully qualified name of the type being discovered matches the value of the name attribute of the exclude filter, or The package name of the type being discovered matches the value of the name attribute with a suffix ".*" of the exclude filter, or The package name of the type being discovered starts with the value of the name attribute with a suffix ".**" of the exclude filter Example 7.1. 
Example: beans.xml File <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://xmlns.jcp.org/xml/ns/javaee"> <scan> <exclude name="com.acme.rest.*" /> 1 <exclude name="com.acme.faces.**"> 2 <if-class-not-available name="javax.faces.context.FacesContext"/> </exclude> <exclude name="com.acme.verbose.*"> 3 <if-system-property name="verbosity" value="low"/> </exclude> <exclude name="com.acme.ejb.**"> 4 <if-class-available name="javax.enterprise.inject.Model"/> <if-system-property name="exclude-ejbs"/> </exclude> </scan> </beans> 1 The first exclude filter will exclude all classes in com.acme.rest package. 2 The second exclude filter will exclude all classes in the com.acme.faces package, and any subpackages, but only if Jakarta Server Faces is not available. 3 The third exclude filter will exclude all classes in the com.acme.verbose package if the system property verbosity has the value low . 4 The fourth exclude filter will exclude all classes in the com.acme.ejb package, and any subpackages, if the system property exclude-ejbs is set with any value and if at the same time, the javax.enterprise.inject.Model class is also available to the classloader. Note It is safe to annotate Jakarta EE components with @Vetoed to prevent them being considered beans. An event is not fired for any type annotated with @Vetoed , or in a package annotated with @Vetoed . For more information, see @Vetoed . 7.2.3. Use an Injection to Extend an Implementation You can use an injection to add or change a feature of your existing code. The following example adds a translation ability to an existing class, and assumes you already have a Welcome class, which has a method buildPhrase . The buildPhrase method takes as an argument the name of a city, and outputs a phrase like "Welcome to Boston!". This example injects a hypothetical Translator object into the Welcome class. The Translator object can be a Jakarta Enterprise Beans stateless bean or another type of bean, which can translate sentences from one language to another. In this instance, the Translator is used to translate the entire greeting, without modifying the original Welcome class. The Translator is injected before the buildPhrase method is called. Example: Inject a Translator Bean into the Welcome Class public class TranslatingWelcome extends Welcome { @Inject Translator translator; public String buildPhrase(String city) { return translator.translate("Welcome to " + city + "!"); } ... } 7.3. Ambiguous or Unsatisfied Dependencies Ambiguous dependencies exist when the container is unable to resolve an injection to exactly one bean. Unsatisfied dependencies exist when the container is unable to resolve an injection to any bean at all. The container takes the following steps to try to resolve dependencies: It resolves the qualifier annotations on all beans that implement the bean type of an injection point. It filters out disabled beans. Disabled beans are @Alternative beans which are not explicitly enabled. In the event of an ambiguous or unsatisfied dependency, the container aborts deployment and throws an exception. To fix an ambiguous dependency, see Use a Qualifier to Resolve an Ambiguous Injection . 7.3.1. Qualifiers Qualifiers are annotations used to avoid ambiguous dependencies when the container can resolve multiple beans, which fit into an injection point. A qualifier declared at an injection point provides the set of eligible beans, which declare the same qualifier. 
Qualifiers must be declared with a retention and target as shown in the example below. Example: Define the @Synchronous and @Asynchronous Qualifiers @Qualifier @Retention(RUNTIME) @Target({TYPE, METHOD, FIELD, PARAMETER}) public @interface Synchronous {} @Qualifier @Retention(RUNTIME) @Target({TYPE, METHOD, FIELD, PARAMETER}) public @interface Asynchronous {} Example: Use the @Synchronous and @Asynchronous Qualifiers @Synchronous public class SynchronousPaymentProcessor implements PaymentProcessor { public void process(Payment payment) { ... } } @Asynchronous public class AsynchronousPaymentProcessor implements PaymentProcessor { public void process(Payment payment) { ... } } '@Any' Whenever a bean or injection point does not explicitly declare a qualifier, the container assumes the qualifier @Default . From time to time, you will need to declare an injection point without specifying a qualifier. There is a qualifier for that too. All beans have the qualifier @Any . Therefore, by explicitly specifying @Any at an injection point, you suppress the default qualifier, without otherwise restricting the beans that are eligible for injection. This is especially useful if you want to iterate over all beans of a certain bean type. import javax.enterprise.inject.Instance; ... @Inject void initServices(@Any Instance<Service> services) { for (Service service: services) { service.init(); } } Every bean has the qualifier @Any , even if it does not explicitly declare this qualifier. Every event also has the qualifier @Any , even if it was raised without explicit declaration of this qualifier. @Inject @Any Event<User> anyUserEvent; The @Any qualifier allows an injection point to refer to all beans or all events of a certain bean type. @Inject @Delegate @Any Logger logger; 7.3.2. Use a Qualifier to Resolve an Ambiguous Injection You can resolve an ambiguous injection using a qualifier. Read more about ambiguous injections at Ambiguous or Unsatisfied Dependencies . The following example is ambiguous and features two implementations of Welcome , one which translates and one which does not. The injection needs to be specified to use the translating Welcome . Example: Ambiguous Injection public class Greeter { private Welcome welcome; @Inject void init(Welcome welcome) { this.welcome = welcome; } ... } Resolve an Ambiguous Injection with a Qualifier To resolve the ambiguous injection, create a qualifier annotation called @Translating : @Qualifier @Retention(RUNTIME) @Target({TYPE,METHOD,FIELD,PARAMETERS}) public @interface Translating{} Annotate your translating Welcome with the @Translating annotation: @Translating public class TranslatingWelcome extends Welcome { @Inject Translator translator; public String buildPhrase(String city) { return translator.translate("Welcome to " + city + "!"); } ... } Request the translating Welcome in your injection. You must request a qualified implementation explicitly, similar to the factory method pattern. The ambiguity is resolved at the injection point. public class Greeter { private Welcome welcome; @Inject void init(@Translating Welcome welcome) { this.welcome = welcome; } public void welcomeVisitors() { System.out.println(welcome.buildPhrase("San Francisco")); } } 7.4. Managed Beans Jakarta EE establishes a common definition in the Jakarta Managed Beans specification . For Jakarta EE, managed beans are defined as container-managed objects with minimal programming restrictions, otherwise known by the acronym POJO (Plain Old Java Object). 
They support a small set of basic services, such as resource injection, lifecycle callbacks, and interceptors. Companion specifications, such as Jakarta Enterprise Beans and Jakarta Contexts and Dependency Injection, build on this basic model. With very few exceptions, almost every concrete Java class that has a constructor with no parameters, or a constructor designated with the annotation @Inject , is a bean. This includes every JavaBean and every Jakarta Enterprise Beans session bean. 7.4.1. Types of Classes That are Beans A managed bean is a Java class. For Jakarta EE, the basic lifecycle and semantics of a managed bean are defined by the Jakarta Managed Beans 1.0 specification . You can explicitly declare a managed bean by annotating the bean class @ManagedBean , but in Contexts and Dependency Injection you do not need to. According to the specification, the Contexts and Dependency Injection container treats any class that satisfies the following conditions as a managed bean: It is not a non-static inner class. It is a concrete class or is annotated with @Decorator . It is not annotated with a Jakarta Enterprise Beans component-defining annotation or declared as a Jakarta Enterprise Beans bean class in the ejb-jar.xml file. It does not implement the interface javax.enterprise.inject.spi.Extension . It has either a constructor with no parameters, or a constructor annotated with @Inject . It is not annotated with @Vetoed or in a package annotated with @Vetoed . The unrestricted set of bean types for a managed bean contains the bean class, every superclass, and all interfaces it implements directly or indirectly. If a managed bean has a public field, it must have the default scope @Dependent . @Vetoed You can veto processing of a class so that no beans or observer methods defined by this class are installed: @Vetoed public class SimpleGreeting implements Greeting { ... } In this code, the SimpleGreeting bean is not considered for injection. All beans in a package can be prevented from injection: @Vetoed package org.sample.beans; import javax.enterprise.inject.Vetoed; This code in package-info.java in the org.sample.beans package will prevent all beans inside this package from injection. Jakarta EE components, such as stateless Jakarta Enterprise Beans or Jakarta RESTful Web Services resource endpoints, can be marked with @Vetoed to prevent them from being considered beans. Adding the @Vetoed annotation to all persistent entities prevents the BeanManager from managing an entity as a Jakarta Contexts and Dependency Injection Bean. When an entity is annotated with @Vetoed , no injections take place. The reasoning behind this is to prevent the BeanManager from performing the operations that might cause the Jakarta Persistence provider to break. 7.4.2. Use Contexts and Dependency Injection to Inject an Object Into a Bean Contexts and Dependency Injection is activated automatically if Contexts and Dependency Injection components are detected in an application. If you want to customize your configuration to differ from the default, you can include a META-INF/beans.xml file or a WEB-INF/beans.xml file in your deployment archive. Inject Objects into Other Objects To obtain an instance of a class, annotate the field with @Inject within your bean: public class TranslateController { @Inject TextTranslator textTranslator; ... Use your injected object's methods directly. 
Assume that TextTranslator has a method translate : // in TranslateController class public void translate() { translation = textTranslator.translate(inputText); } Use an injection in the constructor of a bean. You can inject objects into the constructor of a bean as an alternative to using a factory or service locator to create them: public class TextTranslator { private SentenceParser sentenceParser; private Translator sentenceTranslator; @Inject TextTranslator(SentenceParser sentenceParser, Translator sentenceTranslator) { this.sentenceParser = sentenceParser; this.sentenceTranslator = sentenceTranslator; } // Methods of the TextTranslator class ... } Use the Instance(<T>) interface to get instances programmatically. The Instance interface can return an instance of TextTranslator when parameterized with the bean type. @Inject Instance<TextTranslator> textTranslatorInstance; ... public void translate() { textTranslatorInstance.get().translate(inputText); } When you inject an object into a bean, all of the object's methods and properties are available to your bean. If you inject into your bean's constructor, instances of the injected objects are created when your bean's constructor is called, unless the injection refers to an instance that already exists. For instance, a new instance would not be created if you inject a session-scoped bean during the lifetime of the session. 7.5. Contexts and Scopes A context, in terms of Contexts and Dependency Injection, is a storage area that holds instances of beans associated with a specific scope. A scope is the link between a bean and a context. A scope/context combination can have a specific lifecycle. Several predefined scopes exist, and you can create your own. Examples of predefined scopes are @RequestScoped , @SessionScoped , and @ConversationScope . Table 7.1. Available Scopes Scope Description @Dependent The bean is bound to the lifecycle of the bean holding the reference. The default scope for an injected bean is @Dependent . @ApplicationScoped The bean is bound to the lifecycle of the application. @RequestScoped The bean is bound to the lifecycle of the request. @SessionScoped The bean is bound to the lifecycle of the session. @ConversationScoped The bean is bound to the lifecycle of the conversation. The conversation scope is between the lengths of the request and the session, and is controlled by the application. Custom scopes If the above contexts do not meet your needs, you can define custom scopes. 7.6. Named Beans You can name a bean by using the @Named annotation. Naming a bean allows you to use it directly in Jakarta Server Faces and Jakarta Expression Language. The @Named annotation takes an optional parameter, which is the bean name. If this parameter is omitted, the bean name defaults to the class name of the bean with its first letter converted to lowercase. 7.6.1. Use Named Beans Configure Bean Names Using the @Named Annotation Use the @Named annotation to assign a name to a bean. @Named("greeter") public class GreeterBean { private Welcome welcome; @Inject void init (Welcome welcome) { this.welcome = welcome; } public void welcomeVisitors() { System.out.println(welcome.buildPhrase("San Francisco")); } } In the example above, the default name would be greeterBean if no name had been specified. Use the named bean in a Jakarta Server Faces view. 7.7. Bean Lifecycle This task shows you how to save a bean for the life of a request. The default scope for an injected bean is @Dependent . 
This means that the bean's lifecycle is dependent upon the lifecycle of the bean that holds the reference. Several other scopes exist, and you can define your own scopes. For more information, see Contexts and Scopes . Manage Bean Lifecycles Annotate the bean with the desired scope. @RequestScoped @Named("greeter") public class GreeterBean { private Welcome welcome; private String city; // getter & setter not shown @Inject void init(Welcome welcome) { this.welcome = welcome; } public void welcomeVisitors() { System.out.println(welcome.buildPhrase(city)); } } When your bean is used in the Jakarta Server Faces view, it holds state. <h:form> <h:inputText value="#{greeter.city}"/> <h:commandButton value="Welcome visitors" action="#{greeter.welcomeVisitors}"/> </h:form> Your bean is saved in the context relating to the scope that you specify, and lasts as long as the scope applies. 7.7.1. Use a Producer Method A producer method is a method that acts as a source of bean instances. When no instance exists in the specified context, the method declaration itself describes the bean, and the container invokes the method to obtain an instance of the bean. A producer method lets the application take full control of the bean instantiation process. This section shows how to use producer methods to produce a variety of different objects that are not beans for injection. Example: Use a Producer Method By using a producer method instead of an alternative, polymorphism after deployment is allowed. The @Preferred annotation in the example is a qualifier annotation. For more information about qualifiers, see Qualifiers . @SessionScoped public class Preferences implements Serializable { private PaymentStrategyType paymentStrategy; ... @Produces @Preferred public PaymentStrategy getPaymentStrategy() { switch (paymentStrategy) { case CREDIT_CARD: return new CreditCardPaymentStrategy(); case CHECK: return new CheckPaymentStrategy(); default: return null; } } } The following injection point has the same type and qualifier annotations as the producer method, so it resolves to the producer method using the usual Contexts and Dependency Injection injection rules. The producer method is called by the container to obtain an instance to service this injection point. @Inject @Preferred PaymentStrategy paymentStrategy; Example: Assign a Scope to a Producer Method The default scope of a producer method is @Dependent . If you assign a scope to a bean, it is bound to the appropriate context. The producer method in this example is only called once per session. @Produces @Preferred @SessionScoped public PaymentStrategy getPaymentStrategy() { ... } Example: Use an Injection Inside a Producer Method Objects instantiated directly by an application cannot take advantage of dependency injection and do not have interceptors. However, you can use dependency injection into the producer method to obtain bean instances. @Produces @Preferred @SessionScoped public PaymentStrategy getPaymentStrategy(CreditCardPaymentStrategy ccps, CheckPaymentStrategy cps ) { switch (paymentStrategy) { case CREDIT_CARD: return ccps; case CHEQUE: return cps; default: return null; } } If you inject a request-scoped bean into a session-scoped producer, the producer method promotes the current request-scoped instance into session scope. This is almost certainly not the desired behavior, so use caution when you use a producer method in this way. Note The scope of the producer method is not inherited from the bean that declares the producer method. 
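When a produced object holds resources that must be released, you can pair the producer with a disposer method in the same class. The following is a minimal sketch that reuses the @Preferred qualifier, PaymentStrategy type, and CreditCardPaymentStrategy class from the examples above; the declaring bean's name and the cleanup behavior are illustrative, not part of this documentation's sample application.
import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.context.SessionScoped;
import javax.enterprise.inject.Disposes;
import javax.enterprise.inject.Produces;

@ApplicationScoped
public class PaymentStrategyProducer {

    @Produces @Preferred @SessionScoped
    public PaymentStrategy getPaymentStrategy() {
        // Produce one strategy per session; the implementation chosen here is illustrative.
        return new CreditCardPaymentStrategy();
    }

    public void releasePaymentStrategy(@Disposes @Preferred PaymentStrategy strategy) {
        // Invoked by the container when the session context ends, giving you a place
        // to release any resources the produced strategy holds.
    }
}
The disposer method is called automatically by the container; you do not invoke it from application code.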
Producer methods allow you to inject non-bean objects and change your code dynamically. 7.8. Alternative Beans Alternatives are beans whose implementation is specific to a particular client module or deployment scenario. By default, @Alternative beans are disabled. They are enabled for a specific bean archive by editing its beans.xml file. However, this activation only applies to the beans in that archive. You can enable alternative for the entire application using the @Priority annotation. Example: Defining Alternatives This alternative defines an implementation of the PaymentProcessor class using both @Synchronous and @Asynchronous alternatives: @Alternative @Synchronous @Asynchronous public class MockPaymentProcessor implements PaymentProcessor { public void process(Payment payment) { ... } } Example: Enabling @Alternative Using beans.xml <beans xmlns="http://xmlns.jcp.org/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation=" http://xmlns.jcp.org/xml/ns/javaee http://xmlns.jcp.org/xml/ns/javaee/beans_2_0.xsd"> <alternatives> <class>org.mycompany.mock.MockPaymentProcessor</class> </alternatives> </beans> Declaring Selected Alternatives The @Priority annotation allows an alternative to be enabled for an entire application. An alternative can be given a priority for the application: by placing the @Priority annotation on the bean class of a managed bean or session bean, or by placing the @Priority annotation on the bean class that declares the producer method, field or resource. 7.8.1. Override an Injection with an Alternative You can use alternative beans to override existing beans. They can be thought of as a way to plug in a class which fills the same role, but functions differently. They are disabled by default. This task shows you how to specify and enable an alternative. Override an Injection This task assumes that you already have a TranslatingWelcome class in your project, but you want to override it with a "mock" TranslatingWelcome class. This would be the case for a test deployment, where the true Translator bean cannot be used. Define the alternative. @Alternative @Translating public class MockTranslatingWelcome extends Welcome { public String buildPhrase(string city) { return "Bienvenue A " + city + "!"); } } Activate the substitute implementation by adding the fully-qualified class name to your META-INF/beans.xml or WEB-INF/beans.xml file. <beans> <alternatives> <class>com.acme.MockTranslatingWelcome</class> </alternatives> </beans> The alternative implementation is now used instead of the original one. 7.9. Stereotypes In many systems, use of architectural patterns produces a set of recurring bean roles. A stereotype allows you to identify such a role and declare some common metadata for beans with that role in a central place. A stereotype encapsulates any combination of: A default scope. A set of interceptor bindings. A stereotype can also specify either: All beans where the stereotypes are defaulted bean EL names. All beans where the stereotypes are alternatives. A bean can declare zero, one, or multiple stereotypes. A stereotype is an @Stereotype annotation that packages several other annotations. Stereotype annotations can be applied to a bean class, producer method, or field. A class that inherits a scope from a stereotype can override that stereotype and specify a scope directly on the bean. In addition, if a stereotype has a @Named annotation, any bean it is placed on has a default bean name. 
The bean can override this name if the @Named annotation is specified directly on the bean. For more information about named beans, see Named Beans . 7.9.1. Use Stereotypes Without stereotypes, annotations can become cluttered. This task shows you how to use stereotypes to reduce the clutter and streamline your code. Example: Annotation Clutter @Secure @Transactional @RequestScoped @Named public class AccountManager { public boolean transfer(Account a, Account b) { ... } } Define and Use Stereotypes Define the stereotype. @Secure @Transactional @RequestScoped @Named @Stereotype @Retention(RUNTIME) @Target(TYPE) public @interface BusinessComponent { ... } Use the stereotype. @BusinessComponent public class AccountManager { public boolean transfer(Account a, Account b) { ... } } 7.10. Observer Methods Observer methods receive notifications when events occur. Contexts and Dependency Injection also provides transactional observer methods, which receive event notifications during the before completion or after completion phase of the transaction in which the event was fired. 7.10.1. Fire and Observe Events Example: Fire an Event The following code shows an event being injected and used in a method. public class AccountManager { @Inject Event<Withdrawal> event; public boolean transfer(Account a, Account b) { ... event.fire(new Withdrawal(a)); } } Example: Fire an Event with a Qualifier You can annotate your event injection with a qualifier, to make it more specific. For more information about qualifiers, see Qualifiers . public class AccountManager { @Inject @Suspicious Event <Withdrawal> event; public boolean transfer(Account a, Account b) { ... event.fire(new Withdrawal(a)); } } Example: Observe an Event To observe an event, use the @Observes annotation. public class AccountObserver { void checkTran(@Observes Withdrawal w) { ... } } You can use qualifiers to observe only specific types of events. public class AccountObserver { void checkTran(@Observes @Suspicious Withdrawal w) { ... } } 7.10.2. Transactional Observers Transactional observers receive the event notifications before or after the completion phase of the transaction in which the event was raised. Transactional observers are important in a stateful object model because state is often held for longer than a single atomic transaction. There are five kinds of transactional observers: IN_PROGRESS : By default, observers are invoked immediately. AFTER_SUCCESS : Observers are invoked after the completion phase of the transaction, but only if the transaction completes successfully. AFTER_FAILURE : Observers are invoked after the completion phase of the transaction, but only if the transaction fails to complete successfully. AFTER_COMPLETION : Observers are invoked after the completion phase of the transaction. BEFORE_COMPLETION : Observers are invoked before the completion phase of the transaction. The following observer method refreshes a query result set cached in the application context, but only when transactions that update the Category tree are successful: public void refreshCategoryTree(@Observes(during = AFTER_SUCCESS) CategoryUpdateEvent event) { ... 
} Assume you have cached a Jakarta Persistence query result set in the application scope as shown in the following example: import javax.ejb.Singleton; import javax.enterprise.inject.Produces; @ApplicationScoped @Singleton public class Catalog { @PersistenceContext EntityManager em; List<Product> products; @Produces @Catalog List<Product> getCatalog() { if (products==null) { products = em.createQuery("select p from Product p where p.deleted = false") .getResultList(); } return products; } } Occasionally a Product is created or deleted. When this occurs, you need to refresh the Product catalog. But you must wait for the transaction to complete successfully before performing this refresh. The following is an example of a bean that creates and deletes Products triggers events: import javax.enterprise.event.Event; @Stateless public class ProductManager { @PersistenceContext EntityManager em; @Inject @Any Event<Product> productEvent; public void delete(Product product) { em.delete(product); productEvent.select(new AnnotationLiteral<Deleted>(){}).fire(product); } public void persist(Product product) { em.persist(product); productEvent.select(new AnnotationLiteral<Created>(){}).fire(product); } ... } The Catalog can now observe the events after successful completion of the transaction: import javax.ejb.Singleton; @ApplicationScoped @Singleton public class Catalog { ... void addProduct(@Observes(during = AFTER_SUCCESS) @Created Product product) { products.add(product); } void removeProduct(@Observes(during = AFTER_SUCCESS) @Deleted Product product) { products.remove(product); } } 7.11. Interceptors Interceptors allow you to add functionality to the business methods of a bean without modifying the bean's method directly. The interceptor is executed before any of the business methods of the bean. Interceptors are defined as part of the Jakarta Enterprise Beans specification. Jakarta Contexts and Dependency Injection enhances this functionality by allowing you to use annotations to bind interceptors to beans. Interception points Business method interception: A business method interceptor applies to invocations of methods of the bean by clients of the bean. Lifecycle callback interception: A lifecycle callback interceptor applies to invocations of lifecycle callbacks by the container. Timeout method interception: A timeout method interceptor applies to invocations of the Jakarta Enterprise Beans timeout methods by the container. Enabling Interceptors By default, all interceptors are disabled. You can enable the interceptor by using the beans.xml descriptor of a bean archive. However, this activation only applies to the beans in that archive. You can enable interceptors for the whole application using the @Priority annotation. Example: Enabling Interceptors in beans.xml <beans xmlns="http://xmlns.jcp.org/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation=" http://xmlns.jcp.org/xml/ns/javaee http://xmlns.jcp.org/xml/ns/javaee/beans_2.0.xsd"> <interceptors> <class>org.mycompany.myapp.TransactionInterceptor</class> </interceptors> </beans> Having the XML declaration solves two problems: It enables you to specify an ordering for the interceptors in your system, ensuring deterministic behavior. It lets you enable or disable interceptor classes at deployment time. Interceptors enabled using @Priority are called before interceptors enabled using the beans.xml file. 
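As an alternative to listing an interceptor class in beans.xml , you can enable it for the entire application with @Priority . The following sketch uses a hypothetical @Logged interceptor binding (an @InterceptorBinding annotation like the @Secure binding defined later in this section, not part of this documentation's examples); the priority constant chosen is one reasonable value, not a required one.
import javax.annotation.Priority;
import javax.interceptor.AroundInvoke;
import javax.interceptor.Interceptor;
import javax.interceptor.InvocationContext;

@Logged
@Interceptor
@Priority(Interceptor.Priority.APPLICATION)
public class LoggingInterceptor {

    @AroundInvoke
    public Object logInvocation(InvocationContext ctx) throws Exception {
        // Log the intercepted business method, then continue the invocation chain.
        System.out.println("Invoking " + ctx.getMethod().getName());
        return ctx.proceed();
    }
}
With @Priority present, do not also list the class in beans.xml , for the portability reason described in the note that follows.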
Note Having an interceptor enabled by @Priority and at the same time invoked by the beans.xml file leads to a nonportable behavior. This combination of enablement should therefore be avoided in order to maintain consistent behavior across different Jakarta Contexts and Dependency Injection implementations. 7.11.1. Use Interceptors with Jakarta Contexts and Dependency Injection Jakarta Contexts and Dependency Injection can simplify your interceptor code and make it easier to apply to your business code. Without Jakarta Contexts and Dependency Injection, interceptors have two problems: The bean must specify the interceptor implementation directly. Every bean in the application must specify the full set of interceptors in the correct order. This makes adding or removing interceptors on an application-wide basis time-consuming and error-prone. Using Interceptors with Jakarta Contexts and Dependency Injection Define the interceptor binding type. @InterceptorBinding @Retention(RUNTIME) @Target({TYPE, METHOD}) public @interface Secure {} Mark the interceptor implementation. @Secure @Interceptor public class SecurityInterceptor { @AroundInvoke public Object aroundInvoke(InvocationContext ctx) throws Exception { // enforce security ... return ctx.proceed(); } } Use the interceptor in your business code. @Secure public class AccountManager { public boolean transfer(Account a, Account b) { ... } } Enable the interceptor in your deployment by adding it to the META-INF/beans.xml or WEB-INF/beans.xml file. <beans> <interceptors> <class>com.acme.SecurityInterceptor</class> <class>com.acme.TransactionInterceptor</class> </interceptors> </beans> The interceptors are applied in the order listed. 7.12. Decorators A decorator intercepts invocations from a specific Java interface, and is aware of all the semantics attached to that interface. Decorators are useful for modeling some kinds of business concerns, but do not have the generality of interceptors. A decorator is a bean, or even an abstract class, that implements the type it decorates, and is annotated with @Decorator . To invoke a decorator in a Jakarta Contexts and Dependency Injection application, it must be specified in the beans.xml file. Example: Invoke a Decorator Through beans.xml <beans xmlns="http://xmlns.jcp.org/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation=" http://xmlns.jcp.org/xml/ns/javaee http://xmlns.jcp.org/xml/ns/javaee/beans_2_0.xsd"> <decorators> <class>org.mycompany.myapp.LargeTransactionDecorator</class> </decorators> </beans> This declaration serves two main purposes: It enables you to specify an ordering for decorators in your system, ensuring deterministic behavior. It lets you enable or disable decorator classes at deployment time. A decorator must have exactly one @Delegate injection point to obtain a reference to the decorated object. Example: Decorator Class @Decorator public abstract class LargeTransactionDecorator implements Account { @Inject @Delegate @Any Account account; @PersistenceContext EntityManager em; public void withdraw(BigDecimal amount) { ... } public void deposit(BigDecimal amount) { ... } } You can enable a decorator for the whole application using the @Priority annotation. Decorators enabled using @Priority are called before decorators enabled using the beans.xml file. The lower priority values are called first. Note Having a decorator enabled by @Priority and at the same time invoked by beans.xml leads to a nonportable behavior.
This combination of enablement should therefore be avoided in order to maintain consistent behavior across different Contexts and Dependency Injection implementations. 7.13. Portable Extensions Contexts and Dependency Injection is intended to be a foundation for frameworks, extensions, and integration with other technologies. Therefore, Contexts and Dependency Injection exposes a set of SPIs for the use of developers of portable extensions to Contexts and Dependency Injection. Extensions can provide the following types of functionality: Integration with Business Process Management engines. Integration with third-party frameworks, such as Spring, Seam, GWT, or Wicket. New technology based upon the Contexts and Dependency Injection programming model. According to the Jakarta Contexts and Dependency Injection specification, a portable extension can integrate with the container in the following ways: Providing its own beans, interceptors, and decorators to the container. Injecting dependencies into its own objects using the dependency injection service. Providing a context implementation for a custom scope. Augmenting or overriding the annotation-based metadata with metadata from another source. For more information, see Portable extensions in the Weld documentation. 7.14. Bean Proxies Clients of an injected bean do not usually hold a direct reference to a bean instance. Unless the bean is a dependent object (scope @Dependent ), the container must redirect all injected references to the bean using a proxy object. A bean proxy, which can be referred to as a client proxy, is responsible for ensuring the bean instance that receives a method invocation is the instance associated with the current context. The client proxy also allows beans bound to contexts, such as the session context, to be serialized to disk without recursively serializing other injected beans. Due to Java limitations, certain Java types cannot be proxied by the container. If an injection point declared with one of these types resolves to a bean with a scope other than @Dependent , the container aborts the deployment. The types that cannot be proxied include: Classes that do not have a non-private constructor with no parameters. Classes that are declared final or have a final method. Arrays and primitive types. 7.15. Use a Proxy in an Injection A proxy is used for injection when the lifecycles of the beans are different from each other. The proxy is a subclass of the bean that is created at runtime, and overrides all the non-private methods of the bean class. The proxy forwards the invocation to the actual bean instance. In this example, the PaymentProcessor instance is not injected directly into Shop . Instead, a proxy is injected, and when the processPayment() method is called, the proxy looks up the current PaymentProcessor bean instance and calls the processPayment() method on it. Example: Proxy Injection @ConversationScoped class PaymentProcessor { public void processPayment(int amount) { System.out.println("I'm taking USD" + amount); } } @ApplicationScoped public class Shop { @Inject PaymentProcessor paymentProcessor; public void buyStuff() { paymentProcessor.processPayment(100); } }
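To make the proxyability restrictions concrete, the following is a minimal sketch (the class is hypothetical and not part of the examples above). Because the class is declared final and has no non-private no-argument constructor, the container cannot build a client proxy for it, so giving it any scope other than @Dependent causes deployment to abort:
import javax.enterprise.context.ApplicationScoped;
import java.math.BigDecimal;

// Hypothetical bean: final class whose only constructor takes a parameter.
// The container cannot subclass it to create a client proxy, so deployment
// fails for any normal scope such as @ApplicationScoped.
@ApplicationScoped
public final class FixedRateCalculator {
    private final BigDecimal rate;

    public FixedRateCalculator(BigDecimal rate) {
        this.rate = rate;
    }

    public BigDecimal apply(BigDecimal amount) {
        return amount.multiply(rate);
    }
}
Removing the final modifier and adding a no-argument constructor, or injecting the bean only at @Dependent injection points, resolves the error.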
[ "<web-app> <context-param> <param-name>org.jboss.weld.development</param-name> <param-value>true</param-value> </context-param> </web-app>", "/subsystem=weld:write-attribute(name=development-mode,value=true)", "@Dependent public class BookShop extends Business implements Shop<Book> { }", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <beans xmlns=\"http://xmlns.jcp.org/xml/ns/javaee\"> <scan> <exclude name=\"com.acme.rest.*\" /> 1 <exclude name=\"com.acme.faces.**\"> 2 <if-class-not-available name=\"javax.faces.context.FacesContext\"/> </exclude> <exclude name=\"com.acme.verbose.*\"> 3 <if-system-property name=\"verbosity\" value=\"low\"/> </exclude> <exclude name=\"com.acme.ejb.**\"> 4 <if-class-available name=\"javax.enterprise.inject.Model\"/> <if-system-property name=\"exclude-ejbs\"/> </exclude> </scan> </beans>", "public class TranslatingWelcome extends Welcome { @Inject Translator translator; public String buildPhrase(String city) { return translator.translate(\"Welcome to \" + city + \"!\"); } }", "@Qualifier @Retention(RUNTIME) @Target({TYPE, METHOD, FIELD, PARAMETER}) public @interface Synchronous {}", "@Qualifier @Retention(RUNTIME) @Target({TYPE, METHOD, FIELD, PARAMETER}) public @interface Asynchronous {}", "@Synchronous public class SynchronousPaymentProcessor implements PaymentProcessor { public void process(Payment payment) { ... } }", "@Asynchronous public class AsynchronousPaymentProcessor implements PaymentProcessor { public void process(Payment payment) { ... } }", "import javax.enterprise.inject.Instance; @Inject void initServices(@Any Instance<Service> services) { for (Service service: services) { service.init(); } }", "@Inject @Any Event<User> anyUserEvent;", "@Inject @Delegate @Any Logger logger;", "public class Greeter { private Welcome welcome; @Inject void init(Welcome welcome) { this.welcome = welcome; } }", "@Qualifier @Retention(RUNTIME) @Target({TYPE,METHOD,FIELD,PARAMETERS}) public @interface Translating{}", "@Translating public class TranslatingWelcome extends Welcome { @Inject Translator translator; public String buildPhrase(String city) { return translator.translate(\"Welcome to \" + city + \"!\"); } }", "public class Greeter { private Welcome welcome; @Inject void init(@Translating Welcome welcome) { this.welcome = welcome; } public void welcomeVisitors() { System.out.println(welcome.buildPhrase(\"San Francisco\")); } }", "@Vetoed public class SimpleGreeting implements Greeting { }", "@Vetoed package org.sample.beans; import javax.enterprise.inject.Vetoed;", "public class TranslateController { @Inject TextTranslator textTranslator;", "// in TranslateController class public void translate() { translation = textTranslator.translate(inputText); }", "public class TextTranslator { private SentenceParser sentenceParser; private Translator sentenceTranslator; @Inject TextTranslator(SentenceParser sentenceParser, Translator sentenceTranslator) { this.sentenceParser = sentenceParser; this.sentenceTranslator = sentenceTranslator; } // Methods of the TextTranslator class }", "@Inject Instance<TextTranslator> textTranslatorInstance; public void translate() { textTranslatorInstance.get().translate(inputText); }", "@Named(\"greeter\") public class GreeterBean { private Welcome welcome; @Inject void init (Welcome welcome) { this.welcome = welcome; } public void welcomeVisitors() { System.out.println(welcome.buildPhrase(\"San Francisco\")); } }", "<h:form> <h:commandButton value=\"Welcome visitors\" action=\"#{greeter.welcomeVisitors}\"/> </h:form>", "@RequestScoped 
@Named(\"greeter\") public class GreeterBean { private Welcome welcome; private String city; // getter & setter not shown @Inject void init(Welcome welcome) { this.welcome = welcome; } public void welcomeVisitors() { System.out.println(welcome.buildPhrase(city)); } }", "<h:form> <h:inputText value=\"#{greeter.city}\"/> <h:commandButton value=\"Welcome visitors\" action=\"#{greeter.welcomeVisitors}\"/> </h:form>", "@SessionScoped public class Preferences implements Serializable { private PaymentStrategyType paymentStrategy; @Produces @Preferred public PaymentStrategy getPaymentStrategy() { switch (paymentStrategy) { case CREDIT_CARD: return new CreditCardPaymentStrategy(); case CHECK: return new CheckPaymentStrategy(); default: return null; } } }", "@Inject @Preferred PaymentStrategy paymentStrategy;", "@Produces @Preferred @SessionScoped public PaymentStrategy getPaymentStrategy() { }", "@Produces @Preferred @SessionScoped public PaymentStrategy getPaymentStrategy(CreditCardPaymentStrategy ccps, CheckPaymentStrategy cps ) { switch (paymentStrategy) { case CREDIT_CARD: return ccps; case CHEQUE: return cps; default: return null; } }", "@Alternative @Synchronous @Asynchronous public class MockPaymentProcessor implements PaymentProcessor { public void process(Payment payment) { ... } }", "<beans xmlns=\"http://xmlns.jcp.org/xml/ns/javaee\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\" http://xmlns.jcp.org/xml/ns/javaee http://xmlns.jcp.org/xml/ns/javaee/beans_2_0.xsd\"> <alternatives> <class>org.mycompany.mock.MockPaymentProcessor</class> </alternatives> </beans>", "@Alternative @Translating public class MockTranslatingWelcome extends Welcome { public String buildPhrase(string city) { return \"Bienvenue A \" + city + \"!\"); } }", "<beans> <alternatives> <class>com.acme.MockTranslatingWelcome</class> </alternatives> </beans>", "@Secure @Transactional @RequestScoped @Named public class AccountManager { public boolean transfer(Account a, Account b) { } }", "@Secure @Transactional @RequestScoped @Named @Stereotype @Retention(RUNTIME) @Target(TYPE) public @interface BusinessComponent { }", "@BusinessComponent public class AccountManager { public boolean transfer(Account a, Account b) { } }", "public class AccountManager { @Inject Event<Withdrawal> event; public boolean transfer(Account a, Account b) { event.fire(new Withdrawal(a)); } }", "public class AccountManager { @Inject @Suspicious Event <Withdrawal> event; public boolean transfer(Account a, Account b) { event.fire(new Withdrawal(a)); } }", "public class AccountObserver { void checkTran(@Observes Withdrawal w) { } }", "public class AccountObserver { void checkTran(@Observes @Suspicious Withdrawal w) { } }", "public void refreshCategoryTree(@Observes(during = AFTER_SUCCESS) CategoryUpdateEvent event) { ... 
}", "import javax.ejb.Singleton; import javax.enterprise.inject.Produces; @ApplicationScoped @Singleton public class Catalog { @PersistenceContext EntityManager em; List<Product> products; @Produces @Catalog List<Product> getCatalog() { if (products==null) { products = em.createQuery(\"select p from Product p where p.deleted = false\") .getResultList(); } return products; } }", "import javax.enterprise.event.Event; @Stateless public class ProductManager { @PersistenceContext EntityManager em; @Inject @Any Event<Product> productEvent; public void delete(Product product) { em.delete(product); productEvent.select(new AnnotationLiteral<Deleted>(){}).fire(product); } public void persist(Product product) { em.persist(product); productEvent.select(new AnnotationLiteral<Created>(){}).fire(product); } }", "import javax.ejb.Singleton; @ApplicationScoped @Singleton public class Catalog { void addProduct(@Observes(during = AFTER_SUCCESS) @Created Product product) { products.add(product); } void removeProduct(@Observes(during = AFTER_SUCCESS) @Deleted Product product) { products.remove(product); } }", "<beans xmlns=\"http://xmlns.jcp.org/xml/ns/javaee\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\" http://xmlns.jcp.org/xml/ns/javaee http://xmlns.jcp.org/xml/ns/javaee/beans_2.0.xsd\"> <interceptors> <class>org.mycompany.myapp.TransactionInterceptor</class> </interceptors> </beans>", "@InterceptorBinding @Retention(RUNTIME) @Target({TYPE, METHOD}) public @interface Secure {}", "@Secure @Interceptor public class SecurityInterceptor { @AroundInvoke public Object aroundInvoke(InvocationContext ctx) throws Exception { // enforce security return ctx.proceed(); } }", "@Secure public class AccountManager { public boolean transfer(Account a, Account b) { } }", "<beans> <interceptors> <class>com.acme.SecurityInterceptor</class> <class>com.acme.TransactionInterceptor</class> </interceptors> </beans>", "<beans xmlns=\"http://xmlns.jcp.org/xml/ns/javaee\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\" http://xmlns.jcp.org/xml/ns/javaee http://xmlns.jcp.org/xml/ns/javaee/beans_2_0.xsd\"> <decorators> <class>org.mycompany.myapp.LargeTransactionDecorator</class> </decorators> </beans>", "@Decorator public abstract class LargeTransactionDecorator implements Account { @Inject @Delegate @Any Account account; @PersistenceContext EntityManager em; public void withdraw(BigDecimal amount) { } public void deposit(BigDecimal amount); } }", "@ConversationScoped class PaymentProcessor { public void processPayment(int amount) { System.out.println(\"I'm taking USD\" + amount); } } @ApplicationScoped public class Shop { @Inject PaymentProcessor paymentProcessor; public void buyStuff() { paymentProcessor.processPayment(100); } }" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/development_guide/contexts_and_dependency_injection
probe::nfs.fop.write
probe::nfs.fop.write Name probe::nfs.fop.write - NFS client write operation Synopsis nfs.fop.write Values devname block device name Description SystemTap uses the vfs.do_sync_write probe to implement this probe and as a result will get operations other than the NFS client write operations.
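A minimal usage sketch (the script name and output format are illustrative) that prints the devname value supplied by this probe; because the probe is implemented on vfs.do_sync_write, the output can also include writes that are not NFS client writes:
# nfs_fop_write.stp -- run with: stap nfs_fop_write.stp
probe nfs.fop.write {
  printf("write operation on device %s\n", devname)
}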
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-nfs-fop-write
16.4. Sub-Collections
16.4. Sub-Collections 16.4.1. Statistics Sub-Collection Each floating disk exposes a statistics sub-collection for disk-specific statistics. Each statistic contains the following elements: Table 16.2. Elements for virtual disk statistics Element Type Description name string The unique identifier for the statistic entry. description string A plain text description of the statistic. unit string The unit or rate to measure the statistical values. type One of GAUGE or COUNTER The type of statistic measures. values type= One of INTEGER or DECIMAL The data type for the statistical values that follow. value complex A data set that contains datum . datum see values type An individual piece of data from a value . disk id= relationship A relationship to the containing disk resource. The following table lists the statistic types for floating disks. Table 16.3. Disk statistic types Name Description data.current.read The data transfer rate in bytes per second when reading from the disk. data.current.write The data transfer rate in bytes per second when writing to the disk. Example 16.3. An XML representation of a virtual machine's statistics sub-collection Note This statistics sub-collection is read-only.
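A hedged sketch of reading this sub-collection with curl (the hostname, credentials, and disk ID are placeholders; certificate handling is omitted). Because the sub-collection is read-only, only GET requests apply:
# Retrieve the statistics sub-collection for a floating disk
curl -X GET \
  -H "Accept: application/xml" \
  -u "admin@internal:password" \
  "https://rhvm.example.com/ovirt-engine/api/disks/<disk_id>/statistics"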
[ "<statistics> <statistic id=\"33b9212b-f9cb-3fd0-b364-248fb61e1272\" href=\"/ovirt-engine/api/disks/f28ec14c-fc85-43e1-818d-96b49d50e27b/statistics/ 33b9212b-f9cb-3fd0-b364-248fb61e1272\"> <name>data.current.read</name> <description>Read data rate</description> <values type=\"DECIMAL\"> <value> <datum>0</datum> </value> </values> <type>GAUGE</type> <unit>BYTES_PER_SECOND</unit> <disk id=\"f28ec14c-fc85-43e1-818d-96b49d50e27b\" href=\"/ovirt-engine/api/disks/f28ec14c-fc85-43e1-818d-96b49d50e27b\"/> </statistic> </statistics>" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/version_3_rest_api_guide/sect-sub-collections2
function::delete_stopwatch
function::delete_stopwatch Name function::delete_stopwatch - Remove an existing stopwatch Synopsis Arguments name the stopwatch name Description Remove stopwatch name .
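A small sketch showing delete_stopwatch together with the related stopwatch functions (the stopwatch name and the five-second timer are illustrative):
# stopwatch_demo.stp -- run with: stap stopwatch_demo.stp
probe begin {
  start_stopwatch("demo")
}
probe timer.s(5) {
  printf("elapsed: %d ms\n", read_stopwatch_ms("demo"))
  delete_stopwatch("demo")   # remove the stopwatch once it is no longer needed
  exit()
}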
[ "delete_stopwatch(name:string)" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-delete-stopwatch
Chapter 12. Managing machines with the Cluster API
Chapter 12. Managing machines with the Cluster API Important Managing machines with the Cluster API is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The Cluster API is an upstream project that is integrated into OpenShift Container Platform as a Technology Preview for Amazon Web Services (AWS) and Google Cloud Platform (GCP) clusters. You can use the Cluster API to create and manage machine sets and machines in your OpenShift Container Platform cluster. This capability is in addition or an alternative to managing machines with the Machine API. For OpenShift Container Platform 4.11 clusters, you can use the Cluster API to perform node host provisioning management actions after the cluster installation finishes. This system enables an elastic, dynamic provisioning method on top of public or private cloud infrastructure. With the Cluster API Technology Preview, you can create compute machines and machine sets on OpenShift Container Platform clusters for supported providers. You can also explore the features that are enabled by this implementation that might not be available with the Machine API. Benefits By using the Cluster API, OpenShift Container Platform users and developers are able to realize the following advantages: The option to use upstream community Cluster API infrastructure providers which might not be supported by the Machine API. The opportunity to collaborate with third parties who maintain machine controllers for infrastructure providers. The ability to use the same set of Kubernetes tools for infrastructure management in OpenShift Container Platform. The ability to create machine sets using the Cluster API that support features that are not available with the Machine API. Limitations Using the Cluster API to manage machines is a Technology Preview feature and has the following limitations: Only AWS and GCP clusters are supported. To use this feature, you must enable the TechPreviewNoUpgrade feature set . Enabling this feature set cannot be undone and prevents minor version updates. You must create the primary resources that the Cluster API requires manually. Control plane machines cannot be managed by the Cluster API. Migration of existing machine sets created by the Machine API to Cluster API machine sets is not supported. Full feature parity with the Machine API is not available. 12.1. Cluster API architecture The OpenShift Container Platform integration of the upstream Cluster API is implemented and managed by the Cluster CAPI Operator. The Cluster CAPI Operator and its operands are provisioned in the openshift-cluster-api namespace, in contrast to the Machine API, which uses the openshift-machine-api namespace. 12.1.1. The Cluster CAPI Operator The Cluster CAPI Operator is an OpenShift Container Platform Operator that maintains the lifecycle of Cluster API resources. This Operator is responsible for all administrative tasks related to deploying the Cluster API project within an OpenShift Container Platform cluster. 
If a cluster is configured correctly to allow the use of the Cluster API, the Cluster CAPI Operator installs the Cluster API Operator on the cluster. Note The Cluster CAPI Operator is distinct from the upstream Cluster API Operator. For more information, see the entry for the Cluster CAPI Operator in the Cluster Operators reference content. 12.1.2. Primary resources The Cluster API is comprised of the following primary resources. For the Technology Preview of this feature, you must create these resources manually in the openshift-cluster-api namespace. Cluster A fundamental unit that represents a cluster that is managed by the Cluster API. Infrastructure A provider-specific resource that defines properties that are shared by all the machine sets in the cluster, such as the region and subnets. Machine template A provider-specific template that defines the properties of the machines that a machine set creates. Machine set A group of machines. Machine sets are to machines as replica sets are to pods. If you need more machines or must scale them down, you change the replicas field on the machine set to meet your compute needs. With the Cluster API, a machine set references a Cluster object and a provider-specific machine template. Machine A fundamental unit that describes the host for a node. The Cluster API creates machines based on the configuration in the machine template. Additional resources Cluster CAPI Operator 12.2. Sample YAML files For the Cluster API Technology Preview, you must create the primary resources that the Cluster API requires manually. The example YAML files in this section demonstrate how to make these resources work together and configure settings for the machines that they create that are appropriate for your environment. 12.2.1. Sample YAML for a Cluster API cluster resource The cluster resource defines the name and infrastructure provider for the cluster and is managed by the Cluster API. This resource has the same structure for all providers. apiVersion: cluster.x-k8s.io/v1beta1 kind: Cluster metadata: name: <cluster_name> 1 namespace: openshift-cluster-api spec: infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: <infrastructure_kind> 2 name: <cluster_name> 3 namespace: openshift-cluster-api 1 3 Specify the name of the cluster. 2 Specify the infrastructure kind for the cluster. Valid values are: AWSCluster : The cluster is running on Amazon Web Services (AWS). GCPCluster : The cluster is running on Google Cloud Platform (GCP). The remaining Cluster API resources are provider-specific. Refer to the example YAML files for your cluster: Sample YAML files for configuring Amazon Web Services clusters Sample YAML files for configuring Google Cloud Platform clusters 12.2.2. Sample YAML files for configuring Amazon Web Services clusters Some Cluster API resources are provider-specific. The example YAML files in this section show configurations for an Amazon Web Services (AWS) cluster. 12.2.2.1. Sample YAML for a Cluster API infrastructure resource on Amazon Web Services The infrastructure resource is provider-specific and defines properties that are shared by all the machine sets in the cluster, such as the region and subnets. The machine set references this resource when creating machines. apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: AWSCluster 1 metadata: name: <cluster_name> 2 namespace: openshift-cluster-api spec: region: <region> 3 1 Specify the infrastructure kind for the cluster. 
This value must match the value for your platform. 2 Specify the name of the cluster. 3 Specify the AWS region. 12.2.2.2. Sample YAML for a Cluster API machine template resource on Amazon Web Services The machine template resource is provider-specific and defines the basic properties of the machines that a machine set creates. The machine set references this template when creating machines. apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4 kind: AWSMachineTemplate 1 metadata: name: <template_name> 2 namespace: openshift-cluster-api spec: template: spec: 3 uncompressedUserData: true iamInstanceProfile: .... instanceType: m5.large cloudInit: insecureSkipSecretsManager: true ami: id: .... subnet: filters: - name: tag:Name values: - ... additionalSecurityGroups: - filters: - name: tag:Name values: - ... 1 Specify the machine template kind. This value must match the value for your platform. 2 Specify a name for the machine template. 3 Specify the details for your environment. The values here are examples. 12.2.2.3. Sample YAML for a Cluster API machine set resource on Amazon Web Services The machine set resource defines additional properties of the machines that it creates. The machine set also references the infrastructure resource and machine template when creating machines. apiVersion: cluster.x-k8s.io/v1alpha4 kind: MachineSet metadata: name: <machine_set_name> 1 namespace: openshift-cluster-api spec: clusterName: <cluster_name> 2 replicas: 1 selector: matchLabels: test: example template: metadata: labels: test: example spec: bootstrap: dataSecretName: worker-user-data 3 clusterName: <cluster_name> 4 infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4 kind: AWSMachineTemplate 5 name: <cluster_name> 6 1 Specify a name for the machine set. 2 4 6 Specify the name of the cluster. 3 For the Cluster API Technology Preview, the Operator can use the worker user data secret from openshift-machine-api namespace. 5 Specify the machine template kind. This value must match the value for your platform. 12.2.3. Sample YAML files for configuring Google Cloud Platform clusters Some Cluster API resources are provider-specific. The example YAML files in this section show configurations for a Google Cloud Platform (GCP) cluster. 12.2.3.1. Sample YAML for a Cluster API infrastructure resource on Google Cloud Platform The infrastructure resource is provider-specific and defines properties that are shared by all the machine sets in the cluster, such as the region and subnets. The machine set references this resource when creating machines. apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: GCPCluster 1 metadata: name: <cluster_name> 2 spec: network: name: <cluster_name>-network 3 project: <project> 4 region: <region> 5 1 Specify the infrastructure kind for the cluster. This value must match the value for your platform. 2 3 Specify the name of the cluster. 4 Specify the GCP project name. 5 Specify the GCP region. 12.2.3.2. Sample YAML for a Cluster API machine template resource on Google Cloud Platform The machine template resource is provider-specific and defines the basic properties of the machines that a machine set creates. The machine set references this template when creating machines. 
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: GCPMachineTemplate 1 metadata: name: <template_name> 2 namespace: openshift-cluster-api spec: template: spec: 3 rootDeviceType: pd-ssd rootDeviceSize: 128 instanceType: n1-standard-4 image: projects/rhcos-cloud/global/images/rhcos-411-85-202203181601-0-gcp-x86-64 subnet: <cluster_name>-worker-subnet serviceAccounts: email: <service_account_email_address> scopes: - https://www.googleapis.com/auth/cloud-platform additionalLabels: kubernetes-io-cluster-<cluster_name>: owned additionalNetworkTags: - <cluster_name>-worker ipForwarding: Disabled 1 Specify the machine template kind. This value must match the value for your platform. 2 Specify a name for the machine template. 3 Specify the details for your environment. The values here are examples. 12.2.3.3. Sample YAML for a Cluster API machine set resource on Google Cloud Platform The machine set resource defines additional properties of the machines that it creates. The machine set also references the infrastructure resource and machine template when creating machines. apiVersion: cluster.x-k8s.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> 1 namespace: openshift-cluster-api spec: clusterName: <cluster_name> 2 replicas: 1 selector: matchLabels: test: test template: metadata: labels: test: test spec: bootstrap: dataSecretName: worker-user-data 3 clusterName: <cluster_name> 4 infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: GCPMachineTemplate 5 name: <machine_set_name> 6 failureDomain: <failure_domain> 7 1 6 Specify a name for the machine set. 2 4 Specify the name of the cluster. 3 For the Cluster API Technology Preview, the Operator can use the worker user data secret from openshift-machine-api namespace. 5 Specify the machine template kind. This value must match the value for your platform. 7 Specify the failure domain within the GCP region. 12.3. Creating a Cluster API machine set You can create machine sets that use the Cluster API to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Enable the use of the Cluster API. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure Create a YAML file that contains the cluster custom resource (CR) and is named <cluster_resource_file>.yaml . If you are not sure which value to set for the <cluster_name> parameter, you can check the value for an existing Machine API machine set in your cluster. To list the Machine API machine sets, run the following command: USD oc get machinesets -n openshift-machine-api 1 1 Specify the openshift-machine-api namespace. Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To display the contents of a specific machine set CR, run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api \ -o yaml Example output ... template: metadata: labels: machine.openshift.io/cluster-api-cluster: agl030519-vplxk 1 machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: agl030519-vplxk-worker-us-east-1a ... 
1 The cluster ID, which you use for the <cluster_name> parameter. Create the cluster CR by running the following command: USD oc create -f <cluster_resource_file>.yaml Verification To confirm that the cluster CR is created, run the following command: USD oc get cluster Example output NAME PHASE AGE VERSION <cluster_name> Provisioning 4h6m Create a YAML file that contains the infrastructure CR and is named <infrastructure_resource_file>.yaml . Create the infrastructure CR by running the following command: USD oc create -f <infrastructure_resource_file>.yaml Verification To confirm that the infrastructure CR is created, run the following command: USD oc get <infrastructure_kind> where <infrastructure_kind> is the value that corresponds to your platform. Example output NAME CLUSTER READY VPC BASTION IP <cluster_name> <cluster_name> true Create a YAML file that contains the machine template CR and is named <machine_template_resource_file>.yaml . Create the machine template CR by running the following command: USD oc create -f <machine_template_resource_file>.yaml Verification To confirm that the machine template CR is created, run the following command: USD oc get <machine_template_kind> where <machine_template_kind> is the value that corresponds to your platform. Example output NAME AGE <template_name> 77m Create a YAML file that contains the machine set CR and is named <machine_set_resource_file>.yaml . Create the machine set CR by running the following command: USD oc create -f <machine_set_resource_file>.yaml Verification To confirm that the machine set CR is created, run the following command: USD oc get machineset -n openshift-cluster-api 1 1 Specify the openshift-cluster-api namespace. Example output NAME CLUSTER REPLICAS READY AVAILABLE AGE VERSION <machine_set_name> <cluster_name> 1 1 1 17m When the new machine set is available, the REPLICAS and AVAILABLE values match. If the machine set is not available, wait a few minutes and run the command again. Verification To verify that the machine set is creating machines according to your desired configuration, you can review the lists of machines and nodes in the cluster. To view the list of Cluster API machines, run the following command: USD oc get machine -n openshift-cluster-api 1 1 Specify the openshift-cluster-api namespace. Example output NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION <machine_set_name>-<string_id> <cluster_name> <ip_address>.<region>.compute.internal <provider_id> Running 8m23s To view the list of nodes, run the following command: USD oc get node Example output NAME STATUS ROLES AGE VERSION <ip_address_1>.<region>.compute.internal Ready worker 5h14m v1.24.0+284d62a <ip_address_2>.<region>.compute.internal Ready master 5h19m v1.24.0+284d62a <ip_address_3>.<region>.compute.internal Ready worker 7m v1.24.0+284d62a 12.4. Troubleshooting clusters that use the Cluster API Use the information in this section to understand and recover from issues you might encounter. Generally, troubleshooting steps for problems with the Cluster API are similar to those steps for problems with the Machine API. The Cluster CAPI Operator and its operands are provisioned in the openshift-cluster-api namespace, whereas the Machine API uses the openshift-machine-api namespace. When using oc commands that reference a namespace, be sure to reference the correct one. 12.4.1. CLI commands return Cluster API machines For clusters that use the Cluster API, oc commands such as oc get machine return results for Cluster API machines. 
Because the letter c precedes the letter m alphabetically, Cluster API machines appear in the return before Machine API machines do. To list only Machine API machines, use the fully qualified name machines.machine.openshift.io when running the oc get machine command: USD oc get machines.machine.openshift.io To list only Cluster API machines, use the fully qualified name machines.cluster.x-k8s.io when running the oc get machine command: USD oc get machines.cluster.x-k8s.io
[ "apiVersion: cluster.x-k8s.io/v1beta1 kind: Cluster metadata: name: <cluster_name> 1 namespace: openshift-cluster-api spec: infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: <infrastructure_kind> 2 name: <cluster_name> 3 namespace: openshift-cluster-api", "apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: AWSCluster 1 metadata: name: <cluster_name> 2 namespace: openshift-cluster-api spec: region: <region> 3", "apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4 kind: AWSMachineTemplate 1 metadata: name: <template_name> 2 namespace: openshift-cluster-api spec: template: spec: 3 uncompressedUserData: true iamInstanceProfile: . instanceType: m5.large cloudInit: insecureSkipSecretsManager: true ami: id: . subnet: filters: - name: tag:Name values: - additionalSecurityGroups: - filters: - name: tag:Name values: -", "apiVersion: cluster.x-k8s.io/v1alpha4 kind: MachineSet metadata: name: <machine_set_name> 1 namespace: openshift-cluster-api spec: clusterName: <cluster_name> 2 replicas: 1 selector: matchLabels: test: example template: metadata: labels: test: example spec: bootstrap: dataSecretName: worker-user-data 3 clusterName: <cluster_name> 4 infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4 kind: AWSMachineTemplate 5 name: <cluster_name> 6", "apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: GCPCluster 1 metadata: name: <cluster_name> 2 spec: network: name: <cluster_name>-network 3 project: <project> 4 region: <region> 5", "apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: GCPMachineTemplate 1 metadata: name: <template_name> 2 namespace: openshift-cluster-api spec: template: spec: 3 rootDeviceType: pd-ssd rootDeviceSize: 128 instanceType: n1-standard-4 image: projects/rhcos-cloud/global/images/rhcos-411-85-202203181601-0-gcp-x86-64 subnet: <cluster_name>-worker-subnet serviceAccounts: email: <service_account_email_address> scopes: - https://www.googleapis.com/auth/cloud-platform additionalLabels: kubernetes-io-cluster-<cluster_name>: owned additionalNetworkTags: - <cluster_name>-worker ipForwarding: Disabled", "apiVersion: cluster.x-k8s.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> 1 namespace: openshift-cluster-api spec: clusterName: <cluster_name> 2 replicas: 1 selector: matchLabels: test: test template: metadata: labels: test: test spec: bootstrap: dataSecretName: worker-user-data 3 clusterName: <cluster_name> 4 infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: GCPMachineTemplate 5 name: <machine_set_name> 6 failureDomain: <failure_domain> 7", "oc get machinesets -n openshift-machine-api 1", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "template: metadata: labels: machine.openshift.io/cluster-api-cluster: agl030519-vplxk 1 machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: agl030519-vplxk-worker-us-east-1a", "oc create -f <cluster_resource_file>.yaml", "oc get cluster", "NAME PHASE AGE VERSION <cluster_name> Provisioning 4h6m", "oc create -f <infrastructure_resource_file>.yaml", "oc get <infrastructure_kind>", "NAME CLUSTER READY VPC BASTION IP 
<cluster_name> <cluster_name> true", "oc create -f <machine_template_resource_file>.yaml", "oc get <machine_template_kind>", "NAME AGE <template_name> 77m", "oc create -f <machine_set_resource_file>.yaml", "oc get machineset -n openshift-cluster-api 1", "NAME CLUSTER REPLICAS READY AVAILABLE AGE VERSION <machine_set_name> <cluster_name> 1 1 1 17m", "oc get machine -n openshift-cluster-api 1", "NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION <machine_set_name>-<string_id> <cluster_name> <ip_address>.<region>.compute.internal <provider_id> Running 8m23s", "oc get node", "NAME STATUS ROLES AGE VERSION <ip_address_1>.<region>.compute.internal Ready worker 5h14m v1.24.0+284d62a <ip_address_2>.<region>.compute.internal Ready master 5h19m v1.24.0+284d62a <ip_address_3>.<region>.compute.internal Ready worker 7m v1.24.0+284d62a", "oc get machines.machine.openshift.io", "oc get machines.cluster.x-k8s.io" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/machine_management/capi-machine-management
3.3. Monitoring Resources
3.3. Monitoring Resources To ensure that resources remain healthy, you can add a monitoring operation to a resource's definition. If you do not specify a monitoring operation for a resource, by default the pcs command will create a monitoring operation, with an interval that is determined by the resource agent. If the resource agent does not provide a default monitoring interval, the pcs command will create a monitoring operation with an interval of 60 seconds.
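For example, a minimal sketch (the resource name, agent, and interval values are illustrative):
# Create a resource with an explicit monitoring operation that runs every 30 seconds
pcs resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.120 cidr_netmask=24 op monitor interval=30s

# Change the monitoring interval on an existing resource
pcs resource update VirtualIP op monitor interval=60s
If the op monitor clause is omitted, the defaults described above apply.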
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_overview/s1-resourcemonitor-haao
Chapter 2. Using Ansible roles to automate repetitive tasks on clients
Chapter 2. Using Ansible roles to automate repetitive tasks on clients 2.1. Assigning Ansible roles to an existing host You can use Ansible roles for remote management of Satellite clients. Prerequisites Ensure that you have configured and imported Ansible roles. Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Select the host and click Edit . On the Ansible Roles tab, select the role that you want to add from the Available Ansible Roles list. Click the + icon to add the role to the host. You can add more than one role. Click Submit . After you assign Ansible roles to hosts, you can use Ansible for remote execution. For more information, see Section 4.13, "Distributing SSH keys for remote execution" . Overriding parameter variables On the Parameters tab, click Add Parameter to add any parameter variables that you want to pass to job templates at run time. This includes all Ansible playbook parameters and host parameters that you want to associate with the host. To use a parameter variable with an Ansible job template, you must add a Host Parameter . 2.2. Removing Ansible roles from a host Use the following procedure to remove Ansible roles from a host. Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Select the host and click Edit . Select the Ansible Roles tab. In the Assigned Ansible Roles area, click the - icon to remove the role from the host. Repeat to remove more roles. Click Submit . 2.3. Changing the order of Ansible roles Use the following procedure to change the order of Ansible roles applied to a host. Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Select a host. Select the Ansible Roles tab. In the Assigned Ansible Roles area, you can change the order of the roles by dragging and dropping the roles into the preferred position. Click Submit to save the order of the Ansible roles. 2.4. Running Ansible roles on a host You can run Ansible roles on a host through the Satellite web UI. Prerequisites You must configure your deployment to run Ansible roles. For more information, see Section 1.2, "Configuring your Satellite to run Ansible roles" . You must have assigned the Ansible roles to the host. Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Select the checkbox of the host that contains the Ansible role you want to run. From the Select Action list, select Run all Ansible roles . You can view the status of your Ansible job on the Run Ansible roles page. To rerun a job, click Rerun . 2.5. Assigning Ansible roles to a host group You can use Ansible roles for remote management of Satellite clients. Prerequisites You must configure your deployment to run Ansible roles. For more information, see Section 1.2, "Configuring your Satellite to run Ansible roles" . Procedure In the Satellite web UI, navigate to Configure > Host Groups . Click the host group name to which you want to assign an Ansible role. On the Ansible Roles tab, select the role that you want to add from the Available Ansible Roles list. Click the + icon to add the role to the host group. You can add more than one role. Click Submit . 2.6. Running Ansible roles on a host group You can run Ansible roles on a host group through the Satellite web UI. Prerequisites You must configure your deployment to run Ansible roles. For more information, see Section 1.2, "Configuring your Satellite to run Ansible roles" . You must have assigned the Ansible roles to the host group. You must have at least one host in your host group. 
Procedure In the Satellite web UI, navigate to Configure > Host Groups . From the list in the Actions column for the host group, select Run all Ansible roles . You can view the status of your Ansible job on the Run Ansible roles page. Click Rerun to rerun a job. 2.7. Running Ansible roles in check mode You can run Ansible roles in check mode through the Satellite web UI. Prerequisites You must configure your deployment to run Ansible roles. For more information, see Section 1.2, "Configuring your Satellite to run Ansible roles" . You must have assigned the Ansible roles to the host group. You must have at least one host in your host group. Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Click Edit for the host you want to enable check mode for. In the Parameters tab, ensure that the host has a parameter named ansible_roles_check_mode with type boolean set to true . Click Submit .
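As an alternative to the web UI step above, the host parameter can also be set from the command line. The following is a hedged sketch that assumes the hammer CLI is configured; the --parameter-type option may not be available in every Satellite release, in which case set the parameter type to boolean in the web UI as described above:
# Hypothetical host name; adjust to your client
hammer host set-parameter --host client.example.com \
  --name ansible_roles_check_mode --parameter-type boolean --value true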
null
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/managing_configurations_using_ansible_integration/Using_Ansible_Roles_to_Automate_Repetitive_Tasks_on_Clients_ansible
Upgrading
Upgrading Red Hat OpenShift Service on AWS 4 Understanding upgrading options for Red Hat OpenShift Service on AWS Red Hat OpenShift Documentation Team
[ "rosa describe cluster --cluster=<cluster_name_or_id> 1", "rosa list upgrade --cluster=<cluster_name_or_id>", "VERSION NOTES 4.14.8 recommended 4.14.7 4.14.6", "rosa upgrade cluster -c <cluster_name_or_id> --control-plane [--schedule-date=<yyyy-mm-dd> --schedule-time=<HH:mm>] --version <version_number>", "rosa upgrade cluster -c <cluster_name_or_id> --control-plane --version <version_number>", "rosa upgrade cluster -c <cluster_name_or_id> --control-plane --schedule-date=<yyyy-mm-dd> --schedule-time=<HH:mm> --version=<version_number>", "rosa describe cluster --cluster=<cluster_name_or_id> 1", "OpenShift Version: 4.14.0", "rosa list upgrade --cluster <cluster-name> --machinepool <machinepool_name>", "VERSION NOTES 4.14.5 recommended 4.14.4 4.14.3", "rosa describe machinepool --cluster=<cluster_name_or_id> <machinepool_name>", "Replicas: 5 Node drain grace period: 30 minutes Management upgrade: - Type: Replace - Max surge: 20% - Max unavailable: 20%", "rosa upgrade machinepool -c <cluster_name> <machinepool_name> [--schedule-date=<yyyy-mm-dd> --schedule-time=<HH:mm>] --version <version_number>", "rosa upgrade machinepool -c <cluster_name> <machinepool_name> --version <version_number>", "rosa upgrade machinepool -c <cluster_name> <machinepool_name> --schedule-date=<yyyy-mm-dd> --schedule-time=<HH:mm> --version <version_number>", "rosa describe cluster --cluster=<cluster_name|cluster_id> 1", "rosa list upgrade --cluster=<cluster_name|cluster_id>", "rosa upgrade cluster --cluster=<cluster_name|cluster_id> --version <version-id>", "rosa upgrade cluster --cluster=<cluster_name|cluster_id> --version <version-id> --schedule-date yyyy-mm-dd --schedule-time HH:mm", "rosa upgrade cluster --cluster=<cluster_name|cluster_id> --version <version-id> --node-drain-grace-period 15 minutes", "rosa list upgrade --cluster=<cluster_name|cluster_id>", "VERSION NOTES 4.15.14 recommended - scheduled for 2024-06-02 15:00 UTC 4.15.13", "rosa list upgrades --cluster=<cluster_name|cluster_id>", "VERSION NOTES 4.15.14 recommended - scheduled for 2024-06-02 15:00 UTC 4.15.13", "rosa delete upgrade --cluster=<cluster_name|cluster_id>", "I: Successfully canceled scheduled upgrade on cluster 'my-cluster'" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html-single/upgrading/index
7.224. vim
7.224. vim 7.224.1. RHBA-2015:1310 - vim bug fix and enhancement update Updated vim packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. Vim (Vi IMproved) is an updated and improved version of the vi editor. Note The vim packages have been upgraded to upstream version 7.4, which provides a number of bug fixes and enhancements over the previous version. (BZ# 820331 , BZ# 893239 , BZ# 1083924 , BZ# 1112441 , BZ# 1201834 , BZ# 1202897 , BZ# 1204179 ) Users of vim are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-vim
Chapter 50. mapping
Chapter 50. mapping This chapter describes the commands under the mapping command. 50.1. mapping create Create new mapping Usage: Table 50.1. Positional Arguments Value Summary <name> New mapping name (must be unique) Table 50.2. Optional Arguments Value Summary -h, --help Show this help message and exit --rules <filename> Filename that contains a set of mapping rules (required) Table 50.3. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 50.4. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 50.5. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 50.6. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 50.2. mapping delete Delete mapping(s) Usage: Table 50.7. Positional Arguments Value Summary <mapping> Mapping(s) to delete Table 50.8. Optional Arguments Value Summary -h, --help Show this help message and exit 50.3. mapping list List mappings Usage: Table 50.9. Optional Arguments Value Summary -h, --help Show this help message and exit Table 50.10. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 50.11. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 50.12. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 50.13. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 50.4. mapping set Set mapping properties Usage: Table 50.14. Positional Arguments Value Summary <name> Mapping to modify Table 50.15. Optional Arguments Value Summary -h, --help Show this help message and exit --rules <filename> Filename that contains a new set of mapping rules 50.5. mapping show Display mapping details Usage: Table 50.16. Positional Arguments Value Summary <mapping> Mapping to display Table 50.17. Optional Arguments Value Summary -h, --help Show this help message and exit Table 50.18. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 50.19. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 50.20. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 50.21. 
Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
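A worked sketch that ties the create and show subcommands together (the mapping name and rules file below are illustrative; the rules use the standard Keystone federation mapping syntax):
# Write a minimal rules file that maps the remote REMOTE_USER attribute to a local user name
cat > rules.json <<'EOF'
[
    {
        "local": [
            { "user": { "name": "{0}" } }
        ],
        "remote": [
            { "type": "REMOTE_USER" }
        ]
    }
]
EOF

openstack mapping create --rules rules.json my_idp_mapping
openstack mapping show my_idp_mapping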
[ "openstack mapping create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] --rules <filename> <name>", "openstack mapping delete [-h] <mapping> [<mapping> ...]", "openstack mapping list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN]", "openstack mapping set [-h] [--rules <filename>] <name>", "openstack mapping show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <mapping>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/command_line_interface_reference/mapping
Chapter 2. Deploying OpenShift Container Storage on Red Hat OpenStack Platform in internal mode
Chapter 2. Deploying OpenShift Container Storage on Red Hat OpenStack Platform in internal mode Deploying OpenShift Container Storage on OpenShift Container Platform in internal mode using dynamic storage devices provided by Red Hat OpenStack Platform installer-provisioned infrastructure (IPI) enables you to create internal cluster resources. This results in internal provisioning of the base services, which helps to make additional storage classes available to applications. Ensure that you have addressed the requirements in Preparing to deploy OpenShift Container Storage chapter before proceeding with the below steps for deploying using dynamic storage devices: Install the Red Hat OpenShift Container Storage Operator . Create the OpenShift Container Storage Cluster Service 2.1. Installing Red Hat OpenShift Container Storage Operator You can install Red Hat OpenShift Container Storage Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You have at least three worker nodes in the Red Hat OpenShift Container Platform cluster. You have satisfied any additional requirements required. For more information, see Planning your deployment . Note When you need to override the cluster-wide default node selector for OpenShift Container Storage, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Container Storage resources are scheduled on that node. This helps you save on subscription costs. For more information, see How to use dedicated worker nodes for Red Hat OpenShift Container Storage chapter in Managing and Allocating Storage Resources guide. Procedure Log in to OpenShift Web Console. Click Operators OperatorHub . Search for OpenShift Container Storage from the list of operators and click on it. Click Install . Set the following options on the Install Operator page: Channel as stable-4.8 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it will be created during the operator installation. Approval Strategy as Automatic or Manual . Click Install . If you select Automatic updates, the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your operator without any intervention. If you select Manual updates, the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the operator updated to the new version. Verification step Verify that the OpenShift Container Storage Operator shows a green tick indicating successful installation. 2.2. Creating an OpenShift Container Storage Cluster Service in internal mode Use this procedure to create an OpenShift Container Storage Cluster Service after you install the OpenShift Container Storage operator. Prerequisites The OpenShift Container Storage operator must be installed from the Operator Hub. For more information, see Installing OpenShift Container Storage Operator using the Operator Hub . Procedure Log into the OpenShift Web Console. Click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . 
Click OpenShift Container Storage → Create Instance link of Storage Cluster. Select Mode is set to Internal by default. Select Capacity and nodes Select Storage Class . By default, it is set to standard . Select Requested Capacity from the drop-down list. It is set to 2 TiB by default. You can use the drop-down to modify the capacity value. Note Once you select the initial storage capacity, cluster expansion is performed only using the selected usable capacity (three times the raw storage). In the Select Nodes section, select at least three available nodes. For cloud platforms with multiple availability zones, ensure that the nodes are spread across different locations/availability zones. If the selected nodes do not match the OpenShift Container Storage cluster requirement of an aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster is deployed. For minimum starting node requirements, see the Resource requirements section in the Planning guide. Click Next . (Optional) Set Security and network configuration Select the Enable encryption checkbox to encrypt block and file storage. Choose one or both Encryption level options: Cluster-wide encryption to encrypt the entire cluster (block and file). Storage class encryption to create encrypted persistent volumes (block only) using an encryption enabled storage class. Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. Key Management Service Provider is set to Vault by default. Enter Vault Service Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Container Storage. (Optional) Enter TLS Server Name and Vault Enterprise Namespace . Provide CA Certificate , Client Certificate and Client Private Key by uploading the respective PEM encoded certificate file. Click Save . Select Default (SDN) if you are using a single network or Custom (Multus) Network if you plan on using multiple network interfaces. Select a Public Network Interface from the drop-down. Select a Cluster Network Interface from the drop-down. Note If you are using only one additional network interface, select the single NetworkAttachmentDefinition (that is, ocs-public-cluster ) for the Public Network Interface and leave the Cluster Network Interface blank. Click Next . Review the configuration details. To modify any configuration settings, click Back to go back to the configuration page. Click Create . Edit the configmap if Vault Key/Value (KV) secret engine API, version 2 is used for cluster-wide encryption with Key Management System (KMS). On the OpenShift Web Console, navigate to Workloads → ConfigMaps . To view the KMS connection details, click ocs-kms-connection-details . Edit the configmap. Click Action menu (...) → Edit ConfigMap . Set the VAULT_BACKEND parameter to v2 . Click Save . Verification steps On the storage cluster details page, the storage cluster name displays a green tick to indicate that the cluster was created successfully. Verify that the final Status of the installed storage cluster shows as Phase: Ready with a green tick mark. Click Operators → Installed Operators → Storage Cluster link to view the storage cluster installation status. Alternatively, when you are on the Operator Details tab, you can click on the Storage Cluster tab to view the status. 
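If you prefer the CLI to the web console for the VAULT_BACKEND change described in the procedure above, a patch along the following lines can be used. This is a sketch that assumes the ocs-kms-connection-details ConfigMap lives in the openshift-storage namespace, as in the sample ConfigMap listed with this chapter's commands:

# Set the Vault KV secret engine API version to v2 in the KMS connection details
oc patch configmap ocs-kms-connection-details \
  -n openshift-storage \
  --type merge \
  -p '{"data":{"VAULT_BACKEND":"v2"}}'

# Confirm the change
oc get configmap ocs-kms-connection-details -n openshift-storage \
  -o jsonpath='{.data.VAULT_BACKEND}'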
To verify that OpenShift Container Storage is successfully installed, see Verifying OpenShift Container Storage deployment . 2.3. Verifying OpenShift Container Storage deployment Use this section to verify that OpenShift Container Storage is deployed correctly. 2.3.1. Verifying the state of the pods To verify that the pods of OpenShift Container Storage are in a running state, follow this procedure: Procedure Log in to the OpenShift Web Console. Click Workloads → Pods from the left pane of the OpenShift Web Console. Select openshift-storage from the Project drop-down list. For more information on the expected number of pods for each component and how it varies depending on the number of nodes, see Table 2.1, "Pods corresponding to OpenShift Container Storage cluster" . Click on the Running and Completed tabs to verify that the pods are in a running or completed state: Table 2.1. Pods corresponding to OpenShift Container Storage cluster Component Corresponding pods OpenShift Container Storage Operator ocs-operator-* (1 pod on any worker node) ocs-metrics-exporter-* Rook-ceph Operator rook-ceph-operator-* (1 pod on any worker node) Multicloud Object Gateway noobaa-operator-* (1 pod on any worker node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) MON rook-ceph-mon-* (3 pods distributed across storage nodes) MGR rook-ceph-mgr-* (1 pod on any storage node) MDS rook-ceph-mds-ocs-storagecluster-cephfilesystem-* (2 pods distributed across storage nodes) CSI cephfs csi-cephfsplugin-* (1 pod on each worker node) csi-cephfsplugin-provisioner-* (2 pods distributed across worker nodes) rbd csi-rbdplugin-* (1 pod on each worker node) csi-rbdplugin-provisioner-* (2 pods distributed across worker nodes) rook-ceph-crashcollector rook-ceph-crashcollector-* (1 pod on each storage node) OSD rook-ceph-osd-* (1 pod for each device) rook-ceph-osd-prepare-ocs-deviceset-* (1 pod for each device) 2.3.2. Verifying the OpenShift Container Storage cluster is healthy To verify that the OpenShift Container Storage cluster is healthy, follow the steps in the procedure. Procedure Click Storage → Overview and click the Block and File tab. In the Status card , verify that Storage Cluster and Data Resiliency have a green tick mark. In the Details card , verify that the cluster information is displayed. For more information on the health of the OpenShift Container Storage clusters using the Block and File dashboard, see Monitoring OpenShift Container Storage . 2.3.3. Verifying the Multicloud Object Gateway is healthy To verify that the OpenShift Container Storage Multicloud Object Gateway is healthy, follow the steps in the procedure. Procedure Click Storage → Overview from the OpenShift Web Console and click the Object tab. In the Status card , verify that both Object Service and Data Resiliency are in a Ready state (green tick). In the Details card , verify that the Multicloud Object Gateway information is displayed. For more information on the health of the OpenShift Container Storage cluster using the object service dashboard, see Monitoring OpenShift Container Storage . 2.3.4. Verifying that the OpenShift Container Storage specific storage classes exist To verify that the storage classes exist in the cluster, follow the steps in the procedure. Procedure Click Storage → Storage Classes from the OpenShift Web Console. 
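The same checks can also be run from the CLI. A short sketch, assuming the default openshift-storage project:

# List the pods and confirm they are Running or Completed
oc get pods -n openshift-storage

# Flag any pod that is in neither state
oc get pods -n openshift-storage --no-headers | grep -v -e Running -e Completed

# The storage cluster itself should report a Ready phase
oc get storagecluster -n openshift-storage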
Verify that the following storage classes are created with the OpenShift Container Storage cluster creation: ocs-storagecluster-ceph-rbd ocs-storagecluster-cephfs openshift-storage.noobaa.io 2.4. Uninstalling OpenShift Container Storage in internal mode 2.4.1. Uninstalling OpenShift Container Storage in internal mode Use the steps in this section to uninstall OpenShift Container Storage. Uninstall Annotations Annotations on the Storage Cluster are used to change the behavior of the uninstall process. To define the uninstall behavior, the following two annotations have been introduced in the storage cluster: uninstall.ocs.openshift.io/cleanup-policy: delete uninstall.ocs.openshift.io/mode: graceful The following table provides information on the different values that can be used with these annotations: Table 2.2. uninstall.ocs.openshift.io uninstall annotations descriptions Annotation Value Default Behavior cleanup-policy delete Yes Rook cleans up the physical drives and the DataDirHostPath cleanup-policy retain No Rook does not clean up the physical drives and the DataDirHostPath mode graceful Yes Rook and NooBaa pause the uninstall process until the PVCs and the OBCs are removed by the administrator/user mode forced No Rook and NooBaa proceed with the uninstall even if PVCs/OBCs provisioned using Rook and NooBaa exist respectively. You can change the cleanup policy or the uninstall mode by editing the value of the annotation by using the following commands: Prerequisites Ensure that the OpenShift Container Storage cluster is in a healthy state. The uninstall process can fail when some of the pods are not terminated successfully due to insufficient resources or nodes. If the cluster is in an unhealthy state, contact Red Hat Customer Support before uninstalling OpenShift Container Storage. Ensure that applications are not consuming persistent volume claims (PVCs) or object bucket claims (OBCs) using the storage classes provided by OpenShift Container Storage. If any custom resources (such as custom storage classes or cephblockpools) were created by the admin, they must be deleted by the admin after removing the resources which consumed them. Procedure Delete the volume snapshots that are using OpenShift Container Storage. List the volume snapshots from all the namespaces. From the output of the command, identify and delete the volume snapshots that are using OpenShift Container Storage. Delete PVCs and OBCs that are using OpenShift Container Storage. In the default uninstall mode (graceful), the uninstaller waits until all the PVCs and OBCs that use OpenShift Container Storage are deleted. If you wish to delete the Storage Cluster without deleting the PVCs beforehand, you can set the uninstall mode annotation to forced and skip this step. Doing this results in orphan PVCs and OBCs in the system. Delete OpenShift Container Platform monitoring stack PVCs using OpenShift Container Storage. For more information, see Section 2.4.1.1, "Removing monitoring stack from OpenShift Container Storage" . Delete OpenShift Container Platform registry PVCs using OpenShift Container Storage. For more information, see Section 2.4.1.2, "Removing OpenShift Container Platform registry from OpenShift Container Storage" . Delete OpenShift Container Platform logging PVCs using OpenShift Container Storage. For more information, see Section 2.4.1.3, "Removing the cluster logging operator from OpenShift Container Storage" . Delete other PVCs and OBCs provisioned using OpenShift Container Storage. 
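Before starting the uninstall, the snapshot and PVC checks above can be done from the CLI. A sketch, using placeholder snapshot and namespace names and the storage class names listed at the top of this section:

# List volume snapshots in all namespaces, then delete the ones backed by
# OpenShift Container Storage
oc get volumesnapshot --all-namespaces
oc delete volumesnapshot <snapshot-name> -n <namespace>

# Spot PVCs that still use an OpenShift Container Storage storage class
oc get pvc --all-namespaces -o wide \
  | grep -e ocs-storagecluster-ceph-rbd -e ocs-storagecluster-cephfs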
The following script is a sample script to identify the PVCs and OBCs provisioned using OpenShift Container Storage. The script ignores the PVCs that are used internally by OpenShift Container Storage. Note Omit RGW_PROVISIONER for cloud platforms. Delete the OBCs. Delete the PVCs. Note Ensure that you have removed any custom backing stores, bucket classes, and so on, that were created in the cluster. Delete the Storage Cluster object and wait for the removal of the associated resources. Check for cleanup pods if the uninstall.ocs.openshift.io/cleanup-policy was set to delete (default) and ensure that their status is Completed . Confirm that the directory /var/lib/rook is now empty. This directory is empty only if the uninstall.ocs.openshift.io/cleanup-policy annotation was set to delete (default). If encryption was enabled at the time of install, remove the dm-crypt managed device-mapper mapping from the OSD devices on all the OpenShift Container Storage nodes. Create a debug pod and chroot to the host on the storage node. Get the device names and make a note of the OpenShift Container Storage devices. Remove the mapped device. Note If the above command gets stuck due to insufficient privileges, run the following commands: Press CTRL+Z to exit the above command. Find the PID of the process which was stuck. Terminate the process using the kill command. Verify that the device name is removed. Delete the namespace and wait until the deletion is complete. You need to switch to another project if openshift-storage is the active project. For example: The project is deleted if the following command returns a NotFound error. Note While uninstalling OpenShift Container Storage, if the namespace is not deleted completely and remains in a Terminating state, perform the steps in Troubleshooting and deleting remaining resources during Uninstall to identify objects that are blocking the namespace from being terminated. Unlabel the storage nodes. Remove the OpenShift Container Storage taint if the nodes were tainted. Confirm that all PVs provisioned using OpenShift Container Storage are deleted. If there is any PV left in the Released state, delete it. Delete the Multicloud Object Gateway storage class. Remove CustomResourceDefinitions . Optional: To ensure that the vault keys are deleted permanently, you need to manually delete the metadata associated with the vault key. Note Execute this step only if Vault Key/Value (KV) secret engine API, version 2 is used for cluster-wide encryption with Key Management System (KMS), because the vault keys are marked as deleted and not permanently deleted during the uninstallation of OpenShift Container Storage. You can always restore them later if required. List the keys in the vault. <backend_path> is the path in the vault where the encryption keys are stored. For example: Example output: List the metadata associated with the vault key. For the Multicloud Object Gateway (MCG) key: <key> is the encryption key. For example: Example output: Delete the metadata. For the MCG key: <key> is the encryption key. For example: Example output: Repeat these steps to delete the metadata associated with all the vault keys. To ensure that OpenShift Container Storage is uninstalled completely, on the OpenShift Container Platform Web Console, click Storage . Verify that Overview no longer appears under Storage. 2.4.1.1. Removing monitoring stack from OpenShift Container Storage Use this section to clean up the monitoring stack from OpenShift Container Storage. 
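For the optional vault cleanup above, the per-key commands can be wrapped in a small loop. This is a sketch that assumes the kv-v2 backend path used in the examples and a vault CLI that is already authenticated; adjust the path and the key filter to your setup:

# Permanently delete the metadata for every OSD encryption key under kv-v2
for key in $(vault kv list -format=json kv-v2 | jq -r '.[]' | grep rook-ceph-osd-encryption-key); do
  vault kv metadata delete "kv-v2/${key}"
done

# The MCG key path can be handled the same way
vault kv metadata delete kv-v2/NOOBAA_ROOT_SECRET_PATH/<key>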
The PVCs that are created as a part of configuring the monitoring stack are in the openshift-monitoring namespace. Prerequisites PVCs are configured to use the OpenShift Container Platform monitoring stack. For information, see configuring monitoring stack . Procedure List the pods and PVCs that are currently running in the openshift-monitoring namespace. Edit the monitoring configmap . Remove any config sections that reference the OpenShift Container Storage storage classes as shown in the following example and save it. Before editing After editing In this example, the alertmanagerMain and prometheusK8s monitoring components are using the OpenShift Container Storage PVCs. Delete the relevant PVCs. Make sure you delete all the PVCs that are consuming the storage classes. 2.4.1.2. Removing OpenShift Container Platform registry from OpenShift Container Storage To clean the OpenShift Container Platform registry from OpenShift Container Storage, follow the steps in the procedure. If you want to configure alternative storage, see image registry . The PVCs created as a part of configuring the OpenShift Container Platform registry are in the openshift-image-registry namespace. Prerequisites The image registry must be configured to use an OpenShift Container Storage PVC. Procedure Edit the configs.imageregistry.operator.openshift.io object and remove the content in the storage section. Before editing After editing In this example, the PVC is called registry-cephfs-rwx-pvc , which is now safe to delete. Delete the PVC. 2.4.1.3. Removing the cluster logging operator from OpenShift Container Storage To clean the cluster logging operator from OpenShift Container Storage, follow the steps in the procedure. The PVCs created as a part of configuring the cluster logging operator are in the openshift-logging namespace. Prerequisites The cluster logging instance must be configured to use OpenShift Container Storage PVCs. Procedure Remove the ClusterLogging instance in the namespace. The PVCs in the openshift-logging namespace are now safe to delete. Delete the PVCs.
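The registry and logging cleanup steps above can also be scripted. The following sketch assumes the example PVC name registry-cephfs-rwx-pvc used in this section and clears the registry storage stanza with a merge patch rather than an interactive edit:

# Stop the image registry from referencing the OpenShift Container Storage PVC
oc patch configs.imageregistry.operator.openshift.io cluster \
  --type merge -p '{"spec":{"storage":{"pvc":null}}}'
oc delete pvc registry-cephfs-rwx-pvc -n openshift-image-registry --wait=true --timeout=5m

# Remove the cluster logging instance, then its PVCs
oc delete clusterlogging instance -n openshift-logging --wait=true --timeout=5m
oc delete pvc --all -n openshift-logging --wait=true --timeout=5m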
[ "oc annotate namespace openshift-storage openshift.io/node-selector=", "kind: ConfigMap apiVersion: v1 metadata: name: ocs-kms-connection-details [...] data: KMS_PROVIDER: vault KMS_SERVICE_NAME: vault [...] VAULT_BACKEND: v2 [...]", "oc annotate storagecluster -n openshift-storage ocs-storagecluster uninstall.ocs.openshift.io/cleanup-policy=\"retain\" --overwrite storagecluster.ocs.openshift.io/ocs-storagecluster annotated", "oc annotate storagecluster -n openshift-storage ocs-storagecluster uninstall.ocs.openshift.io/mode=\"forced\" --overwrite storagecluster.ocs.openshift.io/ocs-storagecluster annotated", "oc get volumesnapshot --all-namespaces", "oc delete volumesnapshot <VOLUME-SNAPSHOT-NAME> -n <NAMESPACE>", "#!/bin/bash RBD_PROVISIONER=\"openshift-storage.rbd.csi.ceph.com\" CEPHFS_PROVISIONER=\"openshift-storage.cephfs.csi.ceph.com\" NOOBAA_PROVISIONER=\"openshift-storage.noobaa.io/obc\" RGW_PROVISIONER=\"openshift-storage.ceph.rook.io/bucket\" NOOBAA_DB_PVC=\"noobaa-db\" NOOBAA_BACKINGSTORE_PVC=\"noobaa-default-backing-store-noobaa-pvc\" Find all the OCS StorageClasses OCS_STORAGECLASSES=USD(oc get storageclasses | grep -e \"USDRBD_PROVISIONER\" -e \"USDCEPHFS_PROVISIONER\" -e \"USDNOOBAA_PROVISIONER\" -e \"USDRGW_PROVISIONER\" | awk '{print USD1}') List PVCs in each of the StorageClasses for SC in USDOCS_STORAGECLASSES do echo \"======================================================================\" echo \"USDSC StorageClass PVCs and OBCs\" echo \"======================================================================\" oc get pvc --all-namespaces --no-headers 2>/dev/null | grep USDSC | grep -v -e \"USDNOOBAA_DB_PVC\" -e \"USDNOOBAA_BACKINGSTORE_PVC\" oc get obc --all-namespaces --no-headers 2>/dev/null | grep USDSC echo done", "oc delete obc <obc name> -n <project name>", "oc delete pvc <pvc name> -n <project-name>", "oc delete -n openshift-storage storagecluster --all --wait=true", "oc get pods -n openshift-storage | grep -i cleanup NAME READY STATUS RESTARTS AGE cluster-cleanup-job-<xx> 0/1 Completed 0 8m35s cluster-cleanup-job-<yy> 0/1 Completed 0 8m35s cluster-cleanup-job-<zz> 0/1 Completed 0 8m35s", "for i in USD(oc get node -l cluster.ocs.openshift.io/openshift-storage= -o jsonpath='{ .items[*].metadata.name }'); do oc debug node/USD{i} -- chroot /host ls -l /var/lib/rook; done", "oc debug node/<node name> chroot /host", "dmsetup ls ocs-deviceset-0-data-0-57snx-block-dmcrypt (253:1)", "cryptsetup luksClose --debug --verbose ocs-deviceset-0-data-0-57snx-block-dmcrypt", "ps -ef | grep crypt", "kill -9 <PID>", "dmsetup ls", "oc project default oc delete project openshift-storage --wait=true --timeout=5m", "oc get project openshift-storage", "oc label nodes --all cluster.ocs.openshift.io/openshift-storage- oc label nodes --all topology.rook.io/rack-", "oc adm taint nodes --all node.ocs.openshift.io/storage-", "oc get pv oc delete pv <pv name>", "oc delete storageclass openshift-storage.noobaa.io --wait=true --timeout=5m", "oc delete crd backingstores.noobaa.io bucketclasses.noobaa.io cephblockpools.ceph.rook.io cephclusters.ceph.rook.io cephfilesystems.ceph.rook.io cephnfses.ceph.rook.io cephobjectstores.ceph.rook.io cephobjectstoreusers.ceph.rook.io noobaas.noobaa.io ocsinitializations.ocs.openshift.io storageclusters.ocs.openshift.io cephclients.ceph.rook.io cephobjectrealms.ceph.rook.io cephobjectzonegroups.ceph.rook.io cephobjectzones.ceph.rook.io cephrbdmirrors.ceph.rook.io --wait=true --timeout=5m", "vault kv list <backend_path>", "vault kv list kv-v2", "Keys ----- 
NOOBAA_ROOT_SECRET_PATH/ rook-ceph-osd-encryption-key-ocs-deviceset-thin-0-data-0m27q8 rook-ceph-osd-encryption-key-ocs-deviceset-thin-1-data-0sq227 rook-ceph-osd-encryption-key-ocs-deviceset-thin-2-data-0xzszb", "vault kv get kv-v2/ <key>", "vault kv get kv-v2/NOOBAA_ROOT_SECRET_PATH/ <key>", "vault kv get kv-v2/rook-ceph-osd-encryption-key-ocs-deviceset-thin-0-data-0m27q8", "====== Metadata ====== Key Value --- ----- created_time 2021-06-23T10:06:30.650103555Z deletion_time 2021-06-23T11:46:35.045328495Z destroyed false version 1", "vault kv metadata delete kv-v2/ <key>", "vault kv metadata delete kv-v2/NOOBAA_ROOT_SECRET_PATH/ <key>", "vault kv metadata delete kv-v2/rook-ceph-osd-encryption-key-ocs-deviceset-thin-0-data-0m27q8", "Success! Data deleted (if it existed) at: kv-v2/metadata/rook-ceph-osd-encryption-key-ocs-deviceset-thin-0-data-0m27q8", "oc get pod,pvc -n openshift-monitoring NAME READY STATUS RESTARTS AGE pod/alertmanager-main-0 3/3 Running 0 8d pod/alertmanager-main-1 3/3 Running 0 8d pod/alertmanager-main-2 3/3 Running 0 8d pod/cluster-monitoring- operator-84457656d-pkrxm 1/1 Running 0 8d pod/grafana-79ccf6689f-2ll28 2/2 Running 0 8d pod/kube-state-metrics- 7d86fb966-rvd9w 3/3 Running 0 8d pod/node-exporter-25894 2/2 Running 0 8d pod/node-exporter-4dsd7 2/2 Running 0 8d pod/node-exporter-6p4zc 2/2 Running 0 8d pod/node-exporter-jbjvg 2/2 Running 0 8d pod/node-exporter-jj4t5 2/2 Running 0 6d18h pod/node-exporter-k856s 2/2 Running 0 6d18h pod/node-exporter-rf8gn 2/2 Running 0 8d pod/node-exporter-rmb5m 2/2 Running 0 6d18h pod/node-exporter-zj7kx 2/2 Running 0 8d pod/openshift-state-metrics- 59dbd4f654-4clng 3/3 Running 0 8d pod/prometheus-adapter- 5df5865596-k8dzn 1/1 Running 0 7d23h pod/prometheus-adapter- 5df5865596-n2gj9 1/1 Running 0 7d23h pod/prometheus-k8s-0 6/6 Running 1 8d pod/prometheus-k8s-1 6/6 Running 1 8d pod/prometheus-operator- 55cfb858c9-c4zd9 1/1 Running 0 6d21h pod/telemeter-client- 78fc8fc97d-2rgfp 3/3 Running 0 8d NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE persistentvolumeclaim/my-alertmanager-claim-alertmanager-main-0 Bound pvc-0d519c4f-15a5-11ea-baa0-026d231574aa 40Gi RWO ocs-storagecluster-ceph-rbd 8d persistentvolumeclaim/my-alertmanager-claim-alertmanager-main-1 Bound pvc-0d5a9825-15a5-11ea-baa0-026d231574aa 40Gi RWO ocs-storagecluster-ceph-rbd 8d persistentvolumeclaim/my-alertmanager-claim-alertmanager-main-2 Bound pvc-0d6413dc-15a5-11ea-baa0-026d231574aa 40Gi RWO ocs-storagecluster-ceph-rbd 8d persistentvolumeclaim/my-prometheus-claim-prometheus-k8s-0 Bound pvc-0b7c19b0-15a5-11ea-baa0-026d231574aa 40Gi RWO ocs-storagecluster-ceph-rbd 8d persistentvolumeclaim/my-prometheus-claim-prometheus-k8s-1 Bound pvc-0b8aed3f-15a5-11ea-baa0-026d231574aa 40Gi RWO ocs-storagecluster-ceph-rbd 8d", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", ". . . apiVersion: v1 data: config.yaml: | alertmanagerMain: volumeClaimTemplate: metadata: name: my-alertmanager-claim spec: resources: requests: storage: 40Gi storageClassName: ocs-storagecluster-ceph-rbd prometheusK8s: volumeClaimTemplate: metadata: name: my-prometheus-claim spec: resources: requests: storage: 40Gi storageClassName: ocs-storagecluster-ceph-rbd kind: ConfigMap metadata: creationTimestamp: \"2019-12-02T07:47:29Z\" name: cluster-monitoring-config namespace: openshift-monitoring resourceVersion: \"22110\" selfLink: /api/v1/namespaces/openshift-monitoring/configmaps/cluster-monitoring-config uid: fd6d988b-14d7-11ea-84ff-066035b9efa8 . . .", ". . . 
apiVersion: v1 data: config.yaml: | kind: ConfigMap metadata: creationTimestamp: \"2019-11-21T13:07:05Z\" name: cluster-monitoring-config namespace: openshift-monitoring resourceVersion: \"404352\" selfLink: /api/v1/namespaces/openshift-monitoring/configmaps/cluster-monitoring-config uid: d12c796a-0c5f-11ea-9832-063cd735b81c . . .", "oc delete -n openshift-monitoring pvc <pvc-name> --wait=true --timeout=5m", "oc edit configs.imageregistry.operator.openshift.io", ". . . storage: pvc: claim: registry-cephfs-rwx-pvc . . .", ". . . storage: . . .", "oc delete pvc <pvc-name> -n openshift-image-registry --wait=true --timeout=5m", "oc delete clusterlogging instance -n openshift-logging --wait=true --timeout=5m", "oc delete pvc <pvc-name> -n openshift-logging --wait=true --timeout=5m" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/deploying_and_managing_openshift_container_storage_using_red_hat_openstack_platform/deploying-openshift-container-storage-on-red-hat-openstack-platform_internal-osp
Chapter 7. Bucket policies in the Multicloud Object Gateway
Chapter 7. Bucket policies in the Multicloud Object Gateway OpenShift Data Foundation supports AWS S3 bucket policies. Bucket policies allow you to grant users access permissions for buckets and the objects in them. 7.1. Introduction to bucket policies Bucket policies are an access policy option available for you to grant permission to your AWS S3 buckets and objects. Bucket policies use a JSON-based access policy language. For more information about the access policy language, see AWS Access Policy Language Overview . 7.2. Using bucket policies in Multicloud Object Gateway Prerequisites A running OpenShift Data Foundation Platform. Access to the Multicloud Object Gateway (MCG). See Chapter 2, Accessing the Multicloud Object Gateway with your applications . Procedure To use bucket policies in the MCG: Create the bucket policy in JSON format. For example: Using the AWS S3 client, use the put-bucket-policy command to apply the bucket policy to your S3 bucket: Replace ENDPOINT with the S3 endpoint. Replace MyBucket with the bucket to set the policy on. Replace BucketPolicy with the bucket policy JSON file. Add --no-verify-ssl if you are using the default self-signed certificates. For example: For more information on the put-bucket-policy command, see the AWS CLI Command Reference for put-bucket-policy . Note The principal element specifies the user that is allowed or denied access to a resource, such as a bucket. Currently, only NooBaa accounts can be used as principals. In the case of object bucket claims, NooBaa automatically creates an account obc-account.<generated bucket name>@noobaa.io . Note Bucket policy conditions are not supported. Additional resources There are many available elements for bucket policies with regard to access permissions. For details on these elements and examples of how they can be used to control the access permissions, see AWS Access Policy Language Overview . For more examples of bucket policies, see AWS Bucket Policy Examples . 7.3. Creating a user in the Multicloud Object Gateway Prerequisites A running OpenShift Data Foundation Platform. Download the MCG command-line interface for easier management. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For IBM Power, use the following command: For IBM Z, use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found on the Download Red Hat OpenShift Data Foundation page . Note Choose the correct Product Variant according to your architecture. Procedure Execute the following command to create an MCG user account: <noobaa-account-name> Specify the name of the new MCG user account. --allow_bucket_create Allows the user to create new buckets. --default_resource Sets the default resource. The new buckets are created on this default resource (including the future ones). Note To give MCG accounts access to certain buckets, use AWS S3 bucket policies. For more information, see Using bucket policies in the AWS documentation.
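As a worked example, the policy can be applied and then read back with the AWS CLI. This is a sketch that reuses the placeholder ENDPOINT, MyBucket, and BucketPolicy names from the procedure above:

# Apply the bucket policy to the bucket
aws --endpoint https://ENDPOINT --no-verify-ssl s3api put-bucket-policy \
  --bucket MyBucket --policy file://BucketPolicy

# Confirm the policy that is now attached to the bucket
aws --endpoint https://ENDPOINT --no-verify-ssl s3api get-bucket-policy \
  --bucket MyBucket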
[ "{ \"Version\": \"NewVersion\", \"Statement\": [ { \"Sid\": \"Example\", \"Effect\": \"Allow\", \"Principal\": [ \"[email protected]\" ], \"Action\": [ \"s3:GetObject\" ], \"Resource\": [ \"arn:aws:s3:::john_bucket\" ] } ] }", "aws --endpoint ENDPOINT --no-verify-ssl s3api put-bucket-policy --bucket MyBucket --policy BucketPolicy", "aws --endpoint https://s3-openshift-storage.apps.gogo44.noobaa.org --no-verify-ssl s3api put-bucket-policy -bucket MyBucket --policy file://BucketPolicy", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms", "noobaa account create <noobaa-account-name> [--allow_bucket_create=true] [--default_resource='']" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/managing_hybrid_and_multicloud_resources/bucket-policies-in-the-multicloud-object-gateway
Chapter 19. Managing cloud provider credentials
Chapter 19. Managing cloud provider credentials 19.1. About the Cloud Credential Operator The Cloud Credential Operator (CCO) manages cloud provider credentials as custom resource definitions (CRDs). The CCO syncs on CredentialsRequest custom resources (CRs) to allow OpenShift Container Platform components to request cloud provider credentials with the specific permissions that are required for the cluster to run. By setting different values for the credentialsMode parameter in the install-config.yaml file, the CCO can be configured to operate in several different modes. If no mode is specified, or the credentialsMode parameter is set to an empty string ( "" ), the CCO operates in its default mode. 19.1.1. Modes By setting different values for the credentialsMode parameter in the install-config.yaml file, the CCO can be configured to operate in mint , passthrough , or manual mode. These options provide transparency and flexibility in how the CCO uses cloud credentials to process CredentialsRequest CRs in the cluster, and allow the CCO to be configured to suit the security requirements of your organization. Not all CCO modes are supported for all cloud providers. Mint : In mint mode, the CCO uses the provided admin-level cloud credential to create new credentials for components in the cluster with only the specific permissions that are required. Passthrough : In passthrough mode, the CCO passes the provided cloud credential to the components that request cloud credentials. Manual mode with long-term credentials for components : In manual mode, you can manage long-term cloud credentials instead of the CCO. Manual mode with short-term credentials for components : For some providers, you can use the CCO utility ( ccoctl ) during installation to implement short-term credentials for individual components. These credentials are created and managed outside the OpenShift Container Platform cluster. Table 19.1. CCO mode support matrix Cloud provider Mint Passthrough Manual with long-term credentials Manual with short-term credentials Amazon Web Services (AWS) X X X X Global Microsoft Azure X X X Microsoft Azure Stack Hub X Google Cloud Platform (GCP) X X X X IBM Cloud(R) X [1] Nutanix X [1] Red Hat OpenStack Platform (RHOSP) X VMware vSphere X This platform uses the ccoctl utility during installation to configure long-term credentials. 19.1.2. Determining the Cloud Credential Operator mode For platforms that support using the CCO in multiple modes, you can determine what mode the CCO is configured to use by using the web console or the CLI. Figure 19.1. Determining the CCO configuration 19.1.2.1. Determining the Cloud Credential Operator mode by using the web console You can determine what mode the Cloud Credential Operator (CCO) is configured to use by using the web console. Note Only Amazon Web Services (AWS), global Microsoft Azure, and Google Cloud Platform (GCP) clusters support multiple CCO modes. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator permissions. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. Navigate to Administration Cluster Settings . On the Cluster Settings page, select the Configuration tab. Under Configuration resource , select CloudCredential . On the CloudCredential details page, select the YAML tab. In the YAML block, check the value of spec.credentialsMode . 
The following values are possible, though not all are supported on all platforms: '' : The CCO is operating in the default mode. In this configuration, the CCO operates in mint or passthrough mode, depending on the credentials provided during installation. Mint : The CCO is operating in mint mode. Passthrough : The CCO is operating in passthrough mode. Manual : The CCO is operating in manual mode. Important To determine the specific configuration of an AWS, GCP, or global Microsoft Azure cluster that has a spec.credentialsMode of '' , Mint , or Manual , you must investigate further. AWS and GCP clusters support using mint mode with the root secret deleted. An AWS, GCP, or global Microsoft Azure cluster that uses manual mode might be configured to create and manage cloud credentials from outside of the cluster with AWS STS, GCP Workload Identity, or Microsoft Entra Workload ID. You can determine whether your cluster uses this strategy by examining the cluster Authentication object. AWS or GCP clusters that use the default ( '' ) only: To determine whether the cluster is operating in mint or passthrough mode, inspect the annotations on the cluster root secret: Navigate to Workloads Secrets and look for the root secret for your cloud provider. Note Ensure that the Project dropdown is set to All Projects . Platform Secret name AWS aws-creds GCP gcp-credentials To view the CCO mode that the cluster is using, click 1 annotation under Annotations , and check the value field. The following values are possible: Mint : The CCO is operating in mint mode. Passthrough : The CCO is operating in passthrough mode. If your cluster uses mint mode, you can also determine whether the cluster is operating without the root secret. AWS or GCP clusters that use mint mode only: To determine whether the cluster is operating without the root secret, navigate to Workloads Secrets and look for the root secret for your cloud provider. Note Ensure that the Project dropdown is set to All Projects . Platform Secret name AWS aws-creds GCP gcp-credentials If you see one of these values, your cluster is using mint or passthrough mode with the root secret present. If you do not see these values, your cluster is using the CCO in mint mode with the root secret removed. AWS, GCP, or global Microsoft Azure clusters that use manual mode only: To determine whether the cluster is configured to create and manage cloud credentials from outside of the cluster, you must check the cluster Authentication object YAML values. Navigate to Administration Cluster Settings . On the Cluster Settings page, select the Configuration tab. Under Configuration resource , select Authentication . On the Authentication details page, select the YAML tab. In the YAML block, check the value of the .spec.serviceAccountIssuer parameter. A value that contains a URL that is associated with your cloud provider indicates that the CCO is using manual mode with short-term credentials for components. These clusters are configured using the ccoctl utility to create and manage cloud credentials from outside of the cluster. An empty value ( '' ) indicates that the cluster is using the CCO in manual mode but was not configured using the ccoctl utility. 19.1.2.2. Determining the Cloud Credential Operator mode by using the CLI You can determine what mode the Cloud Credential Operator (CCO) is configured to use by using the CLI. Note Only Amazon Web Services (AWS), global Microsoft Azure, and Google Cloud Platform (GCP) clusters support multiple CCO modes. 
Prerequisites You have access to an OpenShift Container Platform account with cluster administrator permissions. You have installed the OpenShift CLI ( oc ). Procedure Log in to oc on the cluster as a user with the cluster-admin role. To determine the mode that the CCO is configured to use, enter the following command: USD oc get cloudcredentials cluster \ -o=jsonpath={.spec.credentialsMode} The following output values are possible, though not all are supported on all platforms: '' : The CCO is operating in the default mode. In this configuration, the CCO operates in mint or passthrough mode, depending on the credentials provided during installation. Mint : The CCO is operating in mint mode. Passthrough : The CCO is operating in passthrough mode. Manual : The CCO is operating in manual mode. Important To determine the specific configuration of an AWS, GCP, or global Microsoft Azure cluster that has a spec.credentialsMode of '' , Mint , or Manual , you must investigate further. AWS and GCP clusters support using mint mode with the root secret deleted. An AWS, GCP, or global Microsoft Azure cluster that uses manual mode might be configured to create and manage cloud credentials from outside of the cluster with AWS STS, GCP Workload Identity, or Microsoft Entra Workload ID. You can determine whether your cluster uses this strategy by examining the cluster Authentication object. AWS or GCP clusters that use the default ( '' ) only: To determine whether the cluster is operating in mint or passthrough mode, run the following command: USD oc get secret <secret_name> \ -n kube-system \ -o jsonpath \ --template '{ .metadata.annotations }' where <secret_name> is aws-creds for AWS or gcp-credentials for GCP. This command displays the value of the .metadata.annotations parameter in the cluster root secret object. The following output values are possible: Mint : The CCO is operating in mint mode. Passthrough : The CCO is operating in passthrough mode. If your cluster uses mint mode, you can also determine whether the cluster is operating without the root secret. AWS or GCP clusters that use mint mode only: To determine whether the cluster is operating without the root secret, run the following command: USD oc get secret <secret_name> \ -n=kube-system where <secret_name> is aws-creds for AWS or gcp-credentials for GCP. If the root secret is present, the output of this command returns information about the secret. An error indicates that the root secret is not present on the cluster. AWS, GCP, or global Microsoft Azure clusters that use manual mode only: To determine whether the cluster is configured to create and manage cloud credentials from outside of the cluster, run the following command: USD oc get authentication cluster \ -o jsonpath \ --template='{ .spec.serviceAccountIssuer }' This command displays the value of the .spec.serviceAccountIssuer parameter in the cluster Authentication object. An output of a URL that is associated with your cloud provider indicates that the CCO is using manual mode with short-term credentials for components. These clusters are configured using the ccoctl utility to create and manage cloud credentials from outside of the cluster. An empty output indicates that the cluster is using the CCO in manual mode but was not configured using the ccoctl utility. 19.1.3. 
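The individual checks above can be combined into one short script. A sketch, assuming an AWS cluster (so the root secret is aws-creds); substitute gcp-credentials on GCP:

#!/bin/bash
# Print the configured CCO mode, the root secret annotation (mint vs. passthrough),
# and the service account issuer used for short-term credentials, if any.
echo "credentialsMode:      $(oc get cloudcredentials cluster -o jsonpath='{.spec.credentialsMode}')"
echo "root secret:          $(oc get secret aws-creds -n kube-system \
  -o jsonpath='{.metadata.annotations}' 2>/dev/null || echo 'not present')"
echo "serviceAccountIssuer: $(oc get authentication cluster -o jsonpath='{.spec.serviceAccountIssuer}')"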
Default behavior For platforms on which multiple modes are supported (AWS, Azure, and GCP), when the CCO operates in its default mode, it checks the provided credentials dynamically to determine for which mode they are sufficient to process CredentialsRequest CRs. By default, the CCO determines whether the credentials are sufficient for mint mode, which is the preferred mode of operation, and uses those credentials to create appropriate credentials for components in the cluster. If the credentials are not sufficient for mint mode, it determines whether they are sufficient for passthrough mode. If the credentials are not sufficient for passthrough mode, the CCO cannot adequately process CredentialsRequest CRs. If the provided credentials are determined to be insufficient during installation, the installation fails. For AWS, the installation program fails early in the process and indicates which required permissions are missing. Other providers might not provide specific information about the cause of the error until errors are encountered. If the credentials are changed after a successful installation and the CCO determines that the new credentials are insufficient, the CCO puts conditions on any new CredentialsRequest CRs to indicate that it cannot process them because of the insufficient credentials. To resolve insufficient credentials issues, provide a credential with sufficient permissions. If an error occurred during installation, try installing again. For issues with new CredentialsRequest CRs, wait for the CCO to try to process the CR again. As an alternative, you can configure your cluster to use a different CCO mode that is supported for your cloud provider. 19.1.4. Additional resources Cluster Operators reference page for the Cloud Credential Operator 19.2. The Cloud Credential Operator in mint mode Mint mode is the default Cloud Credential Operator (CCO) credentials mode for OpenShift Container Platform on platforms that support it. Mint mode supports Amazon Web Services (AWS) and Google Cloud Platform (GCP) clusters. 19.2.1. Mint mode credentials management For clusters that use the CCO in mint mode, the administrator-level credential is stored in the kube-system namespace. The CCO uses the admin credential to process the CredentialsRequest objects in the cluster and create users for components with limited permissions. With mint mode, each cluster component has only the specific permissions it requires. Cloud credential reconciliation is automatic and continuous so that components can perform actions that require additional credentials or permissions. For example, a minor version cluster update (such as updating from OpenShift Container Platform 4.16 to 4.17) might include an updated CredentialsRequest resource for a cluster component. The CCO, operating in mint mode, uses the admin credential to process the CredentialsRequest resource and create users with limited permissions to satisfy the updated authentication requirements. Note By default, mint mode requires storing the admin credential in the cluster kube-system namespace. If this approach does not meet the security requirements of your organization, you can remove the credential after installing the cluster . 19.2.1.1. Mint mode permissions requirements When using the CCO in mint mode, ensure that the credential you provide meets the requirements of the cloud on which you are running or installing OpenShift Container Platform. 
If the provided credentials are not sufficient for mint mode, the CCO cannot create an IAM user. The credential you provide for mint mode in Amazon Web Services (AWS) must have the following permissions: Example 19.1. Required AWS permissions iam:CreateAccessKey iam:CreateUser iam:DeleteAccessKey iam:DeleteUser iam:DeleteUserPolicy iam:GetUser iam:GetUserPolicy iam:ListAccessKeys iam:PutUserPolicy iam:TagUser iam:SimulatePrincipalPolicy The credential you provide for mint mode in Google Cloud Platform (GCP) must have the following permissions: Example 19.2. Required GCP permissions resourcemanager.projects.get serviceusage.services.list iam.serviceAccountKeys.create iam.serviceAccountKeys.delete iam.serviceAccountKeys.list iam.serviceAccounts.create iam.serviceAccounts.delete iam.serviceAccounts.get iam.roles.create iam.roles.get iam.roles.list iam.roles.undelete iam.roles.update resourcemanager.projects.getIamPolicy resourcemanager.projects.setIamPolicy 19.2.1.2. Admin credentials root secret format Each cloud provider uses a credentials root secret in the kube-system namespace by convention, which is then used to satisfy all credentials requests and create their respective secrets. This is done either by minting new credentials with mint mode , or by copying the credentials root secret with passthrough mode . The format for the secret varies by cloud, and is also used for each CredentialsRequest secret. Amazon Web Services (AWS) secret format apiVersion: v1 kind: Secret metadata: namespace: kube-system name: aws-creds stringData: aws_access_key_id: <base64-encoded_access_key_id> aws_secret_access_key: <base64-encoded_secret_access_key> Google Cloud Platform (GCP) secret format apiVersion: v1 kind: Secret metadata: namespace: kube-system name: gcp-credentials stringData: service_account.json: <base64-encoded_service_account> 19.2.2. Maintaining cloud provider credentials If your cloud provider credentials are changed for any reason, you must manually update the secret that the Cloud Credential Operator (CCO) uses to manage cloud provider credentials. The process for rotating cloud credentials depends on the mode that the CCO is configured to use. After you rotate credentials for a cluster that is using mint mode, you must manually remove the component credentials that were created by the removed credential. Prerequisites Your cluster is installed on a platform that supports rotating cloud credentials manually with the CCO mode that you are using: For mint mode, Amazon Web Services (AWS) and Google Cloud Platform (GCP) are supported. You have changed the credentials that are used to interface with your cloud provider. The new credentials have sufficient permissions for the mode CCO is configured to use in your cluster. Procedure In the Administrator perspective of the web console, navigate to Workloads Secrets . In the table on the Secrets page, find the root secret for your cloud provider. Platform Secret name AWS aws-creds GCP gcp-credentials Click the Options menu in the same row as the secret and select Edit Secret . Record the contents of the Value field or fields. You can use this information to verify that the value is different after updating the credentials. Update the text in the Value field or fields with the new authentication information for your cloud provider, and then click Save . Delete each component secret that is referenced by the individual CredentialsRequest objects. Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role. 
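If you need to create or recreate the root secret from the CLI rather than from a manifest, a command like the following can be used. This is a sketch for AWS with placeholder key values; oc performs the base64 encoding for you:

# Create the AWS credentials root secret that mint mode consumes
oc create secret generic aws-creds \
  -n kube-system \
  --from-literal=aws_access_key_id=<access_key_id> \
  --from-literal=aws_secret_access_key=<secret_access_key>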
Get the names and namespaces of all referenced component secrets: USD oc -n openshift-cloud-credential-operator get CredentialsRequest \ -o json | jq -r '.items[] | select (.spec.providerSpec.kind=="<provider_spec>") | .spec.secretRef' where <provider_spec> is the corresponding value for your cloud provider: AWS: AWSProviderSpec GCP: GCPProviderSpec Partial example output for AWS { "name": "ebs-cloud-credentials", "namespace": "openshift-cluster-csi-drivers" } { "name": "cloud-credential-operator-iam-ro-creds", "namespace": "openshift-cloud-credential-operator" } Delete each of the referenced component secrets: USD oc delete secret <secret_name> \ 1 -n <secret_namespace> 2 1 Specify the name of a secret. 2 Specify the namespace that contains the secret. Example deletion of an AWS secret USD oc delete secret ebs-cloud-credentials -n openshift-cluster-csi-drivers You do not need to manually delete the credentials from your provider console. Deleting the referenced component secrets will cause the CCO to delete the existing credentials from the platform and create new ones. Verification To verify that the credentials have changed: In the Administrator perspective of the web console, navigate to Workloads Secrets . Verify that the contents of the Value field or fields have changed. 19.2.3. Additional resources Removing cloud provider credentials 19.3. The Cloud Credential Operator in passthrough mode Passthrough mode is supported for Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), Red Hat OpenStack Platform (RHOSP), and VMware vSphere. In passthrough mode, the Cloud Credential Operator (CCO) passes the provided cloud credential to the components that request cloud credentials. The credential must have permissions to perform the installation and complete the operations that are required by components in the cluster, but does not need to be able to create new credentials. The CCO does not attempt to create additional limited-scoped credentials in passthrough mode. Note Manual mode is the only supported CCO configuration for Microsoft Azure Stack Hub. 19.3.1. Passthrough mode permissions requirements When using the CCO in passthrough mode, ensure that the credential you provide meets the requirements of the cloud on which you are running or installing OpenShift Container Platform. If the provided credentials the CCO passes to a component that creates a CredentialsRequest CR are not sufficient, that component will report an error when it tries to call an API that it does not have permissions for. 19.3.1.1. Amazon Web Services (AWS) permissions The credential you provide for passthrough mode in AWS must have all the requested permissions for all CredentialsRequest CRs that are required by the version of OpenShift Container Platform you are running or installing. To locate the CredentialsRequest CRs that are required, see Manually creating long-term credentials for AWS . 19.3.1.2. Microsoft Azure permissions The credential you provide for passthrough mode in Azure must have all the requested permissions for all CredentialsRequest CRs that are required by the version of OpenShift Container Platform you are running or installing. To locate the CredentialsRequest CRs that are required, see Manually creating long-term credentials for Azure . 19.3.1.3. 
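The lookup and deletion of the referenced component secrets can be combined into a single loop. A sketch for AWS; swap AWSProviderSpec for GCPProviderSpec on GCP:

# Delete every component secret referenced by an AWS CredentialsRequest so that
# the CCO mints fresh credentials from the updated root secret
oc -n openshift-cloud-credential-operator get credentialsrequest -o json \
  | jq -r '.items[]
           | select(.spec.providerSpec.kind=="AWSProviderSpec")
           | "\(.spec.secretRef.namespace) \(.spec.secretRef.name)"' \
  | while read -r namespace name; do
      oc delete secret "${name}" -n "${namespace}"
    done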
Google Cloud Platform (GCP) permissions The credential you provide for passthrough mode in GCP must have all the requested permissions for all CredentialsRequest CRs that are required by the version of OpenShift Container Platform you are running or installing. To locate the CredentialsRequest CRs that are required, see Manually creating long-term credentials for GCP . 19.3.1.4. Red Hat OpenStack Platform (RHOSP) permissions To install an OpenShift Container Platform cluster on RHOSP, the CCO requires a credential with the permissions of a member user role. 19.3.1.5. VMware vSphere permissions To install an OpenShift Container Platform cluster on VMware vSphere, the CCO requires a credential with the following vSphere privileges: Table 19.2. Required vSphere privileges Category Privileges Datastore Allocate space Folder Create folder , Delete folder vSphere Tagging All privileges Network Assign network Resource Assign virtual machine to resource pool Profile-driven storage All privileges vApp All privileges Virtual machine All privileges 19.3.2. Admin credentials root secret format Each cloud provider uses a credentials root secret in the kube-system namespace by convention, which is then used to satisfy all credentials requests and create their respective secrets. This is done either by minting new credentials with mint mode , or by copying the credentials root secret with passthrough mode . The format for the secret varies by cloud, and is also used for each CredentialsRequest secret. Amazon Web Services (AWS) secret format apiVersion: v1 kind: Secret metadata: namespace: kube-system name: aws-creds stringData: aws_access_key_id: <base64-encoded_access_key_id> aws_secret_access_key: <base64-encoded_secret_access_key> Microsoft Azure secret format apiVersion: v1 kind: Secret metadata: namespace: kube-system name: azure-credentials stringData: azure_subscription_id: <base64-encoded_subscription_id> azure_client_id: <base64-encoded_client_id> azure_client_secret: <base64-encoded_client_secret> azure_tenant_id: <base64-encoded_tenant_id> azure_resource_prefix: <base64-encoded_resource_prefix> azure_resourcegroup: <base64-encoded_resource_group> azure_region: <base64-encoded_region> On Microsoft Azure, the credentials secret format includes two properties that must contain the cluster's infrastructure ID, generated randomly for each cluster installation. This value can be found after running create manifests: USD cat .openshift_install_state.json | jq '."*installconfig.ClusterID".InfraID' -r Example output mycluster-2mpcn This value would be used in the secret data as follows: azure_resource_prefix: mycluster-2mpcn azure_resourcegroup: mycluster-2mpcn-rg Google Cloud Platform (GCP) secret format apiVersion: v1 kind: Secret metadata: namespace: kube-system name: gcp-credentials stringData: service_account.json: <base64-encoded_service_account> Red Hat OpenStack Platform (RHOSP) secret format apiVersion: v1 kind: Secret metadata: namespace: kube-system name: openstack-credentials data: clouds.yaml: <base64-encoded_cloud_creds> clouds.conf: <base64-encoded_cloud_creds_init> VMware vSphere secret format apiVersion: v1 kind: Secret metadata: namespace: kube-system name: vsphere-creds data: vsphere.openshift.example.com.username: <base64-encoded_username> vsphere.openshift.example.com.password: <base64-encoded_password> 19.3.3. 
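The Red Hat OpenStack Platform root secret shown above can be assembled directly from an existing clouds.yaml. A sketch, assuming the clouds.yaml and clouds.conf files are in the current directory; oc base64-encodes the file contents when it creates the secret:

# Create the RHOSP credentials root secret in the format shown above
oc create secret generic openstack-credentials \
  -n kube-system \
  --from-file=clouds.yaml=./clouds.yaml \
  --from-file=clouds.conf=./clouds.conf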
Passthrough mode credential maintenance If CredentialsRequest CRs change over time as the cluster is upgraded, you must manually update the passthrough mode credential to meet the requirements. To avoid credentials issues during an upgrade, check the CredentialsRequest CRs in the release image for the new version of OpenShift Container Platform before upgrading. To locate the CredentialsRequest CRs that are required for your cloud provider, see Manually creating long-term credentials for AWS , Azure , or GCP . 19.3.3.1. Maintaining cloud provider credentials If your cloud provider credentials are changed for any reason, you must manually update the secret that the Cloud Credential Operator (CCO) uses to manage cloud provider credentials. The process for rotating cloud credentials depends on the mode that the CCO is configured to use. After you rotate credentials for a cluster that is using mint mode, you must manually remove the component credentials that were created by the removed credential. Prerequisites Your cluster is installed on a platform that supports rotating cloud credentials manually with the CCO mode that you are using: For passthrough mode, Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), Red Hat OpenStack Platform (RHOSP), and VMware vSphere are supported. You have changed the credentials that are used to interface with your cloud provider. The new credentials have sufficient permissions for the mode CCO is configured to use in your cluster. Procedure In the Administrator perspective of the web console, navigate to Workloads Secrets . In the table on the Secrets page, find the root secret for your cloud provider. Platform Secret name AWS aws-creds Azure azure-credentials GCP gcp-credentials RHOSP openstack-credentials VMware vSphere vsphere-creds Click the Options menu in the same row as the secret and select Edit Secret . Record the contents of the Value field or fields. You can use this information to verify that the value is different after updating the credentials. Update the text in the Value field or fields with the new authentication information for your cloud provider, and then click Save . If you are updating the credentials for a vSphere cluster that does not have the vSphere CSI Driver Operator enabled, you must force a rollout of the Kubernetes controller manager to apply the updated credentials. Note If the vSphere CSI Driver Operator is enabled, this step is not required. To apply the updated vSphere credentials, log in to the OpenShift Container Platform CLI as a user with the cluster-admin role and run the following command: USD oc patch kubecontrollermanager cluster \ -p='{"spec": {"forceRedeploymentReason": "recovery-'"USD( date )"'"}}' \ --type=merge While the credentials are rolling out, the status of the Kubernetes Controller Manager Operator reports Progressing=true . To view the status, run the following command: USD oc get co kube-controller-manager Verification To verify that the credentials have changed: In the Administrator perspective of the web console, navigate to Workloads Secrets . Verify that the contents of the Value field or fields have changed. Additional resources vSphere CSI Driver Operator 19.3.4. Reducing permissions after installation When using passthrough mode, each component has the same permissions used by all other components. If you do not reduce the permissions after installing, all components have the broad permissions that are required to run the installer. 
After installation, you can reduce the permissions on your credential to only those that are required to run the cluster, as defined by the CredentialsRequest CRs in the release image for the version of OpenShift Container Platform that you are using. To locate the CredentialsRequest CRs that are required for AWS, Azure, or GCP and learn how to change the permissions the CCO uses, see Manually creating long-term credentials for AWS , Azure , or GCP . 19.3.5. Additional resources Manually creating long-term credentials for AWS Manually creating long-term credentials for Azure Manually creating long-term credentials for GCP 19.4. Manual mode with long-term credentials for components Manual mode is supported for Amazon Web Services (AWS), global Microsoft Azure, Microsoft Azure Stack Hub, Google Cloud Platform (GCP), IBM Cloud(R), and Nutanix. 19.4.1. User-managed credentials In manual mode, a user manages cloud credentials instead of the Cloud Credential Operator (CCO). To use this mode, you must examine the CredentialsRequest CRs in the release image for the version of OpenShift Container Platform that you are running or installing, create corresponding credentials in the underlying cloud provider, and create Kubernetes Secrets in the correct namespaces to satisfy all CredentialsRequest CRs for the cluster's cloud provider. Some platforms use the CCO utility ( ccoctl ) to facilitate this process during installation and updates. Using manual mode with long-term credentials allows each cluster component to have only the permissions it requires, without storing an administrator-level credential in the cluster. This mode also does not require connectivity to services such as the AWS public IAM endpoint. However, you must manually reconcile permissions with new release images for every upgrade. For information about configuring your cloud provider to use manual mode, see the manual credentials management options for your cloud provider. Note An AWS, global Azure, or GCP cluster that uses manual mode might be configured to use short-term credentials for different components. For more information, see Manual mode with short-term credentials for components . 19.4.2. Additional resources Manually creating long-term credentials for AWS Manually creating long-term credentials for Azure Manually creating long-term credentials for GCP Configuring IAM for IBM Cloud(R) Configuring IAM for Nutanix Manual mode with short-term credentials for components Preparing to update a cluster with manually maintained credentials 19.5. Manual mode with short-term credentials for components During installation, you can configure the Cloud Credential Operator (CCO) to operate in manual mode and use the CCO utility ( ccoctl ) to implement short-term security credentials for individual components that are created and managed outside the OpenShift Container Platform cluster. Note This credentials strategy is supported for Amazon Web Services (AWS), Google Cloud Platform (GCP), and global Microsoft Azure only. For AWS and GCP clusters, you must configure your cluster to use this strategy during installation of a new OpenShift Container Platform cluster. You cannot configure an existing AWS or GCP cluster that uses a different credentials strategy to use this feature. If you did not configure your Azure cluster to use Microsoft Entra Workload ID during installation, you can enable this authentication method on an existing cluster . Cloud providers use different terms for their implementation of this authentication method. 
Table 19.3. Short-term credentials provider terminology Cloud provider Provider nomenclature Amazon Web Services (AWS) AWS Security Token Service (STS) Google Cloud Platform (GCP) GCP Workload Identity Global Microsoft Azure Microsoft Entra Workload ID 19.5.1. AWS Security Token Service In manual mode with STS, the individual OpenShift Container Platform cluster components use the AWS Security Token Service (STS) to assign components IAM roles that provide short-term, limited-privilege security credentials. These credentials are associated with IAM roles that are specific to each component that makes AWS API calls. Additional resources Configuring an AWS cluster to use short-term credentials 19.5.1.1. AWS Security Token Service authentication process The AWS Security Token Service (STS) and the AssumeRole API action allow pods to retrieve access keys that are defined by an IAM role policy. The OpenShift Container Platform cluster includes a Kubernetes service account signing service. This service uses a private key to sign service account JSON web tokens (JWT). A pod that requires a service account token requests one through the pod specification. When the pod is created and assigned to a node, the node retrieves a signed service account from the service account signing service and mounts it onto the pod. Clusters that use STS contain an IAM role ID in their Kubernetes configuration secrets. Workloads assume the identity of this IAM role ID. The signed service account token issued to the workload aligns with the configuration in AWS, which allows AWS STS to grant access keys for the specified IAM role to the workload. AWS STS grants access keys only for requests that include service account tokens that meet the following conditions: The token name and namespace match the service account name and namespace. The token is signed by a key that matches the public key. The public key pair for the service account signing key used by the cluster is stored in an AWS S3 bucket. AWS STS federation validates that the service account token signature aligns with the public key stored in the S3 bucket. 19.5.1.1.1. Authentication flow for AWS STS The following diagram illustrates the authentication flow between AWS and the OpenShift Container Platform cluster when using AWS STS. Token signing is the Kubernetes service account signing service on the OpenShift Container Platform cluster. The Kubernetes service account in the pod is the signed service account token. Figure 19.2. AWS Security Token Service authentication flow Requests for new and refreshed credentials are automated by using an appropriately configured AWS IAM OpenID Connect (OIDC) identity provider combined with AWS IAM roles. Service account tokens that are trusted by AWS IAM are signed by OpenShift Container Platform and can be projected into a pod and used for authentication. 19.5.1.1.2. Token refreshing for AWS STS The signed service account token that a pod uses expires after a period of time. For clusters that use AWS STS, this time period is 3600 seconds, or one hour. The kubelet on the node that the pod is assigned to ensures that the token is refreshed. The kubelet attempts to rotate a token when it is older than 80 percent of its time to live. 19.5.1.1.3. OpenID Connect requirements for AWS STS You can store the public portion of the encryption keys for your OIDC configuration in a public or private S3 bucket. The OIDC spec requires the use of HTTPS. 
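Because the OIDC documents must be reachable over public HTTPS, a quick sanity check is to fetch them from the issuer URL that the cluster advertises. This is only a sketch: the issuer URL placeholder must be replaced with your own value, and keys.json is the file name that ccoctl typically uses for the JWKS, so adjust it if your OIDC configuration was created differently.
# Print the issuer URL advertised for service account tokens
oc get authentication cluster -o jsonpath='{.spec.serviceAccountIssuer}'
# Fetch the discovery document and the JWKS that AWS STS uses to validate tokens
curl -s <issuer_url>/.well-known/openid-configuration
curl -s <issuer_url>/keys.json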
AWS services require a public endpoint to expose the OIDC documents in the form of JSON web key set (JWKS) public keys. This allows AWS services to validate the bound tokens signed by Kubernetes and determine whether to trust certificates. As a result, both S3 bucket options require a public HTTPS endpoint and private endpoints are not supported. To use AWS STS, the public AWS backbone for the AWS STS service must be able to communicate with a public S3 bucket or a private S3 bucket with a public CloudFront endpoint. You can choose which type of bucket to use when you process CredentialsRequest objects during installation: By default, the CCO utility ( ccoctl ) stores the OIDC configuration files in a public S3 bucket and uses the S3 URL as the public OIDC endpoint. As an alternative, you can have the ccoctl utility store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL. 19.5.1.2. AWS component secret formats Using manual mode with the AWS Security Token Service (STS) changes the content of the AWS credentials that are provided to individual OpenShift Container Platform components. Compare the following secret formats: AWS secret format using long-term credentials apiVersion: v1 kind: Secret metadata: namespace: <target_namespace> 1 name: <target_secret_name> 2 data: aws_access_key_id: <base64_encoded_access_key_id> aws_secret_access_key: <base64_encoded_secret_access_key> 1 The namespace for the component. 2 The name of the component secret. AWS secret format using AWS STS apiVersion: v1 kind: Secret metadata: namespace: <target_namespace> 1 name: <target_secret_name> 2 stringData: credentials: |- [default] sts_regional_endpoints = regional role_name: <operator_role_name> 3 web_identity_token_file: <path_to_token> 4 1 The namespace for the component. 2 The name of the component secret. 3 The IAM role for the component. 4 The path to the service account token inside the pod. By convention, this is /var/run/secrets/openshift/serviceaccount/token for OpenShift Container Platform components. 19.5.1.3. AWS component secret permissions requirements OpenShift Container Platform components require the following permissions. These values are in the CredentialsRequest custom resource (CR) for each component. Note These permissions apply to all resources. Unless specified, there are no request conditions on these permissions. 
Component Custom resource Required permissions for services Cluster CAPI Operator openshift-cluster-api-aws EC2 ec2:CreateTags ec2:DescribeAvailabilityZones ec2:DescribeDhcpOptions ec2:DescribeImages ec2:DescribeInstances ec2:DescribeInternetGateways ec2:DescribeSecurityGroups ec2:DescribeSubnets ec2:DescribeVpcs ec2:DescribeNetworkInterfaces ec2:DescribeNetworkInterfaceAttribute ec2:ModifyNetworkInterfaceAttribute ec2:RunInstances ec2:TerminateInstances Elastic load balancing elasticloadbalancing:DescribeLoadBalancers elasticloadbalancing:DescribeTargetGroups elasticloadbalancing:DescribeTargetHealth elasticloadbalancing:RegisterInstancesWithLoadBalancer elasticloadbalancing:RegisterTargets elasticloadbalancing:DeregisterTargets Identity and Access Management (IAM) iam:PassRole iam:CreateServiceLinkedRole Key Management Service (KMS) kms:Decrypt kms:Encrypt kms:GenerateDataKey kms:GenerateDataKeyWithoutPlainText kms:DescribeKey kms:RevokeGrant [1] kms:CreateGrant [1] kms:ListGrants [1] Machine API Operator openshift-machine-api-aws EC2 ec2:CreateTags ec2:DescribeAvailabilityZones ec2:DescribeDhcpOptions ec2:DescribeImages ec2:DescribeInstances ec2:DescribeInstanceTypes ec2:DescribeInternetGateways ec2:DescribeSecurityGroups ec2:DescribeRegions ec2:DescribeSubnets ec2:DescribeVpcs ec2:RunInstances ec2:TerminateInstances Elastic load balancing elasticloadbalancing:DescribeLoadBalancers elasticloadbalancing:DescribeTargetGroups elasticloadbalancing:DescribeTargetHealth elasticloadbalancing:RegisterInstancesWithLoadBalancer elasticloadbalancing:RegisterTargets elasticloadbalancing:DeregisterTargets Identity and Access Management (IAM) iam:PassRole iam:CreateServiceLinkedRole Key Management Service (KMS) kms:Decrypt kms:Encrypt kms:GenerateDataKey kms:GenerateDataKeyWithoutPlainText kms:DescribeKey kms:RevokeGrant [1] kms:CreateGrant [1] kms:ListGrants [1] Cloud Credential Operator cloud-credential-operator-iam-ro Identity and Access Management (IAM) iam:GetUser iam:GetUserPolicy iam:ListAccessKeys Cluster Image Registry Operator openshift-image-registry S3 s3:CreateBucket s3:DeleteBucket s3:PutBucketTagging s3:GetBucketTagging s3:PutBucketPublicAccessBlock s3:GetBucketPublicAccessBlock s3:PutEncryptionConfiguration s3:GetEncryptionConfiguration s3:PutLifecycleConfiguration s3:GetLifecycleConfiguration s3:GetBucketLocation s3:ListBucket s3:GetObject s3:PutObject s3:DeleteObject s3:ListBucketMultipartUploads s3:AbortMultipartUpload s3:ListMultipartUploadParts Ingress Operator openshift-ingress Elastic load balancing elasticloadbalancing:DescribeLoadBalancers Route 53 route53:ListHostedZones route53:ListTagsForResources route53:ChangeResourceRecordSets Tag tag:GetResources Security Token Service (STS) sts:AssumeRole Cluster Network Operator openshift-cloud-network-config-controller-aws EC2 ec2:DescribeInstances ec2:DescribeInstanceStatus ec2:DescribeInstanceTypes ec2:UnassignPrivateIpAddresses ec2:AssignPrivateIpAddresses ec2:UnassignIpv6Addresses ec2:AssignIpv6Addresses ec2:DescribeSubnets ec2:DescribeNetworkInterfaces AWS Elastic Block Store CSI Driver Operator aws-ebs-csi-driver-operator EC2 ec2:AttachVolume ec2:CreateSnapshot ec2:CreateTags ec2:CreateVolume ec2:DeleteSnapshot ec2:DeleteTags ec2:DeleteVolume ec2:DescribeInstances ec2:DescribeSnapshots ec2:DescribeTags ec2:DescribeVolumes ec2:DescribeVolumesModifications ec2:DetachVolume ec2:ModifyVolume ec2:DescribeAvailabilityZones ec2:EnableFastSnapshotRestores Key Management Service (KMS) kms:ReEncrypt* kms:Decrypt kms:Encrypt 
kms:GenerateDataKey kms:GenerateDataKeyWithoutPlainText kms:DescribeKey kms:RevokeGrant [1] kms:CreateGrant [1] kms:ListGrants [1] Request condition: kms:GrantIsForAWSResource: true 19.5.1.4. OLM-managed Operator support for authentication with AWS STS In addition to OpenShift Container Platform cluster components, some Operators managed by the Operator Lifecycle Manager (OLM) on AWS clusters can use manual mode with STS. These Operators authenticate with limited-privilege, short-term credentials that are managed outside the cluster. To determine if an Operator supports authentication with AWS STS, see the Operator description in OperatorHub. Additional resources CCO-based workflow for OLM-managed Operators with AWS STS 19.5.2. GCP Workload Identity In manual mode with GCP Workload Identity, the individual OpenShift Container Platform cluster components use the GCP workload identity provider to allow components to impersonate GCP service accounts using short-term, limited-privilege credentials. Additional resources Configuring a GCP cluster to use short-term credentials 19.5.2.1. GCP Workload Identity authentication process Requests for new and refreshed credentials are automated by using an appropriately configured OpenID Connect (OIDC) identity provider combined with IAM service accounts. Service account tokens that are trusted by GCP are signed by OpenShift Container Platform and can be projected into a pod and used for authentication. Tokens are refreshed after one hour. The following diagram details the authentication flow between GCP and the OpenShift Container Platform cluster when using GCP Workload Identity. Figure 19.3. GCP Workload Identity authentication flow 19.5.2.2. GCP component secret formats Using manual mode with GCP Workload Identity changes the content of the GCP credentials that are provided to individual OpenShift Container Platform components. Compare the following secret content: GCP secret format apiVersion: v1 kind: Secret metadata: namespace: <target_namespace> 1 name: <target_secret_name> 2 data: service_account.json: <service_account> 3 1 The namespace for the component. 2 The name of the component secret. 3 The Base64 encoded service account. Content of the Base64 encoded service_account.json file using long-term credentials { "type": "service_account", 1 "project_id": "<project_id>", "private_key_id": "<private_key_id>", "private_key": "<private_key>", 2 "client_email": "<client_email_address>", "client_id": "<client_id>", "auth_uri": "https://accounts.google.com/o/oauth2/auth", "token_uri": "https://oauth2.googleapis.com/token", "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs", "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/<client_email_address>" } 1 The credential type is service_account . 2 The private RSA key that is used to authenticate to GCP. This key must be kept secure and is not rotated. 
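If you are unsure which of the two formats a particular component secret holds, you can decode it and check the type field. This is a hedged example: the secret name and namespace are placeholders taken from the component's CredentialsRequest.
# Decode the service account JSON from a component secret and print its credential type
oc get secret <target_secret_name> -n <target_namespace> \
  -o jsonpath='{.data.service_account\.json}' | base64 -d | jq -r '.type'
A value of service_account indicates long-term credentials, while external_account indicates GCP Workload Identity, as shown in the following example.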
Content of the Base64 encoded service_account.json file using GCP Workload Identity { "type": "external_account", 1 "audience": "//iam.googleapis.com/projects/123456789/locations/global/workloadIdentityPools/test-pool/providers/test-provider", 2 "subject_token_type": "urn:ietf:params:oauth:token-type:jwt", "token_url": "https://sts.googleapis.com/v1/token", "service_account_impersonation_url": "https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/<client_email_address>:generateAccessToken", 3 "credential_source": { "file": "<path_to_token>", 4 "format": { "type": "text" } } } 1 The credential type is external_account . 2 The target audience is the GCP Workload Identity provider. 3 The resource URL of the service account that can be impersonated with these credentials. 4 The path to the service account token inside the pod. By convention, this is /var/run/secrets/openshift/serviceaccount/token for OpenShift Container Platform components. 19.5.3. Microsoft Entra Workload ID In manual mode with Microsoft Entra Workload ID, the individual OpenShift Container Platform cluster components use the Workload ID provider to assign components short-term security credentials. Additional resources Configuring a global Microsoft Azure cluster to use short-term credentials 19.5.3.1. Microsoft Entra Workload ID authentication process The following diagram details the authentication flow between Azure and the OpenShift Container Platform cluster when using Microsoft Entra Workload ID. Figure 19.4. Workload ID authentication flow 19.5.3.2. Azure component secret formats Using manual mode with Microsoft Entra Workload ID changes the content of the Azure credentials that are provided to individual OpenShift Container Platform components. Compare the following secret formats: Azure secret format using long-term credentials apiVersion: v1 kind: Secret metadata: namespace: <target_namespace> 1 name: <target_secret_name> 2 data: azure_client_id: <client_id> 3 azure_client_secret: <client_secret> 4 azure_region: <region> azure_resource_prefix: <resource_group_prefix> 5 azure_resourcegroup: <resource_group_prefix>-rg 6 azure_subscription_id: <subscription_id> azure_tenant_id: <tenant_id> type: Opaque 1 The namespace for the component. 2 The name of the component secret. 3 The client ID of the Microsoft Entra ID identity that the component uses to authenticate. 4 The component secret that is used to authenticate with Microsoft Entra ID for the <client_id> identity. 5 The resource group prefix. 6 The resource group. This value is formed by the <resource_group_prefix> and the suffix -rg . Azure secret format using Microsoft Entra Workload ID apiVersion: v1 kind: Secret metadata: namespace: <target_namespace> 1 name: <target_secret_name> 2 data: azure_client_id: <client_id> 3 azure_federated_token_file: <path_to_token_file> 4 azure_region: <region> azure_subscription_id: <subscription_id> azure_tenant_id: <tenant_id> type: Opaque 1 The namespace for the component. 2 The name of the component secret. 3 The client ID of the user-assigned managed identity that the component uses to authenticate. 4 The path to the mounted service account token file. 19.5.3.3. Azure component secret permissions requirements OpenShift Container Platform components require the following permissions. These values are in the CredentialsRequest custom resource (CR) for each component. 
Component Custom resource Required permissions for services Cloud Controller Manager Operator openshift-azure-cloud-controller-manager Microsoft.Compute/virtualMachines/read Microsoft.Network/loadBalancers/read Microsoft.Network/loadBalancers/write Microsoft.Network/networkInterfaces/read Microsoft.Network/networkSecurityGroups/read Microsoft.Network/networkSecurityGroups/write Microsoft.Network/publicIPAddresses/join/action Microsoft.Network/publicIPAddresses/read Microsoft.Network/publicIPAddresses/write Cluster CAPI Operator openshift-cluster-api-azure role: Contributor [1] Machine API Operator openshift-machine-api-azure Microsoft.Compute/availabilitySets/delete Microsoft.Compute/availabilitySets/read Microsoft.Compute/availabilitySets/write Microsoft.Compute/diskEncryptionSets/read Microsoft.Compute/disks/delete Microsoft.Compute/galleries/images/versions/read Microsoft.Compute/skus/read Microsoft.Compute/virtualMachines/delete Microsoft.Compute/virtualMachines/extensions/delete Microsoft.Compute/virtualMachines/extensions/read Microsoft.Compute/virtualMachines/extensions/write Microsoft.Compute/virtualMachines/read Microsoft.Compute/virtualMachines/write Microsoft.ManagedIdentity/userAssignedIdentities/assign/action Microsoft.Network/applicationSecurityGroups/read Microsoft.Network/loadBalancers/backendAddressPools/join/action Microsoft.Network/loadBalancers/read Microsoft.Network/loadBalancers/write Microsoft.Network/networkInterfaces/delete Microsoft.Network/networkInterfaces/join/action Microsoft.Network/networkInterfaces/loadBalancers/read Microsoft.Network/networkInterfaces/read Microsoft.Network/networkInterfaces/write Microsoft.Network/networkSecurityGroups/read Microsoft.Network/networkSecurityGroups/write Microsoft.Network/publicIPAddresses/delete Microsoft.Network/publicIPAddresses/join/action Microsoft.Network/publicIPAddresses/read Microsoft.Network/publicIPAddresses/write Microsoft.Network/routeTables/read Microsoft.Network/virtualNetworks/delete Microsoft.Network/virtualNetworks/read Microsoft.Network/virtualNetworks/subnets/join/action Microsoft.Network/virtualNetworks/subnets/read Microsoft.Resources/subscriptions/resourceGroups/read Cluster Image Registry Operator openshift-image-registry-azure Data permissions Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action Microsoft.Storage/storageAccounts/blobServices/containers/blobs/move/action General permissions Microsoft.Storage/storageAccounts/blobServices/read Microsoft.Storage/storageAccounts/blobServices/containers/read Microsoft.Storage/storageAccounts/blobServices/containers/write Microsoft.Storage/storageAccounts/blobServices/generateUserDelegationKey/action Microsoft.Storage/storageAccounts/read Microsoft.Storage/storageAccounts/write Microsoft.Storage/storageAccounts/delete Microsoft.Storage/storageAccounts/listKeys/action Microsoft.Resources/tags/write Ingress Operator openshift-ingress-azure Microsoft.Network/dnsZones/A/delete Microsoft.Network/dnsZones/A/write Microsoft.Network/privateDnsZones/A/delete Microsoft.Network/privateDnsZones/A/write Cluster Network Operator openshift-cloud-network-config-controller-azure Microsoft.Network/networkInterfaces/read Microsoft.Network/networkInterfaces/write Microsoft.Compute/virtualMachines/read Microsoft.Network/virtualNetworks/read 
Microsoft.Network/virtualNetworks/subnets/join/action Microsoft.Network/loadBalancers/backendAddressPools/join/action Azure File CSI Driver Operator azure-file-csi-driver-operator Microsoft.Network/networkSecurityGroups/join/action Microsoft.Network/virtualNetworks/subnets/read Microsoft.Network/virtualNetworks/subnets/write Microsoft.Storage/storageAccounts/delete Microsoft.Storage/storageAccounts/fileServices/read Microsoft.Storage/storageAccounts/fileServices/shares/delete Microsoft.Storage/storageAccounts/fileServices/shares/read Microsoft.Storage/storageAccounts/fileServices/shares/write Microsoft.Storage/storageAccounts/listKeys/action Microsoft.Storage/storageAccounts/read Microsoft.Storage/storageAccounts/write Azure Disk CSI Driver Operator azure-disk-csi-driver-operator Microsoft.Compute/disks/* Microsoft.Compute/snapshots/* Microsoft.Compute/virtualMachineScaleSets/*/read Microsoft.Compute/virtualMachineScaleSets/read Microsoft.Compute/virtualMachineScaleSets/virtualMachines/write Microsoft.Compute/virtualMachines/*/read Microsoft.Compute/virtualMachines/write Microsoft.Resources/subscriptions/resourceGroups/read This component requires a role rather than a set of permissions. 19.5.3.4. OLM-managed Operator support for authentication with Microsoft Entra Workload ID In addition to OpenShift Container Platform cluster components, some Operators managed by the Operator Lifecycle Manager (OLM) on Azure clusters can use manual mode with Microsoft Entra Workload ID. These Operators authenticate with short-term credentials that are managed outside the cluster. To determine if an Operator supports authentication with Workload ID, see the Operator description in OperatorHub. Additional resources CCO-based workflow for OLM-managed Operators with Microsoft Entra Workload ID 19.5.4. Additional resources Configuring an AWS cluster to use short-term credentials Configuring a GCP cluster to use short-term credentials Configuring a global Microsoft Azure cluster to use short-term credentials Preparing to update a cluster with manually maintained credentials
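If you need to confirm which credentials mode a running cluster uses before following any of the procedures in this chapter, you can query the CloudCredential resource and the annotation on the root secret. Both commands also appear in the command listing at the end of this document; the secret name is the root secret for your platform, for example aws-creds on AWS.
# Show the explicitly configured CCO mode (empty output means the default mode is in use)
oc get cloudcredentials cluster -o=jsonpath={.spec.credentialsMode}
# Inspect the root secret annotation that records the mode the CCO determined it can use
oc get secret <secret_name> -n kube-system -o jsonpath --template '{ .metadata.annotations }'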
[ "oc get cloudcredentials cluster -o=jsonpath={.spec.credentialsMode}", "oc get secret <secret_name> -n kube-system -o jsonpath --template '{ .metadata.annotations }'", "oc get secret <secret_name> -n=kube-system", "oc get authentication cluster -o jsonpath --template='{ .spec.serviceAccountIssuer }'", "apiVersion: v1 kind: Secret metadata: namespace: kube-system name: aws-creds stringData: aws_access_key_id: <base64-encoded_access_key_id> aws_secret_access_key: <base64-encoded_secret_access_key>", "apiVersion: v1 kind: Secret metadata: namespace: kube-system name: gcp-credentials stringData: service_account.json: <base64-encoded_service_account>", "oc -n openshift-cloud-credential-operator get CredentialsRequest -o json | jq -r '.items[] | select (.spec.providerSpec.kind==\"<provider_spec>\") | .spec.secretRef'", "{ \"name\": \"ebs-cloud-credentials\", \"namespace\": \"openshift-cluster-csi-drivers\" } { \"name\": \"cloud-credential-operator-iam-ro-creds\", \"namespace\": \"openshift-cloud-credential-operator\" }", "oc delete secret <secret_name> \\ 1 -n <secret_namespace> 2", "oc delete secret ebs-cloud-credentials -n openshift-cluster-csi-drivers", "apiVersion: v1 kind: Secret metadata: namespace: kube-system name: aws-creds stringData: aws_access_key_id: <base64-encoded_access_key_id> aws_secret_access_key: <base64-encoded_secret_access_key>", "apiVersion: v1 kind: Secret metadata: namespace: kube-system name: azure-credentials stringData: azure_subscription_id: <base64-encoded_subscription_id> azure_client_id: <base64-encoded_client_id> azure_client_secret: <base64-encoded_client_secret> azure_tenant_id: <base64-encoded_tenant_id> azure_resource_prefix: <base64-encoded_resource_prefix> azure_resourcegroup: <base64-encoded_resource_group> azure_region: <base64-encoded_region>", "cat .openshift_install_state.json | jq '.\"*installconfig.ClusterID\".InfraID' -r", "mycluster-2mpcn", "azure_resource_prefix: mycluster-2mpcn azure_resourcegroup: mycluster-2mpcn-rg", "apiVersion: v1 kind: Secret metadata: namespace: kube-system name: gcp-credentials stringData: service_account.json: <base64-encoded_service_account>", "apiVersion: v1 kind: Secret metadata: namespace: kube-system name: openstack-credentials data: clouds.yaml: <base64-encoded_cloud_creds> clouds.conf: <base64-encoded_cloud_creds_init>", "apiVersion: v1 kind: Secret metadata: namespace: kube-system name: vsphere-creds data: vsphere.openshift.example.com.username: <base64-encoded_username> vsphere.openshift.example.com.password: <base64-encoded_password>", "oc patch kubecontrollermanager cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date )\"'\"}}' --type=merge", "oc get co kube-controller-manager", "apiVersion: v1 kind: Secret metadata: namespace: <target_namespace> 1 name: <target_secret_name> 2 data: aws_access_key_id: <base64_encoded_access_key_id> aws_secret_access_key: <base64_encoded_secret_access_key>", "apiVersion: v1 kind: Secret metadata: namespace: <target_namespace> 1 name: <target_secret_name> 2 stringData: credentials: |- [default] sts_regional_endpoints = regional role_name: <operator_role_name> 3 web_identity_token_file: <path_to_token> 4", "apiVersion: v1 kind: Secret metadata: namespace: <target_namespace> 1 name: <target_secret_name> 2 data: service_account.json: <service_account> 3", "{ \"type\": \"service_account\", 1 \"project_id\": \"<project_id>\", \"private_key_id\": \"<private_key_id>\", \"private_key\": \"<private_key>\", 2 \"client_email\": \"<client_email_address>\", 
\"client_id\": \"<client_id>\", \"auth_uri\": \"https://accounts.google.com/o/oauth2/auth\", \"token_uri\": \"https://oauth2.googleapis.com/token\", \"auth_provider_x509_cert_url\": \"https://www.googleapis.com/oauth2/v1/certs\", \"client_x509_cert_url\": \"https://www.googleapis.com/robot/v1/metadata/x509/<client_email_address>\" }", "{ \"type\": \"external_account\", 1 \"audience\": \"//iam.googleapis.com/projects/123456789/locations/global/workloadIdentityPools/test-pool/providers/test-provider\", 2 \"subject_token_type\": \"urn:ietf:params:oauth:token-type:jwt\", \"token_url\": \"https://sts.googleapis.com/v1/token\", \"service_account_impersonation_url\": \"https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/<client_email_address>:generateAccessToken\", 3 \"credential_source\": { \"file\": \"<path_to_token>\", 4 \"format\": { \"type\": \"text\" } } }", "apiVersion: v1 kind: Secret metadata: namespace: <target_namespace> 1 name: <target_secret_name> 2 data: azure_client_id: <client_id> 3 azure_client_secret: <client_secret> 4 azure_region: <region> azure_resource_prefix: <resource_group_prefix> 5 azure_resourcegroup: <resource_group_prefix>-rg 6 azure_subscription_id: <subscription_id> azure_tenant_id: <tenant_id> type: Opaque", "apiVersion: v1 kind: Secret metadata: namespace: <target_namespace> 1 name: <target_secret_name> 2 data: azure_client_id: <client_id> 3 azure_federated_token_file: <path_to_token_file> 4 azure_region: <region> azure_subscription_id: <subscription_id> azure_tenant_id: <tenant_id> type: Opaque" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/authentication_and_authorization/managing-cloud-provider-credentials
Chapter 6. Understanding identity provider configuration
Chapter 6. Understanding identity provider configuration The OpenShift Container Platform master includes a built-in OAuth server. Developers and administrators obtain OAuth access tokens to authenticate themselves to the API. As an administrator, you can configure OAuth to specify an identity provider after you install your cluster. 6.1. About identity providers in OpenShift Container Platform By default, only a kubeadmin user exists on your cluster. To specify an identity provider, you must create a custom resource (CR) that describes that identity provider and add it to the cluster. Note OpenShift Container Platform user names containing / , : , and % are not supported. 6.2. Supported identity providers You can configure the following types of identity providers: Identity provider Description htpasswd Configure the htpasswd identity provider to validate user names and passwords against a flat file generated using htpasswd . Keystone Configure the keystone identity provider to integrate your OpenShift Container Platform cluster with Keystone to enable shared authentication with an OpenStack Keystone v3 server configured to store users in an internal database. LDAP Configure the ldap identity provider to validate user names and passwords against an LDAPv3 server, using simple bind authentication. Basic authentication Configure a basic-authentication identity provider for users to log in to OpenShift Container Platform with credentials validated against a remote identity provider. Basic authentication is a generic backend integration mechanism. Request header Configure a request-header identity provider to identify users from request header values, such as X-Remote-User . It is typically used in combination with an authenticating proxy, which sets the request header value. GitHub or GitHub Enterprise Configure a github identity provider to validate user names and passwords against GitHub or GitHub Enterprise's OAuth authentication server. GitLab Configure a gitlab identity provider to use GitLab.com or any other GitLab instance as an identity provider. Google Configure a google identity provider using Google's OpenID Connect integration . OpenID Connect Configure an oidc identity provider to integrate with an OpenID Connect identity provider using an Authorization Code Flow . Once an identity provider has been defined, you can use RBAC to define and apply permissions . 6.3. Removing the kubeadmin user After you define an identity provider and create a new cluster-admin user, you can remove the kubeadmin to improve cluster security. Warning If you follow this procedure before another user is a cluster-admin , then OpenShift Container Platform must be reinstalled. It is not possible to undo this command. Prerequisites You must have configured at least one identity provider. You must have added the cluster-admin role to a user. You must be logged in as an administrator. Procedure Remove the kubeadmin secrets: USD oc delete secrets kubeadmin -n kube-system 6.4. Identity provider parameters The following parameters are common to all identity providers: Parameter Description name The provider name is prefixed to provider user names to form an identity name. mappingMethod Defines how new identities are mapped to users when they log in. Enter one of the following values: claim The default value. Provisions a user with the identity's preferred user name. Fails if a user with that user name is already mapped to another identity. 
lookup Looks up an existing identity, user identity mapping, and user, but does not automatically provision users or identities. This allows cluster administrators to set up identities and users manually, or using an external process. Using this method requires you to manually provision users. add Provisions a user with the identity's preferred user name. If a user with that user name already exists, the identity is mapped to the existing user, adding to any existing identity mappings for the user. Required when multiple identity providers are configured that identify the same set of users and map to the same user names. Note When adding or changing identity providers, you can map identities from the new provider to existing users by setting the mappingMethod parameter to add . 6.5. Sample identity provider CR The following custom resource (CR) shows the parameters and default values that you use to configure an identity provider. This example uses the htpasswd identity provider. Sample identity provider CR apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: my_identity_provider 1 mappingMethod: claim 2 type: HTPasswd htpasswd: fileData: name: htpass-secret 3 1 This provider name is prefixed to provider user names to form an identity name. 2 Controls how mappings are established between this provider's identities and User objects. 3 An existing secret containing a file generated using htpasswd . 6.6. Manually provisioning a user when using the lookup mapping method Typically, identities are automatically mapped to users during login. The lookup mapping method disables this automatic mapping, which requires you to provision users manually. If you are using the lookup mapping method, use the following procedure for each user after configuring the identity provider. Prerequisites You have installed the OpenShift CLI ( oc ). Procedure Create an OpenShift Container Platform user: USD oc create user <username> Create an OpenShift Container Platform identity: USD oc create identity <identity_provider>:<identity_provider_user_id> Where <identity_provider_user_id> is a name that uniquely represents the user in the identity provider. Create a user identity mapping for the created user and identity: USD oc create useridentitymapping <identity_provider>:<identity_provider_user_id> <username> Additional resources How to create user, identity and map user and identity in LDAP authentication for mappingMethod as lookup inside the OAuth manifest How to create user, identity and map user and identity in OIDC authentication for mappingMethod as lookup
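For illustration only, the following sequence provisions a hypothetical user bob for the my_identity_provider provider shown in the sample CR above; substitute your own provider name and the identifier that uniquely represents the user in that provider.
# Create the OpenShift Container Platform user
oc create user bob
# Create the identity using the provider name and the provider-specific user ID
oc create identity my_identity_provider:bob
# Map the identity to the user
oc create useridentitymapping my_identity_provider:bob bob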
[ "oc delete secrets kubeadmin -n kube-system", "apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: my_identity_provider 1 mappingMethod: claim 2 type: HTPasswd htpasswd: fileData: name: htpass-secret 3", "oc create user <username>", "oc create identity <identity_provider>:<identity_provider_user_id>", "oc create useridentitymapping <identity_provider>:<identity_provider_user_id> <username>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/authentication_and_authorization/understanding-identity-provider
Chapter 2. Using OpenID Connect to secure applications and services
Chapter 2. Using OpenID Connect to secure applications and services This section describes how you can secure applications and services with OpenID Connect using Red Hat build of Keycloak. 2.1. Available Endpoints As a fully-compliant OpenID Connect Provider implementation, Red Hat build of Keycloak exposes a set of endpoints that applications and services can use to authenticate and authorize their users. This section describes some of the key endpoints that your application and service should use when interacting with Red Hat build of Keycloak. 2.1.1. Endpoints The most important endpoint to understand is the well-known configuration endpoint. It lists endpoints and other configuration options relevant to the OpenID Connect implementation in Red Hat build of Keycloak. The endpoint is /realms/{realm-name}/.well-known/openid-configuration . To obtain the full URL, add the base URL for Red Hat build of Keycloak and replace {realm-name} with the name of your realm. For example: http://localhost:8080/realms/master/.well-known/openid-configuration Some RP libraries retrieve all required endpoints from this endpoint, but for others you might need to list the endpoints individually. 2.1.1.1. Authorization endpoint The authorization endpoint performs authentication of the end-user. This authentication is done by redirecting the user agent to this endpoint. For more details, see the Authorization Endpoint section in the OpenID Connect specification. 2.1.1.2. Token endpoint The token endpoint is used to obtain tokens. Tokens can either be obtained by exchanging an authorization code or by supplying credentials directly, depending on what flow is used. The token endpoint is also used to obtain new access tokens when they expire. For more details, see the Token Endpoint section in the OpenID Connect specification. 2.1.1.3. Userinfo endpoint The userinfo endpoint returns standard claims about the authenticated user; this endpoint is protected by a bearer token. For more details, see the Userinfo Endpoint section in the OpenID Connect specification. 2.1.1.4. Logout endpoint The logout endpoint logs out the authenticated user. The user agent can be redirected to the endpoint, which causes the active user session to be logged out. The user agent is then redirected back to the application. The endpoint can also be invoked directly by the application. To invoke this endpoint directly, the refresh token needs to be included as well as the credentials required to authenticate the client. 2.1.1.5. Certificate endpoint The certificate endpoint returns the public keys enabled by the realm, encoded as a JSON Web Key (JWK). Depending on the realm settings, one or more keys can be enabled for verifying tokens. For more information, see the Server Administration Guide and the JSON Web Key specification . 2.1.1.6. Introspection endpoint The introspection endpoint is used to retrieve the active state of a token. In other words, you can use it to validate an access or refresh token. This endpoint can only be invoked by confidential clients. For more details on how to invoke this endpoint, see the OAuth 2.0 Token Introspection specification . 2.1.1.7. Dynamic Client Registration endpoint The dynamic client registration endpoint is used to dynamically register clients. For more details, see the Client Registration chapter and the OpenID Connect Dynamic Client Registration specification . 2.1.1.8. Token Revocation endpoint The token revocation endpoint is used to revoke tokens. Both refresh tokens and access tokens are supported by this endpoint.
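As a sketch of a direct invocation, a confidential client can revoke one of its refresh tokens with a form-encoded POST. The client ID and secret below are placeholders in the style of the examples later in this chapter, and the exact URL for your realm is published as the revocation_endpoint in the well-known configuration.
# Revoke a refresh token on behalf of the confidential client that it was issued to
curl \
  -d "client_id=myclient" \
  -d "client_secret=<client_secret>" \
  -d "token=<refresh_token>" \
  -d "token_type_hint=refresh_token" \
  "http://localhost:8080/realms/master/protocol/openid-connect/revoke"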
When revoking a refresh token, the user consent for the corresponding client is also revoked. For more details on how to invoke this endpoint, see the OAuth 2.0 Token Revocation specification . 2.1.1.9. Device Authorization endpoint The device authorization endpoint is used to obtain a device code and a user code. It can be invoked by confidential or public clients. For more details on how to invoke this endpoint, see the OAuth 2.0 Device Authorization Grant specification . 2.1.1.10. Backchannel Authentication endpoint The backchannel authentication endpoint is used to obtain an auth_req_id that identifies the authentication request made by the client. It can only be invoked by confidential clients. For more details on how to invoke this endpoint, see the OpenID Connect Client Initiated Backchannel Authentication Flow specification . Also refer to other parts of the Red Hat build of Keycloak documentation, such as the Client Initiated Backchannel Authentication Grant section of this guide and the Client Initiated Backchannel Authentication Grant section of the Server Administration Guide. 2.2. Supported Grant Types This section describes the different grant types available to relying parties. 2.2.1. Authorization code The Authorization Code flow redirects the user agent to Red Hat build of Keycloak. Once the user has successfully authenticated with Red Hat build of Keycloak, an Authorization Code is created and the user agent is redirected back to the application. The application then uses the authorization code along with its credentials to obtain an Access Token, Refresh Token and ID Token from Red Hat build of Keycloak. The flow is targeted towards web applications, but is also recommended for native applications, including mobile applications, where it is possible to embed a user agent. For more details, refer to the Authorization Code Flow in the OpenID Connect specification. 2.2.2. Implicit The Implicit flow works similarly to the Authorization Code flow, but instead of returning an Authorization Code, the Access Token and ID Token are returned. This approach reduces the need for the extra invocation to exchange the Authorization Code for an Access Token. However, it does not include a Refresh Token. This results in the need to permit Access Tokens with a long expiration; however, that approach is not practical because it is very hard to invalidate these tokens. Alternatively, you can require a new redirect to obtain a new Access Token once the initial Access Token has expired. The Implicit flow is useful if the application only wants to authenticate the user and deals with logout itself. You can instead use a Hybrid flow where both the Access Token and an Authorization Code are returned. One thing to note is that both the Implicit flow and Hybrid flow have potential security risks as the Access Token may be leaked through web server logs and browser history. You can somewhat mitigate this problem by using short expiration for Access Tokens. For more details, see the Implicit Flow in the OpenID Connect specification. 2.2.3. Resource Owner Password Credentials Resource Owner Password Credentials, referred to as Direct Grant in Red Hat build of Keycloak, allows exchanging user credentials for tokens. Using this flow is not recommended unless it is essential. Examples where this flow could be useful are legacy applications and command-line interfaces.
The limitations of using this flow include: User credentials are exposed to the application Applications need login pages Applications need to be aware of the authentication scheme Changes to the authentication flow require changes to the application No support for identity brokering or social login Flows are not supported (user self-registration, required actions, and so on.) For a client to be permitted to use the Resource Owner Password Credentials grant, the client has to have the Direct Access Grants Enabled option enabled. This flow is not included in OpenID Connect, but is a part of the OAuth 2.0 specification. For more details, see the Resource Owner Password Credentials Grant chapter in the OAuth 2.0 specification. 2.2.3.1. Example using CURL The following example shows how to obtain an access token for a user in the realm master with username user and password password . The example is using the confidential client myclient : curl \ -d "client_id=myclient" \ -d "client_secret=40cc097b-2a57-4c17-b36a-8fdf3fc2d578" \ -d "username=user" \ -d "password=password" \ -d "grant_type=password" \ "http://localhost:8080/realms/master/protocol/openid-connect/token" 2.2.4. Client credentials Client Credentials are used when clients (applications and services) want to obtain access on behalf of themselves rather than on behalf of a user. For example, these credentials can be useful for background services that apply changes to the system in general rather than for a specific user. Red Hat build of Keycloak provides support for clients to authenticate either with a secret or with public/private keys. This flow is not included in OpenID Connect, but is a part of the OAuth 2.0 specification. For more details, see the Client Credentials Grant chapter in the OAuth 2.0 specification. 2.2.5. Device Authorization Grant Device Authorization Grant is used by clients running on internet-connected devices that have limited input capabilities or lack a suitable browser. The flow works as follows: The application requests a device code and a user code from Red Hat build of Keycloak. Red Hat build of Keycloak creates a device code and a user code. Red Hat build of Keycloak returns a response including the device code and the user code to the application. The application provides the user with the user code and the verification URI. The user accesses the verification URI to be authenticated by using another browser. The application repeatedly polls Red Hat build of Keycloak until Red Hat build of Keycloak completes the user authorization. If user authentication is complete, the application obtains the device code. The application uses the device code along with its credentials to obtain an Access Token, Refresh Token and ID Token from Red Hat build of Keycloak. For more details, see the OAuth 2.0 Device Authorization Grant specification . 2.2.6. Client Initiated Backchannel Authentication Grant Client Initiated Backchannel Authentication Grant is used by clients who want to initiate the authentication flow by communicating with the OpenID Provider directly, without redirecting through the user's browser as the OAuth 2.0 authorization code grant does. The client requests from Red Hat build of Keycloak an auth_req_id that identifies the authentication request made by the client. Red Hat build of Keycloak creates the auth_req_id.
After receiving this auth_req_id, this client repeatedly needs to poll Red Hat build of Keycloak to obtain an Access Token, Refresh Token, and ID Token from Red Hat build of Keycloak in return for the auth_req_id until the user is authenticated. In case that client uses ping mode, it does not need to repeatedly poll the token endpoint, but it can wait for the notification sent by Red Hat build of Keycloak to the specified Client Notification Endpoint. The Client Notification Endpoint can be configured in the Red Hat build of Keycloak Admin Console. The details of the contract for Client Notification Endpoint are described in the CIBA specification. For more details, see OpenID Connect Client Initiated Backchannel Authentication Flow specification . Also refer to other places of Red Hat build of Keycloak documentation such as Backchannel Authentication Endpoint of this guide and Client Initiated Backchannel Authentication Grant section of Server Administration Guide. For the details about FAPI CIBA compliance, see the FAPI section of this guide . 2.3. Red Hat build of Keycloak Java adapters 2.3.1. Red Hat JBoss Enterprise Application Platform Red Hat build of Keycloak does not include any adapters for Red Hat JBoss Enterprise Application Platform. However, there are alternatives for existing applications deployed to Red Hat JBoss Enterprise Application Platform. 2.3.1.1. 8.0 Beta Red Hat Enterprise Application Platform 8.0 Beta provides a native OpenID Connect client through the Elytron OIDC client subsystem. For more information, see the Red Hat JBoss Enterprise Application Platform documentation . 2.3.1.2. 6.4 and 7.x Existing applications deployed to Red Hat JBoss Enterprise Application Platform 6.4 and 7.x can leverage adapters from Red Hat Single Sign-On 7.6 in combination with the Red Hat build of Keycloak server. For more information, see the Red Hat Single Sign-On documentation . 2.3.2. Spring Boot adapter Red Hat build of Keycloak does not include any adapters for Spring Boot. However, there are alternatives for existing applications built with Spring Boot. Spring Security provides comprehensive support for OAuth 2 and OpenID Connect. For more information, see the Spring Security documentation . Alternatively, for Spring Boot 2.x the Spring Boot adapter from Red Hat Single Sign-On 7.6 can be used in combination with the Red Hat build of Keycloak server. For more information, see the Red Hat Single Sign-On documentation . 2.4. Red Hat build of Keycloak JavaScript adapter Red Hat build of Keycloak comes with a client-side JavaScript library called keycloak-js that can be used to secure web applications. The adapter also comes with built-in support for Cordova applications. 2.4.1. Installation The adapter is distributed in several ways, but we recommend that you install the keycloak-js package from NPM: npm install keycloak-js Alternatively, the library can be retrieved directly from the Red Hat build of Keycloak server at /js/keycloak.js and is also distributed as a ZIP archive. We are however considering the inclusion of the adapter directly from the Keycloak server as deprecated, and this functionality might be removed in the future. 2.4.2. Red Hat build of Keycloak server configuration One important thing to consider about using client-side applications is that the client has to be a public client as there is no secure way to store client credentials in a client-side application. 
This consideration makes it very important to make sure the redirect URIs you have configured for the client are correct and as specific as possible. To use the adapter, create a client for your application in the Red Hat build of Keycloak Admin Console. Make the client public by toggling Client authentication to Off on the Capability config page. You also need to configure Valid Redirect URIs and Web Origins . Be as specific as possible as failing to do so may result in a security vulnerability. 2.4.3. Using the adapter The following example shows how to initialize the adapter. Make sure that you replace the options passed to the Keycloak constructor with those of the client you have configured. import Keycloak from 'keycloak-js'; const keycloak = new Keycloak({ url: 'http://keycloak-serverUSD{kc_base_path}', realm: 'myrealm', clientId: 'myapp' }); try { const authenticated = await keycloak.init(); console.log(`User is USD{authenticated ? 'authenticated' : 'not authenticated'}`); } catch (error) { console.error('Failed to initialize adapter:', error); } To authenticate, you call the login function. Two options exist to make the adapter automatically authenticate. You can pass login-required or check-sso to the init() function. login-required authenticates the client if the user is logged in to Red Hat build of Keycloak or displays the login page if the user is not logged in. check-sso only authenticates the client if the user is already logged in. If the user is not logged in, the browser is redirected back to the application and remains unauthenticated. You can configure a silent check-sso option. With this feature enabled, your browser will not perform a full redirect to the Red Hat build of Keycloak server and back to your application, but this action will be performed in a hidden iframe. Therefore, your application resources are only loaded and parsed once by the browser, namely when the application is initialized and not again after the redirect back from Red Hat build of Keycloak to your application. This approach is particularly useful in case of SPAs (Single Page Applications). To enable the silent check-sso , you provide a silentCheckSsoRedirectUri attribute in the init method. Make sure this URI is a valid endpoint in the application; it must be configured as a valid redirect for the client in the Red Hat build of Keycloak Admin Console: keycloak.init({ onLoad: 'check-sso', silentCheckSsoRedirectUri: `USD{location.origin}/silent-check-sso.html` }); The page at the silent check-sso redirect uri is loaded in the iframe after successfully checking your authentication state and retrieving the tokens from the Red Hat build of Keycloak server. It has no other task than sending the received tokens to the main application and should only look like this: <!doctype html> <html> <body> <script> parent.postMessage(location.href, location.origin); </script> </body> </html> Remember that this page must be served by your application at the specified location in silentCheckSsoRedirectUri and is not part of the adapter. Warning Silent check-sso functionality is limited in some modern browsers. Please see the Modern Browsers with Tracking Protection Section . To enable login-required set onLoad to login-required and pass to the init method: keycloak.init({ onLoad: 'login-required' }); After the user is authenticated the application can make requests to RESTful services secured by Red Hat build of Keycloak by including the bearer token in the Authorization header. 
For example: async function fetchUsers() { const response = await fetch('/api/users', { headers: { accept: 'application/json', authorization: `Bearer USD{keycloak.token}` } }); return response.json(); } One thing to keep in mind is that the access token by default has a short life expiration so you may need to refresh the access token prior to sending the request. You refresh this token by calling the updateToken() method. This method returns a Promise, which makes it easy to invoke the service only if the token was successfully refreshed and displays an error to the user if it was not refreshed. For example: try { await keycloak.updateToken(30); } catch (error) { console.error('Failed to refresh token:', error); } const users = await fetchUsers(); Note Both access and refresh token are stored in memory and are not persisted in any kind of storage. Therefore, these tokens should never be persisted to prevent hijacking attacks. 2.4.4. Session Status iframe By default, the adapter creates a hidden iframe that is used to detect if a Single-Sign Out has occurred. This iframe does not require any network traffic. Instead the status is retrieved by looking at a special status cookie. This feature can be disabled by setting checkLoginIframe: false in the options passed to the init() method. You should not rely on looking at this cookie directly. Its format can change and it's also associated with the URL of the Red Hat build of Keycloak server, not your application. Warning Session Status iframe functionality is limited in some modern browsers. Please see Modern Browsers with Tracking Protection Section . 2.4.5. Implicit and hybrid flow By default, the adapter uses the Authorization Code flow. With this flow, the Red Hat build of Keycloak server returns an authorization code, not an authentication token, to the application. The JavaScript adapter exchanges the code for an access token and a refresh token after the browser is redirected back to the application. Red Hat build of Keycloak also supports the Implicit flow where an access token is sent immediately after successful authentication with Red Hat build of Keycloak. This flow may have better performance than the standard flow because no additional request exists to exchange the code for tokens, but it has implications when the access token expires. However, sending the access token in the URL fragment can be a security vulnerability. For example the token could be leaked through web server logs and or browser history. To enable implicit flow, you enable the Implicit Flow Enabled flag for the client in the Red Hat build of Keycloak Admin Console. You also pass the parameter flow with the value implicit to init method: keycloak.init({ flow: 'implicit' }) Note that only an access token is provided and no refresh token exists. This situation means that once the access token has expired, the application has to redirect to Red Hat build of Keycloak again to obtain a new access token. Red Hat build of Keycloak also supports the Hybrid flow. This flow requires the client to have both the Standard Flow and Implicit Flow enabled in the Admin Console. The Red Hat build of Keycloak server then sends both the code and tokens to your application. The access token can be used immediately while the code can be exchanged for access and refresh tokens. Similar to the implicit flow, the hybrid flow is good for performance because the access token is available immediately. 
But, the token is still sent in the URL, and the security vulnerability mentioned earlier may still apply. One advantage in the Hybrid flow is that the refresh token is made available to the application. For the Hybrid flow, you need to pass the parameter flow with value hybrid to the init method: keycloak.init({ flow: 'hybrid' }); 2.4.6. Hybrid Apps with Cordova Red Hat build of Keycloak supports hybrid mobile apps developed with Apache Cordova . The adapter has two modes for this: cordova and cordova-native : The default is cordova , which the adapter automatically selects if no adapter type has been explicitly configured and window.cordova is present. When logging in, it opens an InApp Browser that lets the user interact with Red Hat build of Keycloak and afterwards returns to the app by redirecting to http://localhost . Because of this behavior, you whitelist this URL as a valid redirect-uri in the client configuration section of the Admin Console. While this mode is easy to set up, it also has some disadvantages: The InApp-Browser is a browser embedded in the app and is not the phone's default browser. Therefore it will have different settings and stored credentials will not be available. The InApp-Browser might also be slower, especially when rendering more complex themes. There are security concerns to consider, before using this mode, such as that it is possible for the app to gain access to the credentials of the user, as it has full control of the browser rendering the login page, so do not allow its use in apps you do not trust. Use this example app to help you get started: https://github.com/keycloak/keycloak/tree/master/examples/cordova The alternative mode is`cordova-native`, which takes a different approach. It opens the login page using the system's browser. After the user has authenticated, the browser redirects back into the application using a special URL. From there, the Red Hat build of Keycloak adapter can finish the login by reading the code or token from the URL. You can activate the native mode by passing the adapter type cordova-native to the init() method: keycloak.init({ adapter: 'cordova-native' }); This adapter requires two additional plugins: cordova-plugin-browsertab : allows the app to open webpages in the system's browser cordova-plugin-deeplinks : allow the browser to redirect back to your app by special URLs The technical details for linking to an app differ on each platform and special setup is needed. Please refer to the Android and iOS sections of the deeplinks plugin documentation for further instructions. Different kinds of links exist for opening apps: * custom schemes, such as myapp://login or android-app://com.example.myapp/https/example.com/login * Universal Links (iOS) ) / Deep Links (Android) . While the former are easier to set up and tend to work more reliably, the latter offer extra security because they are unique and only the owner of a domain can register them. Custom-URLs are deprecated on iOS. For best reliability, we recommend that you use universal links combined with a fallback site that uses a custom-url link. 
Furthermore, we recommend the following steps to improve compatibility with the adapter: Universal Links on iOS seem to work more reliably with response-mode set to query . To prevent Android from opening a new instance of your app on redirect, add the following snippet to config.xml : <preference name="AndroidLaunchMode" value="singleTask" /> There is an example app that shows how to use the native mode: https://github.com/keycloak/keycloak/tree/master/examples/cordova-native 2.4.7. Custom Adapters In some situations, you may need to run the adapter in environments that are not supported by default, such as Capacitor. To use the JavaScript client in these environments, you can pass a custom adapter. For example, a third-party library could provide such an adapter to make it possible to reliably run the adapter: import Keycloak from 'keycloak-js'; import KeycloakCapacitorAdapter from 'keycloak-capacitor-adapter'; const keycloak = new Keycloak(); keycloak.init({ adapter: KeycloakCapacitorAdapter, }); This specific package does not exist, but it gives a good example of how such an adapter could be passed into the client. It is also possible to make your own adapter; to do so, you have to implement the methods described in the KeycloakAdapter interface. For example, the following TypeScript code ensures that all the methods are properly implemented: import Keycloak, { KeycloakAdapter } from 'keycloak-js'; // Implement the 'KeycloakAdapter' interface so that all required methods are guaranteed to be present. const MyCustomAdapter: KeycloakAdapter = { login(options) { // Write your own implementation here. } // The other methods go here... }; const keycloak = new Keycloak(); keycloak.init({ adapter: MyCustomAdapter, }); Naturally, you can also do this without TypeScript by omitting the type information, but ensuring that the interface is implemented properly is then left entirely up to you. 2.4.8. Modern Browsers with Tracking Protection In the latest versions of some browsers, various cookie policies are applied to prevent tracking of users by third parties, such as SameSite in Chrome or completely blocked third-party cookies. Those policies are likely to become more restrictive and to be adopted by other browsers over time. Eventually, cookies in third-party contexts may become completely unsupported and blocked by the browsers. As a result, the affected adapter features might ultimately be deprecated. The adapter relies on third-party cookies for the Session Status iframe, silent check-sso , and partially also for regular (non-silent) check-sso . Those features have limited functionality or are completely disabled based on how restrictive the browser is regarding cookies. The adapter tries to detect this setting and reacts accordingly. 2.4.8.1. Browsers with "SameSite=Lax by Default" Policy All features are supported if an SSL / TLS connection is configured on the Red Hat build of Keycloak side as well as on the application side. For example, Chrome is affected starting with version 84. 2.4.8.2. Browsers with Blocked Third-Party Cookies The Session Status iframe is not supported and is automatically disabled if such browser behavior is detected by the adapter. This means the adapter cannot use a session cookie for Single Sign-Out detection and must rely purely on tokens. As a result, when a user logs out in another window, the application using the adapter will not be logged out until the application tries to refresh the Access Token.
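Because single sign-out can only be noticed at refresh time in these browsers, an application may choose to refresh the token proactively. The following is only a sketch under that assumption; the 60-second interval is an arbitrary illustrative value, not a recommendation.

// Periodically force a token refresh; if it fails, assume the session was terminated elsewhere.
setInterval(async () => {
  try {
    await keycloak.updateToken(-1);  // -1 forces a refresh even if the token is still valid
  } catch (error) {
    console.warn('Token refresh failed; the SSO session has probably ended', error);
    keycloak.clearToken();           // clears local state and invokes the onAuthLogout callback
  }
}, 60000);

Even with such a workaround, the logout is only detected when a refresh actually happens.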
Therefore, consider setting the Access Token Lifespan to a relatively short time, so that the logout is detected as soon as possible. For more details, see Session and Token Timeouts . Silent check-sso is not supported and falls back to regular (non-silent) check-sso by default. This behavior can be changed by setting silentCheckSsoFallback: false in the options passed to the init method. In this case, check-sso will be completely disabled if restrictive browser behavior is detected. Regular check-sso is affected as well. Since Session Status iframe is unsupported, an additional redirect to Red Hat build of Keycloak has to be made when the adapter is initialized to check the user's login status. This check is different from the standard behavior when the iframe is used to tell whether the user is logged in, and the redirect is performed only when the user is logged out. An affected browser is for example Safari starting with version 13.1. 2.4.9. API Reference 2.4.9.1. Constructor new Keycloak(); new Keycloak('http://localhost/keycloak.json'); new Keycloak({ url: 'http://localhost', realm: 'myrealm', clientId: 'myApp' }); 2.4.9.2. Properties authenticated Is true if the user is authenticated, false otherwise. token The base64 encoded token that can be sent in the Authorization header in requests to services. tokenParsed The parsed token as a JavaScript object. subject The user id. idToken The base64 encoded ID token. idTokenParsed The parsed id token as a JavaScript object. realmAccess The realm roles associated with the token. resourceAccess The resource roles associated with the token. refreshToken The base64 encoded refresh token that can be used to retrieve a new token. refreshTokenParsed The parsed refresh token as a JavaScript object. timeSkew The estimated time difference between the browser time and the Red Hat build of Keycloak server in seconds. This value is just an estimation, but is accurate enough when determining if a token is expired or not. responseMode Response mode passed in init (default value is fragment). flow Flow passed in init. adapter Allows you to override the way that redirects and other browser-related functions will be handled by the library. Available options: "default" - the library uses the browser api for redirects (this is the default) "cordova" - the library will try to use the InAppBrowser cordova plugin to load keycloak login/registration pages (this is used automatically when the library is working in a cordova ecosystem) "cordova-native" - the library tries to open the login and registration page using the phone's system browser using the BrowserTabs cordova plugin. This requires extra setup for redirecting back to the app (see Section 2.4.6, "Hybrid Apps with Cordova" ). "custom" - allows you to implement a custom adapter (only for advanced use cases) responseType Response type sent to Red Hat build of Keycloak with login requests. This is determined based on the flow value used during initialization, but can be overridden by setting this value. 2.4.9.3. Methods init(options) Called to initialize the adapter. Options is an Object, where: useNonce - Adds a cryptographic nonce to verify that the authentication response matches the request (default is true ). onLoad - Specifies an action to do on load. Supported values are login-required or check-sso . silentCheckSsoRedirectUri - Set the redirect uri for silent authentication check if onLoad is set to 'check-sso'. 
silentCheckSsoFallback - Enables fall back to regular check-sso when silent check-sso is not supported by the browser (default is true ). token - Set an initial value for the token. refreshToken - Set an initial value for the refresh token. idToken - Set an initial value for the id token (only together with token or refreshToken). scope - Set the default scope parameter to the Red Hat build of Keycloak login endpoint. Use a space-delimited list of scopes. Those typically reference Client scopes defined on a particular client. Note that the scope openid will always be added to the list of scopes by the adapter. For example, if you enter the scope options address phone , then the request to Red Hat build of Keycloak will contain the scope parameter scope=openid address phone . Note that the default scope specified here is overwritten if the login() options specify scope explicitly. timeSkew - Set an initial value for skew between local time and Red Hat build of Keycloak server in seconds (only together with token or refreshToken). checkLoginIframe - Set to enable/disable monitoring login state (default is true ). checkLoginIframeInterval - Set the interval to check login state (default is 5 seconds). responseMode - Set the OpenID Connect response mode send to Red Hat build of Keycloak server at login request. Valid values are query or fragment . Default value is fragment , which means that after successful authentication will Red Hat build of Keycloak redirect to JavaScript application with OpenID Connect parameters added in URL fragment. This is generally safer and recommended over query . flow - Set the OpenID Connect flow. Valid values are standard , implicit or hybrid . enableLogging - Enables logging messages from Keycloak to the console (default is false ). pkceMethod - The method for Proof Key Code Exchange ( PKCE ) to use. Configuring this value enables the PKCE mechanism. Available options: "S256" - The SHA256 based PKCE method scope - Used to forward the scope parameter to the Red Hat build of Keycloak login endpoint. Use a space-delimited list of scopes. Those typically reference Client scopes defined on a particular client. Note that the scope openid is always added to the list of scopes by the adapter. For example, if you enter the scope options address phone , then the request to Red Hat build of Keycloak will contain the scope parameter scope=openid address phone . messageReceiveTimeout - Set a timeout in milliseconds for waiting for message responses from the Keycloak server. This is used, for example, when waiting for a message during 3rd party cookies check. The default value is 10000. locale - When onLoad is 'login-required', sets the 'ui_locales' query param in compliance with section 3.1.2.1 of the OIDC 1.0 specification . Returns a promise that resolves when initialization completes. login(options) Redirects to login form. Options is an optional Object, where: redirectUri - Specifies the uri to redirect to after login. prompt - This parameter allows to slightly customize the login flow on the Red Hat build of Keycloak server side. For example enforce displaying the login screen in case of value login . See Parameters Forwarding Section for the details and all the possible values of the prompt parameter. maxAge - Used just if user is already authenticated. Specifies maximum time since the authentication of user happened. If user is already authenticated for longer time than maxAge , the SSO is ignored and he will need to re-authenticate again. 
loginHint - Used to pre-fill the username/email field on the login form. scope - Override the scope configured in init with a different value for this specific login. idpHint - Used to tell Red Hat build of Keycloak to skip showing the login page and automatically redirect to the specified identity provider instead. More info in the Identity Provider documentation . acr - Contains the information about acr claim, which will be sent inside claims parameter to the Red Hat build of Keycloak server. Typical usage is for step-up authentication. Example of use { values: ["silver", "gold"], essential: true } . See OpenID Connect specification and Step-up authentication documentation for more details. action - If the value is register , the user is redirected to the registration page. See Registration requested by client section for more details. If the value is UPDATE_PASSWORD or another supported required action, the user will be redirected to the reset password page or the other required action page. However, if the user is not authenticated, the user will be sent to the login page and redirected after authentication. See Application Initiated Action section for more details. locale - Sets the 'ui_locales' query param in compliance with section 3.1.2.1 of the OIDC 1.0 specification . cordovaOptions - Specifies the arguments that are passed to the Cordova in-app-browser (if applicable). Options hidden and location are not affected by these arguments. All available options are defined at https://cordova.apache.org/docs/en/latest/reference/cordova-plugin-inappbrowser/ . Example of use: { zoom: "no", hardwareback: "yes" } ; createLoginUrl(options) Returns the URL to login form. Options is an optional Object, which supports same options as the function login . logout(options) Redirects to logout. Options is an Object, where: redirectUri - Specifies the uri to redirect to after logout. createLogoutUrl(options) Returns the URL to log out the user. Options is an Object, where: redirectUri - Specifies the uri to redirect to after logout. register(options) Redirects to registration form. Shortcut for login with option action = 'register' Options are same as for the login method but 'action' is set to 'register' createRegisterUrl(options) Returns the url to registration page. Shortcut for createLoginUrl with option action = 'register' Options are same as for the createLoginUrl method but 'action' is set to 'register' accountManagement() Redirects to the Account Management Console. createAccountUrl(options) Returns the URL to the Account Management Console. Options is an Object, where: redirectUri - Specifies the uri to redirect to when redirecting back to the application. hasRealmRole(role) Returns true if the token has the given realm role. hasResourceRole(role, resource) Returns true if the token has the given role for the resource (resource is optional, if not specified clientId is used). loadUserProfile() Loads the users profile. Returns a promise that resolves with the profile. For example: try { const profile = await keycloak.loadUserProfile(); console.log('Retrieved user profile:', profile); } catch (error) { console.error('Failed to load user profile:', error); } isTokenExpired(minValidity) Returns true if the token has less than minValidity seconds left before it expires (minValidity is optional, if not specified 0 is used). updateToken(minValidity) If the token expires within minValidity seconds (minValidity is optional, if not specified 5 is used) the token is refreshed. 
If -1 is passed as the minValidity, the token will be forcibly refreshed. If the session status iframe is enabled, the session status is also checked. Returns a promise that resolves with a boolean indicating whether or not the token has been refreshed. For example: try { const refreshed = await keycloak.updateToken(5); console.log(refreshed ? 'Token was refreshed' : 'Token is still valid'); } catch (error) { console.error('Failed to refresh the token:', error); } clearToken() Clear authentication state, including tokens. This can be useful if application has detected the session was expired, for example if updating token fails. Invoking this results in onAuthLogout callback listener being invoked. 2.4.9.4. Callback Events The adapter supports setting callback listeners for certain events. Keep in mind that these have to be set before the call to the init() method. For example: keycloak.onAuthSuccess = () => console.log('Authenticated!'); The available events are: onReady(authenticated) - Called when the adapter is initialized. onAuthSuccess - Called when a user is successfully authenticated. onAuthError - Called if there was an error during authentication. onAuthRefreshSuccess - Called when the token is refreshed. onAuthRefreshError - Called if there was an error while trying to refresh the token. onAuthLogout - Called if the user is logged out (will only be called if the session status iframe is enabled, or in Cordova mode). onTokenExpired - Called when the access token is expired. If a refresh token is available the token can be refreshed with updateToken, or in cases where it is not (that is, with implicit flow) you can redirect to the login screen to obtain a new access token. 2.5. Red Hat build of Keycloak Node.js adapter Red Hat build of Keycloak provides a Node.js adapter built on top of Connect to protect server-side JavaScript apps - the goal was to be flexible enough to integrate with frameworks like Express.js . To use the Node.js adapter, first you must create a client for your application in the Red Hat build of Keycloak Admin Console. The adapter supports public, confidential, and bearer-only access type. Which one to choose depends on the use-case scenario. Once the client is created click the Installation tab, select Red Hat build of Keycloak OIDC JSON for Format Option , and then click Download . The downloaded keycloak.json file should be at the root folder of your project. 2.5.1. Installation Assuming you've already installed Node.js , create a folder for your application: Use npm init command to create a package.json for your application. Now add the Red Hat build of Keycloak connect adapter in the dependencies list: "dependencies": { "keycloak-connect": "file:keycloak-connect-22.0.13+redhat-00001.tgz" } 2.5.2. Usage Instantiate a Keycloak class The Keycloak class provides a central point for configuration and integration with your application. The simplest creation involves no arguments. 
In the root directory of your project, create a file called server.js and add the following code: const session = require('express-session'); const Keycloak = require('keycloak-connect'); const memoryStore = new session.MemoryStore(); const keycloak = new Keycloak({ store: memoryStore }); Install the express-session dependency: To start the server.js script, add the following command in the 'scripts' section of the package.json : Now we can run our server with the following command: By default, this will locate a file named keycloak.json alongside the main executable of your application, in our case in the root folder, to initialize Red Hat build of Keycloak specific settings such as the public key, realm name, and various URLs. In that case, a Red Hat build of Keycloak deployment is necessary to access the Red Hat build of Keycloak Admin Console. Please visit the links on how to deploy a Red Hat build of Keycloak Admin Console with Podman or Docker. Now we are ready to obtain the keycloak.json file: in the Red Hat build of Keycloak Admin Console, open Clients in the left sidebar, choose your client, click the Installation tab, select Keycloak OIDC JSON as the Format Option , and click Download . Paste the downloaded file into the root folder of our project. Instantiation with this method results in all the reasonable defaults being used. As an alternative, it is also possible to provide a configuration object, rather than the keycloak.json file: const kcConfig = { clientId: 'myclient', bearerOnly: true, serverUrl: 'http://localhost:8080', realm: 'myrealm', realmPublicKey: 'MIIBIjANB...' }; const keycloak = new Keycloak({ store: memoryStore }, kcConfig); Applications can also redirect users to their preferred identity provider by using: const keycloak = new Keycloak({ store: memoryStore, idpHint: myIdP }, kcConfig); Configuring a web session store If you want to use web sessions to manage server-side state for authentication, you need to initialize Keycloak(...) with at least a store parameter, passing in the actual session store that express-session is using. const session = require('express-session'); const memoryStore = new session.MemoryStore(); // Configure session app.use( session({ secret: 'mySecret', resave: false, saveUninitialized: true, store: memoryStore, }) ); const keycloak = new Keycloak({ store: memoryStore }); Passing a custom scope value By default, the scope value openid is passed as a query parameter to Red Hat build of Keycloak's login URL, but you can add an additional custom value: const keycloak = new Keycloak({ scope: 'offline_access' }); 2.5.3. Installing middleware Once instantiated, install the middleware into your connect-capable app: In order to do so, first we have to install Express: then require Express in our project as outlined below: const express = require('express'); const app = express(); and configure the Keycloak middleware in Express by adding the code below: app.use( keycloak.middleware() ); Last but not least, let's set up our server to listen for HTTP requests on port 3000 by adding the following code to server.js : app.listen(3000, function () { console.log('App listening on port 3000'); }); 2.5.4. Configuration for proxies If the application is running behind a proxy that terminates an SSL connection, Express must be configured per the express behind proxies guide. Using an incorrect proxy configuration can result in invalid redirect URIs being generated. Example configuration: const app = express(); app.set( 'trust proxy', true ); app.use( keycloak.middleware() );
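Putting the pieces from the preceding sections together, a minimal server.js might look like the following sketch. The route, session secret, and port are illustrative values, and keycloak.protect() is described in the next section.

const express = require('express');
const session = require('express-session');
const Keycloak = require('keycloak-connect');

const app = express();
const memoryStore = new session.MemoryStore();

// Session middleware backed by the same store that the Keycloak object uses.
app.use(session({
  secret: 'mySecret',          // illustrative value; use a proper secret in production
  resave: false,
  saveUninitialized: true,
  store: memoryStore
}));

// Reads keycloak.json from the project root by default.
const keycloak = new Keycloak({ store: memoryStore });
app.use(keycloak.middleware());

// A route that requires authentication; keycloak.protect() is covered in the next section.
app.get('/protected', keycloak.protect(), function (req, res) {
  res.send('Authenticated!');
});

app.listen(3000, function () {
  console.log('App listening on port 3000');
});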
2.5.5. Protecting resources Simple authentication To enforce that a user must be authenticated before accessing a resource, simply use a no-argument version of keycloak.protect() : app.get( '/complain', keycloak.protect(), complaintHandler ); Role-based authorization To secure a resource with an application role for the current app: app.get( '/special', keycloak.protect('special'), specialHandler ); To secure a resource with an application role for a different app: app.get( '/extra-special', keycloak.protect('other-app:special'), extraSpecialHandler ); To secure a resource with a realm role: app.get( '/admin', keycloak.protect( 'realm:admin' ), adminHandler ); Resource-Based Authorization Resource-Based Authorization allows you to protect resources, and their specific methods and actions, based on a set of policies defined in Keycloak, thus externalizing authorization from your application. This is achieved by exposing a keycloak.enforcer method which you can use to protect resources. app.get('/apis/me', keycloak.enforcer('user:profile'), userProfileHandler); The keycloak.enforcer method operates in two modes, depending on the value of the response_mode configuration option. app.get('/apis/me', keycloak.enforcer('user:profile', {response_mode: 'token'}), userProfileHandler); If response_mode is set to token , permissions are obtained from the server on behalf of the subject represented by the bearer token that was sent to your application. In this case, a new access token is issued by Keycloak with the permissions granted by the server. If the server did not respond with a token with the expected permissions, the request is denied. When using this mode, you should be able to obtain the token from the request as follows: app.get('/apis/me', keycloak.enforcer('user:profile', {response_mode: 'token'}), function (req, res) { const token = req.kauth.grant.access_token.content; const permissions = token.authorization ? token.authorization.permissions : undefined; // show user profile }); Prefer this mode when your application is using sessions and you want to cache decisions from the server, as well as automatically handle refresh tokens. This mode is especially useful for applications acting as both a client and a resource server. If response_mode is set to permissions (the default mode), the server only returns the list of granted permissions, without issuing a new access token. In addition to not issuing a new token, this method exposes the permissions granted by the server through the request as follows: app.get('/apis/me', keycloak.enforcer('user:profile', {response_mode: 'permissions'}), function (req, res) { const permissions = req.permissions; // show user profile }); Regardless of the response_mode in use, the keycloak.enforcer method will first try to check the permissions within the bearer token that was sent to your application. If the bearer token already carries the expected permissions, there is no need to interact with the server to obtain a decision. This is especially useful when your clients are capable of obtaining access tokens from the server with the expected permissions before accessing a protected resource, so they can use some capabilities provided by Keycloak Authorization Services such as incremental authorization and avoid additional requests to the server when keycloak.enforcer is enforcing access to the resource.
By default, the policy enforcer will use the client_id defined for the application (for instance, via keycloak.json ) to reference a client in Keycloak that supports Keycloak Authorization Services. In this case, the client cannot be public, given that it is actually a resource server. If your application is acting as both a public client (frontend) and a resource server (backend), you can use the following configuration to reference a different client in Keycloak with the policies that you want to enforce: keycloak.enforcer('user:profile', {resource_server_id: 'my-apiserver'}) It is recommended to use distinct clients in Keycloak to represent your frontend and backend. If the application you are protecting is enabled with Keycloak authorization services and you have defined client credentials in keycloak.json , you can push additional claims to the server and make them available to your policies in order to make decisions. For that, you can define a claims configuration option which expects a function that returns a JSON object with the claims you want to push: app.get('/protected/resource', keycloak.enforcer(['resource:view', 'resource:write'], { claims: function(request) { return { "http.uri": ["/protected/resource"], "user.agent": // get user agent from request } } }), function (req, res) { // access granted For more details about how to configure Keycloak to protect your application resources, please take a look at the Authorization Services Guide . Advanced authorization To secure resources based on parts of the URL itself, assuming a role exists for each section: function protectBySection(token, request) { return token.hasRole( request.params.section ); } app.get( '/:section/:page', keycloak.protect( protectBySection ), sectionHandler ); Advanced Login Configuration: By default, all unauthorized requests will be redirected to the Red Hat build of Keycloak login page unless your client is bearer-only. However, a confidential or public client may host both browsable and API endpoints. To prevent redirects on unauthenticated API requests and instead return an HTTP 401, you can override the redirectToLogin function. For example, this override checks if the URL contains /api/ and disables login redirects: Keycloak.prototype.redirectToLogin = function(req) { const apiReqMatcher = /\/api\//i; return !apiReqMatcher.test(req.originalUrl || req.url); }; 2.5.6. Additional URLs Explicit user-triggered logout By default, the middleware catches calls to /logout to send the user through a Red Hat build of Keycloak-centric logout workflow. This can be changed by specifying a logout configuration parameter to the middleware() call: app.use( keycloak.middleware( { logout: '/logoff' } )); When the user-triggered logout is invoked, a query parameter redirect_url can be passed: This parameter is then used as the redirect url of the OIDC logout endpoint and the user will be redirected to https://example.com/logged/out . Red Hat build of Keycloak Admin Callbacks Also, the middleware supports callbacks from the Red Hat build of Keycloak console to log out a single session or all sessions. By default, these types of admin callbacks occur relative to the root URL of / but can be changed by providing an admin parameter to the middleware() call: app.use( keycloak.middleware( { admin: '/callbacks' } ); 2.5.7. Complete example A complete example of using the Node.js adapter can be found in the Keycloak quickstarts for Node.js 2.6.
Financial-grade API (FAPI) Support Red Hat build of Keycloak makes it easier for administrators to make sure that their clients are compliant with these specifications: Financial-grade API Security Profile 1.0 - Part 1: Baseline Financial-grade API Security Profile 1.0 - Part 2: Advanced Financial-grade API: Client Initiated Backchannel Authentication Profile (FAPI CIBA) This compliance means that the Red Hat build of Keycloak server will verify the requirements for the authorization server, which are mentioned in the specifications. Red Hat build of Keycloak adapters do not have any specific support for the FAPI, hence the required validations on the client (application) side may need to be still done manually or through some other third-party solutions. 2.6.1. FAPI client profiles To make sure that your clients are FAPI compliant, you can configure Client Policies in your realm as described in the Server Administration Guide and link them to the global client profiles for FAPI support, which are automatically available in each realm. You can use either fapi-1-baseline or fapi-1-advanced profile based on which FAPI profile you need your clients to conform with. In case you want to use Pushed Authorization Request (PAR) , it is recommended that your client use both the fapi-1-baseline profile and fapi-1-advanced for PAR requests. Specifically, the fapi-1-baseline profile contains pkce-enforcer executor, which makes sure that client use PKCE with secured S256 algorithm. This is not required for FAPI Advanced clients unless they use PAR requests. In case you want to use CIBA in a FAPI compliant way, make sure that your clients use both fapi-1-advanced and fapi-ciba client profiles. There is a need to use the fapi-1-advanced profile, or other client profile containing the requested executors, as the fapi-ciba profile contains just CIBA-specific executors. When enforcing the requirements of the FAPI CIBA specification, there is a need for more requirements, such as enforcement of confidential clients or certificate-bound access tokens. 2.6.2. Open Finance Brasil Financial-grade API Security Profile Red Hat build of Keycloak is compliant with the Open Finance Brasil Financial-grade API Security Profile 1.0 Implementers Draft 3 . This one is stricter in some requirements than the FAPI 1 Advanced specification and hence it may be needed to configure Client Policies in the more strict way to enforce some of the requirements. Especially: If your client does not use PAR, make sure that it uses encrypted OIDC request objects. This can be achieved by using a client profile with the secure-request-object executor configured with Encryption Required enabled. Make sure that for JWS, the client uses the PS256 algorithm. For JWE, the client should use the RSA-OAEP with A256GCM . This may need to be set in all the Client Settings where these algorithms are applicable. 2.6.3. TLS considerations As confidential information is being exchanged, all interactions shall be encrypted with TLS (HTTPS). Moreover, there are some requirements in the FAPI specification for the cipher suites and TLS protocol versions used. To match these requirements, you can consider configure allowed ciphers. This configuration can be done by setting the https-protocols and https-cipher-suites options. Red Hat build of Keycloak uses TLSv1.3 by default and hence it is possibly not needed to change the default settings. However it may be needed to adjust ciphers if you need to fall back to lower TLS version for some reason. 
For more details, see the Configuring TLS chapter. 2.7. Recommendations This section describes some recommendations for securing your applications with Red Hat build of Keycloak. 2.7.1. Validating access tokens If you need to manually validate access tokens issued by Red Hat build of Keycloak, you can invoke the Introspection Endpoint . The downside to this approach is that you have to make a network invocation to the Red Hat build of Keycloak server. This can be slow and can possibly overload the server if you have too many validation requests going on at the same time. Access tokens issued by Red Hat build of Keycloak are JSON Web Tokens (JWT), digitally signed and encoded using JSON Web Signature (JWS) . Because they are encoded in this way, you can locally validate access tokens using the public key of the issuing realm. You can either hard-code the realm's public key in your validation code, or look up and cache the public key using the certificate endpoint with the Key ID (KID) embedded within the JWS. Depending on what language you code in, many third-party libraries exist that can help you with JWS validation; a sketch of this approach is included after the redirect URI recommendations below. 2.7.2. Redirect URIs When using the redirect based flows, be sure to use valid redirect uris for your clients. The redirect uris should be as specific as possible. This especially applies to client-side (public client) applications. Failing to do so could result in: Open redirects - this can allow attackers to create spoof links that look like they come from your domain Unauthorized entry - when users are already authenticated with Red Hat build of Keycloak, an attacker can use a public client where redirect uris have not been configured correctly to gain access by redirecting the user without the user's knowledge In production, web applications must always use https for all redirect URIs. Do not allow redirects to http. A few special redirect URIs also exist: http://127.0.0.1 This redirect URI is useful for native applications and allows the native application to create a web server on a random port that can be used to obtain the authorization code. This redirect uri allows any port. Note that per OAuth 2.0 for Native Apps , the use of localhost is not recommended and the IP literal 127.0.0.1 should be used instead. urn:ietf:wg:oauth:2.0:oob If you cannot start a web server in the client (or a browser is not available), you can use the special urn:ietf:wg:oauth:2.0:oob redirect uri. When this redirect uri is used, Red Hat build of Keycloak displays a page with the code in the title and in a box on the page. The application can either detect that the browser title has changed, or the user can copy and paste the code manually to the application. With this redirect uri, a user can use a different device to obtain a code to paste back to the application.
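To illustrate the local validation recommendation from Section 2.7.1, here is a hedged sketch using the third-party jose library, which is only one of many possible choices; the realm URL is an assumed example value, and the keys are fetched from the realm's standard certificate endpoint.

import { createRemoteJWKSet, jwtVerify } from 'jose';

// Assumed base URL of the issuing realm.
const issuer = 'https://keycloak.example.com/realms/myrealm';

// Look up and cache the realm keys from the certificate endpoint; the right key
// is selected by the "kid" header embedded in the JWS.
const jwks = createRemoteJWKSet(new URL(`${issuer}/protocol/openid-connect/certs`));

async function validateAccessToken(accessToken) {
  // Throws if the signature, issuer, or expiry checks fail.
  const { payload } = await jwtVerify(accessToken, jwks, { issuer });
  return payload; // contains claims such as sub, realm_access, and resource_access
}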
[ "/realms/{realm-name}/.well-known/openid-configuration", "/realms/{realm-name}/protocol/openid-connect/auth", "/realms/{realm-name}/protocol/openid-connect/token", "/realms/{realm-name}/protocol/openid-connect/userinfo", "/realms/{realm-name}/protocol/openid-connect/logout", "/realms/{realm-name}/protocol/openid-connect/certs", "/realms/{realm-name}/protocol/openid-connect/token/introspect", "/realms/{realm-name}/clients-registrations/openid-connect", "/realms/{realm-name}/protocol/openid-connect/revoke", "/realms/{realm-name}/protocol/openid-connect/auth/device", "/realms/{realm-name}/protocol/openid-connect/ext/ciba/auth", "curl -d \"client_id=myclient\" -d \"client_secret=40cc097b-2a57-4c17-b36a-8fdf3fc2d578\" -d \"username=user\" -d \"password=password\" -d \"grant_type=password\" \"http://localhost:8080/realms/master/protocol/openid-connect/token\"", "npm install keycloak-js", "import Keycloak from 'keycloak-js'; const keycloak = new Keycloak({ url: 'http://keycloak-serverUSD{kc_base_path}', realm: 'myrealm', clientId: 'myapp' }); try { const authenticated = await keycloak.init(); console.log(`User is USD{authenticated ? 'authenticated' : 'not authenticated'}`); } catch (error) { console.error('Failed to initialize adapter:', error); }", "keycloak.init({ onLoad: 'check-sso', silentCheckSsoRedirectUri: `USD{location.origin}/silent-check-sso.html` });", "<!doctype html> <html> <body> <script> parent.postMessage(location.href, location.origin); </script> </body> </html>", "keycloak.init({ onLoad: 'login-required' });", "async function fetchUsers() { const response = await fetch('/api/users', { headers: { accept: 'application/json', authorization: `Bearer USD{keycloak.token}` } }); return response.json(); }", "try { await keycloak.updateToken(30); } catch (error) { console.error('Failed to refresh token:', error); } const users = await fetchUsers();", "keycloak.init({ flow: 'implicit' })", "keycloak.init({ flow: 'hybrid' });", "keycloak.init({ adapter: 'cordova-native' });", "<preference name=\"AndroidLaunchMode\" value=\"singleTask\" />", "import Keycloak from 'keycloak-js'; import KeycloakCapacitorAdapter from 'keycloak-capacitor-adapter'; const keycloak = new Keycloak(); keycloak.init({ adapter: KeycloakCapacitorAdapter, });", "import Keycloak, { KeycloakAdapter } from 'keycloak-js'; // Implement the 'KeycloakAdapter' interface so that all required methods are guaranteed to be present. const MyCustomAdapter: KeycloakAdapter = { login(options) { // Write your own implementation here. } // The other methods go here }; const keycloak = new Keycloak(); keycloak.init({ adapter: MyCustomAdapter, });", "new Keycloak(); new Keycloak('http://localhost/keycloak.json'); new Keycloak({ url: 'http://localhost', realm: 'myrealm', clientId: 'myApp' });", "try { const profile = await keycloak.loadUserProfile(); console.log('Retrieved user profile:', profile); } catch (error) { console.error('Failed to load user profile:', error); }", "try { const refreshed = await keycloak.updateToken(5); console.log(refreshed ? 
'Token was refreshed' : 'Token is still valid'); } catch (error) { console.error('Failed to refresh the token:', error); }", "keycloak.onAuthSuccess = () => console.log('Authenticated!');", "mkdir myapp && cd myapp", "\"dependencies\": { \"keycloak-connect\": \"file:keycloak-connect-22.0.13+redhat-00001.tgz\" }", "const session = require('express-session'); const Keycloak = require('keycloak-connect'); const memoryStore = new session.MemoryStore(); const keycloak = new Keycloak({ store: memoryStore });", "npm install express-session", "\"scripts\": { \"test\": \"echo \\\"Error: no test specified\\\" && exit 1\", \"start\": \"node server.js\" },", "npm run start", "const kcConfig = { clientId: 'myclient', bearerOnly: true, serverUrl: 'http://localhost:8080', realm: 'myrealm', realmPublicKey: 'MIIBIjANB...' }; const keycloak = new Keycloak({ store: memoryStore }, kcConfig);", "const keycloak = new Keycloak({ store: memoryStore, idpHint: myIdP }, kcConfig);", "const session = require('express-session'); const memoryStore = new session.MemoryStore(); // Configure session app.use( session({ secret: 'mySecret', resave: false, saveUninitialized: true, store: memoryStore, }) ); const keycloak = new Keycloak({ store: memoryStore });", "const keycloak = new Keycloak({ scope: 'offline_access' });", "npm install express", "const express = require('express'); const app = express();", "app.use( keycloak.middleware() );", "app.listen(3000, function () { console.log('App listening on port 3000'); });", "const app = express(); app.set( 'trust proxy', true ); app.use( keycloak.middleware() );", "app.get( '/complain', keycloak.protect(), complaintHandler );", "app.get( '/special', keycloak.protect('special'), specialHandler );", "app.get( '/extra-special', keycloak.protect('other-app:special'), extraSpecialHandler );", "app.get( '/admin', keycloak.protect( 'realm:admin' ), adminHandler );", "app.get('/apis/me', keycloak.enforcer('user:profile'), userProfileHandler);", "app.get('/apis/me', keycloak.enforcer('user:profile', {response_mode: 'token'}), userProfileHandler);", "app.get('/apis/me', keycloak.enforcer('user:profile', {response_mode: 'token'}), function (req, res) { const token = req.kauth.grant.access_token.content; const permissions = token.authorization ? token.authorization.permissions : undefined; // show user profile });", "app.get('/apis/me', keycloak.enforcer('user:profile', {response_mode: 'permissions'}), function (req, res) { const permissions = req.permissions; // show user profile });", "keycloak.enforcer('user:profile', {resource_server_id: 'my-apiserver'})", "app.get('/protected/resource', keycloak.enforcer(['resource:view', 'resource:write'], { claims: function(request) { return { \"http.uri\": [\"/protected/resource\"], \"user.agent\": // get user agent from request } } }), function (req, res) { // access granted", "function protectBySection(token, request) { return token.hasRole( request.params.section ); } app.get( '/:section/:page', keycloak.protect( protectBySection ), sectionHandler );", "Keycloak.prototype.redirectToLogin = function(req) { const apiReqMatcher = /\\/api\\//i; return !apiReqMatcher.test(req.originalUrl || req.url); };", "app.use( keycloak.middleware( { logout: '/logoff' } ));", "https://example.com/logoff?redirect_url=https%3A%2F%2Fexample.com%3A3000%2Flogged%2Fout", "app.use( keycloak.middleware( { admin: '/callbacks' } );" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/22.0/html/securing_applications_and_services_guide/oidc
5.2. Booting into Rescue Mode
5.2. Booting into Rescue Mode Rescue mode provides the ability to boot a small Red Hat Enterprise Linux environment entirely from CD-ROM, or some other boot method, instead of the system's hard drive. As the name implies, rescue mode is provided to rescue you from something. During normal operation, your Red Hat Enterprise Linux system uses files located on your system's hard drive to do everything - run programs, store your files, and more. However, there may be times when you are unable to get Red Hat Enterprise Linux running completely enough to access files on your system's hard drive. Using rescue mode, you can access the files stored on your system's hard drive, even if you cannot actually run Red Hat Enterprise Linux from that hard drive. To boot into rescue mode, you must be able to boot the system using one of the following methods [1] : By booting the system from an installation boot CD-ROM. By booting the system from other installation boot media, such as USB flash devices. By booting the system from the Red Hat Enterprise Linux CD-ROM #1. Once you have booted using one of the described methods, add the keyword rescue as a kernel parameter. For example, for an x86 system, type the following command at the installation boot prompt: You are prompted to answer a few basic questions, including which language to use. It also prompts you to select where a valid rescue image is located. Select from Local CD-ROM , Hard Drive , NFS image , FTP , or HTTP . The location selected must contain a valid installation tree, and the installation tree must be for the same version of Red Hat Enterprise Linux as the Red Hat Enterprise Linux disk from which you booted. If you used a boot CD-ROM or other media to start rescue mode, the installation tree must be from the same tree from which the media was created. For more information about how to setup an installation tree on a hard drive, NFS server, FTP server, or HTTP server, refer to the earlier section of this guide. If you select a rescue image that does not require a network connection, you are asked whether or not you want to establish a network connection. A network connection is useful if you need to backup files to a different computer or install some RPM packages from a shared network location, for example. The following message is displayed: The rescue environment will now attempt to find your Linux installation and mount it under the directory /mnt/sysimage. You can then make any changes required to your system. If you want to proceed with this step choose 'Continue'. You can also choose to mount your file systems read-only instead of read-write by choosing 'Read-only'. If for some reason this process fails you can choose 'Skip' and this step will be skipped and you will go directly to a command shell. If you select Continue , it attempts to mount your file system under the directory /mnt/sysimage/ . If it fails to mount a partition, it notifies you. If you select Read-Only , it attempts to mount your file system under the directory /mnt/sysimage/ , but in read-only mode. If you select Skip , your file system is not mounted. Choose Skip if you think your file system is corrupted. Once you have your system in rescue mode, a prompt appears on VC (virtual console) 1 and VC 2 (use the Ctrl - Alt - F1 key combination to access VC 1 and Ctrl - Alt - F2 to access VC 2): If you selected Continue to mount your partitions automatically and they were mounted successfully, you are in single-user mode. 
Even if your file system is mounted, the default root partition while in rescue mode is a temporary root partition, not the root partition of the file system used during normal user mode (runlevel 3 or 5). If you selected to mount your file system and it mounted successfully, you can change the root partition of the rescue mode environment to the root partition of your file system by executing the following command: This is useful if you need to run commands such as rpm that require your root partition to be mounted as / . To exit the chroot environment, type exit to return to the prompt. If you selected Skip , you can still try to mount a partition or LVM2 logical volume manually inside rescue mode by creating a directory such as /foo , and typing the following command: In the above command, /foo is a directory that you have created and /dev/mapper/VolGroup00-LogVol02 is the LVM2 logical volume you want to mount. If the partition is of type ext2 , replace ext3 with ext2 . If you do not know the names of all physical partitions, use the following command to list them: If you do not know the names of all LVM2 physical volumes, volume groups, or logical volumes, use the following commands to list them: From the prompt, you can run many useful commands, such as: ssh , scp , and ping if the network is started dump and restore for users with tape drives parted and fdisk for managing partitions rpm for installing or upgrading software joe for editing configuration files Note If you try to start other popular editors such as emacs , pico , or vi , the joe editor is started. 5.2.1. Reinstalling the Boot Loader In many cases, the GRUB boot loader can mistakenly be deleted, corrupted, or replaced by other operating systems. The following steps detail the process on how GRUB is reinstalled on the master boot record: Boot the system from an installation boot medium. Type linux rescue at the installation boot prompt to enter the rescue environment. Type chroot /mnt/sysimage to mount the root partition. Type /sbin/grub-install /dev/hda to reinstall the GRUB boot loader, where /dev/hda is the boot partition. Review the /boot/grub/grub.conf file, as additional entries may be needed for GRUB to control additional operating systems. Reboot the system. [1] Refer to the earlier sections of this guide for more details.
[ "linux rescue", "sh-3.00b#", "chroot /mnt/sysimage", "mount -t ext3 /dev/mapper/VolGroup00-LogVol02 /foo", "fdisk -l", "pvdisplay", "vgdisplay", "lvdisplay" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/s1-rescuemode-boot
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Use the Create Issue form in Red Hat Jira to provide your feedback. The Jira issue is created in the Red Hat Satellite Jira project, where you can track its progress. Prerequisites Ensure you have registered a Red Hat account . Procedure Click the following link: Create Issue . If Jira displays a login error, log in and proceed after you are redirected to the form. Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create .
null
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/installing_satellite_server_in_a_disconnected_network_environment/providing-feedback-on-red-hat-documentation_satellite
Chapter 18. Using webhooks
Chapter 18. Using webhooks A webhook is a way for a web page or web application to provide other applications with information in real time. Webhooks are only triggered after an event occurs. The request usually contains details of the event. An event triggers callbacks, such as sending an e-mail confirming a host has been provisioned. You can use webhooks to define a call to an external API based on Satellite internal event by using a fire-and-forget message exchange pattern. The application sending the request does not wait for the response, or ignores it. Payload of a webhook is created from webhook templates. Webhook templates use the same ERB syntax as Provisioning templates. Available variables: @event_name : Name of an event. @webhook_id : Unique event ID. @payload : Payload data, different for each event type. To access individual fields, use @payload[:key_name] Ruby hash syntax. @payload[:object] : Database object for events triggered by database actions (create, update, delete). Not available for custom events. @payload[:context] : Additional information as hash like request and session UUID, remote IP address, user, organization and location. Because webhooks use HTTP, no new infrastructure needs be added to existing web services. The typical use case for webhooks in Satellite is making a call to a monitoring system when a host is created or deleted. Webhooks are useful where the action you want to perform in the external system can be achieved through its API. Where it is necessary to run additional commands or edit files, the shellhooks plugin for Capsules is available. The shellhooks plugin enables you to define a shell script on the Capsule that can be executed through the API. You can use webhooks successfully without installing the shellhooks plugin. For a list of available events, see Available webhook events . 18.1. Creating a webhook template Webhook templates are used to generate the body of HTTP request to a configured target when a webhook is triggered. Use the following procedure to create a webhook template in the Satellite web UI. Procedure In the Satellite web UI, navigate to Administer > Webhook > Webhook Templates . Click Clone an existing template or Create Template . Enter a name for the template. Use the editor to make changes to the template payload. A webhook HTTP payload must be created using Satellite template syntax. The webhook template can use a special variable called @object that can represent the main object of the event. @object can be missing in case of certain events. You can determine what data are actually available with the @payload variable. For more information, see Template Writing Reference in Managing hosts and for available template macros and methods, visit /templates_doc on Satellite Server. Optional: Enter the description and audit comment. Assign organizations and locations. Click Submit . Examples When creating a webhook template, you must follow the format of the target application for which the template is intended. For example, an application can expect a "text" field with the webhook message. Refer to the documentation of your target application to find more about how your webhook template format should look like. Running remote execution jobs This webhook template defines a message with the ID and result of a remote execution job. The webhook which uses this template can be subscribed to events such as Actions Remote Execution Run Host Job Succeeded or Actions Remote Execution Run Host Job Failed . 
Creating users This webhook template defines a message with the login and email of a created user. The webhook which uses this template should be subscribed to the User Created event. 18.2. Creating a webhook You can customize events, payloads, HTTP authentication, content type, and headers through the Satellite web UI. Use the following procedure to create a webhook in the Satellite web UI. Procedure In the Satellite web UI, navigate to Administer > Webhook > Webhooks . Click Create new . From the Subscribe to list, select an event. Enter a Name for your webhook. Enter a Target URL . Webhooks make HTTP requests to pre-configured URLs. The target URL can be a dynamic URL. Click Template to select a template. Webhook templates are used to generate the body of the HTTP request to Satellite Server when a webhook is triggered. Enter an HTTP method. Optional: If you do not want to activate the webhook when you create it, uncheck the Enabled flag. Click the Credentials tab. Optional: If HTTP authentication is required, enter User and Password . Optional: Uncheck Verify SSL if you do not want to verify the server certificate against the system certificate store or Satellite CA. On the Additional tab, enter the HTTP Content Type . For example, application/json , application/xml , or text/plain , depending on the payload you define. The application does not attempt to convert the content to match the specified content type. Optional: Provide HTTP headers as JSON. ERB is also allowed. When configuring webhooks with endpoints that use non-standard HTTP or HTTPS ports, an SELinux port must be assigned, see Configuring SELinux to Ensure Access to Satellite on Custom Ports in Installing Satellite Server in a connected network environment . 18.3. Available webhook events The following table contains a list of webhook events that are available from the Satellite web UI. Action events trigger webhooks only on success , so if an action fails, a webhook is not triggered. For more information about payloads, go to Administer > About > Support > Templates DSL . A list of available types is provided in the following table. Some events are marked as custom ; in that case, the payload is not an object but a Ruby hash (key-value data structure), so the syntax is different. Event name Description Payload Actions Katello Content View Promote Succeeded A content view was successfully promoted. Actions::Katello::ContentView::Promote Actions Katello Content View Publish Succeeded A content view was successfully published. Actions::Katello::ContentView::Publish Actions Remote Execution Run Host Job Succeeded A generic remote execution job succeeded for a host. This event is emitted for all remote execution jobs when they complete. Actions::RemoteExecution::RunHostJob Actions Remote Execution Run Host Job Katello Errata Install Succeeded Install errata using the Katello interface. Actions::RemoteExecution::RunHostJob Actions Remote Execution Run Host Job Katello Group Install Succeeded Install package group using the Katello interface. Actions::RemoteExecution::RunHostJob Actions Remote Execution Run Host Job Katello Package Install Succeeded Install package using the Katello interface. Actions::RemoteExecution::RunHostJob Actions Remote Execution Run Host Job Katello Group Remove Remove package group using the Katello interface. Actions::RemoteExecution::RunHostJob Actions Remote Execution Run Host Job Katello Package Remove Succeeded Remove package using the Katello interface.
Actions::RemoteExecution::RunHostJob Actions Remote Execution Run Host Job Katello Service Restart Succeeded Restart Services using the Katello interface. Actions::RemoteExecution::RunHostJob Actions Remote Execution Run Host Job Katello Group Update Succeeded Update package group using the Katello interface. Actions::RemoteExecution::RunHostJob Actions Remote Execution Run Host Job Katello Package Update Succeeded Update package using the Katello interface. Actions::RemoteExecution::RunHostJob Actions Remote Execution Run Host Job Foreman OpenSCAP Run Scans Succeeded Run OpenSCAP scan. Actions::RemoteExecution::RunHostJob Actions Remote Execution Run Host Job Ansible Run Host Succeeded Runs an Ansible Playbook containing all the roles defined for a host. Actions::RemoteExecution::RunHostJob Actions Remote Execution Run Host Job Ansible Run Capsule Upgrade Succeeded Upgrade Capsules on given Capsule Servers. Actions::RemoteExecution::RunHostJob Actions Remote Execution Run Host Job Ansible Configure Cloud Connector Succeeded Configure Cloud Connector on given hosts. Actions::RemoteExecution::RunHostJob Actions Remote Execution Run Host Job Ansible Run Insights Plan Succeeded Runs a given maintenance plan from Red Hat Access Insights given an ID. Actions::RemoteExecution::RunHostJob Actions Remote Execution Run Host Job Ansible Run Playbook Succeeded Run an Ansible Playbook against given hosts. Actions::RemoteExecution::RunHostJob Actions Remote Execution Run Host Job Ansible Enable Web Console Succeeded Run an Ansible Playbook to enable the web console on given hosts. Actions::RemoteExecution::RunHostJob Actions Remote Execution Run Host Job Puppet Run Host Succeeded Perform a single Puppet run. Actions::RemoteExecution::RunHostJob Actions Remote Execution Run Host Job Katello Module Stream Action Succeeded Perform a module stream action using the Katello interface. Actions::RemoteExecution::RunHostJob Actions Remote Execution Run Host Job Leapp Pre-upgrade Succeeded Upgradeability check for RHEL 7 host. Actions::RemoteExecution::RunHostJob Actions Remote Execution Run Host Job Leapp Remediation Plan Succeeded Run Remediation plan with Leapp. Actions::RemoteExecution::RunHostJob Actions Remote Execution Run Host Job Leapp Upgrade Succeeded Run Leapp upgrade job for RHEL 7 host. Actions::RemoteExecution::RunHostJob Build Entered A host entered the build mode. Custom event: @payload[:id] (host id), @payload[:hostname] (host name). Build Exited A host build mode was canceled, either it was successfully provisioned or the user canceled the build manually. Custom event: @payload[:id] (host id), @payload[:hostname] (host name). Content View Created/Updated/Destroyed Common database operations on a content view. Katello::ContentView Domain Created/Updated/Destroyed Common database operations on a domain. Domain Host Created/Updated/Destroyed Common database operations on a host. Host Hostgroup Created/Updated/Destroyed Common database operations on a hostgroup. Hostgroup Model Created/Updated/Destroyed Common database operations on a model. Model Status Changed Global host status of a host changed. Custom event: @payload[:id] (host id), @payload[:hostname] , @payload[:global_status] (hash) Subnet Created/Updated/Destroyed Common database operations on a subnet. Subnet Template Render Performed A report template was rendered. Template User Created/Updated/Destroyed Common database operations on a user. User 18.4. Shellhooks With webhooks, you can only map one Satellite event to one API call. 
For advanced integrations, where a single shell script can contain multiple commands, you can install a Capsule shellhooks plugin that exposes executables by using a REST HTTP API. You can then configure a webhook to reach out to a Capsule API to run a predefined shellhook. A shellhook is an executable script that can be written in any language, provided that it can be executed. For example, a shellhook can run commands or edit files. You must place your executable scripts in /var/lib/foreman-proxy/shellhooks with only alphanumeric characters and underscores in their name. You can pass input to the shellhook script through the webhook payload. This input is redirected to the standard input of the shellhook script. You can pass arguments to the shellhook script by using HTTP headers in the format X-Shellhook-Arg-1 to X-Shellhook-Arg-99 . For more information on passing arguments to the shellhook script, see: Section 18.6, "Passing arguments to shellhook script using webhooks" Section 18.7, "Passing arguments to shellhook script using curl" The HTTP method must be POST. An example URL would be: https://capsule.example.com:9090/shellhook/My_Script . Note Unlike the shellhooks directory, the URL must contain /shellhook/ in singular to be valid. You must enable Capsule Authorization for each webhook connected to a shellhook to enable it to authorize a call. Standard output and standard error output are redirected to the Capsule logs as messages with debug or warning levels respectively. The shellhook HTTPS calls do not return a value. For an example of creating a shellhook script, see Section 18.8, "Creating a shellhook to print arguments" . 18.5. Installing the shellhooks plugin Optionally, you can install and enable the shellhooks plugin on each Capsule used for shellhooks. Procedure Run the following command: 18.6. Passing arguments to shellhook script using webhooks Use this procedure to pass arguments to a shellhook script using webhooks. Procedure When creating a webhook, on the Additional tab, create HTTP headers in the following format: Ensure that the headers have a valid JSON or ERB format. Only pass safe fields like database ID, name, or labels that do not include new lines or quote characters. For more information, see Section 18.2, "Creating a webhook" . Example 18.7. Passing arguments to shellhook script using curl Use this procedure to pass arguments to a shellhook script using curl. Procedure When executing a shellhook script using curl , create HTTP headers in the following format: "X-Shellhook-Arg-1: VALUE " "X-Shellhook-Arg-2: VALUE " Example 18.8. Creating a shellhook to print arguments Create a simple shellhook script that prints Hello World! when you run a remote execution job. Prerequisites You have the webhooks and shellhooks plugins installed. For more information, see: Section 18.5, "Installing the shellhooks plugin" Procedure Modify the /var/lib/foreman-proxy/shellhooks/print_args script to print arguments to standard error output so you can see them in the Capsule logs: #!/bin/sh # # Prints all arguments to stderr # echo "USD@" >&2 In the Satellite web UI, navigate to Administer > Webhook > Webhooks . Click Create new . From the Subscribe to list, select Actions Remote Execution Run Host Job Succeeded . Enter a Name for your webhook. In the Target URL field, enter the URL of your Capsule Server followed by :9090/shellhook/print_args : Note that shellhook in the URL is singular, unlike the shellhooks directory. From the Template list, select Empty Payload .
On the Credentials tab, check Capsule Authorization . On the Additional tab, enter the following text in the Optional HTTP headers field: Click Submit . You have now successfully created a shellhook that prints "Hello World!" to the Capsule logs every time a remote execution job succeeds. Verification Run a remote execution job on any host. You can use time as a command. For more information, see Executing a Remote Job in Managing hosts . Verify that the shellhook script was triggered and printed "Hello World!" to Capsule Server logs: You should find the following lines at the end of the log:
[ "{ \"text\": \"job invocation <%= @object.job_invocation_id %> finished with result <%= @object.task.result %>\" }", "{ \"text\": \"user with login <%= @object.login %> and email <%= @object.mail %> created\" }", "satellite-installer --enable-foreman-proxy-plugin-shellhooks", "{ \"X-Shellhook-Arg-1\": \" VALUE \", \"X-Shellhook-Arg-2\": \" VALUE \" }", "{ \"X-Shellhook-Arg-1\": \"<%= @object.content_view_version_id %>\", \"X-Shellhook-Arg-2\": \"<%= @object.content_view_name %>\" }", "\"X-Shellhook-Arg-1: VALUE \" \"X-Shellhook-Arg-2: VALUE \"", "curl --data \"\" --header \"Content-Type: text/plain\" --header \"X-Shellhook-Arg-1: Version 1.0\" --header \"X-Shellhook-Arg-2: My content view\" --request POST --show-error --silent https://capsule.example.com:9090/shellhook/My_Script", "#!/bin/sh # Prints all arguments to stderr # echo \"USD@\" >&2", "https:// capsule.example.com :9090/shellhook/print_args", "{ \"X-Shellhook-Arg-1\": \"Hello\", \"X-Shellhook-Arg-2\": \"World!\" }", "tail /var/log/foreman-proxy/proxy.log", "[I] Started POST /shellhook/print_args [I] Finished POST /shellhook/print_args with 200 (0.33 ms) [I] [3520] Started task /var/lib/foreman-proxy/shellhooks/print_args\\ Hello\\ World\\! [W] [3520] Hello World!" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/administering_red_hat_satellite/Using_Webhooks_admin
Developer Guide
Developer Guide Red Hat Ceph Storage 4 Using the various application programming interfaces for Red Hat Ceph Storage Red Hat Ceph Storage Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/developer_guide/index
Publishing proprietary content collections in Automation Hub
Publishing proprietary content collections in Automation Hub Red Hat Ansible Automation Platform 2.3 Use Automation Hub to publish content collections developed within your organization and intended for internal distribution and use. Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/publishing_proprietary_content_collections_in_automation_hub/index
Chapter 4. AMQ Streams Operators
Chapter 4. AMQ Streams Operators AMQ Streams supports Kafka using Operators to deploy and manage the components and dependencies of Kafka to OpenShift. Operators are a method of packaging, deploying, and managing an OpenShift application. AMQ Streams Operators extend OpenShift functionality, automating common and complex tasks related to a Kafka deployment. By implementing knowledge of Kafka operations in code, Kafka administration tasks are simplified and require less manual intervention. Operators AMQ Streams provides Operators for managing a Kafka cluster running within an OpenShift cluster. Cluster Operator Deploys and manages Apache Kafka clusters, Kafka Connect, Kafka MirrorMaker, Kafka Bridge, Kafka Exporter, and the Entity Operator Entity Operator Comprises the Topic Operator and User Operator Topic Operator Manages Kafka topics User Operator Manages Kafka users The Cluster Operator can deploy the Topic Operator and User Operator as part of an Entity Operator configuration at the same time as a Kafka cluster. Operators within the AMQ Streams architecture 4.1. Cluster Operator AMQ Streams uses the Cluster Operator to deploy and manage clusters for: Kafka (including ZooKeeper, Entity Operator, Kafka Exporter, and Cruise Control) Kafka Connect Kafka MirrorMaker Kafka Bridge Custom resources are used to deploy the clusters. For example, to deploy a Kafka cluster: A Kafka resource with the cluster configuration is created within the OpenShift cluster. The Cluster Operator deploys a corresponding Kafka cluster, based on what is declared in the Kafka resource. The Cluster Operator can also deploy (through configuration of the Kafka resource): A Topic Operator to provide operator-style topic management through KafkaTopic custom resources A User Operator to provide operator-style user management through KafkaUser custom resources The Topic Operator and User Operator function within the Entity Operator on deployment. Example architecture for the Cluster Operator 4.2. Topic Operator The Topic Operator provides a way of managing topics in a Kafka cluster through OpenShift resources. Example architecture for the Topic Operator The role of the Topic Operator is to keep a set of KafkaTopic OpenShift resources describing Kafka topics in-sync with corresponding Kafka topics. Specifically, if a KafkaTopic is: Created, the Topic Operator creates the topic Deleted, the Topic Operator deletes the topic Changed, the Topic Operator updates the topic Working in the other direction, if a topic is: Created within the Kafka cluster, the Operator creates a KafkaTopic Deleted from the Kafka cluster, the Operator deletes the KafkaTopic Changed in the Kafka cluster, the Operator updates the KafkaTopic This allows you to declare a KafkaTopic as part of your application's deployment and the Topic Operator will take care of creating the topic for you. Your application just needs to deal with producing or consuming from the necessary topics. The Topic Operator maintains information about each topic in a topic store , which is continually synchronized with updates from Kafka topics or OpenShift KafkaTopic custom resources. Updates from operations applied to a local in-memory topic store are persisted to a backup topic store on disk. If a topic is reconfigured or reassigned to other brokers, the KafkaTopic will always be up to date. 4.3. 
User Operator The User Operator manages Kafka users for a Kafka cluster by watching for KafkaUser resources that describe Kafka users, and ensuring that they are configured properly in the Kafka cluster. For example, if a KafkaUser is: Created, the User Operator creates the user it describes Deleted, the User Operator deletes the user it describes Changed, the User Operator updates the user it describes Unlike the Topic Operator, the User Operator does not sync any changes from the Kafka cluster with the OpenShift resources. Kafka topics can be created by applications directly in Kafka, but it is not expected that the users will be managed directly in the Kafka cluster in parallel with the User Operator. The User Operator allows you to declare a KafkaUser resource as part of your application's deployment. You can specify the authentication and authorization mechanism for the user. You can also configure user quotas that control usage of Kafka resources to ensure, for example, that a user does not monopolize access to a broker. When the user is created, the user credentials are created in a Secret . Your application needs to use the user and its credentials for authentication and to produce or consume messages. In addition to managing credentials for authentication, the User Operator also manages authorization rules by including a description of the user's access rights in the KafkaUser declaration. 4.4. Feature gates in AMQ Streams Operators You can enable and disable some features of operators using feature gates . Feature gates are set in the operator configuration and have three stages of maturity: alpha, beta, or General Availability (GA). For more information, see Feature gates .
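To illustrate the declarative style described above, the following sketches show a KafkaTopic and a KafkaUser custom resource that the Topic Operator and User Operator reconcile. They assume the kafka.strimzi.io/v1beta2 API version and a Kafka cluster named my-cluster; adjust both to match your deployment and treat the values shown as examples rather than recommended settings.

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster   # ties the topic to the Kafka cluster managed by the Cluster Operator
spec:
  partitions: 3
  replicas: 3
---
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: tls                        # the User Operator creates a Secret containing the user credentials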
null
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/amq_streams_on_openshift_overview/overview-components_str
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/managing_fuse_on_karaf_standalone/making-open-source-more-inclusive
Chapter 3. Autoscaling
Chapter 3. Autoscaling 3.1. Autoscaling Knative Serving provides automatic scaling, or autoscaling , for applications to match incoming demand. For example, if an application is receiving no traffic, and scale-to-zero is enabled, Knative Serving scales the application down to zero replicas. If scale-to-zero is disabled, the application is scaled down to the minimum number of replicas configured for applications on the cluster. Replicas can also be scaled up to meet demand if traffic to the application increases. Autoscaling settings for Knative services can be global settings that are configured by cluster administrators (or dedicated administrators for Red Hat OpenShift Service on AWS and OpenShift Dedicated), or per-revision settings that are configured for individual services. You can modify per-revision settings for your services by using the OpenShift Container Platform web console, by modifying the YAML file for your service, or by using the Knative ( kn ) CLI. Note Any limits or targets that you set for a service are measured against a single instance of your application. For example, setting the target annotation to 50 configures the autoscaler to scale the application so that each revision handles 50 requests at a time. 3.2. Scale bounds Scale bounds determine the minimum and maximum numbers of replicas that can serve an application at any given time. You can set scale bounds for an application to help prevent cold starts or control computing costs. 3.2.1. Minimum scale bounds The minimum number of replicas that can serve an application is determined by the min-scale annotation. If scale to zero is not enabled, the min-scale value defaults to 1 . The min-scale value defaults to 0 replicas if the following conditions are met: The min-scale annotation is not set Scaling to zero is enabled The class KPA is used Example service spec with min-scale annotation apiVersion: serving.knative.dev/v1 kind: Service metadata: name: showcase namespace: default spec: template: metadata: annotations: autoscaling.knative.dev/min-scale: "0" ... 3.2.1.1. Setting the min-scale annotation by using the Knative CLI Using the Knative ( kn ) CLI to set the min-scale annotation provides a more streamlined and intuitive user interface over modifying YAML files directly. You can use the kn service command with the --scale-min flag to create or modify the min-scale value for a service. Prerequisites Knative Serving is installed on the cluster. You have installed the Knative ( kn ) CLI. Procedure Set the minimum number of replicas for the service by using the --scale-min flag: USD kn service create <service_name> --image <image_uri> --scale-min <integer> Example command USD kn service create showcase --image quay.io/openshift-knative/showcase --scale-min 2 3.2.2. Maximum scale bounds The maximum number of replicas that can serve an application is determined by the max-scale annotation. If the max-scale annotation is not set, there is no upper limit for the number of replicas created. Example service spec with max-scale annotation apiVersion: serving.knative.dev/v1 kind: Service metadata: name: showcase namespace: default spec: template: metadata: annotations: autoscaling.knative.dev/max-scale: "10" ... 3.2.2.1. Setting the max-scale annotation by using the Knative CLI Using the Knative ( kn ) CLI to set the max-scale annotation provides a more streamlined and intuitive user interface over modifying YAML files directly. 
You can use the kn service command with the --scale-max flag to create or modify the max-scale value for a service. Prerequisites Knative Serving is installed on the cluster. You have installed the Knative ( kn ) CLI. Procedure Set the maximum number of replicas for the service by using the --scale-max flag: USD kn service create <service_name> --image <image_uri> --scale-max <integer> Example command USD kn service create showcase --image quay.io/openshift-knative/showcase --scale-max 10 3.3. Concurrency Concurrency determines the number of simultaneous requests that can be processed by each replica of an application at any given time. Concurrency can be configured as a soft limit or a hard limit : A soft limit is a targeted requests limit, rather than a strictly enforced bound. For example, if there is a sudden burst of traffic, the soft limit target can be exceeded. A hard limit is a strictly enforced upper bound requests limit. If concurrency reaches the hard limit, surplus requests are buffered and must wait until there is enough free capacity to execute the requests. Important Using a hard limit configuration is only recommended if there is a clear use case for it with your application. Having a low, hard limit specified may have a negative impact on the throughput and latency of an application, and might cause cold starts. Adding a soft target and a hard limit means that the autoscaler targets the soft target number of concurrent requests, but imposes a hard limit of the hard limit value for the maximum number of requests. If the hard limit value is less than the soft limit value, the soft limit value is tuned down, because there is no need to target more requests than the number that can actually be handled. 3.3.1. Configuring a soft concurrency target A soft limit is a targeted requests limit, rather than a strictly enforced bound. For example, if there is a sudden burst of traffic, the soft limit target can be exceeded. You can specify a soft concurrency target for your Knative service by setting the autoscaling.knative.dev/target annotation in the spec, or by using the kn service command with the correct flags. Procedure Optional: Set the autoscaling.knative.dev/target annotation for your Knative service in the spec of the Service custom resource: Example service spec apiVersion: serving.knative.dev/v1 kind: Service metadata: name: showcase namespace: default spec: template: metadata: annotations: autoscaling.knative.dev/target: "200" Optional: Use the kn service command to specify the --concurrency-target flag: USD kn service create <service_name> --image <image_uri> --concurrency-target <integer> Example command to create a service with a concurrency target of 50 requests USD kn service create showcase --image quay.io/openshift-knative/showcase --concurrency-target 50 3.3.2. Configuring a hard concurrency limit A hard concurrency limit is a strictly enforced upper bound requests limit. If concurrency reaches the hard limit, surplus requests are buffered and must wait until there is enough free capacity to execute the requests. You can specify a hard concurrency limit for your Knative service by modifying the containerConcurrency spec, or by using the kn service command with the correct flags. 
Procedure Optional: Set the containerConcurrency spec for your Knative service in the spec of the Service custom resource: Example service spec apiVersion: serving.knative.dev/v1 kind: Service metadata: name: showcase namespace: default spec: template: spec: containerConcurrency: 50 The default value is 0 , which means that there is no limit on the number of simultaneous requests that are permitted to flow into one replica of the service at a time. A value greater than 0 specifies the exact number of requests that are permitted to flow into one replica of the service at a time. This example would enable a hard concurrency limit of 50 requests. Optional: Use the kn service command to specify the --concurrency-limit flag: USD kn service create <service_name> --image <image_uri> --concurrency-limit <integer> Example command to create a service with a concurrency limit of 50 requests USD kn service create showcase --image quay.io/openshift-knative/showcase --concurrency-limit 50 3.3.3. Concurrency target utilization This value specifies the percentage of the concurrency limit that is actually targeted by the autoscaler. This is also known as specifying the hotness at which a replica runs, which enables the autoscaler to scale up before the defined hard limit is reached. For example, if the containerConcurrency value is set to 10, and the target-utilization-percentage value is set to 70 percent, the autoscaler creates a new replica when the average number of concurrent requests across all existing replicas reaches 7. Requests numbered 7 to 10 are still sent to the existing replicas, but additional replicas are started in anticipation of being required after the containerConcurrency value is reached. Example service configured using the target-utilization-percentage annotation apiVersion: serving.knative.dev/v1 kind: Service metadata: name: showcase namespace: default spec: template: metadata: annotations: autoscaling.knative.dev/target-utilization-percentage: "70" ... 3.4. Scale-to-zero Knative Serving provides automatic scaling, or autoscaling , for applications to match incoming demand. 3.4.1. Enabling scale-to-zero You can use the enable-scale-to-zero spec to enable or disable scale-to-zero globally for applications on the cluster. Prerequisites You have installed OpenShift Serverless Operator and Knative Serving on your cluster. You have cluster administrator permissions on OpenShift Container Platform, or you have cluster or dedicated administrator permissions on Red Hat OpenShift Service on AWS or OpenShift Dedicated. You are using the default Knative Pod Autoscaler. The scale to zero feature is not available if you are using the Kubernetes Horizontal Pod Autoscaler. Procedure Modify the enable-scale-to-zero spec in the KnativeServing custom resource (CR): Example KnativeServing CR apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving spec: config: autoscaler: enable-scale-to-zero: "false" 1 1 The enable-scale-to-zero spec can be either "true" or "false" . If set to true, scale-to-zero is enabled. If set to false, applications are scaled down to the configured minimum scale bound . The default value is "true" . 3.4.2. Configuring the scale-to-zero grace period Knative Serving provides automatic scaling down to zero pods for applications. You can use the scale-to-zero-grace-period spec to define an upper bound time limit that Knative waits for scale-to-zero machinery to be in place before the last replica of an application is removed. 
Prerequisites You have installed OpenShift Serverless Operator and Knative Serving on your cluster. You have cluster administrator permissions on OpenShift Container Platform, or you have cluster or dedicated administrator permissions on Red Hat OpenShift Service on AWS or OpenShift Dedicated. You are using the default Knative Pod Autoscaler. The scale-to-zero feature is not available if you are using the Kubernetes Horizontal Pod Autoscaler. Procedure Modify the scale-to-zero-grace-period spec in the KnativeServing custom resource (CR): Example KnativeServing CR apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving spec: config: autoscaler: scale-to-zero-grace-period: "30s" 1 1 The grace period time in seconds. The default value is 30 seconds.
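The per-revision settings described in this chapter can be combined in a single service. The following sketch brings together the scale bound and concurrency annotations shown above; the values are illustrative only and assume the default Knative Pod Autoscaler.

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: showcase
  namespace: default
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/min-scale: "1"                       # keep one replica warm to avoid cold starts
        autoscaling.knative.dev/max-scale: "10"                      # never run more than 10 replicas
        autoscaling.knative.dev/target: "50"                         # soft target of 50 concurrent requests per replica
        autoscaling.knative.dev/target-utilization-percentage: "70"  # scale up at 70% of the target
    spec:
      containerConcurrency: 100                                      # hard limit of 100 concurrent requests per replica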
[ "apiVersion: serving.knative.dev/v1 kind: Service metadata: name: showcase namespace: default spec: template: metadata: annotations: autoscaling.knative.dev/min-scale: \"0\"", "kn service create <service_name> --image <image_uri> --scale-min <integer>", "kn service create showcase --image quay.io/openshift-knative/showcase --scale-min 2", "apiVersion: serving.knative.dev/v1 kind: Service metadata: name: showcase namespace: default spec: template: metadata: annotations: autoscaling.knative.dev/max-scale: \"10\"", "kn service create <service_name> --image <image_uri> --scale-max <integer>", "kn service create showcase --image quay.io/openshift-knative/showcase --scale-max 10", "apiVersion: serving.knative.dev/v1 kind: Service metadata: name: showcase namespace: default spec: template: metadata: annotations: autoscaling.knative.dev/target: \"200\"", "kn service create <service_name> --image <image_uri> --concurrency-target <integer>", "kn service create showcase --image quay.io/openshift-knative/showcase --concurrency-target 50", "apiVersion: serving.knative.dev/v1 kind: Service metadata: name: showcase namespace: default spec: template: spec: containerConcurrency: 50", "kn service create <service_name> --image <image_uri> --concurrency-limit <integer>", "kn service create showcase --image quay.io/openshift-knative/showcase --concurrency-limit 50", "apiVersion: serving.knative.dev/v1 kind: Service metadata: name: showcase namespace: default spec: template: metadata: annotations: autoscaling.knative.dev/target-utilization-percentage: \"70\"", "apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving spec: config: autoscaler: enable-scale-to-zero: \"false\" 1", "apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving spec: config: autoscaler: scale-to-zero-grace-period: \"30s\" 1" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.35/html/serving/autoscaling
Preface
Preface Open Java Development Kit (OpenJDK) is a free and open source implementation of the Java Platform, Standard Edition (Java SE). Eclipse Temurin is available in four LTS versions: OpenJDK 8u, OpenJDK 11u, OpenJDK 17u, and OpenJDK 21u. Binary files for Eclipse Temurin are available for macOS, Microsoft Windows, and multiple Linux x86 Operating Systems including Red Hat Enterprise Linux and Ubuntu.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_eclipse_temurin_11.0.23/pr01
Chapter 12. Viewing threads
Chapter 12. Viewing threads You can view and monitor the state of threads. Procedure Click the Runtime tab and then the Threads subtab. The Threads page lists active threads and stack trace details for each thread. By default, the thread list shows all threads in descending ID order. To sort the list by increasing ID, click the ID column label. Optionally, filter the list by thread state (for example, Blocked ) or by thread name. To drill down to detailed information for a specific thread, such as the lock class name and full stack trace for that thread, in the Actions column, click More .
null
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/managing_fuse_on_jboss_eap_standalone/fuse-console-view-threads-all_eap
Chapter 109. AclRuleGroupResource schema reference
Chapter 109. AclRuleGroupResource schema reference Used in: AclRule The type property is a discriminator that distinguishes use of the AclRuleGroupResource type from AclRuleTopicResource , AclRuleClusterResource , AclRuleTransactionalIdResource . It must have the value group for the type AclRuleGroupResource . Property Description type Must be group . string name Name of the resource to which the given ACL rule applies. Can be combined with the patternType field to use the prefix pattern. string patternType Describes the pattern used in the resource field. The supported types are literal and prefix . With the literal pattern type, the resource field is used as a definition of the full consumer group name. With the prefix pattern type, the resource name is used only as a prefix. Default value is literal . string (one of [prefix, literal])
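As a usage sketch, an AclRuleGroupResource typically appears inside the acls list of a KafkaUser with simple authorization. The example below grants read access to all consumer groups whose names start with my-group- ; the API version and field layout are assumptions based on the v1beta2 schema, so check them against your AMQ Streams version.

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authorization:
    type: simple
    acls:
      - resource:
          type: group          # AclRuleGroupResource
          name: my-group-
          patternType: prefix  # match every group name starting with "my-group-"
        operations:
          - Read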
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-aclrulegroupresource-reference
Appendix B. Customizing the Manager virtual machine using automation during deployment
Appendix B. Customizing the Manager virtual machine using automation during deployment You can use automation to adjust or otherwise customize the Manager virtual machine during deployment by using one or more Ansible playbooks. You can run playbooks at the following points during deployment: before the self-hosted engine setup after the self-hosted engine setup, but before storage is configured after adding the deployment host to the Manager after the deployment completes entirely Procedure Write one or more Ansible playbooks to run on the Manager virtual machine at specific points in the deployment process. Add the playbooks to the appropriate directory under /usr/share/ansible/collections/ansible_collections/redhat/rhv/roles/hosted_engine_setup/hooks/ : enginevm_before_engine_setup Run the playbook before the self-hosted engine setup. enginevm_after_engine_setup Run the playbook after the self-hosted engine setup, but before storage is configured. after_add_host Run the playbook after adding the deployment host to the Manager. after_setup Run the playbook after deployment is completed. When you run the self-hosted engine installer, the deployment script runs the hosted_engine_setup role, which automatically runs any playbooks in any of these directories. An example hook file is sketched after this procedure. Additional resources Deploying the self-hosted engine using the command line Automating Configuration Tasks using Ansible Intro to playbooks in the Ansible documentation
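As an illustration, the following sketch could be saved as 99_example.yml (the file name is arbitrary) under the enginevm_before_engine_setup hooks directory. It is written as a list of tasks on the assumption that the hook files are included by the deployment role; the task shown, which drops a marker file on the Manager virtual machine, is an assumption chosen only to demonstrate the mechanism.

# /usr/share/ansible/collections/ansible_collections/redhat/rhv/roles/hosted_engine_setup/hooks/enginevm_before_engine_setup/99_example.yml
- name: Leave a marker showing that the hook ran before engine-setup
  ansible.builtin.copy:
    dest: /root/hook_before_engine_setup.txt   # path on the Manager virtual machine (assumption)
    content: "Customized by the enginevm_before_engine_setup hook\n"
    owner: root
    group: root
    mode: "0644"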
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/installing_red_hat_virtualization_as_a_self-hosted_engine_using_the_command_line/customizing_engine_vm_during_deployment_auto_she_cli_deploy
Chapter 8. The rbd kernel module
Chapter 8. The rbd kernel module As a storage administrator, you can access Ceph block devices through the rbd kernel module. You can map and unmap a block device, and displaying those mappings. Also, you can get a list of images through the rbd kernel module. Important Kernel clients on Linux distributions other than Red Hat Enterprise Linux (RHEL) are permitted but not supported. If issues are found in the storage cluster when using these kernel clients, Red Hat will address them, but if the root cause is found to be on the kernel client side, the issue will have to be addressed by the software vendor. Prerequisites A running Red Hat Ceph Storage cluster. 8.1. Create a Ceph Block Device and use it from a Linux kernel module client As a storage administrator, you can create a Ceph Block Device for a Linux kernel module client in the Red Hat Ceph Storage Dashboard. As a system administrator, you can map that block device on a Linux client, and partition, format, and mount it, using the command line. After this, you can read and write files to it. Prerequisites A running Red Hat Ceph Storage cluster. A Red Hat Enterprise Linux client. 8.1.1. Creating a Ceph block device for a Linux kernel module client using dashboard You can create a Ceph block device specifically for a Linux kernel module client using the dashboard web interface by enabling only the features it supports. Kernel module client supports features like Deep flatten, Layering, Exclusive lock, Object map, and Fast diff. Prerequisites A running Red Hat Ceph Storage cluster. A replicated RBD pool created and enabled. Procedure From the Block drop-down menu, select Images . Click Create . In the Create RBD window, enter a image name, select the RBD enabled pool, select the supported features: Click Create RBD . Verification You will get a notification that the image is created successfully. Additional Resources For more information, see Map and mount a Ceph Block Device on Linux using the command line in the Red Hat Ceph Storage Block Device Guide . For more information, see the Red Hat Ceph Storage Dashboard Guide . 8.1.2. Map and mount a Ceph Block Device on Linux using the command line You can map a Ceph Block Device from a Red Hat Enterprise Linux client using the Linux rbd kernel module. After mapping it, you can partition, format, and mount it, so you can write files to it. Prerequisites A running Red Hat Ceph Storage cluster. A Ceph block device for a Linux kernel module client using the dashboard is created. A Red Hat Enterprise Linux client. Procedure On the Red Hat Enterprise Linux client node, enable the Red Hat Ceph Storage 7 Tools repository: Install the ceph-common RPM package: Copy the Ceph configuration file from a Monitor node to the Client node: Syntax Example Copy the key file from a Monitor node to the Client node: Syntax Example Map the image: Syntax Example Create a partition table on the block device: Syntax Example Create a partition for an XFS file system: Syntax Example Format the partition: Syntax Example Create a directory to mount the new file system on: Syntax Example Mount the file system: Syntax Example Verify that the file system is mounted and showing the correct size: Syntax Example Additional Resources For more information, see Creating a Ceph Block Device for a Linux kernel module client using Dashboard . For more information, see Managing file systems for Red Hat Enterprise Linux 8. For more information, see Storage Administration Guide for Red Hat Enterprise Linux 7. 8.2. 
Mapping a block device Use rbd to map an image name to a kernel module. You must specify the image name, the pool name and the user name. rbd will load the RBD kernel module if it is not already loaded. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure Return a list of the images: Example Following are the two options to map the image: Map an image name to a kernel module: Syntax Example Specify a secret when using cephx authentication by either the keyring or a file containing the secret: Syntax or 8.3. Displaying mapped block devices You can display which block device images are mapped to the kernel module with the rbd command. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure Display the mapped block devices: 8.4. Unmapping a block device You can unmap a block device image with the rbd command, by using the unmap option and providing the device name. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. An image that is mapped. Procedure Get the specification of the device. Example Unmap the block device image: Syntax Example 8.5. Segregating images within isolated namespaces within the same pool When using Ceph Block Devices directly without a higher-level system, such as OpenStack or OpenShift Container Storage, it was not possible to restrict user access to specific block device images. When combined with CephX capabilities, users can be restricted to specific pool namespaces to restrict access to the images. You can use RADOS namespaces, a new level of identity to identify an object, to provide isolation between rados clients within a pool. For example, a client can only have full permissions on a namespace specific to them. This makes using a different RADOS client for each tenant feasible, which is particularly useful for a block device where many different tenants are accessing their own block device images. You can segregate block device images within isolated namespaces within the same pool. Prerequisites A running Red Hat Ceph Storage cluster. Upgrade all the kernels to 4x and to librbd and librados on all clients. Root-level access to the monitor and client nodes. Procedure Create an rbd pool: Syntax Example Associate the rbd pool with the RBD application: Syntax Example Initialize the pool with the RBD application: Syntax Example Create two namespaces: Syntax Example Provide access to the namespaces for two users: Syntax Example Get the key of the clients: Syntax Example Create the block device images and use the pre-defined namespace within a pool: Syntax Example Optional: Get the details of the namespace and the associated image: Syntax Example Copy the Ceph configuration file from the Ceph Monitor node to the client node: Example Copy the admin keyring from the Ceph Monitor node to the client node: Syntax Example Copy the keyrings of the users from the Ceph Monitor node to the client node: Syntax Example Map the block device image: Syntax Example This does not allow access to users in the other namespaces in the same pool. Example Verify the device: Example
[ "subscription-manager repos --enable=rhceph-7-tools-for-rhel-9-x86_64-rpms", "dnf install ceph-common", "scp root@ MONITOR_NODE :/etc/ceph/ceph.conf /etc/ceph/ceph.conf", "scp root@cluster1-node2:/etc/ceph/ceph.conf /etc/ceph/ceph.conf [email protected]'s password: ceph.conf 100% 497 724.9KB/s 00:00", "scp root@ MONITOR_NODE :/etc/ceph/ceph.client.admin.keyring /etc/ceph/ceph.client.admin.keyring", "scp root@cluster1-node2:/etc/ceph/ceph.client.admin.keyring /etc/ceph/ceph.client.admin.keyring [email protected]'s password: ceph.client.admin.keyring 100% 151 265.0KB/s 00:00", "rbd map --pool POOL_NAME IMAGE_NAME --id admin", "rbd map --pool block-device-pool image1 --id admin /dev/rbd0", "parted /dev/ MAPPED_BLOCK_DEVICE mklabel msdos", "parted /dev/rbd0 mklabel msdos Information: You may need to update /etc/fstab.", "parted /dev/ MAPPED_BLOCK_DEVICE mkpart primary xfs 0% 100%", "parted /dev/rbd0 mkpart primary xfs 0% 100% Information: You may need to update /etc/fstab.", "mkfs.xfs /dev/ MAPPED_BLOCK_DEVICE_WITH_PARTITION_NUMBER", "mkfs.xfs /dev/rbd0p1 meta-data=/dev/rbd0p1 isize=512 agcount=16, agsize=163824 blks = sectsz=512 attr=2, projid32bit=1 = crc=1 finobt=1, sparse=1, rmapbt=0 = reflink=1 data = bsize=4096 blocks=2621184, imaxpct=25 = sunit=16 swidth=16 blks naming =version 2 bsize=4096 ascii-ci=0, ftype=1 log =internal log bsize=4096 blocks=2560, version=2 = sectsz=512 sunit=16 blks, lazy-count=1 realtime =none extsz=4096 blocks=0, rtextents=0", "mkdir PATH_TO_DIRECTORY", "mkdir /mnt/ceph", "mount /dev/ MAPPED_BLOCK_DEVICE_WITH_PARTITION_NUMBER PATH_TO_DIRECTORY", "mount /dev/rbd0p1 /mnt/ceph/", "df -h PATH_TO_DIRECTORY", "df -h /mnt/ceph/ Filesystem Size Used Avail Use% Mounted on /dev/rbd0p1 10G 105M 9.9G 2% /mnt/ceph", "rbd list", "rbd device map POOL_NAME / IMAGE_NAME --id USER_NAME", "rbd device map rbd/myimage --id admin", "rbd device map POOL_NAME / IMAGE_NAME --id USER_NAME --keyring PATH_TO_KEYRING", "rbd device map POOL_NAME / IMAGE_NAME --id USER_NAME --keyfile PATH_TO_FILE", "rbd device list", "rbd device list", "rbd device unmap /dev/rbd/ POOL_NAME / IMAGE_NAME", "rbd device unmap /dev/rbd/pool1/image1", "ceph osd pool create POOL_NAME PG_NUM", "ceph osd pool create mypool 100 pool 'mypool' created", "ceph osd pool application enable POOL_NAME rbd", "ceph osd pool application enable mypool rbd enabled application 'rbd' on pool 'mypool'", "rbd pool init -p POOL_NAME", "rbd pool init -p mypool", "rbd namespace create --namespace NAMESPACE", "rbd namespace create --namespace namespace1 rbd namespace create --namespace namespace2 rbd namespace ls --format=json [{\"name\":\"namespace2\"},{\"name\":\"namespace1\"}]", "ceph auth get-or-create client. USER_NAME mon 'profile rbd' osd 'profile rbd pool=rbd namespace= NAMESPACE ' -o /etc/ceph/client. USER_NAME .keyring", "ceph auth get-or-create client.testuser mon 'profile rbd' osd 'profile rbd pool=rbd namespace=namespace1' -o /etc/ceph/client.testuser.keyring ceph auth get-or-create client.newuser mon 'profile rbd' osd 'profile rbd pool=rbd namespace=namespace2' -o /etc/ceph/client.newuser.keyring", "ceph auth get client. 
USER_NAME", "ceph auth get client.testuser [client.testuser] key = AQDMp61hBf5UKRAAgjQ2In0Z3uwAase7mrlKnQ== caps mon = \"profile rbd\" caps osd = \"profile rbd pool=rbd namespace=namespace1\" exported keyring for client.testuser ceph auth get client.newuser [client.newuser] key = AQDfp61hVfLFHRAA7D80ogmZl80ROY+AUG4A+Q== caps mon = \"profile rbd\" caps osd = \"profile rbd pool=rbd namespace=namespace2\" exported keyring for client.newuser", "rbd create --namespace NAMESPACE IMAGE_NAME --size SIZE_IN_GB", "rbd create --namespace namespace1 image01 --size 1G rbd create --namespace namespace2 image02 --size 1G", "rbd --namespace NAMESPACE ls --long", "rbd --namespace namespace1 ls --long NAME SIZE PARENT FMT PROT LOCK image01 1 GiB 2 rbd --namespace namespace2 ls --long NAME SIZE PARENT FMT PROT LOCK image02 1 GiB 2", "scp /etc/ceph/ceph.conf root@ CLIENT_NODE :/etc/ceph/", "scp /etc/ceph/ceph.conf root@host02:/etc/ceph/ root@host02's password: ceph.conf 100% 497 724.9KB/s 00:00", "scp /etc/ceph/ceph.client.admin.keyring root@ CLIENT_NODE :/etc/ceph", "scp /etc/ceph/ceph.client.admin.keyring root@host02:/etc/ceph/ root@host02's password: ceph.client.admin.keyring 100% 151 265.0KB/s 00:00", "scp /etc/ceph/ceph.client. USER_NAME .keyring root@ CLIENT_NODE :/etc/ceph/", "scp /etc/ceph/client.newuser.keyring root@host02:/etc/ceph/ scp /etc/ceph/client.testuser.keyring root@host02:/etc/ceph/", "rbd map --name NAMESPACE IMAGE_NAME -n client. USER_NAME --keyring /etc/ceph/client. USER_NAME .keyring", "rbd map --namespace namespace1 image01 -n client.testuser --keyring=/etc/ceph/client.testuser.keyring /dev/rbd0 rbd map --namespace namespace2 image02 -n client.newuser --keyring=/etc/ceph/client.newuser.keyring /dev/rbd1", "rbd map --namespace namespace2 image02 -n client.testuser --keyring=/etc/ceph/client.testuser.keyring rbd: warning: image already mapped as /dev/rbd1 rbd: sysfs write failed rbd: error asserting namespace: (1) Operation not permitted In some cases useful info is found in syslog - try \"dmesg | tail\". 2021-12-06 02:49:08.106 7f8d4fde2500 -1 librbd::api::Namespace: exists: error asserting namespace: (1) Operation not permitted rbd: map failed: (1) Operation not permitted rbd map --namespace namespace1 image01 -n client.newuser --keyring=/etc/ceph/client.newuser.keyring rbd: warning: image already mapped as /dev/rbd0 rbd: sysfs write failed rbd: error asserting namespace: (1) Operation not permitted In some cases useful info is found in syslog - try \"dmesg | tail\". 2021-12-03 12:16:24.011 7fcad776a040 -1 librbd::api::Namespace: exists: error asserting namespace: (1) Operation not permitted rbd: map failed: (1) Operation not permitted", "rbd showmapped id pool namespace image snap device 0 rbd namespace1 image01 - /dev/rbd0 1 rbd namespace2 image02 - /dev/rbd1" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/block_device_guide/the-rbd-kernel-module
Chapter 2. Projects
Chapter 2. Projects 2.1. Working with projects A project allows a community of users to organize and manage their content in isolation from other communities. Note Projects starting with openshift- and kube- are default projects . These projects host cluster components that run as pods and other infrastructure components. As such, OpenShift Container Platform does not allow you to create projects starting with openshift- or kube- using the oc new-project command. Cluster administrators can create these projects using the oc adm new-project command. Important Do not run workloads in or share access to default projects. Default projects are reserved for running core cluster components. The following default projects are considered highly privileged: default , kube-public , kube-system , openshift , openshift-infra , openshift-node , and other system-created projects that have the openshift.io/run-level label set to 0 or 1 . Functionality that relies on admission plugins, such as pod security admission, security context constraints, cluster resource quotas, and image reference resolution, does not work in highly privileged projects. 2.1.1. Creating a project You can use the OpenShift Container Platform web console or the OpenShift CLI ( oc ) to create a project in your cluster. 2.1.1.1. Creating a project by using the web console You can use the OpenShift Container Platform web console to create a project in your cluster. Note Projects starting with openshift- and kube- are considered critical by OpenShift Container Platform. As such, OpenShift Container Platform does not allow you to create projects starting with openshift- using the web console. Prerequisites Ensure that you have the appropriate roles and permissions to create projects, applications, and other workloads in OpenShift Container Platform. Procedure If you are using the Administrator perspective: Navigate to Home Projects . Click Create Project : In the Create Project dialog box, enter a unique name, such as myproject , in the Name field. Optional: Add the Display name and Description details for the project. Click Create . The dashboard for your project is displayed. Optional: Select the Details tab to view the project details. Optional: If you have adequate permissions for a project, you can use the Project Access tab to provide or revoke admin, edit, and view privileges for the project. If you are using the Developer perspective: Click the Project menu and select Create Project : Figure 2.1. Create project In the Create Project dialog box, enter a unique name, such as myproject , in the Name field. Optional: Add the Display name and Description details for the project. Click Create . Optional: Use the left navigation panel to navigate to the Project view and see the dashboard for your project. Optional: In the project dashboard, select the Details tab to view the project details. Optional: If you have adequate permissions for a project, you can use the Project Access tab of the project dashboard to provide or revoke admin, edit, and view privileges for the project. Additional resources Customizing the available cluster roles using the web console 2.1.1.2. Creating a project by using the CLI If allowed by your cluster administrator, you can create a new project. Note Projects starting with openshift- and kube- are considered critical by OpenShift Container Platform. As such, OpenShift Container Platform does not allow you to create Projects starting with openshift- or kube- using the oc new-project command. 
Cluster administrators can create these projects using the oc adm new-project command. Procedure Run: USD oc new-project <project_name> \ --description="<description>" --display-name="<display_name>" For example: USD oc new-project hello-openshift \ --description="This is an example project" \ --display-name="Hello OpenShift" Note The number of projects you are allowed to create might be limited by the system administrator. After your limit is reached, you might have to delete an existing project in order to create a new one. 2.1.2. Viewing a project You can use the OpenShift Container Platform web console or the OpenShift CLI ( oc ) to view a project in your cluster. 2.1.2.1. Viewing a project by using the web console You can view the projects that you have access to by using the OpenShift Container Platform web console. Procedure If you are using the Administrator perspective: Navigate to Home Projects in the navigation menu. Select a project to view. The Overview tab includes a dashboard for your project. Select the Details tab to view the project details. Select the YAML tab to view and update the YAML configuration for the project resource. Select the Workloads tab to see workloads in the project. Select the RoleBindings tab to view and create role bindings for your project. If you are using the Developer perspective: Navigate to the Project page in the navigation menu. Select All Projects from the Project drop-down menu at the top of the screen to list all of the projects in your cluster. Select a project to view. The Overview tab includes a dashboard for your project. Select the Details tab to view the project details. If you have adequate permissions for a project, select the Project access tab view and update the privileges for the project. 2.1.2.2. Viewing a project using the CLI When viewing projects, you are restricted to seeing only the projects you have access to view based on the authorization policy. Procedure To view a list of projects, run: USD oc get projects You can change from the current project to a different project for CLI operations. The specified project is then used in all subsequent operations that manipulate project-scoped content: USD oc project <project_name> 2.1.3. Providing access permissions to your project using the Developer perspective You can use the Project view in the Developer perspective to grant or revoke access permissions to your project. Prerequisites You have created a project. Procedure To add users to your project and provide Admin , Edit , or View access to them: In the Developer perspective, navigate to the Project page. Select your project from the Project menu. Select the Project Access tab. Click Add access to add a new row of permissions to the default ones. Figure 2.2. Project permissions Enter the user name, click the Select a role drop-down list, and select an appropriate role. Click Save to add the new permissions. You can also use: The Select a role drop-down list, to modify the access permissions of an existing user. The Remove Access icon, to completely remove the access permissions of an existing user to the project. Note Advanced role-based access control is managed in the Roles and Roles Binding views in the Administrator perspective. 2.1.4. Customizing the available cluster roles using the web console In the Developer perspective of the web console, the Project Project access page enables a project administrator to grant roles to users in a project. 
By default, the available cluster roles that can be granted to users in a project are admin, edit, and view. As a cluster administrator, you can define which cluster roles are available in the Project access page for all projects cluster-wide. You can specify the available roles by customizing the spec.customization.projectAccess.availableClusterRoles object in the Console configuration resource. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure In the Administrator perspective, navigate to Administration Cluster settings . Click the Configuration tab. From the Configuration resource list, select Console operator.openshift.io . Navigate to the YAML tab to view and edit the YAML code. In the YAML code under spec , customize the list of available cluster roles for project access. The following example specifies the default admin , edit , and view roles: apiVersion: operator.openshift.io/v1 kind: Console metadata: name: cluster # ... spec: customization: projectAccess: availableClusterRoles: - admin - edit - view Click Save to save the changes to the Console configuration resource. Verification In the Developer perspective, navigate to the Project page. Select a project from the Project menu. Select the Project access tab. Click the menu in the Role column and verify that the available roles match the configuration that you applied to the Console resource configuration. 2.1.5. Adding to a project You can add items to your project by using the +Add page in the Developer perspective. Prerequisites You have created a project. Procedure In the Developer perspective, navigate to the +Add page. Select your project from the Project menu. Click on an item on the +Add page and then follow the workflow. Note You can also use the search feature in the Add* page to find additional items to add to your project. Click * under Add at the top of the page and type the name of a component in the search field. 2.1.6. Checking the project status You can use the OpenShift Container Platform web console or the OpenShift CLI ( oc ) to view the status of your project. 2.1.6.1. Checking project status by using the web console You can review the status of your project by using the web console. Prerequisites You have created a project. Procedure If you are using the Administrator perspective: Navigate to Home Projects . Select a project from the list. Review the project status in the Overview page. If you are using the Developer perspective: Navigate to the Project page. Select a project from the Project menu. Review the project status in the Overview page. 2.1.6.2. Checking project status by using the CLI You can review the status of your project by using the OpenShift CLI ( oc ). Prerequisites You have installed the OpenShift CLI ( oc ). You have created a project. Procedure Switch to your project: USD oc project <project_name> 1 1 Replace <project_name> with the name of your project. Obtain a high-level overview of the project: USD oc status 2.1.7. Deleting a project You can use the OpenShift Container Platform web console or the OpenShift CLI ( oc ) to delete a project. When you delete a project, the server updates the project status to Terminating from Active . Then, the server clears all content from a project that is in the Terminating state before finally removing the project. While a project is in Terminating status, you cannot add new content to the project. Projects can be deleted from the CLI or the web console. 2.1.7.1. 
Deleting a project by using the web console You can delete a project by using the web console. Prerequisites You have created a project. You have the required permissions to delete the project. Procedure If you are using the Administrator perspective: Navigate to Home Projects . Select a project from the list. Click the Actions drop-down menu for the project and select Delete Project . Note The Delete Project option is not available if you do not have the required permissions to delete the project. In the Delete Project? pane, confirm the deletion by entering the name of your project. Click Delete . If you are using the Developer perspective: Navigate to the Project page. Select the project that you want to delete from the Project menu. Click the Actions drop-down menu for the project and select Delete Project . Note If you do not have the required permissions to delete the project, the Delete Project option is not available. In the Delete Project? pane, confirm the deletion by entering the name of your project. Click Delete . 2.1.7.2. Deleting a project by using the CLI You can delete a project by using the OpenShift CLI ( oc ). Prerequisites You have installed the OpenShift CLI ( oc ). You have created a project. You have the required permissions to delete the project. Procedure Delete your project: USD oc delete project <project_name> 1 1 Replace <project_name> with the name of the project that you want to delete. 2.2. Creating a project as another user Impersonation allows you to create a project as a different user. 2.2.1. API impersonation You can configure a request to the OpenShift Container Platform API to act as though it originated from another user. For more information, see User impersonation in the Kubernetes documentation. 2.2.2. Impersonating a user when you create a project You can impersonate a different user when you create a project request. Because system:authenticated:oauth is the only bootstrap group that can create project requests, you must impersonate that group. Procedure To create a project request on behalf of a different user: USD oc new-project <project> --as=<user> \ --as-group=system:authenticated --as-group=system:authenticated:oauth 2.3. Configuring project creation In OpenShift Container Platform, projects are used to group and isolate related objects. When a request is made to create a new project using the web console or oc new-project command, an endpoint in OpenShift Container Platform is used to provision the project according to a template, which can be customized. As a cluster administrator, you can allow and configure how developers and service accounts can create, or self-provision , their own projects. 2.3.1. About project creation The OpenShift Container Platform API server automatically provisions new projects based on the project template that is identified by the projectRequestTemplate parameter in the cluster's project configuration resource. If the parameter is not defined, the API server creates a default template that creates a project with the requested name, and assigns the requesting user to the admin role for that project. When a project request is submitted, the API substitutes the following parameters into the template: Table 2.1. Default project template parameters Parameter Description PROJECT_NAME The name of the project. Required. PROJECT_DISPLAYNAME The display name of the project. May be empty. PROJECT_DESCRIPTION The description of the project. May be empty. PROJECT_ADMIN_USER The user name of the administrating user. 
PROJECT_REQUESTING_USER The user name of the requesting user. Access to the API is granted to developers with the self-provisioner role and the self-provisioners cluster role binding. This role is available to all authenticated developers by default. 2.3.2. Modifying the template for new projects As a cluster administrator, you can modify the default project template so that new projects are created using your custom requirements. To create your own custom project template: Prerequisites You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. Procedure Log in as a user with cluster-admin privileges. Generate the default project template: USD oc adm create-bootstrap-project-template -o yaml > template.yaml Use a text editor to modify the generated template.yaml file by adding objects or modifying existing objects. The project template must be created in the openshift-config namespace. Load your modified template: USD oc create -f template.yaml -n openshift-config Edit the project configuration resource using the web console or CLI. Using the web console: Navigate to the Administration Cluster Settings page. Click Configuration to view all configuration resources. Find the entry for Project and click Edit YAML . Using the CLI: Edit the project.config.openshift.io/cluster resource: USD oc edit project.config.openshift.io/cluster Update the spec section to include the projectRequestTemplate and name parameters, and set the name of your uploaded project template. The default name is project-request . Project configuration resource with custom project template apiVersion: config.openshift.io/v1 kind: Project metadata: # ... spec: projectRequestTemplate: name: <template_name> # ... After you save your changes, create a new project to verify that your changes were successfully applied. 2.3.3. Disabling project self-provisioning You can prevent an authenticated user group from self-provisioning new projects. Procedure Log in as a user with cluster-admin privileges. View the self-provisioners cluster role binding usage by running the following command: USD oc describe clusterrolebinding.rbac self-provisioners Example output Name: self-provisioners Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate=true Role: Kind: ClusterRole Name: self-provisioner Subjects: Kind Name Namespace ---- ---- --------- Group system:authenticated:oauth Review the subjects in the self-provisioners section. Remove the self-provisioner cluster role from the group system:authenticated:oauth . If the self-provisioners cluster role binding binds only the self-provisioner role to the system:authenticated:oauth group, run the following command: USD oc patch clusterrolebinding.rbac self-provisioners -p '{"subjects": null}' If the self-provisioners cluster role binding binds the self-provisioner role to more users, groups, or service accounts than the system:authenticated:oauth group, run the following command: USD oc adm policy \ remove-cluster-role-from-group self-provisioner \ system:authenticated:oauth Edit the self-provisioners cluster role binding to prevent automatic updates to the role. Automatic updates reset the cluster roles to the default state. 
To update the role binding using the CLI: Run the following command: USD oc edit clusterrolebinding.rbac self-provisioners In the displayed role binding, set the rbac.authorization.kubernetes.io/autoupdate parameter value to false , as shown in the following example: apiVersion: authorization.openshift.io/v1 kind: ClusterRoleBinding metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: "false" # ... To update the role binding by using a single command: USD oc patch clusterrolebinding.rbac self-provisioners -p '{ "metadata": { "annotations": { "rbac.authorization.kubernetes.io/autoupdate": "false" } } }' Log in as an authenticated user and verify that it can no longer self-provision a project: USD oc new-project test Example output Error from server (Forbidden): You may not request a new project via this API. Consider customizing this project request message to provide more helpful instructions specific to your organization. 2.3.4. Customizing the project request message When a developer or a service account that is unable to self-provision projects makes a project creation request using the web console or CLI, the following error message is returned by default: You may not request a new project via this API. Cluster administrators can customize this message. Consider updating it to provide further instructions on how to request a new project specific to your organization. For example: To request a project, contact your system administrator at [email protected] . To request a new project, fill out the project request form located at https://internal.example.com/openshift-project-request . To customize the project request message: Procedure Edit the project configuration resource using the web console or CLI. Using the web console: Navigate to the Administration Cluster Settings page. Click Configuration to view all configuration resources. Find the entry for Project and click Edit YAML . Using the CLI: Log in as a user with cluster-admin privileges. Edit the project.config.openshift.io/cluster resource: USD oc edit project.config.openshift.io/cluster Update the spec section to include the projectRequestMessage parameter and set the value to your custom message: Project configuration resource with custom project request message apiVersion: config.openshift.io/v1 kind: Project metadata: # ... spec: projectRequestMessage: <message_string> # ... For example: apiVersion: config.openshift.io/v1 kind: Project metadata: # ... spec: projectRequestMessage: To request a project, contact your system administrator at [email protected]. # ... After you save your changes, attempt to create a new project as a developer or service account that is unable to self-provision projects to verify that your changes were successfully applied.
[ "oc new-project <project_name> --description=\"<description>\" --display-name=\"<display_name>\"", "oc new-project hello-openshift --description=\"This is an example project\" --display-name=\"Hello OpenShift\"", "oc get projects", "oc project <project_name>", "apiVersion: operator.openshift.io/v1 kind: Console metadata: name: cluster spec: customization: projectAccess: availableClusterRoles: - admin - edit - view", "oc project <project_name> 1", "oc status", "oc delete project <project_name> 1", "oc new-project <project> --as=<user> --as-group=system:authenticated --as-group=system:authenticated:oauth", "oc adm create-bootstrap-project-template -o yaml > template.yaml", "oc create -f template.yaml -n openshift-config", "oc edit project.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestTemplate: name: <template_name>", "oc describe clusterrolebinding.rbac self-provisioners", "Name: self-provisioners Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate=true Role: Kind: ClusterRole Name: self-provisioner Subjects: Kind Name Namespace ---- ---- --------- Group system:authenticated:oauth", "oc patch clusterrolebinding.rbac self-provisioners -p '{\"subjects\": null}'", "oc adm policy remove-cluster-role-from-group self-provisioner system:authenticated:oauth", "oc edit clusterrolebinding.rbac self-provisioners", "apiVersion: authorization.openshift.io/v1 kind: ClusterRoleBinding metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: \"false\"", "oc patch clusterrolebinding.rbac self-provisioners -p '{ \"metadata\": { \"annotations\": { \"rbac.authorization.kubernetes.io/autoupdate\": \"false\" } } }'", "oc new-project test", "Error from server (Forbidden): You may not request a new project via this API.", "You may not request a new project via this API.", "oc edit project.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestMessage: <message_string>", "apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestMessage: To request a project, contact your system administrator at [email protected]." ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/building_applications/projects
8.23. boost
8.23. boost 8.23.1. RHBA-2014:1440 - boost bug fix and enhancement update Updated boost packages that fix several bugs and add one enhancement are now available for Red Hat Enterprise Linux 6. The boost packages contain a large number of free peer-reviewed portable C++ source libraries. These libraries are suitable for tasks such as portable file-systems and time/date abstraction, serialization, unit testing, thread creation and multi-process synchronization, parsing, graphing, regular expression manipulation, and many others. Bug Fixes BZ# 1037680 Due to the way the Python programming language was packaged for Red Hat Enterprise Linux, the boost packages could not be provided for secondary architectures. For example, boost-devel.i686 was not available on the x86-64 architecture. The Python packaging has been updated, and it is now possible to install secondary-architecture versions of the boost packages. BZ# 1021004 A coding error in the shared_ptr pointer previously caused a memory leak when serializing and unserializing shared pointers. The shared_ptr code has been corrected and the memory leak now no longer occurs. BZ# 969183 Due to an error in threading configuration of GNU Compiler Collection (GCC) version 4.7 or later, Boost failed to detect the support for multithreading versions of GCC. This patch fixes the error and Boost now detects multithreading support in the described circumstances correctly. BZ# 1108268 Prior to this update, a number of boost libraries were not compatible with GCC provided with Red Hat Developer Toolset. A fix has been implemented to address this problem and the affected libraries now properly work with Red Hat Developer Toolset GCC. BZ# 801534 The mpi.so library was previously missing from the boost libraries. Consequently, using the Message Passing Interface (MPI) in combination with Python scripts failed. With this update, mpi.so is included in the boost packages and using MPI with Python works as expected. In addition, this update adds the following Enhancement BZ# 1132455 The MPICH2 library has been replaced with a later version, MPICH 3.0. Note that Boost packaging has been updated accordingly and new packages are named boost-mpich* instead of boost-mpich2*. Users of Boost are advised to upgrade to these updated packages, which fix these bugs and add this enhancement.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/boost
probe::tcp.disconnect
probe::tcp.disconnect Name probe::tcp.disconnect - TCP socket disconnection Synopsis tcp.disconnect Values flags TCP flags (e.g. FIN, etc) daddr A string representing the destination IP address sport TCP source port family IP address family name Name of this probe saddr A string representing the source IP address dport TCP destination port sock Network socket Context The process which disconnects tcp
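As a usage sketch, the one-liner below prints the fields described above as connections are torn down. It assumes SystemTap and the matching kernel debuginfo packages are installed and that the probe is run as root; the output format is arbitrary.
stap -e 'probe tcp.disconnect {
  printf("%s disconnects %s:%d -> %s:%d\n",
         execname(), saddr, sport, daddr, dport)
}'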
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-tcp-disconnect
8.191. sanlock
8.191. sanlock 8.191.1. RHBA-2013:1632 - sanlock bug fix and enhancement update Updated sanlock packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The sanlock packages provide a shared storage lock manager. Hosts with shared access to a block device or a file can use sanlock to synchronize their activities. VDSM and libvirt use sanlock to synchronize access to virtual machine images. Note The sanlock packages have been upgraded to upstream version 2.8, which provides a number of bug fixes and enhancements over the previous version, including a new API provided for applications to request the release of a resource. (BZ# 960989 ) Bug Fix BZ# 961032 Previously, the wdmd daemon did not always select the functional device when some watchdog modules provided two devices. Consequently, wdmd did not work correctly in some instances. A patch has been applied to address this bug, and wdmd now verifies the state of both devices and selects the one that works properly. Enhancements BZ# 960993 With this update, a new API has been provided for applications to verify the status of hosts in a lockspace. BZ# 966088 With this update, a new API has been provided for applications to check the status of hosts that are holding a resource lease. Users of sanlock are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/sanlock
Machine management
Machine management OpenShift Container Platform 4.12 Adding and maintaining cluster machines Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/machine_management/index
10.2.4.2. The mod_ssl Module
10.2.4.2. The mod_ssl Module The configuration for mod_ssl has been moved from the httpd.conf file into the /etc/httpd/conf.d/ssl.conf file. For this file to be loaded, and for mod_ssl to work, the statement Include conf.d/*.conf must be in the httpd.conf file as described in Section 10.2.1.3, "Dynamic Shared Object (DSO) Support" . ServerName directives in SSL virtual hosts must explicitly specify the port number. For example, the following is a sample Apache HTTP Server 1.3 directive: To migrate this setting to Apache HTTP Server 2.0, use the following structure: It is also important to note that both the SSLLog and SSLLogLevel directives have been removed. The mod_ssl module now obeys the ErrorLog and LogLevel directives. Refer to Section 10.5.35, " ErrorLog " and Section 10.5.36, " LogLevel " for more information about these directives. For more on this topic, refer to the following documentation on the Apache Software Foundation's website: http://httpd.apache.org/docs-2.0/mod/mod_ssl.html http://httpd.apache.org/docs-2.0/vhosts/
[ "<VirtualHost _default_:443> # General setup for the virtual host ServerName ssl.example.name </VirtualHost>", "<VirtualHost _default_:443> # General setup for the virtual host ServerName ssl.host.name :443 </VirtualHost>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s3-httpd-v2-mig-mod-ssl
Chapter 12. Performing basic overcloud administration tasks
Chapter 12. Performing basic overcloud administration tasks This chapter contains information about basic tasks you might need to perform during the lifecycle of your overcloud. 12.1. Managing containerized services Red Hat OpenStack Platform (RHOSP) runs services in containers on the undercloud and overcloud nodes. In certain situations, you might need to control the individual services on a host. This section contains information about some common commands you can run on a node to manage containerized services. Listing containers and images To list running containers, run the following command: To include stopped or failed containers in the command output, add the --all option to the command: To list container images, run the following command: Inspecting container properties To view the properties of a container or container images, use the podman inspect command. For example, to inspect the keystone container, run the following command: Managing containers with Systemd services versions of OpenStack Platform managed containers with Docker and its daemon. In OpenStack Platform 15, the Systemd services interface manages the lifecycle of the containers. Each container is a service and you run Systemd commands to perform specific operations for each container. Note It is not recommended to use the Podman CLI to stop, start, and restart containers because Systemd applies a restart policy. Use Systemd service commands instead. To check a container status, run the systemctl status command: To stop a container, run the systemctl stop command: To start a container, run the systemctl start command: To restart a container, run the systemctl restart command: Because no daemon monitors the containers status, Systemd automatically restarts most containers in these situations: Clean exit code or signal, such as running podman stop command. Unclean exit code, such as the podman container crashing after a start. Unclean signals. Timeout if the container takes more than 1m 30s to start. For more information about Systemd services, see the systemd.service documentation . Note Any changes to the service configuration files within the container revert after restarting the container. This is because the container regenerates the service configuration based on files on the local file system of the node in /var/lib/config-data/puppet-generated/ . For example, if you edit /etc/keystone/keystone.conf within the keystone container and restart the container, the container regenerates the configuration using /var/lib/config-data/puppet-generated/keystone/etc/keystone/keystone.conf on the local file system of the node, which overwrites any the changes that were made within the container before the restart. Monitoring podman containers with Systemd timers The Systemd timers interface manages container health checks. Each container has a timer that runs a service unit that executes health check scripts. To list all OpenStack Platform containers timers, run the systemctl list-timers command and limit the output to lines containing tripleo : To check the status of a specific container timer, run the systemctl status command for the healthcheck service: To stop, start, restart, and show the status of a container timer, run the relevant systemctl command against the .timer Systemd resource. 
For example, to check the status of the tripleo_keystone_healthcheck.timer resource, run the following command: If the healthcheck service is disabled but the timer for that service is present and enabled, it means that the check is currently timed out, but will be run according to timer. You can also start the check manually. Note The podman ps command does not show the container health status. Checking container logs OpenStack Platform 16 introduces a new logging directory /var/log/containers/stdout that contains the standard output (stdout) all of the containers, and standard errors (stderr) consolidated in one single file for each container. Paunch and the container-puppet.py script configure podman containers to push their outputs to the /var/log/containers/stdout directory, which creates a collection of all logs, even for the deleted containers, such as container-puppet-* containers. The host also applies log rotation to this directory, which prevents huge files and disk space issues. In case a container is replaced, the new container outputs to the same log file, because podman uses the container name instead of container ID. You can also check the logs for a containerized service with the podman logs command. For example, to view the logs for the keystone container, run the following command: Accessing containers To enter the shell for a containerized service, use the podman exec command to launch /bin/bash . For example, to enter the shell for the keystone container, run the following command: To enter the shell for the keystone container as the root user, run the following command: To exit the container, run the following command: 12.2. Modifying the overcloud environment You can modify the overcloud to add additional features or alter existing operations. To modify the overcloud, make modifications to your custom environment files and heat templates, then rerun the openstack overcloud deploy command from your initial overcloud creation. For example, if you created an overcloud using Section 7.13, "Deployment command" , rerun the following command: Director checks the overcloud stack in heat, and then updates each item in the stack with the environment files and heat templates. Director does not recreate the overcloud, but rather changes the existing overcloud. Important Removing parameters from custom environment files does not revert the parameter value to the default configuration. You must identify the default value from the core heat template collection in /usr/share/openstack-tripleo-heat-templates and set the value in your custom environment file manually. If you want to include a new environment file, add it to the openstack overcloud deploy command with the`-e` option. For example: This command includes the new parameters and resources from the environment file into the stack. Important It is not advisable to make manual modifications to the overcloud configuration because director might overwrite these modifications later. 12.3. Importing virtual machines into the overcloud You can migrate virtual machines from an existing OpenStack environment to your Red Hat OpenStack Platform (RHOSP) environment. Procedure On the existing OpenStack environment, create a new image by taking a snapshot of a running server and download the image: Copy the exported image to the undercloud node: Log in to the undercloud as the stack user. 
Source the overcloudrc file: Upload the exported image into the overcloud: Launch a new instance: Important These commands copy each virtual machine disk from the existing OpenStack environment to the new Red Hat OpenStack Platform. QCOW snapshots lose their original layering system. This process migrates all instances from a Compute node. You can now perform maintenance on the node without any instance downtime. To return the Compute node to an enabled state, run the following command: 12.4. Running the dynamic inventory script Director can run Ansible-based automation in your Red Hat OpenStack Platform (RHOSP) environment. Director uses the tripleo-ansible-inventory command to generate a dynamic inventory of nodes in your environment. Procedure To view a dynamic inventory of nodes, run the tripleo-ansible-inventory command after sourcing stackrc : Use the --list option to return details about all hosts. This command outputs the dynamic inventory in a JSON format: To execute Ansible playbooks on your environment, run the ansible command and include the full path of the dynamic inventory tool using the -i option. For example: Replace [HOSTS] with the type of hosts that you want to use to use: controller for all Controller nodes compute for all Compute nodes overcloud for all overcloud child nodes. For example, controller and compute nodes undercloud for the undercloud "*" for all nodes Replace [OTHER OPTIONS] with additional Ansible options. Use the --ssh-extra-args='-o StrictHostKeyChecking=no' option to bypass confirmation on host key checking. Use the -u [USER] option to change the SSH user that executes the Ansible automation. The default SSH user for the overcloud is automatically defined using the ansible_ssh_user parameter in the dynamic inventory. The -u option overrides this parameter. Use the -m [MODULE] option to use a specific Ansible module. The default is command , which executes Linux commands. Use the -a [MODULE_ARGS] option to define arguments for the chosen module. Important Custom Ansible automation on the overcloud is not part of the standard overcloud stack. Subsequent execution of the openstack overcloud deploy command might override Ansible-based configuration for OpenStack Platform services on overcloud nodes. 12.5. Removing the overcloud To remove the overcloud, complete the following steps: Delete an existing overcloud: Confirm that the overcloud is no longer present in the output of the openstack stack list command: Deletion takes a few minutes. When the deletion completes, follow the standard steps in the deployment scenarios to recreate your overcloud.
[ "sudo podman ps", "sudo podman ps --all", "sudo podman images", "sudo podman inspect keystone", "sudo systemctl status tripleo_keystone ● tripleo_keystone.service - keystone container Loaded: loaded (/etc/systemd/system/tripleo_keystone.service; enabled; vendor preset: disabled) Active: active (running) since Fri 2019-02-15 23:53:18 UTC; 2 days ago Main PID: 29012 (podman) CGroup: /system.slice/tripleo_keystone.service └─29012 /usr/bin/podman start -a keystone", "sudo systemctl stop tripleo_keystone", "sudo systemctl start tripleo_keystone", "sudo systemctl restart tripleo_keystone", "sudo systemctl list-timers | grep tripleo Mon 2019-02-18 20:18:30 UTC 1s left Mon 2019-02-18 20:17:26 UTC 1min 2s ago tripleo_nova_metadata_healthcheck.timer tripleo_nova_metadata_healthcheck.service Mon 2019-02-18 20:18:33 UTC 4s left Mon 2019-02-18 20:17:03 UTC 1min 25s ago tripleo_mistral_engine_healthcheck.timer tripleo_mistral_engine_healthcheck.service Mon 2019-02-18 20:18:34 UTC 5s left Mon 2019-02-18 20:17:23 UTC 1min 5s ago tripleo_keystone_healthcheck.timer tripleo_keystone_healthcheck.service Mon 2019-02-18 20:18:35 UTC 6s left Mon 2019-02-18 20:17:13 UTC 1min 15s ago tripleo_memcached_healthcheck.timer tripleo_memcached_healthcheck.service (...)", "sudo systemctl status tripleo_keystone_healthcheck.service ● tripleo_keystone_healthcheck.service - keystone healthcheck Loaded: loaded (/etc/systemd/system/tripleo_keystone_healthcheck.service; disabled; vendor preset: disabled) Active: inactive (dead) since Mon 2019-02-18 20:22:46 UTC; 22s ago Process: 115581 ExecStart=/usr/bin/podman exec keystone /openstack/healthcheck (code=exited, status=0/SUCCESS) Main PID: 115581 (code=exited, status=0/SUCCESS) Feb 18 20:22:46 undercloud.localdomain systemd[1]: Starting keystone healthcheck Feb 18 20:22:46 undercloud.localdomain podman[115581]: {\"versions\": {\"values\": [{\"status\": \"stable\", \"updated\": \"2019-01-22T00:00:00Z\", \"...\"}]}]}} Feb 18 20:22:46 undercloud.localdomain podman[115581]: 300 192.168.24.1:35357 0.012 seconds Feb 18 20:22:46 undercloud.localdomain systemd[1]: Started keystone healthcheck.", "sudo systemctl status tripleo_keystone_healthcheck.timer ● tripleo_keystone_healthcheck.timer - keystone container healthcheck Loaded: loaded (/etc/systemd/system/tripleo_keystone_healthcheck.timer; enabled; vendor preset: disabled) Active: active (waiting) since Fri 2019-02-15 23:53:18 UTC; 2 days ago", "sudo podman logs keystone", "sudo podman exec -it keystone /bin/bash", "sudo podman exec --user 0 -it <NAME OR ID> /bin/bash", "exit", "source ~/stackrc (undercloud) USD openstack overcloud deploy --templates -e ~/templates/node-info.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e ~/templates/network-environment.yaml -e ~/templates/storage-environment.yaml --ntp-server pool.ntp.org", "source ~/stackrc (undercloud) USD openstack overcloud deploy --templates -e ~/templates/new-environment.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e ~/templates/network-environment.yaml -e ~/templates/storage-environment.yaml -e ~/templates/node-info.yaml --ntp-server pool.ntp.org", "openstack server image create instance_name --name image_name openstack image save image_name --file exported_vm.qcow2", "scp exported_vm.qcow2 [email protected]:~/.", "source ~/overcloudrc", "(overcloud) USD openstack image create imported_image --file exported_vm.qcow2 --disk-format qcow2 --container-format bare", "(overcloud) USD openstack 
server create imported_instance --key-name default --flavor m1.demo --image imported_image --nic net-id=net_id", "source ~/overcloudrc (overcloud) USD openstack compute service set [hostname] nova-compute --enable", "source ~/stackrc (undercloud) USD tripleo-ansible-inventory --list", "{\"overcloud\": {\"children\": [\"controller\", \"compute\"], \"vars\": {\"ansible_ssh_user\": \"heat-admin\"}}, \"controller\": [\"192.168.24.2\"], \"undercloud\": {\"hosts\": [\"localhost\"], \"vars\": {\"overcloud_horizon_url\": \"http://192.168.24.4:80/dashboard\", \"overcloud_admin_password\": \"abcdefghijklm12345678\", \"ansible_connection\": \"local\"}}, \"compute\": [\"192.168.24.3\"]}", "(undercloud) USD ansible [HOSTS] -i /bin/tripleo-ansible-inventory [OTHER OPTIONS]", "source ~/stackrc (undercloud) USD openstack overcloud delete overcloud", "(undercloud) USD openstack stack list" ]
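As an illustrative combination of the container and dynamic inventory material above, the ad-hoc Ansible call below lists the containers running on every Controller node. The heat-admin user and the inventory path follow the defaults shown in this chapter; the podman arguments are an arbitrary example of a command to run on the nodes.
source ~/stackrc
ansible controller -i /bin/tripleo-ansible-inventory \
  -b -m command -a "podman ps --all" \
  --ssh-extra-args='-o StrictHostKeyChecking=no' -u heat-admin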
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/director_installation_and_usage/performing-basic-overcloud-administration-tasks
4.19. IBM BladeCenter
4.19. IBM BladeCenter Table 4.20, "IBM BladeCenter" lists the fence device parameters used by fence_bladecenter , the fence agent for IBM BladeCenter. Table 4.20. IBM BladeCenter luci Field cluster.conf Attribute Description Name name A name for the IBM BladeCenter device connected to the cluster. IP Address or Hostname ipaddr The IP address or host name assigned to the device. IP port (optional) ipport TCP port to use for connection with the device. Login login The login name used to access the device. Password passwd The password used to authenticate the connection to the device. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. Use SSH secure Indicates that system will use SSH to access the device. When using SSH, you must specify either a password, a password script, or an identity file. SSH Options ssh_options SSH options to use. The default value is -1 -c blowfish . Path to SSH Identity File identity_file The identity file for SSH. Figure 4.14, "IBM BladeCenter" shows the configuration screen for adding an IBM BladeCenter fence device. Figure 4.14. IBM BladeCenter The following command creates a fence device instance for an IBM BladeCenter device: The following is the cluster.conf entry for the fence_bladecenter device:
[ "ccs -f cluster.conf --addfencedev bladecentertest1 agent=fence_bladecenter ipaddr=192.168.0.1 login=root passwd=password123 power_wait=60", "<fencedevices> <fencedevice agent=\"fence_bladecenter\" ipaddr=\"192.168.0.1\" login=\"root\" name=\"bladecentertest1\" passwd=\"password123\" power_wait=\"60\"/> </fencedevices>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/fence_configuration_guide/s1-software-fence-bladectr-ca
Chapter 2. Installation
Chapter 2. Installation This chapter guides you through the steps to install AMQ OpenWire JMS in your environment. 2.1. Prerequisites You must have a subscription to access AMQ release files and repositories. To build programs with AMQ OpenWire JMS, you must install Apache Maven . To use AMQ OpenWire JMS, you must install Java. 2.2. Using the Red Hat Maven repository Configure your Maven environment to download the client library from the Red Hat Maven repository. Procedure Add the Red Hat repository to your Maven settings or POM file. For example configuration files, see Section B.1, "Using the online repository" . <repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository> Add the library dependency to your POM file. <dependency> <groupId>org.apache.activemq</groupId> <artifactId>activemq-client</artifactId> <version>5.11.0.redhat-630416</version> </dependency> The client is now available in your Maven project. 2.3. Installing a local Maven repository As an alternative to the online repository, AMQ OpenWire JMS can be installed to your local filesystem as a file-based Maven repository. Procedure Use your subscription to download the AMQ Broker 7.9.0 Maven repository .zip file. Extract the file contents into a directory of your choosing. On Linux or UNIX, use the unzip command to extract the file contents. USD unzip amq-broker-7.9.0-maven-repository.zip On Windows, right-click the .zip file and select Extract All . Configure Maven to use the repository in the maven-repository directory inside the extracted install directory. For more information, see Section B.2, "Using a local repository" . 2.4. Installing the examples Procedure Use your subscription to download the AMQ Broker 7.9.0 .zip file. Extract the file contents into a directory of your choosing. On Linux or UNIX, use the unzip command to extract the file contents. USD unzip amq-broker-7.9.0.zip On Windows, right-click the .zip file and select Extract All . When you extract the contents of the .zip file, a directory named amq-broker-7.9.0 is created. This is the top-level directory of the installation and is referred to as <install-dir> throughout this document.
[ "<repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository>", "<dependency> <groupId>org.apache.activemq</groupId> <artifactId>activemq-client</artifactId> <version>5.11.0.redhat-630416</version> </dependency>", "unzip amq-broker-7.9.0-maven-repository.zip", "unzip amq-broker-7.9.0.zip" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_the_amq_openwire_jms_client/installation
Chapter 3. Configuring tags and labels in cost management
Chapter 3. Configuring tags and labels in cost management You must configure tags in each integration before cost management can use the tags to automatically organize your cost data. After adding your integrations to cost management: Tag or label resources on each of your integrations. See Section 3.2, "Configuring tags on your integrations" . Refine and add to your tags to optimize your view of cost data. See Section 1.2, "Creating a strategy for tags" . Note See the Getting started with cost management guide for instructions on configuring integrations. 3.1. How cost management associates tags Tags in AWS and Microsoft Azure and labels in OpenShift consist of key:value pairs. When the key:value pairs match, the AWS/Azure and OpenShift costs are automatically associated by cost management. Tag matching in cost management is not case sensitive: for example, an AWS resource tagged APP and an OpenShift resource tagged app are a match: Table 3.1. Example: Tag matching Source and resource type Key Value AWS resource (RDS) APP Cost-Management OpenShift pod app cost-management If an AWS resource tag matches with multiple OpenShift projects, the cost and usage of that resource are split evenly between the matched projects. This is not the case with AWS compute resources that are matched through the instance ID-node relationship. In that case, cost and usage are broken down using information about a project's resource consumption within the OpenShift cluster. By default, cost management tracks AWS compute usage and costs by associating the Amazon EC2 instance ID or Microsoft Azure virtual machine instance ID with the OpenShift Container Platform node running on that instance. 3.1.1. Tag matching hierarchy in cost management To identify your OpenShift resources running on AWS or Azure instances, cost management matches tags between integrations in the following order: Direct resource matching (AWS EC2 instance ID or Azure virtual machine instance ID) Special OpenShift tags Custom tags 3.1.2. OpenShift label inheritance in cost management OpenShift labels follow an inheritance pattern from cluster to node, and from project to pod. You can associate costs at the node or project level without labeling every pod in your cluster. Key-value pairs from node and project labels are inherited at the pod level for pod metrics in cost management. Key-value pairs from the cluster and node labels are inherited at the project level by the persistent volume claims (PVC) at each level. You can group by cluster, node, or project labels to see relevant PVCs in those workloads. If a key already exists in the pod, then the value for that key in the pod remains. Cost management does not overwrite the pod value with the project or node value. A similar process happens from node to project. Consider the following examples. Example 1: Your organization assigns the label app and the value costpod1 to a pod. The project for this pod has the label app and the value cost-project . These resources are running on a node with the label us-east-1 . The label app and the value costpod1 remain associated with the pod. Example 2: Your organization has a project with the label app and the value cost-project . The project has three pods running and they are not labeled. Cost management associates the label app and the value cost-project with these pods. 3.1.3. Direct resource matching (instance ID) The integrations apply these identifiers automatically. 
This form of tagging provides a direct link between Microsoft Azure or AWS instances and OpenShift nodes. AWS assigns every EC2 instance a resource identifier (a number such as i-01f44b3d90ef90055 ). OpenShift nodes are matched directly to the AWS EC2 instance the cluster is running on using the AWS resource identifier. The OpenShift reports in cost management (generated from Prometheus data) include this identifier for nodes. Similarly in Microsoft Azure, each virtual machine instance ID is included in the OpenShift reports for cost management. 3.1.4. Special OpenShift tags There are three special-case AWS tags you can use to associate cost with OpenShift: openshift_cluster openshift_node openshift_project These tags have matching priority over custom tags, and are especially useful in differentiating the costs of different OpenShift clusters running on the same AWS instance. To use this tagging method to identify an OpenShift cluster, tag your AWS instance with the key openshift_cluster , and provide the OpenShift integration name as the value. In the following example, the name of OpenShift integration in the cost management application is dev-cluster : Table 3.2. Example: Special OpenShift tags Source and resource type Key Value AWS resource (RDS) openshift_cluster dev-cluster OpenShift cluster No tags needed. This will match if the name of the OpenShift integration in cost management is dev-cluster . No tags needed. 3.1.5. Custom tags You can use any key:value combination as tags, and cost management will associate identical tag key and values together. You can then group costs by tag key, account, service, region, and more to view your costs and charge for that tag. Table 3.3. Example: Custom tags Source and resource type Key Value AWS resource (RDS) team engineering OpenShift pod team engineering 3.2. Configuring tags on your integrations To control the tags that cost management imports, activate or enable the tags that you want to view for each integration: You must activate AWS tags, and are then selected and exported to cost management in the data export. For instructions, see Activating AWS tags for cost management in the Adding an Amazon Web Services (AWS) source guide. Microsoft Azure tags are exported to cost management in the cost export report configured in Configuring a daily Azure data export schedule . OpenShift Container Platform labels are exported by the Cost Management Metrics Operator and included in the metrics reports that cost management uses as input. 3.2.1. Adding tags to an AWS resource Amazon creates certain identifiers automatically, such as the EC2 instance resource identifier, or a number such as i-123456789 , which cost management uses similarly. You can also add your own tags at the individual resource level. These tags must be activated for Cost and Usage Reporting to export them to the cost management application. Configure AWS tags for cost management using the following steps: Procedure Create and apply tags to your AWS resources. Note If you converted from a compatible third-party Linux distribution to Red Hat Enterprise Linux (RHEL) and purchased the RHEL for third-party migration listing in AWS, activate the cost allocation tags for your RHEL systems on the AWS Cost Allocation tags page. Create com_redhat_rhel_conversion and set the tag value to true . If you are using ELS (Extended Lifecycle Support), create com_redhat_rhel_addon and set the value to ELS . 
Finally, create com_redhat_rhel and set the tag value to 7 or 8 to match your version of RHEL. The changes will be reflected in cost management the time cost management downloads data. Do not use host metering if you plan on tagging items for RHEL metering. Your instances could be double-billed. For instructions in the AWS documentation, see User-Defined Cost Allocation Tags . Activate the tags you want to be collected by the cost management application through the data export. In the AWS Billing console, select the tags that you want to activate from the Cost Allocation Tags area. For instructions in the AWS documentation, see Activating the AWS-Generated Cost Allocation Tags . 3.2.2. Adding tags to a Microsoft Azure resource To create identifiers for virtual machine instances automatically, add a Microsoft Azure integration, which cost management uses similarly to tags to associate Microsoft Azure resources to related OpenShift resources. Add your own tags in Microsoft Azure at the individual resource level. Note If you converted from a compatible third-party Linux distribution to Red Hat Enterprise Linux (RHEL) and purchased the RHEL for third party migration listing in Microsoft Azure, label the VMs for your RHEL systems. Create com_redhat_rhel_conversion and set the tag value to true . If you are using ELS (Extended Lifecycle Support), create com_redhat_rhel_addon and set the value to ELS . Finally, create com_redhat_rhel and set the tag value to 7 or 8 to match your version of RHEL. The changes will be reflected in cost management the time cost management downloads data. Do not use host metering if you plan on tagging items for RHEL metering. If you plan to tag, this could cause instances to be double-billed. Create and apply Microsoft Azure tags for cost management using the instructions in the Microsoft Azure documentation: Use tags to organize your Azure resources and management hierarchy . 3.2.3. Adding tags to a Google Cloud resource You can apply custom labels to Google cloud resources, such as virtual machine instances, images, and persistent disks. These labels are automatically added to your BigQuery export and sent to cost management. Procedure Create and apply labels to your Google Cloud resources. See Creating and managing labels in the Google Cloud documentation for instructions. 3.2.4. Viewing labels in an OpenShift namespace The AWS or Microsoft Azure tag equivalent in OpenShift is a label, which also consists of a key:value pair. The cost management service collects OpenShift tag data from nodes, pods, and persistent volumes (or persistent volume claims) using Prometheus metrics and Cost Management Metrics Operator. To view the available tags, navigate to a resource in the OpenShift web console. Any assigned labels are listed under the Labels heading, for example: openshift.io/cluster-monitoring=true . 3.2.5. Enabling and Disabling tags in cost management All cloud provider tags are activated in cost management by default. Sometimes having too many resource tags can affect cost management performance. Enabled tags are limited to 200 per account. Unnecessary tags can also make managing your costs more complicated when grouping tags and matching key:value pairs. Disable tags you are not actively using to reduce these potential issues. Prerequisites You must have Organization Administrator or Cost Price List Administrator privileges to change these settings in cost management. 
See Limiting access to cost management resources in Getting started with cost management for more information about user roles and access. Procedure From cost management , click Cost Management Settings . Click the Tags and labels tab. Select the tags you want to disable. Click Disable tags . This tag is now deactivated for the cost management application. You can enable tags you have disabled on the same page by selecting the tags you want to enable and clicking Enable tags . 3.2.6. Configuring Amazon Web Services cost categories in cost management You can enable or disable Amazon Web Services (AWS) cost categories in the cost management service. AWS cost categories allow your organization to group meaningful cost and usage information in addition to tags. In order to use cost categories in cost management, they must first be configured in the AWS Console. The following procedure describes how to enable cost categories in the cost management service. Prerequisites You must have Organization Administrator or Cost Price List Administrator privileges to change these settings in cost management. See Limiting access to cost management resources in Getting started with cost management for more information about user roles and access. You have an Amazon Web Services integration added to cost management with cost categories enabled through the Amazon Web Services Console. Procedure From cost management , click Cost Management Settings . Click the Cost categories tab. Select the cost category to enable. You can select more than one. Click Enable categories . The selected cost categories are now enabled for the cost management application. You can also disable cost categories by selecting the cost categories you want to disable and clicking Disable categories .
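To make the AWS tagging steps concrete, the AWS CLI call below applies the special openshift_cluster key together with a custom team tag to the instance ID used as an example earlier in this chapter. The values are illustrative, and the keys still have to be activated as cost allocation tags in the AWS Billing console before they appear in the data export.
aws ec2 create-tags \
  --resources i-01f44b3d90ef90055 \
  --tags Key=openshift_cluster,Value=dev-cluster Key=team,Value=engineering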
null
https://docs.redhat.com/en/documentation/cost_management_service/1-latest/html/managing_cost_data_using_tagging/assembly-configuring-tags-and-labels-in-cost-management
Image APIs
Image APIs OpenShift Container Platform 4.18 Reference guide for image APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/image_apis/index
B.7. cifs-utils
B.7. cifs-utils B.7.1. RHBA-2011:0380 - cifs-utils bug fix update An updated cifs-utils package that fixes a bug is now available for Red Hat Enterprise Linux 6. The Server Message Block (SMB), also known as Common Internet File System (CIFS), is a standard file-sharing protocol widely deployed on Windows machines. The tools included in this package work in conjunction with support in the kernel to allow users to mount a SMB/CIFS share onto a client, and use it as if it were a standard Linux file system. Bug Fix BZ# 668366 Due to an error in the cifs.upcall utility, Generic Security Services Application Program Interface (GSSAPI) channel bindings in Kerberos authentication messages were not set properly. This would cause some servers to reject authentication requests. Consequent to this, an attempt to mount a CIFS share with the security mode set to "krb5" could fail with the following error: This update corrects the cifs.upcall utility to set the GSSAPI channel bindings properly, and such CIFS shares can now be mounted as expected. All users of cifs-utils are advised to upgrade to this updated package, which resolves this issue.
[ "mount error(5): Input/output error" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/cifs-utils
1.3. Implications for Resource Management
1.3. Implications for Resource Management Because a task can belong to only a single cgroup in any one hierarchy, there is only one way that a task can be limited or affected by any single subsystem. This is logical: a feature, not a limitation. You can group several subsystems together so that they affect all tasks in a single hierarchy. Because cgroups in that hierarchy have different parameters set, those tasks will be affected differently. It may sometimes be necessary to refactor a hierarchy. An example would be removing a subsystem from a hierarchy that has several subsystems attached, and attaching it to a new, separate hierarchy. Conversely, if the need for splitting subsystems among separate hierarchies is reduced, you can remove a hierarchy and attach its subsystems to an existing one. The design allows for simple cgroup usage, such as setting a few parameters for specific tasks in a single hierarchy, such as one with just the cpu and memory subsystems attached. The design also allows for highly specific configuration: each task (process) on a system could be a member of each hierarchy, each of which has a single attached subsystem. Such a configuration would give the system administrator absolute control over all parameters for every single task.
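The following is a short sketch of the single-hierarchy case described above, using illustrative mount points, group names, and a placeholder PID. The cpu and memory subsystems are attached to one hierarchy, so both sets of parameters apply to any task placed in the child cgroup.
mkdir -p /cgroup/cpu_and_mem
mount -t cgroup -o cpu,memory cpu_and_mem /cgroup/cpu_and_mem
mkdir /cgroup/cpu_and_mem/group1
echo 512  > /cgroup/cpu_and_mem/group1/cpu.shares
echo 256M > /cgroup/cpu_and_mem/group1/memory.limit_in_bytes
echo 1234 > /cgroup/cpu_and_mem/group1/tasks    # 1234 stands in for a real PID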
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/resource_management_guide/sec-implications_for_resource_management
Chapter 3. Enabling Linux control group version 1 (cgroup v1)
Chapter 3. Enabling Linux control group version 1 (cgroup v1) As of OpenShift Container Platform 4.14, OpenShift Container Platform uses Linux control group version 2 (cgroup v2) in your cluster. If you are using cgroup v1 on OpenShift Container Platform 4.13 or earlier, migrating to OpenShift Container Platform 4.16 will not automatically update your cgroup configuration to version 2. A fresh installation of OpenShift Container Platform 4.14 or later will use cgroup v2 by default. However, you can enable Linux control group version 1 (cgroup v1) upon installation. Enabling cgroup v1 in OpenShift Container Platform disables all cgroup v2 controllers and hierarchies in your cluster. Important cgroup v1 is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. cgroup v2 is the current version of the Linux cgroup API. cgroup v2 offers several improvements over cgroup v1, including a unified hierarchy, safer sub-tree delegation, new features such as Pressure Stall Information , and enhanced resource management and isolation. However, cgroup v2 has different CPU, memory, and I/O management characteristics than cgroup v1. Therefore, some workloads might experience slight differences in memory or CPU usage on clusters that run cgroup v2. You can switch between cgroup v1 and cgroup v2, as needed, by editing the node.config object. For more information, see "Configuring the Linux cgroup on your nodes" in the "Additional resources" of this section. 3.1. Enabling Linux cgroup v1 during installation You can enable Linux control group version 1 (cgroup v1) when you install a cluster by creating installation manifests. Important cgroup v1 is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. Procedure Create or edit the node.config object to specify the v1 cgroup: apiVersion: config.openshift.io/v1 kind: Node metadata: name: cluster spec: cgroupMode: "v1" Proceed with the installation as usual. Additional resources OpenShift Container Platform installation overview Configuring the Linux cgroup on your nodes
[ "apiVersion: config.openshift.io/v1 kind: Node metadata: name: cluster spec: cgroupMode: \"v2\"" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installation_configuration/enabling-cgroup-v1
Chapter 20. Monitoring your cluster using JMX
Chapter 20. Monitoring your cluster using JMX Collecting metrics is critical for understanding the health and performance of your Kafka deployment. By monitoring metrics, you can actively identify issues before they become critical and make informed decisions about resource allocation and capacity planning. Without metrics, you may be left with limited visibility into the behavior of your Kafka deployment, which can make troubleshooting more difficult and time-consuming. Setting up metrics can save you time and resources in the long run, and help ensure the reliability of your Kafka deployment. Kafka brokers, ZooKeeper, Kafka Connect, and Kafka clients use Java Management Extensions (JMX) to actively expose management information. This information primarily consists of metrics that help monitor the performance and condition of the Kafka cluster. Kafka, like other Java applications, relies on managed beans or MBeans to provide this information to monitoring tools and dashboards. JMX operates at the JVM level, allowing external tools to connect and retrieve management information from the ZooKeeper, Kafka broker, and so on. To connect to the JVM, these tools must be running on the same machine and as the same user by default. 20.1. Enabling the JMX agent Enable JMX monitoring of Kafka components using JVM system properties. Use the KAFKA_JMX_OPTS environment variable to set the JMX system properties required for enabling JMX monitoring. The scripts that run the Kafka component use these properties. Procedure Set the KAFKA_JMX_OPTS environment variable with the JMX properties for enabling JMX monitoring. export KAFKA_JMX_OPTS=-Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.port=<port> -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false Replace <port> with the name of the port on which you want the Kafka component to listen for JMX connections. Add org.apache.kafka.common.metrics.JmxReporter to metric.reporters in the server.properties file. metric.reporters=org.apache.kafka.common.metrics.JmxReporter Start the Kafka component using the appropriate script, such as bin/kafka-server-start.sh for a broker or bin/connect-distributed.sh for Kafka Connect. Important It is recommended that you configure authentication and SSL to secure a remote JMX connection. For more information about the system properties needed to do this, see the Oracle documentation . 20.2. Disabling the JMX agent Disable JMX monitoring for Kafka components by updating the KAFKA_JMX_OPTS environment variable. Procedure Set the KAFKA_JMX_OPTS environment variable to disable JMX monitoring. export KAFKA_JMX_OPTS=-Dcom.sun.management.jmxremote=false Note Other JMX properties, like port, authentication, and SSL properties do not need to be specified when disabling JMX monitoring. Set auto.include.jmx.reporter to false in the Kafka server.properties file. auto.include.jmx.reporter=false Note The auto.include.jmx.reporter property is deprecated. From Kafka 4, the JMXReporter is only enabled if org.apache.kafka.common.metrics.JmxReporter is added to the metric.reporters configuration in the properties file. Start the Kafka component using the appropriate script, such as bin/kafka-server-start.sh for a broker or bin/connect-distributed.sh for Kafka Connect. 20.3. Metrics naming conventions When working with Kafka JMX metrics, it's important to understand the naming conventions used to identify and retrieve specific metrics. 
Kafka JMX metrics use the following format: Metrics format <metric_group>:type=<type_name>,name=<metric_name><other_attribute>=<value> <metric_group> is the name of the metric group <type_name> is the name of the type of metric <metric_name> is the name of the specific metric <other_attribute> represents zero or more additional attributes For example, the BytesInPerSec metric is a BrokerTopicMetrics type in the kafka.server group: kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec In some cases, metrics may include the ID of an entity. For instance, when monitoring a specific client, the metric format includes the client ID: Metrics for a specific client kafka.consumer:type=consumer-fetch-manager-metrics,client-id=<client_id> Similarly, a metric can be further narrowed down to a specific client and topic: Metrics for a specific client and topic kafka.consumer:type=consumer-fetch-manager-metrics,client-id=<client_id>,topic=<topic_id> Understanding these naming conventions will allow you to accurately specify the metrics you want to monitor and analyze. Note To view the full list of available JMX metrics for a Strimzi installation, you can use a graphical tool like JConsole. JConsole is a Java Monitoring and Management Console that allows you to monitor and manage Java applications, including Kafka. By connecting to the JVM running the Kafka component using its process ID, the tool's user interface allows you to view the list of metrics. 20.4. Analyzing Kafka JMX metrics for troubleshooting JMX provides a way to gather metrics about Kafka brokers for monitoring and managing their performance and resource usage. By analyzing these metrics, common broker issues such as high CPU usage, memory leaks, thread contention, and slow response times can be diagnosed and resolved. Certain metrics can pinpoint the root cause of these issues. JMX metrics also provide insights into the overall health and performance of a Kafka cluster. They help monitor the system's throughput, latency, and availability, diagnose issues, and optimize performance. This section explores the use of JMX metrics to help identify common issues and provides insights into the performance of a Kafka cluster. Collecting and graphing these metrics using tools like Prometheus and Grafana allows you to visualize the information returned. This can be particularly helpful in detecting issues or optimizing performance. Graphing metrics over time can also help with identifying trends and forecasting resource consumption. 20.4.1. Checking for under-replicated partitions A balanced Kafka cluster is important for optimal performance. In a balanced cluster, partitions and leaders are evenly distributed across all brokers, and I/O metrics reflect this. As well as using metrics, you can use the kafka-topics.sh tool to get a list of under-replicated partitions and identify the problematic brokers. If the number of under-replicated partitions is fluctuating or many brokers show high request latency, this typically indicates a performance issue in the cluster that requires investigation. On the other hand, a steady (unchanging) number of under-replicated partitions reported by many of the brokers in a cluster normally indicates that one of the brokers in the cluster is offline. Use the describe --under-replicated-partitions option from the kafka-topics.sh tool to show information about partitions that are currently under-replicated in the cluster. These are the partitions that have fewer replicas than the configured replication factor. 
If the output is blank, the Kafka cluster has no under-replicated partitions. Otherwise, the output shows replicas that are not in sync or available. In the following example, only 2 of the 3 replicas are in sync for each partition, with a replica missing from the ISR (in-sync replica). Returning information on under-replicated partitions from the command line bin/kafka-topics.sh --bootstrap-server :9092 --describe --under-replicated-partitions Topic: topic-1 Partition: 0 Leader: 4 Replicas: 4,2,3 Isr: 4,3 Topic: topic-1 Partition: 1 Leader: 3 Replicas: 2,3,4 Isr: 3,4 Topic: topic-1 Partition: 2 Leader: 3 Replicas: 3,4,2 Isr: 3,4 Here are some metrics to check for I/O and under-replicated partitions: Metrics to check for under-replicated partitions kafka.server:type=ReplicaManager,name=PartitionCount 1 kafka.server:type=ReplicaManager,name=LeaderCount 2 kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec 3 kafka.server:type=BrokerTopicMetrics,name=BytesOutPerSec 4 kafka.server:type=ReplicaManager,name=UnderReplicatedPartitions 5 kafka.server:type=ReplicaManager,name=UnderMinIsrPartitionCount 6 1 Total number of partitions across all topics in the cluster. 2 Total number of leaders across all topics in the cluster. 3 Rate of incoming bytes per second for each broker. 4 Rate of outgoing bytes per second for each broker. 5 Number of under-replicated partitions across all topics in the cluster. 6 Number of partitions below the minimum ISR. If topic configuration is set for high availability, with a replication factor of at least 3 for topics and a minimum number of in-sync replicas being 1 less than the replication factor, under-replicated partitions can still be usable. Conversely, partitions below the minimum ISR have reduced availability. You can monitor these using the kafka.server:type=ReplicaManager,name=UnderMinIsrPartitionCount metric and the under-min-isr-partitions option from the kafka-topics.sh tool. Tip Use Cruise Control to automate the task of monitoring and rebalancing a Kafka cluster to ensure that the partition load is evenly distributed. For more information, see Chapter 14, Using Cruise Control for cluster rebalancing . 20.4.2. Identifying performance problems in a Kafka cluster Spikes in cluster metrics may indicate a broker issue, which is often related to slow or failing storage devices or compute restraints from other processes. If there is no issue at the operating system or hardware level, an imbalance in the load of the Kafka cluster is likely, with some partitions receiving disproportionate traffic compared to others in the same Kafka topic. To anticipate performance problems in a Kafka cluster, it's useful to monitor the RequestHandlerAvgIdlePercent metric. RequestHandlerAvgIdlePercent provides a good overall indicator of how the cluster is behaving. The value of this metric is between 0 and 1. A value below 0.7 indicates that threads are busy 30% of the time and performance is starting to degrade. If the value drops below 50%, problems are likely to occur, especially if the cluster needs to scale or rebalance. At 30%, a cluster is barely usable. Another useful metric is kafka.network:type=Processor,name=IdlePercent , which you can use to monitor the extent (as a percentage) to which network processors in a Kafka cluster are idle. The metric helps identify whether the processors are over or underutilized. To ensure optimal performance, set the num.io.threads property equal to the number of processors in the system, including hyper-threaded processors. 
If the cluster is balanced, but a single client has changed its request pattern and is causing issues, reduce the load on the cluster or increase the number of brokers. It's important to note that a single disk failure on a single broker can severely impact the performance of an entire cluster. Since producer clients connect to all brokers that lead partitions for a topic, and those partitions are evenly spread over the entire cluster, a poorly performing broker will slow down produce requests and cause back pressure in the producers, slowing down requests to all brokers. A RAID (Redundant Array of Inexpensive Disks) storage configuration that combines multiple physical disk drives into a single logical unit can help prevent this issue. Here are some metrics to check the performance of a Kafka cluster: Metrics to check the performance of a Kafka cluster kafka.server:type=KafkaRequestHandlerPool,name=RequestHandlerAvgIdlePercent 1 # attributes: OneMinuteRate, FifteenMinuteRate kafka.server:type=socket-server-metrics,listener=([-.\w]+),networkProcessor=([\d]+) 2 # attributes: connection-creation-rate kafka.network:type=RequestChannel,name=RequestQueueSize 3 kafka.network:type=RequestChannel,name=ResponseQueueSize 4 kafka.network:type=Processor,name=IdlePercent,networkProcessor=([-.\w]+) 5 kafka.server:type=KafkaServer,name=TotalDiskReadBytes 6 kafka.server:type=KafkaServer,name=TotalDiskWriteBytes 7 1 Average idle percentage of the request handler threads in the Kafka broker's thread pool. The OneMinuteRate and FifteenMinuteRate attributes show the average idle percentage over the last one minute and fifteen minutes, respectively. 2 Rate at which new connections are being created on a specific network processor of a specific listener in the Kafka broker. The listener attribute refers to the name of the listener, and the networkProcessor attribute refers to the ID of the network processor. The connection-creation-rate attribute shows the rate of connection creation in connections per second. 3 Current size of the request queue. 4 Current size of the response queue. 5 Percentage of time the specified network processor is idle. The networkProcessor specifies the ID of the network processor to monitor. 6 Total number of bytes read from disk by a Kafka server. 7 Total number of bytes written to disk by a Kafka server. 20.4.3. Identifying performance problems with a Kafka controller The Kafka controller is responsible for managing the overall state of the cluster, such as broker registration, partition reassignment, and topic management. Problems with the controller in the Kafka cluster are difficult to diagnose and often fall into the category of bugs in Kafka itself. Controller issues might manifest as broker metadata being out of sync, offline replicas when the brokers appear to be fine, or actions on topics like topic creation not happening correctly. There are not many ways to monitor the controller, but you can monitor the active controller count and the controller queue size. Monitoring these metrics gives a high-level indication of whether there is a problem. Although spikes in the queue size are expected, if this value continuously increases, or stays steady at a high value and does not drop, it indicates that the controller may be stuck. If you encounter this problem, you can move the controller to a different broker, which requires shutting down the broker that is currently the controller.
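If you need to identify which broker is currently acting as the controller before moving it, the following is a minimal sketch for a ZooKeeper-based cluster; the ZooKeeper address and the service name are examples, and KRaft-based clusters expose this information through the controller quorum instead:

# Query the controller znode; the output includes the brokerid of the active controller
bin/zookeeper-shell.sh localhost:2181 get /controller

# Restart that broker to trigger a controller election
# (this assumes Kafka runs as a systemd service; adjust to how you run your brokers)
sudo systemctl restart kafka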
Here are some metrics to check the performance of a Kafka controller: Metrics to check the performance of a Kafka controller kafka.controller:type=KafkaController,name=ActiveControllerCount 1 kafka.controller:type=KafkaController,name=OfflinePartitionsCount 2 kafka.controller:type=ControllerEventManager,name=EventQueueSize 3 1 Number of active controllers in the Kafka cluster. A value of 1 indicates that there is only one active controller, which is the desired state. 2 Number of partitions that are currently offline. If this value is continuously increasing or stays at a high value, there may be a problem with the controller. 3 Size of the event queue in the controller. Events are actions that must be performed by the controller, such as creating a new topic or moving a partition to a new broker. If the value continuously increases or stays at a high value, the controller may be stuck and unable to perform the required actions. 20.4.4. Identifying problems with requests You can use the RequestHandlerAvgIdlePercent metric to determine if requests are slow. Additionally, request metrics can identify which specific requests are experiencing delays and other issues. To effectively monitor Kafka requests, it is crucial to collect two key metrics: count and 99th percentile latency, also known as tail latency. The count metric represents the number of requests processed within a specific time interval. It provides insights into the volume of requests handled by your Kafka cluster and helps identify spikes or drops in traffic. The 99th percentile latency metric measures the request latency, which is the time taken for a request to be processed. It represents the duration within which 99% of requests are handled; the remaining 1% may take even longer, and their precise duration is not reported by this metric. The choice of the 99th percentile is primarily to focus on the majority of requests and exclude outliers that can skew the results. This metric is particularly useful for identifying performance issues and bottlenecks related to the majority of requests, but it does not give a complete picture of the maximum latency experienced by a small fraction of requests. By collecting and analyzing both count and 99th percentile latency metrics, you can gain an understanding of the overall performance and health of your Kafka cluster, as well as the latency of the requests being processed.
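One way to sample the Count and 99thPercentile attributes from the command line is Kafka's JmxTool class; this is a minimal sketch, assuming JMX is enabled on port 9999 of the broker, and the port, request type, and class name (which newer Kafka releases ship as org.apache.kafka.tools.JmxTool) are assumptions to adjust for your environment:

# Poll the Count and 99thPercentile attributes for the total time of Produce requests
# (the tool polls periodically; press Ctrl+C to stop)
bin/kafka-run-class.sh kafka.tools.JmxTool \
  --jmx-url service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi \
  --object-name kafka.network:type=RequestMetrics,name=TotalTimeMs,request=Produce \
  --attributes Count,99thPercentile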
Here are some metrics to check the performance of Kafka requests: Metrics to check the performance of requests # requests: EndTxn, Fetch, FetchConsumer, FetchFollower, FindCoordinator, Heartbeat, InitProducerId, # JoinGroup, LeaderAndIsr, LeaveGroup, Metadata, Produce, SyncGroup, UpdateMetadata 1 kafka.network:type=RequestMetrics,name=RequestsPerSec,request=([\w]+) 2 kafka.network:type=RequestMetrics,name=RequestQueueTimeMs,request=([\w]+) 3 kafka.network:type=RequestMetrics,name=TotalTimeMs,request=([\w]+) 4 kafka.network:type=RequestMetrics,name=LocalTimeMs,request=([\w]+) 5 kafka.network:type=RequestMetrics,name=RemoteTimeMs,request=([\w]+) 6 kafka.network:type=RequestMetrics,name=ThrottleTimeMs,request=([\w]+) 7 kafka.network:type=RequestMetrics,name=ResponseQueueTimeMs,request=([\w]+) 8 kafka.network:type=RequestMetrics,name=ResponseSendTimeMs,request=([\w]+) 9 # attributes: Count, 99thPercentile 10 1 Request types to break down the request metrics. 2 Rate at which requests are being processed by the Kafka broker per second. 3 Time (in milliseconds) that a request spends waiting in the broker's request queue before being processed. 4 Total time (in milliseconds) that a request takes to complete, from the time it is received by the broker to the time the response is sent back to the client. 5 Time (in milliseconds) that a request spends being processed by the broker on the local machine. 6 Time (in milliseconds) that a request spends being processed by other brokers in the cluster. 7 Time (in milliseconds) that a request spends being throttled by the broker. Throttling occurs when the broker determines that a client is sending too many requests too quickly and needs to be slowed down. 8 Time (in milliseconds) that a response spends waiting in the broker's response queue before being sent back to the client. 9 Time (in milliseconds) that a response takes to be sent back to the client after it has been generated by the broker. 10 For all of the request metrics, the Count and 99thPercentile attributes show the total number of requests that have been processed and the latency within which 99% of requests complete (only the slowest 1% take longer), respectively. 20.4.5. Using metrics to check the performance of clients By analyzing client metrics, you can monitor the performance of the Kafka clients (producers and consumers) connected to a broker. This can help identify issues highlighted in broker logs, such as consumers being frequently kicked off their consumer groups, high request failure rates, or frequent disconnections.
Here are some metrics to check the performance of Kafka clients: Metrics to check the performance of client requests kafka.consumer:type=consumer-metrics,client-id=([-.\w]+) 1 # attributes: time-between-poll-avg, time-between-poll-max kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+) 2 # attributes: heartbeat-response-time-max, heartbeat-rate, join-time-max, join-rate, rebalance-rate-per-hour kafka.producer:type=producer-metrics,client-id=([-.\w]+) 3 # attributes: buffer-available-bytes, bufferpool-wait-time, request-latency-max, requests-in-flight # attributes: txn-init-time-ns-total, txn-begin-time-ns-total, txn-send-offsets-time-ns-total, txn-commit-time-ns-total, txn-abort-time-ns-total # attributes: record-error-total, record-queue-time-avg, record-queue-time-max, record-retry-rate, record-retry-total, record-send-rate, record-send-total 1 (Consumer) Average and maximum time between poll requests, which can help determine if the consumers are polling for messages frequently enough to keep up with the message flow. The time-between-poll-avg and time-between-poll-max attributes show the average and maximum time in milliseconds between successive polls by a consumer, respectively. 2 (Consumer) Metrics to monitor the coordination process between Kafka consumers and the broker coordinator. Attributes relate to the heartbeat, join, and rebalance process. 3 (Producer) Metrics to monitor the performance of Kafka producers. Attributes relate to buffer usage, request latency, in-flight requests, transactional processing, and record handling. 20.4.6. Using metrics to check the performance of topics and partitions Metrics for topics and partitions can also be helpful in diagnosing issues in a Kafka cluster. You can also use them to debug issues with a specific client when you are unable to collect client metrics. Here are some metrics to check the performance of a specific topic and partition: Metrics to check the performance of topics and partitions #Topic metrics kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=([-.\w]+) 1 kafka.server:type=BrokerTopicMetrics,name=BytesOutPerSec,topic=([-.\w]+) 2 kafka.server:type=BrokerTopicMetrics,name=FailedFetchRequestsPerSec,topic=([-.\w]+) 3 kafka.server:type=BrokerTopicMetrics,name=FailedProduceRequestsPerSec,topic=([-.\w]+) 4 kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec,topic=([-.\w]+) 5 kafka.server:type=BrokerTopicMetrics,name=TotalFetchRequestsPerSec,topic=([-.\w]+) 6 kafka.server:type=BrokerTopicMetrics,name=TotalProduceRequestsPerSec,topic=([-.\w]+) 7 #Partition metrics kafka.log:type=Log,name=Size,topic=([-.\w]+),partition=([\d]+)) 8 kafka.log:type=Log,name=NumLogSegments,topic=([-.\w]+),partition=([\d]+)) 9 kafka.log:type=Log,name=LogEndOffset,topic=([-.\w]+),partition=([\d]+)) 10 kafka.log:type=Log,name=LogStartOffset,topic=([-.\w]+),partition=([\d]+)) 11 1 Rate of incoming bytes per second for a specific topic. 2 Rate of outgoing bytes per second for a specific topic. 3 Rate of fetch requests that failed per second for a specific topic. 4 Rate of produce requests that failed per second for a specific topic. 5 Incoming message rate per second for a specific topic. 6 Total rate of fetch requests (successful and failed) per second for a specific topic. 7 Total rate of produce requests (successful and failed) per second for a specific topic. 8 Size of a specific partition's log in bytes. 9 Number of log segments in a specific partition. 10 Offset of the last message in a specific partition's log.
11 Offset of the first message in a specific partition's log. A command-line cross-check of these partition metrics is sketched after the resource list below. Additional resources Apache Kafka documentation for a full list of available metrics Prometheus documentation Grafana documentation
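The partition log sizes reported by the kafka.log metrics can also be cross-checked from the command line with the kafka-log-dirs.sh tool; this is a minimal sketch, and the bootstrap address and topic name are examples:

# Describe the on-disk log directories, including the size of each partition, for the listed topic
bin/kafka-log-dirs.sh --bootstrap-server localhost:9092 --describe --topic-list topic-1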
[ "export KAFKA_JMX_OPTS=-Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.port=<port> -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false", "metric.reporters=org.apache.kafka.common.metrics.JmxReporter", "export KAFKA_JMX_OPTS=-Dcom.sun.management.jmxremote=false", "auto.include.jmx.reporter=false", "<metric_group>:type=<type_name>,name=<metric_name><other_attribute>=<value>", "kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec", "kafka.consumer:type=consumer-fetch-manager-metrics,client-id=<client_id>", "kafka.consumer:type=consumer-fetch-manager-metrics,client-id=<client_id>,topic=<topic_id>", "bin/kafka-topics.sh --bootstrap-server :9092 --describe --under-replicated-partitions Topic: topic-1 Partition: 0 Leader: 4 Replicas: 4,2,3 Isr: 4,3 Topic: topic-1 Partition: 1 Leader: 3 Replicas: 2,3,4 Isr: 3,4 Topic: topic-1 Partition: 2 Leader: 3 Replicas: 3,4,2 Isr: 3,4", "kafka.server:type=ReplicaManager,name=PartitionCount 1 kafka.server:type=ReplicaManager,name=LeaderCount 2 kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec 3 kafka.server:type=BrokerTopicMetrics,name=BytesOutPerSec 4 kafka.server:type=ReplicaManager,name=UnderReplicatedPartitions 5 kafka.server:type=ReplicaManager,name=UnderMinIsrPartitionCount 6", "kafka.server:type=KafkaRequestHandlerPool,name=RequestHandlerAvgIdlePercent 1 attributes: OneMinuteRate, FifteenMinuteRate kafka.server:type=socket-server-metrics,listener=([-.\\w]+),networkProcessor=([\\d]+) 2 attributes: connection-creation-rate kafka.network:type=RequestChannel,name=RequestQueueSize 3 kafka.network:type=RequestChannel,name=ResponseQueueSize 4 kafka.network:type=Processor,name=IdlePercent,networkProcessor=([-.\\w]+) 5 kafka.server:type=KafkaServer,name=TotalDiskReadBytes 6 kafka.server:type=KafkaServer,name=TotalDiskWriteBytes 7", "kafka.controller:type=KafkaController,name=ActiveControllerCount 1 kafka.controller:type=KafkaController,name=OfflinePartitionsCount 2 kafka.controller:type=ControllerEventManager,name=EventQueueSize 3", "requests: EndTxn, Fetch, FetchConsumer, FetchFollower, FindCoordinator, Heartbeat, InitProducerId, JoinGroup, LeaderAndIsr, LeaveGroup, Metadata, Produce, SyncGroup, UpdateMetadata 1 kafka.network:type=RequestMetrics,name=RequestsPerSec,request=([\\w]+) 2 kafka.network:type=RequestMetrics,name=RequestQueueTimeMs,request=([\\w]+) 3 kafka.network:type=RequestMetrics,name=TotalTimeMs,request=([\\w]+) 4 kafka.network:type=RequestMetrics,name=LocalTimeMs,request=([\\w]+) 5 kafka.network:type=RequestMetrics,name=RemoteTimeMs,request=([\\w]+) 6 kafka.network:type=RequestMetrics,name=ThrottleTimeMs,request=([\\w]+) 7 kafka.network:type=RequestMetrics,name=ResponseQueueTimeMs,request=([\\w]+) 8 kafka.network:type=RequestMetrics,name=ResponseSendTimeMs,request=([\\w]+) 9 attributes: Count, 99thPercentile 10", "kafka.consumer:type=consumer-metrics,client-id=([-.\\w]+) 1 attributes: time-between-poll-avg, time-between-poll-max kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\\w]+) 2 attributes: heartbeat-response-time-max, heartbeat-rate, join-time-max, join-rate, rebalance-rate-per-hour kafka.producer:type=producer-metrics,client-id=([-.\\w]+) 3 attributes: buffer-available-bytes, bufferpool-wait-time, request-latency-max, requests-in-flight attributes: txn-init-time-ns-total, txn-begin-time-ns-total, txn-send-offsets-time-ns-total, txn-commit-time-ns-total, txn-abort-time-ns-total attributes: record-error-total, record-queue-time-avg, record-queue-time-max, 
record-retry-rate, record-retry-total, record-send-rate, record-send-total", "#Topic metrics kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=([-.\\w]+) 1 kafka.server:type=BrokerTopicMetrics,name=BytesOutPerSec,topic=([-.\\w]+) 2 kafka.server:type=BrokerTopicMetrics,name=FailedFetchRequestsPerSec,topic=([-.\\w]+) 3 kafka.server:type=BrokerTopicMetrics,name=FailedProduceRequestsPerSec,topic=([-.\\w]+) 4 kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec,topic=([-.\\w]+) 5 kafka.server:type=BrokerTopicMetrics,name=TotalFetchRequestsPerSec,topic=([-.\\w]+) 6 kafka.server:type=BrokerTopicMetrics,name=TotalProduceRequestsPerSec,topic=([-.\\w]+) 7 #Partition metrics kafka.log:type=Log,name=Size,topic=([-.\\w]+),partition=([\\d]+)) 8 kafka.log:type=Log,name=NumLogSegments,topic=([-.\\w]+),partition=([\\d]+)) 9 kafka.log:type=Log,name=LogEndOffset,topic=([-.\\w]+),partition=([\\d]+)) 10 kafka.log:type=Log,name=LogStartOffset,topic=([-.\\w]+),partition=([\\d]+)) 11" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/using_amq_streams_on_rhel/monitoring-str
Chapter 58. Ensuring the presence of host-based access control rules in IdM using Ansible playbooks
Chapter 58. Ensuring the presence of host-based access control rules in IdM using Ansible playbooks Ansible is an automation tool used to configure systems, deploy software, and perform rolling updates. It includes support for Identity Management (IdM). Learn more about Identity Management (IdM) host-based access policies and how to define them using Ansible . 58.1. Host-based access control rules in IdM Host-based access control (HBAC) rules define which users or user groups can access which hosts or host groups by using which services or services in a service group. As a system administrator, you can use HBAC rules to achieve the following goals: Limit access to a specified system in your domain to members of a specific user group. Allow only a specific service to be used to access systems in your domain. By default, IdM is configured with a default HBAC rule named allow_all , which means universal access to every host for every user via every relevant service in the entire IdM domain. You can fine-tune access to different hosts by replacing the default allow_all rule with your own set of HBAC rules. For centralized and simplified access control management, you can apply HBAC rules to user groups, host groups, or service groups instead of individual users, hosts, or services. 58.2. Ensuring the presence of an HBAC rule in IdM using an Ansible playbook Follow this procedure to ensure the presence of a host-based access control (HBAC) rule in Identity Management (IdM) using an Ansible playbook. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. The users and user groups you want to use for your HBAC rule exist in IdM. See Managing user accounts using Ansible playbooks and Ensuring the presence of IdM groups and group members using Ansible playbooks for details. The hosts and host groups to which you want to apply your HBAC rule exist in IdM. See Managing hosts using Ansible playbooks and Managing host groups using Ansible playbooks for details. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Create your Ansible playbook file that defines the HBAC policy whose presence you want to ensure. To simplify this step, you can copy and modify the example in the /usr/share/doc/ansible-freeipa/playbooks/hbacrule/ensure-hbacrule-allhosts-present.yml file: Run the playbook: Verification Log in to the IdM Web UI as administrator. Navigate to Policy Host-Based-Access-Control HBAC Test . In the Who tab, select idm_user. In the Accessing tab, select client.idm.example.com . In the Via service tab, select sshd . In the Rules tab, select login . In the Run test tab, click the Run test button. If you see ACCESS GRANTED, the HBAC rule is implemented successfully. Additional resources See the README-hbacsvc.md , README-hbacsvcgroup.md , and README-hbacrule.md files in the /usr/share/doc/ansible-freeipa directory. See the playbooks in the subdirectories of the /usr/share/doc/ansible-freeipa/playbooks directory.
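The HBAC Test performed in the Web UI also has a command-line equivalent that you can run on an IdM-enrolled host; the following is a minimal sketch using the ipa hbactest utility, where the user, host, service, and rule names follow the example in this chapter:

# Authenticate as an IdM user that is allowed to run the test
kinit admin

# Simulate the access request and report whether the named rule grants it
ipa hbactest --user=idm_user --host=client.idm.example.com --service=sshd --rules=login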
[ "[ipaserver] server.idm.example.com", "--- - name: Playbook to handle hbacrules hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure idm_user can access client.idm.example.com via the sshd service - ipahbacrule: ipaadmin_password: \"{{ ipaadmin_password }}\" name: login user: idm_user host: client.idm.example.com hbacsvc: - sshd state: present", "ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-new-hbacrule-present.yml" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_identity_management/ensuring-the-presence-of-host-based-access-control-rules-in-idm-using-ansible-playbooks_configuring-and-managing-idm
Chapter 1. Getting started with Dev Spaces
Chapter 1. Getting started with Dev Spaces If your organization is already running an OpenShift Dev Spaces instance, you can get started as a new user by learning how to start a new workspace, manage your workspaces, and authenticate yourself to a Git server from a workspace: Section 1.1, "Starting a workspace from a Git repository URL" Section 1.1.1, "Optional parameters for the URLs for starting a new workspace" Section 1.2, "Starting a workspace from a raw devfile URL" Section 1.3, "Basic actions you can perform on a workspace" Section 1.4, "Authenticating to a Git server from a workspace" Section 1.5, "Using the fuse-overlayfs storage driver for Podman and Buildah" 1.1. Starting a workspace from a Git repository URL With OpenShift Dev Spaces, you can use a URL in your browser to start a new workspace that contains a clone of a Git repository. This way, you can clone a Git repository that is hosted on GitHub, GitLab, Bitbucket, or Microsoft Azure DevOps server instances. Tip You can also use the Git Repository URL field on the Create Workspace page of your OpenShift Dev Spaces dashboard to enter the URL of a Git repository to start a new workspace. Important If you use an SSH URL to start a new workspace, you must propagate the SSH key. See Configuring DevWorkspaces to use SSH keys for Git operations for more information. If the SSH URL points to a private repository, you must apply an access token to be able to fetch the devfile.yaml content. You can do this either by accepting an SCM authentication page or following a Personal Access Token procedure. Important Configure a personal access token to access private repositories. See Section 6.1.2, "Using a Git-provider access token" . Prerequisites Your organization has a running instance of OpenShift Dev Spaces. You know the FQDN URL of your organization's OpenShift Dev Spaces instance: https:// <openshift_dev_spaces_fqdn> . Optional: You have authentication to the Git server configured. Your Git repository maintainer keeps the devfile.yaml or .devfile.yaml file in the root directory of the Git repository. (For alternative file names and file paths, see Section 1.1.1, "Optional parameters for the URLs for starting a new workspace" .) Tip You can also start a new workspace by supplying the URL of a Git repository that contains no devfile. Doing so results in a workspace with the Universal Developer Image and with Microsoft Visual Studio Code - Open Source as the workspace IDE. Procedure To start a new workspace with a clone of a Git repository: Optional: Visit your OpenShift Dev Spaces dashboard pages to authenticate to your organization's instance of OpenShift Dev Spaces. Visit the URL to start a new workspace using the basic syntax: Tip You can extend this URL with optional parameters: 1 See Section 1.1.1, "Optional parameters for the URLs for starting a new workspace" . Tip You can use Git+SSH URLs to start a new workspace. See Configuring DevWorkspaces to use SSH keys for Git operations . Example 1.1. A URL for starting a new workspace https:// <openshift_dev_spaces_fqdn> #https://github.com/che-samples/cpp-hello-world https:// <openshift_dev_spaces_fqdn> #git@github.com:che-samples/cpp-hello-world.git Example 1.2. The URL syntax for starting a new workspace with a clone of a GitHub instance repository https:// <openshift_dev_spaces_fqdn> #https:// <github_host> / <user_or_org> / <repository> starts a new workspace with a clone of the default branch.
https:// <openshift_dev_spaces_fqdn> #https:// <github_host> / <user_or_org> / <repository> /tree/ <branch_name> starts a new workspace with a clone of the specified branch. https:// <openshift_dev_spaces_fqdn> #https:// <github_host> / <user_or_org> / <repository> /pull/ <pull_request_id> starts a new workspace with a clone of the branch of the pull request. https:// <openshift_dev_spaces_fqdn> #git@ <github_host> : <user_or_org> / <repository> .git starts a new workspace from a Git+SSH URL. Example 1.3. The URL syntax for starting a new workspace with a clone of a GitLab instance repository https:// <openshift_dev_spaces_fqdn> #https:// <gitlab_host> / <user_or_org> / <repository> starts a new workspace with a clone of the default branch. https:// <openshift_dev_spaces_fqdn> #https:// <gitlab_host> / <user_or_org> / <repository> /-/tree/ <branch_name> starts a new workspace with a clone of the specified branch. https:// <openshift_dev_spaces_fqdn> #git@ <gitlab_host> : <user_or_org> / <repository> .git starts a new workspace from a Git+SSH URL. Example 1.4. The URL syntax for starting a new workspace with a clone of a BitBucket Server repository https:// <openshift_dev_spaces_fqdn> #https:// <bb_host> /scm/ <project-key> / <repository> .git starts a new workspace with a clone of the default branch. https:// <openshift_dev_spaces_fqdn> #https:// <bb_host> /users/ <user_slug> /repos/ <repository> / starts a new workspace with a clone of the default branch, if a repository was created under the user profile. https:// <openshift_dev_spaces_fqdn> #https:// <bb_host> /users/ <user-slug> /repos/ <repository> /browse?at=refs%2Fheads%2F <branch-name> starts a new workspace with a clone of the specified branch. https:// <openshift_dev_spaces_fqdn> #git@ <bb_host> : <user_slug> / <repository> .git starts a new workspace from a Git+SSH URL. Example 1.5. The URL syntax for starting a new workspace with a clone of a Microsoft Azure DevOps Git repository https:// <openshift_dev_spaces_fqdn> #https:// <organization> @dev.azure.com/ <organization> / <project> /_git/ <repository> starts a new workspace with a clone of the default branch. https:// <openshift_dev_spaces_fqdn> #https:// <organization> @dev.azure.com/ <organization> / <project> /_git/ <repository> ?version=GB <branch> starts a new workspace with a clone of the specified branch. https:// <openshift_dev_spaces_fqdn> #git@ssh.dev.azure.com:v3/ <organization> / <project> / <repository> starts a new workspace from a Git+SSH URL. After you enter the URL to start a new workspace in a browser tab, the workspace starting page appears. When the new workspace is ready, the workspace IDE loads in the browser tab. A clone of the Git repository is present in the filesystem of the new workspace. The workspace has a unique URL: https:// <openshift_dev_spaces_fqdn> / <user_name> / <unique_url> . Additional resources Section 1.1.1, "Optional parameters for the URLs for starting a new workspace" Section 1.3, "Basic actions you can perform on a workspace" Section 6.1.2, "Using a Git-provider access token" Section 6.2.1, "Mounting Git configuration" Configuring DevWorkspaces to use SSH keys for Git operations 1.1.1. Optional parameters for the URLs for starting a new workspace When you start a new workspace, OpenShift Dev Spaces configures the workspace according to the instructions in the devfile. When you use a URL to start a new workspace, you can append optional parameters to the URL that further configure the workspace.
You can use these parameters to specify a workspace IDE, start duplicate workspaces, and specify a devfile file name or path. Section 1.1.1.1, "URL parameter concatenation" Section 1.1.1.2, "URL parameter for the IDE" Section 1.1.1.3, "URL parameter for the IDE image" Section 1.1.1.4, "URL parameter for starting duplicate workspaces" Section 1.1.1.5, "URL parameter for the devfile file name" Section 1.1.1.6, "URL parameter for the devfile file path" Section 1.1.1.7, "URL parameter for the workspace storage" Section 1.1.1.8, "URL parameter for additional remotes" Section 1.1.1.9, "URL parameter for a container image" 1.1.1.1. URL parameter concatenation The URL for starting a new workspace supports concatenation of multiple optional URL parameters by using & with the following URL syntax: https:// <openshift_dev_spaces_fqdn> # <git_repository_url> ? <url_parameter_1> & <url_parameter_2> & <url_parameter_3> Example 1.6. A URL for starting a new workspace with the URL of a Git repository and optional URL parameters The complete URL for the browser: https:// <openshift_dev_spaces_fqdn> #https://github.com/che-samples/cpp-hello-world?new&che-editor=che-incubator/intellij-community/latest&devfilePath=tests/testdevfile.yaml Explanation of the parts of the URL: 1 OpenShift Dev Spaces URL. 2 The URL of the Git repository to be cloned into the new workspace. 3 The concatenated optional URL parameters. 1.1.1.2. URL parameter for the IDE You can use the che-editor= URL parameter to specify a supported IDE when starting a workspace. Tip Use the che-editor= parameter when you cannot add or edit a /.che/che-editor.yaml file in the source-code Git repository to be cloned for workspaces. Note The che-editor= parameter overrides the /.che/che-editor.yaml file. This parameter accepts two types of values: che-editor= <editor_key> Table 1.1. The URL parameter <editor_key> values for supported IDEs IDE <editor_key> value Note Microsoft Visual Studio Code - Open Source che-incubator/che-code/latest This is the default IDE that loads in a new workspace when the URL parameter or che-editor.yaml is not used. JetBrains IntelliJ IDEA Community Edition che-incubator/che-idea/latest Technology Preview . Use the Dashboard to select this IDE. che-editor= <url_to_a_file> 1 URL to a file with devfile content . Tip The URL must point to the raw file content. To use this parameter with a che-editor.yaml file, copy the file with another name or path, and remove the line with inline from the file. The che-editors.yaml file features the devfiles of all supported IDEs. 1.1.1.3. URL parameter for the IDE image You can use the editor-image parameter to set the custom IDE image for the workspace. Important If the Git repository contains /.che/che-editor.yaml file, the custom editor will be overridden with the new IDE image. If there is no /.che/che-editor.yaml file in the Git repository, the default editor will be overridden with the new IDE image. If you want to override the supported IDE and change the target editor image, you can use both parameters together: che-editor and editor-image URL parameters. The URL parameter to override the IDE image is editor-image= : Example: https:// <openshift_dev_spaces_fqdn> #https://github.com/eclipse-che/che-docs?editor-image=quay.io/che-incubator/che-code: or https:// <openshift_dev_spaces_fqdn> #https://github.com/eclipse-che/che-docs?che-editor=che-incubator/che-code/latest&editor-image=quay.io/che-incubator/che-code: 1.1.1.4. 
URL parameter for starting duplicate workspaces Visiting a URL for starting a new workspace results in a new workspace according to the devfile and with a clone of the linked Git repository. In some situations, you might need to have multiple workspaces that are duplicates in terms of the devfile and the linked Git repository. You can do this by visiting the same URL for starting a new workspace with a URL parameter. The URL parameter for starting a duplicate workspace is new : Note If you currently have a workspace that you started using a URL, then visiting the URL again without the new URL parameter results in an error message. 1.1.1.5. URL parameter for the devfile file name When you visit a URL for starting a new workspace, OpenShift Dev Spaces searches the linked Git repository for a devfile with the file name .devfile.yaml or devfile.yaml . The devfile in the linked Git repository must follow this file-naming convention. In some situations, you might need to specify a different, unconventional file name for the devfile. The URL parameter for specifying an unconventional file name of the devfile is df= <filename> .yaml : 1 <filename> .yaml is an unconventional file name of the devfile in the linked Git repository. Tip The df= <filename> .yaml parameter also has a long version: devfilePath= <filename> .yaml . 1.1.1.6. URL parameter for the devfile file path When you visit a URL for starting a new workspace, OpenShift Dev Spaces searches the root directory of the linked Git repository for a devfile with the file name .devfile.yaml or devfile.yaml . The file path of the devfile in the linked Git repository must follow this path convention. In some situations, you might need to specify a different, unconventional file path for the devfile in the linked Git repository. The URL parameter for specifying an unconventional file path of the devfile is devfilePath= <relative_file_path> : 1 <relative_file_path> is an unconventional file path of the devfile in the linked Git repository. 1.1.1.7. URL parameter for the workspace storage If the URL for starting a new workspace does not contain a URL parameter specifying the storage type, the new workspace is created in ephemeral or persistent storage, whichever is defined as the default storage type in the CheCluster Custom Resource. The URL parameter for specifying a storage type for a workspace is storageType= <storage_type> : 1 Possible <storage_type> values: ephemeral per-user (persistent) per-workspace (persistent) Tip With the ephemeral or per-workspace storage type, you can run multiple workspaces concurrently, which is not possible with the default per-user storage type. Additional resources Chapter 7, Requesting persistent storage for workspaces 1.1.1.8. URL parameter for additional remotes When you visit a URL for starting a new workspace, OpenShift Dev Spaces configures the origin remote to be the Git repository that you specified with # after the FQDN URL of your organization's OpenShift Dev Spaces instance. The URL parameter for cloning and configuring additional remotes for the workspace is remotes= : Important If you do not enter the name origin for any of the additional remotes, the remote from <git_repository_url> will be cloned and named origin by default, and its expected branch will be checked out automatically. If you enter the name origin for one of the additional remotes, its default branch will be checked out automatically, but the remote from <git_repository_url> will NOT be cloned for the workspace. 1.1.1.9. 
URL parameter for a container image You can use the image parameter to use a custom reference to a container image in the following scenarios: The Git repository contains no devfile, and you want to start a new workspace with the custom image. The Git repository contains a devfile, and you want to override the first container image listed in the components section of the devfile. The URL parameter for the path to the container image is image= : Example https:// <openshift_dev_spaces_fqdn> #https://github.com/eclipse-che/che-docs?image=quay.io/devfile/universal-developer-image:ubi8-latest 1.2. Starting a workspace from a raw devfile URL With OpenShift Dev Spaces, you can open a devfile URL in your browser to start a new workspace. Tip You can use the Git Repo URL field on the Create Workspace page of your OpenShift Dev Spaces dashboard to enter the URL of a devfile to start a new workspace. Important To initiate a clone of the Git repository in the filesystem of a new workspace, the devfile must contain project info. See https://devfile.io/docs/2.2.0/adding-projects . Prerequisites Your organization has a running instance of OpenShift Dev Spaces. You know the FQDN URL of your organization's OpenShift Dev Spaces instance: https:// <openshift_dev_spaces_fqdn> . Procedure To start a new workspace from a devfile URL: Optional: Visit your OpenShift Dev Spaces dashboard pages to authenticate to your organization's instance of OpenShift Dev Spaces. Visit the URL to start a new workspace from a public repository using the basic syntax: You can pass your personal access token to the URL to access a devfile from private repositories: 1 Your personal access token that you generated on the Git provider's website. This works for GitHub, GitLab, Bitbucket, Microsoft Azure, and other providers that support Personal Access Token. Important Automated Git credential injection does not work in this case. To configure the Git credentials, use the configure personal access token guide. Tip You can extend this URL with optional parameters: 1 See Section 1.1.1, "Optional parameters for the URLs for starting a new workspace" . Example 1.7. A URL for starting a new workspace from a public repository https:// <openshift_dev_spaces_fqdn> #https://raw.githubusercontent.com/che-samples/cpp-hello-world/main/devfile.yaml Example 1.8. A URL for starting a new workspace from a private repository https:// <openshift_dev_spaces_fqdn> #https:// <token> @raw.githubusercontent.com/che-samples/cpp-hello-world/main/devfile.yaml Verification After you enter the URL to start a new workspace in a browser tab, the workspace starting page appears. When the new workspace is ready, the workspace IDE loads in the browser tab. The workspace has a unique URL: https:// <openshift_dev_spaces_fqdn> / <user_name> / <unique_url> . Additional resources Section 1.1.1, "Optional parameters for the URLs for starting a new workspace" Section 1.3, "Basic actions you can perform on a workspace" Section 6.1.2, "Using a Git-provider access token" Section 6.2.1, "Mounting Git configuration" Configuring DevWorkspaces to use SSH keys for Git operations 1.3. Basic actions you can perform on a workspace You manage your workspaces and verify their current states in the Workspaces page ( https:// <openshift_dev_spaces_fqdn> /dashboard/#/workspaces ) of your OpenShift Dev Spaces dashboard. After you start a new workspace, you can perform the following actions on it in the Workspaces page: Table 1.2. 
Basic actions you can perform on a workspace Action GUI steps in the Workspaces page Reopen a running workspace Click Open . Restart a running workspace Go to ... > Restart Workspace . Stop a running workspace Go to ... > Stop Workspace . Start a stopped workspace Click Open . Delete a workspace Go to ... > Delete Workspace . 1.4. Authenticating to a Git server from a workspace In a workspace, you can run Git commands that require user authentication, such as cloning a remote private Git repository or pushing to a remote public or private Git repository. User authentication to a Git server from a workspace is configured by the administrator or, in some cases, by the individual user: Your administrator sets up an OAuth application on GitHub, GitLab, Bitbucket, or Microsoft Azure Repos for your organization's Red Hat OpenShift Dev Spaces instance. As a workaround, some users create and apply their own Kubernetes Secrets for their personal Git-provider access tokens or configure SSH keys for Git operations . Additional resources Administration Guide: Configuring OAuth for Git providers User Guide: Using a Git-provider access token Configuring DevWorkspaces to use SSH keys for Git operations 1.5. Using the fuse-overlayfs storage driver for Podman and Buildah By default, newly created workspaces that do not specify a devfile will use the Universal Developer Image (UDI). The UDI contains common development tools and dependencies used by developers. Podman and Buildah are included in the UDI, allowing developers to build and push container images from their workspace. By default, Podman and Buildah in the UDI are configured to use the vfs storage driver. For more efficient image management, use the fuse-overlayfs storage driver, which supports copy-on-write in rootless environments. You must meet the following requirements to use fuse-overlayfs in a workspace: For OpenShift versions older than 4.15, the administrator has enabled /dev/fuse access on the cluster by following https://access.redhat.com/documentation/en-us/red_hat_openshift_dev_spaces/3.14/html-single/administration_guide/index#administration-guide:configuring-fuse . The workspace has the necessary annotations for using the /dev/fuse device. See Section 1.5.1, "Accessing /dev/fuse" . The storage.conf file in the workspace container has been configured to use fuse-overlayfs. See Section 1.5.2, "Enabling fuse-overlayfs with a ConfigMap" . Additional resources Universal Developer Image 1.5.1. Accessing /dev/fuse You must have access to /dev/fuse to use fuse-overlayfs. This section describes how to make /dev/fuse accessible to workspace containers. Prerequisites For OpenShift versions older than 4.15, the administrator has enabled access to /dev/fuse by following https://access.redhat.com/documentation/en-us/red_hat_openshift_dev_spaces/3.14/html-single/administration_guide/index#administration-guide:configuring-fuse . Determine a workspace to use fuse-overlayfs with. Procedure Use the pod-overrides attribute to add the required annotations defined in https://access.redhat.com/documentation/en-us/red_hat_openshift_dev_spaces/3.14/html-single/administration_guide/index#administration-guide:configuring-fuse to the workspace. The pod-overrides attribute allows merging certain fields in the workspace pod's spec . For OpenShift versions older than 4.15: For OpenShift version 4.15 and later: Verification steps Start the workspace and verify that /dev/fuse is available in the workspace container.
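For example, you can run the following check from a terminal inside the workspace container (this mirrors the stat command listed with the other commands for this chapter):

# Confirm that the /dev/fuse device is present in the workspace container
stat /dev/fuse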
After completing this procedure, follow the steps in Section 1.5.2, "Enabling fuse-overlayfs with a ConfigMap" to use fuse-overlayfs for Podman. 1.5.2. Enabling fuse-overlayfs with a ConfigMap You can define the storage driver for Podman and Buildah in the ~/.config/containers/storage.conf file. Here are the default contents of the /home/user/.config/containers/storage.conf file in the UDI container: storage.conf To use fuse-overlayfs, storage.conf can be set to the following: storage.conf 1 The absolute path to the fuse-overlayfs binary. The /usr/bin/fuse-overlayfs path is the default for the UDI. You can do this manually after starting a workspace. Another option is to build a new image based on the UDI with changes to storage.conf and use the new image for workspaces. Otherwise, you can update the /home/user/.config/containers/storage.conf file for all workspaces in your project by creating a ConfigMap that mounts the updated file. See Section 6.2, "Mounting ConfigMaps" . Prerequisites For OpenShift versions older than 4.15, the administrator has enabled access to /dev/fuse by following https://access.redhat.com/documentation/en-us/red_hat_openshift_dev_spaces/3.14/html-single/administration_guide/index#administration-guide:configuring-fuse . The workspace has the required annotations set by following Section 1.5.1, "Accessing /dev/fuse" . Note Since a ConfigMap mounted by following this guide mounts its data to all workspaces, following this procedure sets the storage driver to fuse-overlayfs for all workspaces. Ensure that your workspaces contain the required annotations to use fuse-overlayfs by following Section 1.5.1, "Accessing /dev/fuse" . Procedure Apply a ConfigMap that mounts a /home/user/.config/containers/storage.conf file in your project. kind: ConfigMap apiVersion: v1 metadata: name: fuse-overlay labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-configmap: 'true' annotations: controller.devfile.io/mount-as: file controller.devfile.io/mount-path: /home/user/.config/containers data: storage.conf: | [storage] driver = "overlay" [storage.options.overlay] mount_program="/usr/bin/fuse-overlayfs" Verification steps Start the workspace containing the required annotations and verify that the storage driver is overlay . Example output:
[ "https:// <openshift_dev_spaces_fqdn> # <git_repository_url>", "https:// <openshift_dev_spaces_fqdn> # <git_repository_url> ? <optional_parameters> 1", "https:// <openshift_dev_spaces_fqdn> 1 #https://github.com/che-samples/cpp-hello-world 2 ?new&che-editor=che-incubator/intellij-community/latest&devfilePath=tests/testdevfile.yaml 3", "https:// <openshift_dev_spaces_fqdn> # <git_repository_url> ?che-editor= <editor_key>", "https:// <openshift_dev_spaces_fqdn> # <git_repository_url> ?che-editor= <url_to_a_file> 1", "https:// <openshift_dev_spaces_fqdn> # <git_repository_url> ?editor-image= <container_registry/image_name:image_tag>", "https:// <openshift_dev_spaces_fqdn> # <git_repository_url> ?new", "https:// <openshift_dev_spaces_fqdn> # <git_repository_url> ?df= <filename> .yaml 1", "https:// <openshift_dev_spaces_fqdn> # <git_repository_url> ?devfilePath= <relative_file_path> 1", "https:// <openshift_dev_spaces_fqdn> # <git_repository_url> ?storageType= <storage_type> 1", "https:// <openshift_dev_spaces_fqdn> # <git_repository_url> ?remotes={{ <name_1> , <url_1> },{ <name_2> , <url_2> },{ <name_3> , <url_3> },...}", "https:// <openshift_dev_spaces_fqdn> # <git_repository_url> ?image= <container_image_url>", "https:// <openshift_dev_spaces_fqdn> # <devfile_url>", "https:// <openshift_dev_spaces_fqdn> # https:// <token> @ <host> / <path_to_devfile> 1", "https:// <openshift_dev_spaces_fqdn> # <devfile_url> ? <optional_parameters> 1", "oc patch devworkspace <DevWorkspace_name> --patch '{\"spec\":{\"template\":{\"attributes\":{\"pod-overrides\":{\"metadata\":{\"annotations\":{\"io.kubernetes.cri-o.Devices\":\"/dev/fuse\",\"io.openshift.podman-fuse\":\"\"}}}}}}}' --type=merge", "oc patch devworkspace <DevWorkspace_name> --patch '{\"spec\":{\"template\":{\"attributes\":{\"pod-overrides\":{\"metadata\":{\"annotations\":{\"io.kubernetes.cri-o.Devices\":\"/dev/fuse\"}}}}}}}' --type=merge", "stat /dev/fuse", "[storage] driver = \"vfs\"", "[storage] driver = \"overlay\" [storage.options.overlay] mount_program=\"/usr/bin/fuse-overlayfs\" 1", "kind: ConfigMap apiVersion: v1 metadata: name: fuse-overlay labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-configmap: 'true' annotations: controller.devfile.io/mount-as: file controller.devfile.io/mount-path: /home/user/.config/containers data: storage.conf: | [storage] driver = \"overlay\" [storage.options.overlay] mount_program=\"/usr/bin/fuse-overlayfs\"", "podman info | grep overlay", "graphDriverName: overlay overlay.mount_program: Executable: /usr/bin/fuse-overlayfs Package: fuse-overlayfs-1.12-1.module+el8.9.0+20326+387084d0.x86_64 fuse-overlayfs: version 1.12 Backing Filesystem: overlayfs" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_dev_spaces/3.14/html/user_guide/getting-started-with-devspaces
Chapter 9. Installing a cluster on Azure into a government region
Chapter 9. Installing a cluster on Azure into a government region In OpenShift Container Platform version 4.16, you can install a cluster on Microsoft Azure into a government region. To configure the government region, you modify parameters in the install-config.yaml file before you install the cluster. 9.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an Azure account to host the cluster and determined the tested and validated government region to deploy the cluster to. If you use a firewall, you configured it to allow the sites that your cluster requires access to. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain long-term credentials . If you use customer-managed encryption keys, you prepared your Azure environment for encryption . 9.2. Azure government regions OpenShift Container Platform supports deploying a cluster to Microsoft Azure Government (MAG) regions. MAG is specifically designed for US government agencies at the federal, state, and local level, as well as contractors, educational institutions, and other US customers that must run sensitive workloads on Azure. MAG is composed of government-only data center regions, all granted an Impact Level 5 Provisional Authorization . Installing to a MAG region requires manually configuring the Azure Government dedicated cloud instance and region in the install-config.yaml file. You must also update your service principal to reference the appropriate government environment. Note The Azure government region cannot be selected using the guided terminal prompts from the installation program. You must define the region manually in the install-config.yaml file. Remember to also set the dedicated cloud instance, like AzureUSGovernmentCloud , based on the region specified. 9.3. Private clusters You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the internet. By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet. Important If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private. To deploy a private cluster, you must: Use existing networking that meets your requirements. Your cluster resources might be shared between other clusters on the network. Deploy from a machine that has access to: The API services for the cloud to which you provision. The hosts on the network that you provision. The internet to obtain installation media. You can use any machine that meets these access requirements and follows your company's guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN. 9.3.1. 
Private clusters in Azure To create a private cluster on Microsoft Azure, you must provide an existing private VNet and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for only internal traffic. Depending on how your network connects to the private VNet, you might need to use a DNS forwarder to resolve the cluster's private DNS records. The cluster's machines use 168.63.129.16 internally for DNS resolution. For more information, see What is Azure Private DNS? and What is IP address 168.63.129.16? in the Azure documentation. The cluster still requires access to the internet to access the Azure APIs. The following items are not required or created when you install a private cluster: A BaseDomainResourceGroup , since the cluster does not create public records Public IP addresses Public DNS records Public endpoints 9.3.1.1. Limitations Private clusters on Azure are subject to only the limitations that are associated with the use of an existing VNet. 9.3.2. User-defined outbound routing In OpenShift Container Platform, you can choose your own outbound routing for a cluster to connect to the internet. This allows you to skip the creation of public IP addresses and the public load balancer. You can configure user-defined routing by modifying parameters in the install-config.yaml file before installing your cluster. A pre-existing VNet is required to use outbound routing when installing a cluster; the installation program is not responsible for configuring this. When configuring a cluster to use user-defined routing, the installation program does not create the following resources: Outbound rules for access to the internet. Public IPs for the public load balancer. Kubernetes Service object to add the cluster machines to the public load balancer for outbound requests. You must ensure the following items are available before setting user-defined routing: Egress to the internet is possible to pull container images, unless using an OpenShift image registry mirror. The cluster can access Azure APIs. Various allowlist endpoints are configured. You can reference these endpoints in the Configuring your firewall section. There are several pre-existing networking setups that are supported for internet access using user-defined routing. 9.4. About reusing a VNet for your OpenShift Container Platform cluster In OpenShift Container Platform 4.16, you can deploy a cluster into an existing Azure Virtual Network (VNet) in Microsoft Azure. If you do, you must also use existing subnets within the VNet and routing rules. By deploying OpenShift Container Platform into an existing Azure VNet, you might be able to avoid service limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. This is a good option to use if you cannot obtain the infrastructure creation permissions that are required to create the VNet. 9.4.1. Requirements for using your VNet When you deploy a cluster by using an existing VNet, you must perform additional network configuration before you install the cluster. In installer-provisioned infrastructure clusters, the installer usually creates the following components, but it does not create them when you install into an existing VNet: Subnets Route tables VNets Network Security Groups Note The installation program requires that you use the cloud-provided DNS server.
Using a custom DNS server is not supported and causes the installation to fail. If you use a custom VNet, you must correctly configure it and its subnets for the installation program and the cluster to use. The installation program cannot subdivide network ranges for the cluster to use, set route tables for the subnets, or set VNet options like DHCP, so you must do so before you install the cluster. The cluster must be able to access the resource group that contains the existing VNet and subnets. While all of the resources that the cluster creates are placed in a separate resource group that it creates, some network resources are used from a separate group. Some cluster Operators must be able to access resources in both resource groups. For example, the Machine API controller attaches NICS for the virtual machines that it creates to subnets from the networking resource group. Your VNet must meet the following characteristics: The VNet's CIDR block must contain the Networking.MachineCIDR range, which is the IP address pool for cluster machines. The VNet and its subnets must belong to the same resource group, and the subnets must be configured to use Azure-assigned DHCP IP addresses instead of static IP addresses. You must provide two subnets within your VNet, one for the control plane machines and one for the compute machines. Because Azure distributes machines in different availability zones within the region that you specify, your cluster will have high availability by default. Note By default, if you specify availability zones in the install-config.yaml file, the installation program distributes the control plane machines and the compute machines across these availability zones within a region . To ensure high availability for your cluster, select a region with at least three availability zones. If your region contains fewer than three availability zones, the installation program places more than one control plane machine in the available zones. To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the specified subnets exist. There are two private subnets, one for the control plane machines and one for the compute machines. The subnet CIDRs belong to the machine CIDR that you specified. Machines are not provisioned in availability zones that you do not provide private subnets for. If required, the installation program creates public load balancers that manage the control plane and worker nodes, and Azure allocates a public IP address to them. Note If you destroy a cluster that uses an existing VNet, the VNet is not deleted. 9.4.1.1. Network security group requirements The network security groups for the subnets that host the compute and control plane machines require specific access to ensure that the cluster communication is correct. You must create rules to allow access to the required cluster communication ports. Important The network security group rules must be in place before you install the cluster. If you attempt to install a cluster without the required access, the installation program cannot reach the Azure APIs, and installation fails. Table 9.1. 
Required ports Port Description Control plane Compute 80 Allows HTTP traffic x 443 Allows HTTPS traffic x 6443 Allows communication to the control plane machines x 22623 Allows internal communication to the machine config server for provisioning machines x If you are using Azure Firewall to restrict the internet access, then you can configure Azure Firewall to allow the Azure APIs . A network security group rule is not needed. Important Currently, there is no supported way to block or restrict the machine config server endpoint. The machine config server must be exposed to the network so that newly-provisioned machines, which have no existing configuration or state, are able to fetch their configuration. In this model, the root of trust is the certificate signing requests (CSR) endpoint, which is where the kubelet sends its certificate signing request for approval to join the cluster. Because of this, machine configs should not be used to distribute sensitive information, such as secrets and certificates. To ensure that the machine config server endpoints, ports 22623 and 22624, are secured in bare metal scenarios, customers must configure proper network policies. Because cluster components do not modify the user-provided network security groups, which the Kubernetes controllers update, a pseudo-network security group is created for the Kubernetes controller to modify without impacting the rest of the environment. Table 9.2. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If you configure an external NTP time server, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 9.3. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports Additional resources About the OpenShift SDN network plugin Configuring your firewall 9.4.2. Division of permissions Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, storage, and load balancers, but not networking-related components such as VNets, subnet, or ingress rules. The Azure credentials that you use when you create your cluster do not need the networking permissions that are required to make VNets and core networking components within the VNet, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as load balancers, security groups, storage accounts, and nodes. 9.4.3. 
Isolation between clusters Because the cluster is unable to modify network security groups in an existing subnet, there is no way to isolate clusters from each other on the VNet. 9.5. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.16, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 9.6. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. 
SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 9.7. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 9.8. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. 
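Before you begin the following procedure, you can optionally confirm that these prerequisites are in place on the installation host. This is a minimal sketch; the openshift-install binary location, the pull-secret.txt file name, and the SSH key path are assumptions based on where you extracted the installer, saved the pull secret, and generated your key.
# Confirm that the extracted installer runs and print its version
./openshift-install version
# Confirm that the downloaded pull secret file is present and not empty (file name is an assumption)
test -s pull-secret.txt && echo "pull secret found"
# Confirm that the SSH public key that you plan to provide exists (path is an assumption)
cat ~/.ssh/id_ed25519.pub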
Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. Additional resources Installation configuration parameters for Azure 9.8.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 9.4. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). Important You are required to use Azure virtual machines that have the premiumIO parameter set to true . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 9.8.2. Tested instance types for Azure The following Microsoft Azure instance types have been tested with OpenShift Container Platform. Example 9.1. 
Machine types based on 64-bit x86 architecture standardBSFamily standardBsv2Family standardDADSv5Family standardDASv4Family standardDASv5Family standardDCACCV5Family standardDCADCCV5Family standardDCADSv5Family standardDCASv5Family standardDCSv3Family standardDCSv2Family standardDDCSv3Family standardDDSv4Family standardDDSv5Family standardDLDSv5Family standardDLSv5Family standardDSFamily standardDSv2Family standardDSv2PromoFamily standardDSv3Family standardDSv4Family standardDSv5Family standardEADSv5Family standardEASv4Family standardEASv5Family standardEBDSv5Family standardEBSv5Family standardECACCV5Family standardECADCCV5Family standardECADSv5Family standardECASv5Family standardEDSv4Family standardEDSv5Family standardEIADSv5Family standardEIASv4Family standardEIASv5Family standardEIBDSv5Family standardEIBSv5Family standardEIDSv5Family standardEISv3Family standardEISv5Family standardESv3Family standardESv4Family standardESv5Family standardFXMDVSFamily standardFSFamily standardFSv2Family standardGSFamily standardHBrsv2Family standardHBSFamily standardHBv4Family standardHCSFamily standardHXFamily standardLASv3Family standardLSFamily standardLSv2Family standardLSv3Family standardMDSMediumMemoryv2Family standardMDSMediumMemoryv3Family standardMIDSMediumMemoryv2Family standardMISMediumMemoryv2Family standardMSFamily standardMSMediumMemoryv2Family standardMSMediumMemoryv3Family StandardNCADSA100v4Family Standard NCASv3_T4 Family standardNCSv3Family standardNDSv2Family standardNPSFamily StandardNVADSA10v5Family standardNVSv3Family standardXEISv4Family 9.8.3. Enabling trusted launch for Azure VMs You can enable two trusted launch features when installing your cluster on Azure: secure boot and virtualized Trusted Platform Modules . See the Azure documentation about virtual machine sizes to learn what sizes of virtual machines support these features. Important Trusted launch is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Prerequisites You have created an install-config.yaml file. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add the following stanza: controlPlane: 1 platform: azure: settings: securityType: TrustedLaunch 2 trustedLaunch: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 1 Specify controlPlane.platform.azure or compute.platform.azure to enable trusted launch on only control plane or compute nodes respectively. Specify platform.azure.defaultMachinePlatform to enable trusted launch on all nodes. 2 Enable trusted launch features. 3 Enable secure boot. For more information, see the Azure documentation about secure boot . 4 Enable the virtualized Trusted Platform Module. For more information, see the Azure documentation about virtualized Trusted Platform Modules . 9.8.4. Enabling confidential VMs You can enable confidential VMs when installing your cluster. You can enable confidential VMs for compute nodes, control plane nodes, or all nodes. Important Using confidential VMs is a Technology Preview feature only. 
Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can use confidential VMs with the following VM sizes: DCasv5-series DCadsv5-series ECasv5-series ECadsv5-series Important Confidential VMs are currently not supported on 64-bit ARM architectures. Prerequisites You have created an install-config.yaml file. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add the following stanza: controlPlane: 1 platform: azure: settings: securityType: ConfidentialVM 2 confidentialVM: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 osDisk: securityProfile: securityEncryptionType: VMGuestStateOnly 5 1 Specify controlPlane.platform.azure or compute.platform.azure to deploy confidential VMs on only control plane or compute nodes respectively. Specify platform.azure.defaultMachinePlatform to deploy confidential VMs on all nodes. 2 Enable confidential VMs. 3 Enable secure boot. For more information, see the Azure documentation about secure boot . 4 Enable the virtualized Trusted Platform Module. For more information, see the Azure documentation about virtualized Trusted Platform Modules . 5 Specify VMGuestStateOnly to encrypt the VM guest state. 9.8.5. Sample customized install-config.yaml file for Azure You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. 
apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version zones: 9 - "1" - "2" - "3" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: osImage: 12 publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 13 region: usgovvirginia resourceGroupName: existing_resource_group 14 networkResourceGroupName: vnet_resource_group 15 virtualNetwork: vnet 16 controlPlaneSubnet: control_plane_subnet 17 computeSubnet: compute_subnet 18 outboundType: UserDefinedRouting 19 cloudName: AzureUSGovernmentCloud 20 pullSecret: '{"auths": ...}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 23 publish: Internal 24 1 10 21 Required. 2 6 If you do not provide these parameters and values, the installation program provides the default value. 3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3 , for your machines if you disable simultaneous multithreading. 5 8 You can specify the size of the disk to use in GB. Minimum recommendation for control plane nodes is 1024 GB. 9 Specify a list of zones to deploy your machines to. For high availability, specify at least two zones. 11 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 12 Optional: A custom Red Hat Enterprise Linux CoreOS (RHCOS) image that should be used to boot control plane and compute machines. The publisher , offer , sku , and version parameters under platform.azure.defaultMachinePlatform.osImage apply to both control plane and compute machines. 
If the parameters under controlPlane.platform.azure.osImage or compute.platform.azure.osImage are set, they override the platform.azure.defaultMachinePlatform.osImage parameters. 13 Specify the name of the resource group that contains the DNS zone for your base domain. 14 Specify the name of an already existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster. 15 If you use an existing VNet, specify the name of the resource group that contains it. 16 If you use an existing VNet, specify its name. 17 If you use an existing VNet, specify the name of the subnet to host the control plane machines. 18 If you use an existing VNet, specify the name of the subnet to host the compute machines. 19 You can customize your own outbound routing. Configuring user-defined routing prevents exposing external endpoints in your cluster. User-defined routing for egress requires deploying your cluster to an existing VNet. 20 Specify the name of the Azure cloud environment to deploy your cluster to. Set AzureUSGovernmentCloud to deploy to a Microsoft Azure Government (MAG) region. The default value is AzurePublicCloud . 22 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 23 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 24 How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the internet. The default value is External . 9.8.6. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. 
For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. Additional resources For more details about Accelerated Networking, see Accelerated Networking for Microsoft Azure VMs . 9.9. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have an Azure subscription ID and tenant ID. If you are installing the cluster using a service principal, you have its application ID and password. 
If you are installing the cluster using a system-assigned managed identity, you have enabled it on the virtual machine that you will run the installation program from. If you are installing the cluster using a user-assigned managed identity, you have met these prerequisites: You have its client ID. You have assigned it to the virtual machine that you will run the installation program from. Procedure Optional: If you have run the installation program on this computer before, and want to use an alternative service principal or managed identity, go to the ~/.azure/ directory and delete the osServicePrincipal.json configuration file. Deleting this file prevents the installation program from automatically reusing subscription and authentication values from a previous installation. Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . If the installation program cannot locate the osServicePrincipal.json configuration file from a previous installation, you are prompted for Azure subscription and authentication values. Enter the following Azure parameter values for your subscription: azure subscription id : Enter the subscription ID to use for the cluster. azure tenant id : Enter the tenant ID. Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client id : If you are using a service principal, enter its application ID. If you are using a system-assigned managed identity, leave this value blank. If you are using a user-assigned managed identity, specify its client ID. Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client secret : If you are using a service principal, enter its password. If you are using a system-assigned managed identity, leave this value blank. If you are using a user-assigned managed identity, leave this value blank. If it was not previously detected, the installation program creates an osServicePrincipal.json configuration file and stores this file in the ~/.azure/ directory on your computer. This ensures that the installation program can load the profile when it is creating an OpenShift Container Platform cluster on the target platform. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time.
If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 9.10. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.16. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.16 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 9.11. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. 
The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify that you can run oc commands successfully by using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 9.12. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.16, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 9.13. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting .
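Before working through these next steps, it can be useful to run a quick post-installation health check from the host where you exported the kubeconfig file. This is a minimal sketch; the <installation_directory> placeholder is the same directory that you used with the installation program, and the oc subcommands shown are standard OpenShift CLI commands.
# Use the kubeconfig that the installation program generated
export KUBECONFIG=<installation_directory>/auth/kubeconfig
# All nodes should report a Ready status
oc get nodes
# All cluster Operators should eventually report Available=True and Degraded=False
oc get clusteroperators
# The overall cluster version and update status
oc get clusterversion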
[ "The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "mkdir <installation_directory>", "controlPlane: 1 platform: azure: settings: securityType: TrustedLaunch 2 trustedLaunch: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4", "controlPlane: 1 platform: azure: settings: securityType: ConfidentialVM 2 confidentialVM: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 osDisk: securityProfile: securityEncryptionType: VMGuestStateOnly 5", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version zones: 9 - \"1\" - \"2\" - \"3\" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: osImage: 12 publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 13 region: usgovvirginia resourceGroupName: existing_resource_group 14 networkResourceGroupName: vnet_resource_group 15 virtualNetwork: vnet 16 controlPlaneSubnet: control_plane_subnet 17 computeSubnet: compute_subnet 18 outboundType: UserDefinedRouting 19 cloudName: AzureUSGovernmentCloud 20 pullSecret: '{\"auths\": ...}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 23 publish: Internal 24", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_on_azure/installing-azure-government-region
Automation controller user guide
Automation controller user guide Red Hat Ansible Automation Platform 2.4 User Guide for Automation Controller Red Hat Customer Content Services
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/automation_controller_user_guide/index
probe::ipmib.InNoRoutes
probe::ipmib.InNoRoutes Name probe::ipmib.InNoRoutes - Count an arriving packet with no matching socket Synopsis ipmib.InNoRoutes Values op value to be added to the counter (default value of 1) skb pointer to the struct sk_buff being acted on Description The packet pointed to by skb is filtered by the function ipmib_filter_key . If the packet passes the filter, it is counted in the global InNoRoutes (equivalent to SNMP's MIB IPSTATS_MIB_INNOROUTES).
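As a usage illustration, the probe can be attached from the command line with the stap tool. The following one-liner is a minimal sketch that prints the value added to the counter for each packet that passes the filter; the op value comes from the probe as described above.
# Print the increment applied to InNoRoutes for each counted packet
stap -v -e 'probe ipmib.InNoRoutes { printf("InNoRoutes += %d\n", op) }'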
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-ipmib-innoroutes
Chapter 20. Policy APIs
Chapter 20. Policy APIs 20.1. Policy APIs 20.1.1. Eviction [policy/v1] Description Eviction evicts a pod from its node subject to certain policies and safety constraints. This is a subresource of Pod. A request to cause such an eviction is created by POSTing to ... /pods/<pod name>/evictions. Type object 20.1.2. PodDisruptionBudget [policy/v1] Description PodDisruptionBudget is an object to define the max disruption that can be caused to a collection of pods Type object 20.2. Eviction [policy/v1] Description Eviction evicts a pod from its node subject to certain policies and safety constraints. This is a subresource of Pod. A request to cause such an eviction is created by POSTing to ... /pods/<pod name>/evictions. Type object 20.2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources deleteOptions DeleteOptions DeleteOptions may be provided kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta ObjectMeta describes the pod that is being evicted. 20.2.2. API endpoints The following API endpoints are available: /api/v1/namespaces/{namespace}/pods/{name}/eviction POST : create eviction of a Pod 20.2.2.1. /api/v1/namespaces/{namespace}/pods/{name}/eviction Table 20.1. Global path parameters Parameter Type Description name string name of the Eviction namespace string object name and auth scope, such as for teams and projects Table 20.2. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. pretty string If 'true', then the output is pretty printed. 
HTTP method POST Description create eviction of a Pod Table 20.3. Body parameters Parameter Type Description body Eviction schema Table 20.4. HTTP responses HTTP code Reponse body 200 - OK Eviction schema 201 - Created Eviction schema 202 - Accepted Eviction schema 401 - Unauthorized Empty 20.3. PodDisruptionBudget [policy/v1] Description PodDisruptionBudget is an object to define the max disruption that can be caused to a collection of pods Type object 20.3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object PodDisruptionBudgetSpec is a description of a PodDisruptionBudget. status object PodDisruptionBudgetStatus represents information about the status of a PodDisruptionBudget. Status may trail the actual state of a system. 20.3.1.1. .spec Description PodDisruptionBudgetSpec is a description of a PodDisruptionBudget. Type object Property Type Description maxUnavailable IntOrString An eviction is allowed if at most "maxUnavailable" pods selected by "selector" are unavailable after the eviction, i.e. even in absence of the evicted pod. For example, one can prevent all voluntary evictions by specifying 0. This is a mutually exclusive setting with "minAvailable". minAvailable IntOrString An eviction is allowed if at least "minAvailable" pods selected by "selector" will still be available after the eviction, i.e. even in the absence of the evicted pod. So for example you can prevent all voluntary evictions by specifying "100%". selector LabelSelector Label query over pods whose evictions are managed by the disruption budget. A null selector will match no pods, while an empty ({}) selector will select all pods within the namespace. unhealthyPodEvictionPolicy string UnhealthyPodEvictionPolicy defines the criteria for when unhealthy pods should be considered for eviction. Current implementation considers healthy pods, as pods that have status.conditions item with type="Ready",status="True". Valid policies are IfHealthyBudget and AlwaysAllow. If no policy is specified, the default behavior will be used, which corresponds to the IfHealthyBudget policy. IfHealthyBudget policy means that running pods (status.phase="Running"), but not yet healthy can be evicted only if the guarded application is not disrupted (status.currentHealthy is at least equal to status.desiredHealthy). Healthy pods will be subject to the PDB for eviction. AlwaysAllow policy means that all running pods (status.phase="Running"), but not yet healthy are considered disrupted and can be evicted regardless of whether the criteria in a PDB is met. This means perspective running pods of a disrupted application might not get a chance to become healthy. Healthy pods will be subject to the PDB for eviction. Additional policies may be added in the future. 
Clients making eviction decisions should disallow eviction of unhealthy pods if they encounter an unrecognized policy in this field. This field is beta-level. The eviction API uses this field when the feature gate PDBUnhealthyPodEvictionPolicy is enabled (enabled by default). Possible enum values: - "AlwaysAllow" policy means that all running pods (status.phase="Running"), but not yet healthy are considered disrupted and can be evicted regardless of whether the criteria in a PDB is met. This means perspective running pods of a disrupted application might not get a chance to become healthy. Healthy pods will be subject to the PDB for eviction. - "IfHealthyBudget" policy means that running pods (status.phase="Running"), but not yet healthy can be evicted only if the guarded application is not disrupted (status.currentHealthy is at least equal to status.desiredHealthy). Healthy pods will be subject to the PDB for eviction. 20.3.1.2. .status Description PodDisruptionBudgetStatus represents information about the status of a PodDisruptionBudget. Status may trail the actual state of a system. Type object Required disruptionsAllowed currentHealthy desiredHealthy expectedPods Property Type Description conditions array (Condition) Conditions contain conditions for PDB. The disruption controller sets the DisruptionAllowed condition. The following are known values for the reason field (additional reasons could be added in the future): - SyncFailed: The controller encountered an error and wasn't able to compute the number of allowed disruptions. Therefore no disruptions are allowed and the status of the condition will be False. - InsufficientPods: The number of pods are either at or below the number required by the PodDisruptionBudget. No disruptions are allowed and the status of the condition will be False. - SufficientPods: There are more pods than required by the PodDisruptionBudget. The condition will be True, and the number of allowed disruptions are provided by the disruptionsAllowed property. currentHealthy integer current number of healthy pods desiredHealthy integer minimum desired number of healthy pods disruptedPods object (Time) DisruptedPods contains information about pods whose eviction was processed by the API server eviction subresource handler but has not yet been observed by the PodDisruptionBudget controller. A pod will be in this map from the time when the API server processed the eviction request to the time when the pod is seen by PDB controller as having been marked for deletion (or after a timeout). The key in the map is the name of the pod and the value is the time when the API server processed the eviction request. If the deletion didn't occur and a pod is still there it will be removed from the list automatically by PodDisruptionBudget controller after some time. If everything goes smooth this map should be empty for the most of the time. Large number of entries in the map may indicate problems with pod deletions. disruptionsAllowed integer Number of pod disruptions that are currently allowed. expectedPods integer total number of pods counted by this disruption budget observedGeneration integer Most recent generation observed when updating this PDB status. DisruptionsAllowed and other status information is valid only if observedGeneration equals to PDB's object generation. 20.3.2. 
API endpoints The following API endpoints are available: /apis/policy/v1/poddisruptionbudgets GET : list or watch objects of kind PodDisruptionBudget /apis/policy/v1/watch/poddisruptionbudgets GET : watch individual changes to a list of PodDisruptionBudget. deprecated: use the 'watch' parameter with a list operation instead. /apis/policy/v1/namespaces/{namespace}/poddisruptionbudgets DELETE : delete collection of PodDisruptionBudget GET : list or watch objects of kind PodDisruptionBudget POST : create a PodDisruptionBudget /apis/policy/v1/watch/namespaces/{namespace}/poddisruptionbudgets GET : watch individual changes to a list of PodDisruptionBudget. deprecated: use the 'watch' parameter with a list operation instead. /apis/policy/v1/namespaces/{namespace}/poddisruptionbudgets/{name} DELETE : delete a PodDisruptionBudget GET : read the specified PodDisruptionBudget PATCH : partially update the specified PodDisruptionBudget PUT : replace the specified PodDisruptionBudget /apis/policy/v1/watch/namespaces/{namespace}/poddisruptionbudgets/{name} GET : watch changes to an object of kind PodDisruptionBudget. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/policy/v1/namespaces/{namespace}/poddisruptionbudgets/{name}/status GET : read status of the specified PodDisruptionBudget PATCH : partially update status of the specified PodDisruptionBudget PUT : replace status of the specified PodDisruptionBudget 20.3.2.1. /apis/policy/v1/poddisruptionbudgets Table 20.5. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. 
If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. 
watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list or watch objects of kind PodDisruptionBudget Table 20.6. HTTP responses HTTP code Reponse body 200 - OK PodDisruptionBudgetList schema 401 - Unauthorized Empty 20.3.2.2. /apis/policy/v1/watch/poddisruptionbudgets Table 20.7. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. 
If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of PodDisruptionBudget. deprecated: use the 'watch' parameter with a list operation instead. Table 20.8. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 20.3.2.3. /apis/policy/v1/namespaces/{namespace}/poddisruptionbudgets Table 20.9. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 20.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of PodDisruptionBudget Table 20.11. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. 
The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 20.12. Body parameters Parameter Type Description body DeleteOptions schema Table 20.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind PodDisruptionBudget Table 20.14. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. 
Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 20.15. HTTP responses HTTP code Reponse body 200 - OK PodDisruptionBudgetList schema 401 - Unauthorized Empty HTTP method POST Description create a PodDisruptionBudget Table 20.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 20.17. Body parameters Parameter Type Description body PodDisruptionBudget schema Table 20.18. HTTP responses HTTP code Reponse body 200 - OK PodDisruptionBudget schema 201 - Created PodDisruptionBudget schema 202 - Accepted PodDisruptionBudget schema 401 - Unauthorized Empty 20.3.2.4. /apis/policy/v1/watch/namespaces/{namespace}/poddisruptionbudgets Table 20.19. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 20.20. 
Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of PodDisruptionBudget. deprecated: use the 'watch' parameter with a list operation instead. Table 20.21. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 20.3.2.5. /apis/policy/v1/namespaces/{namespace}/poddisruptionbudgets/{name} Table 20.22. Global path parameters Parameter Type Description name string name of the PodDisruptionBudget namespace string object name and auth scope, such as for teams and projects Table 20.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a PodDisruptionBudget Table 20.24. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. 
If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 20.25. Body parameters Parameter Type Description body DeleteOptions schema Table 20.26. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified PodDisruptionBudget Table 20.27. HTTP responses HTTP code Reponse body 200 - OK PodDisruptionBudget schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified PodDisruptionBudget Table 20.28. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 20.29. Body parameters Parameter Type Description body Patch schema Table 20.30. HTTP responses HTTP code Reponse body 200 - OK PodDisruptionBudget schema 201 - Created PodDisruptionBudget schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified PodDisruptionBudget Table 20.31. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 20.32. Body parameters Parameter Type Description body PodDisruptionBudget schema Table 20.33. HTTP responses HTTP code Reponse body 200 - OK PodDisruptionBudget schema 201 - Created PodDisruptionBudget schema 401 - Unauthorized Empty 20.3.2.6. /apis/policy/v1/watch/namespaces/{namespace}/poddisruptionbudgets/{name} Table 20.34. Global path parameters Parameter Type Description name string name of the PodDisruptionBudget namespace string object name and auth scope, such as for teams and projects Table 20.35. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. 
Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. 
- `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind PodDisruptionBudget. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 20.36. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 20.3.2.7. /apis/policy/v1/namespaces/{namespace}/poddisruptionbudgets/{name}/status Table 20.37. Global path parameters Parameter Type Description name string name of the PodDisruptionBudget namespace string object name and auth scope, such as for teams and projects Table 20.38. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified PodDisruptionBudget Table 20.39. HTTP responses HTTP code Reponse body 200 - OK PodDisruptionBudget schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified PodDisruptionBudget Table 20.40. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 20.41. Body parameters Parameter Type Description body Patch schema Table 20.42. 
HTTP responses HTTP code Reponse body 200 - OK PodDisruptionBudget schema 201 - Created PodDisruptionBudget schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified PodDisruptionBudget Table 20.43. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 20.44. Body parameters Parameter Type Description body PodDisruptionBudget schema Table 20.45. HTTP responses HTTP code Reponse body 200 - OK PodDisruptionBudget schema 201 - Created PodDisruptionBudget schema 401 - Unauthorized Empty
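The endpoints listed above can be exercised directly with standard OpenShift client tooling. The following is a minimal sketch, assuming you are logged in with sufficient permissions; the namespace example-app and the PodDisruptionBudget name example-pdb are hypothetical placeholders, not values from this reference.

# List PodDisruptionBudgets in a namespace through the raw API path
oc get --raw /apis/policy/v1/namespaces/example-app/poddisruptionbudgets

# Read the status subresource of a single PodDisruptionBudget
oc get --raw /apis/policy/v1/namespaces/example-app/poddisruptionbudgets/example-pdb/status

# Partially update the specified PodDisruptionBudget (PATCH) with a merge patch
oc patch poddisruptionbudget example-pdb -n example-app --type merge -p '{"spec":{"minAvailable":1}}'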
https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html/api_reference/policy-apis-1
Chapter 6. Customizing the Learning Paths in Red Hat Developer Hub
In Red Hat Developer Hub, you can configure Learning Paths by passing the data into the app-config.yaml file as a proxy. The base URL must include the /developer-hub/learning-paths proxy. Note Due to the use of overlapping pathRewrites for both the learning-path and homepage quick access proxies, you must create the learning-paths configuration ( ^/api/proxy/developer-hub/learning-paths ) before you create the homepage configuration ( ^/api/proxy/developer-hub ). For more information about customizing the Home page in Red Hat Developer Hub, see Customizing the Home page in Red Hat Developer Hub. You can provide data to the Learning Path from the following sources: JSON files hosted on GitHub or GitLab. A dedicated service that provides the Learning Path data in JSON format using an API. 6.1. Using hosted JSON files to provide data to the Learning Paths Prerequisites You have installed Red Hat Developer Hub by using either the Operator or Helm chart. For more information, see Installing Red Hat Developer Hub on OpenShift Container Platform. Procedure To access the data from the JSON files, complete the following step: Add the following code to the app-config.yaml file:

proxy:
  endpoints:
    '/developer-hub':
      target: https://raw.githubusercontent.com/
      pathRewrite:
        '^/api/proxy/developer-hub/learning-paths': '/janus-idp/backstage-showcase/main/packages/app/public/learning-paths/data.json'
        '^/api/proxy/developer-hub/tech-radar': '/janus-idp/backstage-showcase/main/packages/app/public/tech-radar/data-default.json'
        '^/api/proxy/developer-hub': '/janus-idp/backstage-showcase/main/packages/app/public/homepage/data.json'
      changeOrigin: true
      secure: true

6.2. Using a dedicated service to provide data to the Learning Paths When using a dedicated service, you can do the following: Use the same service to provide the data to all configurable Developer Hub pages or use a different service for each page. Use the red-hat-developer-hub-customization-provider as an example service, which provides data for both the Home and Tech Radar pages. The red-hat-developer-hub-customization-provider service provides the same data as the default Developer Hub data. You can fork the red-hat-developer-hub-customization-provider service repository from GitHub and modify it with your own data, if required. Deploy the red-hat-developer-hub-customization-provider service and the Developer Hub Helm chart on the same cluster. Prerequisites You have installed Red Hat Developer Hub by using the Helm chart. For more information, see Installing Red Hat Developer Hub on OpenShift Container Platform. Procedure To use a dedicated service to provide the Learning Path data, complete the following steps: Add the following code to the app-config-rhdh.yaml file:

proxy:
  endpoints:
    # Other Proxies
    '/developer-hub/learning-paths':
      target: ${LEARNING_PATH_DATA_URL}
      changeOrigin: true
      # Change to "false" in case of using a self-hosted cluster with a self-signed certificate
      secure: true

where the LEARNING_PATH_DATA_URL is defined as http://<SERVICE_NAME>/learning-paths, for example, http://rhdh-customization-provider/learning-paths. Note You can define the LEARNING_PATH_DATA_URL by adding it to rhdh-secrets or by directly replacing it with its value in your custom ConfigMap; a sketch of such a Secret follows the command listing below. Delete the Developer Hub pod to ensure that the new configurations are loaded correctly.
[ "proxy: endpoints: '/developer-hub': target: https://raw.githubusercontent.com/ pathRewrite: '^/api/proxy/developer-hub/learning-paths': '/janus-idp/backstage-showcase/main/packages/app/public/learning-paths/data.json' '^/api/proxy/developer-hub/tech-radar': '/janus-idp/backstage-showcase/main/packages/app/public/tech-radar/data-default.json' '^/api/proxy/developer-hub': '/janus-idp/backstage-showcase/main/packages/app/public/homepage/data.json' changeOrigin: true secure: true", "proxy: endpoints: # Other Proxies '/developer-hub/learning-paths': target: USD{LEARNING_PATH_DATA_URL} changeOrigin: true # Change to \"false\" in case of using self hosted cluster with a self-signed certificate secure: true" ]
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.2/html/getting_started_with_red_hat_developer_hub/proc-customize-rhdh-learning-paths_rhdh-getting-started
Chapter 5. Scaling storage of VMware OpenShift Data Foundation cluster
5.1. Scaling up storage on a VMware cluster To increase the storage capacity in a dynamically created storage cluster on a VMware user-provisioned infrastructure, you can add storage capacity and performance to your configured Red Hat OpenShift Data Foundation worker nodes. Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. The disk should be of the same size and type as used during initial deployment. Procedure Log in to the OpenShift Web Console. Click Operators Installed Operators. Click OpenShift Data Foundation Operator. Click the Storage Systems tab. Click the Action Menu (...) on the far right of the storage system name to extend the options menu. Select Add Capacity from the options menu. Select the Storage Class. Choose the storage class that you want to use to provision new storage devices. Click Add. To check the status, navigate to Storage Data Foundation and verify that Storage System in the Status card has a green tick. Verification steps Verify the Raw Capacity card. In the OpenShift Web Console, click Storage Data Foundation. In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop-up that appears. In the Block and File tab, check the Raw Capacity card. Note that the capacity increases based on your selections. Note The raw capacity does not take replication into account and shows the full capacity. Verify that the new OSDs and their corresponding new Persistent Volume Claims (PVCs) are created. To view the state of the newly created OSDs: Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. To view the state of the PVCs: Click Storage Persistent Volume Claims from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. Identify the nodes where the new OSD pods are running; a command sketch is provided at the end of this chapter. <OSD-pod-name> Is the name of the OSD pod. For each of the nodes identified in the previous step, do the following: Create a debug pod and open a chroot environment for the selected host. <node-name> Is the name of the node. Check for the crypt keyword beside the ocs-deviceset names. Important Cluster reduction is supported only with the Red Hat Support Team's assistance. 5.2. Scaling up a cluster created using local storage devices In order to scale up an OpenShift Data Foundation cluster which was created using local storage devices, a new disk needs to be added to the storage node. It is recommended to have the new disks of the same size as used earlier during the deployment, because OpenShift Data Foundation does not support heterogeneous disks/OSDs. For deployments having three failure domains, you can scale up the storage by adding disks in multiples of three, with the same number of disks coming from nodes in each of the failure domains. For example, if we scale by adding six disks, two disks are taken from nodes in each of the three failure domains.
If the number of disks is not a multiple of three, only the largest multiple of three is consumed, and the remaining disks remain unused. For deployments having fewer than three failure domains, there is flexibility in the number of disks you can add. In this case, you can add any number of disks. To check whether flexible scaling is enabled, refer to the Knowledgebase article Verify if flexible scaling is enabled. Note Flexible scaling is enabled at the time of deployment and cannot be enabled or disabled later. Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Disks to be used for scaling are already attached to the storage node. LocalVolumeDiscovery and LocalVolumeSet objects are already created. Procedure To add capacity, you can either use a storage class that you provisioned during the deployment or any other storage class that matches the filter. In the OpenShift Web Console, click Operators Installed Operators. Click OpenShift Data Foundation Operator. Click the Storage Systems tab. Click the Action menu (...) next to the visible list to extend the options menu. Select Add Capacity from the options menu. Select the Storage Class for which you added disks or the new storage class, depending on your requirement. The Available Capacity displayed is based on the local disks available in the storage class. Click Add. To check the status, navigate to Storage Data Foundation and verify that the Storage System in the Status card has a green tick. Verification steps Verify the Raw Capacity card. In the OpenShift Web Console, click Storage Data Foundation. In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop-up that appears. In the Block and File tab, check the Raw Capacity card. Note that the capacity increases based on your selections. Note The raw capacity does not take replication into account and shows the full capacity. Verify that the new OSDs and their corresponding new Persistent Volume Claims (PVCs) are created. To view the state of the newly created OSDs: Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. To view the state of the PVCs: Click Storage Persistent Volume Claims from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. Identify the nodes where the new OSD pods are running; a command sketch is provided at the end of this chapter. <OSD-pod-name> Is the name of the OSD pod. For each of the nodes identified in the previous step, do the following: Create a debug pod and open a chroot environment for the selected host. <node-name> Is the name of the node. Check for the crypt keyword beside the ocs-deviceset names. Important Cluster reduction is supported only with the Red Hat Support Team's assistance. 5.3. Scaling out storage capacity on a VMware cluster 5.3.1. Adding a node to an installer-provisioned infrastructure Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster.
Procedure Navigate to Compute Machine Sets . On the machine set where you want to add nodes, select Edit Machine Count . Add the number of nodes, and click Save . Click Compute Nodes and confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node. For the new node, click Action menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . Note It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them. In case of bare metal installer-provisioned infrastructure deployment, you must expand the cluster first. For instructions, see Expanding the cluster . Verification steps Execute the following command in the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* 5.3.2. Adding a node to a user-provisioned infrastructure Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Procedure Depending on the type of infrastructure, perform the following steps: Get a new machine with the required infrastructure. See Platform requirements . Create a new OpenShift Container Platform worker node using the new machine. Check for certificate signing requests (CSRs) that are in Pending state. Approve all the required CSRs for the new node. <Certificate_Name> Is the name of the CSR. Click Compute Nodes , confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From User interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From Command line interface Apply the OpenShift Data Foundation label to the new node. <new_node_name> Is the name of the new node. Verification steps Execute the following command in the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* 5.3.3. Adding a node using a local storage device You can add nodes to increase the storage capacity when existing worker nodes are already running at their maximum supported OSDs or when there are not enough resources to add new OSDs on the existing nodes. Add nodes in multiples of 3, each of them in a different failure domain. Though it is recommended to add nodes in multiples of 3, you have the flexibility to add one node at a time in a flexible scaling deployment. See the Knowledgebase article Verify if flexible scaling is enabled . Note OpenShift Data Foundation does not support heterogeneous disk sizes and types. The new nodes to be added should have disks of the same type and size as those used during the initial OpenShift Data Foundation deployment. Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Procedure Depending on the type of infrastructure, perform the following steps: Get a new machine with the required infrastructure. See Platform requirements . Create a new OpenShift Container Platform worker node using the new machine. Check for certificate signing requests (CSRs) that are in Pending state, for example as shown in the sketch below. 
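If you prefer to check for pending CSRs from the command line, the following is a minimal sketch based on the oc get csr command listed at the end of this chapter; the grep filter for the Pending condition is an added assumption for convenience:
# List all CSRs, then show only the ones still waiting for approval
oc get csr
oc get csr | grep Pending
Each pending entry must then be approved as described in the next step.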
Approve all the required CSRs for the new node. <Certificate_Name> Is the name of the CSR. Click Compute Nodes , confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From User interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From Command line interface Apply the OpenShift Data Foundation label to the new node. <new_node_name> Is the name of the new node. Click Operators Installed Operators from the OpenShift Web Console. From the Project drop-down list, make sure to select the project where the Local Storage Operator is installed. Click Local Storage . Click the Local Volume Discovery tab. Beside the LocalVolumeDiscovery , click Action menu (...) Edit Local Volume Discovery . In the YAML, add the hostname of the new node in the values field under the node selector. Click Save . Click the Local Volume Sets tab. Beside the LocalVolumeSet , click Action menu (...) Edit Local Volume Set . In the YAML, add the hostname of the new node in the values field under the node selector . Click Save . Note It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them. Verification steps Execute the following command in the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* 5.3.4. Scaling up storage capacity To scale up storage capacity: For dynamic storage devices, see Scaling up storage capacity on a cluster . For local storage devices, see Scaling up a cluster created using local storage devices .
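As a convenience, the optional OSD encryption check described earlier in this chapter can be chained together as follows. This is a sketch that reuses the commands listed below; the pod name rook-ceph-osd-0-544db49d7f-qrgqm and the node name compute-1 come from the example output and must be replaced with values from your own cluster:
# Find the node that runs the new OSD pod
oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm
# Open a debug shell on that node and inspect the block devices
oc debug node/compute-1
chroot /host
lsblk
In the lsblk output, look for the crypt keyword beside the ocs-deviceset names to confirm that the new OSD devices are encrypted.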
[ "oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/ <OSD-pod-name>", "oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm", "NODE compute-1", "oc debug node/ <node-name>", "chroot /host", "lsblk", "oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/ <OSD-pod-name>", "oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm", "NODE compute-1", "oc debug node/ <node-name>", "chroot /host", "lsblk", "oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1", "oc get csr", "oc adm certificate approve <Certificate_Name>", "oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"", "oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1", "oc get csr", "oc adm certificate approve <Certificate_Name>", "oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"", "oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/scaling_storage/scaling_storage_of_vmware_openshift_data_foundation_cluster
Chapter 10. Publishing your project to Red Hat Fuse
Chapter 10. Publishing your project to Red Hat Fuse This tutorial walks you through the process of publishing your project to Red Hat Fuse. It assumes that you have an instance of Red Hat Fuse installed on the same machine on which you are running the Red Hat Fuse Tooling. Goals In this tutorial you complete the following tasks: Define a Red Hat Fuse server Configure the publishing options Start up the Red Hat Fuse server and publish the ZooOrderApp project Connect to the Red Hat Fuse server Verify whether the ZooOrderApp project's bundle was successfully built and published Uninstall the ZooOrderApp project Prerequisites Before you start this tutorial you need: Access to a Red Hat Fuse instance Java 8 installed on your computer The ZooOrderApp project resulting from one of the following: Complete the Chapter 9, Testing a route with JUnit tutorial. or Complete the Chapter 2, Setting up your environment tutorial and replace your project's blueprint.xml file with the provided blueprintContexts/blueprint3.xml file, as described in the section called "About the resource files" . Defining a Red Hat Fuse Server To define a server: Open the Fuse Integration perspective. Click the Servers tab in the lower, right panel to open the Servers view. Click the No servers are available. Click this link to create a new server... link to open the Define a New Server page. Note To define a new server when one is already defined, right-click inside the Servers view and then select New Server . Expand the Red Hat JBoss Middleware node to expose the available server options: Select a Red Hat Fuse server. Accept the defaults for Server's host name ( localhost ) and Server name (Fuse n.n Runtime Server), and then click to open the Runtime page: Note If you do not have Fuse already installed, you can download it now using the Download and install runtime link. If you have already defined a server, the tooling skips this page, and instead displays the configuration details page. Accept the default for Name . Click Browse to the Home Directory field, to navigate to the installation and select it. Select the runtime JRE from the drop-down menu to Execution Environment . Select JavaSE-1.8 (recommended). If necessary, click the Environments button to select it from the list. Note The Fuse server requires Java 8 (recommended). To select it for the Execution Environment , you must have previously installed it. Leave the Alternate JRE option as is. Click to save the runtime definition for the Fuse Server and open the Fuse server configuration details page: Accept the default for SSH Port ( 8101 ). The runtime uses the SSH port to connect to the server's Karaf shell. If this default is incorrect, you can discover the correct port number by looking in the Red Hat Fuse installDir /etc/org.apache.karaf.shell.cfg file. In User Name , enter the name used to log into the server. This is a user name stored in the Red Hat Fuse installDir `/etc/users.properties` file. Note If the default user has been activated (uncommented) in the /etc/users.properties file, the tooling autofills User Name and Password with the default user's name and password. 
If one has not been set, you can either add one to that file using the format user=password,role (for example, joe=secret,Administrator ), or you can set one using the karaf jaas command set: jaas:realms - to list the realms jaas:manage --index 1 - to edit the first (server) realm jaas:useradd <username> <password> - to add a user and associated password jaas:roleadd <username> Administrator - to specify the new user's role jaas:update - to update the realm with the new user information If a jaas realm has already been selected for the server, you can discover the user name by issuing the command JBossFuse:karaf@root> jaas:users . In Password , type the password required for User name to log into the server. This is the password set either in Red Hat Fuse's installDir /etc/users.properties file or by the karaf jaas commands. Click Finish . Runtime Server [stopped, Synchronized] appears in the Servers view. In the Servers view, expand the Runtime Server: JMX[Disconnected] appears as a node under the Runtime Server [stopped, Synchronized] entry. Configuring the publishing options Using publishing options, you can configure how and when your ZooOrderApp project is published to a running server: Automatically, immediately upon saving changes made to the project Automatically, at configured intervals after you have changed and saved the project Manually, when you select a publish operation In this tutorial, you configure immediate publishing upon saving changes to the ZooOrderApp project. To do so: In the Servers view, double-click the Runtime Server [stopped, Synchronized] entry to display its overview. On the server's Overview page, expand the Publishing section to expose the options. Make sure that the option Automatically publish when resources change is enabled. Optionally, change the value of Publishing interval to speed up or delay publishing the project when changes have been made. In the Servers view, click . Wait a few seconds for the server to start. When it does: The Terminal view displays the splash screen: The Servers view displays: The JMX Navigator displays n.n Runtime Server[Disconnected : In the Servers view, right-click n.n Runtime Server [Started] and then select Add and Remove to open the Add and Remove page: Make sure the option If server is started, publish changes immediately is checked. Select ZooOrderApp and click Add to assign it to the Fuse server: Click Finish . The Servers view should show the following: Runtime Server [Started, Synchronized] Note For a server, synchronized means that all modules published on the server are identical to their local counterparts. ZooOrderApp [Started, Synchronized] Note For a module, synchronized means that the published module is identical to its local counterpart. Because automatic publishing is enabled, changes made to the ZooOrderApp project are published in seconds (according to the value of the Publishing interval ). JMX[Disconnected] Connecting to the runtime server After you connect to the runtime server, you can see the published elements of your ZooOrderApp project and interact with them. In the Servers view, double-click JMX[Disconnected] to connect to the runtime server. In the JMX Navigator , expand the Camel folder to expose the elements of the ZooOrderApp . Click the Bundles node to populate the Properties view with the list of bundles installed on the runtime server: In the Search field, type ZooOrderApp. 
The corresponding bundle is shown: Note Alternatively, you can issue the osgi:list command in the Terminal view to see a generated list of bundles installed on the server runtime. The tooling uses a different naming scheme for OSGi bundles displayed by the osgi:list command. In this case, the command returns Camel Blueprint Quickstart , which appears at the end of the list of installed bundles. In the <build> section of project's pom.xml file, you can find the bundle's symbolic name and its bundle name (OSGi) listed in the maven-bundle-plugin entry: Uninstalling the ZooOrderApp project Note You do not need to disconnect the JMX connection or stop the server to uninstall a published resource. To remove the ZooOrderApp resource from the runtime server: In the Servers view, right-click n.n Runtime Server to open the context menu. Select Add and Remove : In the Configured column, select ZooOrderApp , and then click Remove to move the ZooOrderApp resource to the Available column. Click Finish . In the Servers view, right-click JMX[Connected] and then click Refresh . The Camel tree under JMX[Connected] disappears. Note In JMX Navigator , the Camel tree under Server Connections > n.n Runtime Server[Connected] also disappears. With the Bundles page displayed in the Properties view, scroll down to the end of the list to verify that the ZooOrderApp's bundle is no longer listed.
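The server-side steps referenced in this tutorial can also be performed directly in the Red Hat Fuse Karaf console. The following is an illustrative sketch only: the user name admin and password secret are placeholder values, the realm index can differ from 1 on your installation (verify it with jaas:realms first), and if the grep command is not available in your console version, review the osgi:list output directly instead:
JBossFuse:karaf@root> jaas:realms
JBossFuse:karaf@root> jaas:manage --index 1
JBossFuse:karaf@root> jaas:useradd admin secret
JBossFuse:karaf@root> jaas:roleadd admin Administrator
JBossFuse:karaf@root> jaas:update
JBossFuse:karaf@root> osgi:list | grep -i camel
The last command is the console equivalent of the Bundles search described above; after you uninstall the ZooOrderApp resource, the same command should no longer return the project's bundle.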
null
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/tooling_tutorials/RiderTutDeployFESB
3.4. cpuset
3.4. cpuset The cpuset subsystem assigns individual CPUs and memory nodes to cgroups. Each cpuset can be specified according to the following parameters, each one in a separate pseudofile within the cgroup virtual file system: Important Some subsystems have mandatory parameters that must be set before you can move a task into a cgroup which uses any of those subsystems. For example, before you move a task into a cgroup which uses the cpuset subsystem, the cpuset.cpus and cpuset.mems parameters must be defined for that cgroup. cpuset.cpus (mandatory) specifies the CPUs that tasks in this cgroup are permitted to access. This is a comma-separated list, with dashes (" - ") to represent ranges. For example, represents CPUs 0, 1, 2, and 16. cpuset.mems (mandatory) specifies the memory nodes that tasks in this cgroup are permitted to access. This is a comma-separated list in the ASCII format, with dashes (" - ") to represent ranges. For example, represents memory nodes 0, 1, 2, and 16. cpuset.memory_migrate contains a flag ( 0 or 1 ) that specifies whether a page in memory should migrate to a new node if the values in cpuset.mems change. By default, memory migration is disabled ( 0 ) and pages stay on the node to which they were originally allocated, even if the node is no longer among the nodes specified in cpuset.mems . If enabled ( 1 ), the system migrates pages to memory nodes within the new parameters specified by cpuset.mems , maintaining their relative placement if possible - for example, pages on the second node on the list originally specified by cpuset.mems are allocated to the second node on the new list specified by cpuset.mems , if the place is available. cpuset.cpu_exclusive contains a flag ( 0 or 1 ) that specifies whether cpusets other than this one and its parents and children can share the CPUs specified for this cpuset. By default ( 0 ), CPUs are not allocated exclusively to one cpuset. cpuset.mem_exclusive contains a flag ( 0 or 1 ) that specifies whether other cpusets can share the memory nodes specified for the cpuset. By default ( 0 ), memory nodes are not allocated exclusively to one cpuset. Reserving memory nodes for the exclusive use of a cpuset ( 1 ) is functionally the same as enabling a memory hardwall with the cpuset.mem_hardwall parameter. cpuset.mem_hardwall contains a flag ( 0 or 1 ) that specifies whether kernel allocations of memory page and buffer data should be restricted to the memory nodes specified for the cpuset. By default ( 0 ), page and buffer data is shared across processes belonging to multiple users. With a hardwall enabled ( 1 ), each tasks' user allocation can be kept separate. cpuset.memory_pressure a read-only file that contains a running average of the memory pressure created by the processes in the cpuset. The value in this pseudofile is automatically updated when cpuset.memory_pressure_enabled is enabled, otherwise, the pseudofile contains the value 0 . cpuset.memory_pressure_enabled contains a flag ( 0 or 1 ) that specifies whether the system should compute the memory pressure created by the processes in the cgroup. Computed values are output to cpuset.memory_pressure and represent the rate at which processes attempt to free in-use memory, reported as an integer value of attempts to reclaim memory per second, multiplied by 1000. cpuset.memory_spread_page contains a flag ( 0 or 1 ) that specifies whether file system buffers should be spread evenly across the memory nodes allocated to the cpuset. 
By default ( 0 ), no attempt is made to spread memory pages for these buffers evenly, and buffers are placed on the same node on which the process that created them is running. cpuset.memory_spread_slab contains a flag ( 0 or 1 ) that specifies whether kernel slab caches for file input/output operations should be spread evenly across the cpuset. By default ( 0 ), no attempt is made to spread kernel slab caches evenly, and slab caches are placed on the same node on which the process that created them is running. cpuset.sched_load_balance contains a flag ( 0 or 1 ) that specifies whether the kernel will balance loads across the CPUs in the cpuset. By default ( 1 ), the kernel balances loads by moving processes from overloaded CPUs to less heavily used CPUs. Note, however, that setting this flag in a cgroup has no effect if load balancing is enabled in any parent cgroup, as load balancing is already being carried out at a higher level. Therefore, to disable load balancing in a cgroup, disable load balancing also in each of its parents in the hierarchy. In this case, you should also consider whether load balancing should be enabled for any siblings of the cgroup in question. cpuset.sched_relax_domain_level contains an integer between -1 and a small positive value, which represents the width of the range of CPUs across which the kernel should attempt to balance loads. This value is meaningless if cpuset.sched_load_balance is disabled. The precise effect of this value varies according to system architecture, but the following values are typical: Values of cpuset.sched_relax_domain_level Value Effect -1 Use the system default value for load balancing 0 Do not perform immediate load balancing; balance loads only periodically 1 Immediately balance loads across threads on the same core 2 Immediately balance loads across cores in the same package or book (in case of s390x architectures) 3 Immediately balance loads across books in the same package (available only for s390x architectures) 4 Immediately balance loads across CPUs on the same node or blade 5 Immediately balance loads across several CPUs on architectures with non-uniform memory access (NUMA) 6 Immediately balance loads across all CPUs on architectures with NUMA Note With the release of Red Hat Enterprise Linux 6.1 the BOOK scheduling domain has been added to the list of supported domain levels. This change affected the meaning of cpuset.sched_relax_domain_level values. Please note that the effect of values from 3 to 5 changed. For example, to get the old effect of value 3, which was "Immediately balance loads across CPUs on the same node or blade" the value 4 needs to be selected. Similarly, the old 4 is now 5, and the old 5 is now 6.
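As a quick illustration of how the mandatory parameters fit together, the following shell sketch creates a child cpuset and moves a shell into it. It assumes the cpuset hierarchy is mounted at /cgroup/cpuset; adjust the path to wherever the hierarchy is mounted on your system. The group name demo is a placeholder, and the CPU and memory node values are the example values used above:
# Create a child cpuset in the mounted hierarchy
mkdir /cgroup/cpuset/demo
# Both parameters below are mandatory before any task can be moved in
echo 0-2,16 > /cgroup/cpuset/demo/cpuset.cpus
echo 0 > /cgroup/cpuset/demo/cpuset.mems
# Move the current shell (and its future children) into the cpuset
echo $$ > /cgroup/cpuset/demo/tasks
Writing a process ID to the tasks pseudofile is what actually places a task in the cgroup; reading the same file lists the tasks currently in it.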
[ "0-2,16", "0-2,16" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/resource_management_guide/sec-cpuset
20.17. Guest Virtual Machine Retrieval Commands
20.17. Guest Virtual Machine Retrieval Commands 20.17.1. Displaying the Host Physical Machine Name The virsh domhostname domain command displays the specified guest virtual machine's physical host name, provided the hypervisor can publish it. Example 20.39. How to display the host physical machine name The following example displays the host physical machine name for the guest1 virtual machine, if the hypervisor makes it available: # virsh domhostname guest1 20.17.2. Displaying General Information about a Virtual Machine The virsh dominfo domain command displays basic information about a specified guest virtual machine. This command may also be used with the option [--domain] guestname . Example 20.40. How to display general information about the guest virtual machine The following example displays general information about the guest virtual machine named guest1 : 20.17.3. Displaying a Virtual Machine's ID Number Although virsh list includes the ID in its output, the virsh domid <domain>|<ID> command displays the ID for the guest virtual machine, provided it is running. An ID will change each time you run the virtual machine. If the guest virtual machine is shut off, the machine name will be displayed as a series of dashes ('-----'). This command may also be used with the [--domain guestname ] option. Example 20.41. How to display a virtual machine's ID number In order to run this command and receive any usable output, the virtual machine should be running. The following example produces the ID number of the guest1 virtual machine: 20.17.4. Aborting Running Jobs on a Guest Virtual Machine The virsh domjobabort domain command aborts the currently running job on the specified guest virtual machine. This command may also be used with the [--domain guestname ] option. Example 20.42. How to abort a running job on a guest virtual machine In this example, there is a job running on the guest1 virtual machine that you want to abort. When running the command, change guest1 to the name of your virtual machine: # virsh domjobabort guest1 20.17.5. Displaying Information about Jobs Running on the Guest Virtual Machine The virsh domjobinfo domain command displays information about jobs running on the specified guest virtual machine, including migration statistics. This command may also be used with the [--domain guestname ] option, or with the --completed option to return information on the statistics of a recently completed job. Example 20.43. How to display statistical feedback The following example lists statistical information about the guest1 virtual machine: 20.17.6. Displaying the Guest Virtual Machine's Name The virsh domname domainID command displays the name of the guest virtual machine, given its ID or UUID. Although the virsh list --all command will also display the guest virtual machine's name, this command only lists the guest's name. Example 20.44. How to display the name of the guest virtual machine The following example displays the name of the guest virtual machine with domain ID 8 : 20.17.7. Displaying the Virtual Machine's State The virsh domstate domain command displays the state of the given guest virtual machine. This command may also be used with the [--domain guestname ] option, as well as the --reason option, which displays the reason for the displayed state. If the command reveals an error, you should run the command virsh domblkerror . See Section 20.12.7, "Displaying Errors on Block Devices" for more details. 
Example 20.45. How to display the guest virtual machine's current state The following example displays the current state of the guest1 virtual machine: 20.17.8. Displaying the Connection State to the Virtual Machine The virsh domcontrol domain command displays the state of the interface to the hypervisor that is used to control the specified guest virtual machine. For states that are not OK or Error, it also prints the number of seconds that have elapsed since the control interface entered the displayed state. Example 20.46. How to display the guest virtual machine's interface state The following example displays the current state of the guest1 virtual machine's interface.
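Taken together, the retrieval commands in this section can serve as a quick status check for a guest. The following sketch simply chains the commands documented above; guest1 and the ID 8 are the example values used throughout this section and should be replaced with your own guest's name or ID:
# virsh domid guest1
# virsh domname 8
# virsh dominfo guest1
# virsh domstate guest1 --reason
# virsh domcontrol guest1
# virsh domjobinfo guest1 --completed
The --reason and --completed options are optional; they add the reason for the reported state and the statistics of the most recently completed job, respectively.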
[ "virsh dominfo guest1 Id: 8 Name: guest1 UUID: 90e0d63e-d5c1-4735-91f6-20a32ca22c40 OS Type: hvm State: running CPU(s): 1 CPU time: 271.9s Max memory: 1048576 KiB Used memory: 1048576 KiB Persistent: yes Autostart: disable Managed save: no Security model: selinux Security DOI: 0 Security label: system_u:system_r:svirt_t:s0:c422,c469 (enforcing)", "virsh domid guest1 8", "virsh domjobinfo guest1 Job type: Unbounded Time elapsed: 1603 ms Data processed: 47.004 MiB Data remaining: 658.633 MiB Data total: 1.125 GiB Memory processed: 47.004 MiB Memory remaining: 658.633 MiB Memory total: 1.125 GiB Constant pages: 114382 Normal pages: 12005 Normal data: 46.895 MiB Expected downtime: 0 ms Compression cache: 64.000 MiB Compressed data: 0.000 B Compressed pages: 0 Compression cache misses: 12005 Compression overflows: 0", "virsh domname 8 guest1", "virsh domstate guest1 running", "virsh domcontrol guest1 ok" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-Domain_Commands-Domain_Retrieval_Commands