Chapter 21. MachineConfiguration [operator.openshift.io/v1]
Chapter 21. MachineConfiguration [operator.openshift.io/v1] Description MachineConfiguration provides information to configure an operator to manage Machine Configuration. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 21.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec is the specification of the desired behavior of the Machine Config Operator status object status is the most recently observed status of the Machine Config Operator 21.1.1. .spec Description spec is the specification of the desired behavior of the Machine Config Operator Type object Property Type Description failedRevisionLimit integer failedRevisionLimit is the number of failed static pod installer revisions to keep on disk and in the api -1 = unlimited, 0 or unset = 5 (default) forceRedeploymentReason string forceRedeploymentReason can be used to force the redeployment of the operand by providing a unique string. This provides a mechanism to kick a previously failed deployment and provide a reason why you think it will work this time instead of failing again on the same config. logLevel string logLevel is an intent based logging for an overall component. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for their operands. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". managementState string managementState indicates whether and how the operator should manage the component observedConfig `` observedConfig holds a sparse config that controller has observed from the cluster state. It exists in spec because it is an input to the level for the operator operatorLogLevel string operatorLogLevel is an intent based logging for the operator itself. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for themselves. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". succeededRevisionLimit integer succeededRevisionLimit is the number of successful static pod installer revisions to keep on disk and in the api -1 = unlimited, 0 or unset = 5 (default) unsupportedConfigOverrides `` unsupportedConfigOverrides overrides the final configuration that was computed by the operator. Red Hat does not support the use of this field. Misuse of this field could lead to unexpected behavior or conflict with other configuration options. Seek guidance from the Red Hat support before using this field. Use of this property blocks cluster upgrades, it must be removed before upgrading your cluster. 21.1.2. 
.status Description status is the most recently observed status of the Machine Config Operator Type object Property Type Description conditions array conditions is a list of conditions and their status conditions[] object OperatorCondition is just the standard condition fields. generations array generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. generations[] object GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. latestAvailableRevision integer latestAvailableRevision is the deploymentID of the most recent deployment latestAvailableRevisionReason string latestAvailableRevisionReason describe the detailed reason for the most recent deployment nodeStatuses array nodeStatuses track the deployment values and errors across individual nodes nodeStatuses[] object NodeStatus provides information about the current state of a particular node managed by this operator. observedGeneration integer observedGeneration is the last generation change you've dealt with readyReplicas integer readyReplicas indicates how many replicas are ready and at the desired state version string version is the level this availability applies to 21.1.3. .status.conditions Description conditions is a list of conditions and their status Type array 21.1.4. .status.conditions[] Description OperatorCondition is just the standard condition fields. Type object Required type Property Type Description lastTransitionTime string message string reason string status string type string 21.1.5. .status.generations Description generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. Type array 21.1.6. .status.generations[] Description GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. Type object Property Type Description group string group is the group of the thing you're tracking hash string hash is an optional field set for resources without generation that are content sensitive like secrets and configmaps lastGeneration integer lastGeneration is the last generation of the workload controller involved name string name is the name of the thing you're tracking namespace string namespace is where the thing you're tracking is resource string resource is the resource type of the thing you're tracking 21.1.7. .status.nodeStatuses Description nodeStatuses track the deployment values and errors across individual nodes Type array 21.1.8. .status.nodeStatuses[] Description NodeStatus provides information about the current state of a particular node managed by this operator. Type object Required nodeName Property Type Description currentRevision integer currentRevision is the generation of the most recently successful deployment lastFailedCount integer lastFailedCount is how often the installer pod of the last failed revision failed. lastFailedReason string lastFailedReason is a machine readable failure reason string. lastFailedRevision integer lastFailedRevision is the generation of the deployment we tried and failed to deploy. lastFailedRevisionErrors array (string) lastFailedRevisionErrors is a list of human readable errors during the failed deployment referenced in lastFailedRevision. lastFailedTime string lastFailedTime is the time the last failed revision failed the last time. lastFallbackCount integer lastFallbackCount is how often a fallback to a revision happened. 
nodeName string nodeName is the name of the node targetRevision integer targetRevision is the generation of the deployment we're trying to apply 21.2. API endpoints The following API endpoints are available: /apis/operator.openshift.io/v1/machineconfigurations DELETE : delete collection of MachineConfiguration GET : list objects of kind MachineConfiguration POST : create a MachineConfiguration /apis/operator.openshift.io/v1/machineconfigurations/{name} DELETE : delete a MachineConfiguration GET : read the specified MachineConfiguration PATCH : partially update the specified MachineConfiguration PUT : replace the specified MachineConfiguration /apis/operator.openshift.io/v1/machineconfigurations/{name}/status GET : read status of the specified MachineConfiguration PATCH : partially update status of the specified MachineConfiguration PUT : replace status of the specified MachineConfiguration 21.2.1. /apis/operator.openshift.io/v1/machineconfigurations HTTP method DELETE Description delete collection of MachineConfiguration Table 21.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind MachineConfiguration Table 21.2. HTTP responses HTTP code Reponse body 200 - OK MachineConfigurationList schema 401 - Unauthorized Empty HTTP method POST Description create a MachineConfiguration Table 21.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 21.4. Body parameters Parameter Type Description body MachineConfiguration schema Table 21.5. HTTP responses HTTP code Reponse body 200 - OK MachineConfiguration schema 201 - Created MachineConfiguration schema 202 - Accepted MachineConfiguration schema 401 - Unauthorized Empty 21.2.2. /apis/operator.openshift.io/v1/machineconfigurations/{name} Table 21.6. Global path parameters Parameter Type Description name string name of the MachineConfiguration HTTP method DELETE Description delete a MachineConfiguration Table 21.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 21.8. 
HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified MachineConfiguration Table 21.9. HTTP responses HTTP code Reponse body 200 - OK MachineConfiguration schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified MachineConfiguration Table 21.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 21.11. HTTP responses HTTP code Reponse body 200 - OK MachineConfiguration schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified MachineConfiguration Table 21.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 21.13. Body parameters Parameter Type Description body MachineConfiguration schema Table 21.14. HTTP responses HTTP code Reponse body 200 - OK MachineConfiguration schema 201 - Created MachineConfiguration schema 401 - Unauthorized Empty 21.2.3. /apis/operator.openshift.io/v1/machineconfigurations/{name}/status Table 21.15. 
Global path parameters Parameter Type Description name string name of the MachineConfiguration HTTP method GET Description read status of the specified MachineConfiguration Table 21.16. HTTP responses HTTP code Reponse body 200 - OK MachineConfiguration schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified MachineConfiguration Table 21.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 21.18. HTTP responses HTTP code Reponse body 200 - OK MachineConfiguration schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified MachineConfiguration Table 21.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 21.20. Body parameters Parameter Type Description body MachineConfiguration schema Table 21.21. HTTP responses HTTP code Reponse body 200 - OK MachineConfiguration schema 201 - Created MachineConfiguration schema 401 - Unauthorized Empty
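For illustration only, a minimal manifest that sets the spec fields described above might look like the following sketch. The metadata.name value of cluster and the field values shown are assumptions for this example, not values taken from the product documentation.

apiVersion: operator.openshift.io/v1
kind: MachineConfiguration
metadata:
  name: cluster                # assumed name for the cluster-wide operator configuration
spec:
  managementState: Managed     # whether and how the operator manages the component
  logLevel: Normal             # operand logging intent: Normal, Debug, Trace, or TraceAll
  operatorLogLevel: Normal     # operator logging intent
  forceRedeploymentReason: ""  # set to a unique string to force a redeployment

Such a manifest could be applied with oc apply -f and then read back through the /apis/operator.openshift.io/v1/machineconfigurations/{name} endpoint listed in Section 21.2.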
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/operator_apis/machineconfiguration-operator-openshift-io-v1
Appendix A. Using your subscription
Appendix A. Using your subscription AMQ Streams is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal. Accessing Your Account Go to access.redhat.com . If you do not already have an account, create one. Log in to your account. Activating a Subscription Go to access.redhat.com . Navigate to My Subscriptions . Navigate to Activate a subscription and enter your 16-digit activation number. Downloading Zip and Tar Files To access zip or tar files, use the customer portal to find the relevant files for download. If you are using RPM packages, this step is not required. Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads . Locate the Red Hat AMQ Streams entries in the INTEGRATION AND AUTOMATION category. Select the desired AMQ Streams product. The Software Downloads page opens. Click the Download link for your component. Revised on 2021-08-18 09:24:31 UTC
null
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/amq_streams_on_openshift_overview/using_your_subscription
Chapter 3. Example: Automate Red Hat Enterprise Linux firewall configuration
Chapter 3. Example: Automate Red Hat Enterprise Linux firewall configuration This example demonstrates how the Ansible plug-ins can help Ansible users of all skill levels create quality Ansible content. As an infrastructure engineer new to Ansible, you have been tasked to create a playbook to configure a Red Hat Enterprise Linux (RHEL) host firewall. The following procedures show you how to use the Ansible plug-ins and Dev Spaces to develop a playbook. 3.1. Learning more about playbooks The first step is to learn more about Ansible playbooks using the available learning paths. Click the Ansible A icon in the Red Hat Developer Hub navigation panel. Click Learn and select the Getting Started with Ansible Playbooks learning path. This redirects you to the Red Hat Developer website. If you are prompted to log in, create a Red Hat Developer account, or enter your details. Complete the learning path. 3.2. Discovering existing Ansible content for RHEL system roles Red Hat recommends that you use trusted automation content that has been tested and approved by Red Hat or your organization. Automation hub is a central repository for discovering, downloading, and managing trusted content collections from Red Hat and its partners. Private automation hub provides an on-premise solution for managing content collections. Click on the Ansible A icon in the Red Hat Developer Hub navigation panel. Click Discover existing collections . Click Go to Automation Hub . If private automation hub has been configured in the Ansible plug-ins, you are redirected to your PrivateHubName instance. If private automation hub has not been configured in the Ansible plug-ins installation configuration, you will be redirected to the Red Hat Hybrid Console (RHCC) automation hub. In this example, you are redirected to the RHCC automation hub. If you are prompted to log in, provide your Red Hat Customer Portal credentials. Filter the collections with the rhel firewall keywords. The search returns the rhel_system_roles collection. The RHEL System Roles collection contains certified Ansible content that you can reuse to configure your firewall. 3.3. Create a new playbook project to configure a firewall Use the Ansible plug-ins to create a new Ansible Playbook project. Click the Ansible A icon in the Red Hat Developer Hub navigation panel. From the Create dropdown menu on the landing page, select Create Ansible Git Project . Click Choose in the Create Ansible Playbook Project software template. Fill in the following information in the Create Ansible Playbook Project page: Field Required Description Example value Source code repository organization name or username Yes The name of your source code repository username or organization name. my_github_username Playbook repository name Yes The name of your new Git repository. rhel_firewall_config Playbook description No A description of the new playbook project. This playbook configures firewalls on Red Hat Enterprise Linux systems Playbook project's collection namespace Yes The new playbook Git project creates an example collection folder for you. Enter a value for the collection namespace. my_galaxy_username Playbook project's collection name Yes This is the name of the example collection. rhel_firewall_config Catalog Owner Name Yes The name of the Developer Hub catalog item owner. It is a Red Hat Developer Hub field. my_rhdh_username System No This is a Red Hat Developer Hub field. my_rhdh_linux_system Click Review . Click Create to provision your new playbook project. 
Click Open in catalog to view your project. 3.4. Creating a new playbook to automate the firewall configuration Create a new playbook and use the RHEL System Role collection to automate your Red Hat Enterprise Linux firewall configuration. In your Dev Spaces instance, click File New File . Enter firewall.yml for the filename and click OK to save it in the root directory. Add the following lines to your firewall.yml file: --- - name: Open HTTPS and SSH on firewall hosts: rhel become: true tasks: - name: Use rhel system roles to allow https and ssh traffic vars: firewall: - service: https state: enabled permanent: true immediate: true zone: public - service: ssh state: enabled permanent: true immediate: true zone: public ansible.builtin.include_role: name: redhat.rhel_system_roles.firewall Note You can use Ansible Lightspeed with IBM watsonx Code Assistant from the Ansible VS Code extension to help you generate playbooks. For more information, refer to the Ansible Lightspeed with IBM watsonx Code Assistant User Guide . 3.5. Editing your firewall playbook project The Ansible plug-ins integrate OpenShift Dev Spaces to edit your Ansible projects. OpenShift Dev Spaces provides on-demand, web-based Integrated Development Environments (IDEs). Ansible Git projects provisioned using the Ansible plug-ins include best practice configurations for OpenShift Dev Spaces. These configurations include installing the Ansible VS Code extension and providing access from the IDE terminal to Ansible development tools, such as Ansible Navigator and Ansible Lint. Note OpenShift Dev Spaces is optional and it is not required to run the Ansible plug-ins. It is a separate Red Hat product and it is not included in the Ansible Automation Platform or Red Hat Developer Hub subscription. This example assumes that OpenShift Dev Spaces has been configured in the Ansible plug-ins installation. Procedure In the catalog item view of your playbook project, click Open Ansible project in OpenShift Dev Spaces . A VS Code instance of OpenShift Dev Spaces opens in a new browser tab. It automatically loads your new Ansible Playbook Git project.
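As a rough sketch of how the playbook could be exercised from the Dev Spaces IDE terminal with the Ansible development tools mentioned above, assuming a hypothetical inventory file named inventory.ini whose rhel group lists placeholder hosts:

# inventory.ini (placeholder hosts for the rhel group used by firewall.yml)
[rhel]
rhel-host-01.example.com
rhel-host-02.example.com

# Lint the playbook, then run it against the inventory
ansible-lint firewall.yml
ansible-navigator run firewall.yml -i inventory.ini --mode stdout

The host names and the inventory file name are assumptions for this example; replace them with the RHEL hosts you manage.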
[ "--- - name: Open HTTPS and SSH on firewall hosts: rhel become: true tasks: - name: Use rhel system roles to allow https and ssh traffic vars: firewall: - service: https state: enabled permanent: true immediate: true zone: public - service: ssh state: enabled permanent: true immediate: true zone: public ansible.builtin.include_role: name: redhat.rhel_system_roles.firewall" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/using_ansible_plug-ins_for_red_hat_developer_hub/rhdh-example_aap-plugin-rhdh-using
Chapter 7. 4.4 Release Notes
Chapter 7. 4.4 Release Notes 7.1. New Features The following major enhancements have been introduced in Red Hat Update Infrastructure 4.4. New rhui-installer argument: --pulp-workers COUNT The rhui-installer command now supports the --pulp-workers COUNT argument. RHUI administrators can set up any number of Pulp workers by re-running the rhui-installer command with this argument. CDS nodes can now be configured to never fetch unexported content With this update, Content Delivery Server (CDS) nodes can now be configured to never fetch unexported content from the RHUA node. To use this feature, re-run the rhui-installer command with the --fetch-missing-symlinks False argument, and reapply the configuration to all CDS nodes by running the rhui-manager cds reinstall --all command. If you configure your CDS nodes this way, ensure you export the content before RHUI clients start consuming it. By default, cron jobs running regularly on the RHUA node export content automatically. However, you can manually export the content by running the rhui-manager repo export --repo_id REPOSITORY_ID command. Container support is now disabled by default With this update, support for containers in RHUI is disabled by default. If you want to use containers, you must manually enable container support by re-running the rhui-installer command with the --container-support-enabled True argument, and reapplying the configuration to all CDS nodes by running the rhui-manager cds reinstall --all command. TLS 1.3 and HSTS are now available on RHUI With this update, Transport Layer Security (TLS) 1.3 and HTTP Strict Transport Security (HSTS) are now enabled in RHUI. This update improves overall RHUI security and also removes unsafe ciphers from the nginx configuration on Content Delivery Server (CDS) nodes. Packages can now be removed from custom repositories With this update, you can now remove packages from custom repositories using the text user interface (TUI) and the command line. ACS configuration is now available With this update, you can set up the Alternate Content Source (ACS) configuration in RHUI. You can use this configuration to quickly synchronize new repositories and content by substituting the remote content with matching content that is available either locally or geographically closer to your instance of RHUI. For more information, see CLI options for RHUI clients . Custom repository prefixes are now available With this update, you can use a custom prefix, or no prefix at all, when naming your RHUI repositories. You can change the prefix by re-running the rhui-installer command with the --client-repo-prefix PREFIX argument. To remove the prefix entirely, use two quotation marks, --client-repo-prefix "" . 7.2. Bug Fixes The following bugs, which have a significant impact on users, have been fixed in Red Hat Update Infrastructure 4.4. rhui-services-restart command restarts all pulpcore-worker services Previously, when the rhui-services-restart command was run, it restarted only those pulpcore-worker services that were already running and ignored services that were not running. With this update, the rhui-services-restart command restarts all pulpcore-worker services irrespective of their status. rhui-manager status command no longer indicates an incorrect status Previously, the rhui-manager status command returned an incorrect exit status when there was a problem.
For example, even when a pulpcore-worker service was not running, the rhui-manager status command exited and incorrectly indicated that there was no problem by returning the 0 exit status. With this update, the issue has been fixed and the command now returns the correct exit status if there is a problem. rhui-installer now uses the --rhua-mount-options parameter Previously, rhui-installer ignored the --rhua-mount-options parameter and only used the read-write ( rw ) mount option when setting up the RHUI remote share. With this update, rhui-installer can now set up the remote share using the --rhua-mount-options parameter and other specified options. If you do not use the --rhua-mount-options parameter, rhui-installer uses the read-write ( rw ) option by default. rhui-installer no longer rewrites container-related settings Previously, when you ran the rhui-installer command, it rewrote the /etc/rhui/rhui-tools.conf file, resetting all container-related settings. With this update, the command saves the container-related settings from the /etc/rhui/rhui-tools.conf file before rewriting it. As a result, the settings are restored after the file is rewritten.
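As a convenience, the configuration changes described in these release notes can be summarized in the following command sketch. REPOSITORY_ID is a placeholder, and any other rhui-installer options you supplied during the original installation may need to be passed again when you re-run the installer.

# Stop CDS nodes from fetching unexported content, then reapply the configuration
rhui-installer --fetch-missing-symlinks False
rhui-manager cds reinstall --all

# Re-enable container support if you need it (disabled by default in 4.4)
rhui-installer --container-support-enabled True
rhui-manager cds reinstall --all

# Manually export content so that RHUI clients can consume it
rhui-manager repo export --repo_id REPOSITORY_ID

# Use no client repository prefix at all
rhui-installer --client-repo-prefix ""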
null
https://docs.redhat.com/en/documentation/red_hat_update_infrastructure/4/html/release_notes/assembly_4-4-release-notes_release-notes
Preface
Preface Red Hat OpenShift Data Foundation supports deployment on existing Red Hat OpenShift Container Platform (RHOCP) using Red Hat OpenStack Platform clusters. Note Both internal and external OpenShift Data Foundation clusters are supported on Red Hat OpenStack Platform. See Planning your deployment for more information about deployment requirements. To deploy OpenShift Data Foundation, start with the requirements in Preparing to deploy OpenShift Data Foundation chapter and then follow the appropriate deployment process based on your requirement: Internal mode Deploying OpenShift Data Foundation on Red Hat OpenStack Platform in internal mode Deploy standalone Multicloud Object Gateway component External mode Deploying OpenShift Data Foundation on Red Hat OpenStack Platform in external mode
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/preface-ocs-osp
Chapter 13. Setting throughput and storage limits on brokers
Chapter 13. Setting throughput and storage limits on brokers Important This feature is a technology preview and not intended for a production environment. For more information, see the release notes . This procedure describes how to set throughput and storage limits on brokers in your Kafka cluster. Enable the Strimzi Quotas plugin and configure limits using quota properties. The plugin provides storage utilization quotas and dynamic distribution of throughput limits. Storage quotas throttle Kafka producers based on disk storage utilization. Limits can be specified in bytes ( storage.per.volume.limit.min.available.bytes ) or percentage ( storage.per.volume.limit.min.available.ratio ) of available disk space, applying to each disk individually. When any broker in the cluster exceeds the configured disk threshold, clients are throttled to prevent disks from filling up too quickly and exceeding capacity. A total throughput limit is distributed dynamically across all clients. For example, if you set a 40 MBps producer byte-rate threshold, the distribution across two producers is not static. If one producer is using 10 MBps, the other can use up to 30 MBps. Specific users (clients) can be excluded from the restrictions. Note With the plugin, you see only aggregated quota metrics, not per-client metrics. Prerequisites Streams for Apache Kafka is installed on each host, and the configuration files are available. Procedure Edit the Kafka configuration properties file. Example plugin configuration # ... client.quota.callback.class=io.strimzi.kafka.quotas.StaticQuotaCallback 1 client.quota.callback.static.produce=1000000 2 client.quota.callback.static.fetch=1000000 3 client.quota.callback.static.storage.per.volume.limit.min.available.bytes=500000000000 4 client.quota.callback.static.storage.check-interval=5 5 client.quota.callback.static.kafka.admin.bootstrap.servers=localhost:9092 6 client.quota.callback.static.excluded.principal.name.list=User:my-user-1;User:my-user-2 7 # ... 1 Loads the plugin. 2 Sets the producer byte-rate threshold of 1 MBps. 3 Sets the consumer byte-rate threshold of 1 MBps. 4 Sets an available bytes limit of 500 GB. 5 Sets the interval in seconds between checks on storage to 5 seconds. The default is 60 seconds. Set this property to 0 to disable the check. 6 Kafka cluster bootstrap servers address. This property is required if storage.check-interval is >0. All configuration properties starting with the client.quota.callback.static.kafka.admin. prefix are passed to the Kafka Admin client configuration. 7 Excludes my-user-1 and my-user-2 from the restrictions. Each principal must be prefixed with User: . storage.per.volume.limit.min.available.bytes and storage.per.volume.limit.min.available.ratio are mutually exclusive. Only configure one of these parameters. Note The full list of supported configuration properties can be found in the plugin documentation . Start the Kafka broker with the default configuration file. ./bin/kafka-server-start.sh -daemon ./config/kraft/server.properties Verify that the Kafka broker is running. jcmd | grep Kafka
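As noted above, the storage limit can also be expressed as a ratio of available disk space instead of an absolute byte count, and the two properties are mutually exclusive. The following is a minimal sketch of the ratio-based alternative; the 0.1 value, which would throttle clients when less than 10% of a volume remains free, is an assumed example rather than a recommended setting.

# ...
client.quota.callback.class=io.strimzi.kafka.quotas.StaticQuotaCallback
client.quota.callback.static.produce=1000000
client.quota.callback.static.fetch=1000000
# Ratio-based storage limit; do not combine with storage.per.volume.limit.min.available.bytes
client.quota.callback.static.storage.per.volume.limit.min.available.ratio=0.1
client.quota.callback.static.storage.check-interval=5
client.quota.callback.static.kafka.admin.bootstrap.servers=localhost:9092
# ...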
[ "client.quota.callback.class=io.strimzi.kafka.quotas.StaticQuotaCallback 1 client.quota.callback.static.produce=1000000 2 client.quota.callback.static.fetch=1000000 3 client.quota.callback.static.storage.per.volume.limit.min.available.bytes=500000000000 4 client.quota.callback.static.storage.check-interval=5 5 client.quota.callback.static.kafka.admin.bootstrap.servers=localhost:9092 6 client.quota.callback.static.excluded.principal.name.list=User:my-user-1;User:my-user-2 7", "./bin/kafka-server-start.sh -daemon ./config/kraft/server.properties", "jcmd | grep Kafka" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/using_streams_for_apache_kafka_on_rhel_with_zookeeper/proc-setting-broker-limits-str
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant parts of the documentation. Click Submit Bug .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/deploying_openshift_data_foundation_in_external_mode/providing-feedback-on-red-hat-documentation_rhodf
Installing Ansible plug-ins for Red Hat Developer Hub
Installing Ansible plug-ins for Red Hat Developer Hub Red Hat Ansible Automation Platform 2.5 Install and configure Ansible plug-ins for Red Hat Developer Hub Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/installing_ansible_plug-ins_for_red_hat_developer_hub/index
Chapter 43. ContainerEnvVar schema reference
Chapter 43. ContainerEnvVar schema reference Used in: ContainerTemplate Property Description name The environment variable key. string value The environment variable value. string
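As an illustration of how these two properties are used, the following is a minimal sketch that assumes the ContainerTemplate is reached through the template.kafkaContainer section of a Kafka custom resource; the variable name and value are placeholders.

# Fragment of a Kafka custom resource (sketch)
spec:
  kafka:
    # ...
    template:
      kafkaContainer:
        env:
          - name: EXAMPLE_ENV_VAR   # ContainerEnvVar name
            value: example-value    # ContainerEnvVar value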
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-ContainerEnvVar-reference
Part VII. Administration: Managing Network Services
Part VII. Administration: Managing Network Services This part discusses how to manage the Domain Name Service (DNS) integrated with Identity Management and how to manage, organize, and access directories across multiple systems using Automount .
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/p.administration-guide-network-services
4.2. Model Classes and Types
4.2. Model Classes and Types Teiid Designer can be used to model a variety of classes of models. Each of these represents a conceptually different classification of models. Relational - Model data that can be represented in the form of table columns and records. Relational models can represent structures found in relational databases, spreadsheets, text files, or simple Web services. XML - Model that represents the basic structures of XML documents. These can be backed by XML Schemas. XML models represent nested structures, including recursive hierarchies. XML Schema - The W3C standard for formally defining the structure and constraints of XML documents, as well as the datatypes defining permissible values in XML documents. Web Services - Models that define Web service interfaces, operations, and operation input and output parameters (in the form of XML Schemas). Function - The Function metamodel supports the capability to provide user-defined functions, including binary source jars, to use in custom transformation SQL statements.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/model_classes_and_types
Chapter 10. Migrating your applications
Chapter 10. Migrating your applications You can migrate your applications by using the Migration Toolkit for Containers (MTC) web console or from the command line . You can use stage migration and cutover migration to migrate an application between clusters: Stage migration copies data from the source cluster to the target cluster without stopping the application. You can run a stage migration multiple times to reduce the duration of the cutover migration. Cutover migration stops the transactions on the source cluster and moves the resources to the target cluster. You can use state migration to migrate an application's state: State migration copies selected persistent volume claims (PVCs). You can use state migration to migrate a namespace within the same cluster. Most cluster-scoped resources are not yet handled by MTC. If your applications require cluster-scoped resources, you might have to create them manually on the target cluster. During migration, MTC preserves the following namespace annotations: openshift.io/sa.scc.mcs openshift.io/sa.scc.supplemental-groups openshift.io/sa.scc.uid-range These annotations preserve the UID range, ensuring that the containers retain their file system permissions on the target cluster. There is a risk that the migrated UIDs could duplicate UIDs within an existing or future namespace on the target cluster. 10.1. Migration prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. Direct image migration You must ensure that the secure OpenShift image registry of the source cluster is exposed. You must create a route to the exposed registry. Direct volume migration If your clusters use proxies, you must configure an Stunnel TCP proxy. Internal images If your application uses internal images from the openshift namespace, you must ensure that the required versions of the images are present on the target cluster. You can manually update an image stream tag in order to use a deprecated OpenShift Container Platform 3 image on an OpenShift Container Platform 4.17 cluster. Clusters The source cluster must be upgraded to the latest MTC z-stream release. The MTC version must be the same on all clusters. Network The clusters have unrestricted network access to each other and to the replication repository. If you copy the persistent volumes with move , the clusters must have unrestricted network access to the remote volumes. You must enable the following ports on an OpenShift Container Platform 3 cluster: 8443 (API server) 443 (routes) 53 (DNS) You must enable the following ports on an OpenShift Container Platform 4 cluster: 6443 (API server) 443 (routes) 53 (DNS) You must enable port 443 on the replication repository if you are using TLS. Persistent volumes (PVs) The PVs must be valid. The PVs must be bound to persistent volume claims. If you use snapshots to copy the PVs, the following additional prerequisites apply: The cloud provider must support snapshots. The PVs must have the same cloud provider. The PVs must be located in the same geographic region. The PVs must have the same storage class. Additional resources for migration prerequisites Manually exposing a secure registry for OpenShift Container Platform 3 Updating deprecated internal images 10.2. Migrating your applications by using the MTC web console You can configure clusters and a replication repository by using the MTC web console. Then, you can create and run a migration plan. 10.2.1. 
Launching the MTC web console You can launch the Migration Toolkit for Containers (MTC) web console in a browser. Prerequisites The MTC web console must have network access to the OpenShift Container Platform web console. The MTC web console must have network access to the OAuth authorization server. Procedure Log in to the OpenShift Container Platform cluster on which you have installed MTC. Obtain the MTC web console URL by entering the following command: USD oc get -n openshift-migration route/migration -o go-template='https://{{ .spec.host }}' The output resembles the following: https://migration-openshift-migration.apps.cluster.openshift.com . Launch a browser and navigate to the MTC web console. Note If you try to access the MTC web console immediately after installing the Migration Toolkit for Containers Operator, the console might not load because the Operator is still configuring the cluster. Wait a few minutes and retry. If you are using self-signed CA certificates, you will be prompted to accept the CA certificate of the source cluster API server. The web page guides you through the process of accepting the remaining certificates. Log in with your OpenShift Container Platform username and password . 10.2.2. Adding a cluster to the MTC web console You can add a cluster to the Migration Toolkit for Containers (MTC) web console. Prerequisites Cross-origin resource sharing must be configured on the source cluster. If you are using Azure snapshots to copy data: You must specify the Azure resource group name for the cluster. The clusters must be in the same Azure resource group. The clusters must be in the same geographic location. If you are using direct image migration, you must expose a route to the image registry of the source cluster. Procedure Log in to the cluster. Obtain the migration-controller service account token: USD oc create token migration-controller -n openshift-migration Example output eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtaWciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoibWlnLXRva2VuLWs4dDJyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Im1pZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE1YjFiYWMwLWMxYmYtMTFlOS05Y2NiLTAyOWRmODYwYjMwOCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDptaWc6bWlnIn0.xqeeAINK7UXpdRqAtOj70qhBJPeMwmgLomV9iFxr5RoqUgKchZRG2J2rkqmPm6vr7K-cm7ibD1IBpdQJCcVDuoHYsFgV4mp9vgOfn9osSDp2TGikwNz4Az95e81xnjVUmzh-NjDsEpw71DH92iHV_xt2sTwtzftS49LpPW2LjrV0evtNBP_t_RfskdArt5VSv25eORl7zScqfe1CiMkcVbf2UqACQjo3LbkpfN26HAioO2oH0ECPiRzT0Xyh-KwFutJLS9Xgghyw-LD9kPKcE_xbbJ9Y4Rqajh7WdPYuB0Jd9DPVrslmzK-F6cgHHYoZEv0SvLQi-PO0rpDrcjOEQQ Log in to the MTC web console. In the MTC web console, click Clusters . Click Add cluster . Fill in the following fields: Cluster name : The cluster name can contain lower-case letters ( a-z ) and numbers ( 0-9 ). It must not contain spaces or international characters. URL : Specify the API server URL, for example, https://<www.example.com>:8443 . Service account token : Paste the migration-controller service account token. Exposed route host to image registry : If you are using direct image migration, specify the exposed route to the image registry of the source cluster. 
To create the route, run the following command: For OpenShift Container Platform 3: USD oc create route passthrough --service=docker-registry --port=5000 -n default For OpenShift Container Platform 4: USD oc create route passthrough --service=image-registry --port=5000 -n openshift-image-registry Azure cluster : You must select this option if you use Azure snapshots to copy your data. Azure resource group : This field is displayed if Azure cluster is selected. Specify the Azure resource group. When an {OCP} cluster is created on Microsoft Azure, an Azure Resource Group is created to contain all resources associated with the cluster. In the Azure CLI, you can display all resource groups by issuing the following command: USD az group list ResourceGroups associated with OpenShift Container Platform clusters are tagged, where sample-rg-name is the value you would extract and supply to the UI: { "id": "/subscriptions/...//resourceGroups/sample-rg-name", "location": "centralus", "name": "...", "properties": { "provisioningState": "Succeeded" }, "tags": { "kubernetes.io_cluster.sample-ld57c": "owned", "openshift_creationDate": "2019-10-25T23:28:57.988208+00:00" }, "type": "Microsoft.Resources/resourceGroups" }, This information is also available from the Azure Portal in the Resource groups blade. Require SSL verification : Optional: Select this option to verify the Secure Socket Layer (SSL) connection to the cluster. CA bundle file : This field is displayed if Require SSL verification is selected. If you created a custom CA certificate bundle file for self-signed certificates, click Browse , select the CA bundle file, and upload it. Click Add cluster . The cluster appears in the Clusters list. 10.2.3. Adding a replication repository to the MTC web console You can add an object storage as a replication repository to the Migration Toolkit for Containers (MTC) web console. MTC supports the following storage providers: Amazon Web Services (AWS) S3 Multi-Cloud Object Gateway (MCG) Generic S3 object storage, for example, Minio or Ceph S3 Google Cloud Provider (GCP) Microsoft Azure Blob Prerequisites You must configure the object storage as a replication repository. Procedure In the MTC web console, click Replication repositories . Click Add repository . Select a Storage provider type and fill in the following fields: AWS for S3 providers, including AWS and MCG: Replication repository name : Specify the replication repository name in the MTC web console. S3 bucket name : Specify the name of the S3 bucket. S3 bucket region : Specify the S3 bucket region. Required for AWS S3. Optional for some S3 providers. Check the product documentation of your S3 provider for expected values. S3 endpoint : Specify the URL of the S3 service, not the bucket, for example, https://<s3-storage.apps.cluster.com> . Required for a generic S3 provider. You must use the https:// prefix. S3 provider access key : Specify the <AWS_SECRET_ACCESS_KEY> for AWS or the S3 provider access key for MCG and other S3 providers. S3 provider secret access key : Specify the <AWS_ACCESS_KEY_ID> for AWS or the S3 provider secret access key for MCG and other S3 providers. Require SSL verification : Clear this checkbox if you are using a generic S3 provider. If you created a custom CA certificate bundle for self-signed certificates, click Browse and browse to the Base64-encoded file. GCP : Replication repository name : Specify the replication repository name in the MTC web console. GCP bucket name : Specify the name of the GCP bucket. 
GCP credential JSON blob : Specify the string in the credentials-velero file. Azure : Replication repository name : Specify the replication repository name in the MTC web console. Azure resource group : Specify the resource group of the Azure Blob storage. Azure storage account name : Specify the Azure Blob storage account name. Azure credentials - INI file contents : Specify the string in the credentials-velero file. Click Add repository and wait for connection validation. Click Close . The new repository appears in the Replication repositories list. 10.2.4. Creating a migration plan in the MTC web console You can create a migration plan in the Migration Toolkit for Containers (MTC) web console. Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. You must ensure that the same MTC version is installed on all clusters. You must add the clusters and the replication repository to the MTC web console. If you want to use the move data copy method to migrate a persistent volume (PV), the source and target clusters must have uninterrupted network access to the remote volume. If you want to use direct image migration, you must specify the exposed route to the image registry of the source cluster. This can be done by using the MTC web console or by updating the MigCluster custom resource manifest. Procedure In the MTC web console, click Migration plans . Click Add migration plan . Enter the Plan name . The migration plan name must not exceed 253 lower-case alphanumeric characters ( a-z, 0-9 ) and must not contain spaces or underscores ( _ ). Select a Source cluster , a Target cluster , and a Repository . Click . Select the projects for migration. Optional: Click the edit icon beside a project to change the target namespace. Click . Select a Migration type for each PV: The Copy option copies the data from the PV of a source cluster to the replication repository and then restores the data on a newly created PV, with similar characteristics, in the target cluster. The Move option unmounts a remote volume, for example, NFS, from the source cluster, creates a PV resource on the target cluster pointing to the remote volume, and then mounts the remote volume on the target cluster. Applications running on the target cluster use the same remote volume that the source cluster was using. Click . Select a Copy method for each PV: Snapshot copy backs up and restores data using the cloud provider's snapshot functionality. It is significantly faster than Filesystem copy . Filesystem copy backs up the files on the source cluster and restores them on the target cluster. The file system copy method is required for direct volume migration. You can select Verify copy to verify data migrated with Filesystem copy . Data is verified by generating a checksum for each source file and checking the checksum after restoration. Data verification significantly reduces performance. Select a Target storage class . If you selected Filesystem copy , you can change the target storage class. Click . On the Migration options page, the Direct image migration option is selected if you specified an exposed image registry route for the source cluster. The Direct PV migration option is selected if you are migrating data with Filesystem copy . The direct migration options copy images and files directly from the source cluster to the target cluster. 
This option is much faster than copying images and files from the source cluster to the replication repository and then from the replication repository to the target cluster. Click . Optional: Click Add Hook to add a hook to the migration plan. A hook runs custom code. You can add up to four hooks to a single migration plan. Each hook runs during a different migration step. Enter the name of the hook to display in the web console. If the hook is an Ansible playbook, select Ansible playbook and click Browse to upload the playbook or paste the contents of the playbook in the field. Optional: Specify an Ansible runtime image if you are not using the default hook image. If the hook is not an Ansible playbook, select Custom container image and specify the image name and path. A custom container image can include Ansible playbooks. Select Source cluster or Target cluster . Enter the Service account name and the Service account namespace . Select the migration step for the hook: preBackup : Before the application workload is backed up on the source cluster postBackup : After the application workload is backed up on the source cluster preRestore : Before the application workload is restored on the target cluster postRestore : After the application workload is restored on the target cluster Click Add . Click Finish . The migration plan is displayed in the Migration plans list. Additional resources MTC file system copy method MTC snapshot copy method 10.2.5. Running a migration plan in the MTC web console You can migrate applications and data with the migration plan you created in the Migration Toolkit for Containers (MTC) web console. Note During migration, MTC sets the reclaim policy of migrated persistent volumes (PVs) to Retain on the target cluster. The Backup custom resource contains a PVOriginalReclaimPolicy annotation that indicates the original reclaim policy. You can manually restore the reclaim policy of the migrated PVs. Prerequisites The MTC web console must contain the following: Source cluster in a Ready state Target cluster in a Ready state Replication repository Valid migration plan Procedure Log in to the MTC web console and click Migration plans . Click the Options menu to a migration plan and select one of the following options under Migration : Stage copies data from the source cluster to the target cluster without stopping the application. Cutover stops the transactions on the source cluster and moves the resources to the target cluster. Optional: In the Cutover migration dialog, you can clear the Halt transactions on the source cluster during migration checkbox. State copies selected persistent volume claims (PVCs). Important Do not use state migration to migrate a namespace between clusters. Use stage or cutover migration instead. Select one or more PVCs in the State migration dialog and click Migrate . When the migration is complete, verify that the application migrated successfully in the OpenShift Container Platform web console: Click Home Projects . Click the migrated project to view its status. In the Routes section, click Location to verify that the application is functioning, if applicable. Click Workloads Pods to verify that the pods are running in the migrated namespace. Click Storage Persistent volumes to verify that the migrated persistent volumes are correctly provisioned.
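If you prefer the CLI to the web console for the final verification step, equivalent checks might look like the following sketch; <migrated-namespace> is a placeholder for the namespace you migrated.

# Confirm that pods, claims, and routes exist on the target cluster
oc get pods -n <migrated-namespace>
oc get pvc -n <migrated-namespace>
oc get routes -n <migrated-namespace>

# Check the reclaim policy that MTC set on the migrated persistent volumes
oc get pv -o custom-columns=NAME:.metadata.name,RECLAIM:.spec.persistentVolumeReclaimPolicy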
[ "oc get -n openshift-migration route/migration -o go-template='https://{{ .spec.host }}'", "oc create token migration-controller -n openshift-migration", "eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtaWciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoibWlnLXRva2VuLWs4dDJyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Im1pZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE1YjFiYWMwLWMxYmYtMTFlOS05Y2NiLTAyOWRmODYwYjMwOCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDptaWc6bWlnIn0.xqeeAINK7UXpdRqAtOj70qhBJPeMwmgLomV9iFxr5RoqUgKchZRG2J2rkqmPm6vr7K-cm7ibD1IBpdQJCcVDuoHYsFgV4mp9vgOfn9osSDp2TGikwNz4Az95e81xnjVUmzh-NjDsEpw71DH92iHV_xt2sTwtzftS49LpPW2LjrV0evtNBP_t_RfskdArt5VSv25eORl7zScqfe1CiMkcVbf2UqACQjo3LbkpfN26HAioO2oH0ECPiRzT0Xyh-KwFutJLS9Xgghyw-LD9kPKcE_xbbJ9Y4Rqajh7WdPYuB0Jd9DPVrslmzK-F6cgHHYoZEv0SvLQi-PO0rpDrcjOEQQ", "oc create route passthrough --service=docker-registry --port=5000 -n default", "oc create route passthrough --service=image-registry --port=5000 -n openshift-image-registry", "az group list", "{ \"id\": \"/subscriptions/...//resourceGroups/sample-rg-name\", \"location\": \"centralus\", \"name\": \"...\", \"properties\": { \"provisioningState\": \"Succeeded\" }, \"tags\": { \"kubernetes.io_cluster.sample-ld57c\": \"owned\", \"openshift_creationDate\": \"2019-10-25T23:28:57.988208+00:00\" }, \"type\": \"Microsoft.Resources/resourceGroups\" }," ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/migrating_from_version_3_to_4/migrating-applications-3-4
Chapter 4. Resolved issues and known issues
Chapter 4. Resolved issues and known issues 4.1. Resolved issues See Resolved issues for JBoss EAP XP 3.0.0 to view the list of issues that have been resolved for this release. 4.2. Known issues See Known issues for JBoss EAP XP 3.0.0 to view the list of known issues for this release.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/red_hat_jboss_eap_xp_3.0.0_release_notes/resolved_issues_and_known_issues
Chapter 9. Verifying connectivity to an endpoint
Chapter 9. Verifying connectivity to an endpoint The Cluster Network Operator (CNO) runs a controller, the connectivity check controller, that performs a connection health check between resources within your cluster. By reviewing the results of the health checks, you can diagnose connection problems or eliminate network connectivity as the cause of an issue that you are investigating. 9.1. Connection health checks performed To verify that cluster resources are reachable, a TCP connection is made to each of the following cluster API services: Kubernetes API server service Kubernetes API server endpoints OpenShift API server service OpenShift API server endpoints Load balancers To verify that services and service endpoints are reachable on every node in the cluster, a TCP connection is made to each of the following targets: Health check target service Health check target endpoints 9.2. Implementation of connection health checks The connectivity check controller orchestrates connection verification checks in your cluster. The results for the connection tests are stored in PodNetworkConnectivityCheck objects in the openshift-network-diagnostics namespace. Connection tests are performed every minute in parallel. The Cluster Network Operator (CNO) deploys several resources to the cluster to send and receive connectivity health checks: Health check source This program deploys in a single pod replica set managed by a Deployment object. The program consumes PodNetworkConnectivityCheck objects and connects to the spec.targetEndpoint specified in each object. Health check target A pod deployed as part of a daemon set on every node in the cluster. The pod listens for inbound health checks. The presence of this pod on every node allows for the testing of connectivity to each node. 9.3. PodNetworkConnectivityCheck object fields The PodNetworkConnectivityCheck object fields are described in the following tables. Table 9.1. PodNetworkConnectivityCheck object fields Field Type Description metadata.name string The name of the object in the following format: <source>-to-<target> . The destination described by <target> includes one of the following strings: load-balancer-api-external load-balancer-api-internal kubernetes-apiserver-endpoint kubernetes-apiserver-service-cluster network-check-target openshift-apiserver-endpoint openshift-apiserver-service-cluster metadata.namespace string The namespace that the object is associated with. This value is always openshift-network-diagnostics . spec.sourcePod string The name of the pod where the connection check originates, such as network-check-source-596b4c6566-rgh92 . spec.targetEndpoint string The target of the connection check, such as api.devcluster.example.com:6443 . spec.tlsClientCert object Configuration for the TLS certificate to use. spec.tlsClientCert.name string The name of the TLS certificate used, if any. The default value is an empty string. status object An object representing the condition of the connection test and logs of recent connection successes and failures. status.conditions array The latest status of the connection check and any statuses. status.failures array Connection test logs from unsuccessful attempts. status.outages array Connection test logs covering the time periods of any outages. status.successes array Connection test logs from successful attempts. The following table describes the fields for objects in the status.conditions array: Table 9.2.
status.conditions Field Type Description lastTransitionTime string The time that the condition of the connection transitioned from one status to another. message string The details about the last transition in a human readable format. reason string The last status of the transition in a machine readable format. status string The status of the condition. type string The type of the condition. The following table describes the fields for objects in the status.outages array: Table 9.3. status.outages Field Type Description end string The timestamp from when the connection failure is resolved. endLogs array Connection log entries, including the log entry related to the successful end of the outage. message string A summary of outage details in a human readable format. start string The timestamp from when the connection failure is first detected. startLogs array Connection log entries, including the original failure. Connection log fields The fields for a connection log entry are described in the following table. The object is used in the following fields: status.failures[] status.successes[] status.outages[].startLogs[] status.outages[].endLogs[] Table 9.4. Connection log object Field Type Description latency string Records the duration of the action. message string Provides the status in a human readable format. reason string Provides the reason for status in a machine readable format. The value is one of TCPConnect , TCPConnectError , DNSResolve , DNSError . success boolean Indicates if the log entry is a success or failure. time string The start time of the connection check. 9.4. Verifying network connectivity for an endpoint As a cluster administrator, you can verify the connectivity of an endpoint, such as an API server, load balancer, service, or pod. Prerequisites Install the OpenShift CLI ( oc ). Access to the cluster as a user with the cluster-admin role.
Procedure To list the current PodNetworkConnectivityCheck objects, enter the following command: $ oc get podnetworkconnectivitycheck -n openshift-network-diagnostics Example output NAME AGE network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-1 73m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-2 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-service-cluster 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-default-service-cluster 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-load-balancer-api-external 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-load-balancer-api-internal 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-master-1 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-master-2 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-worker-c-n8mbf 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-worker-d-4hnrz 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-service-cluster 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-1 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-2 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-service-cluster 75m View the connection test logs: From the output of the command, identify the endpoint that you want to review the connectivity logs for. To view the object, enter the following command: $ oc get podnetworkconnectivitycheck <name> \ -n openshift-network-diagnostics -o yaml where <name> specifies the name of the PodNetworkConnectivityCheck object. Example output apiVersion: controlplane.operator.openshift.io/v1alpha1 kind: PodNetworkConnectivityCheck metadata: name: network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 namespace: openshift-network-diagnostics ...
spec: sourcePod: network-check-source-7c88f6d9f-hmg2f targetEndpoint: 10.0.0.4:6443 tlsClientCert: name: "" status: conditions: - lastTransitionTime: "2021-01-13T20:11:34Z" message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnectSuccess status: "True" type: Reachable failures: - latency: 2.241775ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: "2021-01-13T20:10:34Z" - latency: 2.582129ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: "2021-01-13T20:09:34Z" - latency: 3.483578ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: "2021-01-13T20:08:34Z" outages: - end: "2021-01-13T20:11:34Z" endLogs: - latency: 2.032018ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T20:11:34Z" - latency: 2.241775ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: "2021-01-13T20:10:34Z" - latency: 2.582129ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: "2021-01-13T20:09:34Z" - latency: 3.483578ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: "2021-01-13T20:08:34Z" message: Connectivity restored after 2m59.999789186s start: "2021-01-13T20:08:34Z" startLogs: - latency: 3.483578ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: "2021-01-13T20:08:34Z" successes: - latency: 2.845865ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T21:14:34Z" - latency: 2.926345ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T21:13:34Z" - latency: 2.895796ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T21:12:34Z" - latency: 2.696844ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T21:11:34Z" - latency: 1.502064ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp 
connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T21:10:34Z" - latency: 1.388857ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T21:09:34Z" - latency: 1.906383ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T21:08:34Z" - latency: 2.089073ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T21:07:34Z" - latency: 2.156994ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T21:06:34Z" - latency: 1.777043ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T21:05:34Z"
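The YAML output above can be long. As a minimal sketch, and assuming the field layout described in the tables earlier in this chapter, you can use JSONPath expressions to print only the reachability condition or the recorded outages for a given check; <name> is a placeholder for the object name.
$ oc get podnetworkconnectivitycheck <name> -n openshift-network-diagnostics -o jsonpath='{.status.conditions[?(@.type=="Reachable")].status}'
$ oc get podnetworkconnectivitycheck <name> -n openshift-network-diagnostics -o jsonpath='{range .status.outages[*]}{.start}{" -> "}{.end}{"\n"}{end}'
The first command prints True or False for the Reachable condition, and the second prints the start and end timestamps of each recorded outage.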
[ "oc get podnetworkconnectivitycheck -n openshift-network-diagnostics", "NAME AGE network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-1 73m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-2 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-service-cluster 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-default-service-cluster 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-load-balancer-api-external 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-load-balancer-api-internal 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-master-1 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-master-2 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-worker-c-n8mbf 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-worker-d-4hnrz 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-service-cluster 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-1 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-2 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-service-cluster 75m", "oc get podnetworkconnectivitycheck <name> -n openshift-network-diagnostics -o yaml", "apiVersion: controlplane.operator.openshift.io/v1alpha1 kind: PodNetworkConnectivityCheck metadata: name: network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 namespace: openshift-network-diagnostics spec: sourcePod: network-check-source-7c88f6d9f-hmg2f targetEndpoint: 10.0.0.4:6443 tlsClientCert: name: \"\" status: conditions: - lastTransitionTime: \"2021-01-13T20:11:34Z\" message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnectSuccess status: \"True\" type: Reachable failures: - latency: 2.241775ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:10:34Z\" - latency: 2.582129ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:09:34Z\" - 
latency: 3.483578ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:08:34Z\" outages: - end: \"2021-01-13T20:11:34Z\" endLogs: - latency: 2.032018ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T20:11:34Z\" - latency: 2.241775ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:10:34Z\" - latency: 2.582129ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:09:34Z\" - latency: 3.483578ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:08:34Z\" message: Connectivity restored after 2m59.999789186s start: \"2021-01-13T20:08:34Z\" startLogs: - latency: 3.483578ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:08:34Z\" successes: - latency: 2.845865ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:14:34Z\" - latency: 2.926345ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:13:34Z\" - latency: 2.895796ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:12:34Z\" - latency: 2.696844ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:11:34Z\" - latency: 1.502064ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:10:34Z\" - latency: 1.388857ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:09:34Z\" - latency: 1.906383ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:08:34Z\" - latency: 2.089073ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:07:34Z\" - latency: 2.156994ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true 
time: \"2021-01-13T21:06:34Z\" - latency: 1.777043ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:05:34Z\"" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/networking/verifying-connectivity-endpoint
3.9. RHEA-2011:1731 - new package: perl-Test-Inter
3.9. RHEA-2011:1731 - new package: perl-Test-Inter A new perl-Test-Inter package is now available for Red Hat Enterprise Linux 6. The Test::Inter module provides a framework for writing interactive test scripts in Perl. It is inspired by the Test::More framework. This enhancement update adds the perl-Test-Inter package to Red Hat Enterprise Linux 6. (BZ# 705752 ) All users who require perl-Test-Inter should install this new package.
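As a brief illustration of the last step, on a Red Hat Enterprise Linux 6 system with the appropriate repositories enabled, the package can be installed with yum as the root user:
# yum install perl-Test-Inter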
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/perl-test-inter
11.12. Procedure Options
11.12. Procedure Options You can use the following options when creating procedures. Any other properties defined are considered extension metadata. Property Data Type or Allowed Values Description UUID string Unique Identifier NAMEINSOURCE string In the case of source ANNOTATION string Description of the procedure UPDATECOUNT int If this procedure updates the underlying sources, this is the update count; when the update count is >1, the XA protocol is enforced for execution
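For illustration only, the following DDL sketch shows how these options might be attached to a source procedure definition; the procedure name, parameters, and option values are hypothetical, and the exact DDL syntax should be verified against the DDL chapters of this reference.
CREATE FOREIGN PROCEDURE updateCustomerStatus(IN custId integer, IN status string)
OPTIONS (NAMEINSOURCE 'UPDATE_CUST_STATUS', ANNOTATION 'Updates the status flag for a customer', UPDATECOUNT 1, UUID 'mmuuid:0001-aaaa-bbbb');
Because UPDATECOUNT is set to 1, the procedure is treated as one that performs a single update against the underlying source.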
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/procedure_options
Chapter 3. GSettings and dconf
Chapter 3. GSettings and dconf One of the major changes in Red Hat Enterprise Linux 7 is the transition from GConf (for storing user preferences) to the combination of the GSettings high-level configuration system and the dconf back end. GConf As mentioned above, the GConf configuration system has been replaced by two systems: the GSettings API, and the dconf back end, which serves as a low-level configuration system that stores settings in a single compact binary format. Both the gsettings command-line tool and the dconf utility are used to view and change user settings. The gsettings utility does so directly in the terminal, while the dconf utility uses the dconf-editor GUI for editing a configuration database. See Chapter 9, Configuring Desktop with GSettings and dconf for more information on dconf-editor and the gsettings utility. gconftool The gconftool-2 tool has been replaced by gsettings and dconf . Likewise, gconf-editor has been replaced by dconf-editor . Overriding The concept of keyfiles has been introduced in Red Hat Enterprise Linux 7: the dconf utility allows the system administrator to override the default settings by directly installing defaults overrides . For example, setting the default background for all users is now done by using a dconf override placed in a keyfile in the keyfile directory, such as /etc/dconf/db/local.d/ (see the example at the end of this chapter). To learn more about default values and overriding, see Section 9.5, "Configuring Custom Default Values" . Locking the Settings The dconf system now allows individual settings or entire settings subpaths to be locked down to prevent user customization. For more information on how to lock settings, see Section 9.5.1, "Locking Down Specific Settings" . NFS and dconf Using the dconf utility on home directories shared over NFS requires additional configuration. See Section 9.7, "Storing User Settings Over NFS" for information on this topic. Getting More Information See Chapter 9, Configuring Desktop with GSettings and dconf for more information on using GSettings and dconf to configure user settings.
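As a short example of the override mechanism described above (the file name and image path are illustrative and not mandated by this guide), a machine-wide default background can be set by placing a keyfile such as /etc/dconf/db/local.d/00-background in the keyfile directory with the following contents:
[org/gnome/desktop/background]
picture-uri='file:///usr/share/backgrounds/default.png'
After adding or changing keyfiles, run dconf update as root to rebuild the system databases. You can then verify the effective default with gsettings get org.gnome.desktop.background picture-uri .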
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/desktop_migration_and_administration_guide/gsettings-dconf
Chapter 8. Configuring distributed caches
Chapter 8. Configuring distributed caches Red Hat build of Keycloak is designed for high availability and multi-node clustered setups. The current distributed cache implementation is built on top of Infinispan , a high-performance, distributable in-memory data grid. 8.1. Enable distributed caching When you start Red Hat build of Keycloak in production mode, by using the start command, caching is enabled and all Red Hat build of Keycloak nodes in your network are discovered. By default, caches are using a UDP transport stack so that nodes are discovered using IP multicast transport based on UDP. For most production environments, there are better discovery alternatives to UDP available. Red Hat build of Keycloak allows you to either choose from a set of pre-defined default transport stacks, or to define your own custom stack, as you will see later in this chapter. To explicitly enable distributed infinispan caching, enter this command: bin/kc.[sh|bat] build --cache=ispn When you start Red Hat build of Keycloak in development mode, by using the start-dev command, Red Hat build of Keycloak uses only local caches and distributed caches are completely disabled by implicitly setting the --cache=local option. The local cache mode is intended only for development and testing purposes. 8.2. Configuring caches Red Hat build of Keycloak provides a cache configuration file with sensible defaults located at conf/cache-ispn.xml . The cache configuration is a regular Infinispan configuration file . The following table gives an overview of the specific caches Red Hat build of Keycloak uses. You configure these caches in conf/cache-ispn.xml : Cache name Cache Type Description realms Local Cache persisted realm data users Local Cache persisted user data authorization Local Cache persisted authorization data keys Local Cache external public keys work Replicated Propagate invalidation messages across nodes authenticationSessions Distributed Caches authentication sessions, created/destroyed/expired during the authentication process sessions Distributed Caches user sessions, created upon successful authentication and destroyed during logout, token revocation, or due to expiration clientSessions Distributed Caches client sessions, created upon successful authentication to a specific client and destroyed during logout, token revocation, or due to expiration offlineSessions Distributed Caches offline user sessions, created upon successful authentication and destroyed during logout, token revocation, or due to expiration offlineClientSessions Distributed Caches client sessions, created upon successful authentication to a specific client and destroyed during logout, token revocation, or due to expiration loginFailures Distributed keep track of failed logins, fraud detection actionTokens Distributed Caches action Tokens 8.2.1. Cache types and defaults Local caches Red Hat build of Keycloak caches persistent data locally to avoid unnecessary round-trips to the database. The following data is kept local to each node in the cluster using local caches: realms and related data like clients, roles, and groups. users and related data like granted roles and group memberships. authorization and related data like resources, permissions, and policies. keys Local caches for realms, users, and authorization are configured to hold up to 10,000 entries per default. The local key cache can hold up to 1,000 entries per default and defaults to expire every one hour. 
Therefore, keys are forced to be periodically downloaded from external clients or identity providers. In order to achieve an optimal runtime and avoid additional round-trips to the database, you should consider looking at the configuration for each cache to make sure the maximum number of entries is aligned with the size of your database. The more entries you can cache, the less often the server needs to fetch data from the database. You should evaluate the trade-offs between memory utilization and performance. Invalidation of local caches Local caching improves performance, but adds a challenge in multi-node setups. When one Red Hat build of Keycloak node updates data in the shared database, all other nodes need to be aware of it, so they invalidate that data from their caches. The work cache is a replicated cache and used for sending these invalidation messages. The entries/messages in this cache are very short-lived, and you should not expect this cache to grow in size over time. Authentication sessions Authentication sessions are created whenever a user tries to authenticate. They are automatically destroyed once the authentication process completes or due to reaching their expiration time. The authenticationSessions distributed cache is used to store authentication sessions and any other data associated with it during the authentication process. By relying on a distributable cache, authentication sessions are available to any node in the cluster so that users can be redirected to any node without losing their authentication state. However, production-ready deployments should always consider session affinity and favor redirecting users to the node where their sessions were initially created. By doing that, you are going to avoid unnecessary state transfer between nodes and improve CPU, memory, and network utilization. User sessions Once the user is authenticated, a user session is created. The user session tracks your active users and their state so that they can seamlessly authenticate to any application without being asked for their credentials again. For each application the user authenticates with, a client session is created too, so that the server can track the applications the user is authenticated with and their state on a per-application basis. User and client sessions are automatically destroyed whenever the user performs a logout, the client performs a token revocation, or due to reaching their expiration time. The following caches are used to store both user and client sessions: sessions clientSessions By relying on a distributable cache, user and client sessions are available to any node in the cluster so that users can be redirected to any node without losing their state. However, production-ready deployments should always consider session affinity and favor redirecting users to the node where their sessions were initially created. By doing that, you are going to avoid unnecessary state transfer between nodes and improve CPU, memory, and network utilization. As an OpenID Connect Provider, the server is also capable of authenticating users and issuing offline tokens. Similarly to regular user and client sessions, when an offline token is issued by the server upon successful authentication, the server also creates user and client sessions. However, due to the nature of offline tokens, offline sessions are handled differently as they are long-lived and should survive a complete cluster shutdown. Because of that, they are also persisted to the database.
The following caches are used to store offline sessions: offlineSessions offlineClientSessions Upon a cluster restart, offline sessions are lazily loaded from the database and kept in a shared cache using the two caches above. Password brute force detection The loginFailures distributed cache is used to track data about failed login attempts. This cache is needed for the Brute Force Protection feature to work in a multi-node Red Hat build of Keycloak setup. Action tokens Action tokens are used for scenarios when a user needs to confirm an action asynchronously, for example in the emails sent by the forgot password flow. The actionTokens distributed cache is used to track metadata about action tokens. 8.2.2. Configuring caches for availability Distributed caches replicate cache entries on a subset of nodes in a cluster and assign entries to fixed owner nodes. Each distributed cache has two owners by default, which means that two nodes have a copy of the specific cache entries. Non-owner nodes query the owners of a specific cache to obtain data. When both owner nodes are offline, all data is lost. This situation usually leads to users being logged out at the next request and having to log in again. The default number of owners is enough to survive 1 node (owner) failure in a cluster setup with at least three nodes. You are free to change the number of owners accordingly to better fit your availability requirements. To change the number of owners, open conf/cache-ispn.xml and change the value for owners=<value> for the distributed caches to your desired value, as shown in the example at the end of this chapter. 8.2.3. Specify your own cache configuration file To specify your own cache configuration file, enter this command: bin/kc.[sh|bat] build --cache-config-file=my-cache-file.xml The configuration file is relative to the conf/ directory. 8.3. Transport stacks Transport stacks ensure that distributed cache nodes in a cluster communicate in a reliable fashion. Red Hat build of Keycloak supports a wide range of transport stacks: tcp udp kubernetes ec2 azure google To apply a specific cache stack, enter this command: bin/kc.[sh|bat] build --cache-stack=<stack> The default stack is set to UDP when distributed caches are enabled. 8.3.1. Available transport stacks The following table shows transport stacks that are available without any further configuration than using the --cache-stack build option: Stack name Transport protocol Discovery tcp TCP MPING (uses UDP multicast). udp UDP UDP multicast The following table shows transport stacks that are available using the --cache-stack build option and a minimum configuration: Stack name Transport protocol Discovery kubernetes TCP DNS_PING (requires -Djgroups.dns.query=<headless-service-FQDN> to be added to JAVA_OPTS or JAVA_OPTS_APPEND environment variable). 8.3.2. Additional transport stacks The following table shows transport stacks that are supported by Red Hat build of Keycloak, but need some extra steps to work. Note that none of these stacks are Kubernetes / OpenShift stacks, so you do not need to enable the "google" stack if you want to run Red Hat build of Keycloak on top of the Google Kubernetes Engine. In that case, use the kubernetes stack. Instead, when you have a distributed cache setup running on AWS EC2 instances, you would need to set the stack to ec2 , because ec2 does not support a default discovery mechanism such as UDP .
Stack name Transport protocol Discovery ec2 TCP NATIVE_S3_PING google TCP GOOGLE_PING2 azure TCP AZURE_PING Cloud vendor specific stacks have additional dependencies for Red Hat build of Keycloak. For more information and links to repositories with these dependencies, see the Infinispan documentation . To provide the dependencies to Red Hat build of Keycloak, put the respective JAR in the providers directory and build Keycloak by entering this command: bin/kc.[sh|bat] build --cache-stack=<ec2|google|azure> 8.3.3. Custom transport stacks If none of the available transport stacks are enough for your deployment, you are able to change your cache configuration file and define your own transport stack. For more details, see Using inline JGroups stacks . defining a custom transport stack By default, the value set to the cache-stack option has precedence over the transport stack you define in the cache configuration file. If you are defining a custom stack, make sure the cache-stack option is not used for the custom changes to take effect. 8.4. Securing cache communication The current Infinispan cache implementation should be secured by various security measures such as RBAC, ACLs, and Transport stack encryption. For more information about securing cache communication, see the Infinispan security guide . 8.5. Exposing metrics from caches By default, metrics from caches are not automatically exposed when the metrics are enabled. For more details about how to enable metrics, see Enabling Red Hat build of Keycloak Metrics . To enable global metrics for all caches within the cache-container , you need to change your cache configuration file (e.g.: conf/cache-ispn.xml ) to enable statistics at the cache-container level as follows: enabling metrics for all caches Similarly, you can enable metrics individually for each cache by enabling statistics as follows: enabling metrics for a specific cache 8.6. Relevant options Value cache 🛠 Defines the cache mechanism for high-availability. By default in production mode, a ispn cache is used to create a cluster between multiple server nodes. By default in development mode, a local cache disables clustering and is intended for development and testing purposes. CLI: --cache Env: KC_CACHE ispn (default), local cache-config-file 🛠 Defines the file from which cache configuration should be loaded from. The configuration file is relative to the conf/ directory. CLI: --cache-config-file Env: KC_CACHE_CONFIG_FILE cache-stack 🛠 Define the default stack to use for cluster communication and node discovery. This option only takes effect if cache is set to ispn . Default: udp. CLI: --cache-stack Env: KC_CACHE_STACK tcp , udp , kubernetes , ec2 , azure , google
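As a minimal sketch of the owners change described in the availability section of this chapter, the following fragment shows the relevant part of conf/cache-ispn.xml with the number of owners for two of the distributed caches raised from the default of two to three; other attributes and child elements from the default file are omitted here, and you would apply the same change to any other distributed cache that needs the higher replica count.
<cache-container name="keycloak">
    <distributed-cache name="sessions" owners="3"/>
    <distributed-cache name="clientSessions" owners="3"/>
</cache-container>
Keep in mind that more owners means more copies of each entry, and therefore higher memory and network usage per write.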
[ "bin/kc.[sh|bat] build --cache=ispn", "bin/kc.[sh|bat] build --cache-config-file=my-cache-file.xml", "bin/kc.[sh|bat] build --cache-stack=<stack>", "bin/kc.[sh|bat] build --cache-stack=<ec2|google|azure>", "<jgroups> <stack name=\"my-encrypt-udp\" extends=\"udp\"> <SSL_KEY_EXCHANGE keystore_name=\"server.jks\" keystore_password=\"password\" stack.combine=\"INSERT_AFTER\" stack.position=\"VERIFY_SUSPECT2\"/> <ASYM_ENCRYPT asym_keylength=\"2048\" asym_algorithm=\"RSA\" change_key_on_coord_leave = \"false\" change_key_on_leave = \"false\" use_external_key_exchange = \"true\" stack.combine=\"INSERT_BEFORE\" stack.position=\"pbcast.NAKACK2\"/> </stack> </jgroups> <cache-container name=\"keycloak\"> <transport lock-timeout=\"60000\" stack=\"my-encrypt-udp\"/> </cache-container>", "<cache-container name=\"keycloak\" statistics=\"true\"> </cache-container>", "<local-cache name=\"realms\" statistics=\"true\"> </local-cache>" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/22.0/html/server_guide/caching-
Chapter 4. HorizontalPodAutoscaler [autoscaling/v2]
Chapter 4. HorizontalPodAutoscaler [autoscaling/v2] Description HorizontalPodAutoscaler is the configuration for a horizontal pod autoscaler, which automatically manages the replica count of any resource implementing the scale subresource based on the metrics specified. Type object 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object HorizontalPodAutoscalerSpec describes the desired functionality of the HorizontalPodAutoscaler. status object HorizontalPodAutoscalerStatus describes the current status of a horizontal pod autoscaler. 4.1.1. .spec Description HorizontalPodAutoscalerSpec describes the desired functionality of the HorizontalPodAutoscaler. Type object Required scaleTargetRef maxReplicas Property Type Description behavior object HorizontalPodAutoscalerBehavior configures the scaling behavior of the target in both Up and Down directions (scaleUp and scaleDown fields respectively). maxReplicas integer maxReplicas is the upper limit for the number of replicas to which the autoscaler can scale up. It cannot be less that minReplicas. metrics array metrics contains the specifications for which to use to calculate the desired replica count (the maximum replica count across all metrics will be used). The desired replica count is calculated multiplying the ratio between the target value and the current value by the current number of pods. Ergo, metrics used must decrease as the pod count is increased, and vice-versa. See the individual metric source types for more information about how each type of metric must respond. If not set, the default metric will be set to 80% average CPU utilization. metrics[] object MetricSpec specifies how to scale based on a single metric (only type and one other matching field should be set at once). minReplicas integer minReplicas is the lower limit for the number of replicas to which the autoscaler can scale down. It defaults to 1 pod. minReplicas is allowed to be 0 if the alpha feature gate HPAScaleToZero is enabled and at least one Object or External metric is configured. Scaling is active as long as at least one metric value is available. scaleTargetRef object CrossVersionObjectReference contains enough information to let you identify the referred resource. 4.1.2. .spec.behavior Description HorizontalPodAutoscalerBehavior configures the scaling behavior of the target in both Up and Down directions (scaleUp and scaleDown fields respectively). Type object Property Type Description scaleDown object HPAScalingRules configures the scaling behavior for one direction. These Rules are applied after calculating DesiredReplicas from metrics for the HPA. They can limit the scaling velocity by specifying scaling policies. 
They can prevent flapping by specifying the stabilization window, so that the number of replicas is not set instantly, instead, the safest value from the stabilization window is chosen. scaleUp object HPAScalingRules configures the scaling behavior for one direction. These Rules are applied after calculating DesiredReplicas from metrics for the HPA. They can limit the scaling velocity by specifying scaling policies. They can prevent flapping by specifying the stabilization window, so that the number of replicas is not set instantly, instead, the safest value from the stabilization window is chosen. 4.1.3. .spec.behavior.scaleDown Description HPAScalingRules configures the scaling behavior for one direction. These Rules are applied after calculating DesiredReplicas from metrics for the HPA. They can limit the scaling velocity by specifying scaling policies. They can prevent flapping by specifying the stabilization window, so that the number of replicas is not set instantly, instead, the safest value from the stabilization window is chosen. Type object Property Type Description policies array policies is a list of potential scaling polices which can be used during scaling. At least one policy must be specified, otherwise the HPAScalingRules will be discarded as invalid policies[] object HPAScalingPolicy is a single policy which must hold true for a specified past interval. selectPolicy string selectPolicy is used to specify which policy should be used. If not set, the default value Max is used. stabilizationWindowSeconds integer stabilizationWindowSeconds is the number of seconds for which past recommendations should be considered while scaling up or scaling down. StabilizationWindowSeconds must be greater than or equal to zero and less than or equal to 3600 (one hour). If not set, use the default values: - For scale up: 0 (i.e. no stabilization is done). - For scale down: 300 (i.e. the stabilization window is 300 seconds long). 4.1.4. .spec.behavior.scaleDown.policies Description policies is a list of potential scaling polices which can be used during scaling. At least one policy must be specified, otherwise the HPAScalingRules will be discarded as invalid Type array 4.1.5. .spec.behavior.scaleDown.policies[] Description HPAScalingPolicy is a single policy which must hold true for a specified past interval. Type object Required type value periodSeconds Property Type Description periodSeconds integer periodSeconds specifies the window of time for which the policy should hold true. PeriodSeconds must be greater than zero and less than or equal to 1800 (30 min). type string type is used to specify the scaling policy. value integer value contains the amount of change which is permitted by the policy. It must be greater than zero 4.1.6. .spec.behavior.scaleUp Description HPAScalingRules configures the scaling behavior for one direction. These Rules are applied after calculating DesiredReplicas from metrics for the HPA. They can limit the scaling velocity by specifying scaling policies. They can prevent flapping by specifying the stabilization window, so that the number of replicas is not set instantly, instead, the safest value from the stabilization window is chosen. Type object Property Type Description policies array policies is a list of potential scaling polices which can be used during scaling. 
At least one policy must be specified, otherwise the HPAScalingRules will be discarded as invalid policies[] object HPAScalingPolicy is a single policy which must hold true for a specified past interval. selectPolicy string selectPolicy is used to specify which policy should be used. If not set, the default value Max is used. stabilizationWindowSeconds integer stabilizationWindowSeconds is the number of seconds for which past recommendations should be considered while scaling up or scaling down. StabilizationWindowSeconds must be greater than or equal to zero and less than or equal to 3600 (one hour). If not set, use the default values: - For scale up: 0 (i.e. no stabilization is done). - For scale down: 300 (i.e. the stabilization window is 300 seconds long). 4.1.7. .spec.behavior.scaleUp.policies Description policies is a list of potential scaling polices which can be used during scaling. At least one policy must be specified, otherwise the HPAScalingRules will be discarded as invalid Type array 4.1.8. .spec.behavior.scaleUp.policies[] Description HPAScalingPolicy is a single policy which must hold true for a specified past interval. Type object Required type value periodSeconds Property Type Description periodSeconds integer periodSeconds specifies the window of time for which the policy should hold true. PeriodSeconds must be greater than zero and less than or equal to 1800 (30 min). type string type is used to specify the scaling policy. value integer value contains the amount of change which is permitted by the policy. It must be greater than zero 4.1.9. .spec.metrics Description metrics contains the specifications for which to use to calculate the desired replica count (the maximum replica count across all metrics will be used). The desired replica count is calculated multiplying the ratio between the target value and the current value by the current number of pods. Ergo, metrics used must decrease as the pod count is increased, and vice-versa. See the individual metric source types for more information about how each type of metric must respond. If not set, the default metric will be set to 80% average CPU utilization. Type array 4.1.10. .spec.metrics[] Description MetricSpec specifies how to scale based on a single metric (only type and one other matching field should be set at once). Type object Required type Property Type Description containerResource object ContainerResourceMetricSource indicates how to scale on a resource metric known to Kubernetes, as specified in requests and limits, describing each pod in the current scale target (e.g. CPU or memory). The values will be averaged together before being compared to the target. Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the "pods" source. Only one "target" type should be set. external object ExternalMetricSource indicates how to scale on a metric not associated with any Kubernetes object (for example length of queue in cloud messaging service, or QPS from loadbalancer running outside of cluster). object object ObjectMetricSource indicates how to scale on a metric describing a kubernetes object (for example, hits-per-second on an Ingress object). pods object PodsMetricSource indicates how to scale on a metric describing each pod in the current scale target (for example, transactions-processed-per-second). The values will be averaged together before being compared to the target value. 
resource object ResourceMetricSource indicates how to scale on a resource metric known to Kubernetes, as specified in requests and limits, describing each pod in the current scale target (e.g. CPU or memory). The values will be averaged together before being compared to the target. Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the "pods" source. Only one "target" type should be set. type string type is the type of metric source. It should be one of "ContainerResource", "External", "Object", "Pods" or "Resource", each mapping to a matching field in the object. Note: "ContainerResource" type is available on when the feature-gate HPAContainerMetrics is enabled 4.1.11. .spec.metrics[].containerResource Description ContainerResourceMetricSource indicates how to scale on a resource metric known to Kubernetes, as specified in requests and limits, describing each pod in the current scale target (e.g. CPU or memory). The values will be averaged together before being compared to the target. Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the "pods" source. Only one "target" type should be set. Type object Required name target container Property Type Description container string container is the name of the container in the pods of the scaling target name string name is the name of the resource in question. target object MetricTarget defines the target value, average value, or average utilization of a specific metric 4.1.12. .spec.metrics[].containerResource.target Description MetricTarget defines the target value, average value, or average utilization of a specific metric Type object Required type Property Type Description averageUtilization integer averageUtilization is the target value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. Currently only valid for Resource metric source type averageValue Quantity averageValue is the target value of the average of the metric across all relevant pods (as a quantity) type string type represents whether the metric type is Utilization, Value, or AverageValue value Quantity value is the target value of the metric (as a quantity). 4.1.13. .spec.metrics[].external Description ExternalMetricSource indicates how to scale on a metric not associated with any Kubernetes object (for example length of queue in cloud messaging service, or QPS from loadbalancer running outside of cluster). Type object Required metric target Property Type Description metric object MetricIdentifier defines the name and optionally selector for a metric target object MetricTarget defines the target value, average value, or average utilization of a specific metric 4.1.14. .spec.metrics[].external.metric Description MetricIdentifier defines the name and optionally selector for a metric Type object Required name Property Type Description name string name is the name of the given metric selector LabelSelector selector is the string-encoded form of a standard kubernetes label selector for the given metric When set, it is passed as an additional parameter to the metrics server for more specific metrics scoping. When unset, just the metricName will be used to gather metrics. 4.1.15. 
.spec.metrics[].external.target Description MetricTarget defines the target value, average value, or average utilization of a specific metric Type object Required type Property Type Description averageUtilization integer averageUtilization is the target value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. Currently only valid for Resource metric source type averageValue Quantity averageValue is the target value of the average of the metric across all relevant pods (as a quantity) type string type represents whether the metric type is Utilization, Value, or AverageValue value Quantity value is the target value of the metric (as a quantity). 4.1.16. .spec.metrics[].object Description ObjectMetricSource indicates how to scale on a metric describing a kubernetes object (for example, hits-per-second on an Ingress object). Type object Required describedObject target metric Property Type Description describedObject object CrossVersionObjectReference contains enough information to let you identify the referred resource. metric object MetricIdentifier defines the name and optionally selector for a metric target object MetricTarget defines the target value, average value, or average utilization of a specific metric 4.1.17. .spec.metrics[].object.describedObject Description CrossVersionObjectReference contains enough information to let you identify the referred resource. Type object Required kind name Property Type Description apiVersion string apiVersion is the API version of the referent kind string kind is the kind of the referent; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string name is the name of the referent; More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 4.1.18. .spec.metrics[].object.metric Description MetricIdentifier defines the name and optionally selector for a metric Type object Required name Property Type Description name string name is the name of the given metric selector LabelSelector selector is the string-encoded form of a standard kubernetes label selector for the given metric When set, it is passed as an additional parameter to the metrics server for more specific metrics scoping. When unset, just the metricName will be used to gather metrics. 4.1.19. .spec.metrics[].object.target Description MetricTarget defines the target value, average value, or average utilization of a specific metric Type object Required type Property Type Description averageUtilization integer averageUtilization is the target value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. Currently only valid for Resource metric source type averageValue Quantity averageValue is the target value of the average of the metric across all relevant pods (as a quantity) type string type represents whether the metric type is Utilization, Value, or AverageValue value Quantity value is the target value of the metric (as a quantity). 4.1.20. .spec.metrics[].pods Description PodsMetricSource indicates how to scale on a metric describing each pod in the current scale target (for example, transactions-processed-per-second). The values will be averaged together before being compared to the target value. 
Type object Required metric target Property Type Description metric object MetricIdentifier defines the name and optionally selector for a metric target object MetricTarget defines the target value, average value, or average utilization of a specific metric 4.1.21. .spec.metrics[].pods.metric Description MetricIdentifier defines the name and optionally selector for a metric Type object Required name Property Type Description name string name is the name of the given metric selector LabelSelector selector is the string-encoded form of a standard kubernetes label selector for the given metric When set, it is passed as an additional parameter to the metrics server for more specific metrics scoping. When unset, just the metricName will be used to gather metrics. 4.1.22. .spec.metrics[].pods.target Description MetricTarget defines the target value, average value, or average utilization of a specific metric Type object Required type Property Type Description averageUtilization integer averageUtilization is the target value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. Currently only valid for Resource metric source type averageValue Quantity averageValue is the target value of the average of the metric across all relevant pods (as a quantity) type string type represents whether the metric type is Utilization, Value, or AverageValue value Quantity value is the target value of the metric (as a quantity). 4.1.23. .spec.metrics[].resource Description ResourceMetricSource indicates how to scale on a resource metric known to Kubernetes, as specified in requests and limits, describing each pod in the current scale target (e.g. CPU or memory). The values will be averaged together before being compared to the target. Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the "pods" source. Only one "target" type should be set. Type object Required name target Property Type Description name string name is the name of the resource in question. target object MetricTarget defines the target value, average value, or average utilization of a specific metric 4.1.24. .spec.metrics[].resource.target Description MetricTarget defines the target value, average value, or average utilization of a specific metric Type object Required type Property Type Description averageUtilization integer averageUtilization is the target value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. Currently only valid for Resource metric source type averageValue Quantity averageValue is the target value of the average of the metric across all relevant pods (as a quantity) type string type represents whether the metric type is Utilization, Value, or AverageValue value Quantity value is the target value of the metric (as a quantity). 4.1.25. .spec.scaleTargetRef Description CrossVersionObjectReference contains enough information to let you identify the referred resource. 
Type object Required kind name Property Type Description apiVersion string apiVersion is the API version of the referent kind string kind is the kind of the referent; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string name is the name of the referent; More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 4.1.26. .status Description HorizontalPodAutoscalerStatus describes the current status of a horizontal pod autoscaler. Type object Required desiredReplicas Property Type Description conditions array conditions is the set of conditions required for this autoscaler to scale its target, and indicates whether or not those conditions are met. conditions[] object HorizontalPodAutoscalerCondition describes the state of a HorizontalPodAutoscaler at a certain point. currentMetrics array currentMetrics is the last read state of the metrics used by this autoscaler. currentMetrics[] object MetricStatus describes the last-read state of a single metric. currentReplicas integer currentReplicas is current number of replicas of pods managed by this autoscaler, as last seen by the autoscaler. desiredReplicas integer desiredReplicas is the desired number of replicas of pods managed by this autoscaler, as last calculated by the autoscaler. lastScaleTime Time lastScaleTime is the last time the HorizontalPodAutoscaler scaled the number of pods, used by the autoscaler to control how often the number of pods is changed. observedGeneration integer observedGeneration is the most recent generation observed by this autoscaler. 4.1.27. .status.conditions Description conditions is the set of conditions required for this autoscaler to scale its target, and indicates whether or not those conditions are met. Type array 4.1.28. .status.conditions[] Description HorizontalPodAutoscalerCondition describes the state of a HorizontalPodAutoscaler at a certain point. Type object Required type status Property Type Description lastTransitionTime Time lastTransitionTime is the last time the condition transitioned from one status to another message string message is a human-readable explanation containing details about the transition reason string reason is the reason for the condition's last transition. status string status is the status of the condition (True, False, Unknown) type string type describes the current condition 4.1.29. .status.currentMetrics Description currentMetrics is the last read state of the metrics used by this autoscaler. Type array 4.1.30. .status.currentMetrics[] Description MetricStatus describes the last-read state of a single metric. Type object Required type Property Type Description containerResource object ContainerResourceMetricStatus indicates the current value of a resource metric known to Kubernetes, as specified in requests and limits, describing a single container in each pod in the current scale target (e.g. CPU or memory). Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the "pods" source. external object ExternalMetricStatus indicates the current value of a global metric not associated with any Kubernetes object. object object ObjectMetricStatus indicates the current value of a metric describing a kubernetes object (for example, hits-per-second on an Ingress object). 
pods object PodsMetricStatus indicates the current value of a metric describing each pod in the current scale target (for example, transactions-processed-per-second). resource object ResourceMetricStatus indicates the current value of a resource metric known to Kubernetes, as specified in requests and limits, describing each pod in the current scale target (e.g. CPU or memory). Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the "pods" source. type string type is the type of metric source. It will be one of "ContainerResource", "External", "Object", "Pods" or "Resource", each corresponds to a matching field in the object. Note: "ContainerResource" type is available on when the feature-gate HPAContainerMetrics is enabled 4.1.31. .status.currentMetrics[].containerResource Description ContainerResourceMetricStatus indicates the current value of a resource metric known to Kubernetes, as specified in requests and limits, describing a single container in each pod in the current scale target (e.g. CPU or memory). Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the "pods" source. Type object Required name current container Property Type Description container string container is the name of the container in the pods of the scaling target current object MetricValueStatus holds the current value for a metric name string name is the name of the resource in question. 4.1.32. .status.currentMetrics[].containerResource.current Description MetricValueStatus holds the current value for a metric Type object Property Type Description averageUtilization integer currentAverageUtilization is the current value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. averageValue Quantity averageValue is the current value of the average of the metric across all relevant pods (as a quantity) value Quantity value is the current value of the metric (as a quantity). 4.1.33. .status.currentMetrics[].external Description ExternalMetricStatus indicates the current value of a global metric not associated with any Kubernetes object. Type object Required metric current Property Type Description current object MetricValueStatus holds the current value for a metric metric object MetricIdentifier defines the name and optionally selector for a metric 4.1.34. .status.currentMetrics[].external.current Description MetricValueStatus holds the current value for a metric Type object Property Type Description averageUtilization integer currentAverageUtilization is the current value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. averageValue Quantity averageValue is the current value of the average of the metric across all relevant pods (as a quantity) value Quantity value is the current value of the metric (as a quantity). 4.1.35. .status.currentMetrics[].external.metric Description MetricIdentifier defines the name and optionally selector for a metric Type object Required name Property Type Description name string name is the name of the given metric selector LabelSelector selector is the string-encoded form of a standard kubernetes label selector for the given metric When set, it is passed as an additional parameter to the metrics server for more specific metrics scoping. 
When unset, just the metricName will be used to gather metrics. 4.1.36. .status.currentMetrics[].object Description ObjectMetricStatus indicates the current value of a metric describing a kubernetes object (for example, hits-per-second on an Ingress object). Type object Required metric current describedObject Property Type Description current object MetricValueStatus holds the current value for a metric describedObject object CrossVersionObjectReference contains enough information to let you identify the referred resource. metric object MetricIdentifier defines the name and optionally selector for a metric 4.1.37. .status.currentMetrics[].object.current Description MetricValueStatus holds the current value for a metric Type object Property Type Description averageUtilization integer currentAverageUtilization is the current value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. averageValue Quantity averageValue is the current value of the average of the metric across all relevant pods (as a quantity) value Quantity value is the current value of the metric (as a quantity). 4.1.38. .status.currentMetrics[].object.describedObject Description CrossVersionObjectReference contains enough information to let you identify the referred resource. Type object Required kind name Property Type Description apiVersion string apiVersion is the API version of the referent kind string kind is the kind of the referent; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string name is the name of the referent; More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 4.1.39. .status.currentMetrics[].object.metric Description MetricIdentifier defines the name and optionally selector for a metric Type object Required name Property Type Description name string name is the name of the given metric selector LabelSelector selector is the string-encoded form of a standard kubernetes label selector for the given metric When set, it is passed as an additional parameter to the metrics server for more specific metrics scoping. When unset, just the metricName will be used to gather metrics. 4.1.40. .status.currentMetrics[].pods Description PodsMetricStatus indicates the current value of a metric describing each pod in the current scale target (for example, transactions-processed-per-second). Type object Required metric current Property Type Description current object MetricValueStatus holds the current value for a metric metric object MetricIdentifier defines the name and optionally selector for a metric 4.1.41. .status.currentMetrics[].pods.current Description MetricValueStatus holds the current value for a metric Type object Property Type Description averageUtilization integer currentAverageUtilization is the current value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. averageValue Quantity averageValue is the current value of the average of the metric across all relevant pods (as a quantity) value Quantity value is the current value of the metric (as a quantity). 4.1.42. 
.status.currentMetrics[].pods.metric Description MetricIdentifier defines the name and optionally selector for a metric Type object Required name Property Type Description name string name is the name of the given metric selector LabelSelector selector is the string-encoded form of a standard kubernetes label selector for the given metric When set, it is passed as an additional parameter to the metrics server for more specific metrics scoping. When unset, just the metricName will be used to gather metrics. 4.1.43. .status.currentMetrics[].resource Description ResourceMetricStatus indicates the current value of a resource metric known to Kubernetes, as specified in requests and limits, describing each pod in the current scale target (e.g. CPU or memory). Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the "pods" source. Type object Required name current Property Type Description current object MetricValueStatus holds the current value for a metric name string name is the name of the resource in question. 4.1.44. .status.currentMetrics[].resource.current Description MetricValueStatus holds the current value for a metric Type object Property Type Description averageUtilization integer currentAverageUtilization is the current value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. averageValue Quantity averageValue is the current value of the average of the metric across all relevant pods (as a quantity) value Quantity value is the current value of the metric (as a quantity). 4.2. API endpoints The following API endpoints are available: /apis/autoscaling/v2/horizontalpodautoscalers GET : list or watch objects of kind HorizontalPodAutoscaler /apis/autoscaling/v2/watch/horizontalpodautoscalers GET : watch individual changes to a list of HorizontalPodAutoscaler. deprecated: use the 'watch' parameter with a list operation instead. /apis/autoscaling/v2/namespaces/{namespace}/horizontalpodautoscalers DELETE : delete collection of HorizontalPodAutoscaler GET : list or watch objects of kind HorizontalPodAutoscaler POST : create a HorizontalPodAutoscaler /apis/autoscaling/v2/watch/namespaces/{namespace}/horizontalpodautoscalers GET : watch individual changes to a list of HorizontalPodAutoscaler. deprecated: use the 'watch' parameter with a list operation instead. /apis/autoscaling/v2/namespaces/{namespace}/horizontalpodautoscalers/{name} DELETE : delete a HorizontalPodAutoscaler GET : read the specified HorizontalPodAutoscaler PATCH : partially update the specified HorizontalPodAutoscaler PUT : replace the specified HorizontalPodAutoscaler /apis/autoscaling/v2/watch/namespaces/{namespace}/horizontalpodautoscalers/{name} GET : watch changes to an object of kind HorizontalPodAutoscaler. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/autoscaling/v2/namespaces/{namespace}/horizontalpodautoscalers/{name}/status GET : read status of the specified HorizontalPodAutoscaler PATCH : partially update status of the specified HorizontalPodAutoscaler PUT : replace status of the specified HorizontalPodAutoscaler 4.2.1. /apis/autoscaling/v2/horizontalpodautoscalers HTTP method GET Description list or watch objects of kind HorizontalPodAutoscaler Table 4.1. 
HTTP responses HTTP code Reponse body 200 - OK HorizontalPodAutoscalerList schema 401 - Unauthorized Empty 4.2.2. /apis/autoscaling/v2/watch/horizontalpodautoscalers HTTP method GET Description watch individual changes to a list of HorizontalPodAutoscaler. deprecated: use the 'watch' parameter with a list operation instead. Table 4.2. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 4.2.3. /apis/autoscaling/v2/namespaces/{namespace}/horizontalpodautoscalers HTTP method DELETE Description delete collection of HorizontalPodAutoscaler Table 4.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 4.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind HorizontalPodAutoscaler Table 4.5. HTTP responses HTTP code Reponse body 200 - OK HorizontalPodAutoscalerList schema 401 - Unauthorized Empty HTTP method POST Description create a HorizontalPodAutoscaler Table 4.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.7. Body parameters Parameter Type Description body HorizontalPodAutoscaler schema Table 4.8. HTTP responses HTTP code Reponse body 200 - OK HorizontalPodAutoscaler schema 201 - Created HorizontalPodAutoscaler schema 202 - Accepted HorizontalPodAutoscaler schema 401 - Unauthorized Empty 4.2.4. /apis/autoscaling/v2/watch/namespaces/{namespace}/horizontalpodautoscalers HTTP method GET Description watch individual changes to a list of HorizontalPodAutoscaler. deprecated: use the 'watch' parameter with a list operation instead. Table 4.9. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 4.2.5. /apis/autoscaling/v2/namespaces/{namespace}/horizontalpodautoscalers/{name} Table 4.10. Global path parameters Parameter Type Description name string name of the HorizontalPodAutoscaler HTTP method DELETE Description delete a HorizontalPodAutoscaler Table 4.11. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 4.12. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified HorizontalPodAutoscaler Table 4.13. HTTP responses HTTP code Reponse body 200 - OK HorizontalPodAutoscaler schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified HorizontalPodAutoscaler Table 4.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.15. HTTP responses HTTP code Reponse body 200 - OK HorizontalPodAutoscaler schema 201 - Created HorizontalPodAutoscaler schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified HorizontalPodAutoscaler Table 4.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.17. 
Body parameters Parameter Type Description body HorizontalPodAutoscaler schema Table 4.18. HTTP responses HTTP code Reponse body 200 - OK HorizontalPodAutoscaler schema 201 - Created HorizontalPodAutoscaler schema 401 - Unauthorized Empty 4.2.6. /apis/autoscaling/v2/watch/namespaces/{namespace}/horizontalpodautoscalers/{name} Table 4.19. Global path parameters Parameter Type Description name string name of the HorizontalPodAutoscaler HTTP method GET Description watch changes to an object of kind HorizontalPodAutoscaler. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 4.20. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 4.2.7. /apis/autoscaling/v2/namespaces/{namespace}/horizontalpodautoscalers/{name}/status Table 4.21. Global path parameters Parameter Type Description name string name of the HorizontalPodAutoscaler HTTP method GET Description read status of the specified HorizontalPodAutoscaler Table 4.22. HTTP responses HTTP code Reponse body 200 - OK HorizontalPodAutoscaler schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified HorizontalPodAutoscaler Table 4.23. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.24. HTTP responses HTTP code Reponse body 200 - OK HorizontalPodAutoscaler schema 201 - Created HorizontalPodAutoscaler schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified HorizontalPodAutoscaler Table 4.25. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.26. Body parameters Parameter Type Description body HorizontalPodAutoscaler schema Table 4.27. HTTP responses HTTP code Response body 200 - OK HorizontalPodAutoscaler schema 201 - Created HorizontalPodAutoscaler schema 401 - Unauthorized Empty
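Example HorizontalPodAutoscaler manifest The following manifest is a minimal illustrative sketch that ties together the spec fields documented above: a scaleTargetRef, a Resource metric source, a Pods metric source, and an Object metric source, each with a MetricTarget that sets only the field matching its type. The namespace, the frontend Deployment and Ingress, and the custom metric names are placeholders, not values defined by this API reference.

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: frontend-hpa
  namespace: example                             # placeholder namespace
spec:
  scaleTargetRef:                                # CrossVersionObjectReference (.spec.scaleTargetRef)
    apiVersion: apps/v1
    kind: Deployment
    name: frontend                               # placeholder workload
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource                               # ResourceMetricSource (.spec.metrics[].resource)
    resource:
      name: cpu
      target:
        type: Utilization                        # MetricTarget using averageUtilization
        averageUtilization: 75
  - type: Pods                                   # PodsMetricSource (.spec.metrics[].pods)
    pods:
      metric:
        name: transactions_processed_per_second  # placeholder custom metric
      target:
        type: AverageValue                       # MetricTarget using averageValue
        averageValue: "100"
  - type: Object                                 # ObjectMetricSource (.spec.metrics[].object)
    object:
      describedObject:                           # CrossVersionObjectReference
        apiVersion: networking.k8s.io/v1
        kind: Ingress
        name: frontend-ingress                   # placeholder Ingress
      metric:
        name: requests_per_second                # placeholder metric
      target:
        type: Value                              # MetricTarget using value
        value: "2k"

Each entry in metrics sets exactly one source that matches its type field, and each target sets only the value field that corresponds to its own type ( Utilization , AverageValue , or Value ).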
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/autoscale_apis/horizontalpodautoscaler-autoscaling-v2
Chapter 2. Troubleshooting Red Hat Discovery
Chapter 2. Troubleshooting Red Hat Discovery 2.1. Determining the version of the Red Hat Discovery server Prerequisites You must be logged in to the command line interface as the Discovery server administrator. Procedure To determine the version of the Discovery server, use the following steps: Enter the dsc server status command. The expected output provides the version of the server that you are using: If you cannot get the server status command to run, or you cannot log in to the server, use the following Podman images command: 2.2. Uninstalling Discovery Prerequisites You must be logged in to the system that is running Red Hat Discovery. You will need sudo access to perform certain functions in dnf . Procedure To uninstall Red Hat Discovery server, use the following steps: Run the uninstall command. Uninstall the installer package. Uninstall the command line interface, if installed. 2.3. Getting help with the command line interface Prerequisites You must be logged in to the command line interface as the Discovery server administrator. Procedure For help on general topics, see the man page information. For help on a specific subcommand, use the -h option. For example: 2.4. SSH credential configuration If you receive an error message that includes text similar to not a valid file on the filesystem , that message might indicate an issue with the mount point on the file system that enables access to the SSH keyfiles. 2.5. Log file locations Prerequisites You must be logged in to the system that is running Red Hat Discovery. You will need sudo access to perform certain functions in dnf . Procedure Log files for the Discovery server that are on the local file system are located in the following path: "${HOME}"/.local/share/discovery/log . Log data is also copied to stdout and can be accessed through Podman logs. To follow the log output, include the -f option as shown in the following command: 2.6. Backing up or restoring the server encryption key Passwords are not stored as plain text. They are encrypted and decrypted by using the content of the secret.txt file as a secret key. If you need to back up and restore the secret.txt file, use these steps. Prerequisites You must be logged in to the system that is running Red Hat Discovery. You will need sudo access to perform certain functions in dnf . Procedure To back up the encrypted SSH credentials, navigate to the "${HOME}"/.local/share/discovery/data directory and copy the secret.txt file. To restore the secret.txt file, enter the following command, where path_to_backup is the path where the secret.txt file is backed up: 2.7. Restarting the Discovery server after a reboot Prerequisites You must be logged in to the system that is running Red Hat Discovery. You will need sudo access to perform certain functions in dnf . Note If you installed Discovery using the standard process, the system should start automatically after a reboot. If it does not automatically restart, use the following procedure: Procedure To restart the Discovery application after a reboot, use the following command:
[ "\"server_address\": \"127.0.0.1:9443\", \"server_id\": \"45a8ea20-2ec4-4113-b459-234fed505b0d\", \"server_version\": \"1.0.0.3e15fa8786a974c9eafe6376ff31ae0211972c36\"", "images --filter 'reference=registry.redhat.io/discovery/discovery-server-rhel9' --format '{{.Labels.url}}'", "discovery-installer uninstall", "sudo dnf remove discovery-installer", "sudo dnf remove discovery-cli", "dsc cred -h dsc source -h dsc scan -h", "logs -f discovery-server logs -f discovery-celery-worker", "cp -p __path_to_backup__/secret.txt \"USD{HOME}\"/.local/share/discovery/data/", "systemctl --user restart discovery-app" ]
https://docs.redhat.com/en/documentation/subscription_central/1-latest/html/troubleshooting_red_hat_discovery/assembly-troubleshooting-discovery
Chapter 7. Observing the network traffic
Chapter 7. Observing the network traffic As an administrator, you can observe the network traffic in the OpenShift Container Platform console for detailed troubleshooting and analysis. This feature helps you get insights from different graphical representations of traffic flow. There are several available views to observe the network traffic. 7.1. Observing the network traffic from the Overview view The Overview view displays the overall aggregated metrics of the network traffic flow on the cluster. As an administrator, you can monitor the statistics with the available display options. 7.1.1. Working with the Overview view As an administrator, you can navigate to the Overview view to see the graphical representation of the flow rate statistics. Procedure Navigate to Observe Network Traffic . In the Network Traffic page, click the Overview tab. You can configure the scope of each flow rate data by clicking the menu icon. 7.1.2. Configuring advanced options for the Overview view You can customize the graphical view by using advanced options. To access the advanced options, click Show advanced options . You can configure the details in the graph by using the Display options drop-down menu. The options available are as follows: Scope : Select to view the components that network traffic flows between. You can set the scope to Node , Namespace , Owner , Zones , Cluster or Resource . Owner is an aggregation of resources. Resource can be a pod, service, node, in case of host-network traffic, or an unknown IP address. The default value is Namespace . Truncate labels : Select the required width of the label from the drop-down list. The default value is M . 7.1.2.1. Managing panels and display You can select the required panels to be displayed, reorder them, and focus on a specific panel. To add or remove panels, click Manage panels . The following panels are shown by default: Top X average bytes rates Top X bytes rates stacked with total Other panels can be added in Manage panels : Top X average packets rates Top X packets rates stacked with total Query options allows you to choose whether to show the Top 5 , Top 10 , or Top 15 rates. 7.1.2.2. DNS tracking You can configure graphical representation of Domain Name System (DNS) tracking of network flows in the Overview view. Using DNS tracking with extended Berkeley Packet Filter (eBPF) tracepoint hooks can serve various purposes: Network Monitoring: Gain insights into DNS queries and responses, helping network administrators identify unusual patterns, potential bottlenecks, or performance issues. Security Analysis: Detect suspicious DNS activities, such as domain name generation algorithms (DGA) used by malware, or identify unauthorized DNS resolutions that might indicate a security breach. Troubleshooting: Debug DNS-related issues by tracing DNS resolution steps, tracking latency, and identifying misconfigurations. By default, when DNS tracking is enabled, you can see the following non-empty metrics represented in a donut or line chart in the Overview : Top X DNS Response Code Top X average DNS latencies with overall Top X 90th percentile DNS latencies Other DNS tracking panels can be added in Manage panels : Bottom X minimum DNS latencies Top X maximum DNS latencies Top X 99th percentile DNS latencies This feature is supported for IPv4 and IPv6 UDP and TCP protocols. See the Additional resources in this section for more information about enabling and working with this view. 
Additional resources Working with DNS tracking Network Observability metrics 7.1.3. Round-Trip Time You can use TCP smoothed Round-Trip Time (sRTT) to analyze network flow latencies. You can use RTT captured from the fentry/tcp_rcv_established eBPF hookpoint to read sRTT from the TCP socket to help with the following: Network Monitoring: Gain insights into TCP latencies, helping network administrators identify unusual patterns, potential bottlenecks, or performance issues. Troubleshooting: Debug TCP-related issues by tracking latency and identifying misconfigurations. By default, when RTT is enabled, you can see the following TCP RTT metrics represented in the Overview : Top X 90th percentile TCP Round Trip Time with overall Top X average TCP Round Trip Time with overall Bottom X minimum TCP Round Trip Time with overall Other RTT panels can be added in Manage panels : Top X maximum TCP Round Trip Time with overall Top X 99th percentile TCP Round Trip Time with overall See the Additional resources in this section for more information about enabling and working with this view. Additional resources Working with RTT tracing 7.1.4. eBPF flow rule filter You can use rule-based filtering to control the volume of packets cached in the eBPF flow table. For example, a filter can specify that only packets coming from port 100 should be recorded. Then only the packets that match the filter are cached and the rest are not cached. 7.1.4.1. Ingress and egress traffic filtering CIDR notation efficiently represents IP address ranges by combining the base IP address with a prefix length. For both ingress and egress traffic, the source IP address is first used to match filter rules configured with CIDR notation. If there is a match, then the filtering proceeds. If there is no match, then the destination IP is used to match filter rules configured with CIDR notation. After matching either the source IP or the destination IP CIDR, you can pinpoint specific endpoints using the peerIP to differentiate the destination IP address of the packet. Based on the provisioned action, the flow data is either cached in the eBPF flow table or not cached. 7.1.4.2. Dashboard and metrics integrations When this option is enabled, the Netobserv/Health dashboard for eBPF agent statistics now has the Filtered flows rate view. Additionally, in Observe Metrics you can query netobserv_agent_filtered_flows_total to observe metrics with the reason in FlowFilterAcceptCounter , FlowFilterNoMatchCounter or FlowFilterRejectCounter . 7.1.4.3. Flow filter configuration parameters The flow filter rules consist of required and optional parameters. Table 7.1. Required configuration parameters Parameter Description enable Set enable to true to enable the eBPF flow filtering feature. cidr Provides the IP address and CIDR mask for the flow filter rule. Supports both IPv4 and IPv6 address format. If you want to match against any IP, you can use 0.0.0.0/0 for IPv4 or ::/0 for IPv6. action Describes the action that is taken for the flow filter rule. The possible values are Accept or Reject . For the Accept action matching rule, the flow data is cached in the eBPF table and updated with the global metric, FlowFilterAcceptCounter . For the Reject action matching rule, the flow data is dropped and not cached in the eBPF table. The flow data is updated with the global metric, FlowFilterRejectCounter . If the rule is not matched, the flow is cached in the eBPF table and updated with the global metric, FlowFilterNoMatchCounter . Table 7.2.
Optional configuration parameters Parameter Description direction Defines the direction of the flow filter rule. Possible values are Ingress or Egress . protocol Defines the protocol of the flow filter rule. Possible values are TCP , UDP , SCTP , ICMP , and ICMPv6 . tcpFlags Defines the TCP flags to filter flows. Possible values are SYN , SYN-ACK , ACK , FIN , RST , PSH , URG , ECE , CWR , FIN-ACK , and RST-ACK . ports Defines the ports to use for filtering flows. It can be used for either source or destination ports. To filter a single port, set a single port as an integer value. For example ports: 80 . To filter a range of ports, use a "start-end" range in string format. For example ports: "80-100" sourcePorts Defines the source port to use for filtering flows. To filter a single port, set a single port as an integer value, for example sourcePorts: 80 . To filter a range of ports, use a "start-end" range, string format, for example sourcePorts: "80-100" . destPorts DestPorts defines the destination ports to use for filtering flows. To filter a single port, set a single port as an integer value, for example destPorts: 80 . To filter a range of ports, use a "start-end" range in string format, for example destPorts: "80-100" . icmpType Defines the ICMP type to use for filtering flows. icmpCode Defines the ICMP code to use for filtering flows. peerIP Defines the IP address to use for filtering flows, for example: 10.10.10.10 . Additional resources Filtering eBPF flow data with rules Network Observability metrics Health dashboards 7.2. Observing the network traffic from the Traffic flows view The Traffic flows view displays the data of the network flows and the amount of traffic in a table. As an administrator, you can monitor the amount of traffic across the application by using the traffic flow table. 7.2.1. Working with the Traffic flows view As an administrator, you can navigate to Traffic flows table to see network flow information. Procedure Navigate to Observe Network Traffic . In the Network Traffic page, click the Traffic flows tab. You can click on each row to get the corresponding flow information. 7.2.2. Configuring advanced options for the Traffic flows view You can customize and export the view by using Show advanced options . You can set the row size by using the Display options drop-down menu. The default value is Normal . 7.2.2.1. Managing columns You can select the required columns to be displayed, and reorder them. To manage columns, click Manage columns . 7.2.2.2. Exporting the traffic flow data You can export data from the Traffic flows view. Procedure Click Export data . In the pop-up window, you can select the Export all data checkbox to export all the data, and clear the checkbox to select the required fields to be exported. Click Export . 7.2.3. Working with conversation tracking As an administrator, you can group network flows that are part of the same conversation. A conversation is defined as a grouping of peers that are identified by their IP addresses, ports, and protocols, resulting in an unique Conversation Id . You can query conversation events in the web console. These events are represented in the web console as follows: Conversation start : This event happens when a connection is starting or TCP flag intercepted Conversation tick : This event happens at each specified interval defined in the FlowCollector spec.processor.conversationHeartbeatInterval parameter while the connection is active. 
Conversation end : This event happens when the FlowCollector spec.processor.conversationEndTimeout parameter is reached or the TCP flag is intercepted. Flow : This is the network traffic flow that occurs within the specified interval. Procedure In the web console, navigate to Operators Installed Operators . Under the Provided APIs heading for the NetObserv Operator , select Flow Collector . Select cluster then select the YAML tab. Configure the FlowCollector custom resource so that spec.processor.logTypes , conversationEndTimeout , and conversationHeartbeatInterval parameters are set according to your observation needs. A sample configuration is as follows: Configure FlowCollector for conversation tracking apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: processor: logTypes: Flows 1 advanced: conversationEndTimeout: 10s 2 conversationHeartbeatInterval: 30s 3 1 When logTypes is set to Flows , only the Flow event is exported. If you set the value to All , both conversation and flow events are exported and visible in the Network Traffic page. To focus only on conversation events, you can specify Conversations which exports the Conversation start , Conversation tick and Conversation end events; or EndedConversations exports only the Conversation end events. Storage requirements are highest for All and lowest for EndedConversations . 2 The Conversation end event represents the point when the conversationEndTimeout is reached or the TCP flag is intercepted. 3 The Conversation tick event represents each specified interval defined in the FlowCollector conversationHeartbeatInterval parameter while the network connection is active. Note If you update the logType option, the flows from the selection do not clear from the console plugin. For example, if you initially set logType to Conversations for a span of time until 10 AM and then move to EndedConversations , the console plugin shows all conversation events before 10 AM and only ended conversations after 10 AM. Refresh the Network Traffic page on the Traffic flows tab. Notice there are two new columns, Event/Type and Conversation Id . All the Event/Type fields are Flow when Flow is the selected query option. Select Query Options and choose the Log Type , Conversation . Now the Event/Type shows all of the desired conversation events. you can filter on a specific conversation ID or switch between the Conversation and Flow log type options from the side panel. 7.2.4. Working with DNS tracking Using DNS tracking, you can monitor your network, conduct security analysis, and troubleshoot DNS issues. You can track DNS by editing the FlowCollector to the specifications in the following YAML example. Important CPU and memory usage increases are observed in the eBPF agent when this feature is enabled. Procedure In the web console, navigate to Operators Installed Operators . Under the Provided APIs heading for Network Observability , select Flow Collector . Select cluster then select the YAML tab. Configure the FlowCollector custom resource. A sample configuration is as follows: Configure FlowCollector for DNS tracking apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv agent: type: eBPF ebpf: features: - DNSTracking 1 sampling: 1 2 1 You can set the spec.agent.ebpf.features parameter list to enable DNS tracking of each network flow in the web console. 2 You can set sampling to a value of 1 for more accurate metrics and to capture DNS latency . 
For a sampling value greater than 1, you can observe flows with DNS Response Code and DNS Id , and it is unlikely that DNS Latency can be observed. When you refresh the Network Traffic page, there are new DNS representations you can choose to view in the Overview and Traffic Flow views and new filters you can apply. Select new DNS choices in Manage panels to display graphical visualizations and DNS metrics in the Overview . Select new choices in Manage columns to add DNS columns to the Traffic Flows view. Filter on specific DNS metrics, such as DNS Id , DNS Error DNS Latency and DNS Response Code , and see more information from the side panel. The DNS Latency and DNS Response Code columns are shown by default. Note TCP handshake packets do not have DNS headers. TCP protocol flows without DNS headers are shown in the traffic flow data with DNS Latency , ID , and Response code values of "n/a". You can filter out flow data to view only flows that have DNS headers using the Common filter "DNSError" equal to "0". 7.2.5. Working with RTT tracing You can track RTT by editing the FlowCollector to the specifications in the following YAML example. Procedure In the web console, navigate to Operators Installed Operators . In the Provided APIs heading for the NetObserv Operator , select Flow Collector . Select cluster , and then select the YAML tab. Configure the FlowCollector custom resource for RTT tracing, for example: Example FlowCollector configuration apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv agent: type: eBPF ebpf: features: - FlowRTT 1 1 You can start tracing RTT network flows by listing the FlowRTT parameter in the spec.agent.ebpf.features specification list. Verification When you refresh the Network Traffic page, the Overview , Traffic Flow , and Topology views display new information about RTT: In the Overview , select new choices in Manage panels to choose which graphical visualizations of RTT to display. In the Traffic flows table, the Flow RTT column can be seen, and you can manage display in Manage columns . In the Traffic Flows view, you can also expand the side panel to view more information about RTT. Example filtering Click the Common filters Protocol . Filter the network flow data based on TCP , Ingress direction, and look for FlowRTT values greater than 10,000,000 nanoseconds (10ms). Remove the Protocol filter. Filter for Flow RTT values greater than 0 in the Common filters. In the Topology view, click the Display option dropdown. Then click RTT in the edge labels drop-down list. 7.2.5.1. Using the histogram You can click Show histogram to display a toolbar view for visualizing the history of flows as a bar chart. The histogram shows the number of logs over time. You can select a part of the histogram to filter the network flow data in the table that follows the toolbar. 7.2.6. Working with availability zones You can configure the FlowCollector to collect information about the cluster availability zones. This allows you to enrich network flow data with the topology.kubernetes.io/zone label value applied to the nodes. Procedure In the web console, go to Operators Installed Operators . Under the Provided APIs heading for the NetObserv Operator , select Flow Collector . Select cluster then select the YAML tab. Configure the FlowCollector custom resource so that the spec.processor.addZone parameter is set to true . 
A sample configuration is as follows: Configure FlowCollector for availability zones collection apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: # ... processor: addZone: true # ... Verification When you refresh the Network Traffic page, the Overview , Traffic Flow , and Topology views display new information about availability zones: In the Overview tab, you can see Zones as an available Scope . In Network Traffic Traffic flows , Zones are viewable under the SrcK8S_Zone and DstK8S_Zone fields. In the Topology view, you can set Zones as Scope or Group . 7.2.7. Filtering eBPF flow data using a global rule You can configure the FlowCollector to filter eBPF flows using a global rule to control the flow of packets cached in the eBPF flow table. Procedure In the web console, navigate to Operators Installed Operators . Under the Provided APIs heading for Network Observability , select Flow Collector . Select cluster , then select the YAML tab. Configure the FlowCollector custom resource, similar to the following sample configurations: Example 7.1. Filter Kubernetes service traffic to a specific Pod IP endpoint apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv deploymentModel: Direct agent: type: eBPF ebpf: flowFilter: action: Accept 1 cidr: 172.210.150.1/24 2 protocol: SCTP direction: Ingress destPortRange: 80-100 peerIP: 10.10.10.10 enable: true 3 1 The required action parameter describes the action that is taken for the flow filter rule. Possible values are Accept or Reject . 2 The required cidr parameter provides the IP address and CIDR mask for the flow filter rule and supports IPv4 and IPv6 address formats. If you want to match against any IP address, you can use 0.0.0.0/0 for IPv4 or ::/0 for IPv6. 3 You must set spec.agent.ebpf.flowFilter.enable to true to enable this feature. Example 7.2. See flows to any addresses outside the cluster apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv deploymentModel: Direct agent: type: eBPF ebpf: flowFilter: action: Accept 1 cidr: 0.0.0.0/0 2 protocol: TCP direction: Egress sourcePort: 100 peerIP: 192.168.127.12 3 enable: true 4 1 You can Accept flows based on the criteria in the flowFilter specification. 2 The cidr value of 0.0.0.0/0 matches against any IP address. 3 See flows after peerIP is configured with 192.168.127.12 . 4 You must set spec.agent.ebpf.flowFilter.enable to true to enable the feature. 7.2.8. Endpoint translation (xlat) You can gain visibility into the endpoints serving traffic in a consolidated view using Network Observability and extended Berkeley Packet Filter (eBPF). Typically, when traffic flows through a service, egressIP, or load balancer, the traffic flow information is abstracted as it is routed to one of the available pods. If you try to get information about the traffic, you can only view service related info, such as service IP and port, and not information about the specific pod that is serving the request. Often the information for both the service traffic and the virtual service endpoint is captured as two separate flows, which complicates troubleshooting. To solve this, endpoint xlat can help in the following ways: Capture the network flows at the kernel level, which has a minimal impact on performance. 
Enrich the network flows with translated endpoint information, showing not only the service but also the specific backend pod, so you can see which pod served a request. As network packets are processed, the eBPF hook enriches flow logs with metadata about the translated endpoint that includes the following pieces of information that you can view in the Network Traffic page in a single row: Source Pod IP Source Port Destination Pod IP Destination Port Conntrack Zone ID 7.2.9. Working with endpoint translation (xlat) You can use Network Observability and eBPF to enrich network flows from a Kubernetes service with translated endpoint information, gaining insight into the endpoints serving traffic. Procedure In the web console, navigate to Operators Installed Operators . In the Provided APIs heading for the NetObserv Operator , select Flow Collector . Select cluster , and then select the YAML tab. Configure the FlowCollector custom resource for PacketTranslation , for example: Example FlowCollector configuration apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv agent: type: eBPF ebpf: features: - PacketTranslation 1 1 You can start enriching network flows with translated packet information by listing the PacketTranslation parameter in the spec.agent.ebpf.features specification list. Example filtering When you refresh the Network Traffic page you can filter for information about translated packets: Filter the network flow data based on Destination kind: Service . You can see the xlat column, which distinguishes where translated information is displayed, and the following default columns: Xlat Zone ID Xlat Src Kubernetes Object Xlat Dst Kubernetes Object You can manage the display of additional xlat columns in Manage columns . 7.3. Observing the network traffic from the Topology view The Topology view provides a graphical representation of the network flows and the amount of traffic. As an administrator, you can monitor the traffic data across the application by using the Topology view. 7.3.1. Working with the Topology view As an administrator, you can navigate to the Topology view to see the details and metrics of the component. Procedure Navigate to Observe Network Traffic . In the Network Traffic page, click the Topology tab. You can click each component in the Topology to view the details and metrics of the component. 7.3.2. Configuring the advanced options for the Topology view You can customize and export the view by using Show advanced options . The advanced options view has the following features: Find in view : To search the required components in the view. Display options : To configure the following options: Edge labels : To show the specified measurements as edge labels. The default is to show the Average rate in Bytes . Scope : To select the scope of components between which the network traffic flows. The default value is Namespace . Groups : To enhance the understanding of ownership by grouping the components. The default value is None . Layout : To select the layout of the graphical representation. The default value is ColaNoForce . Show : To select the details that need to be displayed. All the options are checked by default. The options available are: Edges , Edges label , and Badges . Truncate labels : To select the required width of the label from the drop-down list. The default value is M . Collapse groups : To expand or collapse the groups. The groups are expanded by default. 
This option is disabled if Groups has the value of None . 7.3.2.1. Exporting the topology view To export the view, click Export topology view . The view is downloaded in PNG format. 7.4. Filtering the network traffic By default, the Network Traffic page displays the traffic flow data in the cluster based on the default filters configured in the FlowCollector instance. You can use the filter options to observe the required data by changing the preset filter. Query Options You can use Query Options to optimize the search results, as listed below: Log Type : The available options Conversation and Flows provide the ability to query flows by log type, such as flow log, new conversation, completed conversation, and a heartbeat, which is a periodic record with updates for long conversations. A conversation is an aggregation of flows between the same peers. Match filters : You can determine the relation between different filter parameters selected in the advanced filter. The available options are Match all and Match any . Match all provides results that match all the values, and Match any provides results that match any of the values entered. The default value is Match all . Datasource : You can choose the datasource to use for queries: Loki , Prometheus , or Auto . Notable performance improvements can be realized when using Prometheus as a datasource rather than Loki, but Prometheus supports a limited set of filters and aggregations. The default datasource is Auto , which uses Prometheus on supported queries or uses Loki if the query does not support Prometheus. Drops filter : You can view different levels of dropped packets with the following query options: Fully dropped shows flow records with fully dropped packets. Containing drops shows flow records that contain drops but can be sent. Without drops shows records that contain sent packets. All shows all the aforementioned records. Limit : The data limit for internal backend queries. Depending upon the matching and the filter settings, the number of traffic flow data is displayed within the specified limit. Quick filters The default values in the Quick filters drop-down menu are defined in the FlowCollector configuration. You can modify the options from the console; a sample quickFilters configuration is shown after this section. Advanced filters You can set the advanced filters, Common , Source , or Destination , by selecting the parameter to be filtered from the dropdown list. The flow data is filtered based on the selection. To enable or disable the applied filter, you can click on the applied filter listed below the filter options. You can toggle between One way and Back and forth filtering. The One way filter shows only Source and Destination traffic according to your filter selections. You can use Swap to change the directional view of the Source and Destination traffic. The Back and forth filter includes return traffic with the Source and Destination filters. The directional flow of network traffic is shown in the Direction column in the Traffic flows table as Ingress or Egress for inter-node traffic and Inner for traffic inside a single node. You can click Reset defaults to remove the existing filters, and apply the filter defined in the FlowCollector configuration. Note To understand the rules of specifying the text value, click Learn More . Alternatively, you can access the traffic flow data in the Network Traffic tab of the Namespaces , Services , Routes , Nodes , and Workloads pages which provide the filtered data of the corresponding aggregations.
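Example quick filter configuration The following FlowCollector snippet is a minimal sketch of how quick filter defaults can be declared; it assumes the spec.consolePlugin.quickFilters field, and the filter names and namespace values shown are illustrative only, not a recommended configuration.

apiVersion: flows.netobserv.io/v1beta2
kind: FlowCollector
metadata:
  name: cluster
spec:
  consolePlugin:
    quickFilters:                 # assumed field; defines the entries of the Quick filters drop-down menu
    - name: Applications          # illustrative filter that hides infrastructure namespaces
      filter:
        src_namespace!: 'openshift-,netobserv'
        dst_namespace!: 'openshift-,netobserv'
      default: true               # applied by default and restored by Reset defaults
    - name: Infrastructure        # illustrative filter that shows only infrastructure namespaces
      filter:
        src_namespace: 'openshift-,netobserv'
        dst_namespace: 'openshift-,netobserv'
      default: false

Filters with default: true are the ones that are applied again when you click Reset defaults in the console.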
Additional resources Configuring Quick Filters Flow Collector sample resource
[ "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: processor: logTypes: Flows 1 advanced: conversationEndTimeout: 10s 2 conversationHeartbeatInterval: 30s 3", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv agent: type: eBPF ebpf: features: - DNSTracking 1 sampling: 1 2", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv agent: type: eBPF ebpf: features: - FlowRTT 1", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: processor: addZone: true", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv deploymentModel: Direct agent: type: eBPF ebpf: flowFilter: action: Accept 1 cidr: 172.210.150.1/24 2 protocol: SCTP direction: Ingress destPortRange: 80-100 peerIP: 10.10.10.10 enable: true 3", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv deploymentModel: Direct agent: type: eBPF ebpf: flowFilter: action: Accept 1 cidr: 0.0.0.0/0 2 protocol: TCP direction: Egress sourcePort: 100 peerIP: 192.168.127.12 3 enable: true 4", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv agent: type: eBPF ebpf: features: - PacketTranslation 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/network_observability/nw-observe-network-traffic
Chapter 8. Troubleshooting resource problems
Chapter 8. Troubleshooting resource problems In case of resource failure, you must investigate the cause and location of the problem, fix the failed resource, and optionally clean up the resource. There are many possible causes of resource failures depending on your deployment, and you must investigate the resource to determine how to fix the problem. For example, you can check the resource constraints to ensure that the resources are not interrupting each other, and that the resources can connect to each other. You can also examine a Controller node that is fenced more often than other Controller nodes to identify possible communication problems. 8.1. Viewing resource constraints You can view constraints on how services are launched, including constraints related to where each resource is located, the order in which the resource starts, and whether the resource must be colocated with another resource. View all resource constraints On any Controller node, run the pcs constraint show command. The following example shows a truncated output from the pcs constraint show command on a Controller node: This output displays the following main constraint types: Location Constraints Lists the locations to which resources can be assigned: The first constraint defines a rule that sets the galera-bundle resource to run on nodes with the galera-role attribute set to true . The second location constraint specifies that the IP resource ip-192.168.24.15 runs only on nodes with the haproxy-role attribute set to true . This means that the cluster associates the IP address with the haproxy service, which is necessary to make the services reachable. The third location constraint shows that the ipmilan resource is disabled on each of the Controller nodes. Ordering Constraints Lists the order in which resources can launch. This example shows a constraint that sets the virtual IP address resources IPaddr2 to start before the HAProxy service. Note Ordering constraints only apply to IP address resources and to HAproxy. Systemd manages all other resources, because services such as Compute are expected to withstand an interruption of a dependent service, such as Galera. Colocation Constraints Lists which resources must be located together. All virtual IP addresses are linked to the haproxy-bundle resource. View Galera location constraints On any Controller node, run the pcs property show command. Example output: In this output, you can verify that the galera-role attribute is true for all Controller nodes. This means that the galera-bundle resource runs only on these nodes. The same concept applies to the other attributes associated with the other location constraints. 8.2. Investigating Controller node resource problems Depending on the type and location of the problem, there are different approaches you can take to investigate and fix the resource. Investigating Controller node problems If health checks to a Controller node are failing, this can indicate a communication problem between Controller nodes. To investigate, log in to the Controller node and check if the services can start correctly. Investigating individual resource problems If most services on a Controller are running correctly, you can run the pcs status command and check the output for information about a specific service failure. You can also log in to the Controller where the resource is failing and investigate the resource behavior on the Controller node. 
Procedure The following procedure shows how to investigate the openstack-cinder-volume resource. Locate and log in to the Controller node on which the resource is failing. Run the systemctl status command to show the resource status and recent log events: Correct the failed resource based on the information from the output. Run the pcs resource cleanup command to reset the status and the fail count of the resource.
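The following commands sketch that workflow end to end for the openstack-cinder-volume example; run them on the Controller node where the resource is failing. The hostname is illustrative, and journalctl is shown only as one possible way to review recent log events:

# Log in to the Controller node that reports the failure (hostname is illustrative).
ssh heat-admin@overcloud-controller-0

# 1. Check the overall cluster view and look for failed resource actions.
sudo pcs status

# 2. Inspect the failing resource and its recent log events.
sudo systemctl status openstack-cinder-volume
sudo journalctl -u openstack-cinder-volume --since "1 hour ago"

# 3. After correcting the underlying problem, reset the resource status
#    and fail count so that Pacemaker retries the resource.
sudo pcs resource cleanup openstack-cinder-volume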
[ "sudo pcs constraint show", "Location Constraints: Resource: galera-bundle Constraint: location-galera-bundle (resource-discovery=exclusive) Rule: score=0 Expression: galera-role eq true [...] Resource: ip-192.168.24.15 Constraint: location-ip-192.168.24.15 (resource-discovery=exclusive) Rule: score=0 Expression: haproxy-role eq true [...] Resource: my-ipmilan-for-controller-0 Disabled on: overcloud-controller-0 (score:-INFINITY) Resource: my-ipmilan-for-controller-1 Disabled on: overcloud-controller-1 (score:-INFINITY) Resource: my-ipmilan-for-controller-2 Disabled on: overcloud-controller-2 (score:-INFINITY) Ordering Constraints: start ip-172.16.0.10 then start haproxy-bundle (kind:Optional) start ip-10.200.0.6 then start haproxy-bundle (kind:Optional) start ip-172.19.0.10 then start haproxy-bundle (kind:Optional) start ip-192.168.1.150 then start haproxy-bundle (kind:Optional) start ip-172.16.0.11 then start haproxy-bundle (kind:Optional) start ip-172.18.0.10 then start haproxy-bundle (kind:Optional) Colocation Constraints: ip-172.16.0.10 with haproxy-bundle (score:INFINITY) ip-172.18.0.10 with haproxy-bundle (score:INFINITY) ip-10.200.0.6 with haproxy-bundle (score:INFINITY) ip-172.19.0.10 with haproxy-bundle (score:INFINITY) ip-172.16.0.11 with haproxy-bundle (score:INFINITY) ip-192.168.1.150 with haproxy-bundle (score:INFINITY)", "sudo pcs property show", "Cluster Properties: cluster-infrastructure: corosync cluster-name: tripleo_cluster dc-version: 2.0.1-4.el8-0eb7991564 have-watchdog: false redis_REPL_INFO: overcloud-controller-0 stonith-enabled: false Node Attributes: overcloud-controller-0: cinder-volume-role=true galera-role=true haproxy-role=true rabbitmq-role=true redis-role=true rmq-node-attr-last-known-rabbitmq=rabbit@overcloud-controller-0 overcloud-controller-1: cinder-volume-role=true galera-role=true haproxy-role=true rabbitmq-role=true redis-role=true rmq-node-attr-last-known-rabbitmq=rabbit@overcloud-controller-1 overcloud-controller-2: cinder-volume-role=true galera-role=true haproxy-role=true rabbitmq-role=true redis-role=true rmq-node-attr-last-known-rabbitmq=rabbit@overcloud-controller-2", "[heat-admin@overcloud-controller-0 ~]USD sudo systemctl status openstack-cinder-volume ● openstack-cinder-volume.service - Cluster Controlled openstack-cinder-volume Loaded: loaded (/usr/lib/systemd/system/openstack-cinder-volume.service; disabled; vendor preset: disabled) Drop-In: /run/systemd/system/openstack-cinder-volume.service.d └─50-pacemaker.conf Active: active (running) since Tue 2016-11-22 09:25:53 UTC; 2 weeks 6 days ago Main PID: 383912 (cinder-volume) CGroup: /system.slice/openstack-cinder-volume.service ├─383912 /usr/bin/python3 /usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf --logfile /var/log/cinder/volume.log └─383985 /usr/bin/python3 /usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf --logfile /var/log/cinder/volume.log Nov 22 09:25:55 overcloud-controller-0.localdomain cinder-volume[383912]: 2016-11-22 09:25:55.798 383912 WARNING oslo_config.cfg [req-8f32db96-7ca2-4fc5-82ab-271993b28174 - - - -...e future. Nov 22 09:25:55 overcloud-controller-0.localdomain cinder-volume[383912]: 2016-11-22 09:25:55.799 383912 WARNING oslo_config.cfg [req-8f32db96-7ca2-4fc5-82ab-271993b28174 - - - -...e future. 
Nov 22 09:25:55 overcloud-controller-0.localdomain cinder-volume[383912]: 2016-11-22 09:25:55.926 383985 INFO cinder.coordination [-] Coordination backend started successfully. Nov 22 09:25:55 overcloud-controller-0.localdomain cinder-volume[383912]: 2016-11-22 09:25:55.926 383985 INFO cinder.volume.manager [req-cb07b35c-af01-4c45-96f1-3d2bfc98ecb5 - - ...r (1.2.0) Nov 22 09:25:56 overcloud-controller-0.localdomain cinder-volume[383912]: 2016-11-22 09:25:56.047 383985 WARNING oslo_config.cfg [req-cb07b35c-af01-4c45-96f1-3d2bfc98ecb5 - - - -...e future. Nov 22 09:25:56 overcloud-controller-0.localdomain cinder-volume[383912]: 2016-11-22 09:25:56.048 383985 WARNING oslo_config.cfg [req-cb07b35c-af01-4c45-96f1-3d2bfc98ecb5 - - - -...e future. Nov 22 09:25:56 overcloud-controller-0.localdomain cinder-volume[383912]: 2016-11-22 09:25:56.048 383985 WARNING oslo_config.cfg [req-cb07b35c-af01-4c45-96f1-3d2bfc98ecb5 - - - -...e future. Nov 22 09:25:56 overcloud-controller-0.localdomain cinder-volume[383912]: 2016-11-22 09:25:56.063 383985 INFO cinder.volume.manager [req-cb07b35c-af01-4c45-96f1-3d2bfc98ecb5 - - ...essfully. Nov 22 09:25:56 overcloud-controller-0.localdomain cinder-volume[383912]: 2016-11-22 09:25:56.111 383985 INFO cinder.volume.manager [req-cb07b35c-af01-4c45-96f1-3d2bfc98ecb5 - - ...r (1.2.0) Nov 22 09:25:56 overcloud-controller-0.localdomain cinder-volume[383912]: 2016-11-22 09:25:56.146 383985 INFO cinder.volume.manager [req-cb07b35c-af01-4c45-96f1-3d2bfc98ecb5 - - ...essfully. Hint: Some lines were ellipsized, use -l to show in full.", "sudo pcs resource cleanup openstack-cinder-volume Resource: openstack-cinder-volume successfully cleaned up" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/high_availability_deployment_and_usage/assembly_troubleshoot
Chapter 4. Configuration
Chapter 4. Configuration Camel Quarkus automatically configures and deploys a Camel Context bean which by default is started/stopped according to the Quarkus Application lifecycle. The configuration step happens at build time during Quarkus' augmentation phase and it is driven by the Camel Quarkus extensions which can be tuned using Camel Quarkus specific quarkus.camel.* properties. Note quarkus.camel.* configuration properties are documented on the individual extension pages - for example see Camel Quarkus Core . After the configuration is done, a minimal Camel Runtime is assembled and started in the RUNTIME_INIT phase. 4.1. Configuring Camel components 4.1.1. application.properties To configure components and other aspects of Apache Camel through properties, make sure that your application depends on camel-quarkus-core directly or transitively. Because most Camel Quarkus extensions depend on camel-quarkus-core , you typically do not need to add it explicitly. camel-quarkus-core brings functionalities from Camel Main to Camel Quarkus. In the example below, you set a specific ExchangeFormatter configuration on the LogComponent via application.properties : camel.component.log.exchange-formatter = #class:org.apache.camel.support.processor.DefaultExchangeFormatter camel.component.log.exchange-formatter.show-exchange-pattern = false camel.component.log.exchange-formatter.show-body-type = false 4.1.2. CDI You can also configure a component programmatically using CDI. The recommended method is to observe the ComponentAddEvent and configure the component before the routes and the CamelContext are started: import jakarta.enterprise.context.ApplicationScoped; import jakarta.enterprise.event.Observes; import org.apache.camel.quarkus.core.events.ComponentAddEvent; import org.apache.camel.component.log.LogComponent; import org.apache.camel.support.processor.DefaultExchangeFormatter; @ApplicationScoped public static class EventHandler { public void onComponentAdd(@Observes ComponentAddEvent event) { if (event.getComponent() instanceof LogComponent) { /* Perform some custom configuration of the component */ LogComponent logComponent = ((LogComponent) event.getComponent()); DefaultExchangeFormatter formatter = new DefaultExchangeFormatter(); formatter.setShowExchangePattern(false); formatter.setShowBodyType(false); logComponent.setExchangeFormatter(formatter); } } } 4.1.2.1. Producing a @Named component instance Alternatively, you can create and configure the component yourself in a @Named producer method. This works as Camel uses the component URI scheme to look-up components from its registry. For example, in the case of a LogComponent Camel looks for a log named bean. Warning While producing a @Named component bean will usually work, it may cause subtle issues with some components. Camel Quarkus extensions may do one or more of the following: Pass custom subtype of the default Camel component type. See the Vert.x WebSocket extension example. Perform some Quarkus specific customization of the component. See the JPA extension example. These actions are not performed when you produce your own component instance, therefore, configuring components in an observer method is the recommended method. 
import jakarta.enterprise.context.ApplicationScoped; import jakarta.inject.Named; import org.apache.camel.component.log.LogComponent; import org.apache.camel.support.processor.DefaultExchangeFormatter; @ApplicationScoped public class Configurations { /** * Produces a {@link LogComponent} instance with a custom exchange formatter set-up. */ @Named("log") 1 LogComponent log() { DefaultExchangeFormatter formatter = new DefaultExchangeFormatter(); formatter.setShowExchangePattern(false); formatter.setShowBodyType(false); LogComponent component = new LogComponent(); component.setExchangeFormatter(formatter); return component; } } 1 The "log" argument of the @Named annotation can be omitted if the name of the method is the same. 4.2. Configuration by convention In addition to support configuring Camel through properties, camel-quarkus-core allows you to use conventions to configure the Camel behavior. For example, if there is a single ExchangeFormatter instance in the CDI container, then it will automatically wire that bean to the LogComponent . Additional resources Configuring and using Metering in OpenShift Container Platform
[ "camel.component.log.exchange-formatter = #class:org.apache.camel.support.processor.DefaultExchangeFormatter camel.component.log.exchange-formatter.show-exchange-pattern = false camel.component.log.exchange-formatter.show-body-type = false", "import jakarta.enterprise.context.ApplicationScoped; import jakarta.enterprise.event.Observes; import org.apache.camel.quarkus.core.events.ComponentAddEvent; import org.apache.camel.component.log.LogComponent; import org.apache.camel.support.processor.DefaultExchangeFormatter; @ApplicationScoped public static class EventHandler { public void onComponentAdd(@Observes ComponentAddEvent event) { if (event.getComponent() instanceof LogComponent) { /* Perform some custom configuration of the component */ LogComponent logComponent = ((LogComponent) event.getComponent()); DefaultExchangeFormatter formatter = new DefaultExchangeFormatter(); formatter.setShowExchangePattern(false); formatter.setShowBodyType(false); logComponent.setExchangeFormatter(formatter); } } }", "import jakarta.enterprise.context.ApplicationScoped; import jakarta.inject.Named; import org.apache.camel.component.log.LogComponent; import org.apache.camel.support.processor.DefaultExchangeFormatter; @ApplicationScoped public class Configurations { /** * Produces a {@link LogComponent} instance with a custom exchange formatter set-up. */ @Named(\"log\") 1 LogComponent log() { DefaultExchangeFormatter formatter = new DefaultExchangeFormatter(); formatter.setShowExchangePattern(false); formatter.setShowBodyType(false); LogComponent component = new LogComponent(); component.setExchangeFormatter(formatter); return component; } }" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/developing_applications_with_red_hat_build_of_apache_camel_for_quarkus/camel-quarkus-extensions-configuration
20.8. Checking Account Availability for Passwordless Access
20.8. Checking Account Availability for Passwordless Access Most of the time, for the Directory Server to return authentication information about a user account, a client actually binds (or attempts to bind) as that user. A bind attempt requires some sort of user credentials, usually a password or a certificate. While the Directory Server allows unauthenticated binds and anonymous binds, neither of those binds returns any user account information. There are some situations where a client requires information about a user account - specifically whether an account should be allowed to authenticate - in order to perform some other operation, but the client either does not have or does not use any credentials for the user account in Directory Server. Essentially, the client needs to perform a credential-less yet authenticated bind operation to retrieve the user account information (including password expiration information, if the account has a password). This can be done through an ldapsearch by passing the Account Usability Extension Control . This control acts as if it performs an authenticated bind operation for a given user and returns the account status for that user - but without actually binding to the server. This allows a client to determine whether that account can be used to log in and then to pass that account information to another application, like PAM. For example, using the Account Usability Extension Control can allow a system to use the Directory Server as its identity back end to store user data but to employ password-less authentication methods, like smart cards or SSH keys, where the authentication operation is performed outside Directory Server. 20.8.1. Searching for Entries Using the Account Usability Extension Control The Account Usability Extension Control is an extension for an ldapsearch . It returns an extra line for each returned entry that gives the account status and some information about the password policy for that account. A client or application can then use that status to evaluate authentication attempts made outside Directory Server for that user account. Basically, this control signals whether a user should be allowed to authenticate without having to perform an authentication operation. Note The OpenLDAP tools used by Directory Server do not support the Account Usability Extension Control. Other LDAP utilities, such as OpenDS, or other clients that support the control can be used instead. For example, using the OpenDS tools, the control can be specified using the -J option with the control OID (1.3.6.1.4.1.42.2.27.9.5.8) or with the accountusability:true flag: This can also be run for a specific entry: Note By default, only the Directory Manager can use the Account Usability Extension Control. To allow other users to use the Account Usability Extension Control, set an ACI on the supported control entry under cn=features . See Section 20.8.2, "Changing What Users Can Perform an Account Usability Search" . The control returns different messages, depending on the actual status of the account and (if the user has a password) the password policy settings for the user account. Table 20.1.
Possible Account Usability Control Result Messages Account Status Control Result Message Active account with a valid password The account is usable Active account with no password set The account is usable Expired password Password expired The password policy for the account is modified Password expired The account is locked and there is no lockout duration Password reset The account is locked and there is a lockout duration Time (in seconds) for automatic unlock of the account The password for the account should be reset at the first login Password reset The password has expired and grace logins are allowed Password expired and X grace login is allowed The password has expired and the number of grace logins is exhausted Password expired The password will expire (expiration warning) Password will expire in X number of seconds 20.8.2. Changing What Users Can Perform an Account Usability Search By default, only the Directory Manager can use the Account Usability Extension Control. Other users can use the Account Usability Extension Control by setting the appropriate ACI on the supported control entry. The control entry is named for the Account Usability Extension Control OID, 1.3.6.1.4.1.42.2.27.9.5.8. For example, to enable members of the cn=Administrators,ou=groups,dc=example,dc=com group to read the Account Usability Extension Control of all users:
[ "ldapsearch -D \"cn=Directory Manager\" -W -p 389 -h server.example.com -b \"dc=example,dc=com\" -s sub -J \"accountusability:true\" \"(objectclass=*)\" Account Usability Response Control # The account is usable dn: dc=example,dc=com objectClass: domain objectClass: top dc: example", "ldapsearch -D \"cn=Directory Manager\" -W -p 389 -h server.example.com -b \"uid=bjensen,ou=people,dc=example,dc=com\" -s base -J \"accountusability:true\" \"(objectclass=*)\" Account Usability Response Control # The account is usable dn: uid=bjensen,ou=people,dc=example,dc=com", "ldapmodify -D \"cn=Directory Manager\" -W -x dn: oid=1.3.6.1.4.1.42.2.27.9.5.8,cn=features,cn=config changetype: modify add: aci aci: (targetattr = \"*\")(version 3.0; acl \"Account Usable\"; allow (read)(groupdn = \"ldap:///cn=Administrators,ou=groups,dc=example,dc=com\");)" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/account-usability
7.28. corosync
7.28. corosync 7.28.1. RHBA-2015:1389 - corosync bug fix and enhancement update Updated corosync packages that fix one bug and add two enhancements are now available for Red Hat Enterprise Linux 6. The corosync packages provide the Corosync Cluster Engine and C Application Programming Interfaces (APIs) for Red Hat Enterprise Linux cluster software. Bug Fix BZ# 1136431 When the corosync utility was configured to use an IPv6 network and packet fragmentation was disabled on the Network Interface Controller (NIC) or switch, no packets were delivered. This update implements a correct calculation of the data fragment size, and packets are now delivered as intended. Enhancements BZ# 1163846 Previously, when the UDP unicast (UDPU) protocol was used, all messages were sent to all configured members instead of only to the active members. This behavior makes sense for merge detection messages, but otherwise it creates unnecessary traffic to missing members and can trigger excessive Address Resolution Protocol (ARP) requests on the network. The corosync code has been modified to send messages to the missing members only when required and otherwise to send messages only to the active ring members. As a result, most UDPU messages are now sent only to the active members, with the exception of the messages required for proper detection of a merge or a new member (1-2 pkts/sec). BZ# 742999 With this update, the corosync packages have been modified to test whether the network interfaces have different IP addresses, ports, and IP versions when using the Redundant Ring Protocol (RRP) mode. Now, corosync properly checks the correctness of the configuration file and prevents failures when using the RRP mode. Users of corosync are advised to upgrade to these updated packages, which fix this bug and add these enhancements.
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-corosync
Chapter 7. Available BPF Features
Chapter 7. Available BPF Features This chapter provides the complete list of Berkeley Packet Filter ( BPF ) features available in the kernel of this minor version of Red Hat Enterprise Linux 9. The tables include the lists of: System configuration and other options Available program types and supported helpers Available map types This chapter contains automatically generated output of the bpftool feature command. Table 7.1. System configuration and other options Option Value unprivileged_bpf_disabled 2 (bpf() syscall restricted to privileged users, admin can change) JIT compiler 1 (enabled) JIT compiler hardening 1 (enabled for unprivileged users) JIT compiler kallsyms exports 1 (enabled for root) Memory limit for JIT for unprivileged users 528482304 CONFIG_BPF y CONFIG_BPF_SYSCALL y CONFIG_HAVE_EBPF_JIT y CONFIG_BPF_JIT y CONFIG_BPF_JIT_ALWAYS_ON y CONFIG_DEBUG_INFO_BTF y CONFIG_DEBUG_INFO_BTF_MODULES y CONFIG_CGROUPS y CONFIG_CGROUP_BPF y CONFIG_CGROUP_NET_CLASSID y CONFIG_SOCK_CGROUP_DATA y CONFIG_BPF_EVENTS y CONFIG_KPROBE_EVENTS y CONFIG_UPROBE_EVENTS y CONFIG_TRACING y CONFIG_FTRACE_SYSCALLS y CONFIG_FUNCTION_ERROR_INJECTION y CONFIG_BPF_KPROBE_OVERRIDE n CONFIG_NET y CONFIG_XDP_SOCKETS y CONFIG_LWTUNNEL_BPF y CONFIG_NET_ACT_BPF m CONFIG_NET_CLS_BPF m CONFIG_NET_CLS_ACT y CONFIG_NET_SCH_INGRESS m CONFIG_XFRM y CONFIG_IP_ROUTE_CLASSID y CONFIG_IPV6_SEG6_BPF y CONFIG_BPF_LIRC_MODE2 n CONFIG_BPF_STREAM_PARSER y CONFIG_NETFILTER_XT_MATCH_BPF m CONFIG_BPFILTER n CONFIG_BPFILTER_UMH n CONFIG_TEST_BPF m CONFIG_HZ 1000 bpf() syscall available Large program size limit available Bounded loop support available ISA extension v2 available ISA extension v3 available Table 7.2. Available program types and supported helpers Program type Available helpers socket_filter bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_perf_event_output, bpf_skb_load_bytes, bpf_get_current_task, bpf_get_numa_node_id, bpf_get_socket_cookie, bpf_get_socket_uid, bpf_skb_load_bytes_relative, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_strtol, bpf_strtoul, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_task_pt_regs, bpf_skc_to_unix_sock, bpf_loop, bpf_strncmp, bpf_kptr_xchg, bpf_map_lookup_percpu_elem, bpf_skc_to_mptcp_sock, bpf_dynptr_from_mem, bpf_ringbuf_reserve_dynptr, bpf_ringbuf_submit_dynptr, bpf_ringbuf_discard_dynptr, bpf_dynptr_read, bpf_dynptr_write, bpf_dynptr_data, bpf_ktime_get_tai_ns, bpf_user_ringbuf_drain, bpf_cgrp_storage_get, bpf_cgrp_storage_delete kprobe bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_probe_read, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_pid_tgid, bpf_get_current_uid_gid, bpf_get_current_comm, bpf_perf_event_read, bpf_perf_event_output, bpf_get_stackid, bpf_get_current_task, bpf_current_task_under_cgroup, bpf_get_numa_node_id, bpf_probe_read_str, 
bpf_perf_event_read_value, bpf_get_stack, bpf_get_current_cgroup_id, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_strtol, bpf_strtoul, bpf_send_signal, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_send_signal_thread, bpf_jiffies64, bpf_get_ns_current_pid_tgid, bpf_get_current_ancestor_cgroup_id, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_get_task_stack, bpf_copy_from_user, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_task_storage_get, bpf_task_storage_delete, bpf_get_current_task_btf, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_get_func_ip, bpf_get_attach_cookie, bpf_task_pt_regs, bpf_get_branch_snapshot, bpf_find_vma, bpf_loop, bpf_strncmp, bpf_copy_from_user_task, bpf_kptr_xchg, bpf_map_lookup_percpu_elem, bpf_dynptr_from_mem, bpf_ringbuf_reserve_dynptr, bpf_ringbuf_submit_dynptr, bpf_ringbuf_discard_dynptr, bpf_dynptr_read, bpf_dynptr_write, bpf_dynptr_data, bpf_ktime_get_tai_ns, bpf_user_ringbuf_drain, bpf_cgrp_storage_get, bpf_cgrp_storage_delete sched_cls bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_skb_store_bytes, bpf_l3_csum_replace, bpf_l4_csum_replace, bpf_tail_call, bpf_clone_redirect, bpf_get_cgroup_classid, bpf_skb_vlan_push, bpf_skb_vlan_pop, bpf_skb_get_tunnel_key, bpf_skb_set_tunnel_key, bpf_redirect, bpf_get_route_realm, bpf_perf_event_output, bpf_skb_load_bytes, bpf_csum_diff, bpf_skb_get_tunnel_opt, bpf_skb_set_tunnel_opt, bpf_skb_change_proto, bpf_skb_change_type, bpf_skb_under_cgroup, bpf_get_hash_recalc, bpf_get_current_task, bpf_skb_change_tail, bpf_skb_pull_data, bpf_csum_update, bpf_set_hash_invalid, bpf_get_numa_node_id, bpf_skb_change_head, bpf_get_socket_cookie, bpf_get_socket_uid, bpf_set_hash, bpf_skb_adjust_room, bpf_skb_get_xfrm_state, bpf_skb_load_bytes_relative, bpf_fib_lookup, bpf_skb_cgroup_id, bpf_skb_ancestor_cgroup_id, bpf_sk_lookup_tcp, bpf_sk_lookup_udp, bpf_sk_release, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_sk_fullsock, bpf_tcp_sock, bpf_skb_ecn_set_ce, bpf_get_listener_sock, bpf_skc_lookup_tcp, bpf_tcp_check_syncookie, bpf_strtol, bpf_strtoul, bpf_sk_storage_get, bpf_sk_storage_delete, bpf_tcp_gen_syncookie, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_sk_assign, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_csum_level, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_skb_cgroup_classid, bpf_redirect_neigh, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_redirect_peer, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_check_mtu, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_task_pt_regs, bpf_skc_to_unix_sock, bpf_loop, bpf_strncmp, bpf_skb_set_tstamp, bpf_kptr_xchg, bpf_map_lookup_percpu_elem, bpf_skc_to_mptcp_sock, bpf_dynptr_from_mem, bpf_ringbuf_reserve_dynptr, bpf_ringbuf_submit_dynptr, bpf_ringbuf_discard_dynptr, bpf_dynptr_read, bpf_dynptr_write, bpf_dynptr_data, bpf_tcp_raw_gen_syncookie_ipv4, bpf_tcp_raw_gen_syncookie_ipv6, 
bpf_tcp_raw_check_syncookie_ipv4, bpf_tcp_raw_check_syncookie_ipv6, bpf_ktime_get_tai_ns, bpf_user_ringbuf_drain, bpf_cgrp_storage_get, bpf_cgrp_storage_delete sched_act bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_skb_store_bytes, bpf_l3_csum_replace, bpf_l4_csum_replace, bpf_tail_call, bpf_clone_redirect, bpf_get_cgroup_classid, bpf_skb_vlan_push, bpf_skb_vlan_pop, bpf_skb_get_tunnel_key, bpf_skb_set_tunnel_key, bpf_redirect, bpf_get_route_realm, bpf_perf_event_output, bpf_skb_load_bytes, bpf_csum_diff, bpf_skb_get_tunnel_opt, bpf_skb_set_tunnel_opt, bpf_skb_change_proto, bpf_skb_change_type, bpf_skb_under_cgroup, bpf_get_hash_recalc, bpf_get_current_task, bpf_skb_change_tail, bpf_skb_pull_data, bpf_csum_update, bpf_set_hash_invalid, bpf_get_numa_node_id, bpf_skb_change_head, bpf_get_socket_cookie, bpf_get_socket_uid, bpf_set_hash, bpf_skb_adjust_room, bpf_skb_get_xfrm_state, bpf_skb_load_bytes_relative, bpf_fib_lookup, bpf_skb_cgroup_id, bpf_skb_ancestor_cgroup_id, bpf_sk_lookup_tcp, bpf_sk_lookup_udp, bpf_sk_release, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_sk_fullsock, bpf_tcp_sock, bpf_skb_ecn_set_ce, bpf_get_listener_sock, bpf_skc_lookup_tcp, bpf_tcp_check_syncookie, bpf_strtol, bpf_strtoul, bpf_sk_storage_get, bpf_sk_storage_delete, bpf_tcp_gen_syncookie, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_sk_assign, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_csum_level, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_skb_cgroup_classid, bpf_redirect_neigh, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_redirect_peer, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_check_mtu, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_task_pt_regs, bpf_skc_to_unix_sock, bpf_loop, bpf_strncmp, bpf_skb_set_tstamp, bpf_kptr_xchg, bpf_map_lookup_percpu_elem, bpf_skc_to_mptcp_sock, bpf_dynptr_from_mem, bpf_ringbuf_reserve_dynptr, bpf_ringbuf_submit_dynptr, bpf_ringbuf_discard_dynptr, bpf_dynptr_read, bpf_dynptr_write, bpf_dynptr_data, bpf_tcp_raw_gen_syncookie_ipv4, bpf_tcp_raw_gen_syncookie_ipv6, bpf_tcp_raw_check_syncookie_ipv4, bpf_tcp_raw_check_syncookie_ipv6, bpf_ktime_get_tai_ns, bpf_user_ringbuf_drain, bpf_cgrp_storage_get, bpf_cgrp_storage_delete tracepoint bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_probe_read, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_pid_tgid, bpf_get_current_uid_gid, bpf_get_current_comm, bpf_perf_event_read, bpf_perf_event_output, bpf_get_stackid, bpf_get_current_task, bpf_current_task_under_cgroup, bpf_get_numa_node_id, bpf_probe_read_str, bpf_perf_event_read_value, bpf_get_stack, bpf_get_current_cgroup_id, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_strtol, bpf_strtoul, bpf_send_signal, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_send_signal_thread, bpf_jiffies64, bpf_get_ns_current_pid_tgid, bpf_get_current_ancestor_cgroup_id, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, 
bpf_ringbuf_query, bpf_get_task_stack, bpf_copy_from_user, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_task_storage_get, bpf_task_storage_delete, bpf_get_current_task_btf, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_get_func_ip, bpf_get_attach_cookie, bpf_task_pt_regs, bpf_get_branch_snapshot, bpf_find_vma, bpf_loop, bpf_strncmp, bpf_copy_from_user_task, bpf_kptr_xchg, bpf_map_lookup_percpu_elem, bpf_dynptr_from_mem, bpf_ringbuf_reserve_dynptr, bpf_ringbuf_submit_dynptr, bpf_ringbuf_discard_dynptr, bpf_dynptr_read, bpf_dynptr_write, bpf_dynptr_data, bpf_ktime_get_tai_ns, bpf_user_ringbuf_drain, bpf_cgrp_storage_get, bpf_cgrp_storage_delete xdp bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_redirect, bpf_perf_event_output, bpf_csum_diff, bpf_get_current_task, bpf_get_numa_node_id, bpf_xdp_adjust_head, bpf_redirect_map, bpf_xdp_adjust_meta, bpf_xdp_adjust_tail, bpf_fib_lookup, bpf_sk_lookup_tcp, bpf_sk_lookup_udp, bpf_sk_release, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_skc_lookup_tcp, bpf_tcp_check_syncookie, bpf_strtol, bpf_strtoul, bpf_tcp_gen_syncookie, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_check_mtu, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_task_pt_regs, bpf_skc_to_unix_sock, bpf_loop, bpf_strncmp, bpf_xdp_get_buff_len, bpf_xdp_load_bytes, bpf_xdp_store_bytes, bpf_kptr_xchg, bpf_map_lookup_percpu_elem, bpf_skc_to_mptcp_sock, bpf_dynptr_from_mem, bpf_ringbuf_reserve_dynptr, bpf_ringbuf_submit_dynptr, bpf_ringbuf_discard_dynptr, bpf_dynptr_read, bpf_dynptr_write, bpf_dynptr_data, bpf_tcp_raw_gen_syncookie_ipv4, bpf_tcp_raw_gen_syncookie_ipv6, bpf_tcp_raw_check_syncookie_ipv4, bpf_tcp_raw_check_syncookie_ipv6, bpf_ktime_get_tai_ns, bpf_user_ringbuf_drain, bpf_cgrp_storage_get, bpf_cgrp_storage_delete perf_event bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_probe_read, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_pid_tgid, bpf_get_current_uid_gid, bpf_get_current_comm, bpf_perf_event_read, bpf_perf_event_output, bpf_get_stackid, bpf_get_current_task, bpf_current_task_under_cgroup, bpf_get_numa_node_id, bpf_probe_read_str, bpf_perf_event_read_value, bpf_perf_prog_read_value, bpf_get_stack, bpf_get_current_cgroup_id, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_strtol, bpf_strtoul, bpf_send_signal, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_send_signal_thread, bpf_jiffies64, bpf_read_branch_records, bpf_get_ns_current_pid_tgid, bpf_get_current_ancestor_cgroup_id, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_get_task_stack, bpf_copy_from_user, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, 
bpf_task_storage_get, bpf_task_storage_delete, bpf_get_current_task_btf, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_get_func_ip, bpf_get_attach_cookie, bpf_task_pt_regs, bpf_get_branch_snapshot, bpf_find_vma, bpf_loop, bpf_strncmp, bpf_copy_from_user_task, bpf_kptr_xchg, bpf_map_lookup_percpu_elem, bpf_dynptr_from_mem, bpf_ringbuf_reserve_dynptr, bpf_ringbuf_submit_dynptr, bpf_ringbuf_discard_dynptr, bpf_dynptr_read, bpf_dynptr_write, bpf_dynptr_data, bpf_ktime_get_tai_ns, bpf_user_ringbuf_drain, bpf_cgrp_storage_get, bpf_cgrp_storage_delete cgroup_skb bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_perf_event_output, bpf_skb_load_bytes, bpf_get_current_task, bpf_get_numa_node_id, bpf_get_socket_cookie, bpf_get_socket_uid, bpf_skb_load_bytes_relative, bpf_skb_cgroup_id, bpf_get_local_storage, bpf_skb_ancestor_cgroup_id, bpf_sk_lookup_tcp, bpf_sk_lookup_udp, bpf_sk_release, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_sk_fullsock, bpf_tcp_sock, bpf_skb_ecn_set_ce, bpf_get_listener_sock, bpf_skc_lookup_tcp, bpf_strtol, bpf_strtoul, bpf_sk_storage_get, bpf_sk_storage_delete, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_sk_cgroup_id, bpf_sk_ancestor_cgroup_id, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_task_pt_regs, bpf_skc_to_unix_sock, bpf_loop, bpf_strncmp, bpf_kptr_xchg, bpf_map_lookup_percpu_elem, bpf_skc_to_mptcp_sock, bpf_dynptr_from_mem, bpf_ringbuf_reserve_dynptr, bpf_ringbuf_submit_dynptr, bpf_ringbuf_discard_dynptr, bpf_dynptr_read, bpf_dynptr_write, bpf_dynptr_data, bpf_ktime_get_tai_ns, bpf_user_ringbuf_drain, bpf_cgrp_storage_get, bpf_cgrp_storage_delete cgroup_sock bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_pid_tgid, bpf_get_current_uid_gid, bpf_get_current_comm, bpf_get_cgroup_classid, bpf_perf_event_output, bpf_get_current_task, bpf_get_numa_node_id, bpf_get_socket_cookie, bpf_get_current_cgroup_id, bpf_get_local_storage, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_strtol, bpf_strtoul, bpf_sk_storage_get, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_get_netns_cookie, bpf_get_current_ancestor_cgroup_id, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_task_pt_regs, bpf_loop, bpf_strncmp, bpf_get_retval, bpf_set_retval, bpf_kptr_xchg, bpf_map_lookup_percpu_elem, bpf_dynptr_from_mem, bpf_ringbuf_reserve_dynptr, bpf_ringbuf_submit_dynptr, 
bpf_ringbuf_discard_dynptr, bpf_dynptr_read, bpf_dynptr_write, bpf_dynptr_data, bpf_ktime_get_tai_ns, bpf_user_ringbuf_drain, bpf_cgrp_storage_get, bpf_cgrp_storage_delete lwt_in bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_cgroup_classid, bpf_get_route_realm, bpf_perf_event_output, bpf_skb_load_bytes, bpf_csum_diff, bpf_skb_under_cgroup, bpf_get_hash_recalc, bpf_get_current_task, bpf_skb_pull_data, bpf_get_numa_node_id, bpf_lwt_push_encap, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_strtol, bpf_strtoul, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_task_pt_regs, bpf_skc_to_unix_sock, bpf_loop, bpf_strncmp, bpf_kptr_xchg, bpf_map_lookup_percpu_elem, bpf_skc_to_mptcp_sock, bpf_dynptr_from_mem, bpf_ringbuf_reserve_dynptr, bpf_ringbuf_submit_dynptr, bpf_ringbuf_discard_dynptr, bpf_dynptr_read, bpf_dynptr_write, bpf_dynptr_data, bpf_ktime_get_tai_ns, bpf_user_ringbuf_drain, bpf_cgrp_storage_get, bpf_cgrp_storage_delete lwt_out bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_cgroup_classid, bpf_get_route_realm, bpf_perf_event_output, bpf_skb_load_bytes, bpf_csum_diff, bpf_skb_under_cgroup, bpf_get_hash_recalc, bpf_get_current_task, bpf_skb_pull_data, bpf_get_numa_node_id, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_strtol, bpf_strtoul, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_task_pt_regs, bpf_skc_to_unix_sock, bpf_loop, bpf_strncmp, bpf_kptr_xchg, bpf_map_lookup_percpu_elem, bpf_skc_to_mptcp_sock, bpf_dynptr_from_mem, bpf_ringbuf_reserve_dynptr, bpf_ringbuf_submit_dynptr, bpf_ringbuf_discard_dynptr, bpf_dynptr_read, bpf_dynptr_write, bpf_dynptr_data, bpf_ktime_get_tai_ns, bpf_user_ringbuf_drain, bpf_cgrp_storage_get, bpf_cgrp_storage_delete lwt_xmit bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_skb_store_bytes, bpf_l3_csum_replace, bpf_l4_csum_replace, bpf_tail_call, bpf_clone_redirect, bpf_get_cgroup_classid, bpf_skb_get_tunnel_key, bpf_skb_set_tunnel_key, bpf_redirect, bpf_get_route_realm, bpf_perf_event_output, bpf_skb_load_bytes, bpf_csum_diff, bpf_skb_get_tunnel_opt, bpf_skb_set_tunnel_opt, bpf_skb_under_cgroup, bpf_get_hash_recalc, bpf_get_current_task, 
bpf_skb_change_tail, bpf_skb_pull_data, bpf_csum_update, bpf_set_hash_invalid, bpf_get_numa_node_id, bpf_skb_change_head, bpf_lwt_push_encap, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_strtol, bpf_strtoul, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_csum_level, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_task_pt_regs, bpf_skc_to_unix_sock, bpf_loop, bpf_strncmp, bpf_kptr_xchg, bpf_map_lookup_percpu_elem, bpf_skc_to_mptcp_sock, bpf_dynptr_from_mem, bpf_ringbuf_reserve_dynptr, bpf_ringbuf_submit_dynptr, bpf_ringbuf_discard_dynptr, bpf_dynptr_read, bpf_dynptr_write, bpf_dynptr_data, bpf_ktime_get_tai_ns, bpf_user_ringbuf_drain, bpf_cgrp_storage_get, bpf_cgrp_storage_delete sock_ops bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_perf_event_output, bpf_get_current_task, bpf_get_numa_node_id, bpf_get_socket_cookie, bpf_setsockopt, bpf_sock_map_update, bpf_getsockopt, bpf_sock_ops_cb_flags_set, bpf_sock_hash_update, bpf_get_local_storage, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_tcp_sock, bpf_strtol, bpf_strtoul, bpf_sk_storage_get, bpf_sk_storage_delete, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_get_netns_cookie, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_load_hdr_opt, bpf_store_hdr_opt, bpf_reserve_hdr_opt, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_task_pt_regs, bpf_skc_to_unix_sock, bpf_loop, bpf_strncmp, bpf_kptr_xchg, bpf_map_lookup_percpu_elem, bpf_skc_to_mptcp_sock, bpf_dynptr_from_mem, bpf_ringbuf_reserve_dynptr, bpf_ringbuf_submit_dynptr, bpf_ringbuf_discard_dynptr, bpf_dynptr_read, bpf_dynptr_write, bpf_dynptr_data, bpf_ktime_get_tai_ns, bpf_user_ringbuf_drain, bpf_cgrp_storage_get, bpf_cgrp_storage_delete sk_skb bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_skb_store_bytes, bpf_tail_call, bpf_perf_event_output, bpf_skb_load_bytes, bpf_get_current_task, bpf_skb_change_tail, bpf_skb_pull_data, bpf_get_numa_node_id, bpf_skb_change_head, bpf_get_socket_cookie, bpf_get_socket_uid, bpf_skb_adjust_room, bpf_sk_redirect_map, bpf_sk_redirect_hash, bpf_sk_lookup_tcp, bpf_sk_lookup_udp, bpf_sk_release, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_skc_lookup_tcp, bpf_strtol, bpf_strtoul, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, 
bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_task_pt_regs, bpf_skc_to_unix_sock, bpf_loop, bpf_strncmp, bpf_kptr_xchg, bpf_map_lookup_percpu_elem, bpf_skc_to_mptcp_sock, bpf_dynptr_from_mem, bpf_ringbuf_reserve_dynptr, bpf_ringbuf_submit_dynptr, bpf_ringbuf_discard_dynptr, bpf_dynptr_read, bpf_dynptr_write, bpf_dynptr_data, bpf_ktime_get_tai_ns, bpf_user_ringbuf_drain, bpf_cgrp_storage_get, bpf_cgrp_storage_delete cgroup_device bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_pid_tgid, bpf_get_current_uid_gid, bpf_get_current_comm, bpf_get_cgroup_classid, bpf_perf_event_output, bpf_get_current_task, bpf_get_numa_node_id, bpf_get_current_cgroup_id, bpf_get_local_storage, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_strtol, bpf_strtoul, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_get_current_ancestor_cgroup_id, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_task_pt_regs, bpf_loop, bpf_strncmp, bpf_kptr_xchg, bpf_map_lookup_percpu_elem, bpf_dynptr_from_mem, bpf_ringbuf_reserve_dynptr, bpf_ringbuf_submit_dynptr, bpf_ringbuf_discard_dynptr, bpf_dynptr_read, bpf_dynptr_write, bpf_dynptr_data, bpf_ktime_get_tai_ns, bpf_user_ringbuf_drain, bpf_cgrp_storage_get, bpf_cgrp_storage_delete sk_msg bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_pid_tgid, bpf_get_current_uid_gid, bpf_get_cgroup_classid, bpf_perf_event_output, bpf_get_current_task, bpf_get_numa_node_id, bpf_msg_redirect_map, bpf_msg_apply_bytes, bpf_msg_cork_bytes, bpf_msg_pull_data, bpf_msg_redirect_hash, bpf_get_current_cgroup_id, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_msg_push_data, bpf_msg_pop_data, bpf_spin_lock, bpf_spin_unlock, bpf_strtol, bpf_strtoul, bpf_sk_storage_get, bpf_sk_storage_delete, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_get_netns_cookie, bpf_get_current_ancestor_cgroup_id, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_task_pt_regs, bpf_skc_to_unix_sock, bpf_loop, bpf_strncmp, bpf_kptr_xchg, bpf_map_lookup_percpu_elem, bpf_skc_to_mptcp_sock, bpf_dynptr_from_mem, bpf_ringbuf_reserve_dynptr, bpf_ringbuf_submit_dynptr, 
bpf_ringbuf_discard_dynptr, bpf_dynptr_read, bpf_dynptr_write, bpf_dynptr_data, bpf_ktime_get_tai_ns, bpf_user_ringbuf_drain, bpf_cgrp_storage_get, bpf_cgrp_storage_delete raw_tracepoint bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_probe_read, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_pid_tgid, bpf_get_current_uid_gid, bpf_get_current_comm, bpf_perf_event_read, bpf_perf_event_output, bpf_get_stackid, bpf_get_current_task, bpf_current_task_under_cgroup, bpf_get_numa_node_id, bpf_probe_read_str, bpf_perf_event_read_value, bpf_get_stack, bpf_get_current_cgroup_id, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_strtol, bpf_strtoul, bpf_send_signal, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_send_signal_thread, bpf_jiffies64, bpf_get_ns_current_pid_tgid, bpf_get_current_ancestor_cgroup_id, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_get_task_stack, bpf_copy_from_user, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_task_storage_get, bpf_task_storage_delete, bpf_get_current_task_btf, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_get_func_ip, bpf_task_pt_regs, bpf_get_branch_snapshot, bpf_find_vma, bpf_loop, bpf_strncmp, bpf_copy_from_user_task, bpf_kptr_xchg, bpf_map_lookup_percpu_elem, bpf_dynptr_from_mem, bpf_ringbuf_reserve_dynptr, bpf_ringbuf_submit_dynptr, bpf_ringbuf_discard_dynptr, bpf_dynptr_read, bpf_dynptr_write, bpf_dynptr_data, bpf_ktime_get_tai_ns, bpf_user_ringbuf_drain, bpf_cgrp_storage_get, bpf_cgrp_storage_delete cgroup_sock_addr bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_pid_tgid, bpf_get_current_uid_gid, bpf_get_current_comm, bpf_get_cgroup_classid, bpf_perf_event_output, bpf_get_current_task, bpf_get_numa_node_id, bpf_get_socket_cookie, bpf_setsockopt, bpf_getsockopt, bpf_bind, bpf_get_current_cgroup_id, bpf_get_local_storage, bpf_sk_lookup_tcp, bpf_sk_lookup_udp, bpf_sk_release, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_skc_lookup_tcp, bpf_strtol, bpf_strtoul, bpf_sk_storage_get, bpf_sk_storage_delete, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_get_netns_cookie, bpf_get_current_ancestor_cgroup_id, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_task_pt_regs, bpf_skc_to_unix_sock, bpf_loop, bpf_strncmp, bpf_get_retval, bpf_set_retval, bpf_kptr_xchg, bpf_map_lookup_percpu_elem, bpf_skc_to_mptcp_sock, bpf_dynptr_from_mem, bpf_ringbuf_reserve_dynptr, bpf_ringbuf_submit_dynptr, bpf_ringbuf_discard_dynptr, bpf_dynptr_read, bpf_dynptr_write, bpf_dynptr_data, bpf_ktime_get_tai_ns, bpf_user_ringbuf_drain, bpf_cgrp_storage_get, bpf_cgrp_storage_delete lwt_seg6local bpf_map_lookup_elem, 
bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_cgroup_classid, bpf_get_route_realm, bpf_perf_event_output, bpf_skb_load_bytes, bpf_csum_diff, bpf_skb_under_cgroup, bpf_get_hash_recalc, bpf_get_current_task, bpf_skb_pull_data, bpf_get_numa_node_id, bpf_lwt_seg6_store_bytes, bpf_lwt_seg6_adjust_srh, bpf_lwt_seg6_action, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_strtol, bpf_strtoul, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_task_pt_regs, bpf_skc_to_unix_sock, bpf_loop, bpf_strncmp, bpf_kptr_xchg, bpf_map_lookup_percpu_elem, bpf_skc_to_mptcp_sock, bpf_dynptr_from_mem, bpf_ringbuf_reserve_dynptr, bpf_ringbuf_submit_dynptr, bpf_ringbuf_discard_dynptr, bpf_dynptr_read, bpf_dynptr_write, bpf_dynptr_data, bpf_ktime_get_tai_ns, bpf_user_ringbuf_drain, bpf_cgrp_storage_get, bpf_cgrp_storage_delete lirc_mode2 not supported sk_reuseport bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_skb_load_bytes, bpf_get_current_task, bpf_get_numa_node_id, bpf_get_socket_cookie, bpf_skb_load_bytes_relative, bpf_sk_select_reuseport, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_strtol, bpf_strtoul, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_task_pt_regs, bpf_loop, bpf_strncmp, bpf_kptr_xchg, bpf_map_lookup_percpu_elem, bpf_dynptr_from_mem, bpf_ringbuf_reserve_dynptr, bpf_ringbuf_submit_dynptr, bpf_ringbuf_discard_dynptr, bpf_dynptr_read, bpf_dynptr_write, bpf_dynptr_data, bpf_ktime_get_tai_ns, bpf_user_ringbuf_drain, bpf_cgrp_storage_get, bpf_cgrp_storage_delete flow_dissector bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_skb_load_bytes, bpf_get_current_task, bpf_get_numa_node_id, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_strtol, bpf_strtoul, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, 
bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_task_pt_regs, bpf_skc_to_unix_sock, bpf_loop, bpf_strncmp, bpf_kptr_xchg, bpf_map_lookup_percpu_elem, bpf_skc_to_mptcp_sock, bpf_dynptr_from_mem, bpf_ringbuf_reserve_dynptr, bpf_ringbuf_submit_dynptr, bpf_ringbuf_discard_dynptr, bpf_dynptr_read, bpf_dynptr_write, bpf_dynptr_data, bpf_ktime_get_tai_ns, bpf_user_ringbuf_drain, bpf_cgrp_storage_get, bpf_cgrp_storage_delete cgroup_sysctl bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_pid_tgid, bpf_get_current_uid_gid, bpf_get_current_comm, bpf_get_cgroup_classid, bpf_perf_event_output, bpf_get_current_task, bpf_get_numa_node_id, bpf_get_current_cgroup_id, bpf_get_local_storage, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_sysctl_get_name, bpf_sysctl_get_current_value, bpf_sysctl_get_new_value, bpf_sysctl_set_new_value, bpf_strtol, bpf_strtoul, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_get_current_ancestor_cgroup_id, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_task_pt_regs, bpf_loop, bpf_strncmp, bpf_kptr_xchg, bpf_map_lookup_percpu_elem, bpf_dynptr_from_mem, bpf_ringbuf_reserve_dynptr, bpf_ringbuf_submit_dynptr, bpf_ringbuf_discard_dynptr, bpf_dynptr_read, bpf_dynptr_write, bpf_dynptr_data, bpf_ktime_get_tai_ns, bpf_user_ringbuf_drain, bpf_cgrp_storage_get, bpf_cgrp_storage_delete raw_tracepoint_writable bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_probe_read, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_pid_tgid, bpf_get_current_uid_gid, bpf_get_current_comm, bpf_perf_event_read, bpf_perf_event_output, bpf_get_stackid, bpf_get_current_task, bpf_current_task_under_cgroup, bpf_get_numa_node_id, bpf_probe_read_str, bpf_perf_event_read_value, bpf_get_stack, bpf_get_current_cgroup_id, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_strtol, bpf_strtoul, bpf_send_signal, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_send_signal_thread, bpf_jiffies64, bpf_get_ns_current_pid_tgid, bpf_get_current_ancestor_cgroup_id, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_get_task_stack, bpf_copy_from_user, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_task_storage_get, bpf_task_storage_delete, bpf_get_current_task_btf, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_get_func_ip, bpf_task_pt_regs, bpf_get_branch_snapshot, bpf_find_vma, bpf_loop, bpf_strncmp, bpf_copy_from_user_task, bpf_kptr_xchg, bpf_map_lookup_percpu_elem, bpf_dynptr_from_mem, bpf_ringbuf_reserve_dynptr, bpf_ringbuf_submit_dynptr, bpf_ringbuf_discard_dynptr, bpf_dynptr_read, bpf_dynptr_write, bpf_dynptr_data, bpf_ktime_get_tai_ns, bpf_user_ringbuf_drain, bpf_cgrp_storage_get, bpf_cgrp_storage_delete cgroup_sockopt bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, 
bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_pid_tgid, bpf_get_current_uid_gid, bpf_get_current_comm, bpf_get_cgroup_classid, bpf_perf_event_output, bpf_get_current_task, bpf_get_numa_node_id, bpf_get_current_cgroup_id, bpf_get_local_storage, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_tcp_sock, bpf_strtol, bpf_strtoul, bpf_sk_storage_get, bpf_sk_storage_delete, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_get_netns_cookie, bpf_get_current_ancestor_cgroup_id, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_task_pt_regs, bpf_loop, bpf_strncmp, bpf_get_retval, bpf_set_retval, bpf_kptr_xchg, bpf_map_lookup_percpu_elem, bpf_dynptr_from_mem, bpf_ringbuf_reserve_dynptr, bpf_ringbuf_submit_dynptr, bpf_ringbuf_discard_dynptr, bpf_dynptr_read, bpf_dynptr_write, bpf_dynptr_data, bpf_ktime_get_tai_ns, bpf_user_ringbuf_drain, bpf_cgrp_storage_get, bpf_cgrp_storage_delete tracing not supported struct_ops not supported ext not supported lsm not supported sk_lookup bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_perf_event_output, bpf_get_current_task, bpf_get_numa_node_id, bpf_sk_release, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_strtol, bpf_strtoul, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_sk_assign, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_task_pt_regs, bpf_skc_to_unix_sock, bpf_loop, bpf_strncmp, bpf_kptr_xchg, bpf_map_lookup_percpu_elem, bpf_skc_to_mptcp_sock, bpf_dynptr_from_mem, bpf_ringbuf_reserve_dynptr, bpf_ringbuf_submit_dynptr, bpf_ringbuf_discard_dynptr, bpf_dynptr_read, bpf_dynptr_write, bpf_dynptr_data, bpf_ktime_get_tai_ns, bpf_user_ringbuf_drain, bpf_cgrp_storage_get, bpf_cgrp_storage_delete syscall bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_probe_read, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_pid_tgid, bpf_get_current_uid_gid, bpf_get_current_comm, bpf_perf_event_read, bpf_perf_event_output, bpf_get_stackid, bpf_get_current_task, bpf_current_task_under_cgroup, bpf_get_numa_node_id, bpf_probe_read_str, bpf_get_socket_cookie, bpf_perf_event_read_value, bpf_get_stack, bpf_get_current_cgroup_id, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_strtol, bpf_strtoul, bpf_sk_storage_get, bpf_sk_storage_delete, bpf_send_signal, bpf_skb_output, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_send_signal_thread, bpf_jiffies64, bpf_get_ns_current_pid_tgid, 
bpf_xdp_output, bpf_get_current_ancestor_cgroup_id, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_get_task_stack, bpf_d_path, bpf_copy_from_user, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_task_storage_get, bpf_task_storage_delete, bpf_get_current_task_btf, bpf_sock_from_file, bpf_for_each_map_elem, bpf_snprintf, bpf_sys_bpf, bpf_btf_find_by_name_kind, bpf_sys_close, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_get_func_ip, bpf_task_pt_regs, bpf_get_branch_snapshot, bpf_skc_to_unix_sock, bpf_kallsyms_lookup_name, bpf_find_vma, bpf_loop, bpf_strncmp, bpf_xdp_get_buff_len, bpf_copy_from_user_task, bpf_kptr_xchg, bpf_map_lookup_percpu_elem, bpf_skc_to_mptcp_sock, bpf_dynptr_from_mem, bpf_ringbuf_reserve_dynptr, bpf_ringbuf_submit_dynptr, bpf_ringbuf_discard_dynptr, bpf_dynptr_read, bpf_dynptr_write, bpf_dynptr_data, bpf_ktime_get_tai_ns, bpf_user_ringbuf_drain, bpf_cgrp_storage_get, bpf_cgrp_storage_delete Table 7.3. Available map types Map type Available hash yes array yes prog_array yes perf_event_array yes percpu_hash yes percpu_array yes stack_trace yes cgroup_array yes lru_hash yes lru_percpu_hash yes lpm_trie yes array_of_maps yes hash_of_maps yes devmap yes sockmap yes cpumap yes xskmap yes sockhash yes cgroup_storage yes reuseport_sockarray yes percpu_cgroup_storage yes queue yes stack yes sk_storage yes devmap_hash yes struct_ops yes ringbuf yes inode_storage yes task_storage yes bloom_filter yes user_ringbuf yes cgrp_storage yes
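The tables above list what this kernel build supports; to confirm what a particular running kernel actually exposes, you can probe it directly with bpftool. The following is an illustrative sketch, not part of the original reference: it assumes the bpftool utility is installed and that its output wording matches recent releases (the exact strings can vary between bpftool versions).
# Dump the kernel's eBPF capabilities: program types, map types, helper
# functions available per program type, and related kernel configuration.
sudo bpftool feature probe kernel > bpf-features.txt
# Map types the kernel reports as available.
grep 'map_type' bpf-features.txt
# Helper functions usable from a specific program type, for example kprobe.
grep -A 40 'program type kprobe' bpf-features.txt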
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/9.3_release_notes/available_bpf_features
About
About Red Hat Advanced Cluster Security for Kubernetes 4.6 Welcome to Red Hat Advanced Cluster Security for Kubernetes Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/about/index
Chapter 9. Installing a cluster on IBM Cloud in a restricted network
Chapter 9. Installing a cluster on IBM Cloud in a restricted network In OpenShift Container Platform 4.15, you can install a cluster in a restricted network by creating an internal mirror of the installation release content that is accessible to an existing Virtual Private Cloud (VPC) on IBM Cloud(R). 9.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You configured an IBM Cloud account to host the cluster. You have a container image registry that is accessible to the internet and your restricted network. The container image registry should mirror the contents of the OpenShift image registry and contain the installation media. For more information, see Mirroring images for a disconnected installation using the oc-mirror plugin . You have an existing VPC on IBM Cloud(R) that meets the following requirements: The VPC contains the mirror registry or has firewall rules or a peering connection to access the mirror registry that is hosted elsewhere. The VPC can access IBM Cloud(R) service endpoints using a public endpoint. If network restrictions limit access to public service endpoints, evaluate those services for alternate endpoints that might be available. For more information see Access to IBM service endpoints . You cannot use the VPC that the installation program provisions by default. If you plan on configuring endpoint gateways to use IBM Cloud(R) Virtual Private Endpoints, consider the following requirements: Endpoint gateway support is currently limited to the us-east and us-south regions. The VPC must allow traffic to and from the endpoint gateways. You can use the VPC's default security group, or a new security group, to allow traffic on port 443. For more information, see Allowing endpoint gateway traffic . If you use a firewall, you configured it to allow the sites that your cluster requires access to. You configured the ccoctl utility before you installed the cluster. For more information, see Configuring IAM for IBM Cloud VPC . 9.2. About installations in restricted networks In OpenShift Container Platform 4.15, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. 9.2.1. Required internet access and an installation host You complete the installation using a bastion host or portable device that can access both the internet and your closed network. You must use a host with internet access to: Download the installation program, the OpenShift CLI ( oc ), and the CCO utility ( ccoctl ). Use the installation program to locate the Red Hat Enterprise Linux CoreOS (RHCOS) image and create the installation configuration file. Use oc to extract ccoctl from the CCO container image. Use oc and ccoctl to configure IAM for IBM Cloud(R). 9.2.2. Access to a mirror registry To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your restricted network, or by using other methods that meet your organization's security restrictions. For more information on mirroring images for a disconnected installation, see "Additional resources". 9.2.3. 
Access to IBM service endpoints The installation program requires access to the following IBM Cloud(R) service endpoints: Cloud Object Storage DNS Services Global Search Global Tagging Identity Services Resource Controller Resource Manager VPC Note If you are specifying an IBM(R) Key Protect for IBM Cloud(R) root key as part of the installation process, the service endpoint for Key Protect is also required. By default, the public endpoint is used to access the service. If network restrictions limit access to public service endpoints, you can override the default behavior. Before deploying the cluster, you can update the installation configuration file ( install-config.yaml ) to specify the URI of an alternate service endpoint. For more information on usage, see "Additional resources". 9.2.4. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. Additional resources Mirroring images for a disconnected installation using the oc-mirror plugin Additional IBM Cloud configuration parameters 9.3. About using a custom VPC In OpenShift Container Platform 4.15, you can deploy a cluster into the subnets of an existing IBM(R) Virtual Private Cloud (VPC). Deploying OpenShift Container Platform into an existing VPC can help you avoid limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option. Because the installation program cannot know what other components are in your existing subnets, it cannot choose subnet CIDRs and so forth. You must configure networking for the subnets to which you will install the cluster. 9.3.1. Requirements for using your VPC You must correctly configure the existing VPC and its subnets before you install the cluster. The installation program does not create the following components: NAT gateways Subnets Route tables VPC network The installation program cannot: Subdivide network ranges for the cluster to use Set route tables for the subnets Set VPC options like DHCP Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. 9.3.2. VPC validation The VPC and all of the subnets must be in an existing resource group. The cluster is deployed to the existing VPC. As part of the installation, specify the following in the install-config.yaml file: The name of the existing resource group that contains the VPC and subnets ( networkResourceGroupName ) The name of the existing VPC ( vpcName ) The subnets that were created for control plane machines and compute machines ( controlPlaneSubnets and computeSubnets ) Note Additional installer-provisioned cluster resources are deployed to a separate resource group ( resourceGroupName ). You can specify this resource group before installing the cluster. If undefined, a new resource group is created for the cluster. To ensure that the subnets that you provide are suitable, the installation program confirms the following: All of the subnets that you specify exist. For each availability zone in the region, you specify: One subnet for control plane machines. One subnet for compute machines. 
The machine CIDR that you specified contains the subnets for the compute machines and control plane machines. Note Subnet IDs are not supported. 9.3.3. Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways: You can install multiple OpenShift Container Platform clusters in the same VPC. ICMP ingress is allowed to the entire network. TCP port 22 ingress (SSH) is allowed to the entire network. Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 ingress (MCS) is allowed to the entire network. 9.3.4. Allowing endpoint gateway traffic If you are using IBM Cloud(R) Virtual Private endpoints, your Virtual Private Cloud (VPC) must be configured to allow traffic to and from the endpoint gateways. A VPC's default security group is configured to allow all outbound traffic to endpoint gateways. Therefore, the simplest way to allow traffic between your VPC and endpoint gateways is to modify the default security group to allow inbound traffic on port 443. Note If you choose to configure a new security group, the security group must be configured to allow both inbound and outbound traffic. Prerequisites You have installed the IBM Cloud(R) Command Line Interface utility ( ibmcloud ). Procedure Obtain the identifier for the default security group by running the following command: USD DEFAULT_SG=USD(ibmcloud is vpc <your_vpc_name> --output JSON | jq -r '.default_security_group.id') Add a rule that allows inbound traffic on port 443 by running the following command: USD ibmcloud is security-group-rule-add USDDEFAULT_SG inbound tcp --remote 0.0.0.0/0 --port-min 443 --port-max 443 Note Be sure that your endpoint gateways are configured to use this security group. 9.4. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.
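The key-creation step above and the ssh-agent steps described below can be combined into a single script. This is an illustrative sketch only: the key path ocp_ibmcloud_ed25519 is a hypothetical name, and it assumes a non-FIPS environment where the ed25519 algorithm is acceptable.
# Hypothetical key path; adjust to match your environment.
KEY_FILE="$HOME/.ssh/ocp_ibmcloud_ed25519"
# Create the key pair only if it does not already exist.
if [ ! -f "$KEY_FILE" ]; then
  ssh-keygen -t ed25519 -N '' -f "$KEY_FILE"
fi
# Start ssh-agent if it is not already running, then load the private key.
eval "$(ssh-agent -s)"
ssh-add "$KEY_FILE"
# Print the public key; this is the value you later provide to the
# installation program (the sshKey field in install-config.yaml).
cat "${KEY_FILE}.pub"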
Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 . Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 9.5. Exporting the API key You must set the API key you created as a global variable; the installation program ingests the variable during startup to set the API key. Prerequisites You have created either a user API key or service ID API key for your IBM Cloud(R) account. Procedure Export your API key for your account as a global variable: USD export IC_API_KEY=<api_key> Important You must set the variable name exactly as specified; the installation program expects the variable name to be present during startup. 9.6. Downloading the RHCOS cluster image The installation program requires the Red Hat Enterprise Linux CoreOS (RHCOS) image to install the cluster. While optional, downloading the Red Hat Enterprise Linux CoreOS (RHCOS) image before deploying removes the need for internet access when creating the cluster. Use the installation program to locate and download the Red Hat Enterprise Linux CoreOS (RHCOS) image. Prerequisites The host running the installation program has internet access. Procedure Change to the directory that contains the installation program and run the following command: USD ./openshift-install coreos print-stream-json Use the output of the command to find the location of the IBM Cloud(R) image. Example output "release": "415.92.202311241643-0", "formats": { "qcow2.gz": { "disk": { "location": "https://rhcos.mirror.openshift.com/art/storage/prod/streams/4.15-9.2/builds/415.92.202311241643-0/x86_64/rhcos-415.92.202311241643-0-ibmcloud.x86_64.qcow2.gz", "sha256": "6b562dee8431bec3b93adeac1cfefcd5e812d41e3b7d78d3e28319870ffc9eae", "uncompressed-sha256": "5a0f9479505e525a30367b6a6a6547c86a8f03136f453c1da035f3aa5daa8bc9" Download and extract the image archive. Make the image available on the host that the installation program uses to create the cluster. 9.7. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file.
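Before creating the installation configuration, the image lookup and download described in the previous section can be scripted. This is a sketch only: it assumes the standard CoreOS stream-metadata layout shown in the example output above, an x86_64 cluster, and that jq and curl are available on the host; the local file name is arbitrary.
# Query the stream metadata once and keep it in a shell variable.
STREAM_JSON="$(./openshift-install coreos print-stream-json)"
# Extract the IBM Cloud qcow2.gz location and its checksum for x86_64.
IMAGE_URL="$(echo "$STREAM_JSON" | jq -r '.architectures["x86_64"].artifacts.ibmcloud.formats["qcow2.gz"].disk.location')"
IMAGE_SHA="$(echo "$STREAM_JSON" | jq -r '.architectures["x86_64"].artifacts.ibmcloud.formats["qcow2.gz"].disk.sha256')"
# Download the image and verify the checksum of the compressed archive.
curl -L -o rhcos-ibmcloud.x86_64.qcow2.gz "$IMAGE_URL"
echo "$IMAGE_SHA  rhcos-ibmcloud.x86_64.qcow2.gz" | sha256sum -c -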
Prerequisites You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. You have the imageContentSourcePolicy.yaml file that was created when you mirrored your registry. You have obtained the contents of the certificate for your mirror registry. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . When customizing the sample template, be sure to provide the information that is required for an installation in a restricted network: Update the pullSecret value to contain the authentication information for your registry: pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "[email protected]"}}}' For <mirror_host_name> , specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials> , specify the base64-encoded user name and password for your mirror registry. Add the additionalTrustBundle parameter and value. additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry. Define the network and subnets for the VPC to install the cluster in under the parent platform.ibmcloud field: vpcName: <existing_vpc> controlPlaneSubnets: <control_plane_subnet> computeSubnets: <compute_subnet> For platform.ibmcloud.vpcName , specify the name for the existing IBM Cloud VPC. For platform.ibmcloud.controlPlaneSubnets and platform.ibmcloud.computeSubnets , specify the existing subnets to deploy the control plane machines and compute machines, respectively. Add the image content resources, which resemble the following YAML excerpt: imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release For these values, use the imageContentSourcePolicy.yaml file that was created when you mirrored the registry. If network restrictions limit the use of public endpoints to access the required IBM Cloud(R) services, add the serviceEndpoints stanza to platform.ibmcloud to specify an alternate service endpoint. Note You can specify only one alternate service endpoint for each service. Example of using alternate services endpoints # ... 
serviceEndpoints: - name: IAM url: <iam_alternate_endpoint_url> - name: VPC url: <vpc_alternate_endpoint_url> - name: ResourceController url: <resource_controller_alternate_endpoint_url> - name: ResourceManager url: <resource_manager_alternate_endpoint_url> - name: DNSServices url: <dns_services_alternate_endpoint_url> - name: COS url: <cos_alternate_endpoint_url> - name: GlobalSearch url: <global_search_alternate_endpoint_url> - name: GlobalTagging url: <global_tagging_alternate_endpoint_url> # ... Optional: Set the publishing strategy to Internal : publish: Internal By setting this option, you create an internal Ingress Controller and a private load balancer. Note If you use the default value of External , your network must be able to access the public endpoint for IBM Cloud(R) Internet Services (CIS). CIS is not enabled for Virtual Private Endpoints. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the next step of the installation process. You must back it up now. 9.7.1. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections.
The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. Additional resources Installation configuration parameters for IBM Cloud(R) 9.7.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 9.1. Minimum resource requirements Machine Operating System vCPU Virtual RAM Storage Input/Output Per Second (IOPS) Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS 2 8 GB 100 GB 300 Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. 9.7.3. Tested instance types for IBM Cloud The following IBM Cloud(R) instance types have been tested with OpenShift Container Platform. Example 9.1. Machine series bx2-8x32 bx2d-4x16 bx3d-4x20 cx2-8x16 cx2d-4x8 cx3d-8x20 gx2-8x64x1v100 gx3-16x80x1l4 mx2-8x64 mx2d-4x32 mx3d-2x20 ox2-4x32 ox2-8x64 ux2d-2x56 vx2d-4x56 9.7.4. Sample customized install-config.yaml file for IBM Cloud You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and then modify it. 
apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: ibm-cloud: {} replicas: 3 compute: 5 6 - hyperthreading: Enabled 7 name: worker platform: ibmcloud: {} replicas: 3 metadata: name: test-cluster 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 10 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: ibmcloud: region: us-east 12 resourceGroupName: us-east-example-cluster-rg 13 serviceEndpoints: 14 - name: IAM url: https://private.us-east.iam.cloud.ibm.com - name: VPC url: https://us-east.private.iaas.cloud.ibm.com/v1 - name: ResourceController url: https://private.us-east.resource-controller.cloud.ibm.com - name: ResourceManager url: https://private.us-east.resource-controller.cloud.ibm.com - name: DNSServices url: https://api.private.dns-svcs.cloud.ibm.com/v1 - name: COS url: https://s3.direct.us-east.cloud-object-storage.appdomain.cloud - name: GlobalSearch url: https://api.private.global-search-tagging.cloud.ibm.com - name: GlobalTagging url: https://tags.private.global-search-tagging.cloud.ibm.com networkResourceGroupName: us-east-example-existing-network-rg 15 vpcName: us-east-example-network-1 16 controlPlaneSubnets: 17 - us-east-example-network-1-cp-us-east-1 - us-east-example-network-1-cp-us-east-2 - us-east-example-network-1-cp-us-east-3 computeSubnets: 18 - us-east-example-network-1-compute-us-east-1 - us-east-example-network-1-compute-us-east-2 - us-east-example-network-1-compute-us-east-3 credentialsMode: Manual pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21 additionalTrustBundle: | 22 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- imageContentSources: 23 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 1 8 12 Required. 2 5 If you do not provide these parameters and values, the installation program provides the default value. 3 6 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 7 Enables or disables simultaneous multithreading, also known as Hyper-Threading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger machine types, such as n1-standard-8 , for your machines if you disable simultaneous multithreading. 9 The machine CIDR must contain the subnets for the compute machines and control plane machines. 10 The CIDR must contain the subnets defined in platform.ibmcloud.controlPlaneSubnets and platform.ibmcloud.computeSubnets . 11 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 13 The name of an existing resource group. 
All installer-provisioned cluster resources are deployed to this resource group. If undefined, a new resource group is created for the cluster. 14 Based on the network restrictions of the VPC, specify alternate service endpoints as needed. This overrides the default public endpoint for the service. 15 Specify the name of the resource group that contains the existing virtual private cloud (VPC). The existing VPC and subnets should be in this resource group. The cluster will be installed to this VPC. 16 Specify the name of an existing VPC. 17 Specify the name of the existing subnets to which to deploy the control plane machines. The subnets must belong to the VPC that you specified. Specify a subnet for each availability zone in the region. 18 Specify the name of the existing subnets to which to deploy the compute machines. The subnets must belong to the VPC that you specified. Specify a subnet for each availability zone in the region. 19 For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example, registry.example.com or registry.example.com:5000. For <credentials> , specify the base64-encoded user name and password for your mirror registry. 20 Enables or disables FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS Validated or Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. 21 Optional: provide the sshKey value that you use to access the machines in your cluster. 22 Provide the contents of the certificate file that you used for your mirror registry. 23 Provide these values from the metadata.name: release-0 section of the imageContentSourcePolicy.yaml file that was created when you mirrored the registry. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 9.8. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.15. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.15 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure.
Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.15 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.15 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.15 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 9.9. Manually creating IAM Installing the cluster requires that the Cloud Credential Operator (CCO) operate in manual mode. While the installation program configures the CCO for manual mode, you must specify the identity and access management secrets for your cloud provider. You can use the Cloud Credential Operator (CCO) utility ( ccoctl ) to create the required IBM Cloud(R) resources. Prerequisites You have configured the ccoctl binary. You have an existing install-config.yaml file. Procedure Edit the install-config.yaml configuration file so that it contains the credentialsMode parameter set to Manual . Example install-config.yaml configuration file apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled 1 This line is added to set the credentialsMode parameter to Manual . To generate the manifests, run the following command from the directory that contains the installation program: USD ./openshift-install create manifests --dir <installation_directory> From the directory that contains the installation program, set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object.
Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: "1.0" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer Create the service ID for each credential request, assign the policies defined, create an API key, and generate the secret: USD ccoctl ibmcloud create-service-id \ --credentials-requests-dir=<path_to_credential_requests_directory> \ 1 --name=<cluster_name> \ 2 --output-dir=<installation_directory> \ 3 --resource-group-name=<resource_group_name> 4 1 Specify the directory containing the files for the component CredentialsRequest objects. 2 Specify the name of the OpenShift Container Platform cluster. 3 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 4 Optional: Specify the name of the resource group used for scoping the access policies. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. If an incorrect resource group name is provided, the installation fails during the bootstrap phase. To find the correct resource group name, run the following command: USD grep resourceGroupName <installation_directory>/manifests/cluster-infrastructure-02-config.yml Verification Ensure that the appropriate secrets were generated in your cluster's manifests directory. 9.10. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. If the Red Hat Enterprise Linux CoreOS (RHCOS) image is available locally, the host running the installation program does not require internet access. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Export the OPENSHIFT_INSTALL_OS_IMAGE_OVERRIDE variable to specify the location of the Red Hat Enterprise Linux CoreOS (RHCOS) image by running the following command: USD export OPENSHIFT_INSTALL_OS_IMAGE_OVERRIDE="<path_to_image>/rhcos-<image_version>-ibmcloud.x86_64.qcow2.gz" Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 
2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 9.11. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources Accessing the web console 9.12. Post installation Complete the following steps to complete the configuration of your cluster. 9.12.1. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. 
From the Administration → Cluster Settings → Configuration → OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 9.12.2. Installing the policy resources into the cluster Mirroring the OpenShift Container Platform content using the oc-mirror OpenShift CLI (oc) plugin creates resources, which include catalogSource-certified-operator-index.yaml and imageContentSourcePolicy.yaml . The ImageContentSourcePolicy resource associates the mirror registry with the source registry and redirects image pull requests from the online registries to the mirror registry. The CatalogSource resource is used by Operator Lifecycle Manager (OLM) to retrieve information about the available Operators in the mirror registry, which lets users discover and install Operators. After you install the cluster, you must install these resources into the cluster. Prerequisites You have mirrored the image set to the registry mirror in the disconnected environment. You have access to the cluster as a user with the cluster-admin role. Procedure Log in to the OpenShift CLI as a user with the cluster-admin role. Apply the YAML files from the results directory to the cluster: USD oc apply -f ./oc-mirror-workspace/results-<id>/ Verification Verify that the ImageContentSourcePolicy resources were successfully installed: USD oc get imagecontentsourcepolicy Verify that the CatalogSource resources were successfully installed: USD oc get catalogsource --all-namespaces 9.13. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.15, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources About remote health monitoring 9.14. Next steps Customize your cluster . Optional: Opt out of remote health reporting .
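As a single post-installation check that ties the previous sections together, the following sketch verifies that the default catalog sources are disabled and that the oc-mirror policy resources are present. It is illustrative only; it assumes cluster-admin access and reuses only commands shown earlier in this chapter.
# Should print "true" after the OperatorHub patch has been applied.
oc get operatorhub cluster -o jsonpath='{.spec.disableAllDefaultSources}{"\n"}'
# The mirror-related resources created from the oc-mirror results directory.
oc get imagecontentsourcepolicy
oc get catalogsource --all-namespaces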
[ "DEFAULT_SG=USD(ibmcloud is vpc <your_vpc_name> --output JSON | jq -r '.default_security_group.id')", "ibmcloud is security-group-rule-add USDDEFAULT_SG inbound tcp --remote 0.0.0.0/0 --port-min 443 --port-max 443", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "export IC_API_KEY=<api_key>", "./openshift-install coreos print-stream-json", ".Example output ---- \"release\": \"415.92.202311241643-0\", \"formats\": { \"qcow2.gz\": { \"disk\": { \"location\": \"https://rhcos.mirror.openshift.com/art/storage/prod/streams/4.15-9.2/builds/415.92.202311241643-0/x86_64/rhcos-415.92.202311241643-0-ibmcloud.x86_64.qcow2.gz\", \"sha256\": \"6b562dee8431bec3b93adeac1cfefcd5e812d41e3b7d78d3e28319870ffc9eae\", \"uncompressed-sha256\": \"5a0f9479505e525a30367b6a6a6547c86a8f03136f453c1da035f3aa5daa8bc9\" ----", "mkdir <installation_directory>", "pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'", "additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----", "vpcName: <existing_vpc> controlPlaneSubnets: <control_plane_subnet> computeSubnets: <compute_subnet>", "imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release", "serviceEndpoints: - name: IAM url: <iam_alternate_endpoint_url> - name: VPC url: <vpc_alternate_endpoint_url> - name: ResourceController url: <resource_controller_alternate_endpoint_url> - name: ResourceManager url: <resource_manager_alternate_endpoint_url> - name: DNSServices url: <dns_services_alternate_endpoint_url> - name: COS url: <cos_alternate_endpoint_url> - name: GlobalSearch url: <global_search_alternate_endpoint_url> - name: GlobalTagging url: <global_tagging_alternate_endpoint_url>", "publish: Internal", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: ibm-cloud: {} replicas: 3 compute: 5 6 - hyperthreading: Enabled 7 name: worker platform: ibmcloud: {} replicas: 3 metadata: name: test-cluster 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 10 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: ibmcloud: region: us-east 12 resourceGroupName: us-east-example-cluster-rg 13 serviceEndpoints: 14 - name: IAM url: https://private.us-east.iam.cloud.ibm.com - name: VPC url: https://us-east.private.iaas.cloud.ibm.com/v1 - name: ResourceController url: https://private.us-east.resource-controller.cloud.ibm.com - name: ResourceManager url: https://private.us-east.resource-controller.cloud.ibm.com - name: DNSServices url: https://api.private.dns-svcs.cloud.ibm.com/v1 - name: COS url: 
https://s3.direct.us-east.cloud-object-storage.appdomain.cloud - name: GlobalSearch url: https://api.private.global-search-tagging.cloud.ibm.com - name: GlobalTagging url: https://tags.private.global-search-tagging.cloud.ibm.com networkResourceGroupName: us-east-example-existing-network-rg 15 vpcName: us-east-example-network-1 16 controlPlaneSubnets: 17 - us-east-example-network-1-cp-us-east-1 - us-east-example-network-1-cp-us-east-2 - us-east-example-network-1-cp-us-east-3 computeSubnets: 18 - us-east-example-network-1-compute-us-east-1 - us-east-example-network-1-compute-us-east-2 - us-east-example-network-1-compute-us-east-3 credentialsMode: Manual pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21 additionalTrustBundle: | 22 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- imageContentSources: 23 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled", "./openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: \"1.0\" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer", "ccoctl ibmcloud create-service-id --credentials-requests-dir=<path_to_credential_requests_directory> \\ 1 --name=<cluster_name> \\ 2 --output-dir=<installation_directory> \\ 3 --resource-group-name=<resource_group_name> 4", "grep resourceGroupName <installation_directory>/manifests/cluster-infrastructure-02-config.yml", "export OPENSHIFT_INSTALL_OS_IMAGE_OVERRIDE=\"<path_to_image>/rhcos-<image_version>-ibmcloud.x86_64.qcow2.gz\"", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "oc apply -f ./oc-mirror-workspace/results-<id>/", "oc get imagecontentsourcepolicy", "oc get catalogsource --all-namespaces" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_on_ibm_cloud/installing-ibm-cloud-restricted
Chapter 3. Performing advanced procedures
Chapter 3. Performing advanced procedures This chapter describes advanced procedures, such as setting up keystores and a truststore for the Red Hat Single Sign-On server, creating an administrator account, as well as an overview of available Red Hat Single Sign-On client registration methods, and guidance on configuring clustering. 3.1. Deploying passthrough TLS termination templates You can deploy using these templates. They require HTTPS, JGroups keystores and the Red Hat Single Sign-On server truststore to already exist, and therefore can be used to instantiate the Red Hat Single Sign-On server pod using your custom HTTPS, JGroups keystores and Red Hat Single Sign-On server truststore. 3.1.1. Preparing the deployment Procedure Log in to the OpenShift CLI with a user that holds the cluster:admin role. Create a new project: USD oc new-project sso-app-demo Add the view role to the default service account. This enables the service account to view all the resources in the sso-app-demo namespace, which is necessary for managing the cluster. USD oc policy add-role-to-user view system:serviceaccount:USD(oc project -q):default 3.1.2. Creating HTTPS and JGroups Keystores, and Truststore for the Red Hat Single Sign-On Server In this procedure, the openssl toolkit is used to generate a CA certificate to sign the HTTPS keystore, and create a truststore for the Red Hat Single Sign-On server. The keytool , a package included with the Java Development Kit , is then used to generate self-signed certificates for these keystores. The Red Hat Single Sign-On application templates, using re-encryption TLS termination , do not require or expect the HTTPS and JGroups keystores and Red Hat Single Sign-On server truststore to be prepared beforehand. The re-encryption templates use OpenShift's internal Service serving certificate secrets to automatically create the HTTPS and JGroups keystores. The Red Hat Single Sign-On server truststore is also created automatically. It is pre-populated with the all known, trusted CA certificate files found in the Java system path. Note If you want to provision the Red Hat Single Sign-On server using existing HTTPS / JGroups keystores, use some of the passthrough templates instead. Prerequisites The Red Hat Single Sign-On application templates using passthrough TLS termination require the following to be deployed: An HTTPS keystore used for encryption of https traffic, The JGroups keystore used for encryption of JGroups communications between nodes in the cluster, and Red Hat Single Sign-On server truststore used for securing the Red Hat Single Sign-On requests Note For production environments Red Hat recommends that you use your own SSL certificate purchased from a verified Certificate Authority (CA) for SSL-encrypted connections (HTTPS). See the JBoss Enterprise Application Platform Security Guide for more information on how to create a keystore with self-signed or purchased SSL certificates. Create the HTTPS keystore: Procedure Generate a CA certificate. Pick and remember the password. Provide identical password, when signing the certificate sign request with the CA certificate below: USD openssl req -new -newkey rsa:4096 -x509 -keyout xpaas.key -out xpaas.crt -days 365 -subj "/CN=xpaas-sso-demo.ca" Generate a private key for the HTTPS keystore. 
Provide mykeystorepass as the keystore password: USD keytool -genkeypair -keyalg RSA -keysize 2048 -dname "CN=secure-sso-sso-app-demo.openshift.example.com" -alias jboss -keystore keystore.jks Generate a certificate sign request for the HTTPS keystore. Provide mykeystorepass as the keystore password: USD keytool -certreq -keyalg rsa -alias jboss -keystore keystore.jks -file sso.csr Sign the certificate sign request with the CA certificate. Provide the same password that was used to generate the CA certificate : USD openssl x509 -req -extfile <(printf "subjectAltName=DNS:secure-sso-sso-app-demo.openshift.example.com") -CA xpaas.crt -CAkey xpaas.key -in sso.csr -out sso.crt -days 365 -CAcreateserial Note To make the preceding command work on one line, the command includes the process substitution ( <() syntax ). Be sure that your current shell environment supports such syntax. Otherwise, you can encounter a syntax error near unexpected token `(' message. Import the CA certificate into the HTTPS keystore. Provide mykeystorepass as the keystore password. Reply yes to Trust this certificate? [no]: question: USD keytool -import -file xpaas.crt -alias xpaas.ca -keystore keystore.jks Import the signed certificate sign request into the HTTPS keystore. Provide mykeystorepass as the keystore password: USD keytool -import -file sso.crt -alias jboss -keystore keystore.jks Generate a secure key for the JGroups keystore: Provide password as the keystore password: USD keytool -genseckey -alias secret-key -storetype JCEKS -keystore jgroups.jceks Import the CA certificate into a new Red Hat Single Sign-On server truststore: Provide mykeystorepass as the truststore password. Reply yes to Trust this certificate? [no]: question: USD keytool -import -file xpaas.crt -alias xpaas.ca -keystore truststore.jks 3.1.3. Creating secrets Procedure You create objects called secrets that OpenShift uses to hold sensitive information, such as passwords or keystores. Create the secrets for the HTTPS and JGroups keystores, and Red Hat Single Sign-On server truststore, generated in the section . USD oc create secret generic sso-app-secret --from-file=keystore.jks --from-file=jgroups.jceks --from-file=truststore.jks Link these secrets to the default service account, which is used to run Red Hat Single Sign-On pods. USD oc secrets link default sso-app-secret Additional resources What is a secret? Default project service accounts and roles 3.1.4. Deploying a Passthrough TLS template using the OpenShift CLI After you create keystores and secrets , deploy a passthrough TLS termination template by using the oc command. 3.1.4.1. oc command guidelines In the following oc command, the values of SSO_ADMIN_USERNAME , SSO_ADMIN_PASSWORD , HTTPS_PASSWORD , JGROUPS_ENCRYPT_PASSWORD , and SSO_TRUSTSTORE_PASSWORD variables match the default values from the sso76-ocp4-https Red Hat Single Sign-On application template. For production environments, Red Hat recommends that you consult the on-site policy for your organization for guidance on generating a strong user name and password for the administrator user account of the Red Hat Single Sign-On server, and passwords for the HTTPS and JGroups keystores, and the truststore of the Red Hat Single Sign-On server. Also, when you create the template, make the passwords match the passwords provided when you created the keystores. If you used a different username or password, modify the values of the parameters in your template to match your environment. 
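If you prefer to script the keystore preparation rather than answer the keytool and openssl prompts interactively, the following is a minimal sketch that produces the same keystore.jks, jgroups.jceks, and truststore.jks files with the example passwords and hostname used in this section. The -storepass, -keypass, and -noprompt flags and the -nodes option (which leaves the CA key without a passphrase) are conveniences added here and are not part of the documented procedure; adjust the values for your environment.
#!/bin/bash
# Non-interactive sketch of the keystore, JGroups keystore, and truststore creation.
set -euo pipefail
STOREPASS=mykeystorepass
JGROUPS_PASS=password
HOST=secure-sso-sso-app-demo.openshift.example.com

# CA certificate used to sign the HTTPS certificate (no passphrase on the CA key).
openssl req -new -newkey rsa:4096 -x509 -nodes -keyout xpaas.key -out xpaas.crt \
  -days 365 -subj "/CN=xpaas-sso-demo.ca"

# HTTPS keystore: key pair, certificate sign request, CA-signed certificate, imports.
keytool -genkeypair -keyalg RSA -keysize 2048 -dname "CN=${HOST}" -alias jboss \
  -keystore keystore.jks -storepass "${STOREPASS}" -keypass "${STOREPASS}"
keytool -certreq -keyalg rsa -alias jboss -keystore keystore.jks \
  -storepass "${STOREPASS}" -file sso.csr
openssl x509 -req -extfile <(printf "subjectAltName=DNS:${HOST}") \
  -CA xpaas.crt -CAkey xpaas.key -in sso.csr -out sso.crt -days 365 -CAcreateserial
keytool -import -noprompt -file xpaas.crt -alias xpaas.ca -keystore keystore.jks -storepass "${STOREPASS}"
keytool -import -noprompt -file sso.crt -alias jboss -keystore keystore.jks -storepass "${STOREPASS}"

# JGroups keystore and Red Hat Single Sign-On server truststore.
keytool -genseckey -alias secret-key -storetype JCEKS -keystore jgroups.jceks \
  -storepass "${JGROUPS_PASS}" -keypass "${JGROUPS_PASS}"
keytool -import -noprompt -file xpaas.crt -alias xpaas.ca -keystore truststore.jks -storepass "${STOREPASS}"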
Note You can determine the alias names associated with the certificate by using the following keytool commands. The keytool is a package included with the Java Development Kit. USD keytool -v -list -keystore keystore.jks | grep Alias Enter keystore password: mykeystorepass Alias name: xpaas.ca Alias name: jboss USD keytool -v -list -keystore jgroups.jceks -storetype jceks | grep Alias Enter keystore password: password Alias name: secret-key The SSO_ADMIN_USERNAME , SSO_ADMIN_PASSWORD , and the SSO_REALM template parameters in the following command are optional. 3.1.4.2. Sample oc command USD oc new-app --template=sso76-ocp4-https \ -p HTTPS_SECRET="sso-app-secret" \ -p HTTPS_KEYSTORE="keystore.jks" \ -p HTTPS_NAME="jboss" \ -p HTTPS_PASSWORD="mykeystorepass" \ -p JGROUPS_ENCRYPT_SECRET="sso-app-secret" \ -p JGROUPS_ENCRYPT_KEYSTORE="jgroups.jceks" \ -p JGROUPS_ENCRYPT_NAME="secret-key" \ -p JGROUPS_ENCRYPT_PASSWORD="password" \ -p SSO_ADMIN_USERNAME="admin" \ -p SSO_ADMIN_PASSWORD="redhat" \ -p SSO_REALM="demorealm" \ -p SSO_TRUSTSTORE="truststore.jks" \ -p SSO_TRUSTSTORE_PASSWORD="mykeystorepass" \ -p SSO_TRUSTSTORE_SECRET="sso-app-secret" --> Deploying template "openshift/sso76-ocp4-https" to project sso-app-demo Red Hat Single Sign-On 7.6.11 (Ephemeral with passthrough TLS) --------- An example Red Hat Single Sign-On 7 application. For more information about using this template, see https://github.com/jboss-openshift/application-templates. A new Red Hat Single Sign-On service has been created in your project. The admin username/password for accessing the master realm via the Red Hat Single Sign-On console is admin/redhat. Please be sure to create the following secrets: "sso-app-secret" containing the keystore.jks file used for serving secure content; "sso-app-secret" containing the jgroups.jceks file used for securing JGroups communications; "sso-app-secret" containing the truststore.jks file used for securing Red Hat Single Sign-On requests. * With parameters: * Application Name=sso * Custom http Route Hostname= * Custom https Route Hostname= * Server Keystore Secret Name=sso-app-secret * Server Keystore Filename=keystore.jks * Server Keystore Type= * Server Certificate Name=jboss * Server Keystore Password=mykeystorepass * Datasource Minimum Pool Size= * Datasource Maximum Pool Size= * Datasource Transaction Isolation= * JGroups Secret Name=sso-app-secret * JGroups Keystore Filename=jgroups.jceks * JGroups Certificate Name=secret-key * JGroups Keystore Password=password * JGroups Cluster Password=yeSppLfp # generated * ImageStream Namespace=openshift * Red Hat Single Sign-On Administrator Username=admin * Red Hat Single Sign-On Administrator Password=redhat * Red Hat Single Sign-On Realm=demorealm * Red Hat Single Sign-On Service Username= * Red Hat Single Sign-On Service Password= * Red Hat Single Sign-On Trust Store=truststore.jks * Red Hat Single Sign-On Trust Store Password=mykeystorepass * Red Hat Single Sign-On Trust Store Secret=sso-app-secret * Container Memory Limit=1Gi --> Creating resources ... service "sso" created service "secure-sso" created service "sso-ping" created route "sso" created route "secure-sso" created deploymentconfig "sso" created --> Success Run 'oc status' to view your app. Additional resources Passthrough TLS Termination 3.2. Customizing the Hostname for the Red Hat Single Sign-On Server The hostname SPI introduced a flexible way to configure the hostname for the Red Hat Single Sign-On server. The default hostname provider one is default . 
This provider provides enhanced functionality over the original request provider which is now deprecated. Without additional settings, it uses the request headers to determine the hostname similarly to the original request provider. For configuration options of the default provider, refer to the Server Installation and Configuration Guide . The frontendUrl option can be configured via SSO_FRONTEND_URL environment variable. Note For backward compatibility, SSO_FRONTEND_URL settings is ignored if SSO_HOSTNAME is also set. Another option of hostname provider is fixed , which allows configuring a fixed hostname. The latter makes sure that only valid hostnames can be used and allows internal applications to invoke the Red Hat Single Sign-On server through an alternative URL. Procedure Run the following commands to set the fixed hostname SPI provider for the Red Hat Single Sign-On server: Deploy the Red Hat Single Sign-On for OpenShift image with SSO_HOSTNAME environment variable set to the desired hostname of the Red Hat Single Sign-On server. USD oc new-app --template=sso76-ocp4-x509-https \ -p SSO_HOSTNAME="rh-sso-server.openshift.example.com" Identify the name of the route for the Red Hat Single Sign-On service. USD oc get routes NAME HOST/PORT sso sso-sso-app-demo.openshift.example.com Change the host: field to match the hostname specified as the value of the SSO_HOSTNAME environment variable above. Note Adjust the rh-sso-server.openshift.example.com value in the following command as necessary. If successful, the command will return the following output: 3.3. Connecting to an external database Red Hat Single Sign-On can be configured to connect to an external (to OpenShift cluster) database. In order to achieve this, you need to modify the sso-{database name} Endpoints object to point to the proper address. The procedure is described in the OpenShift manual . The easiest way to get started is to deploy Red Hat Single Sign-On from a template and then modify the Endpoints object. You might also need to update some of the datasource configuration variables in the DeploymentConfig. Once you're done, just roll a new deployment out. 3.4. Clustering 3.4.1. Configuring a JGroups discovery mechanism Clustering in OpenShift is achieved through one of two discovery mechanisms: Kubernetes or DNS . They can be set: Either by configuring the JGroups protocol stack directly in the standalone-openshift.xml configuration file with either the <kubernetes.KUBE_PING/> or <dns.DNS_PING/> elements, Or by specifying the JGROUPS_PING_PROTOCOL environment variable which can be set to either dns.DNS_PING or kubernetes.KUBE_PING . The OpenShift 4.x templates are configured to use the dns.DNS_PING mechanism with the spec.ipFamilyPolicy field set to PreferDualStack to enable dual-stack configured clusters by default . However kubernetes.KUBE_PING is the default option used by the image if no value is specified for the JGROUPS_PING_PROTOCOL environment variable. 3.4.1.1. Configuring DNS_PING on a single-stack configured cluster For DNS_PING to work on IPv4 or IPv6 single-stack cluster , the following steps must be taken: The OPENSHIFT_DNS_PING_SERVICE_NAME environment variable must be set to the name of the ping service for the cluster. If not set, the server will act as if it is a single-node cluster (a "cluster of one"). The OPENSHIFT_DNS_PING_SERVICE_PORT environment variables should be set to the port number on which the ping service is exposed. 
The DNS_PING protocol will attempt to discern the port from the SRV records, if it cannot discern the port, this variable will default to 8888. A ping service which exposes the ping port must be defined. This service should be "headless" (ClusterIP=None) and must have the following: The port must be named for port discovery to work. The spec.publishNotReadyAddresses field of this service must be set to "true" . Omitting the setting of this boolean will result in each node forming their own "cluster of one" during startup, then merging their cluster into the other nodes' clusters after startup (as the other nodes are not detected until after they have started). Example definition of a ping service for use with DNS_PING on a single-stack (IPv4 or IPv6) cluster kind: Service apiVersion: v1 spec: clusterIP: None ipFamilyPolicy: SingleStack ports: - name: ping port: 8888 publishNotReadyAddresses: true selector: deploymentConfig: sso metadata: name: sso-ping annotations: description: "The JGroups ping port for clustering." 3.4.1.2. Configuring DNS_PING on a dual-stack configured cluster Moreover, for the DNS_PING to work also on dual-network clusters that support both IPv4 and IPv6 address families, the spec.ipFamilyPolicy field of the ping service for the cluster must be set to PreferDualStack or RequireDualStack . This setting ensures the control plane assigns both IPv4 and IPv6 cluster IP addresses for the ping service on clusters that have dual-stack configured, enables reverse DNS lookups for both IPv4 and IPv6 IP addresses to work properly, and creates corresponding DNS SRV records for the ping headless service as illustrated below: Example of ping service DNS SRV records on a dual-stack configured cluster with spec.ipFamilyPolicy matching PreferDualStack USD host -t SRV "USD{OPENSHIFT_DNS_PING_SERVICE_NAME}" sso-ping.dual-stack-demo.svc.cluster.local has SRV record 0 50 8888 10-128-0-239.sso-ping.dual-stack-demo.svc.cluster.local. sso-ping.dual-stack-demo.svc.cluster.local has SRV record 0 50 8888 fd01-0-0-1--b8.sso-ping.dual-stack-demo.svc.cluster.local. Example definition of a ping service for use with DNS_PING on dual-stack (IPv4 and IPv6) cluster kind: Service apiVersion: v1 spec: clusterIP: None ipFamilyPolicy: PreferDualStack ports: - name: ping port: 8888 publishNotReadyAddresses: true selector: deploymentConfig: sso metadata: name: sso-ping annotations: description: "The JGroups ping port for clustering." 3.4.1.3. Configuring KUBE_PING For KUBE_PING to work, the following steps must be taken: The KUBERNETES_NAMESPACE environment variable must be set. If not set, the server will act as if it is a single-node cluster (a "cluster of one"). The KUBERNETES_LABELS environment variables should be set. If not set, pods outside of your application (even if they are in your namespace) will try to join. Authorization must be granted to the service account the pod is running under to be allowed to access Kubernetes' REST api. You grant authorization on the command line. Refer to the following policy commands examples: Example 3.1. Policy commands Using the default service account in the myproject namespace: Using the sso-service-account in the myproject namespace: Note Since the kubernetes.KUBE_PING discovery mechanism does not require an extra ping service for the cluster, it works using the aforementioned steps on both a single-stack and a dual-stack configured clusters. 
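If you want to switch an already deployed Red Hat Single Sign-On deployment configuration to the kubernetes.KUBE_PING mechanism, the environment variables described above can be set with oc set env. This is a sketch only; the myproject namespace and the application=sso label are example values, not defaults:
# Select KUBE_PING discovery and give it the namespace and label selector to use.
oc set env dc/sso \
  JGROUPS_PING_PROTOCOL=kubernetes.KUBE_PING \
  KUBERNETES_NAMESPACE=myproject \
  KUBERNETES_LABELS=application=sso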
Refer to a dedicated section of JBoss EAP for OpenShift documentation to: Explore available environment variables to encrypt JGroups traffic Considerations for scaling up pods 3.5. Using Custom JDBC Driver To connect to any database, the JDBC driver for that database must be present and Red Hat Single Sign-On configured properly. Currently, the only JDBC driver available in the image is the PostgreSQL JDBC driver. For any other database, you need to extend the Red Hat Single Sign-On image with a custom JDBC driver and a CLI script to register it and set up the connection properties. The following steps illustrate how to do that, taking MariaDB driver as an example. Update the example for other database drivers accordingly. Procedure Create an empty directory. Download the JDBC driver binaries into this directory. Create a new Dockerfile file in this directory with the following contents. For other databases, replace mariadb-java-client-2.5.4.jar with the filename of the respective driver: Create a new sso-extensions.cli file in this directory with the following contents. Update the values of the variables in italics according to the deployment needs: batch set DB_DRIVER_NAME= mariadb set DB_USERNAME= username set DB_PASSWORD= password set DB_DRIVER= org.mariadb.jdbc.Driver set DB_XA_DRIVER= org.mariadb.jdbc.MariaDbDataSource set DB_JDBC_URL= jdbc:mariadb://jdbc-host/keycloak set DB_EAP_MODULE= org.mariadb set FILE=/opt/eap/extensions/jdbc-driver.jar module add --name=USDDB_EAP_MODULE --resources=USDFILE --dependencies=javax.api,javax.resource.api /subsystem=datasources/jdbc-driver=USDDB_DRIVER_NAME:add( \ driver-name=USDDB_DRIVER_NAME, \ driver-module-name=USDDB_EAP_MODULE, \ driver-class-name=USDDB_DRIVER, \ driver-xa-datasource-class-name=USDDB_XA_DRIVER \ ) /subsystem=datasources/data-source=KeycloakDS:remove() /subsystem=datasources/data-source=KeycloakDS:add( \ jndi-name=java:jboss/datasources/KeycloakDS, \ enabled=true, \ use-java-context=true, \ connection-url=USDDB_JDBC_URL, \ driver-name=USDDB_DRIVER_NAME, \ user-name=USDDB_USERNAME, \ password=USDDB_PASSWORD \ ) run-batch In this directory, build your image by typing the following command, replacing the project/name:tag with arbitrary name. docker can be used instead of podman . USD podman build -t docker-registry-default/project/name:tag . After the build finishes, push your image to the registry used by OpenShift to deploy your image. Refer to the OpenShift guide for details. If you want to use this image with the custom JDBC driver that you built in the step with the existing Red Hat Single Sign-On OpenShift DeploymentConfig that was previously created by some Red Hat Single Sign-On OpenShift template, you need to patch the DeploymentConfig definition. Enter the following command: USD oc patch dc/sso --type=json -p '[{"op": "replace", "path": "/spec/triggers/0/imageChangeParams/from/name", "value": "sso76-openshift-rhel8-image-with-custom-jdbc-driver:latest"}]' "sso" patched This command assumes the image stream name and tag combination of the Red Hat Single Sign-On image with the custom JDBC driver is "sso76-openshift-rhel8-image-with-custom-jdbc-driver:latest." 3.6. Creating the Administrator Account for Red Hat Single Sign-On Server Red Hat Single Sign-On does not provide any pre-configured management account out of the box. 
This administrator account is necessary for logging into the master realm's management console and performing server maintenance operations such as creating realms or users or registering applications intended to be secured by Red Hat Single Sign-On. The administrator account can be created: By providing values for the SSO_ADMIN_USERNAME and SSO_ADMIN_PASSWORD parameters , when deploying the Red Hat Single Sign-On application template, or By a remote shell session to a particular Red Hat Single Sign-On pod , if the Red Hat Single Sign-On for OpenShift image is deployed without an application template. Note Red Hat Single Sign-On allows an initial administrator account to be created by the Welcome Page web form, but only if the Welcome Page is accessed from localhost; this method of administrator account creation is not applicable for the Red Hat Single Sign-On for OpenShift image. 3.6.1. Creating the Administrator Account using template parameters When deploying a Red Hat Single Sign-On application template, the SSO_ADMIN_USERNAME and SSO_ADMIN_PASSWORD parameters denote the username and password of the Red Hat Single Sign-On server's administrator account to be created for the master realm. Both of these parameters are required. If not specified, they are auto-generated and displayed as an OpenShift instructional message when the template is instantiated. The lifespan of the Red Hat Single Sign-On server's administrator account depends upon the storage type used to store the Red Hat Single Sign-On server's database: For an in-memory database mode ( sso76-ocp3-https , sso76-ocp4-https , sso76-ocp3-x509-https , and sso76-ocp4-x509-https templates), the account exists throughout the lifecycle of the particular Red Hat Single Sign-On pod (stored account data is lost upon pod destruction), For an ephemeral database mode ( sso76-ocp3-postgresql and sso76-ocp4-postgresql templates), the account exists throughout the lifecycle of the database pod. Even if the Red Hat Single Sign-On pod is destroyed, the stored account data is preserved under the assumption that the database pod is still running, For persistent database mode ( sso76-ocp3-postgresql-persistent , sso76-ocp4-postgresql-persistent , sso76-ocp3-x509-postgresql-persistent , and sso76-ocp4-x509-postgresql-persistent templates), the account exists throughout the lifecycle of the persistent medium used to hold the database data. This means that the stored account data is preserved even when both the Red Hat Single Sign-On and the database pods are destroyed. It is a common practice to deploy a Red Hat Single Sign-On application template to get the corresponding OpenShift deployment config for the application, and then reuse that deployment config multiple times (every time a new Red Hat Single Sign-On application needs to be instantiated). In the case of ephemeral or persistent database mode , after creating the Red Hat Single Sign-On server's administrator account, remove the SSO_ADMIN_USERNAME and SSO_ADMIN_PASSWORD variables from the deployment config before deploying new Red Hat Single Sign-On applications. Procedure Run the following commands to prepare the previously created deployment config of the Red Hat Single Sign-On application for reuse after the administrator account has been created: Identify the deployment config of the Red Hat Single Sign-On application. USD oc get dc -o name deploymentconfig/sso deploymentconfig/sso-postgresql Clear the SSO_ADMIN_USERNAME and SSO_ADMIN_PASSWORD variable settings.
USD oc set env dc/sso \ -e SSO_ADMIN_USERNAME="" \ -e SSO_ADMIN_PASSWORD="" 3.6.2. Creating the Administrator Account via a remote shell session to Red Hat Single Sign-On Pod You use the following commands to create an administrator account for the master realm of the Red Hat Single Sign-On server, when deploying the Red Hat Single Sign-On for OpenShift image directly from the image stream without using a template. Prerequisite Red Hat Single Sign-On application pod has been started. Procedure Identify the Red Hat Single Sign-On application pod. USD oc get pods NAME READY STATUS RESTARTS AGE sso-12-pt93n 1/1 Running 0 1m sso-postgresql-6-d97pf 1/1 Running 0 2m Open a remote shell session to the Red Hat Single Sign-On for OpenShift container. USD oc rsh sso-12-pt93n sh-4.2USD Create the Red Hat Single Sign-On server administrator account for the master realm at the command line with the add-user-keycloak.sh script. sh-4.2USD cd /opt/eap/bin/ sh-4.2USD ./add-user-keycloak.sh \ -r master \ -u sso_admin \ -p sso_password Added 'sso_admin' to '/opt/eap/standalone/configuration/keycloak-add-user.json', restart server to load user Note The 'sso_admin' / 'sso_password' credentials in the example above are for demonstration purposes only. Refer to the password policy applicable within your organization for guidance on how to create a secure user name and password. Restart the underlying JBoss EAP server instance to load the newly added user account. Wait for the server to restart properly. sh-4.2USD ./jboss-cli.sh --connect ':reload' { "outcome" => "success", "result" => undefined } Warning When restarting the server it is important to restart just the JBoss EAP process within the running Red Hat Single Sign-On container, and not the whole container. This is because restarting the whole container will recreate it from scratch, without the Red Hat Single Sign-On server administration account for the master realm. Log in to the master realm's Admin Console of the Red Hat Single Sign-On server using the credentials created in the steps above. In the browser, navigate to http://sso-<project-name>.<hostname>/auth/admin for the Red Hat Single Sign-On web server, or to https://secure-sso-<project-name>.<hostname>/auth/admin for the encrypted Red Hat Single Sign-On web server, and specify the user name and password used to create the administrator user. Additional resources Templates for use with this software 3.7. Customizing the default behavior of the Red Hat Single Sign-On image You can change the default behavior of the Red Hat Single Sign-On image such as enabling TechPreview features or enabling debugging. This section describes how to make this change by using the JAVA_OPTS_APPEND variable. Prerequisites This procedure assumes that the Red Hat Single Sign-On for OpenShift image has been previously deployed using one of the following templates: sso76-ocp3-postgresql sso76-ocp4-postgresql sso76-ocp3-postgresql-persistent sso76-ocp4-postgresql-persistent sso76-ocp3-x509-postgresql-persistent sso76-ocp4-x509-postgresql-persistent Procedure You can use the OpenShift web console or the CLI to change the default behavior. If you use the OpenShift web console, you add the JAVA_OPTS_APPEND variable to the sso deployment config. 
For example, to enable TechPreview features, you set the variable as follows: JAVA_OPTS_APPEND="-Dkeycloak.profile=preview" If you use the CLI, use the following commands to enable TechPreview features when the Red Hat Single Sign-On pod was deployed using a template that is mentioned under Prerequisites. Scale down the Red Hat Single Sign-On pod: USD oc get dc -o name deploymentconfig/sso deploymentconfig/sso-postgresql USD oc scale --replicas=0 dc sso deploymentconfig "sso" scaled Note In the preceding command, sso-postgresql appears because a PostgreSQL template was used to deploy the Red Hat Single Sign-On for OpenShift image. Edit the deployment config to set the JAVA_OPTS_APPEND variable. For example, to enable TechPreview features, you set the variable as follows: USD oc env dc/sso -e "JAVA_OPTS_APPEND=-Dkeycloak.profile=preview" Scale up the Red Hat Single Sign-On pod: USD oc scale --replicas=1 dc sso deploymentconfig "sso" scaled Test a TechPreview feature of your choice. 3.8. Deployment process Once deployed, the sso76-ocp3-https , sso76-ocp4-https templates and either the sso76-ocp3-x509-https or the sso76-ocp4-x509-https template create a single pod that contains both the database and the Red Hat Single Sign-On servers. The sso76-ocp3-postgresql , sso76-ocp4-postgresql , sso76-ocp3-postgresql-persistent , sso76-ocp4-postgresql-persistent , and either the sso76-ocp3-x509-postgresql-persistent or the sso76-ocp4-x509-postgresql-persistent template create two pods, one for the database server and one for the Red Hat Single Sign-On web server. After the Red Hat Single Sign-On web server pod has started, it can be accessed at its custom configured hostnames, or at the default hostnames: http://sso- <project-name> . <hostname> /auth/admin : for the Red Hat Single Sign-On web server, and https://secure-sso- <project-name> . <hostname> /auth/admin : for the encrypted Red Hat Single Sign-On web server. Use the administrator user credentials to log in into the master realm's Admin Console. 3.9. Red Hat Single Sign-On clients Clients are Red Hat Single Sign-On entities that request user authentication. A client can be an application requesting Red Hat Single Sign-On to provide user authentication, or it can be making requests for access tokens to start services on behalf of an authenticated user. See the Managing Clients chapter of the Red Hat Single Sign-On documentation for more information. Red Hat Single Sign-On provides OpenID-Connect and SAML client protocols. OpenID-Connect is the preferred protocol and uses three different access types: public : Useful for JavaScript applications that run directly in the browser and require no server configuration. confidential : Useful for server-side clients, such as EAP web applications, that need to perform a browser login. bearer-only : Useful for back-end services that allow bearer token requests. It is required to specify the client type in the <auth-method> key of the application web.xml file. This file is read by the image at deployment. Set the value of <auth-method> element to: KEYCLOAK for the OpenID Connect client. KEYCLOAK-SAML for the SAML client. The following is an example snippet for the application web.xml to configure an OIDC client: ... <login-config> <auth-method>KEYCLOAK</auth-method> </login-config> ... 3.9.1. 
Automatic and manual Red Hat Single Sign-On client registration methods A client application can be automatically registered to a Red Hat Single Sign-On realm by using credentials passed in variables specific to the eap64-sso-s2i , eap71-sso-s2i , and datavirt63-secure-s2i templates. Alternatively, you can manually register the client application by configuring and exporting the Red Hat Single Sign-On client adapter and including it in the client application configuration. 3.9.1.1. Automatic Red Hat Single Sign-On client registration Automatic Red Hat Single Sign-On client registration is determined by Red Hat Single Sign-On environment variables specific to the eap64-sso-s2i , eap71-sso-s2i , and datavirt63-secure-s2i templates. The Red Hat Single Sign-On credentials supplied in the template are then used to register the client to the Red Hat Single Sign-On realm during deployment of the client application. The Red Hat Single Sign-On environment variables included in the eap64-sso-s2i , eap71-sso-s2i , and datavirt63-secure-s2i templates are: Variable Description HOSTNAME_HTTP Custom hostname for http service route. Leave blank for default hostname of <application-name>.<project>.<default-domain-suffix> HOSTNAME_HTTPS Custom hostname for https service route. Leave blank for default hostname of <application-name>.<project>.<default-domain-suffix> SSO_URL The Red Hat Single Sign-On web server authentication address: https://secure-sso- <project-name> . <hostname> /auth SSO_REALM The Red Hat Single Sign-On realm created for this procedure. SSO_USERNAME The name of the realm management user . SSO_PASSWORD The password of the user. SSO_PUBLIC_KEY The public key generated by the realm. It is located in the Keys tab of the Realm Settings in the Red Hat Single Sign-On console. SSO_BEARER_ONLY If set to true , the OpenID Connect client is registered as bearer-only. SSO_ENABLE_CORS If set to true , the Red Hat Single Sign-On adapter enables Cross-Origin Resource Sharing (CORS). If the Red Hat Single Sign-On client uses the SAML protocol, the following additional variables need to be configured: Variable Description SSO_SAML_KEYSTORE_SECRET Secret to use for access to SAML keystore. The default is sso-app-secret . SSO_SAML_KEYSTORE Keystore filename in the SAML keystore secret. The default is keystore.jks . SSO_SAML_KEYSTORE_PASSWORD Keystore password for SAML. The default is mykeystorepass . SSO_SAML_CERTIFICATE_NAME Alias for keys/certificate to use for SAML. The default is jboss . See Example Workflow: Automatically Registering EAP Application in Red Hat Single Sign-On with OpenID-Connect Client for an end-to-end example of the automatic client registration method using an OpenID-Connect client. 3.9.1.2. Manual Red Hat Single Sign-On client registration Manual Red Hat Single Sign-On client registration is determined by the presence of a deployment file in the client application's ../configuration/ directory. These files are exported from the client adapter in the Red Hat Single Sign-On web console. The name of this file is different for OpenID-Connect and SAML clients: OpenID-Connect ../configuration/secure-deployments SAML ../configuration/secure-saml-deployments These files are copied to the Red Hat Single Sign-On adapter configuration section in the standalone-openshift.xml when the application is deployed.
There are two methods for passing the Red Hat Single Sign-On adapter configuration to the client application: Modify the deployment file to contain the Red Hat Single Sign-On adapter configuration so that it is included in the standalone-openshift.xml file at deployment, or Manually include the OpenID-Connect keycloak.json file, or the SAML keycloak-saml.xml file in the client application's ../WEB-INF directory. See Example Workflow: Manually Configure an Application to Use Red Hat Single Sign-On Authentication, Using SAML Client for an end-to-end example of the manual Red Hat Single Sign-On client registration method using a SAML client. 3.10. Using Red Hat Single Sign-On vault with OpenShift secrets Several fields in the Red Hat Single Sign-On administration support obtaining the value of a secret from an external vault; see the Server Administration Guide . The following example shows how to set up the file-based plaintext vault in OpenShift and set it up to be used for obtaining an SMTP password. Procedure Specify a directory for the vault using the SSO_VAULT_DIR environment variable. You can introduce the SSO_VAULT_DIR environment variable directly in the environment in your deployment configuration. It can also be included in the template by adding the following snippets at the appropriate places in the template: "parameters": [ ... { "displayName": "RH-SSO Vault Secret directory", "description": "Path to the RH-SSO Vault directory.", "name": "SSO_VAULT_DIR", "value": "", "required": false } ... ] env: [ ... { "name": "SSO_VAULT_DIR", "value": "USD{SSO_VAULT_DIR}" } ... ] Note The files plaintext vault provider will be configured only when you set the SSO_VAULT_DIR environment variable. Create a secret in your OpenShift cluster: USD oc create secret generic rhsso-vault-secrets --from-literal=master_smtp-password=mySMTPPsswd Mount a volume to your deployment config using USD{SSO_VAULT_DIR} as the path. For a deployment that is already running: USD oc set volume dc/sso --add --mount-path=USD{SSO_VAULT_DIR} --secret-name=rhsso-vault-secrets After a pod is created, you can use a customized string within your Red Hat Single Sign-On configuration to refer to the secret. For example, to use the mySMTPPsswd secret created in this tutorial, you can use USD{vault.smtp-password} within the master realm in the configuration of the SMTP password, and it will be replaced by mySMTPPsswd when used. 3.11. Limitations OpenShift does not currently accept OpenShift role mapping from external providers. If Red Hat Single Sign-On is used as an authentication gateway for OpenShift, users created in Red Hat Single Sign-On must have the roles added using the OpenShift Administrator oc adm policy command. For example, to allow a Red Hat Single Sign-On-created user to view a project namespace in OpenShift: USD oc adm policy add-role-to-user view < user-name > -n < project-name >
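As a concrete illustration of the preceding command, the following sketch grants the view role to a hypothetical user named jdoe in the sso-app-demo project used earlier in this chapter and then lists the resulting role bindings; both names are example values:
# Grant the view role to a Red Hat Single Sign-On-created user (example values).
oc adm policy add-role-to-user view jdoe -n sso-app-demo

# Confirm that the corresponding role binding exists in the project.
oc get rolebinding -n sso-app-demo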
[ "oc new-project sso-app-demo", "oc policy add-role-to-user view system:serviceaccount:USD(oc project -q):default", "openssl req -new -newkey rsa:4096 -x509 -keyout xpaas.key -out xpaas.crt -days 365 -subj \"/CN=xpaas-sso-demo.ca\"", "keytool -genkeypair -keyalg RSA -keysize 2048 -dname \"CN=secure-sso-sso-app-demo.openshift.example.com\" -alias jboss -keystore keystore.jks", "keytool -certreq -keyalg rsa -alias jboss -keystore keystore.jks -file sso.csr", "openssl x509 -req -extfile <(printf \"subjectAltName=DNS:secure-sso-sso-app-demo.openshift.example.com\") -CA xpaas.crt -CAkey xpaas.key -in sso.csr -out sso.crt -days 365 -CAcreateserial", "keytool -import -file xpaas.crt -alias xpaas.ca -keystore keystore.jks", "keytool -import -file sso.crt -alias jboss -keystore keystore.jks", "keytool -genseckey -alias secret-key -storetype JCEKS -keystore jgroups.jceks", "keytool -import -file xpaas.crt -alias xpaas.ca -keystore truststore.jks", "oc create secret generic sso-app-secret --from-file=keystore.jks --from-file=jgroups.jceks --from-file=truststore.jks", "oc secrets link default sso-app-secret", "keytool -v -list -keystore keystore.jks | grep Alias Enter keystore password: mykeystorepass Alias name: xpaas.ca Alias name: jboss", "keytool -v -list -keystore jgroups.jceks -storetype jceks | grep Alias Enter keystore password: password Alias name: secret-key", "oc new-app --template=sso76-ocp4-https -p HTTPS_SECRET=\"sso-app-secret\" -p HTTPS_KEYSTORE=\"keystore.jks\" -p HTTPS_NAME=\"jboss\" -p HTTPS_PASSWORD=\"mykeystorepass\" -p JGROUPS_ENCRYPT_SECRET=\"sso-app-secret\" -p JGROUPS_ENCRYPT_KEYSTORE=\"jgroups.jceks\" -p JGROUPS_ENCRYPT_NAME=\"secret-key\" -p JGROUPS_ENCRYPT_PASSWORD=\"password\" -p SSO_ADMIN_USERNAME=\"admin\" -p SSO_ADMIN_PASSWORD=\"redhat\" -p SSO_REALM=\"demorealm\" -p SSO_TRUSTSTORE=\"truststore.jks\" -p SSO_TRUSTSTORE_PASSWORD=\"mykeystorepass\" -p SSO_TRUSTSTORE_SECRET=\"sso-app-secret\" --> Deploying template \"openshift/sso76-ocp4-https\" to project sso-app-demo Red Hat Single Sign-On 7.6.11 (Ephemeral with passthrough TLS) --------- An example Red Hat Single Sign-On 7 application. For more information about using this template, see https://github.com/jboss-openshift/application-templates. A new Red Hat Single Sign-On service has been created in your project. The admin username/password for accessing the master realm via the Red Hat Single Sign-On console is admin/redhat. Please be sure to create the following secrets: \"sso-app-secret\" containing the keystore.jks file used for serving secure content; \"sso-app-secret\" containing the jgroups.jceks file used for securing JGroups communications; \"sso-app-secret\" containing the truststore.jks file used for securing Red Hat Single Sign-On requests. 
* With parameters: * Application Name=sso * Custom http Route Hostname= * Custom https Route Hostname= * Server Keystore Secret Name=sso-app-secret * Server Keystore Filename=keystore.jks * Server Keystore Type= * Server Certificate Name=jboss * Server Keystore Password=mykeystorepass * Datasource Minimum Pool Size= * Datasource Maximum Pool Size= * Datasource Transaction Isolation= * JGroups Secret Name=sso-app-secret * JGroups Keystore Filename=jgroups.jceks * JGroups Certificate Name=secret-key * JGroups Keystore Password=password * JGroups Cluster Password=yeSppLfp # generated * ImageStream Namespace=openshift * Red Hat Single Sign-On Administrator Username=admin * Red Hat Single Sign-On Administrator Password=redhat * Red Hat Single Sign-On Realm=demorealm * Red Hat Single Sign-On Service Username= * Red Hat Single Sign-On Service Password= * Red Hat Single Sign-On Trust Store=truststore.jks * Red Hat Single Sign-On Trust Store Password=mykeystorepass * Red Hat Single Sign-On Trust Store Secret=sso-app-secret * Container Memory Limit=1Gi --> Creating resources service \"sso\" created service \"secure-sso\" created service \"sso-ping\" created route \"sso\" created route \"secure-sso\" created deploymentconfig \"sso\" created --> Success Run 'oc status' to view your app.", "oc new-app --template=sso76-ocp4-x509-https -p SSO_HOSTNAME=\"rh-sso-server.openshift.example.com\"", "oc get routes NAME HOST/PORT sso sso-sso-app-demo.openshift.example.com", "oc patch route/sso --type=json -p '[{\"op\": \"replace\", \"path\": \"/spec/host\", \"value\": \"rh-sso-server.openshift.example.com\"}]'", "route \"sso\" patched", "kind: Service apiVersion: v1 spec: clusterIP: None ipFamilyPolicy: SingleStack ports: - name: ping port: 8888 publishNotReadyAddresses: true selector: deploymentConfig: sso metadata: name: sso-ping annotations: description: \"The JGroups ping port for clustering.\"", "host -t SRV \"USD{OPENSHIFT_DNS_PING_SERVICE_NAME}\" sso-ping.dual-stack-demo.svc.cluster.local has SRV record 0 50 8888 10-128-0-239.sso-ping.dual-stack-demo.svc.cluster.local. 
sso-ping.dual-stack-demo.svc.cluster.local has SRV record 0 50 8888 fd01-0-0-1--b8.sso-ping.dual-stack-demo.svc.cluster.local.", "kind: Service apiVersion: v1 spec: clusterIP: None ipFamilyPolicy: PreferDualStack ports: - name: ping port: 8888 publishNotReadyAddresses: true selector: deploymentConfig: sso metadata: name: sso-ping annotations: description: \"The JGroups ping port for clustering.\"", "policy add-role-to-user view system:serviceaccount:myproject:default -n myproject", "policy add-role-to-user view system:serviceaccount:myproject:sso-service-account -n myproject", "FROM rh-sso-7/sso76-openshift-rhel8:latest COPY sso-extensions.cli /opt/eap/extensions/ COPY mariadb-java-client-2.5.4.jar /opt/eap/extensions/jdbc-driver.jar", "batch set DB_DRIVER_NAME= mariadb set DB_USERNAME= username set DB_PASSWORD= password set DB_DRIVER= org.mariadb.jdbc.Driver set DB_XA_DRIVER= org.mariadb.jdbc.MariaDbDataSource set DB_JDBC_URL= jdbc:mariadb://jdbc-host/keycloak set DB_EAP_MODULE= org.mariadb set FILE=/opt/eap/extensions/jdbc-driver.jar module add --name=USDDB_EAP_MODULE --resources=USDFILE --dependencies=javax.api,javax.resource.api /subsystem=datasources/jdbc-driver=USDDB_DRIVER_NAME:add( driver-name=USDDB_DRIVER_NAME, driver-module-name=USDDB_EAP_MODULE, driver-class-name=USDDB_DRIVER, driver-xa-datasource-class-name=USDDB_XA_DRIVER ) /subsystem=datasources/data-source=KeycloakDS:remove() /subsystem=datasources/data-source=KeycloakDS:add( jndi-name=java:jboss/datasources/KeycloakDS, enabled=true, use-java-context=true, connection-url=USDDB_JDBC_URL, driver-name=USDDB_DRIVER_NAME, user-name=USDDB_USERNAME, password=USDDB_PASSWORD ) run-batch", "podman build -t docker-registry-default/project/name:tag .", "oc patch dc/sso --type=json -p '[{\"op\": \"replace\", \"path\": \"/spec/triggers/0/imageChangeParams/from/name\", \"value\": \"sso76-openshift-rhel8-image-with-custom-jdbc-driver:latest\"}]' \"sso\" patched", "oc get dc -o name deploymentconfig/sso deploymentconfig/sso-postgresql", "oc set env dc/sso -e SSO_ADMIN_USERNAME=\"\" -e SSO_ADMIN_PASSWORD=\"\"", "oc get pods NAME READY STATUS RESTARTS AGE sso-12-pt93n 1/1 Running 0 1m sso-postgresql-6-d97pf 1/1 Running 0 2m", "oc rsh sso-12-pt93n sh-4.2USD", "sh-4.2USD cd /opt/eap/bin/ sh-4.2USD ./add-user-keycloak.sh -r master -u sso_admin -p sso_password Added 'sso_admin' to '/opt/eap/standalone/configuration/keycloak-add-user.json', restart server to load user", "sh-4.2USD ./jboss-cli.sh --connect ':reload' { \"outcome\" => \"success\", \"result\" => undefined }", "JAVA_OPTS_APPEND=\"-Dkeycloak.profile=preview\"", "oc get dc -o name deploymentconfig/sso deploymentconfig/sso-postgresql oc scale --replicas=0 dc sso deploymentconfig \"sso\" scaled", "oc env dc/sso -e \"JAVA_OPTS_APPEND=-Dkeycloak.profile=preview\"", "oc scale --replicas=1 dc sso deploymentconfig \"sso\" scaled", "<login-config> <auth-method>KEYCLOAK</auth-method> </login-config>", "\"parameters\": [ { \"displayName\": \"RH-SSO Vault Secret directory\", \"description\": \"Path to the RH-SSO Vault directory.\", \"name\": \"SSO_VAULT_DIR\", \"value\": \"\", \"required\": false } ] env: [ { \"name\": \"SSO_VAULT_DIR\", \"value\": \"USD{SSO_VAULT_DIR}\" } ]", "oc create secret generic rhsso-vault-secrets --from-literal=master_smtp-password=mySMTPPsswd", "oc set volume dc/sso --add --mount-path=USD{SSO_VAULT_DIR} --secret-name=rhsso-vault-secrets", "oc adm policy add-role-to-user view < user-name > -n < project-name >" ]
https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.6/html/red_hat_single_sign-on_for_openshift/performing_advanced_procedures
Chapter 6. Updating hosted control planes
Chapter 6. Updating hosted control planes Updates for hosted control planes involve updating the hosted cluster and the node pools. For a cluster to remain fully operational during an update process, you must meet the requirements of the Kubernetes version skew policy while completing the control plane and node updates. 6.1. Requirements to upgrade hosted control planes The multicluster engine for Kubernetes Operator can manage one or more OpenShift Container Platform clusters. After you create a hosted cluster on OpenShift Container Platform, you must import your hosted cluster in the multicluster engine Operator as a managed cluster. Then, you can use the OpenShift Container Platform cluster as a management cluster. Consider the following requirements before you start updating hosted control planes: You must use the bare metal platform for an OpenShift Container Platform cluster when using OpenShift Virtualization as a provider. You must use bare metal or OpenShift Virtualization as the cloud platform for the hosted cluster. You can find the platform type of your hosted cluster in the spec.Platform.type specification of the HostedCluster custom resource (CR). You must upgrade the OpenShift Container Platform cluster, multicluster engine Operator, hosted cluster, and node pools by completing the following tasks: Upgrade an OpenShift Container Platform cluster to the latest version. For more information, see "Updating a cluster using the web console" or "Updating a cluster using the CLI". Upgrade the multicluster engine Operator to the latest version. For more information, see "Updating installed Operators". Upgrade the hosted cluster and node pools from the OpenShift Container Platform version to the latest version. For more information, see "Updating a control plane in a hosted cluster" and "Updating node pools in a hosted cluster". Additional resources Updating a cluster using the web console Updating a cluster using the CLI Updating installed Operators 6.2. Setting channels in a hosted cluster You can see available updates in the HostedCluster.Status field of the HostedCluster custom resource (CR). The available updates are not fetched from the Cluster Version Operator (CVO) of a hosted cluster. The list of the available updates can be different from the available updates from the following fields of the HostedCluster custom resource (CR): status.version.availableUpdates status.version.conditionalUpdates The initial HostedCluster CR does not have any information in the status.version.availableUpdates and status.version.conditionalUpdates fields. After you set the spec.channel field to the stable OpenShift Container Platform release version, the HyperShift Operator reconciles the HostedCluster CR and updates the status.version field with the available and conditional updates. See the following example of the HostedCluster CR that contains the channel configuration: spec: autoscaling: {} channel: stable-4.y 1 clusterID: d6d42268-7dff-4d37-92cf-691bd2d42f41 configuration: {} controllerAvailabilityPolicy: SingleReplica dns: baseDomain: dev11.red-chesterfield.com privateZoneID: Z0180092I0DQRKL55LN0 publicZoneID: Z00206462VG6ZP0H2QLWK 1 Replace <4.y> with the OpenShift Container Platform release version you specified in spec.release . For example, if you set the spec.release to ocp-release:4.16.4-multi , you must set spec.channel to stable-4.16 . 
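One way to set the channel without editing the full CR is a merge patch; this is a sketch, and stable-4.16 is only an example that must match the minor version in spec.release:
# Set spec.channel on the hosted cluster so the HyperShift Operator can populate the available updates.
oc patch hostedcluster <hosted_cluster_name> -n <hosted_cluster_namespace> \
  --type=merge -p '{"spec":{"channel":"stable-4.16"}}'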
After you configure the channel in the HostedCluster CR, to view the output of the status.version.availableUpdates and status.version.conditionalUpdates fields, run the following command: USD oc get -n <hosted_cluster_namespace> hostedcluster <hosted_cluster_name> -o yaml Example output version: availableUpdates: - channels: - candidate-4.16 - candidate-4.17 - eus-4.16 - fast-4.16 - stable-4.16 image: quay.io/openshift-release-dev/ocp-release@sha256:b7517d13514c6308ae16c5fd8108133754eb922cd37403ed27c846c129e67a9a url: https://access.redhat.com/errata/RHBA-2024:6401 version: 4.16.11 - channels: - candidate-4.16 - candidate-4.17 - eus-4.16 - fast-4.16 - stable-4.16 image: quay.io/openshift-release-dev/ocp-release@sha256:d08e7c8374142c239a07d7b27d1170eae2b0d9f00ccf074c3f13228a1761c162 url: https://access.redhat.com/errata/RHSA-2024:6004 version: 4.16.10 - channels: - candidate-4.16 - candidate-4.17 - eus-4.16 - fast-4.16 - stable-4.16 image: quay.io/openshift-release-dev/ocp-release@sha256:6a80ac72a60635a313ae511f0959cc267a21a89c7654f1c15ee16657aafa41a0 url: https://access.redhat.com/errata/RHBA-2024:5757 version: 4.16.9 - channels: - candidate-4.16 - candidate-4.17 - eus-4.16 - fast-4.16 - stable-4.16 image: quay.io/openshift-release-dev/ocp-release@sha256:ea624ae7d91d3f15094e9e15037244679678bdc89e5a29834b2ddb7e1d9b57e6 url: https://access.redhat.com/errata/RHSA-2024:5422 version: 4.16.8 - channels: - candidate-4.16 - candidate-4.17 - eus-4.16 - fast-4.16 - stable-4.16 image: quay.io/openshift-release-dev/ocp-release@sha256:e4102eb226130117a0775a83769fe8edb029f0a17b6cbca98a682e3f1225d6b7 url: https://access.redhat.com/errata/RHSA-2024:4965 version: 4.16.6 - channels: - candidate-4.16 - candidate-4.17 - eus-4.16 - fast-4.16 - stable-4.16 image: quay.io/openshift-release-dev/ocp-release@sha256:f828eda3eaac179e9463ec7b1ed6baeba2cd5bd3f1dd56655796c86260db819b url: https://access.redhat.com/errata/RHBA-2024:4855 version: 4.16.5 conditionalUpdates: - conditions: - lastTransitionTime: "2024-09-23T22:33:38Z" message: |- Could not evaluate exposure to update risk SRIOVFailedToConfigureVF (creating PromQL round-tripper: unable to load specified CA cert /etc/tls/service-ca/service-ca.crt: open /etc/tls/service-ca/service-ca.crt: no such file or directory) SRIOVFailedToConfigureVF description: OCP Versions 4.14.34, 4.15.25, 4.16.7 and ALL subsequent versions include kernel datastructure changes which are not compatible with older versions of the SR-IOV operator. Please update SR-IOV operator to versions dated 20240826 or newer before updating OCP. SRIOVFailedToConfigureVF URL: https://issues.redhat.com/browse/NHE-1171 reason: EvaluationFailed status: Unknown type: Recommended release: channels: - candidate-4.16 - candidate-4.17 - eus-4.16 - fast-4.16 - stable-4.16 image: quay.io/openshift-release-dev/ocp-release@sha256:fb321a3f50596b43704dbbed2e51fdefd7a7fd488ee99655d03784d0cd02283f url: https://access.redhat.com/errata/RHSA-2024:5107 version: 4.16.7 risks: - matchingRules: - promql: promql: | group(csv_succeeded{_id="d6d42268-7dff-4d37-92cf-691bd2d42f41", name=~"sriov-network-operator[.].*"}) or 0 * group(csv_count{_id="d6d42268-7dff-4d37-92cf-691bd2d42f41"}) type: PromQL message: OCP Versions 4.14.34, 4.15.25, 4.16.7 and ALL subsequent versions include kernel datastructure changes which are not compatible with older versions of the SR-IOV operator. Please update SR-IOV operator to versions dated 20240826 or newer before updating OCP. 
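If you only need the version numbers rather than the full YAML, a jsonpath query over the same status fields is a convenient sketch (not part of the documented procedure):
# List the versions reported in status.version.availableUpdates.
oc get hostedcluster <hosted_cluster_name> -n <hosted_cluster_namespace> \
  -o jsonpath='{range .status.version.availableUpdates[*]}{.version}{"\n"}{end}'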
name: SRIOVFailedToConfigureVF url: https://issues.redhat.com/browse/NHE-1171 6.3. Updating the OpenShift Container Platform version in a hosted cluster Hosted control planes enables the decoupling of updates between the control plane and the data plane. As a cluster service provider or cluster administrator, you can manage the control plane and the data separately. You can update a control plane by modifying the HostedCluster custom resource (CR) and a node by modifying its NodePool CR. Both the HostedCluster and NodePool CRs specify an OpenShift Container Platform release image in a .release field. To keep your hosted cluster fully operational during an update process, the control plane and the node updates must follow the Kubernetes version skew policy . 6.3.1. The multicluster engine Operator hub management cluster The multicluster engine for Kubernetes Operator requires a specific OpenShift Container Platform version for the management cluster to remain in a supported state. You can install the multicluster engine Operator from OperatorHub in the OpenShift Container Platform web console. See the following support matrices for the multicluster engine Operator versions: multicluster engine Operator 2.7 multicluster engine Operator 2.6 multicluster engine Operator 2.5 multicluster engine Operator 2.4 The multicluster engine Operator supports the following OpenShift Container Platform versions: The latest unreleased version The latest released version Two versions before the latest released version You can also get the multicluster engine Operator version as a part of Red Hat Advanced Cluster Management (RHACM). 6.3.2. Supported OpenShift Container Platform versions in a hosted cluster When deploying a hosted cluster, the OpenShift Container Platform version of the management cluster does not affect the OpenShift Container Platform version of a hosted cluster. The HyperShift Operator creates the supported-versions ConfigMap in the hypershift namespace. The supported-versions ConfigMap describes the range of supported OpenShift Container Platform versions that you can deploy. See the following example of the supported-versions ConfigMap: apiVersion: v1 data: server-version: 2f6cfe21a0861dea3130f3bed0d3ae5553b8c28b supported-versions: '{"versions":["4.17","4.16","4.15","4.14"]}' kind: ConfigMap metadata: creationTimestamp: "2024-06-20T07:12:31Z" labels: hypershift.openshift.io/supported-versions: "true" name: supported-versions namespace: hypershift resourceVersion: "927029" uid: f6336f91-33d3-472d-b747-94abae725f70 Important To create a hosted cluster, you must use the OpenShift Container Platform version from the support version range. However, the multicluster engine Operator can manage only between n+1 and n-2 OpenShift Container Platform versions, where n defines the current minor version. You can check the multicluster engine Operator support matrix to ensure the hosted clusters managed by the multicluster engine Operator are within the supported OpenShift Container Platform range. To deploy a higher version of a hosted cluster on OpenShift Container Platform, you must update the multicluster engine Operator to a new minor version release to deploy a new version of the Hypershift Operator. Upgrading the multicluster engine Operator to a new patch, or z-stream, release does not update the HyperShift Operator to the version. 
See the following example output of the hcp version command that shows the supported OpenShift Container Platform versions for OpenShift Container Platform 4.16 in the management cluster: Client Version: openshift/hypershift: fe67b47fb60e483fe60e4755a02b3be393256343. Latest supported OCP: 4.17.0 Server Version: 05864f61f24a8517731664f8091cedcfc5f9b60d Server Supports OCP Versions: 4.17, 4.16, 4.15, 4.14 6.4. Updates for the hosted cluster The spec.release value dictates the version of the control plane. The HostedCluster object transmits the intended spec.release value to the HostedControlPlane.spec.release value and runs the appropriate Control Plane Operator version. The hosted control plane manages the rollout of the new version of the control plane components along with any OpenShift Container Platform components through the new version of the Cluster Version Operator (CVO). Important In hosted control planes, the NodeHealthCheck resource cannot detect the status of the CVO. A cluster administrator must manually pause the remediation triggered by NodeHealthCheck , before performing critical operations, such as updating the cluster, to prevent new remediation actions from interfering with cluster updates. To pause the remediation, enter the array of strings, for example, pause-test-cluster , as a value of the pauseRequests field in the NodeHealthCheck resource. For more information, see About the Node Health Check Operator . After the cluster update is complete, you can edit or delete the remediation. Navigate to the Compute NodeHealthCheck page, click your node health check, and then click Actions , which shows a drop-down list. 6.5. Updates for node pools With node pools, you can configure the software that is running in the nodes by exposing the spec.release and spec.config values. You can start a rolling node pool update in the following ways: Changing the spec.release or spec.config values. Changing any platform-specific field, such as the AWS instance type. The result is a set of new instances with the new type. Changing the cluster configuration, if the change propagates to the node. Node pools support replace updates and in-place updates. The nodepool.spec.release value dictates the version of any particular node pool. A NodePool object completes a replace or an in-place rolling update according to the .spec.management.upgradeType value. After you create a node pool, you cannot change the update type. If you want to change the update type, you must create a node pool and delete the other one. 6.5.1. Replace updates for node pools A replace update creates instances in the new version while it removes old instances from the version. This update type is effective in cloud environments where this level of immutability is cost effective. Replace updates do not preserve any manual changes because the node is entirely re-provisioned. 6.5.2. In place updates for node pools An in-place update directly updates the operating systems of the instances. This type is suitable for environments where the infrastructure constraints are higher, such as bare metal. In-place updates can preserve manual changes, but will report errors if you make manual changes to any file system or operating system configuration that the cluster directly manages, such as kubelet certificates. 6.6. Updating node pools in a hosted cluster You can update your version of OpenShift Container Platform by updating the node pools in your hosted cluster. 
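Because the update type of a node pool cannot be changed after creation, it is worth checking it before planning an update; this is a convenience sketch based on the .spec.management.upgradeType field described above:
# Show whether the node pool uses Replace or InPlace updates.
oc get nodepool <node_pool_name> -n <hosted_cluster_namespace> \
  -o jsonpath='{.spec.management.upgradeType}'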
The node pool version must not surpass the hosted control plane version. The .spec.release field in the NodePool custom resource (CR) shows the version of a node pool. Procedure Change the spec.release.image value in the node pool by entering the following command: USD oc patch nodepool <node_pool_name> -n <hosted_cluster_namespace> --type=merge -p '{"spec":{"nodeDrainTimeout":"60s","release":{"image":"<openshift_release_image>"}}}' 1 2 1 Replace <node_pool_name> and <hosted_cluster_namespace> with your node pool name and hosted cluster namespace, respectively. 2 The <openshift_release_image> variable specifies the new OpenShift Container Platform release image that you want to upgrade to, for example, quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 . Replace <4.y.z> with the supported OpenShift Container Platform version. Verification To verify that the new version was rolled out, check the .status.conditions value in the node pool by running the following command: USD oc get -n <hosted_cluster_namespace> nodepool <node_pool_name> -o yaml Example output status: conditions: - lastTransitionTime: "2024-05-20T15:00:40Z" message: 'Using release image: quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64' 1 reason: AsExpected status: "True" type: ValidReleaseImage 1 Replace <4.y.z> with the supported OpenShift Container Platform version. 6.7. Updating a control plane in a hosted cluster On hosted control planes, you can upgrade your version of OpenShift Container Platform by updating the hosted cluster. The .spec.release in the HostedCluster custom resource (CR) shows the version of the control plane. The HostedCluster updates the .spec.release field to the HostedControlPlane.spec.release and runs the appropriate Control Plane Operator version. The HostedControlPlane resource orchestrates the rollout of the new version of the control plane components along with the OpenShift Container Platform component in the data plane through the new version of the Cluster Version Operator (CVO). The HostedControlPlane includes the following artifacts: CVO Cluster Network Operator (CNO) Cluster Ingress Operator Manifests for the Kube API server, scheduler, and manager Machine approver Autoscaler Infrastructure resources to enable ingress for control plane endpoints such as the Kube API server, ignition, and konnectivity You can set the .spec.release field in the HostedCluster CR to update the control plane by using the information from the status.version.availableUpdates and status.version.conditionalUpdates fields. Procedure Add the hypershift.openshift.io/force-upgrade-to=<openshift_release_image> annotation to the hosted cluster by entering the following command: USD oc annotate hostedcluster -n <hosted_cluster_namespace> <hosted_cluster_name> "hypershift.openshift.io/force-upgrade-to=<openshift_release_image>" --overwrite 1 2 1 Replace <hosted_cluster_name> and <hosted_cluster_namespace> with your hosted cluster name and hosted cluster namespace, respectively. 2 The <openshift_release_image> variable specifies the new OpenShift Container Platform release image that you want to upgrade to, for example, quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 . Replace <4.y.z> with the supported OpenShift Container Platform version. 
Change the spec.release.image value in the hosted cluster by entering the following command: USD oc patch hostedcluster <hosted_cluster_name> -n <hosted_cluster_namespace> --type=merge -p '{"spec":{"release":{"image":"<openshift_release_image>"}}}' Verification To verify that the new version was rolled out, check the .status.conditions and .status.version values in the hosted cluster by running the following command: USD oc get -n <hosted_cluster_namespace> hostedcluster <hosted_cluster_name> -o yaml Example output status: conditions: - lastTransitionTime: "2024-05-20T15:01:01Z" message: Payload loaded version="4.y.z" image="quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64" 1 status: "True" type: ClusterVersionReleaseAccepted #... version: availableUpdates: null desired: image: quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 2 version: 4.y.z 1 2 Replace <4.y.z> with the supported OpenShift Container Platform version. 6.8. Updating a hosted cluster by using the multicluster engine Operator console You can update your hosted cluster by using the multicluster engine Operator console. Important Before updating a hosted cluster, you must refer to the available and conditional updates of a hosted cluster. Choosing a wrong release version might break the hosted cluster. Procedure Select All clusters . Navigate to Infrastructure Clusters to view managed hosted clusters. Click the Upgrade available link to update the control plane and node pools. 6.9. Limitations of managing imported hosted clusters Hosted clusters are automatically imported into the local multicluster engine for Kubernetes Operator, unlike a standalone OpenShift Container Platform or third party clusters. Hosted clusters run some of their agents in the hosted mode so that the agents do not use the resources of your cluster. If you choose to automatically import hosted clusters, you can update node pools and the control plane in hosted clusters by using the HostedCluster resource on the management cluster. To update node pools and a control plane, see "Updating node pools in a hosted cluster" and "Updating a control plane in a hosted cluster". You can import hosted clusters into a location other than the local multicluster engine Operator by using the Red Hat Advanced Cluster Management (RHACM). For more information, see "Discovering multicluster engine for Kubernetes Operator hosted clusters in Red Hat Advanced Cluster Management". In this topology, you must update your hosted clusters by using the command-line interface or the console of the local multicluster engine for Kubernetes Operator where the cluster is hosted. You cannot update the hosted clusters through the RHACM hub cluster. Additional resources Updating node pools in a hosted cluster Updating a control plane in a hosted cluster Discovering multicluster engine for Kubernetes Operator hosted clusters in Red Hat Advanced Cluster Management
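As a companion to the control plane update procedure in section 6.7, you can list what the hosted cluster already advertises before you change the .spec.release field. A brief sketch, using the same placeholders as the commands above:

# Versions offered in status.version.availableUpdates
oc get hostedcluster <hosted_cluster_name> -n <hosted_cluster_namespace> -o jsonpath='{.status.version.availableUpdates[*].version}'

# Versions listed in status.version.conditionalUpdates, which carry known risks that you should review first
oc get hostedcluster <hosted_cluster_name> -n <hosted_cluster_namespace> -o jsonpath='{.status.version.conditionalUpdates[*].release.version}'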
[ "spec: autoscaling: {} channel: stable-4.y 1 clusterID: d6d42268-7dff-4d37-92cf-691bd2d42f41 configuration: {} controllerAvailabilityPolicy: SingleReplica dns: baseDomain: dev11.red-chesterfield.com privateZoneID: Z0180092I0DQRKL55LN0 publicZoneID: Z00206462VG6ZP0H2QLWK", "oc get -n <hosted_cluster_namespace> hostedcluster <hosted_cluster_name> -o yaml", "version: availableUpdates: - channels: - candidate-4.16 - candidate-4.17 - eus-4.16 - fast-4.16 - stable-4.16 image: quay.io/openshift-release-dev/ocp-release@sha256:b7517d13514c6308ae16c5fd8108133754eb922cd37403ed27c846c129e67a9a url: https://access.redhat.com/errata/RHBA-2024:6401 version: 4.16.11 - channels: - candidate-4.16 - candidate-4.17 - eus-4.16 - fast-4.16 - stable-4.16 image: quay.io/openshift-release-dev/ocp-release@sha256:d08e7c8374142c239a07d7b27d1170eae2b0d9f00ccf074c3f13228a1761c162 url: https://access.redhat.com/errata/RHSA-2024:6004 version: 4.16.10 - channels: - candidate-4.16 - candidate-4.17 - eus-4.16 - fast-4.16 - stable-4.16 image: quay.io/openshift-release-dev/ocp-release@sha256:6a80ac72a60635a313ae511f0959cc267a21a89c7654f1c15ee16657aafa41a0 url: https://access.redhat.com/errata/RHBA-2024:5757 version: 4.16.9 - channels: - candidate-4.16 - candidate-4.17 - eus-4.16 - fast-4.16 - stable-4.16 image: quay.io/openshift-release-dev/ocp-release@sha256:ea624ae7d91d3f15094e9e15037244679678bdc89e5a29834b2ddb7e1d9b57e6 url: https://access.redhat.com/errata/RHSA-2024:5422 version: 4.16.8 - channels: - candidate-4.16 - candidate-4.17 - eus-4.16 - fast-4.16 - stable-4.16 image: quay.io/openshift-release-dev/ocp-release@sha256:e4102eb226130117a0775a83769fe8edb029f0a17b6cbca98a682e3f1225d6b7 url: https://access.redhat.com/errata/RHSA-2024:4965 version: 4.16.6 - channels: - candidate-4.16 - candidate-4.17 - eus-4.16 - fast-4.16 - stable-4.16 image: quay.io/openshift-release-dev/ocp-release@sha256:f828eda3eaac179e9463ec7b1ed6baeba2cd5bd3f1dd56655796c86260db819b url: https://access.redhat.com/errata/RHBA-2024:4855 version: 4.16.5 conditionalUpdates: - conditions: - lastTransitionTime: \"2024-09-23T22:33:38Z\" message: |- Could not evaluate exposure to update risk SRIOVFailedToConfigureVF (creating PromQL round-tripper: unable to load specified CA cert /etc/tls/service-ca/service-ca.crt: open /etc/tls/service-ca/service-ca.crt: no such file or directory) SRIOVFailedToConfigureVF description: OCP Versions 4.14.34, 4.15.25, 4.16.7 and ALL subsequent versions include kernel datastructure changes which are not compatible with older versions of the SR-IOV operator. Please update SR-IOV operator to versions dated 20240826 or newer before updating OCP. SRIOVFailedToConfigureVF URL: https://issues.redhat.com/browse/NHE-1171 reason: EvaluationFailed status: Unknown type: Recommended release: channels: - candidate-4.16 - candidate-4.17 - eus-4.16 - fast-4.16 - stable-4.16 image: quay.io/openshift-release-dev/ocp-release@sha256:fb321a3f50596b43704dbbed2e51fdefd7a7fd488ee99655d03784d0cd02283f url: https://access.redhat.com/errata/RHSA-2024:5107 version: 4.16.7 risks: - matchingRules: - promql: promql: | group(csv_succeeded{_id=\"d6d42268-7dff-4d37-92cf-691bd2d42f41\", name=~\"sriov-network-operator[.].*\"}) or 0 * group(csv_count{_id=\"d6d42268-7dff-4d37-92cf-691bd2d42f41\"}) type: PromQL message: OCP Versions 4.14.34, 4.15.25, 4.16.7 and ALL subsequent versions include kernel datastructure changes which are not compatible with older versions of the SR-IOV operator. 
Please update SR-IOV operator to versions dated 20240826 or newer before updating OCP. name: SRIOVFailedToConfigureVF url: https://issues.redhat.com/browse/NHE-1171", "apiVersion: v1 data: server-version: 2f6cfe21a0861dea3130f3bed0d3ae5553b8c28b supported-versions: '{\"versions\":[\"4.17\",\"4.16\",\"4.15\",\"4.14\"]}' kind: ConfigMap metadata: creationTimestamp: \"2024-06-20T07:12:31Z\" labels: hypershift.openshift.io/supported-versions: \"true\" name: supported-versions namespace: hypershift resourceVersion: \"927029\" uid: f6336f91-33d3-472d-b747-94abae725f70", "Client Version: openshift/hypershift: fe67b47fb60e483fe60e4755a02b3be393256343. Latest supported OCP: 4.17.0 Server Version: 05864f61f24a8517731664f8091cedcfc5f9b60d Server Supports OCP Versions: 4.17, 4.16, 4.15, 4.14", "oc patch nodepool <node_pool_name> -n <hosted_cluster_namespace> --type=merge -p '{\"spec\":{\"nodeDrainTimeout\":\"60s\",\"release\":{\"image\":\"<openshift_release_image>\"}}}' 1 2", "oc get -n <hosted_cluster_namespace> nodepool <node_pool_name> -o yaml", "status: conditions: - lastTransitionTime: \"2024-05-20T15:00:40Z\" message: 'Using release image: quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64' 1 reason: AsExpected status: \"True\" type: ValidReleaseImage", "oc annotate hostedcluster -n <hosted_cluster_namespace> <hosted_cluster_name> \"hypershift.openshift.io/force-upgrade-to=<openshift_release_image>\" --overwrite 1 2", "oc patch hostedcluster <hosted_cluster_name> -n <hosted_cluster_namespace> --type=merge -p '{\"spec\":{\"release\":{\"image\":\"<openshift_release_image>\"}}}'", "oc get -n <hosted_cluster_namespace> hostedcluster <hosted_cluster_name> -o yaml", "status: conditions: - lastTransitionTime: \"2024-05-20T15:01:01Z\" message: Payload loaded version=\"4.y.z\" image=\"quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64\" 1 status: \"True\" type: ClusterVersionReleaseAccepted # version: availableUpdates: null desired: image: quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 2 version: 4.y.z" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/hosted_control_planes/updating-hosted-control-planes
Chapter 1. Configuring Argo CD RBAC
Chapter 1. Configuring Argo CD RBAC By default, if you are logged in to Argo CD using Red Hat SSO (RH SSO), you are a read-only user. You can change and manage the user level access. 1.1. Configuring user level access To manage and modify the user level access, configure the role-based access control (RBAC) section in the Argo CD custom resource (CR). Procedure Edit the argocd CR: USD oc edit argocd [argocd-instance-name] -n [namespace] Output metadata ... ... rbac: policy: 'g, rbacsystem:cluster-admins, role:admin' scopes: '[groups]' Add the policy configuration to the rbac section and add the name and the desired role to be applied to the user: metadata ... ... rbac: policy: g, <name>, role:<admin> scopes: '[groups]' Note Currently, RHSSO cannot read the group information of Red Hat OpenShift GitOps users. Therefore, configure the RBAC at the user level. 1.2. Modifying RHSSO resource requests/limits By default, the RHSSO container is created with resource requests and limits. You can change and manage the resource requests. Resource Requests Limits CPU 500m 1000m Memory 512 Mi 1024 Mi Procedure Modify the default resource requirements by patching the Argo CD custom resource (CR): USD oc -n openshift-gitops patch argocd openshift-gitops --type='json' -p='[{"op": "add", "path": "/spec/sso", "value": {"provider": "keycloak", "resources": {"requests": {"cpu": "512m", "memory": "512Mi"}, "limits": {"cpu": "1024m", "memory": "1024Mi"}} }}]' Note RHSSO created by Red Hat OpenShift GitOps persists only the changes that are made by the operator. If the RHSSO restarts, any additional configuration created by the Admin in RHSSO is deleted.
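As an alternative to the oc edit step in the RBAC procedure above, the same policy can be applied with a single patch. This is only a sketch: the instance name, namespace, and user name are placeholders, and it assumes the Argo CD CR exposes the RBAC settings under spec.rbac, as operator-managed instances normally do.

oc -n [namespace] patch argocd [argocd-instance-name] --type=merge -p '{"spec":{"rbac":{"policy":"g, user1@example.com, role:admin","scopes":"[groups]"}}}'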
[ "oc edit argocd [argocd-instance-name] -n [namespace]", "metadata rbac: policy: 'g, rbacsystem:cluster-admins, role:admin' scopes: '[groups]'", "metadata rbac: policy: g, <name>, role:<admin> scopes: '[groups]'", "oc -n openshift-gitops patch argocd openshift-gitops --type='json' -p='[{\"op\": \"add\", \"path\": \"/spec/sso\", \"value\": {\"provider\": \"keycloak\", \"resources\": {\"requests\": {\"cpu\": \"512m\", \"memory\": \"512Mi\"}, \"limits\": {\"cpu\": \"1024m\", \"memory\": \"1024Mi\"}} }}]'" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_gitops/1.12/html/access_control_and_user_management/configuring-argo-cd-rbac
Appendix B. The LVM Configuration Files
Appendix B. The LVM Configuration Files LVM supports multiple configuration files. At system startup, the lvm.conf configuration file is loaded from the directory specified by the environment variable LVM_SYSTEM_DIR , which is set to /etc/lvm by default. The lvm.conf file can specify additional configuration files to load. Settings in later files override settings from earlier ones. To display the settings in use after loading all the configuration files, execute the lvmconfig command. For information on loading additional configuration files, see Section D.2, "Host Tags" . B.1. The LVM Configuration Files The following files are used for LVM configuration: /etc/lvm/lvm.conf Central configuration file read by the tools. etc/lvm/lvm_ hosttag .conf For each host tag, an extra configuration file is read if it exists: lvm_ hosttag .conf . If that file defines new tags, then further configuration files will be appended to the list of files to read in. For information on host tags, see Section D.2, "Host Tags" . In addition to the LVM configuration files, a system running LVM includes the following files that affect LVM system setup: /etc/lvm/cache/.cache Device name filter cache file (configurable). /etc/lvm/backup/ Directory for automatic volume group metadata backups (configurable). /etc/lvm/archive/ Directory for automatic volume group metadata archives (configurable with regard to directory path and archive history depth). /var/lock/lvm/ In single-host configuration, lock files to prevent parallel tool runs from corrupting the metadata; in a cluster, cluster-wide DLM is used.
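To see which value the tools will actually use for a given setting after all of these files are merged, you can query it with the lvmconfig command mentioned above. A short sketch; the setting names are only examples:

# Show the values currently in effect for the metadata backup and archive directories
lvmconfig --type current backup/backup_dir backup/archive_dir

# Show the compiled-in default for the device filter, with explanatory comments
lvmconfig --type default --withcomments devices/filter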
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/logical_volume_manager_administration/config_file
Chapter 1. Support policy for Eclipse Temurin
Chapter 1. Support policy for Eclipse Temurin Red Hat will support select major versions of Eclipse Temurin in its products. For consistency, these versions remain similar to Oracle JDK versions that Oracle designates as long-term support (LTS). A major version of Eclipse Temurin will be supported for a minimum of six years from the time that version is first introduced. For more information, see the Eclipse Temurin Life Cycle and Support Policy . Note RHEL 6 reached the end of life in November 2020. Because of this, Eclipse Temurin does not support RHEL 6 as a supported configuration.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_eclipse_temurin_17.0.4/rn-openjdk-temurin-support-policy
Chapter 1. Workflow and architecture
Chapter 1. Workflow and architecture To install Metrics Store, complete the following major tasks: Create the Metrics Store virtual machines . Deploy Metrics Store services on Red Hat OpenShift . Configure networking for Metrics Store virtual machines . Deploy collectd and rsyslog . Verify the Metrics Store installation . Metrics Store architecture The Metrics Store architecture is based on the Red Hat OpenShift EFK logging stack , running on Red Hat OpenShift Container Platform 3.11 . Metrics Store uses the following services: collectd (hosts) collects metrics from hosts, virtual machines, and databases. rsyslog (hosts) collects metrics, adds log data, enriches the data with metadata, and sends the enriched data to Elasticsearch. Elasticsearch (Metrics Store virtual machine) stores and indexes the data. Kibana (Metrics Store virtual machine) analyzes and presents the data as dashboards and charts. Figure 1.1. Metrics Store architecture
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/metrics_store_installation_guide/metrics_store_installation_overview
Chapter 18. Security enhancements
Chapter 18. Security enhancements The following sections provide some suggestions to harden the security of your overcloud. 18.1. Using secure root user access The overcloud image automatically contains hardened security for the root user. For example, each deployed overcloud node automatically disables direct SSH access to the root user. You can still access the root user on overcloud nodes. Procedure Log in to the undercloud node as the stack user. Each overcloud node has a heat-admin user account. This user account contains the undercloud public SSH key, which provides SSH access without a password from the undercloud to the overcloud node. On the undercloud node, log in to the an overcloud node through SSH as the heat-admin user. Switch to the root user with sudo -i . 18.2. Managing the overcloud firewall Each of the core OpenStack Platform services contains firewall rules in their respective composable service templates. This automatically creates a default set of firewall rules for each overcloud node. The overcloud heat templates contain a set of parameters that can help with additional firewall management: ManageFirewall Defines whether to automatically manage the firewall rules. Set this parameter to true to allow Puppet to automatically configure the firewall on each node. Set to false if you want to manually manage the firewall. The default is true . PurgeFirewallRules Defines whether to purge the default Linux firewall rules before configuring new ones. The default is false . If you set the ManageFirewall parameter to true , you can create additional firewall rules on deployment. Set the tripleo::firewall::firewall_rules hieradata using a configuration hook (see Section 4.5, "Puppet: Customizing hieradata for roles" ) in an environment file for your overcloud. This hieradata is a hash containing the firewall rule names and their respective parameters as keys, all of which are optional: port The port associated to the rule. dport The destination port associated to the rule. sport The source port associated to the rule. proto The protocol associated to the rule. Defaults to tcp . action The action policy associated to the rule. Defaults to accept . jump The chain to jump to. If present, it overrides action . state An Array of states associated to the rule. Defaults to ['NEW'] . source The source IP address associated to the rule. iniface The network interface associated to the rule. chain The chain associated to the rule. Defaults to INPUT . destination The destination CIDR associated to the rule. The following example demonstrates the syntax of the firewall rule format: This applies two additional firewall rules to all nodes through ExtraConfig . Note Each rule name becomes the comment for the respective iptables rule. Each rule name starts with a three-digit prefix to help Puppet order all defined rules in the final iptables file. The default Red Hat OpenStack Platform rules use prefixes in the 000 to 200 range. 18.3. Changing the Simple Network Management Protocol (SNMP) strings Director provides a default read-only SNMP configuration for your overcloud. It is advisable to change the SNMP strings to mitigate the risk of unauthorized users learning about your network devices. Note When you configure the ExtraConfig interface with a string parameter, you must use the following syntax to ensure that heat and Hiera do not interpret the string as a Boolean value: '"<VALUE>"' . 
Set the following hieradata using the ExtraConfig hook in an environment file for your overcloud: SNMP traditional access control settings snmp::ro_community IPv4 read-only SNMP community string. The default value is public . snmp::ro_community6 IPv6 read-only SNMP community string. The default value is public . snmp::ro_network Network that is allowed to RO query the daemon. This value can be a string or an array. Default value is 127.0.0.1 . snmp::ro_network6 Network that is allowed to RO query the daemon with IPv6. This value can be a string or an array. The default value is ::1/128 . tripleo::profile::base::snmp::snmpd_config Array of lines to add to the snmpd.conf file as a safety valve. The default value is [] . See the SNMP Configuration File web page for all available options. For example: This changes the read-only SNMP community string on all nodes. SNMP view-based access control settings (VACM) snmp::com2sec An array of VACM com2sec mappings. Must provide SECNAME, SOURCE and COMMUNITY. snmp::com2sec6 An array of VACM com2sec6 mappings. Must provide SECNAME, SOURCE and COMMUNITY. For example: This changes the read-only SNMP community string on all nodes. For more information, see the snmpd.conf man page. 18.4. Changing the SSL/TLS cipher and rules for HAProxy If you enabled SSL/TLS in the overcloud, consider hardening the SSL/TLS ciphers and rules that are used with the HAProxy configuration. By hardening the SSL/TLS ciphers, you help avoid SSL/TLS vulnerabilities, such as the POODLE vulnerability . Create a heat template environment file called tls-ciphers.yaml : Use the ExtraConfig hook in the environment file to apply values to the tripleo::haproxy::ssl_cipher_suite and tripleo::haproxy::ssl_options hieradata: Note The cipher collection is one continuous line. Include the tls-ciphers.yaml environment file with the overcloud deploy command when deploying the overcloud: 18.5. Using the Open vSwitch firewall You can configure security groups to use the Open vSwitch (OVS) firewall driver in Red Hat OpenStack Platform director. Use the NeutronOVSFirewallDriver parameter to specify firewall driver that you want to use: iptables_hybrid - Configures the Networking service (neutron) to use the iptables/hybrid based implementation. openvswitch - Configures the Networking service to use the OVS firewall flow-based driver. The openvswitch firewall driver includes higher performance and reduces the number of interfaces and bridges used to connect guests to the project network. Important Multicast traffic is handled differently by the Open vSwitch (OVS) firewall driver than by the iptables firewall driver. With iptables, by default, VRRP traffic is denied, and you must enable VRRP in the security group rules for any VRRP traffic to reach an endpoint. With OVS, all ports share the same OpenFlow context, and multicast traffic cannot be processed individually per port. Because security groups do not apply to all ports (for example, the ports on a router), OVS uses the NORMAL action and forwards multicast traffic to all ports as specified by RFC 4541. Note The iptables_hybrid option is not compatible with OVS-DPDK. The openvswitch option is not compatible with OVS Hardware Offload. Configure the NeutronOVSFirewallDriver parameter in the network-environment.yaml file: NeutronOVSFirewallDriver: openvswitch NeutronOVSFirewallDriver : Configures the name of the firewall driver that you want to use when you implement security groups. Possible values depend on your system configuration. 
Some examples are noop , openvswitch , and iptables_hybrid . The default value of an empty string results in a supported configuration.
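To make the firewall parameters in section 18.2 more concrete, the following sketch shows an environment file that combines the dport and source keys described above. The file name, rule name, port, and network address are illustrative placeholders only, not values taken from this document:

cat > ~/templates/firewall-extra.yaml <<'EOF'
parameter_defaults:
  ExtraConfig:
    tripleo::firewall::firewall_rules:
      '302 allow node exporter from management net':
        dport: 9100
        proto: tcp
        source: 192.168.24.0/24
        action: accept
EOF

Include the file with -e ~/templates/firewall-extra.yaml on the openstack overcloud deploy command, in the same way as the tls-ciphers.yaml example in section 18.4.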
[ "ExtraConfig: tripleo::firewall::firewall_rules: '300 allow custom application 1': port: 999 proto: udp action: accept '301 allow custom application 2': port: 8081 proto: tcp action: accept", "parameter_defaults: ExtraConfig: snmp::ro_community: mysecurestring snmp::ro_community6: myv6securestring", "parameter_defaults: ExtraConfig: snmp::com2sec: [\"notConfigUser default mysecurestring\"] snmp::com2sec6: [\"notConfigUser default myv6securestring\"]", "touch ~/templates/tls-ciphers.yaml", "parameter_defaults: ExtraConfig: tripleo::haproxy::ssl_cipher_suite: 'DHE-RSA-AES128-CCM:DHE-RSA-AES256-CCM:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-CCM:ECDHE-ECDSA-AES256-CCM:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-CHACHA20-POLY1305' tripleo::haproxy::ssl_options: 'no-sslv3 no-tls-tickets'", "openstack overcloud deploy --templates -e /home/stack/templates/tls-ciphers.yaml", "NeutronOVSFirewallDriver: openvswitch" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/advanced_overcloud_customization/assembly_security-enhancements
Appendix B. Revision History
Appendix B. Revision History Revision History Revision 2.2-9 Mon Aug 05 2019 Marie Dolezelova Document version for 7.7 GA publication. Revision 2.2-6 Mon Jul 24 2017 Marie Dolezelova Document version for 7.4 GA publication. Revision 2.2-5 Tue Mar 21 2017 Milan Navratil Asynchronous update: the Tuned chapter rewrite Revision 2.0-2 Fri Oct 14 2016 Marie Dolezelova Version for 7.3 GA publication. Revision 2.0-1 Wed 11 Nov 2015 Jana Heves Version for 7.2 GA release. Revision 1-3 Fri 19 Jun 2015 Jacquelynn East Fixed incorrect package name in Core Infrastructure and Mechanics. Revision 1-2 Wed 18 Feb 2015 Jacquelynn East Version for 7.1 GA Revision 1-1 Thu Dec 4 2014 Jacquelynn East Version for 7.1 Beta Revision 1.0-9 Tue Jun 9 2014 Yoana Ruseva Version for 7.0 GA release Revision 0.9-1 Fri May 9 2014 Yoana Ruseva Rebuild for style changes. Revision 0.9-0 Wed May 7 2014 Yoana Ruseva Red Hat Enterprise Linux 7.0 release of the book for review purposes. Revision 0.1-1 Thu Jan 17 2013 Jack Reed Branched from the Red Hat Enterprise Linux 6 version of the document
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/power_management_guide/appe-publican-revision_history
Chapter 1. Introduction to Security in the Data Grid
Chapter 1. Introduction to Security in the Data Grid 1.1. Securing Data in Red Hat JBoss Data Grid In Red Hat JBoss Data Grid, data security can be implemented in the following ways: Role-based Access Control JBoss Data Grid features role-based access control for operations on designated secured caches. Roles can be assigned to users who access your application, with roles mapped to permissions for cache and cache-manager operations. Only authenticated users are able to perform the operations that are authorized for their role. In Library mode, data is secured via role-based access control for CacheManagers and Caches, with authentication delegated to the container or application. In Remote Client-Server mode, JBoss Data Grid is secured by passing identity tokens from the Hot Rod client to the server, and role-based access control of Caches and CacheManagers. Node Authentication and Authorization Node-level security requires new nodes or merging partitions to authenticate before joining a cluster. Only authenticated nodes that are authorized to join the cluster are permitted to do so. This provides data protection by preventing unauthorized servers from storing your data. Encrypted Communications Within the Cluster JBoss Data Grid increases data security by supporting encrypted communications between the nodes in a cluster by using a user-specified cryptography algorithm, as supported by Java Cryptography Architecture (JCA). JBoss Data Grid also provides audit logging for operations, and the ability to encrypt communication between the Hot Rod Client and Server using Transport Layer Security (TLS/SSL). Report a bug
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/security_guide/chap-introduction_to_security_in_the_data_grid
Chapter 2. Creating Red Hat Ansible Automation Platform backup resources
Chapter 2. Creating Red Hat Ansible Automation Platform backup resources Backing up your Red Hat Ansible Automation Platform deployment involves creating backup resources for your deployed automation hub and automation controller instances. Use these procedures to create backup resources for your Red Hat Ansible Automation Platform deployment. 2.1. Backing up the Automation controller deployment Use this procedure to back up a deployment of the controller, including jobs, inventories, and credentials. Prerequisites You must be authenticated with an OpenShift cluster. The Ansible Automation Platform Operator has been installed to the cluster. The automation controller is deployed by using the Ansible Automation Platform Operator. Procedure Log in to Red Hat OpenShift Container Platform . Navigate to Operators Installed Operators . Select the Ansible Automation Platform Operator installed on your project namespace. Select the Automation Controller Backup tab. Click Create AutomationControllerBackup . Enter a Name for the backup. Enter the Deployment name of the deployed Ansible Automation Platform instance being backed up. For example, if your automation controller must be backed up and the deployment name is aap-controller , enter 'aap-controller' in the Deployment name field. If you want to use a custom, pre-created pvc: Optionally enter the name of the Backup persistent volume claim . Optionally enter the Backup PVC storage requirements , and Backup PVC storage class . Note If no pvc or storage class is provided, the cluster's default storage class is used to create the pvc. If you have a large database, specify your storage requests accordingly under Backup management pod resource requirements . Note You can check the size of the existing postgres database data directory by running the following command inside the postgres pod. USD df -h | grep "/var/lib/pgsql/data" Click Create . A backup tarball of the specified deployment is created and available for data recovery or deployment rollback. Future backups are stored in separate tar files on the same pvc. Verification Log in to Red Hat OpenShift Container Platform . Navigate to Operators Installed Operators . Select the Ansible Automation Platform Operator installed on your project namespace. Select the AutomationControllerBackup tab. Select the backup resource you want to verify. Scroll to Conditions and check that the Successful status is True . Note If Successful is False , the backup has failed. Check the automation controller operator logs for the error to fix the issue. 2.2. Backing up the Automation hub deployment Use this procedure to back up a deployment of the hub, including all hosted Ansible content. Prerequisites You must be authenticated with an OpenShift cluster. The Ansible Automation Platform Operator has been installed to the cluster. The automation hub is deployed by using the Ansible Automation Platform Operator. Procedure Log in to Red Hat OpenShift Container Platform . Navigate to Operators Installed Operators . Select the Ansible Automation Platform Operator installed on your project namespace. Select the Automation Hub Backup tab. Click Create AutomationHubBackup . Enter a Name for the backup. Enter the Deployment name of the deployed Ansible Automation Platform instance being backed up. For example, if your automation hub must be backed up and the deployment name is aap-hub , enter 'aap-hub' in the Deployment name field.
If you want to use a custom, pre-created pvc: Optionally, enter the name of the Backup persistent volume claim , Backup persistent volume claim namespace , Backup PVC storage requirements , and Backup PVC storage class . Click Create . A backup of the specified deployment is created and available for data recovery or deployment rollback.
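If you prefer to request the controller backup from the command line instead of the web console, you can create the backup resource directly. The manifest below is only a sketch: the apiVersion and the exact field names are assumptions based on a typical Ansible Automation Platform Operator installation, the namespace is a placeholder, and the deployment name must match the Deployment name you would otherwise enter in the form.

oc apply -f - <<'EOF'
apiVersion: automationcontroller.ansible.com/v1beta1   # assumed API group/version
kind: AutomationControllerBackup
metadata:
  name: aap-controller-backup
  namespace: <your_operator_namespace>                  # placeholder namespace
spec:
  deployment_name: aap-controller                       # must match your deployed controller
EOF

The AutomationHubBackup resource follows the same pattern for the automation hub deployment.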
[ "df -h | grep \"/var/lib/pgsql/data\"" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/red_hat_ansible_automation_platform_operator_backup_and_recovery_guide/aap-backup
8.6. Multiple Caching Providers
8.6. Multiple Caching Providers Caching providers are obtained from javax.cache.Caching using the overloaded getCachingProvider() method; by default this method will attempt to load any META-INF/services/javax.cache.spi.CachingProvider files found in the classpath. If one is found it will determine the caching provider in use. With multiple caching providers available a specific provider may be selected using either of the following methods: getCachingProvider(ClassLoader classLoader) getCachingProvider(String fullyQualifiedClassName) To switch between caching providers ensure that the appropriate provider is available in the default classpath, or select it using one of the above methods. All javax.cache.spi.CachingProviders that are detected or have been loaded by the Caching class are maintained in an internal registry, and subsequent requests for the same caching provider will be returned from this registry instead of being reloaded or reinstantiating the caching provider implementation. To view the current caching providers either of the following methods may be used: getCachingProviders() - provides a list of caching providers in the default class loader. getCachingProviders(ClassLoader classLoader) - provides a list of caching providers in the specified class loader. Report a bug
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/developer_guide/section-multiple_caching_providers
Chapter 3. Authenticating API Calls
Chapter 3. Authenticating API Calls Interaction with the Satellite API requires SSL authentication with Satellite Server CA certificate and authentication with valid Satellite user credentials. This chapter outlines the authenticating methods you can use. 3.1. SSL Authentication Overview Red Hat Satellite uses HTTPS, which provides a degree of encryption and identity verification when communicating with a Red Hat Satellite Server. Satellite 6.11 does not support non-SSL communications. Each Red Hat Satellite Server uses a self-signed certificate. This certificate acts as both the server certificate to verify the encryption key and the certificate authority (CA) to trust the identity of Satellite Server. 3.1.1. Configuring SSL Authentication Use the following procedure to configure an SSL authentication for the API requests to Satellite Server. Procedure Obtain a certificate from the Satellite Server with which you want to communicate using one of the following options: If you execute the command from a remote server, obtain a certificate using SSH: If you execute the command directly on the Satellite Server, copy the certificate to the /etc/pki/ca-trust/source/anchors directory: Add the certificate to the list of trusted CAs: Verification Verify that the certificate is present in the NSS database by entering the API request without the --cacert option: 3.2. HTTP Authentication Overview All requests to the Satellite API require a valid Satellite user name and password. The API uses HTTP Basic Authentication to encode these credentials and add to the Authorization header. For more information about Basic Authentication, see RFC 2617 HTTP Authentication: Basic and Digest Access Authentication . If a request does not include an appropriate Authorization header, the API returns a 401 Authorization Required error Important Basic authentication involves potentially sensitive information, for example, it sends passwords as plain text. The REST API requires HTTPS for transport level encryption of plain text requests. Some base64 libraries break encoded credentials into multiple lines and terminate each line with a newline character. This invalidates the header and causes a faulty request. The Authorization header requires that the encoded credentials be on a single line within the header. 3.3. Personal Access Token Authentication Overview Red Hat Satellite supports Personal Access Tokens that you can use to authenticate API requests instead of using your password. You can set an expiration date for your Personal Access Token and you can revoke it if you decide it should expire before the expiration date. 3.3.1. Creating a Personal Access Token Use this procedure to create a Personal Access Token. Procedure In the Satellite web UI, navigate to Administer > Users . Select a user for which you want to create a Personal Access Token. On the Personal Access Tokens tab, click Add Personal Access Token . Enter a Name for you Personal Access Token. Optional: Select the Expires date to set an expiration date. If you do not set an expiration date, your Personal Access Token will never expire unless revoked. Click Submit. You now have the Personal Access Token available to you on the Personal Access Tokens tab. Important Ensure to store your Personal Access Token as you will not be able to access it again after you leave the page or create a new Personal Access Token. You can click Copy to clipboard to copy your Personal Access Token. 
Verification Make an API request to your Satellite Server and authenticate with your Personal Access Token: You should receive a response with status 200 , for example: If you go back to Personal Access Tokens tab, you can see the updated Last Used time to your Personal Access Token. 3.3.2. Revoking a Personal Access Token Use this procedure to revoke a Personal Access Token before its expiration date. Procedure In the Satellite web UI, navigate to Administer > Users . Select a user for which you want to revoke the Personal Access Token. On the Personal Access Tokens tab, locate the Personal Access Token you want to revoke. Click Revoke in the Actions column to the Personal Access Token you want to revoke. Verification Make an API request to your Satellite Server and try to authenticate with the revoked Personal Access Token: You receive the following error message: 3.4. OAuth Authentication Overview As an alternative to basic authentication, you can use limited OAuth 1.0 authentication. This is sometimes referred to as 1-legged OAuth in version 1.0a of the protocol. To view OAuth settings, in the Satellite web UI, navigate to Administer > Settings > Authentication . The OAuth consumer key is the token to be used by all OAuth clients. Satellite stores OAuth settings in the /etc/foreman/settings.yaml file. Use the satellite-installer script to configure these settings, because Satellite overwrites any manual changes to this file when upgrading. 3.4.1. Configuring OAuth To change the OAuth settings, enter the satellite-installer with the required options. Enter the following command to list all the OAuth related installer options: Enabling OAuth mapping By default, Satellite authorizes all OAuth API requests as the built-in anonymous API administrator account. Therefore, API responses include all Satellite data. However, you can also specify the Foreman user that makes the request and restrict access to data to that user. To enable OAuth user mapping, enter the following command: Important Satellite does not sign the header in an OAuth request. Anyone with a valid consumer key can impersonate any Foreman user. 3.4.2. OAuth Request Format Use an OAuth client library to construct all OAuth parameters. Every OAuth API request requires the FOREMAN-USER header with the login of an existing Foreman user and the Authorization header in the following format: Example This example lists architectures using OAuth for authentication. The request uses a sat_username username in the FOREMAN-USER header. With the --foreman-oauth-map-users set to true , the response includes only architectures that the user has access to view. The signature reflects every parameter, HTTP method, and URI change. Example request:
[ "scp root@ satellite.example.com :/var/www/html/pub/katello-server-ca.crt /etc/pki/ca-trust/source/anchors/ satellite.example.com -katello-server-ca.crt", "cp /var/www/html/pub/katello-server-ca.crt /etc/pki/ca-trust/source/anchors/ satellite.example.com -katello-server-ca.crt", "update-ca-trust extract", "curl --request GET --user sat_username:sat_password https:// satellite.example.com /api/v2/hosts", "curl https:// satellite.example.com /api/status --user My_Username : My_Personal_Access_Token", "{\"satellite_version\":\"6.11.0\",\"result\":\"ok\",\"status\":200,\"version\":\"3.5.1.10\",\"api_version\":2}", "curl https:// satellite.example.com /api/status --user My_Username : My_Personal_Access_Token", "{ \"error\": {\"message\":\"Unable to authenticate user My_Username \"} }", "satellite-installer --full-help | grep oauth", "satellite-installer --foreman-oauth-map-users true", "--header 'FOREMAN-USER: sat_username ' --header 'Authorization: OAuth oauth_version=\"1.0\",oauth_consumer_key=\" secretkey \",oauth_signature_method=\"hmac-sha1\",oauth_timestamp=1321473112,oauth_signature=Il8hR8/ogj/XVuOqMPB9qNjSy6E='", "curl 'https:// satellite.example.com /api/architectures' --header 'Content-Type: application/json' --header 'Accept:application/json' --header 'FOREMAN-USER: sat_username ' --header 'Authorization: OAuth oauth_version=\"1.0\",oauth_consumer_key=\" secretkey \",oauth_signature_method=\"hmac-sha1\",oauth_timestamp=1321473112,oauth_signature=Il8hR8/ogj/XVuOqMPB9qNjSy6E='" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/api_guide/chap-red_hat_satellite-api_guide-authenticating_api_calls
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. To provide feedback, you can highlight the text in a document and add comments. This section explains how to submit feedback. Prerequisites You are logged in to the Red Hat Customer Portal. In the Red Hat Customer Portal, view the document in HTML format. Procedure To provide your feedback, perform the following steps: Click the Feedback button in the top-right corner of the document to see existing feedback. Note The feedback feature is enabled only in the HTML format. Highlight the section of the document where you want to provide feedback. Click the Add Feedback pop-up that appears near the highlighted text. A text box appears in the feedback section on the right side of the page. Enter your feedback in the text box and click Submit . A documentation issue is created. To view the issue, click the issue tracker link in the feedback view.
null
https://docs.redhat.com/en/documentation/red_hat_support_for_spring_boot/2.7/html/dekorate_guide_for_spring_boot_developers/proc_providing-feedback-on-red-hat-documentation
12.8.4. Destroying the vHBA Storage Pool
12.8.4. Destroying the vHBA Storage Pool A vHBA storage pool can be destroyed by using the virsh pool-destroy command: Delete the vHBA with the following command: To verify that the pool and vHBA have been destroyed, run: scsi_host5 will no longer appear in the list of results.
[ "virsh pool-destroy vhbapool_host3", "virsh nodedev-destroy scsi_host5", "virsh nodedev-list --cap scsi_host" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-npiv_storage-destroying_vhba_pool
Chapter 3. Deploying OpenShift sandboxed containers workloads
Chapter 3. Deploying OpenShift sandboxed containers workloads You can install the OpenShift sandboxed containers Operator using either the web console or OpenShift CLI ( oc ). Before installing the OpenShift sandboxed containers Operator, you must prepare your OpenShift Container Platform cluster. 3.1. Prerequisites Before you install OpenShift sandboxed containers, ensure that your OpenShift Container Platform cluster meets the following requirements: Your cluster must be installed on bare-metal infrastructure, on premise with Red Hat Enterprise Linux CoreOS (RHCOS) workers. Important OpenShift sandboxed containers only supports RHCOS worker nodes. RHEL nodes are not supported. Nested virtualization is not supported. 3.1.1. Resource requirements for OpenShift sandboxed containers OpenShift sandboxed containers lets users run workloads on their OpenShift Container Platform clusters inside a sandboxed runtime (Kata). Each pod is represented by a virtual machine (VM). Each VM runs in a QEMU process and hosts a kata-agent process that acts as a supervisor for managing container workloads, and the processes running in those containers. Two additional processes add more overhead: containerd-shim-kata-v2 is used to communicate with the pod. virtiofsd handles host file system access on behalf of the guest. Each VM is configured with a default amount of memory. Additional memory is hot-plugged into the VM for containers that explicitly request memory. A container running without a memory resource consumes free memory until the total memory used by the VM reaches the default allocation. The guest and its I/O buffers also consume memory. If a container is given a specific amount of memory, then that memory is hot-plugged into the VM before the container starts. When a memory limit is specified, the workload is terminated if it consumes more memory than the limit. If no memory limit is specified, the kernel running on the VM might run out of memory. If the kernel runs out of memory, it might terminate other processes on the VM. Default memory sizes The following table lists some the default values for resource allocation. Resource Value Memory allocated by default to a virtual machine 2Gi Guest Linux kernel memory usage at boot ~110Mi Memory used by the QEMU process (excluding VM memory) ~30Mi Memory used by the virtiofsd process (excluding VM I/O buffers) ~10Mi Memory used by the containerd-shim-kata-v2 process ~20Mi File buffer cache data after running dnf install on Fedora ~300Mi* [1] File buffers appear and are accounted for in multiple locations: In the guest where it appears as file buffer cache. In the virtiofsd daemon that maps allowed user-space file I/O operations. In the QEMU process as guest memory. Note Total memory usage is properly accounted for by the memory utilization metrics, which only count that memory once. Pod overhead describes the amount of system resources that a pod on a node uses. You can get the current pod overhead for the Kata runtime by using oc describe runtimeclass kata as shown below. Example USD oc describe runtimeclass kata Example output kind: RuntimeClass apiVersion: node.k8s.io/v1 metadata: name: kata overhead: podFixed: memory: "500Mi" cpu: "500m" You can change the pod overhead by changing the spec.overhead field for a RuntimeClass . For example, if the configuration that you run for your containers consumes more than 350Mi of memory for the QEMU process and guest kernel data, you can alter the RuntimeClass overhead to suit your needs. 
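To see the overhead values that are currently applied before considering any change, you can read just the overhead stanza of the RuntimeClass instead of the full description. A brief sketch:

# Print only the fixed per-pod overhead defined by the kata RuntimeClass
oc get runtimeclass kata -o jsonpath='{.overhead.podFixed}'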
Note The specified default overhead values are supported by Red Hat. Changing default overhead values is not supported and can result in technical issues. When performing any kind of file system I/O in the guest, file buffers are allocated in the guest kernel. The file buffers are also mapped in the QEMU process on the host, as well as in the virtiofsd process. For example, if you use 300Mi of file buffer cache in the guest, both QEMU and virtiofsd appear to use 300Mi additional memory. However, the same memory is being used in all three cases. In other words, the total memory usage is only 300Mi, mapped in three different places. This is correctly accounted for when reporting the memory utilization metrics. Additional resources Installing a user-provisioned cluster on bare metal 3.2. Deploying OpenShift sandboxed containers workloads using the web console You can deploy OpenShift sandboxed containers workloads from the web console. First, you must install the OpenShift sandboxed containers Operator, then create the KataConfig custom resource (CR). Once you are ready to deploy a workload in a sandboxed container, you must manually add kata as the runtimeClassName to the workload YAML file. 3.2.1. Installing the OpenShift sandboxed containers Operator using the web console You can install the OpenShift sandboxed containers Operator from the OpenShift Container Platform web console. Prerequisites You have OpenShift Container Platform 4.9 installed. You have access to the cluster as a user with the cluster-admin role. Procedure From the Administrator perspective in the web console, navigate to Operators OperatorHub . In the Filter by keyword field, type OpenShift sandboxed containers . Select the OpenShift sandboxed containers tile. Read the information about the Operator and click Install . On the Install Operator page: Select preview-1.1 from the list of available Update Channel options. Verify that Operator recommended Namespace is selected for Installed Namespace . This installs the Operator in the mandatory openshift-sandboxed-containers-operator namespace. If this namespace does not yet exist, it is automatically created. Note Attempting to install the OpenShift sandboxed containers Operator in a namespace other than openshift-sandboxed-containers-operator causes the installation to fail. Verify that Automatic is selected for Approval Strategy . Automatic is the default value, and enables automatic updates to OpenShift sandboxed containers when a new z-stream release is available. Click Install . The OpenShift sandboxed containers Operator is now installed on your cluster. Verification From the Administrator perspective in the web console, navigate to Operators Installed Operators . Verify that the OpenShift sandboxed containers Operator is listed in the in operators list. 3.2.2. Creating the KataConfig custom resource in the web console You must create one KataConfig custom resource (CR) to enable installing kata as a RuntimeClass on your cluster nodes. Prerequisites You have installed OpenShift Container Platform 4.9 on your cluster. You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift sandboxed containers Operator. Note Kata is installed on all worker nodes by default. If you want to install kata as a RuntimeClass only on specific nodes, you can add labels to those nodes, then define the label in the KataConfig CR when you create it. 
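If you choose the label-based installation, the selected nodes must carry the label that you reference in kataConfigPoolSelector. A minimal sketch; the node name and the label key and value are placeholders that must match what you put in the KataConfig CR:

# Label only the workers that should receive the kata RuntimeClass
oc label node <node_name> <label_key>=<label_value>

# Confirm which nodes carry the label
oc get nodes -l <label_key>=<label_value>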
Procedure From the Administrator perspective in the web console, navigate to Operators Installed Operators . Select the OpenShift sandboxed containers Operator from the list of operators. In the KataConfig tab, click Create KataConfig . In the Create KataConfig page, select to configure the KataConfig CR via YAML view . Copy and paste the following manifest into the YAML view : apiVersion: kataconfiguration.openshift.io/v1 kind: KataConfig metadata: name: cluster-kataconfig If you want to install kata as a RuntimeClass only on selected nodes, include the label in the manifest: apiVersion: kataconfiguration.openshift.io/v1 kind: KataConfig metadata: name: cluster-kataconfig spec: kataConfigPoolSelector: matchLabels: <label_key>: '<label_value>' 1 1 Labels in kataConfigPoolSelector only support single values; nodeSelector syntax is not supported. Click Create . The new KataConfig CR is created and begins to install kata as a RuntimeClass on the worker nodes. Important OpenShift sandboxed containers installs Kata only as a secondary, optional runtime on the cluster and not as the primary runtime. Verification In the KataConfig tab, select the new KataConfig CR. In the KataConfig page, select the YAML tab. Monitor the installationStatus field in the status. A message appears each time there is an update. Click Reload to view the updated KataConfig CR. Once the value of Completed nodes equals the number of worker or labeled nodes, the installation is complete. The status also contains a list of nodes where the installation is completed. 3.2.3. Deploying a workload in a sandboxed container using the web console OpenShift sandboxed containers installs Kata as a secondary, optional runtime on your cluster, and not as the primary runtime. To deploy a pod-templated workload in a sandboxed container, you must manually add kata as the runtimeClassName to the workload YAML file. Prerequisites You have installed OpenShift Container Platform 4.9 on your cluster. You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift sandboxed containers Operator. You have created a KataConfig custom resource (CR). Procedure From the Administrator perspective in the web console, expand Workloads and select the type of workload you want to create. In the workload page, click to create the workload. In the YAML file for the workload, in the spec field where the container is listed, add runtimeClassName: kata . Example for Pod apiVersion: v1 kind: Pod metadata: name: example labels: app: httpd namespace: openshift-sandboxed-containers-operator spec: runtimeClassName: kata containers: - name: httpd image: 'image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest' ports: - containerPort: 8080 Example for Deployment apiVersion: apps/v1 kind: Deployment metadata: name: example namespace: openshift-sandboxed-containers-operator spec: selector: matchLabels: app: httpd replicas: 3 template: metadata: labels: app: httpd spec: runtimeClassName: kata containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest ports: - containerPort: 8080 Click Save . OpenShift Container Platform creates the workload and begins scheduling it. 3.3. Deploying OpenShift sandboxed containers workloads using the CLI You can deploy OpenShift sandboxed containers workloads using the CLI. First, you must install the OpenShift sandboxed containers Operator, then create the KataConfig custom resource. 
Once you are ready to deploy a workload in a sandboxed container, you must add kata as the runtimeClassName to the workload YAML file. 3.3.1. Installing the OpenShift sandboxed containers Operator using the CLI You can install the OpenShift sandboxed containers Operator using the OpenShift Container Platform CLI. Prerequisites You have OpenShift Container Platform 4.9 installed on your cluster. You have installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-admin role. You have subscribed to the OpenShift sandboxed containers catalog. Note Subscribing to the OpenShift sandboxed containers catalog provides openshift-sandboxed-containers-operator namespace access to the OpenShift sandboxed containers Operator. Procedure Create the Namespace object for the OpenShift sandboxed containers Operator. Create a Namespace object YAML file that contains the following manifest: apiVersion: v1 kind: Namespace metadata: name: openshift-sandboxed-containers-operator Create the Namespace object: USD oc create -f Namespace.yaml Create the OperatorGroup object for the OpenShift sandboxed containers Operator. Create an OperatorGroup object YAML file that contains the following manifest: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-sandboxed-containers-operator namespace: openshift-sandboxed-containers-operator spec: targetNamespaces: - openshift-sandboxed-containers-operator Create the OperatorGroup object: USD oc create -f OperatorGroup.yaml Create the Subscription object to subscribe the Namespace to the OpenShift sandboxed containers Operator. Create a Subscription object YAML file that contains the following manifest: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-sandboxed-containers-operator namespace: openshift-sandboxed-containers-operator spec: channel: "preview-1.1" installPlanApproval: Automatic name: sandboxed-containers-operator source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: sandboxed-containers-operator.v1.1.0 Create the Subscription object: USD oc create -f Subscription.yaml The OpenShift sandboxed containers Operator is now installed on your cluster. Note All the object file names listed above are suggestions. You can create the object YAML files using other names. Verification Ensure that the Operator is correctly installed: USD oc get csv -n openshift-sandboxed-containers-operator Example output NAME DISPLAY VERSION REPLACES PHASE openshift-sandboxed-containers openshift-sandboxed-containers-operator 1.1.0 1.0.2 Succeeded Additional resources Installing from OperatorHub using the CLI 3.3.2. Creating the KataConfig custom resource using the CLI You must create one KataConfig custom resource (CR) to install kata as a RuntimeClass on your nodes. Creating the KataConfig CR triggers the OpenShift sandboxed containers Operator to do the following: Install the needed RHCOS extensions, such as QEMU and kata-containers , on your RHCOS node. Ensure that the CRI-O runtime is configured with the correct kata runtime handlers. Create a RuntimeClass CR named kata with a default configuration. This enables users to configure workloads to use kata as the runtime by referencing the CR in the RuntimeClassName field. This CR also specifies the resource overhead for the runtime. Note Kata is installed on all worker nodes by default. 
If you want to install kata as a RuntimeClass only on specific nodes, you can add labels to those nodes, then define the label in the KataConfig CR when you create it. Prerequisites You have installed OpenShift Container Platform 4.9 on your cluster. You have installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift sandboxed containers Operator. Procedure Create a YAML file with the following manifest: apiVersion: kataconfiguration.openshift.io/v1 kind: KataConfig metadata: name: cluster-kataconfig (Optional) If you want to install kata as a RuntimeClass only on selected nodes, create a YAML file that includes the label in the manifest: apiVersion: kataconfiguration.openshift.io/v1 kind: KataConfig metadata: name: cluster-kataconfig spec: kataConfigPoolSelector: matchLabels: <label_key>: '<label_value>' 1 1 Labels in kataConfigPoolSelector only support single values; nodeSelector syntax is not supported. Create the KataConfig resource: USD oc create -f <file name>.yaml The new KataConfig CR is created and begins to install kata as a RuntimeClass on the worker nodes. Important OpenShift sandboxed containers installs Kata only as a secondary, optional runtime on the cluster and not as the primary runtime. Verification Monitor the installation progress: USD watch "oc describe kataconfig | sed -n /^Status:/,/^Events/p" Once the value of Is In Progress appears as false , the installation is complete. Additional resources Understanding how to update labels on nodes 3.3.3. Deploying a workload in a sandboxed container using the CLI OpenShift sandboxed containers installs Kata as a secondary, optional runtime on your cluster, and not as the primary runtime. To deploy a pod-templated workload in a sandboxed container, you must add kata as the runtimeClassName to the workload YAML file. Prerequisites You have installed OpenShift Container Platform 4.9 on your cluster. You have installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift sandboxed containers Operator. You have created a KataConfig custom resource (CR). Procedure Add runtimeClassName: kata to any pod-templated object: Pod objects ReplicaSet objects ReplicationController objects StatefulSet objects Deployment objects DeploymentConfig objects Example for Pod objects apiVersion: v1 kind: Pod metadata: name: mypod spec: runtimeClassName: kata Example for Deployment objects apiVersion: apps/v1 kind: Deployment metadata: name: mypod labels: app: mypod spec: replicas: 3 selector: matchLabels: app: mypod template: metadata: labels: app: mypod spec: runtimeClassName: kata containers: - name: mypod image: myImage OpenShift Container Platform creates the workload and begins scheduling it. Verification Inspect the runtimeClassName field on a pod-templated object. If the runtimeClassName is kata , then the workload is running on OpenShift sandboxed containers. 3.4. Additional resources The OpenShift sandboxed containers Operator is supported in a restricted network environment. For more information, see Using Operator Lifecycle Manager on restricted networks . When using a disconnected cluster on a restricted network, you must configure proxy support in Operator Lifecycle Manager to access the OperatorHub. Using a proxy allows the cluster to fetch the OpenShift sandboxed containers Operator.
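The following commands are a minimal sketch, not part of the original procedure, that illustrate two of the steps described above; the node name, pod name, namespace, and label values are placeholder assumptions:

oc label node <node_name> <label_key>=<label_value>

oc get pod <pod_name> -n <namespace> -o jsonpath='{.spec.runtimeClassName}'

The first command adds the label that a kataConfigPoolSelector can match when you install kata as a RuntimeClass on selected nodes only. The second command prints the runtimeClassName of a running pod; a value of kata indicates that the workload was admitted with the kata runtime.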
[ "oc describe runtimeclass kata", "kind: RuntimeClass apiVersion: node.k8s.io/v1 metadata: name: kata overhead: podFixed: memory: \"500Mi\" cpu: \"500m\"", "apiVersion: kataconfiguration.openshift.io/v1 kind: KataConfig metadata: name: cluster-kataconfig", "apiVersion: kataconfiguration.openshift.io/v1 kind: KataConfig metadata: name: cluster-kataconfig spec: kataConfigPoolSelector: matchLabels: <label_key>: '<label_value>' 1", "apiVersion: v1 kind: Pod metadata: name: example labels: app: httpd namespace: openshift-sandboxed-containers-operator spec: runtimeClassName: kata containers: - name: httpd image: 'image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest' ports: - containerPort: 8080", "apiVersion: apps/v1 kind: Deployment metadata: name: example namespace: openshift-sandboxed-containers-operator spec: selector: matchLabels: app: httpd replicas: 3 template: metadata: labels: app: httpd spec: runtimeClassName: kata containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest ports: - containerPort: 8080", "apiVersion: v1 kind: Namespace metadata: name: openshift-sandboxed-containers-operator", "oc create -f Namespace.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-sandboxed-containers-operator namespace: openshift-sandboxed-containers-operator spec: targetNamespaces: - openshift-sandboxed-containers-operator", "oc create -f OperatorGroup.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-sandboxed-containers-operator namespace: openshift-sandboxed-containers-operator spec: channel: \"preview-1.1\" installPlanApproval: Automatic name: sandboxed-containers-operator source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: sandboxed-containers-operator.v1.1.0", "oc create -f Subscription.yaml", "oc get csv -n openshift-sandboxed-containers-operator", "NAME DISPLAY VERSION REPLACES PHASE openshift-sandboxed-containers openshift-sandboxed-containers-operator 1.1.0 1.0.2 Succeeded", "apiVersion: kataconfiguration.openshift.io/v1 kind: KataConfig metadata: name: cluster-kataconfig", "apiVersion: kataconfiguration.openshift.io/v1 kind: KataConfig metadata: name: cluster-kataconfig spec: kataConfigPoolSelector: matchLabels: <label_key>: '<label_value>' 1", "oc create -f <file name>.yaml", "watch \"oc describe kataconfig | sed -n /^Status:/,/^Events/p\"", "apiVersion: v1 kind: Pod metadata: name: mypod spec: runtimeClassName: kata", "apiVersion: apps/v1 kind: Deployment metadata: name: mypod labels: app: mypod spec: replicas: 3 selector: matchLabels: app: mypod template: metadata: labels: app: mypod spec: runtimeClassName: kata containers: - name: mypod image: myImage" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/sandboxed_containers_support_for_openshift/deploying-sandboxed-containers-workloads
7.5. Configuring System Services for SSSD
7.5. Configuring System Services for SSSD SSSD provides interfaces towards several system services. Most notably: Name Service Switch (NSS) See Section 7.5.1, "Configuring Services: NSS" . Pluggable Authentication Modules (PAM) See Section 7.5.2, "Configuring Services: PAM" . OpenSSH See Configuring SSSD to Provide a Cache for the OpenSSH Services in the Linux Domain Identity, Authentication, and Policy Guide . autofs See Section 7.5.3, "Configuring Services: autofs " . sudo See Section 7.5.4, "Configuring Services: sudo " . 7.5.1. Configuring Services: NSS How SSSD Works with NSS The Name Service Switch (NSS) service maps system identities and services with configuration sources: it provides a central configuration store where services can look up sources for various configuration and name resolution mechanisms. SSSD can use NSS as a provider for several types of NSS maps. Most notably: User information (the passwd map) Groups (the groups map) Netgroups (the netgroups map) Services (the services map) Prerequisites Install SSSD. Configure NSS Services to Use SSSD Use the authconfig utility to enable SSSD: This updates the /etc/nsswitch.conf file to enable the following NSS maps to use SSSD: Open /etc/nsswitch.conf and add sss to the services map line: Configure SSSD to work with NSS Open the /etc/sssd/sssd.conf file. In the [sssd] section, make sure that NSS is listed as one of the services that works with SSSD. In the [nss] section, configure how SSSD interacts with NSS. For example: For a complete list of available options, see NSS configuration options in the sssd.conf (5) man page. Restart SSSD. Test That the Integration Works Correctly Display information about a user with these commands: id user getent passwd user 7.5.2. Configuring Services: PAM Warning A mistake in the PAM configuration file can lock users out of the system completely. Always back up the configuration files before performing any changes, and keep a session open so that you can revert any changes. Configure PAM to Use SSSD Use the authconfig utility to enable SSSD: This updates the PAM configuration to reference the SSSD modules, usually in the /etc/pam.d/system-auth and /etc/pam.d/password-auth files. For example: For details, see the pam.conf (5) or pam (8) man pages. Configure SSSD to work with PAM Open the /etc/sssd/sssd.conf file. In the [sssd] section, make sure that PAM is listed as one of the services that works with SSSD. In the [pam] section, configure how SSSD interacts with PAM. For example: For a complete list of available options, see PAM configuration options in the sssd.conf (5) man page. Restart SSSD. Test That the Integration Works Correctly Try logging in as a user. Use the sssctl user-checks user_name auth command to check your SSSD configuration. For details, use the sssctl user-checks --help command. 7.5.3. Configuring Services: autofs How SSSD Works with automount The automount utility can mount and unmount NFS file systems automatically (on-demand mounting), which saves system resources. For details on automount , see autofs in the Storage Administration Guide . You can configure automount to point to SSSD. In this setup: When a user attempts to mount a directory, SSSD contacts LDAP to obtain the required information about the current automount configuration. SSSD stores the information required by automount in a cache, so that users can mount directories even when the LDAP server is offline. Configure autofs to Use SSSD Install the autofs package. Open the /etc/nsswitch.conf file. 
On the automount line, change the location where to look for the automount map information from ldap to sss : Configure SSSD to work with autofs Open the /etc/sssd/sssd.conf file. In the [sssd] section, add autofs to the list of services that SSSD manages. Create a new [autofs] section. You can leave it empty. For a list of available options, see AUTOFS configuration options in the sssd.conf (5) man page. Make sure an LDAP domain is available in sssd.conf , so that SSSD can read the automount information from LDAP. See Section 7.3.2, "Configuring an LDAP Domain for SSSD" . The [domain] section of sssd.conf accepts several autofs -related options. For example: For a complete list of available options, see DOMAIN SECTIONS in the sssd.conf (5) man page. If you do not provide additional autofs options, the configuration depends on the identity provider settings. Restart SSSD. Test the Configuration Use the automount -m command to print the maps from SSSD. 7.5.4. Configuring Services: sudo How SSSD Works with sudo The sudo utility gives administrative access to specified users. For more information about sudo , see The sudo utility documentation in the System Administrator's Guide . You can configure sudo to point to SSSD. In this setup: When a user attempts a sudo operation, SSSD contacts LDAP or AD to obtain the required information about the current sudo configuration. SSSD stores the sudo information in a cache, so that users can perform sudo operations even when the LDAP or AD server is offline. SSSD only caches sudo rules which apply to the local system, depending on the value of the sudoHost attribute. See the sssd-sudo (5) man page for details. Configure sudo to Use SSSD Open the /etc/nsswitch.conf file. Add SSSD to the list on the sudoers line. Configure SSSD to work with sudo Open the /etc/sssd/sssd.conf file. In the [sssd] section, add sudo to the list of services that SSSD manages. Create a new [sudo] section. You can leave it empty. For a list of available options, see SUDO configuration options in the sssd.conf (5) man page. Make sure an LDAP or AD domain is available in sssd.conf , so that SSSD can read the sudo information from the directory. For details, see: Section 7.3.2, "Configuring an LDAP Domain for SSSD" the Using Active Directory as an Identity Provider for SSSD section in the Windows Integration Guide . The [domain] section for the LDAP or AD domain must include these sudo -related parameters: Note Setting Identity Management or AD as the ID provider automatically enables the sudo provider. In this situation, it is not necessary to specify the sudo_provider parameter. For a complete list of available options, see DOMAIN SECTIONS in the sssd.conf (5) man page. For options available for a sudo provider, see the sssd-ldap (5) man page. Restart SSSD. If you use AD as the provider, you must extend the AD schema to support sudo rules. For details, see the sudo documentation. For details about providing sudo rules in LDAP or AD, see the sudoers.ldap (5) man page.
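As a consolidated illustration of the sections above, the following is a minimal sssd.conf sketch, not taken from this guide, that enables the NSS, PAM, autofs, and sudo services against a single LDAP domain; the domain name LDAP and the example.com values are placeholder assumptions:

[sssd]
# services that SSSD manages
services = nss, pam, autofs, sudo
domains = LDAP

[domain/LDAP]
id_provider = ldap
ldap_uri = ldap://ldapserver.example.com
ldap_search_base = dc=example,dc=com
# read automount maps and sudo rules from the same directory
autofs_provider = ldap
sudo_provider = ldap
ldap_sudo_search_base = ou=sudoers,dc=example,dc=com

After editing the file, restart SSSD with systemctl restart sssd.service so that the changes take effect.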
[ "yum install sssd", "authconfig --enablesssd --update", "passwd: files sss shadow: files sss group: files sss netgroup: files sss", "services: files sss", "[sssd] [... file truncated ...] services = nss , pam", "[nss] filter_groups = root filter_users = root entry_cache_timeout = 300 entry_cache_nowait_percentage = 75", "systemctl restart sssd.service", "authconfig --enablesssdauth --update", "[... file truncated ...] auth required pam_env.so auth sufficient pam_unix.so nullok try_first_pass auth requisite pam_succeed_if.so uid >= 500 quiet auth sufficient pam_sss.so use_first_pass auth required pam_deny.so [... file truncated ...]", "[sssd] [... file truncated ...] services = nss, pam", "[pam] offline_credentials_expiration = 2 offline_failed_login_attempts = 3 offline_failed_login_delay = 5", "systemctl restart sssd.service", "yum install autofs", "automount: files sss", "[sssd] services = nss,pam, autofs", "[autofs]", "[domain/LDAP] [... file truncated ...] autofs_provider=ldap ldap_autofs_search_base=cn=automount,dc=example,dc=com ldap_autofs_map_object_class=automountMap ldap_autofs_entry_object_class=automount ldap_autofs_map_name=automountMapName ldap_autofs_entry_key=automountKey ldap_autofs_entry_value=automountInformation", "systemctl restart sssd.service", "sudoers: files sss", "[sssd] services = nss,pam, sudo", "[sudo]", "[domain/ LDAP_or_AD_domain ] sudo_provider = ldap ldap_sudo_search_base = ou=sudoers,dc= example ,dc= com", "systemctl restart sssd.service" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/system-level_authentication_guide/Configuring_Services
Monitoring
Monitoring OpenShift Container Platform 4.18 Configuring and using the monitoring stack in OpenShift Container Platform Red Hat OpenShift Documentation Team
[ "curl http://<example_app_endpoint>/metrics", "HELP http_requests_total Count of all HTTP requests TYPE http_requests_total counter http_requests_total{code=\"200\",method=\"get\"} 4 http_requests_total{code=\"404\",method=\"get\"} 2 HELP version Version information about this binary TYPE version gauge version{version=\"v0.1.0\"} 1", "Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing.", "oc -n openshift-monitoring get configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: |", "oc apply -f cluster-monitoring-config.yaml", "oc adm policy add-role-to-user <role> <user> -n <namespace> --role-namespace <namespace> 1", "oc adm policy add-cluster-role-to-user <cluster-role> <user> -n <namespace> 1", "oc label nodes <node_name> <node_label> 1", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | # <component>: 1 nodeSelector: <node_label_1> 2 <node_label_2> 3 #", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: tolerations: <toleration_specification>", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | alertmanagerMain: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoSchedule\"", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: |- prometheusK8s: enforcedBodySizeLimit: 40MB 1", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | alertmanagerMain: resources: limits: cpu: 500m memory: 1Gi requests: cpu: 200m memory: 500Mi prometheusK8s: resources: limits: cpu: 500m memory: 3Gi requests: cpu: 200m memory: 500Mi thanosQuerier: resources: limits: cpu: 500m memory: 1Gi requests: cpu: 200m memory: 500Mi prometheusOperator: resources: limits: cpu: 500m memory: 1Gi requests: cpu: 200m memory: 500Mi metricsServer: resources: requests: cpu: 10m memory: 50Mi limits: cpu: 50m memory: 500Mi kubeStateMetrics: resources: limits: cpu: 500m memory: 1Gi requests: cpu: 200m memory: 500Mi telemeterClient: resources: limits: cpu: 500m memory: 1Gi requests: cpu: 200m memory: 500Mi openshiftStateMetrics: resources: limits: cpu: 500m memory: 1Gi requests: cpu: 200m memory: 500Mi nodeExporter: resources: limits: cpu: 50m memory: 150Mi requests: cpu: 20m memory: 50Mi monitoringPlugin: resources: limits: cpu: 500m memory: 1Gi requests: cpu: 200m memory: 500Mi prometheusOperatorAdmissionWebhook: resources: limits: cpu: 50m memory: 100Mi requests: cpu: 20m memory: 50Mi", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: collectionProfile: <metrics_collection_profile_name> 1", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | 
prometheusK8s: collectionProfile: minimal", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: 1 topologySpreadConstraints: - maxSkew: <n> 2 topologyKey: <key> 3 whenUnsatisfiable: <value> 4 labelSelector: 5 <match_option>", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: topologySpreadConstraints: - maxSkew: 1 topologyKey: monitoring whenUnsatisfiable: DoNotSchedule labelSelector: matchLabels: app.kubernetes.io/name: prometheus", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: 1 volumeClaimTemplate: spec: storageClassName: <storage_class> 2 resources: requests: storage: <amount_of_storage> 3", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: volumeClaimTemplate: spec: storageClassName: my-storage-class resources: requests: storage: 40Gi", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: 1 volumeClaimTemplate: spec: resources: requests: storage: <amount_of_storage> 2", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: volumeClaimTemplate: spec: resources: requests: storage: 100Gi", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: retention: <time_specification> 1 retentionSize: <size_specification> 2", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: retention: 24h retentionSize: 10GB", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | metricsServer: audit: profile: Request", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: 1 logLevel: <log_level> 2", "oc -n openshift-monitoring get deploy prometheus-operator -o yaml | grep \"log-level\"", "- --log-level=debug", "oc -n openshift-monitoring get pods", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: queryLogFile: <path> 1", "oc -n openshift-monitoring get pods", "prometheus-operator-567c9bc75c-96wkj 2/2 Running 0 62m prometheus-k8s-0 6/6 Running 1 57m prometheus-k8s-1 6/6 Running 1 57m thanos-querier-56c76d7df4-2xkpc 6/6 Running 0 57m thanos-querier-56c76d7df4-j5p29 6/6 Running 0 57m", "oc -n openshift-monitoring exec prometheus-k8s-0 -- cat <path>", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | 
thanosQuerier: enableRequestLogging: <value> 1 logLevel: <value> 2", "oc -n openshift-monitoring get pods", "token=`oc create token prometheus-k8s -n openshift-monitoring` oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -k -H \"Authorization: Bearer USDtoken\" 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_version'", "oc -n openshift-monitoring logs <thanos_querier_pod_name> -c thanos-query", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" 1 <endpoint_authentication_credentials> 2", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" <endpoint_authentication_credentials> writeRelabelConfigs: - <your_write_relabel_configs> 1", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" writeRelabelConfigs: - sourceLabels: [__name__] regex: 'my_metric' action: keep", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" writeRelabelConfigs: - sourceLabels: [__name__,namespace] regex: '(my_metric_1|my_metric_2);my_namespace' action: keep", "apiVersion: v1 kind: Secret metadata: name: sigv4-credentials namespace: openshift-monitoring stringData: accessKey: <AWS_access_key> 1 secretKey: <AWS_secret_key> 2 type: Opaque", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://authorization.example.com/api/write\" sigv4: region: <AWS_region> 1 accessKey: name: sigv4-credentials 2 key: accessKey 3 secretKey: name: sigv4-credentials 4 key: secretKey 5 profile: <AWS_profile_name> 6 roleArn: <AWS_role_arn> 7", "apiVersion: v1 kind: Secret metadata: name: rw-basic-auth namespace: openshift-monitoring stringData: user: <basic_username> 1 password: <basic_password> 2 type: Opaque", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://basicauth.example.com/api/write\" basicAuth: username: name: rw-basic-auth 1 key: user 2 password: name: rw-basic-auth 3 key: password 4", "apiVersion: v1 kind: Secret metadata: name: rw-bearer-auth namespace: openshift-monitoring stringData: token: <authentication_token> 1 type: Opaque", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | enableUserWorkload: true prometheusK8s: remoteWrite: - url: \"https://authorization.example.com/api/write\" authorization: type: Bearer 1 credentials: name: rw-bearer-auth 2 key: token 3", "apiVersion: v1 kind: Secret metadata: name: oauth2-credentials namespace: openshift-monitoring stringData: id: <oauth2_id> 1 secret: <oauth2_secret> 2 type: Opaque", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: 
remoteWrite: - url: \"https://test.example.com/api/write\" oauth2: clientId: secret: name: oauth2-credentials 1 key: id 2 clientSecret: name: oauth2-credentials 3 key: secret 4 tokenUrl: https://example.com/oauth2/token 5 scopes: 6 - <scope_1> - <scope_2> endpointParams: 7 param1: <parameter_1> param2: <parameter_2>", "apiVersion: v1 kind: Secret metadata: name: mtls-bundle namespace: openshift-monitoring data: ca.crt: <ca_cert> 1 client.crt: <client_cert> 2 client.key: <client_key> 3 type: tls", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" tlsConfig: ca: secret: name: mtls-bundle 1 key: ca.crt 2 cert: secret: name: mtls-bundle 3 key: client.crt 4 keySecret: name: mtls-bundle 5 key: client.key 6", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" <endpoint_authentication_credentials> queueConfig: capacity: 10000 1 minShards: 1 2 maxShards: 50 3 maxSamplesPerSend: 2000 4 batchSendDeadline: 5s 5 minBackoff: 30ms 6 maxBackoff: 5s 7 retryOnRateLimit: false 8 sampleAgeLimit: 0s 9", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" <endpoint_authentication_credentials> writeRelabelConfigs: 1 - <relabel_config> 2", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" writeRelabelConfigs: - sourceLabels: - __tmp_openshift_cluster_id__ 1 targetLabel: cluster_id 2 action: replace 3", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: additionalAlertmanagerConfigs: - <alertmanager_specification> 1", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: additionalAlertmanagerConfigs: - scheme: https pathPrefix: / timeout: \"30s\" apiVersion: v1 bearerToken: name: alertmanager-bearer-token key: token tlsConfig: key: name: alertmanager-tls key: tls.key cert: name: alertmanager-tls key: tls.crt ca: name: alertmanager-tls key: tls.ca staticConfigs: - external-alertmanager1-remote.com - external-alertmanager1-remote2.com", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | alertmanagerMain: enabled: false", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | alertmanagerMain: secrets: 1 - <secret_name_1> 2 - <secret_name_2>", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | alertmanagerMain: secrets: - test-secret-basic-auth - test-secret-api-token", "oc -n openshift-monitoring edit configmap 
cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: externalLabels: <key>: <value> 1", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: externalLabels: region: eu environment: prod", "oc -n openshift-monitoring get secret alertmanager-main --template='{{ index .data \"alertmanager.yaml\" }}' | base64 --decode > alertmanager.yaml", "global: resolve_timeout: 5m http_config: proxy_from_environment: true 1 route: group_wait: 30s 2 group_interval: 5m 3 repeat_interval: 12h 4 receiver: default routes: - matchers: - \"alertname=Watchdog\" repeat_interval: 2m receiver: watchdog - matchers: - \"service=<your_service>\" 5 routes: - matchers: - <your_matching_rules> 6 receiver: <receiver> 7 receivers: - name: default - name: watchdog - name: <receiver> <receiver_configuration> 8", "global: resolve_timeout: 5m http_config: proxy_from_environment: true route: group_wait: 30s group_interval: 5m repeat_interval: 12h receiver: default routes: - matchers: - \"alertname=Watchdog\" repeat_interval: 2m receiver: watchdog - matchers: 1 - \"service=example-app\" routes: - matchers: - \"severity=critical\" receiver: team-frontend-page receivers: - name: default - name: watchdog - name: team-frontend-page pagerduty_configs: - service_key: \"<your_key>\" http_config: 2 proxy_from_environment: true authorization: credentials: xxxxxxxxxx", "oc -n openshift-monitoring create secret generic alertmanager-main --from-file=alertmanager.yaml --dry-run=client -o=yaml | oc -n openshift-monitoring replace secret --filename=-", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | enableUserWorkload: true 1", "oc -n openshift-user-workload-monitoring get pod", "NAME READY STATUS RESTARTS AGE prometheus-operator-6f7b748d5b-t7nbg 2/2 Running 0 3h prometheus-user-workload-0 4/4 Running 1 3h prometheus-user-workload-1 4/4 Running 1 3h thanos-ruler-user-workload-0 3/3 Running 0 3h thanos-ruler-user-workload-1 3/3 Running 0 3h", "oc -n openshift-user-workload-monitoring adm policy add-role-to-user user-workload-monitoring-config-edit <user> --role-namespace openshift-user-workload-monitoring", "oc describe rolebinding <role_binding_name> -n openshift-user-workload-monitoring", "oc describe rolebinding user-workload-monitoring-config-edit -n openshift-user-workload-monitoring", "Name: user-workload-monitoring-config-edit Labels: <none> Annotations: <none> Role: Kind: Role Name: user-workload-monitoring-config-edit Subjects: Kind Name Namespace ---- ---- --------- User user1 1", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | # alertmanagerMain: enableUserAlertmanagerConfig: true 1 #", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | alertmanager: enabled: true 1 enableAlertmanagerConfig: true 2", "oc -n openshift-user-workload-monitoring get alertmanager", "NAME VERSION REPLICAS AGE user-workload 0.24.0 2 100s", "oc -n <namespace> adm policy 
add-role-to-user alert-routing-edit <user> 1", "oc adm policy add-role-to-user <role> <user> -n <namespace> --role-namespace <namespace> 1", "oc adm policy add-cluster-role-to-user <cluster-role> <user> -n <namespace> 1", "oc label namespace my-project 'openshift.io/user-monitoring=false'", "oc label namespace my-project 'openshift.io/user-monitoring-'", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | enableUserWorkload: false", "oc -n openshift-user-workload-monitoring get pod", "No resources found in openshift-user-workload-monitoring project.", "oc label nodes <node_name> <node_label> 1", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | # <component>: 1 nodeSelector: <node_label_1> 2 <node_label_2> 3 #", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: tolerations: <toleration_specification>", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoSchedule\"", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | alertmanager: resources: limits: cpu: 500m memory: 1Gi requests: cpu: 200m memory: 500Mi prometheus: resources: limits: cpu: 500m memory: 3Gi requests: cpu: 200m memory: 500Mi thanosRuler: resources: limits: cpu: 500m memory: 1Gi requests: cpu: 200m memory: 500Mi", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: enforcedSampleLimit: 50000 1 enforcedLabelLimit: 500 2 enforcedLabelNameLengthLimit: 50 3 enforcedLabelValueLengthLimit: 600 4 scrapeInterval: 1m30s 5 evaluationInterval: 1m15s 6", "apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: labels: prometheus: k8s role: alert-rules name: monitoring-stack-alerts 1 namespace: ns1 2 spec: groups: - name: general.rules rules: - alert: TargetDown 3 annotations: message: '{{ printf \"%.4g\" USDvalue }}% of the {{ USDlabels.job }}/{{ USDlabels.service }} targets in {{ USDlabels.namespace }} namespace are down.' 4 expr: 100 * (count(up == 0) BY (job, namespace, service) / count(up) BY (job, namespace, service)) > 10 for: 10m 5 labels: severity: warning 6 - alert: ApproachingEnforcedSamplesLimit 7 annotations: message: '{{ USDlabels.container }} container of the {{ USDlabels.pod }} pod in the {{ USDlabels.namespace }} namespace consumes {{ USDvalue | humanizePercentage }} of the samples limit budget.' 
8 expr: (scrape_samples_post_metric_relabeling / (scrape_sample_limit > 0)) > 0.9 9 for: 10m 10 labels: severity: warning 11", "oc apply -f monitoring-stack-alerts.yaml", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 topologySpreadConstraints: - maxSkew: <n> 2 topologyKey: <key> 3 whenUnsatisfiable: <value> 4 labelSelector: 5 <match_option>", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: topologySpreadConstraints: - maxSkew: 1 topologyKey: monitoring whenUnsatisfiable: ScheduleAnyway labelSelector: matchLabels: app.kubernetes.io/name: thanos-ruler", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 volumeClaimTemplate: spec: storageClassName: <storage_class> 2 resources: requests: storage: <amount_of_storage> 3", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: volumeClaimTemplate: spec: storageClassName: my-storage-class resources: requests: storage: 10Gi", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 volumeClaimTemplate: spec: resources: requests: storage: <amount_of_storage> 2", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: volumeClaimTemplate: spec: resources: requests: storage: 20Gi", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: retention: <time_specification> 1 retentionSize: <size_specification> 2", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: retention: 24h retentionSize: 10GB", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: retention: <time_specification> 1", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: retention: 10d", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 logLevel: <log_level> 2", "oc -n openshift-user-workload-monitoring get deploy prometheus-operator -o yaml | grep \"log-level\"", "- --log-level=debug", "oc -n openshift-user-workload-monitoring get pods", "oc -n openshift-user-workload-monitoring edit 
configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: queryLogFile: <path> 1", "oc -n openshift-user-workload-monitoring get pods", "prometheus-operator-776fcbbd56-2nbfm 2/2 Running 0 132m prometheus-user-workload-0 5/5 Running 1 132m prometheus-user-workload-1 5/5 Running 1 132m thanos-ruler-user-workload-0 3/3 Running 0 132m thanos-ruler-user-workload-1 3/3 Running 0 132m", "oc -n openshift-user-workload-monitoring exec prometheus-user-workload-0 -- cat <path>", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" 1 <endpoint_authentication_credentials> 2", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" <endpoint_authentication_credentials> writeRelabelConfigs: - <your_write_relabel_configs> 1", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" writeRelabelConfigs: - sourceLabels: [__name__] regex: 'my_metric' action: keep", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" writeRelabelConfigs: - sourceLabels: [__name__,namespace] regex: '(my_metric_1|my_metric_2);my_namespace' action: keep", "apiVersion: v1 kind: Secret metadata: name: sigv4-credentials namespace: openshift-user-workload-monitoring stringData: accessKey: <AWS_access_key> 1 secretKey: <AWS_secret_key> 2 type: Opaque", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://authorization.example.com/api/write\" sigv4: region: <AWS_region> 1 accessKey: name: sigv4-credentials 2 key: accessKey 3 secretKey: name: sigv4-credentials 4 key: secretKey 5 profile: <AWS_profile_name> 6 roleArn: <AWS_role_arn> 7", "apiVersion: v1 kind: Secret metadata: name: rw-basic-auth namespace: openshift-user-workload-monitoring stringData: user: <basic_username> 1 password: <basic_password> 2 type: Opaque", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://basicauth.example.com/api/write\" basicAuth: username: name: rw-basic-auth 1 key: user 2 password: name: rw-basic-auth 3 key: password 4", "apiVersion: v1 kind: Secret metadata: name: rw-bearer-auth namespace: openshift-user-workload-monitoring stringData: token: <authentication_token> 1 type: Opaque", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | enableUserWorkload: true prometheus: remoteWrite: - url: \"https://authorization.example.com/api/write\" authorization: type: Bearer 1 credentials: name: 
rw-bearer-auth 2 key: token 3", "apiVersion: v1 kind: Secret metadata: name: oauth2-credentials namespace: openshift-user-workload-monitoring stringData: id: <oauth2_id> 1 secret: <oauth2_secret> 2 type: Opaque", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://test.example.com/api/write\" oauth2: clientId: secret: name: oauth2-credentials 1 key: id 2 clientSecret: name: oauth2-credentials 3 key: secret 4 tokenUrl: https://example.com/oauth2/token 5 scopes: 6 - <scope_1> - <scope_2> endpointParams: 7 param1: <parameter_1> param2: <parameter_2>", "apiVersion: v1 kind: Secret metadata: name: mtls-bundle namespace: openshift-user-workload-monitoring data: ca.crt: <ca_cert> 1 client.crt: <client_cert> 2 client.key: <client_key> 3 type: tls", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" tlsConfig: ca: secret: name: mtls-bundle 1 key: ca.crt 2 cert: secret: name: mtls-bundle 3 key: client.crt 4 keySecret: name: mtls-bundle 5 key: client.key 6", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" <endpoint_authentication_credentials> queueConfig: capacity: 10000 1 minShards: 1 2 maxShards: 50 3 maxSamplesPerSend: 2000 4 batchSendDeadline: 5s 5 minBackoff: 30ms 6 maxBackoff: 5s 7 retryOnRateLimit: false 8 sampleAgeLimit: 0s 9", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" <endpoint_authentication_credentials> writeRelabelConfigs: 1 - <relabel_config> 2", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" writeRelabelConfigs: - sourceLabels: - __tmp_openshift_cluster_id__ 1 targetLabel: cluster_id 2 action: replace 3", "apiVersion: v1 kind: Namespace metadata: name: ns1 --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: prometheus-example-app name: prometheus-example-app namespace: ns1 spec: replicas: 1 selector: matchLabels: app: prometheus-example-app template: metadata: labels: app: prometheus-example-app spec: containers: - image: ghcr.io/rhobs/prometheus-example-app:0.4.2 imagePullPolicy: IfNotPresent name: prometheus-example-app --- apiVersion: v1 kind: Service metadata: labels: app: prometheus-example-app name: prometheus-example-app namespace: ns1 spec: ports: - port: 8080 protocol: TCP targetPort: 8080 name: web selector: app: prometheus-example-app type: ClusterIP", "oc apply -f prometheus-example-app.yaml", "oc -n ns1 get pod", "NAME READY STATUS RESTARTS AGE prometheus-example-app-7857545cb7-sbgwq 1/1 Running 0 81m", "apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 1 spec: endpoints: - interval: 30s port: web 2 scheme: http selector: 3 matchLabels: app: prometheus-example-app", "oc apply 
-f example-app-service-monitor.yaml", "oc -n <namespace> get servicemonitor", "NAME AGE prometheus-example-monitor 81m", "apiVersion: v1 kind: Secret metadata: name: example-bearer-auth namespace: ns1 stringData: token: <authentication_token> 1", "apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - authorization: credentials: key: token 1 name: example-bearer-auth 2 port: web selector: matchLabels: app: prometheus-example-app", "apiVersion: v1 kind: Secret metadata: name: example-basic-auth namespace: ns1 stringData: user: <basic_username> 1 password: <basic_password> 2", "apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - basicAuth: username: key: user 1 name: example-basic-auth 2 password: key: password 3 name: example-basic-auth 4 port: web selector: matchLabels: app: prometheus-example-app", "apiVersion: v1 kind: Secret metadata: name: example-oauth2 namespace: ns1 stringData: id: <oauth2_id> 1 secret: <oauth2_secret> 2", "apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - oauth2: clientId: secret: key: id 1 name: example-oauth2 2 clientSecret: key: secret 3 name: example-oauth2 4 tokenUrl: https://example.com/oauth2/token 5 port: web selector: matchLabels: app: prometheus-example-app", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 additionalAlertmanagerConfigs: - <alertmanager_specification> 2", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: additionalAlertmanagerConfigs: - scheme: https pathPrefix: / timeout: \"30s\" apiVersion: v1 bearerToken: name: alertmanager-bearer-token key: token tlsConfig: key: name: alertmanager-tls key: tls.key cert: name: alertmanager-tls key: tls.crt ca: name: alertmanager-tls key: tls.ca staticConfigs: - external-alertmanager1-remote.com - external-alertmanager1-remote2.com", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | alertmanager: secrets: 1 - <secret_name_1> 2 - <secret_name_2>", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | alertmanager: secrets: - test-secret-basic-auth - test-secret-api-token", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: externalLabels: <key>: <value> 1", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: externalLabels: region: eu environment: prod", "apiVersion: monitoring.coreos.com/v1beta1 kind: AlertmanagerConfig metadata: name: example-routing namespace: ns1 spec: route: receiver: default groupBy: [job] receivers: - name: default webhookConfigs: - url: 
https://example.org/post", "oc apply -f example-app-alert-routing.yaml", "oc -n openshift-user-workload-monitoring get secret alertmanager-user-workload --template='{{ index .data \"alertmanager.yaml\" }}' | base64 --decode > alertmanager.yaml", "global: http_config: proxy_from_environment: true 1 route: receiver: Default group_by: - name: Default routes: - matchers: - \"service = prometheus-example-monitor\" 2 receiver: <receiver> 3 receivers: - name: Default - name: <receiver> <receiver_configuration> 4", "oc -n openshift-user-workload-monitoring create secret generic alertmanager-user-workload --from-file=alertmanager.yaml --dry-run=client -o=yaml | oc -n openshift-user-workload-monitoring replace secret --filename=-", "oc get routes -n openshift-monitoring thanos-querier -o jsonpath='{.status.ingress[0].host}'", "curl -k -H \"Authorization: Bearer USD(oc whoami -t)\" https://<thanos_querier_route>/api/v1/metadata 1", "oc get routes -n openshift-monitoring thanos-querier -o jsonpath='{.status.ingress[0].host}'", "curl -k -H \"Authorization: Bearer USD(oc whoami -t)\" https://<thanos_querier_route>/api/v1/metadata 1", "TOKEN=USD(oc whoami -t)", "HOST=USD(oc -n openshift-monitoring get route alertmanager-main -ojsonpath='{.status.ingress[].host}')", "curl -H \"Authorization: Bearer USDTOKEN\" -k \"https://USDHOST/api/v2/receivers\"", "TOKEN=USD(oc whoami -t)", "HOST=USD(oc -n openshift-monitoring get route prometheus-k8s-federate -ojsonpath='{.status.ingress[].host}')", "curl -G -k -H \"Authorization: Bearer USDTOKEN\" https://USDHOST/federate --data-urlencode 'match[]=up'", "TYPE up untyped up{apiserver=\"kube-apiserver\",endpoint=\"https\",instance=\"10.0.143.148:6443\",job=\"apiserver\",namespace=\"default\",service=\"kubernetes\",prometheus=\"openshift-monitoring/k8s\",prometheus_replica=\"prometheus-k8s-0\"} 1 1657035322214 up{apiserver=\"kube-apiserver\",endpoint=\"https\",instance=\"10.0.148.166:6443\",job=\"apiserver\",namespace=\"default\",service=\"kubernetes\",prometheus=\"openshift-monitoring/k8s\",prometheus_replica=\"prometheus-k8s-0\"} 1 1657035338597 up{apiserver=\"kube-apiserver\",endpoint=\"https\",instance=\"10.0.173.16:6443\",job=\"apiserver\",namespace=\"default\",service=\"kubernetes\",prometheus=\"openshift-monitoring/k8s\",prometheus_replica=\"prometheus-k8s-0\"} 1 1657035343834", "TOKEN=USD(oc whoami -t)", "HOST=USD(oc -n openshift-monitoring get route thanos-querier -ojsonpath='{.status.ingress[].host}')", "NAMESPACE=ns1", "curl -H \"Authorization: Bearer USDTOKEN\" -k \"https://USDHOST/api/v1/query?\" --data-urlencode \"query=up{namespace='USDNAMESPACE'}\"", "{ \"status\": \"success\", \"data\": { \"resultType\": \"vector\", \"result\": [ { \"metric\": { \"__name__\": \"up\", \"endpoint\": \"web\", \"instance\": \"10.129.0.46:8080\", \"job\": \"prometheus-example-app\", \"namespace\": \"ns1\", \"pod\": \"prometheus-example-app-68d47c4fb6-jztp2\", \"service\": \"prometheus-example-app\" }, \"value\": [ 1591881154.748, \"1\" ] } ], } }", "apiVersion: monitoring.openshift.io/v1 kind: AlertingRule metadata: name: example namespace: openshift-monitoring 1 spec: groups: - name: example-rules rules: - alert: ExampleAlert 2 for: 1m 3 expr: vector(1) 4 labels: severity: warning 5 annotations: message: This is an example alert. 
6", "oc apply -f example-alerting-rule.yaml", "apiVersion: monitoring.openshift.io/v1 kind: AlertRelabelConfig metadata: name: watchdog namespace: openshift-monitoring 1 spec: configs: - sourceLabels: [alertname,severity] 2 regex: \"Watchdog;none\" 3 targetLabel: severity 4 replacement: critical 5 action: Replace 6", "oc apply -f example-modified-alerting-rule.yaml", "apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: example-alert namespace: ns1 spec: groups: - name: example rules: - alert: VersionAlert 1 for: 1m 2 expr: version{job=\"prometheus-example-app\"} == 0 3 labels: severity: warning 4 annotations: message: This is an example alert. 5", "oc apply -f example-app-alerting-rule.yaml", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | namespacesWithoutLabelEnforcement: [ <namespace> ] 1 #", "apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: example-security namespace: ns1 1 spec: groups: - name: pod-security-policy rules: - alert: \"ProjectNotEnforcingRestrictedPolicy\" 2 for: 5m 3 expr: kube_namespace_labels{namespace!~\"(openshift|kube).*|default\",label_pod_security_kubernetes_io_enforce!=\"restricted\"} 4 annotations: message: \"Restricted policy not enforced. Project {{ USDlabels.namespace }} does not enforce the restricted pod security policy.\" 5 labels: severity: warning 6", "oc apply -f example-cross-project-alerting-rule.yaml", "oc -n <namespace> delete prometheusrule <foo>", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "kind: ConfigMap apiVersion: v1 metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | userWorkload: rulesWithoutLabelEnforcementAllowed: false #", "apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: example-alert namespace: ns1 spec: groups: - name: example rules: - alert: VersionAlert 1 for: 1m 2 expr: version{job=\"prometheus-example-app\"} == 0 3 labels: severity: warning 4 annotations: message: This is an example alert. 5", "oc apply -f example-app-alerting-rule.yaml", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | namespacesWithoutLabelEnforcement: [ <namespace> ] 1 #", "apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: example-security namespace: ns1 1 spec: groups: - name: pod-security-policy rules: - alert: \"ProjectNotEnforcingRestrictedPolicy\" 2 for: 5m 3 expr: kube_namespace_labels{namespace!~\"(openshift|kube).*|default\",label_pod_security_kubernetes_io_enforce!=\"restricted\"} 4 annotations: message: \"Restricted policy not enforced. 
Project {{ USDlabels.namespace }} does not enforce the restricted pod security policy.\" 5 labels: severity: warning 6", "oc apply -f example-cross-project-alerting-rule.yaml", "oc -n <project> get prometheusrule", "oc -n <project> get prometheusrule <rule> -o yaml", "oc -n <namespace> delete prometheusrule <foo>", "oc -n ns1 get service prometheus-example-app -o yaml", "labels: app: prometheus-example-app", "oc -n ns1 get servicemonitor prometheus-example-monitor -o yaml", "apiVersion: v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - interval: 30s port: web scheme: http selector: matchLabels: app: prometheus-example-app", "oc -n openshift-user-workload-monitoring get pods", "NAME READY STATUS RESTARTS AGE prometheus-operator-776fcbbd56-2nbfm 2/2 Running 0 132m prometheus-user-workload-0 5/5 Running 1 132m prometheus-user-workload-1 5/5 Running 1 132m thanos-ruler-user-workload-0 3/3 Running 0 132m thanos-ruler-user-workload-1 3/3 Running 0 132m", "oc -n openshift-user-workload-monitoring logs prometheus-operator-776fcbbd56-2nbfm -c prometheus-operator", "level=warn ts=2020-08-10T11:48:20.906739623Z caller=operator.go:1829 component=prometheusoperator msg=\"skipping servicemonitor\" error=\"it accesses file system via bearer token file which Prometheus specification prohibits\" servicemonitor=eagle/eagle namespace=openshift-user-workload-monitoring prometheus=user-workload", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheusOperator: logLevel: debug", "oc -n openshift-user-workload-monitoring get deploy prometheus-operator -o yaml | grep \"log-level\"", "- --log-level=debug", "oc -n openshift-user-workload-monitoring get pods", "topk(10, max by(namespace, job) (topk by(namespace, job) (1, scrape_samples_post_metric_relabeling)))", "topk(10, sum by(namespace, job) (sum_over_time(scrape_series_added[1h])))", "HOST=USD(oc -n openshift-monitoring get route prometheus-k8s -ojsonpath='{.status.ingress[].host}')", "TOKEN=USD(oc whoami -t)", "curl -H \"Authorization: Bearer USDTOKEN\" -k \"https://USDHOST/api/v1/status/tsdb\"", "\"status\": \"success\",\"data\":{\"headStats\":{\"numSeries\":507473, \"numLabelPairs\":19832,\"chunkCount\":946298,\"minTime\":1712253600010, \"maxTime\":1712257935346},\"seriesCountByMetricName\": [{\"name\":\"etcd_request_duration_seconds_bucket\",\"value\":51840}, {\"name\":\"apiserver_request_sli_duration_seconds_bucket\",\"value\":47718},", "oc debug <prometheus_k8s_pod_name> -n openshift-monitoring \\ 1 -c prometheus --image=USD(oc get po -n openshift-monitoring <prometheus_k8s_pod_name> \\ 2 -o jsonpath='{.spec.containers[?(@.name==\"prometheus\")].image}') -- sh -c 'cd /prometheus/;du -hs USD(ls -dtr */ | grep -Eo \"[0-9|A-Z]{26}\")'", "308M 01HVKMPKQWZYWS8WVDAYQHNMW6 52M 01HVK64DTDA81799TBR9QDECEZ 102M 01HVK64DS7TRZRWF2756KHST5X 140M 01HVJS59K11FBVAPVY57K88Z11 90M 01HVH2A5Z58SKT810EM6B9AT50 152M 01HV8ZDVQMX41MKCN84S32RRZ1 354M 01HV6Q2N26BK63G4RYTST71FBF 156M 01HV664H9J9Z1FTZD73RD1563E 216M 01HTHXB60A7F239HN7S2TENPNS 104M 01HTHMGRXGS0WXA3WATRXHR36B", "oc debug prometheus-k8s-0 -n openshift-monitoring -c prometheus --image=USD(oc get po -n openshift-monitoring prometheus-k8s-0 -o jsonpath='{.spec.containers[?(@.name==\"prometheus\")].image}') -- sh -c 'ls -latr /prometheus/ | egrep -o \"[0-9|A-Z]{26}\" | head -3 | 
while read BLOCK; do rm -r /prometheus/USDBLOCK; done'", "oc debug <prometheus_k8s_pod_name> -n openshift-monitoring \\ 1 --image=USD(oc get po -n openshift-monitoring <prometheus_k8s_pod_name> \\ 2 -o jsonpath='{.spec.containers[?(@.name==\"prometheus\")].image}') -- df -h /prometheus/", "Starting pod/prometheus-k8s-0-debug-j82w4 Filesystem Size Used Avail Use% Mounted on /dev/nvme0n1p4 40G 15G 40G 37% /prometheus Removing debug pod", "oc -n openstack get monitoringstacks metric-storage -o yaml", "oc --namespace openstack create secret generic mtls-bundle --from-file=./ca.crt --from-file=osp-client.crt --from-file=osp-client.key", "oc -n openstack edit openstackcontrolplane/controlplane", "metricStorage: customMonitoringStack: alertmanagerConfig: disabled: false logLevel: info prometheusConfig: scrapeInterval: 30s remoteWrite: - url: https://external-prometheus.example.com/api/v1/write 1 tlsConfig: ca: secret: name: mtls-bundle key: ca.crt cert: secret: name: mtls-bundle key: ocp-client.crt keySecret: name: mtls-bundle key: ocp-client.key replicas: 2 resourceSelector: matchLabels: service: metricStorage resources: limits: cpu: 500m memory: 512Mi requests: cpu: 100m memory: 256Mi retention: 1d 2 dashboardsEnabled: false dataplaneNetwork: ctlplane enabled: true prometheusTls: {}", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: retention: 1d 1 remoteWrite: - url: \"https://external-prometheus.example.com/api/v1/write\" writeRelabelConfigs: - sourceLabels: - __tmp_openshift_cluster_id__ targetLabel: cluster_id action: replace tlsConfig: ca: secret: name: mtls-bundle key: ca.crt cert: secret: name: mtls-bundle key: ocp-client.crt keySecret: name: mtls-bundle key: ocp-client.key", "oc --namespace openshift-monitoring create secret generic mtls-bundle --from-file=./ca.crt --from-file=ocp-client.crt --from-file=ocp-client.key", "oc apply -f cluster-monitoring-config.yaml", "oc whoami -t", "oc -n openstack create secret generic ocp-federated --from-literal=token=<the_token_fetched_previously>", "oc -n openshift-monitoring get route prometheus-k8s-federate -ojsonpath={'.status.ingress[].host'}", "apiVersion: monitoring.rhobs/v1alpha1 kind: ScrapeConfig metadata: labels: service: metricStorage name: sos1-federated namespace: openstack spec: params: 'match[]': - '{__name__=~\"kube_node_info|kube_persistentvolume_info|cluster:master_nodes\"}' 1 metricsPath: '/federate' authorization: type: Bearer credentials: name: ocp-federated 2 key: token scheme: HTTPS # or HTTP scrapeInterval: 30s 3 staticConfigs: - targets: - prometheus-k8s-federate-openshift-monitoring.apps.openshift.example 4", "oc apply -f cluster-scrape-config.yaml", "sum by (vm_instance) ( group by (vm_instance, resource) (ceilometer_cpu) / on (resource) group_right(vm_instance) ( group by (node, resource) ( label_replace(kube_node_info, \"resource\", \"USD1\", \"system_uuid\", \"(.+)\") ) / on (node) group_left group by (node) ( cluster:master_nodes ) ) )" ]
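Note The following check is not part of the procedures above; it is a minimal sketch for confirming that remote-written samples are arriving at the external Prometheus endpoint used in the examples. It assumes the external-prometheus.example.com URL from the remote write configuration and reuses the ca.crt, ocp-client.crt, and ocp-client.key files from the mtls-bundle secret; adjust these names to match your environment.

curl --cacert ca.crt --cert ocp-client.crt --key ocp-client.key "https://external-prometheus.example.com/api/v1/query" --data-urlencode 'query=count by (cluster_id) (up{cluster_id!=""})'

A non-empty result grouped by cluster_id indicates that the writeRelabelConfigs shown in the cluster-monitoring-config example are being applied to the forwarded samples.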
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html-single/monitoring/modifying-retention-time-for-prometheus-metrics-data_configuring-the-monitoring-stack
probe::nfs.proc.read_done
probe::nfs.proc.read_done
Name
probe::nfs.proc.read_done - NFS client response to a read RPC task
Synopsis
nfs.proc.read_done
Values
timestamp - V4 timestamp, which is used for lease renewal
prot - transfer protocol
count - number of bytes read
version - NFS version
status - result of last operation
server_ip - IP address of server
Description
Fires when a reply to a read RPC task is received or some read error occurs (timeout or socket shutdown).
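For illustration only (this example is not part of the reference entry), the probe can be exercised from the command line with a one-liner such as the following; it assumes that SystemTap and the matching kernel debuginfo packages are installed, and the printf format is an assumption based on the values listed above:

stap -e 'probe nfs.proc.read_done { printf("nfs read done: %d bytes, NFS v%d, status %d\n", count, version, status) }'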
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-nfs-proc-read-done
Chapter 7. Known issues in Red Hat Process Automation Manager 7.13
Chapter 7. Known issues in Red Hat Process Automation Manager 7.13 This section lists known issues with Red Hat Process Automation Manager 7.13. 7.1. Process Designer The JavaScript language in an On Entry Action property produces an error after changing a node to multiple instances [ RHPAM-3409 ] Issue: When the language of the On Entry Action property is set to JavaScript and you change the node to Multiple Instance , you receive a system error. Steps to reproduce: Create a new business process. Create a user task and set it to the Multiple Instance property. Enter any string to On Entry Action or On Exit Action . Select the JavaScript language. Select the Multiple Instance check box. Actual result: You receive a system error. Expected result: You do not receive an error either in the UI or in the server log file. Workaround: None. customCaseRoles metadata attribute is not added [ RHPAM-4410 ] Issue: It is not possible to add a new customCaseRoles metadata attribute in a case process definition. Steps to reproduce: Create a case project. Create a case definition. Open Case Management in the Properties panel and add a new case role as owner:1 . Save, close, and reopen the case. In the Properties panel, check the Metadata Attributes under the Advanced section. Actual result: The Metadata Attributes section is empty. Expected result: The Metadata Attributes section contains the customCaseRoles:owner:1 . Workaround: None. DataObject from canvas is missing in assignments when the case file variable is present [ RHPAM-4420 ] Issue: The DataObject from the canvas is missing in assignments when the case file variable is present. This applies to both top-level nodes as well as nodes placed in sub-processes. Steps to reproduce: Create a case project. Create a case definition. Add a case file variable to the process. Create a DataObject on the canvas. Create a node with assignments on the canvas or in the sub-process. Activate the node that has the assignments, open the assignments, and click Source/Target . Actual result: The DataObject is missing from the listed items. Expected result: Both the case file variable as well as the DataObject from the canvas is present in the listed items. Workaround: None. Custom data object in multiple variables causes error in case project [ RHPAM-4422 ] Issue: When you create a custom data object in multiple variables, you receive an error in a case project. Steps to reproduce: Create a case project. Create a case definition. Create a custom data object in the same project. Add a process variable and a case file variable with the same CustomDataObject type. Create a multiple instance node or data object on the canvas. If you set a multiple instance node, set the MI Collection input/output and try to change Data Input/Output type. If you set a data object, try to change the data type. Actual result: You receive an error. Expected result: No errors occur. Workaround: None. 7.2. Process engine When you abort a process instance, the timer is not deleted [ RHPAM-4380 ] Issue: Aborting a process instance with an active timer does not delete the timer. The timer then fires at the defined trigger date, which is silently dismissed by the system, so this is not a functional problem. However, it populates the EJB timer subsystem with orphaned timers, especially if the timers are long-running and the number of aborted process instances is high. Workaround: None. 
When you use Spring Boot, the UserGroupCallback implementation is not injected into KIE Server [ RHPAM-4281 ] Issue: When you are using an engine embedded in a KIE Server packaged as a Spring Boot application, the bean defined as userGroupCallback is not injected into the engine. Then, when you try to call some of the rest endpoints fetching some tasks based on the user or groups assigned to them (such as potOwner , stakeHolders , businessAdmin , and so on) they do not work as expected because the UserGroupCallback implementation used in the engine is different from the one defined at the Spring boot application level. Note that this only applies to cases and not to processes. Steps to reproduce: Start KIE Server as a Spring Boot app with a default identity provider and a UserGroupCallback implementation. Try to fetch some tasks assigned to a group by using some rest endpoints such as potOwner , stakeHolders , or businessAdmins . Workaround: None. Kafka-clients contain misalignment with any supported AMQ Streams version [ RHPAM-4417 ] Issue: Kafka dependencies for the community are not aligned with Red Hat Process Automation Manager 7.13. The current Kafka community version is 2.8.0 and it must be aligned with the version used by AMQ Streams 2.1.0 which is 3.1.0 for the community. Workaround: None. 7.3. Spring Boot Wrong managed version of Spring Boot dependencies [ RHPAM-4413 ] Issue: The Spring Boot version (2.6.6) in the Maven repository is not certified by Red Hat yet. Therefore, you will receive a mismatch for the Narayana starter in productized binaries. Workaround: In your pom.xml file, define the following properties to override the current versions: <version.org.springframework.boot>2.5.12</version.org.springframework.boot> <version.me.snowdrop.narayana>2.6.3.redhat-00001</version.me.snowdrop.narayana> 7.4. Red Hat build of Kogito Red Hat build of Kogito is aligned with a non-supported Spring Boot version [ RHPAM-4419 ] Issue: Red Hat build of Kogito Spring Boot versions are managed in the kogito-spring-boot-bom file, which imports dependency management from the org.springframework.boot:spring-boot-dependencies BOM. The currently aligned version is 2.6.6, which does not map to any Red Hat supported versions. The latest supported version is 2.5.12. You must override dependency management with a BOM aligning to the Red Hat supported version which is 2.5.12. 
Workaround: To maintain the order of the imported BOM files, first include the Spring Boot BOM and then include the Red Hat build of Kogito specific BOM file: <dependencyManagement> <dependencies> <dependency> <groupId>dev.snowdrop</groupId> <artifactId>snowdrop-dependencies</artifactId> <version>2.5.12.Final-redhat-00001</version> <type>pom</type> <scope>import</scope> </dependency> <dependency> <groupId>org.kie.kogito</groupId> <artifactId>kogito-spring-boot-bom</artifactId> <version>1.13.2.redhat-00002</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> Align the version of spring-boot-maven-plugin to the same version in your project build configuration file: <plugins> <plugin> <groupId>org.kie.kogito</groupId> <artifactId>kogito-maven-plugin</artifactId> <version>1.13.2.redhat-00002</version> <extensions>true</extensions> </plugin> <plugin> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-maven-plugin</artifactId> <version>2.5.12</version> <executions> <execution> <goals> <goal>repackage</goal> </goals> </execution> </executions> </plugin> </plugins> Red Hat build of Kogito on Spring Boot leads to misalignment of Kafka-clients version [ RHPAM-4418 ] Issue: The Kafka-clients dependency version for Red Hat build of Kogito Spring Boot is by default managed by the org.springframework.boot:spring-boot-dependencies BOM. Depending on which Spring Boot version is used, users might end up with an unsupported or vulnerable version of Kafka-clients. You must override the default dependency in your kogito-spring-boot-bom to make sure you have the expected Kafka-clients version. Workaround: In your projects, define dependencyManagement explicitly for org.apache.kafka:kafka-clients dependency to use the version released by AMQ Streams.
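When applying the Kafka-clients workarounds above, it can help to confirm which kafka-clients version Maven actually resolves for your project. The following is a quick, hedged check run from the project directory; the include filter is standard maven-dependency-plugin syntax:

mvn dependency:tree -Dincludes=org.apache.kafka:kafka-clients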
[ "<version.org.springframework.boot>2.5.12</version.org.springframework.boot> <version.me.snowdrop.narayana>2.6.3.redhat-00001</version.me.snowdrop.narayana>", "<dependencyManagement> <dependencies> <dependency> <groupId>dev.snowdrop</groupId> <artifactId>snowdrop-dependencies</artifactId> <version>2.5.12.Final-redhat-00001</version> <type>pom</type> <scope>import</scope> </dependency> <dependency> <groupId>org.kie.kogito</groupId> <artifactId>kogito-spring-boot-bom</artifactId> <version>1.13.2.redhat-00002</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement>", "<plugins> <plugin> <groupId>org.kie.kogito</groupId> <artifactId>kogito-maven-plugin</artifactId> <version>1.13.2.redhat-00002</version> <extensions>true</extensions> </plugin> <plugin> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-maven-plugin</artifactId> <version>2.5.12</version> <executions> <execution> <goals> <goal>repackage</goal> </goals> </execution> </executions> </plugin> </plugins>" ]
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/release_notes_for_red_hat_process_automation_manager_7.13/rn-7.13-known-issues-ref
Chapter 65. Creating and managing certificate profiles in Identity Management
Chapter 65. Creating and managing certificate profiles in Identity Management Certificate profiles are used by the Certificate Authority (CA) when signing certificates to determine if a certificate signing request (CSR) is acceptable, and if so what features and extensions are present on the certificate. A certificate profile is associated with issuing a particular type of certificate. By combining certificate profiles and CA access control lists (ACLs), you can define and control access to custom certificate profiles. In describing how to create certificate profiles, the procedures use S/MIME certificates as an example. Some email programs support digitally signed and encrypted email using the Secure Multipurpose Internet Mail Extension (S/MIME) protocol. Using S/MIME to sign or encrypt email messages requires the sender of the message to have an S/MIME certificate. What is a certificate profile Creating a certificate profile What is a CA access control list Defining a CA ACL to control access to certificate profiles Using certificate profiles and CA ACLs to issue certificates Modifying a certificate profile Certificate profile configuration parameters 65.1. What is a certificate profile? You can use certificate profiles to determine the content of certificates, as well as constraints for issuing the certificates, such as the following: The signing algorithm to use to encipher the certificate signing request. The default validity of the certificate. The revocation reasons that can be used to revoke a certificate. If the common name of the principal is copied to the subject alternative name field. The features and extensions that should be present on the certificate. A single certificate profile is associated with issuing a particular type of certificate. You can define different certificate profiles for users, services, and hosts in IdM. IdM includes the following certificate profiles by default: caIPAserviceCert IECUserRoles KDCs_PKINIT_Certs (used internally) In addition, you can create and import custom profiles, which allow you to issue certificates for specific purposes. For example, you can restrict the use of a particular profile to only one user or one group, preventing other users and groups from using that profile to issue a certificate for authentication. To create custom certificate profiles, use the ipa certprofile command. Additional resources See the ipa help certprofile command. 65.2. Creating a certificate profile Follow this procedure to create a certificate profile through the command line by creating a profile configuration file for requesting S/MIME certificates. Procedure Create a custom profile by copying an existing default profile: Open the newly created profile configuration file in a text editor. Change the Profile ID to a name that reflects the usage of the profile, for example smime . Note When you are importing a newly created profile, the profileId field, if present, must match the ID specified on the command line. Update the Extended Key Usage configuration. The default Extended Key Usage extension configuration is for TLS server and client authentication. For example for S/MIME, the Extended Key Usage must be configured for email protection: Import the new profile: Verification Verify the new certificate profile has been imported: Additional resources See ipa help certprofile . See RFC 5280, section 4.2.1.12 . 65.3. What is a CA access control list? 
Certificate Authority access control list (CA ACL) rules define which profiles can be used to issue certificates to which principals. You can use CA ACLs to do this, for example: Determine which user, host, or service can be issued a certificate with a particular profile Determine which IdM certificate authority or sub-CA is permitted to issue the certificate For example, using CA ACLs, you can restrict use of a profile intended for employees working from an office located in London only to users that are members of the London office-related IdM user group. The ipa caacl utility for management of CA ACL rules allows privileged users to add, display, modify, or delete a specified CA ACL. Additional resources See ipa help caacl . 65.4. Defining a CA ACL to control access to certificate profiles Follow this procedure to use the caacl utility to define a CA Access Control List (ACL) rule to allow users in a group access to a custom certificate profile. In this case, the procedure describes how to create an S/MIME user's group and a CA ACL to allow users in that group access to the smime certificate profile. Prerequisites Make sure that you have obtained IdM administrator's credentials. Procedure Create a new group for the users of the certificate profile: Create a new user to add to the smime_user_group group: Add the smime_user to the smime_users_group group: Create the CA ACL to allow users in the group to access the certificate profile: Add the user group to the CA ACL: Add the certificate profile to the CA ACL: Verification View the details of the CA ACL you created: Additional resources See ipa man page on your system. See ipa help caacl . 65.5. Using certificate profiles and CA ACLs to issue certificates You can request certificates using a certificate profile when permitted by the Certificate Authority access control lists (CA ACLs). Follow this procedure to request an S/MIME certificate for a user using a custom certificate profile which has been granted access through a CA ACL. Prerequisites Your certificate profile has been created. An CA ACL has been created which permits the user to use the required certificate profile to request a certificate. Note You can bypass the CA ACL check if the user performing the cert-request command: Is the admin user. Has the Request Certificate ignoring CA ACLs permission. Procedure Generate a certificate request for the user. For example, using OpenSSL: Request a new certificate for the user from the IdM CA: Optionally pass the --ca sub-CA_name option to the command to request the certificate from a sub-CA instead of the root CA. Verification Verify the newly-issued certificate is assigned to the user: Additional resources ipa(a) and openssl(lssl) man pages on your system ipa help user-show command ipa help cert-request command 65.6. Modifying a certificate profile Follow this procedure to modify certificate profiles directly through the command line using the ipa certprofile-mod command. Procedure Determine the certificate profile ID for the certificate profile you are modifying. To display all certificate profiles currently stored in IdM: Modify the certificate profile description. 
For example, if you created a custom certificate profile for S/MIME certificates using an existing profile, change the description in line with the new usage: Open your customer certificate profile file in a text editor and modify to suit your requirements: For details on the options which can be configured in the certificate profile configuration file, see Certificate profile configuration parameters . Update the existing certificate profile configuration file: Verification Verify the certificate profile has been updated: Additional resources See ipa(a) man page on your system. See ipa help certprofile-mod . 65.7. Certificate profile configuration parameters Certificate profile configuration parameters are stored in a profile_name .cfg file in the CA profile directory, /var/lib/pki/pki-tomcat/ca/profiles/ca . All of the parameters for a profile - defaults, inputs, outputs, and constraints - are configured within a single policy set. A policy set for a certificate profile has the name policyset. policyName.policyNumber . For example, for policy set serverCertSet : Each policy set contains a list of policies configured for the certificate profile by policy ID number in the order in which they should be evaluated. The server evaluates each policy set for each request it receives. When a single certificate request is received, one set is evaluated, and any other sets in the profile are ignored. When dual key pairs are issued, the first policy set is evaluated for the first certificate request, and the second set is evaluated for the second certificate request. You do not need more than one policy set when issuing single certificates or more than two sets when issuing dual key pairs. Table 65.1. Certificate profile configuration file parameters Parameter Description desc A free text description of the certificate profile, which is shown on the end-entities page. For example, desc=This certificate profile is for enrolling server certificates with agent authentication . enable Enables the profile so it is accessible through the end-entities page. For example, enable=true . auth.instance_id Sets the authentication manager plug-in to use to authenticate the certificate request. For automatic enrollment, the CA issues a certificate immediately if the authentication is successful. If authentication fails or there is no authentication plug-in specified, the request is queued to be manually approved by an agent. For example, auth.instance_id=AgentCertAuth . authz.acl Specifies the authorization constraint. This is predominantly used to set the group evaluation Access Control List (ACL). For example, the caCMCUserCert parameter requires that the signer of the CMC request belongs to the Certificate Manager Agents group: authz.acl=group="Certificate Manager Agents In directory-based user certificate renewal, this option is used to ensure that the original requester and the currently-authenticated user are the same. An entity must authenticate (bind or, essentially, log into the system) before authorization can be evaluated. name The name of the certificate profile. For example, name=Agent-Authenticated Server Certificate Enrollment . This name is displayed on the end users enrollment or renewal page. input.list Lists the allowed inputs for the certificate profile by name. For example, input.list=i1,i2 . input.input_id.class_id Indicates the java class name for the input by input ID (the name of the input listed in input.list). For example, input.i1.class_id=certReqInputImpl . 
output.list Lists the possible output formats for the certificate profile by name. For example, output.list=o1 . output.output_id.class_id Specifies the java class name for the output format named in output.list. For example, output.o1.class_id=certOutputImpl . policyset.list Lists the configured certificate profile rules. For dual certificates, one set of rules applies to the signing key and the other to the encryption key. Single certificates use only one set of certificate profile rules. For example, policyset.list=serverCertSet . policyset.policyset_id.list Lists the policies within the policy set configured for the certificate profile by policy ID number in the order in which they should be evaluated. For example, policyset.serverCertSet.list=1,2,3,4,5,6,7,8 . policyset.policyset_id.policy_number.constraint.class_id Indicates the java class name of the constraint plug-in set for the default configured in the profile rule. For example, policyset.serverCertSet.1.constraint.class_id=subjectNameConstraintImpl. policyset.policyset_id.policy_number.constraint.name Gives the user-defined name of the constraint. For example, policyset.serverCertSet.1.constraint.name=Subject Name Constraint. policyset.policyset_id.policy_number.constraint.params.attribute Specifies a value for an allowed attribute for the constraint. The possible attributes vary depending on the type of constraint. For example, policyset.serverCertSet.1.constraint.params.pattern=CN=.*. policyset.policyset_id.policy_number.default.class_id Gives the java class name for the default set in the profile rule. For example, policyset.serverCertSet.1.default.class_id=userSubjectNameDefaultImpl policyset.policyset_id.policy_number.default.name Gives the user-defined name of the default. For example, policyset.serverCertSet.1.default.name=Subject Name Default policyset.policyset_id.policy_number.default.params.attribute Specifies a value for an allowed attribute for the default. The possible attributes vary depending on the type of default. For example, policyset.serverCertSet.1.default.params.name=CN=(Name)USDrequest.requestor_nameUSD.
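To see how these parameters appear in an actual profile, you can export a profile configuration and inspect its policy set entries. The following sketch reuses the ipa certprofile-show --out option shown in this chapter; the output file name is arbitrary, and the grep assumes the profile uses the standard policy set layout described above:

ipa certprofile-show caIPAserviceCert --out /tmp/caIPAserviceCert.cfg
grep '^policyset\.' /tmp/caIPAserviceCert.cfg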
[ "ipa certprofile-show --out smime.cfg caIPAserviceCert ------------------------------------------------ Profile configuration stored in file 'smime.cfg' ------------------------------------------------ Profile ID: caIPAserviceCert Profile description: Standard profile for network services Store issued certificates: TRUE", "vi smime.cfg", "policyset.serverCertSet.7.default.params.exKeyUsageOIDs=1.3.6.1.5.5.7.3.4", "ipa certprofile-import smime --file smime.cfg --desc \"S/MIME certificates\" --store TRUE ------------------------ Imported profile \"smime\" ------------------------ Profile ID: smime Profile description: S/MIME certificates Store issued certificates: TRUE", "ipa certprofile-find ------------------ 4 profiles matched ------------------ Profile ID: caIPAserviceCert Profile description: Standard profile for network services Store issued certificates: TRUE Profile ID: IECUserRoles Profile description: User profile that includes IECUserRoles extension from request Store issued certificates: TRUE Profile ID: KDCs_PKINIT_Certs Profile description: Profile for PKINIT support by KDCs Store issued certificates: TRUE Profile ID: smime Profile description: S/MIME certificates Store issued certificates: TRUE ---------------------------- Number of entries returned 4 ----------------------------", "ipa group-add smime_users_group --------------------------------- Added group \"smime users group\" --------------------------------- Group name: smime_users_group GID: 75400001", "ipa user-add smime_user First name: smime Last name: user ---------------------- Added user \"smime_user\" ---------------------- User login: smime_user First name: smime Last name: user Full name: smime user Display name: smime user Initials: TU Home directory: /home/smime_user GECOS: smime user Login shell: /bin/sh Principal name: [email protected] Principal alias: [email protected] Email address: [email protected] UID: 1505000004 GID: 1505000004 Password: False Member of groups: ipausers Kerberos keys available: False", "ipa group-add-member smime_users_group --users=smime_user Group name: smime_users_group GID: 1505000003 Member users: smime_user ------------------------- Number of members added 1 -------------------------", "ipa caacl-add smime_acl ------------------------ Added CA ACL \"smime_acl\" ------------------------ ACL name: smime_acl Enabled: TRUE", "ipa caacl-add-user smime_acl --group smime_users_group ACL name: smime_acl Enabled: TRUE User Groups: smime_users_group ------------------------- Number of members added 1 -------------------------", "ipa caacl-add-profile smime_acl --certprofile smime ACL name: smime_acl Enabled: TRUE Profiles: smime User Groups: smime_users_group ------------------------- Number of members added 1 -------------------------", "ipa caacl-show smime_acl ACL name: smime_acl Enabled: TRUE Profiles: smime User Groups: smime_users_group", "openssl req -new -newkey rsa:2048 -days 365 -nodes -keyout private.key -out cert.csr -subj '/CN= smime_user '", "ipa cert-request cert.csr --principal= smime_user --profile-id= smime", "ipa user-show user User login: user Certificate: MIICfzCCAWcCAQA", "ipa certprofile-find ------------------ 4 profiles matched ------------------ Profile ID: caIPAserviceCert Profile description: Standard profile for network services Store issued certificates: TRUE Profile ID: IECUserRoles Profile ID: smime Profile description: S/MIME certificates Store issued certificates: TRUE -------------------------- Number of entries returned --------------------------", 
"ipa certprofile-mod smime --desc \"New certificate profile description\" ------------------------------------ Modified Certificate Profile \"smime\" ------------------------------------ Profile ID: smime Profile description: New certificate profile description Store issued certificates: TRUE", "vi smime.cfg", "ipa certprofile-mod _profile_ID_ --file=smime.cfg", "ipa certprofile-show smime Profile ID: smime Profile description: New certificate profile description Store issued certificates: TRUE", "policyset.list=serverCertSet policyset.serverCertSet.list=1,2,3,4,5,6,7,8 policyset.serverCertSet.1.constraint.class_id=subjectNameConstraintImpl policyset.serverCertSet.1.constraint.name=Subject Name Constraint policyset.serverCertSet.1.constraint.params.pattern=CN=[^,]+,.+ policyset.serverCertSet.1.constraint.params.accept=true policyset.serverCertSet.1.default.class_id=subjectNameDefaultImpl policyset.serverCertSet.1.default.name=Subject Name Default policyset.serverCertSet.1.default.params.name=CN=USDrequest.req_subject_name.cnUSD, OU=pki-ipa, O=IPA policyset.serverCertSet.2.constraint.class_id=validityConstraintImpl policyset.serverCertSet.2.constraint.name=Validity Constraint policyset.serverCertSet.2.constraint.params.range=740 policyset.serverCertSet.2.constraint.params.notBeforeCheck=false policyset.serverCertSet.2.constraint.params.notAfterCheck=false policyset.serverCertSet.2.default.class_id=validityDefaultImpl policyset.serverCertSet.2.default.name=Validity Default policyset.serverCertSet.2.default.params.range=731 policyset.serverCertSet.2.default.params.startTime=0" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_identity_management/creating-and-managing-certificate-profiles-in-identity-management_configuring-and-managing-idm
Autoscale APIs
Autoscale APIs OpenShift Container Platform 4.18 Reference guide for autoscale APIs Red Hat OpenShift Documentation Team
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/autoscale_apis/index
3.5. Network Interface Controller
3.5. Network Interface Controller The network interface controller (NIC) is a network adapter or LAN adapter that connects a computer to a computer network. The NIC operates on both the physical and data link layers of the machine and enables network connectivity. All virtualization hosts in a Red Hat Virtualization environment have at least one NIC, though it is more common for a host to have two or more NICs. One physical NIC can have multiple virtual NICs (vNICs) logically connected to it. A virtual NIC acts as a network interface for a virtual machine. To distinguish between a vNIC and the NIC that supports it, the Red Hat Virtualization Manager assigns each vNIC a unique MAC address.
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/technical_reference/network_interface_controller_nic
Chapter 9. Troubleshooting Dev Spaces
Chapter 9. Troubleshooting Dev Spaces This section provides troubleshooting procedures for the most frequent issues a user can come in conflict with. Additional resources Section 9.1, "Viewing Dev Spaces workspaces logs" Section 9.2, "Troubleshooting slow workspaces" Section 9.3, "Troubleshooting network problems" Section 9.4, "Troubleshooting webview loading error" 9.1. Viewing Dev Spaces workspaces logs You can view OpenShift Dev Spaces logs to better understand and debug background processes should a problem occur. An IDE extension misbehaves or needs debugging The logs list the plugins that have been loaded by the editor. The container runs out of memory The logs contain an OOMKilled error message. Processes running in the container attempted to request more memory than is configured to be available to the container. A process runs out of memory The logs contain an error message such as OutOfMemoryException . A process inside the container ran out of memory without the container noticing. Additional resources Section 9.1.1, "Workspace logs in CLI" Section 9.1.2, "Workspace logs in OpenShift console" Section 9.1.3, "Language servers and debug adapters logs in the editor" 9.1.1. Workspace logs in CLI You can use the OpenShift CLI to observe the OpenShift Dev Spaces workspace logs. Prerequisites The OpenShift Dev Spaces workspace <workspace_name> is running. Your OpenShift CLI session has access to the OpenShift project <namespace_name> containing this workspace. Procedure Get the logs from the pod running the <workspace_name> workspace in the <namespace_name> project: 9.1.2. Workspace logs in OpenShift console You can use the OpenShift console to observe the OpenShift Dev Spaces workspace logs. Procedure In the OpenShift Dev Spaces dashboard, go to Workspaces . Click on a workspace name to display the workspace overview page. This page displays the OpenShift project name <project_name> . Click on the upper right Applications menu, and click the OpenShift console link. Run the steps in the OpenShift console, in the Administrator perspective. Click Workloads > Pods to see a list of all the active workspaces. In the Project drop-down menu, select the <project_name> project to narrow the search. Click on the name of the running pod that runs the workspace. The Details tab contains the list of all containers with additional information. Go to the Logs tab. 9.1.3. Language servers and debug adapters logs in the editor In the Microsoft Visual Studio Code - Open Source editor running in your workspace, you can configure the installed language server and debug adapter extensions to view their logs. Procedure Configure the extension: click File > Preferences > Settings , expand the Extensions section, search for your extension, and set the trace.server or similar configuration to verbose , if such configuration exists. Refer to the extension documentation for further configuration. View your language server logs by clicking View Output , and selecting your language server in the drop-down list for the Output view. Additional resources Open VSX registry 9.2. Troubleshooting slow workspaces Sometimes, workspaces can take a long time to start. Tuning can reduce this start time. Depending on the options, administrators or users can do the tuning. This section includes several tuning options for starting workspaces faster or improving workspace runtime performance. 9.2.1. 
Improving workspace start time Caching images with Image Puller Role: Administrator When starting a workspace, OpenShift pulls the images from the registry. A workspace can include many containers meaning that OpenShift pulls Pod's images (one per container). Depending on the size of the image and the bandwidth, it can take a long time. Image Puller is a tool that can cache images on each of OpenShift nodes. As such, pre-pulling images can improve start times. See https://access.redhat.com/documentation/en-us/red_hat_openshift_dev_spaces/3.19/html-single/administration_guide/index#administration-guide:caching-images-for-faster-workspace-start . Choosing better storage type Role: Administrator and user Every workspace has a shared volume attached. This volume stores the project files, so that when restarting a workspace, changes are still available. Depending on the storage, attach time can take up to a few minutes, and I/O can be slow. Installing offline Role: Administrator Components of OpenShift Dev Spaces are OCI images. Set up Red Hat OpenShift Dev Spaces in offline mode to reduce any extra download at runtime because everything needs to be available from the beginning. See https://access.redhat.com/documentation/en-us/red_hat_openshift_dev_spaces/3.19/html-single/administration_guide/index#administration-guide:installing-che-in-a-restricted-environment . Reducing the number of public endpoints Role: Administrator For each endpoint, OpenShift is creating OpenShift Route objects. Depending on the underlying configuration, this creation can be slow. To avoid this problem, reduce the exposure. For example, to automatically detect a new port listening inside containers and redirect traffic for the processes using a local IP address ( 127.0.0.1 ), Microsoft Visual Code - Open Source has three optional routes. By reducing the number of endpoints and checking endpoints of all plugins, workspace start can be faster. 9.2.2. Improving workspace runtime performance Providing enough CPU resources Plugins consume CPU resources. For example, when a plugin provides IntelliSense features, adding more CPU resources can improve performance. Ensure the CPU settings in the devfile definition, devfile.yaml , are correct: components: - name: tools container: image: quay.io/devfile/universal-developer-image:ubi8-latest cpuLimit: 4000m 1 cpuRequest: 1000m 2 1 Specifies the CPU limit 2 Specifies the CPU request Providing enough memory Plug-ins consume CPU and memory resources. For example, when a plugin provides IntelliSense features, collecting data can consume all the memory allocated to the container. Providing more memory to the container can increase performance. Ensure that memory settings in the devfile definition devfile.yaml file are correct. components: - name: tools container: image: quay.io/devfile/universal-developer-image:ubi8-latest memoryLimit: 6G 1 memoryRequest: 512Mi 2 1 Specifies the memory limit 2 Specifies the memory request 9.3. Troubleshooting network problems This section describes how to prevent or resolve issues related to network policies. OpenShift Dev Spaces requires the availability of the WebSocket Secure (WSS) connections. Secure WebSocket connections improve confidentiality and also reliability because they reduce the risk of interference by bad proxies. Prerequisites The WebSocket Secure (WSS) connections on port 443 must be available on the network. Firewall and proxy may need additional configuration. Procedure Verify the browser supports the WebSocket protocol. 
See: Searching a websocket test . Verify firewalls settings: WebSocket Secure (WSS) connections on port 443 must be available. Verify proxy servers settings: The proxy transmits and intercepts WebSocket Secure (WSS) connections on port 443. 9.4. Troubleshooting webview loading error If you use Microsoft Visual Studio Code - Open Source in a private browsing window, you might encounter the following error message: Error loading webview: Error: Could not register service workers . This is a known issue affecting following browsers: Google Chrome in Incognito mode Mozilla Firefox in Private Browsing mode Table 9.1. Dealing with the webview error in a private browsing window Browser Workarounds Google Chrome Go to Settings Privacy and security Cookies and other site data Allow all cookies . Mozilla Firefox Webviews are not supported in Private Browsing mode. See this reported bug for details.
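In addition to reading the logs described in Section 9.1, you can check whether a workspace container was terminated for exceeding its memory limit. The following sketch reuses the label selector from the oc logs example and assumes the workspace pod still exists; an empty result means no container has been terminated, while OOMKilled indicates the memory limit was hit:

oc get pod --namespace='<workspace_namespace>' --selector='controller.devfile.io/devworkspace_name=<workspace_name>' -o jsonpath='{.items[*].status.containerStatuses[*].lastState.terminated.reason}'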
[ "oc logs --follow --namespace=' <workspace_namespace> ' --selector='controller.devfile.io/devworkspace_name= <workspace_name> '", "components: - name: tools container: image: quay.io/devfile/universal-developer-image:ubi8-latest cpuLimit: 4000m 1 cpuRequest: 1000m 2", "components: - name: tools container: image: quay.io/devfile/universal-developer-image:ubi8-latest memoryLimit: 6G 1 memoryRequest: 512Mi 2" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_dev_spaces/3.19/html/user_guide/troubleshooting-devspaces
Chapter 5. Using Camel CLI with Red Hat build of Apache Camel for Quarkus
Chapter 5. Using Camel CLI with Red Hat build of Apache Camel for Quarkus 5.1. Installing Camel CLI Prerequisites JBang must be installed on your machine. See instructions on how to download and install the JBang. After the JBang is installed, you can verify JBang is working by executing the following command from a command shell: jbang version This outputs the version of installed JBang. Procedure Optional: uninstall any versions of Camel CLI: jbang app uninstall camel Run the following command to install the Camel CLI application: jbang app install -Dcamel.jbang.version=4.8.2 camel@apache/camel Use a camel.jbang.version that matches the product camel version This installs the Apache Camel as the camel command within JBang. This means that you can run Camel from the command line by just executing camel command. 5.2. Using Camel CLI The Camel CLI supports multiple commands. The camel help command can display all the available commands. camel --help Note The first time you run this command, it may cause dependencies to be cached, therefore taking a few extra seconds to run. If you are already using JBang and you get errors such as Exception in thread "main" java.lang.NoClassDefFoundError: "org/apache/camel/dsl/jbang/core/commands/CamelJBangMain" , try clearing the JBang cache and re-install again. All the commands support the --help and will display the appropriate help if that flag is provided. 5.2.1. User configuration for Camel CLI Camel CLI config command is used to store and use the user configuration. This eliminates the need to specify CLI options each time. For example, to run a different Camel version, use: camel run * --camel-version=4.8.3.redhat-00004 the camel-version can be added to the user configuration such as: camel config set camel-version=4.8.3.redhat-00004 This configures the Camel version that is used when you use camel run command. The run command below uses the user configuration: camel run * The user configuration file is stored in ~/.camel-jbang-user.properties . 5.2.2. Enable shell completion Camel CLI provides shell completion for bash and zsh out of the box. To enable shell completion for Camel CLI, run: source <(camel completion) To make it permanent, run: echo 'source <(camel completion)' >> ~/.bashrc 5.3. Creating and running Camel routes You can create a new basic routes with the init command. For example to create an XML route, run the following command: camel init cheese.xml This creates the file cheese.xml (in the current directory) with a sample route. To run the file, run: camel run cheese.xml Note You can create and run any of the supported DSLs in Camel such as YAML, XML, Java, Groovy. To create a new .java route, run: camel init foo.java When you use the init command, Camel by default creates the file in the current directory. However, you can use the --directory option to create the file in the specified directory. For example to create in a folder named foobar , run: camel init foo.java --directory=foobar Note When you use the --directory option, Camel automatically cleans this directory if already exists. 5.3.1. Running routes from multiple files You can run routes from more than one file, for example to run two YAML files: camel run one.yaml two.yaml You can run routes from two different files such as yaml and Java: camel run one.yaml hello.java You can use wildcards (i.e. 
* ) to match multiple files, such as running all the yaml files: camel run *.yaml You can run all files starting with foo*: camel run foo* To run all the files in the directory, use: camel run * Note The run goal can also detect files that are properties , such as application.properties . 5.3.2. Running routes from input parameter For very small Java routes, it is possible to provide the route as CLI argument, as shown below: camel run --code='from("kamelet:beer-source").to("log:beer")' This is very limited as the CLI argument is a bit cumbersome to use than files. When you run the routes from input parameter, remember that: Only Java DSL code is supported. Code is wrapped in single quote, so you can use double quote in Java DSL. Code is limited to what literal values possible to provide from the terminal and JBang. All route(s) must be defined in a single --code parameter. Note Using --code is only usable for very quick and small prototypes. 5.3.3. Dev mode with live reload You can enable the dev mode that comes with live reload of the route(s) when the source file is updated (saved), using the --dev options as shown: camel run foo.yaml --dev Then while the Camel integration is running, you can update the YAML route and update when saving. This option works for all DLS including java , for example: camel run hello.java --dev Note The live reload option is meant for development purposes only, and if you encounter problems with reloading such as JVM class loading issues, then you may need to restart the integration. 5.3.4. Developer Console You can enable the developer console, which presents a variety of information to the developer. To enable the developer console, run: camel run hello.java --console The console is then accessible from a web browser at http://localhost:8080/q/dev (by default). The link is also displayed in the log when the Camel is starting up. The console can give you insights into your running Camel integration, such as reporting the top routes that takes the longest time to process messages. You can then identify the slowest individual EIPs in these routes. The developer console can also output the data in JSON format, that can be used by 3rd-party tooling to capture the information. For example, to output the top routes via curl, run: curl -s -H "Accept: application/json" http://0.0.0.0:8080/q/dev/top/ If you have jq installed, that can format and output the JSON data in colour, run: curl -s -H "Accept: application/json" http://0.0.0.0:8080/q/dev/top/ | jq 5.3.5. Using profiles A profile in Camel CLI is a name (id) that refers to the configuration that is loaded automatically with Camel CLI. The default profile is named as the application which is a (smart default) to let Camel CLI automatic load application.properties (if present). This means that you can create profiles that match to a specific properties file with the same name. For example, running with a profile named local means that Camel CLI will load local.properties instead of application.properties . To use a profile, specify the command line option --profile as shown: camel run hello.java --profile=local You can only specify one profile name at a time, for example, --profile=local,two is not valid. In the properties files you can configure all the configurations from Camel Main . 
To turn off and enable log masking run the following command: camel.main.streamCaching=false camel.main.logMask=true You can also configure Camel components such as camel-kafka to declare the URL to the brokers: camel.component.kafka.brokers=broker1:9092,broker2:9092,broker3:9092 Note Keys starting with camel.jbang are reserved keys that are used by Camel CLI internally, and allow for pre-configuring arguments for Camel CLI commands. 5.3.6. Downloading JARs over the internet By default, Camel CLI automatically resolves the dependencies needed to run Camel, this is done by JBang and Camel respectively. Camel itself detects at runtime if a component has a need for the JARs that are not currently available on the classpath, and can then automatically download the JARs. Camel downloads these JARs in the following order: from the local disk in ~/.m2/repository from the internet in Maven Central from internet in the custom 3rd-party Maven repositories from all the repositories found in active profiles of ~/.m2/settings.xml or a settings file specified using --maven-settings option. If you do not want the Camel CLI to download over the internet, you can turn this off with the --download option, as shown: camel run foo.java --download=false 5.3.7. Adding custom JARs Camel CLI automatically detects the dependencies for the Camel components, languages, and data formats from its own release. This means that it is not necessary to specify which JARs to use. However, if you need to add 3rd-party custom JARs then you can specify these with the --dep as CLI argument in Maven GAV syntax ( groupId:artifactId:version ), such as: camel run foo.java --dep=com.foo:acme:1.0 camel run foo.java --dep=camel-saxon You can specify multiple dependencies separated by comma: camel run foo.java --dep=camel-saxon,com.foo:acme:1.0 5.3.8. Using 3rd-party Maven repositories Camel CLI downloads from the local repository first, and then from the online Maven Central repository. To download from the 3rd-party Maven repositories, you must specify this as CLI argument, or in the application.properties file. camel run foo.java --repos=https://packages.atlassian.com/maven-external Note You can specify multiple repositories separated by comma. The configuration for the 3rd-party Maven repositories is configured in the application.properties file with the key camel.jbang.repos as shown: camel.jbang.repos=https://packages.atlassian.com/maven-external When you run Camel route, the application.properties is automatically loaded: camel run foo.java You can also explicitly specify the properties file to use: camel run foo.java application.properties Or you can specify this as a profile: camel run foo.java --profile=application Where the profile id is the name of the properties file. 5.3.9. Configuration of Maven usage By default, the existing ~/.m2/settings.xml file is loaded, so it is possible to alter the behavior of the Maven resolution process. Maven settings file provides the information about the Maven mirrors, credential configuration (potentially encrypted) or active profiles and additional repositories. 
Maven repositories can use authentication, and the Maven way to configure credentials is through <server> elements: <server> <id>external-repository</id> <username>camel</username> <password>{SSVqy/PexxQHvubrWhdguYuG7HnTvHlaNr6g3dJn7nk=}</password> </server> While the password may be specified as plain text, we recommend that you configure the Maven master password first and then use it to configure the repository password: $ mvn -emp Master password: camel {hqXUuec2RowH8dA8vdqkF6jn4NU9ybOsDjuTmWvYj4U=} The password above must be added to the ~/.m2/settings-security.xml file as shown: <settingsSecurity> <master>{hqXUuec2RowH8dA8vdqkF6jn4NU9ybOsDjuTmWvYj4U=}</master> </settingsSecurity> Then you can configure a normal password: $ mvn -ep Password: camel {SSVqy/PexxQHvubrWhdguYuG7HnTvHlaNr6g3dJn7nk=} You can then use this password in the <server>/<password> configuration. By default, Maven reads the master password from the ~/.m2/settings-security.xml file, but you can override this location. The location of the settings.xml file itself can be specified as shown: camel run foo.java --maven-settings=/path/to/settings.xml --maven-settings-security=/path/to/settings-security.xml If you want to run the Camel application without assuming any location (even ~/.m2/settings.xml ), use this option: camel run foo.java --maven-settings=false 5.3.10. Running routes hosted on GitHub You can run a route that is hosted on GitHub using Camel's resource loader. For example, to run one of the Camel K examples, use: camel run github:apache:camel-kamelets-examples:jbang/hello-java/Hey.java You can also use the https URL for GitHub. For example, you can browse the examples from a web browser, copy the URL from the browser window, and run the example with Camel CLI: camel run https://github.com/apache/camel-kamelets-examples/tree/main/jbang/hello-java You can also use wildcards (i.e. * ) to match multiple files, such as running all the groovy files: camel run https://github.com/apache/camel-kamelets-examples/tree/main/jbang/languages/*.groovy Or you can run all files starting with rou*: camel run https://github.com/apache/camel-kamelets-examples/tree/main/jbang/languages/rou* 5.3.10.1. Running routes from GitHub gists Using gists from GitHub is a quick way to share small Camel routes that you can easily run. For example, to run a gist, use: camel run https://gist.github.com/davsclaus/477ddff5cdeb1ae03619aa544ce47e92 A gist can contain one or more files, and Camel CLI will gather all relevant files, so a gist can contain multiple routes, properties files, and Java beans. 5.3.11. Downloading routes hosted on GitHub You can use Camel CLI to download the existing examples from GitHub to local disk, which allows you to modify the example and run it locally. For example, you can download the dependency injection example by running the following command: camel init https://github.com/apache/camel-kamelets-examples/tree/main/jbang/dependency-injection The files (not sub folders) are then downloaded to the current directory. You can then run the example locally with: camel run * You can also download the files to a new folder using the --directory option, for example, to download the files to a folder named myproject , run: camel init https://github.com/apache/camel-kamelets-examples/tree/main/jbang/dependency-injection --directory=myproject Note When using the --directory option, Camel will automatically clean this directory if it already exists. 
You can run the example in dev mode, to hot-deploy on the source code changes. camel run * --dev You can download a single file, for example, to download one of the Camel K examples, run: camel init https://github.com/apache/camel-k-examples/blob/main/generic-examples/languages/simple.groovy This is a groovy route, which you can run with (or use * ): camel run simple.groovy 5.3.11.1. Downloading routes form GitHub gists You can download the files from the gists as shown: camel init https://gist.github.com/davsclaus/477ddff5cdeb1ae03619aa544ce47e92 This downloads the files to local disk, which you can run afterwards: camel run * You can download to a new folder using the --directory option, for example, to download to a folder named foobar , run: camel init https://gist.github.com/davsclaus/477ddff5cdeb1ae03619aa544ce47e92 --directory=foobar Note When using --directory option, Camel automatically cleans this directory if already exists. 5.3.12. Running the Camel K integrations or bindings Camel supports running the Camel K integrations and binding files, that are in the CRD format (Kubernetes Custom Resource Definitions).For example, to run a kamelet binding file named joke.yaml : #!/usr/bin/env jbang camel@apache/camel run apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: joke spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1 name: chuck-norris-source properties: period: 2000 sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1 name: log-sink properties: show-headers: false camel run joke.yaml 5.3.13. Run from the clipboard You can run the Camel routes directly from the OS clipboard. This allows to copy some code, and then quickly run the route. camel run clipboard.<extension> Where <extension> is the type of the content of the clipboard is, such as java , xml , or yaml . For example, you can copy this to your clipboard and then run the route: <route> <from uri="timer:foo"/> <log message="Hello World"/> </route> camel run clipboard.xml 5.3.14. Controlling the local Camel integrations To list the Camel integrations that are currently running, use the ps option: camel ps PID NAME READY STATUS AGE 61818 sample.camel.MyCamelApplica... 1/1 Running 26m38s 62506 test1 1/1 Running 4m34s This lists the PID, the name and age of the integration. You can use the stop command to stop any of these running Camel integrations. For example to stop the test1 , run: camel stop test1 Stopping running Camel integration (pid: 62506) You can use the PID to stop the integration: camel stop 62506 Stopping running Camel integration (pid: 62506) Note You do not have to type the full name, as the stop command will match the integrations that starts with the input, for example you can type camel stop t to stop all integrations starting with t . To stop all integrations, use the --all option as follows: camel stop --all Stopping running Camel integration (pid: 61818) Stopping running Camel integration (pid: 62506) 5.3.15. Controlling Quarkus integrations The Camel CLI by default only controls the Camel integrations that are running using the CLI, for example, camel run foo.java . For the CLI to be able to control and manage the Quarkus applications, you need to add a dependency to these projects to integrate with the Camel CLI. Quarkus In the Quarkus application, add the following dependency: <dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-cli-connector</artifactId> </dependency> 5.3.16. 
Getting the status of Camel integrations The get command in the Camel CLI is used for getting the Camel specific status for one or all of the running Camel integrations. To display the status of the running Camel integrations, run: camel get PID NAME CAMEL PLATFORM READY STATUS AGE TOTAL FAILED INFLIGHT SINCE-LAST 61818 MyCamel 3.20.1-SNAPSHOT Quarkus v3.2 1/1 Running 28m34s 854 0 0 0s/0s/- 63051 test1 3.20.1-SNAPSHOT JBang 1/1 Running 18s 14 0 0 0s/0s/- 63068 mygroovy 3.20.1-SNAPSHOT JBang 1/1 Running 5s 2 0 0 0s/0s/- The camel get command displays the default integrations, which is equivalent to typing the camel get integrations or the camel get int commands. This displays the overall information for the every Camel integration, where you can see the total number of messages processed. The column Since Last shows how long time ago the last processed message for three stages (started/completed/failed). The value of 0s/0s/- means that the last started and completed message just happened (0 seconds ago), and that there has not been any failed message yet. In this example, 9s/9s/1h3m means that last started and completed message is 9 seconds ago, and last failed is 1 hour and 3 minutes ago. You can also see the status of every routes, from all the local Camel integrations with camel get route : camel get route PID NAME ID FROM STATUS AGE TOTAL FAILED INFLIGHT MEAN MIN MAX SINCE-LAST 61818 MyCamel hello timer://hello?period=2000 Running 29m2s 870 0 0 0 0 14 0s/0s/- 63051 test1 java timer://java?period=1000 Running 46s 46 0 0 0 0 9 0s/0s/- 63068 mygroovy groovy timer://groovy?period=1000 Running 34s 34 0 0 0 0 5 0s/0s/- Note Use camel get --help to display all the available commands. 5.3.16.1. Top status of the Camel integrations The camel top command is used for getting top utilization statistics (highest to lowest heap used memory) of the running Camel integrations. camel top PID NAME JAVA CAMEL PLATFORM STATUS AGE HEAP NON-HEAP GC THREADS CLASSES 22104 chuck 11.0.13 3.20.1-SNAPSHOT JBang Running 2m10s 131/322/4294 MB 70/73 MB 17ms (6) 7/8 7456/7456 14242 MyCamel 11.0.13 3.20.1-SNAPSHOT Quarkus 32. Running 33m40s 115/332/4294 MB 62/66 MB 37ms (6) 16/16 8428/8428 22116 bar 11.0.13 3.20.1-SNAPSHOT JBang Running 2m7s 33/268/4294 MB 54/58 MB 20ms (4) 7/8 6104/6104 The HEAP column shows the heap memory (used/committed/max) and the non-heap (used/committed). The GC column shows the garbage collection information (time and total runs). The CLASSES column shows the number of classes (loaded/total). You can also see the top performing routes (highest to lowest mean processing time) of every routes, from all the local Camel integrations with camel top route : camel top route PID NAME ID FROM STATUS AGE TOTAL FAILED INFLIGHT MEAN MIN MAX SINCE-LAST 22104 chuck chuck-norris-source-1 timer://chuck?period=10000 Started 10s 1 0 0 163 163 163 9s 22116 bar route1 timer://yaml2?period=1000 Started 7s 7 0 0 1 0 11 0s 22104 chuck chuck kamelet://chuck-norris-source Started 10s 1 0 0 0 0 0 9s 22104 chuck log-sink-2 kamelet://source?routeId=log-sink-2 Started 10s 1 0 0 0 0 0 9s 14242 MyCamel hello timer://hello?period=2000 Started 31m41s 948 0 0 0 0 4 0s Note Use camel top --help to display all the available commands. 5.3.16.2. Starting and Stopping the routes The camel cmd is used for executing the miscellaneous commands in the running Camel integrations, for example, the commands to start and stop the routes. 
To stop all the routes in the chuck integration, run: camel cmd stop-route chuck The status will be then changed to Stopped for the chuck integration: camel get route PID NAME ID FROM STATUS AGE TOTAL FAILED INFLIGHT MEAN MIN MAX SINCE-LAST 81663 chuck chuck kamelet://chuck-norris-source Stopped 600 0 0 0 0 1 4s 81663 chuck chuck-norris-source-1 timer://chuck?period=10000 Stopped 600 0 0 65 52 290 4s 81663 chuck log-sink-2 kamelet://source?routeId=log-sink-2 Stopped 600 0 0 0 0 1 4s 83415 bar route1 timer://yaml2?period=1000 Started 5m30s 329 0 0 0 0 10 0s 83695 MyCamel hello timer://hello?period=2000 Started 3m52s 116 0 0 0 0 9 1s To start the route, run: camel cmd start-route chuck To stop all the routes in every the Camel integration, use the --all flag as follows: camel cmd stop-route --all To start all the routes, use: camel cmd start-route --all Note You can stop one or more route by their ids by separating them using comma, for example, camel cmd start-route --id=route1,hello . Use the camel cmd start-route --help command for more details. 5.3.16.3. Configuring the logging levels You can see the current logging levels of the running Camel integrations by: camel cmd logger PID NAME AGE LOGGER LEVEL 90857 bar 2m48s root INFO 91103 foo 20s root INFO The logging level can be changed at a runtime. For example, to change the level for the foo to DEBUG, run: camel cmd logger --level=DEBUG foo Note You can use --all to change logging levels for all running integrations. 5.3.16.4. Listing services Some Camel integrations may host a service which clients can call, such as REST, or SOAP-WS, or socket-level services using TCP protocols. You can list the available services as shown in the example below: camel get service PID NAME COMPONENT PROTOCOL SERVICE 1912 netty netty tcp tcp:localhost:4444 2023 greetings platform-http rest http://0.0.0.0:7777/camel/greetings/{name} (GET) 2023 greetings platform-http http http://0.0.0.0:7777/q/dev Here, you can see the two Camel integrations. The netty integration hosts a TCP service that is available on port 4444. The other Camel integration hosts a REST service that can be called via GET only. The third integration comes with embedded web console (started with the --console option). Note For a service to be listed the Camel components must be able to advertise the services using Camel Console . 5.3.16.4.1. Listing state of Circuit Breakers If your Camel integration uses the link:https://camel.apache.org/components/3.20.x/eips/circuitBreaker-eip.html [Circuit Breaker], then you can output the status of the breakers with Camel CLI as follows: camel get circuit-breaker PID NAME COMPONENT ROUTE ID STATE PENDING SUCCESS FAIL REJECT 56033 mycb resilience4j route1 circuitBreaker1 HALF_OPEN 5 2 3 0 Here we can see the circuit breaker is in half open state, that is a state where the breaker is attempting to transition back to closed, if the failures start to drop. Note You can run the command with watch option to show the latest state, for example: watch camel get circuit-breaker . 5.3.17. Scripting from the terminal using pipes You can execute a Camel CLI file as a script that is used for terminal scripting with pipes and filters. Note Every time the script is executed a JVM is started with Camel. This is not very fast or low on memory usage, so use the Camel CLI terminal scripting, for example, to use the many Camel components or Kamelets to more easily send or receive data from disparate IT systems. 
This requires to add the following line in top of the file, for example, as in the upper.yaml file below: ///usr/bin/env jbang --quiet camel@apache/camel pipe "USD0" "USD@" ; exit USD? # Will upper-case the input - from: uri: "stream:in" steps: - setBody: simple: "USD{body.toUpperCase()}" - to: "stream:out" To execute this as a script, you need to set the execute file permission: chmod +x upper.yaml Then you can then execute this as a script: echo "Hello\nWorld" | ./upper.yaml This outputs: HELLO WORLD You can turn on the logging using --logging=true which then logs to .camel-jbang/camel-pipe.log file. The name of the logging file cannot be configured. echo "Hello\nWorld" | ./upper.yaml --logging=true 5.3.17.1. Using stream:in with line vs raw mode When using stream:in to read data from System in then the Stream Component works in two modes: line mode (default) - reads input as single lines (separated by line breaks). Message body is a String . raw mode - reads the entire stream until end of stream . Message body is a byte[] . Note The default mode is due to historically how the stream component was created. Therefore, you may want to set stream:in?readLine=false to use raw mode. 5.3.18. Running local Kamelets You can use Camel CLI to try local Kamelets, without the need to publish them on GitHub or package them in a jar. camel run --local-kamelet-dir=/path/to/local/kamelets earthquake.yaml Note When the kamelets are from local file system, then they can be live reloaded, if they are updated, when you run Camel CLI in --dev mode. You can also point to a folder in a GitHub repository. For example: camel run --local-kamelet-dir=https://github.com/apache/camel-kamelets-examples/tree/main/custom-kamelets user.java Note If a kamelet is loaded from GitHub, then they cannot be live reloaded. 5.3.19. Using the platform-http component When a route is started from platform-http then the Camel CLI automatically includes a VertX HTTP server running on port 8080. following example shows the route in a file named server.yaml : - from: uri: "platform-http:/hello" steps: - set-body: constant: "Hello World" You can run this example with: camel run server.yaml And then call the HTTP service with: USD curl http://localhost:8080/hello Hello World% 5.3.20. Using Java beans and processors There is basic support for including regular Java source files together with Camel routes, and let the Camel CLI runtime compile the Java source. This means you can include smaller utility classes, POJOs, Camel Processors that the application needs. Note The Java source files cannot use package names. 5.3.21. Debugging There are two kinds of debugging available: Java debugging - Java code debugging (Standard Java) Camel route debugging - Debugging Camel routes (requires Camel tooling plugins) 5.3.21.1. Java debugging You can debug your integration scripts by using the --debug flag provided by JBang. However, to enable the Java debugging when starting the JVM, use the jbang command, instead of camel as shown: jbang --debug camel@apache/camel run hello.yaml Listening for transport dt_socket at address: 4004 As you can see the default listening port is 4004 but can be configured as described in JBang debugging . This is a standard Java debug socket. You can then use the IDE of your choice. You can add a Processor to put breakpoints hit during route execution (as opposed to route definition creation). 5.3.21.2. 
Camel route debugging The Camel route debugger is available by default (the camel-debug component is automatically added to the classpath). By default, it can be reached through JMX at the URL service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi/camel . You can then use the Integrated Development Environment (IDE) of your choice. 5.3.22. Health Checks The status of health checks can be accessed using the Camel CLI as follows: camel get health PID NAME AGE ID RL STATE RATE SINCE MESSAGE 61005 mybind 8s camel/context R UP 2/2/- 1s/3s/- Here you can see that Camel is UP . The application has been running for 8 seconds, and two health checks have been invoked. By default, the output includes the following checks: CamelContext health check Component specific health checks (such as from camel-kafka or camel-aws ) Custom health checks Any check which is not UP The RATE column shows three numbers separated by / . So 2/2/- means 2 checks in total, 2 successful and no failures. The last two columns reset when a health check changes state, as they count the number of consecutive checks that were successful or failed. So if the health check starts to fail, the numbers could look like this: camel get health PID NAME AGE ID RL STATE RATE SINCE MESSAGE 61005 mybind 3m2s camel/context R UP 77/-/3 1s/-/17s some kind of error Here you can see that the numbers have changed to 77/-/3 . This means the total number of checks is 77; there is no recent success, and the check has failed 3 times in a row. The SINCE column corresponds to the RATE . So in this case you can see the last check was 1 second ago, and that the check has been failing for 17 seconds in a row. You can use --level=full to output every health check, which includes consumer and route level checks as well. A health check often fails because an exception was thrown, which can be shown using the --trace flag: camel get health --trace PID NAME AGE ID RL STATE RATE SINCE MESSAGE 61038 mykafka 6m19s camel/context R UP 187/187/- 1s/6m16s/- 61038 mykafka 6m19s camel/kafka-consumer-kafka-not-secure...
R DOWN 187/-/187 1s/-/6m16s KafkaConsumer is not ready - Error: Invalid url in bootstrap.servers: value ------------------------------------------------------------------------------------------------------------------------ STACK-TRACE ------------------------------------------------------------------------------------------------------------------------ PID: 61038 NAME: mykafka AGE: 6m19s CHECK-ID: camel/kafka-consumer-kafka-not-secured-source-1 STATE: DOWN RATE: 187 SINCE: 6m16s METADATA: bootstrap.servers = value group.id = 7d8117be-41b4-4c81-b4df-cf26b928d38a route.id = kafka-not-secured-source-1 topic = value MESSAGE: KafkaConsumer is not ready - Error: Invalid url in bootstrap.servers: value org.apache.kafka.common.KafkaException: Failed to construct kafka consumer at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:823) at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:664) at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:645) at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:625) at org.apache.camel.component.kafka.DefaultKafkaClientFactory.getConsumer(DefaultKafkaClientFactory.java:34) at org.apache.camel.component.kafka.KafkaFetchRecords.createConsumer(KafkaFetchRecords.java:241) at org.apache.camel.component.kafka.KafkaFetchRecords.createConsumerTask(KafkaFetchRecords.java:201) at org.apache.camel.support.task.ForegroundTask.run(ForegroundTask.java:123) at org.apache.camel.component.kafka.KafkaFetchRecords.run(KafkaFetchRecords.java:125) at java.base/java.util.concurrent.ExecutorsUSDRunnableAdapter.call(Executors.java:515) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base/java.util.concurrent.ThreadPoolExecutorUSDWorker.run(ThreadPoolExecutor.java:628) at java.base/java.lang.Thread.run(Thread.java:829) Caused by: org.apache.kafka.common.config.ConfigException: Invalid url in bootstrap.servers: value at org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:59) at org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:48) at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:730) ... 13 more Here you can see that the health check fails because of the org.apache.kafka.common.config.ConfigException which is due to invalid configuration: Invalid url in bootstrap.servers: value . Note Use camel get health --help to see all the various options. 5.4. Listing what Camel components is available Camel comes with a lot of artifacts out of the box which are: components data formats expression languages miscellaneous components kamelets You can use the Camel CLI to list what Camel provides using the camel catalog command. For example, to list all the components: camel catalog components To see which Kamelets are available: camel catalog kamelets Note Use camel catalog --help to see all possible commands. 5.4.1. Displaying component documentation The doc goal can show quick documentation for every component, dataformat, and kamelets. For example, to see the kafka component run: camel doc kafka Note The documentation is not the full documentation as shown on the website, as the Camel CLI does not have direct access to this information and can only show a basic description of the component, but include tables for every configuration option. 
To see the documentation for the jackson dataformat: camel doc jackson In some rare cases there may be a component and a dataformat with the same name, and the doc goal prioritizes components. In such a situation you can prefix the name with dataformat, for example: camel doc dataformat:thrift You can also see the kamelet documentation, as shown: camel doc aws-kinesis-sink Note See Supported Kamelets for the list of supported kamelets. 5.4.1.1. Browsing online documentation from the Camel website You can use the doc command to quickly open the URL for the online documentation in the web browser. For example, to browse the kafka component, use --open-url : camel doc kafka --open-url This also works for data formats, languages, and kamelets. camel doc aws-kinesis-sink --open-url Note To just get the link to the online documentation, use camel doc kafka --url . 5.4.1.2. Filtering options listed in the tables Some components may have many options, and in such cases you can use the --filter option to list only the options that match the filter in the name, description, or group (producer, security, advanced). For example, to list only security related options: camel doc kafka --filter=security To list only options related to timeout : camel doc kafka --filter=timeout 5.5. Gathering list of dependencies The dependencies are automatically resolved when you work with Camel CLI. This means that you do not have to use a build system like Maven or Gradle to add every Camel component as a dependency. However, you may want to know which dependencies are required to run the Camel integration. You can use the dependency command to see the required dependencies. The command does not output a detailed tree, such as mvn dependency:tree , as the output is intended to list which Camel components and other JARs are needed (when using Kamelets). The dependency output by default is vanilla Apache Camel with camel-main as the runtime, as shown: camel dependency org.apache.camel:camel-dsl-modeline:4.8.3 org.apache.camel:camel-health:4.8.3 org.apache.camel:camel-kamelet:4.8.3 org.apache.camel:camel-log:4.8.3 org.apache.camel:camel-rest:4.8.3 org.apache.camel:camel-stream:4.8.3 org.apache.camel:camel-timer:4.8.3 org.apache.camel:camel-yaml-dsl:4.8.3 org.apache.camel.kamelets:camel-kamelets-utils:0.9.3 org.apache.camel.kamelets:camel-kamelets:0.9.3 The output is by default one line per Maven dependency, in GAV format (groupId:artifactId:version).
You can specify the Maven format for the the output as shown: camel dependency --output=maven <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-main</artifactId> <version>{camel-core-version}</version> </dependency> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-dsl-modeline</artifactId> <version>{camel-core-version}</version> </dependency> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-health</artifactId> <version>{camel-core-version}</version> </dependency> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-kamelet</artifactId> <version>{camel-core-version}</version> </dependency> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-log</artifactId> <version>{camel-core-version}</version> </dependency> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-rest</artifactId> <version>{camel-core-version}</version> </dependency> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-stream</artifactId> <version>{camel-core-version}</version> </dependency> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-timer</artifactId> <version>{camel-core-version}</version> </dependency> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-yaml-dsl</artifactId> <version>{camel-core-version}</version> </dependency> <dependency> <groupId>org.apache.camel.kamelets</groupId> <artifactId>camel-kamelets-utils</artifactId> <version>0.9.3</version> </dependency> <dependency> <groupId>org.apache.camel.kamelets</groupId> <artifactId>camel-kamelets</artifactId> <version>0.9.3</version> </dependency> You can also choose the target runtime as either quarkus as shown: camel dependency --runtime=quarkus org.apache.camel.quarkus:camel-quarkus-core:3.15 org.apache.camel.quarkus:camel-quarkus-debug:3.15 org.apache.camel.quarkus:camel-quarkus-microprofile-health:3.15 org.apache.camel.quarkus:camel-quarkus-platform-http:3.15 org.apache.camel.quarkus:camel-timer:3.15 org.apache.camel:camel-cli-connector:{camel-sb-version} org.apache.camel:camel-management:{camel-sb-version} org.apache.camel:camel-rest:{camel-sb-version} org.apache.camel:camel-timer:{camel-sb-version} org.apache.camel:camel-xml-io-dsl:{camel-sb-version} org.apache.camel:camel-yaml-dsl:{camel-sb-version} 5.6. Open API Camel CLI allows to quickly expose an Open API service using contract first approach, where you have an existing OpenAPI specification file. Camel CLI bridges each API endpoints from the OpenAPI specification to a Camel route with the naming convention direct:<operationId> . This make it quicker to implement a Camel route for a given operation. See the OpenAPI example for more details. 5.7. Troubleshooting When you use JBang, it stores the state in ~/.jbang directory. This is also the location where JBang stores downloaded JARs. Camel CLI also downloads the needed dependencies while running. However, these dependencies are downloaded to your local Maven repository ~/.m2 . So when you troubleshoot the problems such as an outdated JAR while running the Camel CLI, try to delete these directories, or parts of it. 5.8. Exporting to Red Hat build of Apache Camel for Quarkus You can export your Camel CLI integration to a traditional Java based project. You may want to do this after you have built a prototype using Camel CLI, and are in the need of a traditional Java based project with more need for Java coding, or to use the powerful runtimes of Quarkus or vanilla Camel Main. 5.8.1. 
Exporting to Red Hat build of Apache Camel for Quarkus The command export --runtime=quarkus exports your current Camel CLI file(s) to a Maven based project with files organized in src/main/ folder structure. For example, to export using the quarkus runtime, the maven groupID com.foo , the artifactId acme , and the version 1.0-SNAPSHOT into the camel-quarkus-jbang directory, run: Example camel export --runtime=quarkus --gav=com.foo:acme:1.0-SNAPSHOT --quarkus-group-id=com.redhat.quarkus.platform --quarkus-version=3.15.3.redhat-00002 --dep=org.apache.camel.quarkus:camel-quarkus-timer,org.apache.camel.quarkus:camel-quarkus-management,org.apache.camel.quarkus:camel-quarkus-cli-connector --repos=https://maven.repository.redhat.com/ga,https://packages.atlassian.com/maven-external --directory=camel-quarkus-jbang Note This will export to the current directory, this means that files are moved into the needed folder structure. To export to another directory, run: camel export --runtime=quarkus --gav=com.foo:acme:1.0-SNAPSHOT --directory=../myproject When exporting, the Camel version defined in the pom.xml or build.gradle is the same version as Camel CLI uses. However, you can specify the different Camel version as shown: camel export --runtime=quarkus --gav=com.foo:acme:1.0-SNAPSHOT --directory=../myproject --quarkus-version=3.15.3.redhat-00002 Note See the possible options by running the camel export --help command for more details. 5.8.2. Exporting with Camel CLI included When exporting to Quarkus or Camel Main, the Camel JBang CLI is not included out of the box. To continue to use the Camel CLI (that is camel ), you need to add camel:cli-connector in the --dep option, as shown: camel export --runtime=quarkus --gav=com.foo:acme:1.0-SNAPSHOT --dep=camel:cli-connector --directory=../myproject 5.8.3. Configuring the export The export command by default loads the configuration from application.properties file which is used for exporting specific parameters such as selecting the runtime and java version. The following options related to exporting , can be configured in the application.properties file: Option Description camel.jbang.runtime Runtime ( quarkus , or camel-main ) camel.jbang.gav The Maven group:artifact:version camel.jbang.dependencies Additional dependencies (Use commas to separate multiple dependencies). See more details at Adding custom JARs . camel.jbang.classpathFiles Additional files to add to classpath (Use commas to separate multiple files). See more details at Adding custom JARs . camel.jbang.javaVersion Java version (11 or 17) camel.jbang.kameletsVersion Apache Camel Kamelets version camel.jbang.localKameletDir Local directory for loading Kamelets camel.jbang.quarkusGroupId Quarkus Platform Maven groupId camel.jbang.quarkusArtifactId Quarkus Platform Maven artifactId camel.jbang.quarkusVersion Quarkus Platform version camel.jbang.mavenWrapper Include Maven Wrapper files in exported project camel.jbang.gradleWrapper Include Gradle Wrapper files in exported project camel.jbang.buildTool Build tool to use (maven or gradle) camel.jbang.repos Additional maven repositories for download on-demand (Use commas to separate multiple repositories) camel.jbang.mavenSettings Optional location of maven setting.xml file to configure servers, repositories, mirrors and proxies. If set to false, not even the default ~/.m2/settings.xml will be used. 
camel.jbang.mavenSettingsSecurity Optional location of maven settings-security.xml file to decrypt settings.xml camel.jbang.exportDir Directory where the project will be exported. camel.jbang.platform-http.port HTTP server port to use when running standalone Camel, such as when --console is enabled (port 8080 by default). camel.jbang.console Developer console at /q/dev on local HTTP server (port 8080 by default) when running standalone Camel. camel.jbang.health Health check at /q/health on local HTTP server (port 8080 by default) when running standalone Camel. Note These are the options from the export command. To view more details and default values, run: camel export --help . 5.8.4. Configuration The Camel CLI config command is used to store and use the user configuration. This eliminates the need to specify CLI options each time. For example, to run a different Camel version, use: Example camel run * --camel-version=4.8 the camel-version can be added to the user configuration such as: camel config set camel-version=4.8 The run command uses the user configuration: camel run * The user configuration file is stored in ~/.camel-jbang-user.properties . 5.8.4.1. Set and unset configuration Every Camel CLI option is added to the user configuration. For example: Example camel config set gav=com.foo:acme:1.0-SNAPSHOT camel config set runtime=quarkus camel config set deps=org.apache.camel.quarkus:camel-timer,camel:management,camel:cli-connector camel config set camel-version=4.8 camel config set camel-quarkus-version=3.15 To export the configuration: camel export To initialize the camel app: camel init foo.yaml To run the camel app: camel run foo.yaml --https://maven.repository.redhat.com/ga,https://packages.atlassian.com/maven-external To unset user configuration keys: camel config unset camel-quarkus-version 5.8.4.2. List and get configurations User configuration keys are listed using the following: camel config list The configuration above gives the following output: runtime = quarkus deps = org.apache.camel.springboot:camel-timer-starter gav = com.foo:acme:1.0-SNAPSHOT To obtain a value for the given key, use the get command. camel config get gav com.foo:acme:1.0-SNAPSHOT 5.8.4.3. Placeholders substitutes User configuration values can be used as placeholder substitutes with command line properties, for example: Example camel config set repos=https://maven.repository.redhat.com/ga camel run 'Test.java' --logging-level=info --repos=#repos,https://packages.atlassian.com/maven-external In this example, since repos is set in the user configuration (config set) and the camel run command declares the placeholder #repos, camel run will replace the placeholder so that both repositories will be used during the execution. Notice, that to refer to the configuration value the syntax is #optionName eg #repos. Note The placeholder substitution only works for every option that a given Camel command has. You can see all the options a command has using camel run --help .
[ "jbang version", "jbang app uninstall camel", "jbang app install -Dcamel.jbang.version=4.8.2 camel@apache/camel", "camel --help", "camel run * --camel-version=4.8.3.redhat-00004", "camel config set camel-version=4.8.3.redhat-00004", "camel run *", "source <(camel completion)", "echo 'source <(camel completion)' >> ~/.bashrc", "camel init cheese.xml", "camel run cheese.xml", "camel init foo.java", "camel init foo.java --directory=foobar", "camel run one.yaml two.yaml", "camel run one.yaml hello.java", "camel run *.yaml", "camel run foo*", "camel run *", "camel run --code='from(\"kamelet:beer-source\").to(\"log:beer\")'", "camel run foo.yaml --dev", "camel run hello.java --dev", "camel run hello.java --console", "curl -s -H \"Accept: application/json\" http://0.0.0.0:8080/q/dev/top/", "curl -s -H \"Accept: application/json\" http://0.0.0.0:8080/q/dev/top/ | jq", "camel run hello.java --profile=local", "camel.main.streamCaching=false camel.main.logMask=true", "camel.component.kafka.brokers=broker1:9092,broker2:9092,broker3:9092", "camel run foo.java --download=false", "camel run foo.java --dep=com.foo:acme:1.0", "To add a Camel dependency explicitly you can use a shorthand syntax (starting with `camel:` or `camel-`):", "camel run foo.java --dep=camel-saxon", "camel run foo.java --dep=camel-saxon,com.foo:acme:1.0", "camel run foo.java --repos=https://packages.atlassian.com/maven-external", "camel.jbang.repos=https://packages.atlassian.com/maven-external", "camel run foo.java", "camel run foo.java application.properties", "camel run foo.java --profile=application", "<server> <id>external-repository</id> <username>camel</username> <password>{SSVqy/PexxQHvubrWhdguYuG7HnTvHlaNr6g3dJn7nk=}</password> </server>", "mvn -emp Master password: camel {hqXUuec2RowH8dA8vdqkF6jn4NU9ybOsDjuTmWvYj4U=}", "<settingsSecurity> <master>{hqXUuec2RowH8dA8vdqkF6jn4NU9ybOsDjuTmWvYj4U=}</master> </settingsSecurity>", "mvn -ep Password: camel {SSVqy/PexxQHvubrWhdguYuG7HnTvHlaNr6g3dJn7nk=}", "camel run foo.java --maven-settings=/path/to/settings.xml --maven-settings-security=/path/to/settings-security.xml", "camel run foo.java --maven-settings=false", "camel run github:apache:camel-kamelets-examples:jbang/hello-java/Hey.java", "camel run https://github.com/apache/camel-kamelets-examples/tree/main/jbang/hello-java", "camel run https://github.com/apache/camel-kamelets-examples/tree/main/jbang/languages/*.groovy", "camel run https://github.com/apache/camel-kamelets-examples/tree/main/jbang/languages/rou*", "camel run https://gist.github.com/davsclaus/477ddff5cdeb1ae03619aa544ce47e92", "camel init https://github.com/apache/camel-kamelets-examples/tree/main/jbang/dependency-injection", "camel run *", "camel init https://github.com/apache/camel-kamelets-examples/tree/main/jbang/dependency-injection --directory=myproject", "camel run * --dev", "camel init https://github.com/apache/camel-k-examples/blob/main/generic-examples/languages/simple.groovy", "camel run simple.groovy", "camel init https://gist.github.com/davsclaus/477ddff5cdeb1ae03619aa544ce47e92", "camel run *", "camel init https://gist.github.com/davsclaus/477ddff5cdeb1ae03619aa544ce47e92 --directory=foobar", "#!/usr/bin/env jbang camel@apache/camel run apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: joke spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1 name: chuck-norris-source properties: period: 2000 sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1 name: log-sink properties: show-headers: false", "camel run 
joke.yaml", "camel run clipboard.<extension>", "<route> <from uri=\"timer:foo\"/> <log message=\"Hello World\"/> </route>", "camel run clipboard.xml", "camel ps PID NAME READY STATUS AGE 61818 sample.camel.MyCamelApplica... 1/1 Running 26m38s 62506 test1 1/1 Running 4m34s", "camel stop test1 Stopping running Camel integration (pid: 62506)", "camel stop 62506 Stopping running Camel integration (pid: 62506)", "camel stop --all Stopping running Camel integration (pid: 61818) Stopping running Camel integration (pid: 62506)", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-cli-connector</artifactId> </dependency>", "camel get PID NAME CAMEL PLATFORM READY STATUS AGE TOTAL FAILED INFLIGHT SINCE-LAST 61818 MyCamel 3.20.1-SNAPSHOT Quarkus v3.2 1/1 Running 28m34s 854 0 0 0s/0s/- 63051 test1 3.20.1-SNAPSHOT JBang 1/1 Running 18s 14 0 0 0s/0s/- 63068 mygroovy 3.20.1-SNAPSHOT JBang 1/1 Running 5s 2 0 0 0s/0s/-", "camel get route PID NAME ID FROM STATUS AGE TOTAL FAILED INFLIGHT MEAN MIN MAX SINCE-LAST 61818 MyCamel hello timer://hello?period=2000 Running 29m2s 870 0 0 0 0 14 0s/0s/- 63051 test1 java timer://java?period=1000 Running 46s 46 0 0 0 0 9 0s/0s/- 63068 mygroovy groovy timer://groovy?period=1000 Running 34s 34 0 0 0 0 5 0s/0s/-", "camel top PID NAME JAVA CAMEL PLATFORM STATUS AGE HEAP NON-HEAP GC THREADS CLASSES 22104 chuck 11.0.13 3.20.1-SNAPSHOT JBang Running 2m10s 131/322/4294 MB 70/73 MB 17ms (6) 7/8 7456/7456 14242 MyCamel 11.0.13 3.20.1-SNAPSHOT Quarkus 32. Running 33m40s 115/332/4294 MB 62/66 MB 37ms (6) 16/16 8428/8428 22116 bar 11.0.13 3.20.1-SNAPSHOT JBang Running 2m7s 33/268/4294 MB 54/58 MB 20ms (4) 7/8 6104/6104", "camel top route PID NAME ID FROM STATUS AGE TOTAL FAILED INFLIGHT MEAN MIN MAX SINCE-LAST 22104 chuck chuck-norris-source-1 timer://chuck?period=10000 Started 10s 1 0 0 163 163 163 9s 22116 bar route1 timer://yaml2?period=1000 Started 7s 7 0 0 1 0 11 0s 22104 chuck chuck kamelet://chuck-norris-source Started 10s 1 0 0 0 0 0 9s 22104 chuck log-sink-2 kamelet://source?routeId=log-sink-2 Started 10s 1 0 0 0 0 0 9s 14242 MyCamel hello timer://hello?period=2000 Started 31m41s 948 0 0 0 0 4 0s", "camel cmd stop-route chuck", "camel get route PID NAME ID FROM STATUS AGE TOTAL FAILED INFLIGHT MEAN MIN MAX SINCE-LAST 81663 chuck chuck kamelet://chuck-norris-source Stopped 600 0 0 0 0 1 4s 81663 chuck chuck-norris-source-1 timer://chuck?period=10000 Stopped 600 0 0 65 52 290 4s 81663 chuck log-sink-2 kamelet://source?routeId=log-sink-2 Stopped 600 0 0 0 0 1 4s 83415 bar route1 timer://yaml2?period=1000 Started 5m30s 329 0 0 0 0 10 0s 83695 MyCamel hello timer://hello?period=2000 Started 3m52s 116 0 0 0 0 9 1s", "camel cmd start-route chuck", "camel cmd stop-route --all", "camel cmd start-route --all", "camel cmd logger PID NAME AGE LOGGER LEVEL 90857 bar 2m48s root INFO 91103 foo 20s root INFO", "camel cmd logger --level=DEBUG foo", "camel get service PID NAME COMPONENT PROTOCOL SERVICE 1912 netty netty tcp tcp:localhost:4444 2023 greetings platform-http rest http://0.0.0.0:7777/camel/greetings/{name} (GET) 2023 greetings platform-http http http://0.0.0.0:7777/q/dev", "camel get circuit-breaker PID NAME COMPONENT ROUTE ID STATE PENDING SUCCESS FAIL REJECT 56033 mycb resilience4j route1 circuitBreaker1 HALF_OPEN 5 2 3 0", "///usr/bin/env jbang --quiet camel@apache/camel pipe \"USD0\" \"USD@\" ; exit USD? 
Will upper-case the input - from: uri: \"stream:in\" steps: - setBody: simple: \"USD{body.toUpperCase()}\" - to: \"stream:out\"", "chmod +x upper.yaml", "echo \"Hello\\nWorld\" | ./upper.yaml", "HELLO WORLD", "echo \"Hello\\nWorld\" | ./upper.yaml --logging=true", "camel run --local-kamelet-dir=/path/to/local/kamelets earthquake.yaml", "camel run --local-kamelet-dir=https://github.com/apache/camel-kamelets-examples/tree/main/custom-kamelets user.java", "- from: uri: \"platform-http:/hello\" steps: - set-body: constant: \"Hello World\"", "camel run server.yaml", "curl http://localhost:8080/hello Hello World%", "jbang --debug camel@apache/camel run hello.yaml Listening for transport dt_socket at address: 4004", "camel get health PID NAME AGE ID RL STATE RATE SINCE MESSAGE 61005 mybind 8s camel/context R UP 2/2/- 1s/3s/-", "camel get health PID NAME AGE ID RL STATE RATE SINCE MESSAGE 61005 mybind 3m2s camel/context R UP 77/-/3 1s/-/17s some kind of error", "camel get health --trace PID NAME AGE ID RL STATE RATE SINCE MESSAGE 61038 mykafka 6m19s camel/context R UP 187/187/- 1s/6m16s/- 61038 mykafka 6m19s camel/kafka-consumer-kafka-not-secure... R DOWN 187/-/187 1s/-/6m16s KafkaConsumer is not ready - Error: Invalid url in bootstrap.servers: value ------------------------------------------------------------------------------------------------------------------------ STACK-TRACE ------------------------------------------------------------------------------------------------------------------------ PID: 61038 NAME: mykafka AGE: 6m19s CHECK-ID: camel/kafka-consumer-kafka-not-secured-source-1 STATE: DOWN RATE: 187 SINCE: 6m16s METADATA: bootstrap.servers = value group.id = 7d8117be-41b4-4c81-b4df-cf26b928d38a route.id = kafka-not-secured-source-1 topic = value MESSAGE: KafkaConsumer is not ready - Error: Invalid url in bootstrap.servers: value org.apache.kafka.common.KafkaException: Failed to construct kafka consumer at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:823) at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:664) at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:645) at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:625) at org.apache.camel.component.kafka.DefaultKafkaClientFactory.getConsumer(DefaultKafkaClientFactory.java:34) at org.apache.camel.component.kafka.KafkaFetchRecords.createConsumer(KafkaFetchRecords.java:241) at org.apache.camel.component.kafka.KafkaFetchRecords.createConsumerTask(KafkaFetchRecords.java:201) at org.apache.camel.support.task.ForegroundTask.run(ForegroundTask.java:123) at org.apache.camel.component.kafka.KafkaFetchRecords.run(KafkaFetchRecords.java:125) at java.base/java.util.concurrent.ExecutorsUSDRunnableAdapter.call(Executors.java:515) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base/java.util.concurrent.ThreadPoolExecutorUSDWorker.run(ThreadPoolExecutor.java:628) at java.base/java.lang.Thread.run(Thread.java:829) Caused by: org.apache.kafka.common.config.ConfigException: Invalid url in bootstrap.servers: value at org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:59) at org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:48) at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:730) ... 
13 more", "camel catalog components", "camel catalog kamelets", "camel doc kafka", "camel doc jackson", "camel doc dataformat:thrift", "camel doc aws-kinesis-sink", "camel doc kafka --open-url", "camel doc aws-kinesis-sink --open-url", "camel doc kafka --filter=security", "camel doc kafka --filter=timeout", "camel dependency org.apache.camel:camel-dsl-modeline:4.8.3 org.apache.camel:camel-health:4.8.3 org.apache.camel:camel-kamelet:4.8.3 org.apache.camel:camel-log:4.8.3 org.apache.camel:camel-rest:4.8.3 org.apache.camel:camel-stream:4.8.3 org.apache.camel:camel-timer:4.8.3 org.apache.camel:camel-yaml-dsl:4.8.3 org.apache.camel.kamelets:camel-kamelets-utils:0.9.3 org.apache.camel.kamelets:camel-kamelets:0.9.3", "camel dependency --output=maven <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-main</artifactId> <version>{camel-core-version}</version> </dependency> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-dsl-modeline</artifactId> <version>{camel-core-version}</version> </dependency> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-health</artifactId> <version>{camel-core-version}</version> </dependency> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-kamelet</artifactId> <version>{camel-core-version}</version> </dependency> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-log</artifactId> <version>{camel-core-version}</version> </dependency> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-rest</artifactId> <version>{camel-core-version}</version> </dependency> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-stream</artifactId> <version>{camel-core-version}</version> </dependency> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-timer</artifactId> <version>{camel-core-version}</version> </dependency> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-yaml-dsl</artifactId> <version>{camel-core-version}</version> </dependency> <dependency> <groupId>org.apache.camel.kamelets</groupId> <artifactId>camel-kamelets-utils</artifactId> <version>0.9.3</version> </dependency> <dependency> <groupId>org.apache.camel.kamelets</groupId> <artifactId>camel-kamelets</artifactId> <version>0.9.3</version> </dependency>", "camel dependency --runtime=quarkus org.apache.camel.quarkus:camel-quarkus-core:3.15 org.apache.camel.quarkus:camel-quarkus-debug:3.15 org.apache.camel.quarkus:camel-quarkus-microprofile-health:3.15 org.apache.camel.quarkus:camel-quarkus-platform-http:3.15 org.apache.camel.quarkus:camel-timer:3.15 org.apache.camel:camel-cli-connector:{camel-sb-version} org.apache.camel:camel-management:{camel-sb-version} org.apache.camel:camel-rest:{camel-sb-version} org.apache.camel:camel-timer:{camel-sb-version} org.apache.camel:camel-xml-io-dsl:{camel-sb-version} org.apache.camel:camel-yaml-dsl:{camel-sb-version}", "camel export --runtime=quarkus --gav=com.foo:acme:1.0-SNAPSHOT --quarkus-group-id=com.redhat.quarkus.platform --quarkus-version=3.15.3.redhat-00002 --dep=org.apache.camel.quarkus:camel-quarkus-timer,org.apache.camel.quarkus:camel-quarkus-management,org.apache.camel.quarkus:camel-quarkus-cli-connector --repos=https://maven.repository.redhat.com/ga,https://packages.atlassian.com/maven-external --directory=camel-quarkus-jbang", "camel export --runtime=quarkus --gav=com.foo:acme:1.0-SNAPSHOT --directory=../myproject", "camel export --runtime=quarkus --gav=com.foo:acme:1.0-SNAPSHOT --directory=../myproject 
--quarkus-version=3.15.3.redhat-00002", "camel export --runtime=quarkus --gav=com.foo:acme:1.0-SNAPSHOT --dep=camel:cli-connector --directory=../myproject", "camel run * --camel-version=4.8", "camel config set camel-version=4.8", "camel run *", "camel config set gav=com.foo:acme:1.0-SNAPSHOT camel config set runtime=quarkus camel config set deps=org.apache.camel.quarkus:camel-timer,camel:management,camel:cli-connector camel config set camel-version=4.8 camel config set camel-quarkus-version=3.15", "camel export", "camel init foo.yaml", "camel run foo.yaml --https://maven.repository.redhat.com/ga,https://packages.atlassian.com/maven-external", "camel config unset camel-quarkus-version", "camel config list", "runtime = quarkus deps = org.apache.camel.springboot:camel-timer-starter gav = com.foo:acme:1.0-SNAPSHOT", "camel config get gav com.foo:acme:1.0-SNAPSHOT", "camel config set repos=https://maven.repository.redhat.com/ga camel run 'Test.java' --logging-level=info --repos=#repos,https://packages.atlassian.com/maven-external" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/tooling_guide_for_red_hat_build_of_apache_camel/camel-cli-cq
B.73. postgresql
B.73. postgresql B.73.1. RHSA-2010:0908 - Moderate: postgresql security update Updated postgresql packages that fix one security issue are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having moderate security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. PostgreSQL is an advanced object-relational database management system (DBMS). PL/Perl and PL/Tcl allow users to write PostgreSQL functions in the Perl and Tcl languages. The PostgreSQL SECURITY DEFINER parameter, which can be used when creating a new PostgreSQL function, specifies that the function will be executed with the privileges of the user that created it. CVE-2010-3433 It was discovered that a user could utilize the features of the PL/Perl and PL/Tcl languages to modify the behavior of a SECURITY DEFINER function created by a different user. If the PL/Perl or PL/Tcl language was used to implement a SECURITY DEFINER function, an authenticated database user could use a PL/Perl or PL/Tcl script to modify the behavior of that function during subsequent calls in the same session. This would result in the modified or injected code also being executed with the privileges of the user who created the SECURITY DEFINER function, possibly leading to privilege escalation. These updated postgresql packages upgrade PostgreSQL to version 8.4.5. Refer to the PostgreSQL Release Notes for a list of changes: http://www.postgresql.org/docs/8.4/static/release.html All PostgreSQL users are advised to upgrade to these updated packages, which correct this issue. If the postgresql service is running, it will be automatically restarted after installing this update.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/postgresql
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Tell us how we can make it better. Providing documentation feedback in Jira Use the Create Issue form to provide feedback on the documentation for Red Hat OpenStack Services on OpenShift (RHOSO) or earlier releases of Red Hat OpenStack Platform (RHOSP). When you create an issue for RHOSO or RHOSP documents, the issue is recorded in the RHOSO Jira project, where you can track the progress of your feedback. To complete the Create Issue form, ensure that you are logged in to Jira. If you do not have a Red Hat Jira account, you can create an account at https://issues.redhat.com . Click the following link to open a Create Issue page: Create Issue Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create .
null
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/configuration_reference/proc_providing-feedback-on-red-hat-documentation
Chapter 2. Preparing to install with the Assisted Installer
Chapter 2. Preparing to install with the Assisted Installer Before installing a cluster, you must ensure the cluster nodes and network meet the requirements. 2.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users. If you use a firewall, you must configure it so that Assisted Installer can access the resources it requires to function. 2.2. Assisted Installer prerequisites The Assisted Installer validates the following prerequisites to ensure successful installation. 2.2.1. CPU Architectures The Assisted installer is supported on the following CPU architectures: x86_64 arm64 ppc64le s390x 2.2.2. Hardware For Single Node Openshift (SNO), the Assisted Installer requires one host with at least 8 CPU cores, 16 GiB RAM, and 100 GB disk size. For multi-node clusters, control plane hosts must have at least the following resources: 4 CPU cores 16.00 GiB RAM 100 GB storage 10ms write speed or less for etcd wal_fsync_duration_seconds For multi-node clusters, worker hosts must have at least the following resources: 2 CPU cores 8.00 GiB RAM 100 GB storage For hosts of type vMware , set clusterSet disk.enableUUID to true , even when the platform is not vSphere. 2.2.3. Networking The network must meet the following requirements: A DHCP server unless using static IP addressing. A base domain name. You must ensure that the following requirements are met: There is no wildcard, such as *.<cluster_name>.<base_domain> , or the installation will not proceed. A DNS A/AAAA record for api.<cluster_name>.<base_domain> . A DNS A/AAAA record with a wildcard for *.apps.<cluster_name>.<base_domain> . Port 6443 is open for the API URL if you intend to allow users outside the firewall to access the cluster via the oc CLI tool. Port 443 is open for the console if you intend to allow users outside the firewall to access the console. A DNS A/AAAA record for each node in the cluster when using User Managed Networking, or the installation will not proceed. DNS A/AAAA records are required for each node in the cluster when using Cluster Managed Networking after installation is complete in order to connect to the cluster, but installation can proceed without the A/AAAA records when using Cluster Managed Networking. A DNS PTR record for each node in the cluster if you want to boot with the preset hostname when using static IP addressing. Otherwise, the Assisted Installer has an automatic node renaming feature when using static IP addressing that will rename the nodes to their network interface MAC address. Important DNS A/AAAA record settings at top-level domain registrars can take significant time to update. Ensure the A/AAAA record DNS settings are working before installation to prevent installation delays. The OpenShift Container Platform cluster's network must also meet the following requirements: Connectivity between all cluster nodes Connectivity for each node to the internet Access to an NTP server for time synchronization between the cluster nodes 2.2.4. Preflight validations The Assisted Installer ensures the cluster meets the prerequisites before installation, because it eliminates complex post-installation troubleshooting, thereby saving significant amounts of time and effort. 
Before installing software on the nodes, the Assisted Installer conducts the following validations: Ensures network connectivity Ensures sufficient network bandwidth Ensures connectivity to the registry Ensures that any upstream DNS can resolve the required domain name Ensures time synchronization between cluster nodes Verifies that the cluster nodes meet the minimum hardware requirements Validates the installation configuration parameters If the Assisted Installer does not successfully validate the foregoing requirements, installation will not proceed.
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/assisted_installer_for_openshift_container_platform/preparing-to-install-with-ai
Chapter 2. Multicluster storage health
Chapter 2. Multicluster storage health To view the overall storage health status across all the clusters with OpenShift Data Foundation and manage its capacity, you must first enable the multicluster dashboard on the Hub cluster. 2.1. Enabling multicluster dashboard on Hub cluster You can enable the multicluster dashboard on the install screen either before or after installing ODF Multicluster Orchestrator with the console plugin. Prerequisites Ensure that you have installed OpenShift Container Platform version 4.15 and have administrator privileges. Ensure that you have installed Multicluster Orchestrator 4.15 operator with plugin for console enabled. Ensure that you have installed Red Hat Advanced Cluster Management for Kubernetes (RHACM) 2.10 from Operator Hub. For instructions on how to install, see Installing RHACM . Ensure you have enabled observability on RHACM. See Enabling observability guidelines . Procedure Create the configmap file named observability-metrics-custom-allowlist.yaml and add the name of the custom metric to the metrics_list.yaml parameter. You can use the following YAML to list the OpenShift Data Foundation metrics on Hub cluster. For details, see Adding custom metrics . Run the following command in the open-cluster-management-observability namespace: After observability-metrics-custom-allowlist yaml is created, RHACM will start collecting the listed OpenShift Data Foundation metrics from all the managed clusters. If you want to exclude specific managed clusters from collecting the observability data, add the following cluster label to your clusters: observability: disabled . To view the multicluster health, see chapter verifying multicluster storage dashboard . 2.2. Verifying multicluster storage health on hub cluster Prerequisites Ensure that you have enabled multicluster monitoring. For instructions, see chapter Enabling multicluster dashboard . Procedure In the OpenShift web console of Hub cluster, ensure All Clusters is selected. Navigate to Data Services and click Storage System . On the Overview tab, verify that there are green ticks in front of OpenShift Data Foundation and Systems . This indicates that the operator is running and all storage systems are available. In the Status card, Click OpenShift Data Foundation to view the operator status. Click Systems to view the storage system status. The Storage system capacity card shows the following details: Name of the storage system Cluster name Graphical representation of total and used capacity in percentage Actual values for total and used capacity in TiB
[ "kind: ConfigMap apiVersion: v1 metadata: name: observability-metrics-custom-allowlist Namespace: open-cluster-management-observability data: metrics_list.yaml: | names: - odf_system_health_status - odf_system_map - odf_system_raw_capacity_total_bytes - odf_system_raw_capacity_used_bytes matches: - __name__=\"csv_succeeded\",exported_namespace=\"openshift-storage\",name=~\"odf-operator.*\"", "oc apply -n open-cluster-management-observability -f observability-metrics-custom-allowlist.yaml" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/monitoring_openshift_data_foundation/multicluster_storage_health
Preface
Preface As a data scientist, you can organize your data science work into a single project. A data science project in OpenShift AI can consist of the following components: Workbenches Creating a workbench allows you to work with models in your preferred IDE, such as JupyterLab. Cluster storage For data science projects that require data retention, you can add cluster storage to the project. Connections Adding a connection to your project allows you to connect data inputs to your workbenches. Pipelines Standardize and automate machine learning workflows to enable you to further enhance and deploy your data science models. Models and model servers Deploy a trained data science model to serve intelligent applications. Your model is deployed with an endpoint that allows applications to send requests to the model.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/working_on_data_science_projects/pr01
Chapter 16. kubernetes
Chapter 16. kubernetes The namespace for Kubernetes-specific metadata Data type group 16.1. kubernetes.pod_name The name of the pod Data type keyword 16.2. kubernetes.pod_id The Kubernetes ID of the pod Data type keyword 16.3. kubernetes.namespace_name The name of the namespace in Kubernetes Data type keyword 16.4. kubernetes.namespace_id The ID of the namespace in Kubernetes Data type keyword 16.5. kubernetes.host The Kubernetes node name Data type keyword 16.6. kubernetes.container_name The name of the container in Kubernetes Data type keyword 16.7. kubernetes.annotations Annotations associated with the Kubernetes object Data type group 16.8. kubernetes.labels Labels present on the original Kubernetes Pod Data type group 16.9. kubernetes.event The Kubernetes event obtained from the Kubernetes master API. This event description loosely follows type Event in Event v1 core . Data type group 16.9.1. kubernetes.event.verb The type of event, ADDED , MODIFIED , or DELETED Data type keyword Example value ADDED 16.9.2. kubernetes.event.metadata Information related to the location and time of the event creation Data type group 16.9.2.1. kubernetes.event.metadata.name The name of the object that triggered the event creation Data type keyword Example value java-mainclass-1.14d888a4cfc24890 16.9.2.2. kubernetes.event.metadata.namespace The name of the namespace where the event originally occurred. Note that it differs from kubernetes.namespace_name , which is the namespace where the eventrouter application is deployed. Data type keyword Example value default 16.9.2.3. kubernetes.event.metadata.selfLink A link to the event Data type keyword Example value /api/v1/namespaces/javaj/events/java-mainclass-1.14d888a4cfc24890 16.9.2.4. kubernetes.event.metadata.uid The unique ID of the event Data type keyword Example value d828ac69-7b58-11e7-9cf5-5254002f560c 16.9.2.5. kubernetes.event.metadata.resourceVersion A string that identifies the server's internal version of the event. Clients can use this string to determine when objects have changed. Data type integer Example value 311987 16.9.3. kubernetes.event.involvedObject The object that the event is about. Data type group 16.9.3.1. kubernetes.event.involvedObject.kind The type of object Data type keyword Example value ReplicationController 16.9.3.2. kubernetes.event.involvedObject.namespace The namespace name of the involved object. Note that it may differ from kubernetes.namespace_name , which is the namespace where the eventrouter application is deployed. Data type keyword Example value default 16.9.3.3. kubernetes.event.involvedObject.name The name of the object that triggered the event Data type keyword Example value java-mainclass-1 16.9.3.4. kubernetes.event.involvedObject.uid The unique ID of the object Data type keyword Example value e6bff941-76a8-11e7-8193-5254002f560c 16.9.3.5. kubernetes.event.involvedObject.apiVersion The version of kubernetes master API Data type keyword Example value v1 16.9.3.6. kubernetes.event.involvedObject.resourceVersion A string that identifies the server's internal version of the pod that triggered the event. Clients can use this string to determine when objects have changed. Data type keyword Example value 308882 16.9.4. kubernetes.event.reason A short machine-understandable string that gives the reason for generating this event Data type keyword Example value SuccessfulCreate 16.9.5. kubernetes.event.source_component The component that reported this event Data type keyword Example value replication-controller 16.9.6. 
kubernetes.event.firstTimestamp The time at which the event was first recorded Data type date Example value 2017-08-07 10:11:57.000000000 Z 16.9.7. kubernetes.event.count The number of times this event has occurred Data type integer Example value 1 16.9.8. kubernetes.event.type The type of event, Normal or Warning . New types could be added in the future. Data type keyword Example value Normal
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/logging/cluster-logging-exported-fields-kubernetes_cluster-logging-exported-fields
7.7. Synchronous and Asynchronous Distribution
7.7. Synchronous and Asynchronous Distribution To elicit meaningful return values from certain public API methods, it is essential to use synchronous communication when using distribution mode. Example 7.1. Communication Mode example For example, consider a cluster with three nodes, node A , B , and C , and a key K that maps to nodes A and B . Perform an operation on node C that requires a return value, for example Cache.remove(K) . To execute successfully, the operation must first synchronously forward the call to both node A and B , and then wait for a result returned from either node A or B . If asynchronous communication were used, the usefulness of the returned value could not be guaranteed, even though the operation would otherwise behave as expected.
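The following Java sketch illustrates the scenario above in embedded (library) mode. It is a minimal example only: it assumes an infinispan.xml configuration file that defines a distributed cache named "distributedCache" with synchronous communication, and those names are illustrative rather than taken from this guide.

```java
// Minimal library-mode sketch: remove() on a synchronously distributed cache
// returns a reliable previous value because the call is forwarded to the
// owner nodes of the key and waits for their reply.
import org.infinispan.Cache;
import org.infinispan.manager.DefaultCacheManager;

public class SyncRemoveExample {
    public static void main(String[] args) throws Exception {
        DefaultCacheManager manager = new DefaultCacheManager("infinispan.xml");
        try {
            Cache<String, String> cache = manager.getCache("distributedCache");

            cache.put("K", "value");

            // With asynchronous communication, this return value could not be trusted.
            String previous = cache.remove("K");
            System.out.println("Previous value: " + previous);
        } finally {
            manager.stop();
        }
    }
}
```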
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/about_synchronous_and_asynchronous_distribution
Chapter 38. Unregistering from Red Hat Subscription Management Services
Chapter 38. Unregistering from Red Hat Subscription Management Services A system can be registered with only one subscription service. If you need to change which service your system is registered with or need to delete the registration entirely, the method to unregister depends on which type of subscription service the system was originally registered with. 38.1. Systems Registered with Red Hat Subscription Management Several different subscription services use the same certificate-based framework to identify systems, installed products, and attached subscriptions. These services are Customer Portal Subscription Management (hosted), Subscription Asset Manager (on-premise subscription service), and CloudForms System Engine (on-premise subscription and content delivery services). These are all part of Red Hat Subscription Management . For all services within Red Hat Subscription Management, the systems are managed with the Red Hat Subscription Manager client tools. To unregister a system registered with a Red Hat Subscription Management server, use the unregister command. Note This command must be run as root.
[ "subscription-manager unregister --username=name" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/ch-deregister_rhn_entitlement
Chapter 5. About the Migration Toolkit for Containers
Chapter 5. About the Migration Toolkit for Containers The Migration Toolkit for Containers (MTC) enables you to migrate stateful application workloads from OpenShift Container Platform 3 to 4.13 at the granularity of a namespace. Important Before you begin your migration, be sure to review the differences between OpenShift Container Platform 3 and 4 . MTC provides a web console and an API, based on Kubernetes custom resources, to help you control the migration and minimize application downtime. The MTC console is installed on the target cluster by default. You can configure the Migration Toolkit for Containers Operator to install the console on an OpenShift Container Platform 3 source cluster or on a remote cluster . MTC supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider. The service catalog is deprecated in OpenShift Container Platform 4. You can migrate workload resources provisioned with the service catalog from OpenShift Container Platform 3 to 4 but you cannot perform service catalog actions such as provision , deprovision , or update on these workloads after migration. The MTC console displays a message if the service catalog resources cannot be migrated. 5.1. Terminology Table 5.1. MTC terminology Term Definition Source cluster Cluster from which the applications are migrated. Destination cluster [1] Cluster to which the applications are migrated. Replication repository Object storage used for copying images, volumes, and Kubernetes objects during indirect migration or for Kubernetes objects during direct volume migration or direct image migration. The replication repository must be accessible to all clusters. Host cluster Cluster on which the migration-controller pod and the web console are running. The host cluster is usually the destination cluster but this is not required. The host cluster does not require an exposed registry route for direct image migration. Remote cluster A remote cluster is usually the source cluster but this is not required. A remote cluster requires a Secret custom resource that contains the migration-controller service account token. A remote cluster requires an exposed secure registry route for direct image migration. Indirect migration Images, volumes, and Kubernetes objects are copied from the source cluster to the replication repository and then from the replication repository to the destination cluster. Direct volume migration Persistent volumes are copied directly from the source cluster to the destination cluster. Direct image migration Images are copied directly from the source cluster to the destination cluster. Stage migration Data is copied to the destination cluster without stopping the application. Running a stage migration multiple times reduces the duration of the cutover migration. Cutover migration The application is stopped on the source cluster and its resources are migrated to the destination cluster. State migration Application state is migrated by copying specific persistent volume claims to the destination cluster. Rollback migration Rollback migration rolls back a completed migration. 1 Called the target cluster in the MTC web console. 5.2. 
MTC workflow You can migrate Kubernetes resources, persistent volume data, and internal container images to OpenShift Container Platform 4.13 by using the Migration Toolkit for Containers (MTC) web console or the Kubernetes API. MTC migrates the following resources: A namespace specified in a migration plan. Namespace-scoped resources: When the MTC migrates a namespace, it migrates all the objects and resources associated with that namespace, such as services or pods. Additionally, if a resource that exists in the namespace but not at the cluster level depends on a resource that exists at the cluster level, the MTC migrates both resources. For example, a security context constraint (SCC) is a resource that exists at the cluster level and a service account (SA) is a resource that exists at the namespace level. If an SA exists in a namespace that the MTC migrates, the MTC automatically locates any SCCs that are linked to the SA and also migrates those SCCs. Similarly, the MTC migrates persistent volumes that are linked to the persistent volume claims of the namespace. Note Cluster-scoped resources might have to be migrated manually, depending on the resource. Custom resources (CRs) and custom resource definitions (CRDs): MTC automatically migrates CRs and CRDs at the namespace level. Migrating an application with the MTC web console involves the following steps: Install the Migration Toolkit for Containers Operator on all clusters. You can install the Migration Toolkit for Containers Operator in a restricted environment with limited or no internet access. The source and target clusters must have network access to each other and to a mirror registry. Configure the replication repository, an intermediate object storage that MTC uses to migrate data. The source and target clusters must have network access to the replication repository during migration. If you are using a proxy server, you must configure it to allow network traffic between the replication repository and the clusters. Add the source cluster to the MTC web console. Add the replication repository to the MTC web console. Create a migration plan, with one of the following data migration options: Copy : MTC copies the data from the source cluster to the replication repository, and from the replication repository to the target cluster. Note If you are using direct image migration or direct volume migration, the images or volumes are copied directly from the source cluster to the target cluster. Move : MTC unmounts a remote volume, for example, NFS, from the source cluster, creates a PV resource on the target cluster pointing to the remote volume, and then mounts the remote volume on the target cluster. Applications running on the target cluster use the same remote volume that the source cluster was using. The remote volume must be accessible to the source and target clusters. Note Although the replication repository does not appear in this diagram, it is required for migration. Run the migration plan, with one of the following options: Stage copies data to the target cluster without stopping the application. A stage migration can be run multiple times so that most of the data is copied to the target before migration. Running one or more stage migrations reduces the duration of the cutover migration. Cutover stops the application on the source cluster and moves the resources to the target cluster. Optional: You can clear the Halt transactions on the source cluster during migration checkbox. 5.3. 
About data copy methods The Migration Toolkit for Containers (MTC) supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider. 5.3.1. File system copy method MTC copies data files from the source cluster to the replication repository, and from there to the target cluster. The file system copy method uses Restic for indirect migration or Rsync for direct volume migration. Table 5.2. File system copy method summary Benefits Limitations Clusters can have different storage classes. Supported for all S3 storage providers. Optional data verification with checksum. Supports direct volume migration, which significantly increases performance. Slower than the snapshot copy method. Optional data verification significantly reduces performance. Note The Restic and Rsync PV migration assumes that the PVs supported are only volumeMode=filesystem . Using volumeMode=Block for file system migration is not supported. 5.3.2. Snapshot copy method MTC copies a snapshot of the source cluster data to the replication repository of a cloud provider. The data is restored on the target cluster. The snapshot copy method can be used with Amazon Web Services, Google Cloud Provider, and Microsoft Azure. Table 5.3. Snapshot copy method summary Benefits Limitations Faster than the file system copy method. Cloud provider must support snapshots. Clusters must be on the same cloud provider. Clusters must be in the same location or region. Clusters must have the same storage class. Storage class must be compatible with snapshots. Does not support direct volume migration. 5.4. Direct volume migration and direct image migration You can use direct image migration (DIM) and direct volume migration (DVM) to migrate images and data directly from the source cluster to the target cluster. If you run DVM with nodes that are in different availability zones, the migration might fail because the migrated pods cannot access the persistent volume claim. DIM and DVM have significant performance benefits because the intermediate steps of backing up files from the source cluster to the replication repository and restoring files from the replication repository to the target cluster are skipped. The data is transferred with Rsync . DIM and DVM have additional prerequisites.
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/migrating_from_version_3_to_4/about-mtc-3-4
Appendix A. Revision History
Appendix A. Revision History Revision History Revision 2-7.400 2013-10-31 Rudiger Landmann Rebuild with publican 4.0.0 Revision 2-7 2012-07-18 Anthony Towns Rebuild for Publican 3.0 Revision 1.0-0 Tue Sep 23 2008 Don Domingo migrated to new automated build system
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/appe-publican-revision_history
Chapter 3. Extensions to JDBC
Chapter 3. Extensions to JDBC 3.1. Prepared Statements JBoss Data Virtualization provides org.teiid.jdbc.TeiidPreparedStatement , a custom interface for the standard java.sql.PreparedStatement , and implementations org.teiid.jdbc.CallableStatementImpl and org.teiid.jdbc.PreparedStatementImpl . Prepared statements can be important in speeding up common statement execution, since they allow the server to skip parsing, resolving, and planning of the statement. The following points should be considered when using prepared statements: It is not necessary to pool client-side JBoss Data Virtualization prepared statements, because JBoss Data Virtualization performs plan caching on the server side. The number of cached plans is configurable. The plans are purged in order of least recently used (LRU). Cached plans are not distributed through a cluster. A new plan must be created for each cluster member. Plans are cached for the entire VDB or for just a particular session. The scope of a plan is detected automatically based upon the functions evaluated during its planning process. Runtime updates of costing information do not yet cause re-planning. At this time only session-scoped temporary table or internally materialized tables update their costing information. Stored procedures executed through a callable statement have their plans cached in the same way as a prepared statement. Bind variable types in function signatures, for example where t.col = abs(?) , can be determined if the function has only one signature or if the function is used in a predicate where the return type can be determined. In more complex situations it may be necessary to add a type hint with a cast or convert, for example upper(convert(?, string)) .
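The following Java sketch shows a standard client-side prepared statement against JBoss Data Virtualization. The connection URL, VDB name, credentials, and the table and column names are illustrative assumptions rather than values from this guide, and the convert(?, string) call demonstrates the type hint mentioned above for cases where the bind variable type cannot be inferred.

```java
// Minimal sketch: a prepared statement whose plan is cached server-side, so
// repeated executions skip parsing, resolving, and planning.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class PreparedStatementExample {
    public static void main(String[] args) throws Exception {
        // Assumed Teiid JDBC URL format, host, and VDB name -- adjust for your deployment.
        String url = "jdbc:teiid:MyVDB@mm://dv-host:31000";
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT name FROM customers WHERE upper(convert(?, string)) = code")) {

            ps.setString(1, "abc123");

            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("name"));
                }
            }
        }
    }
}
```

Because the plan cache is server-side, there is no need to pool these statements on the client; re-preparing the same SQL text in another session of the same VDB can still reuse a VDB-scoped plan.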
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_1_client_development/chap-extensions_to_jdbc
Chapter 66. ListenerAddress schema reference
Chapter 66. ListenerAddress schema reference Used in: ListenerStatus Property Description host The DNS name or IP address of the Kafka bootstrap service. string port The port of the Kafka bootstrap service. integer
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-listeneraddress-reference
Chapter 2. FIPS settings in Red Hat build of OpenJDK 17
Chapter 2. FIPS settings in Red Hat build of OpenJDK 17 At startup, Red Hat build of OpenJDK 17 checks if the system FIPS policy is enabled. If this policy is enabled, Red Hat build of OpenJDK 17 performs a series of automatic configurations that are intended to help Java applications to comply with FIPS requirements. These automatic configurations include the following actions: Installing a restricted list of security providers that contains the FIPS-certified Network Security Services (NSS) software token module for cryptographic operations Enforcing the Red Hat Enterprise Linux (RHEL) FIPS crypto-policy for Java that limits the algorithms and parameters available Note If FIPS mode is enabled in the system while a JVM instance is running, the JVM instance must be restarted to allow changes to take effect. You can configure Red Hat build of OpenJDK 17 to bypass the described FIPS automation. For example, you might want to achieve FIPS compliance through a Hardware Security Module (HSM) instead of the NSS software token module. You can specify FIPS configurations by using system or security properties. To better understand FIPS properties, you must understand the following JDK property classes: System properties are JVM arguments prefixed with -D , which generally take the form of -Dproperty.name=property.value . Privileged access is not required to pass any of these values. Only the launched JVM is affected by the configuration, and persistence depends on the existence of a launcher script. UTF-8 encoded values are valid for system properties. Security properties are available in $JRE_HOME/conf/security/java.security or in the file that the java.security.properties system property points to. Privileged access is required to modify values in the $JRE_HOME/conf/security/java.security file. Any modification to this file persists and affects all instances of the same Red Hat build of OpenJDK 17 deployment. Non-Basic Latin Unicode characters must be encoded with \uXXXX . When system and security properties have the same name and are set to different values, the system property takes precedence. Depending on their configuration, properties might affect other properties with different names. For more information about security properties and their default values, see the java.security file. The following list details properties that affect the FIPS configuration for Red Hat build of OpenJDK 17: Property Type Default value Description security.useSystemPropertiesFile Security true When set to false , this property disables the FIPS automation, which includes global crypto-policies alignment. java.security.disableSystemPropertiesFile System false When set to true , this property disables the FIPS automation, which includes global crypto-policies alignment. This has the same effect as a security.useSystemPropertiesFile=false security property. If both properties are set to different behaviors, java.security.disableSystemPropertiesFile takes precedence. com.redhat.fips System true When set to false , this property disables the FIPS automation while still enforcing the FIPS crypto-policy. If any of the preceding properties are set to disable the FIPS automation, this property has no effect. Crypto-policies are a prerequisite for FIPS automation. fips.keystore.type Security PKCS12 This property sets the default keystore type when Red Hat build of OpenJDK 17 is in FIPS mode. Supported values are PKCS12 and PKCS11 .
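The following Java sketch is a small diagnostic that prints the properties listed above and the installed security providers, which makes the effect of the FIPS automation visible. The class name and output format are illustrative only; running it with and without -Djava.security.disableSystemPropertiesFile=true shows how the provider list changes.

```java
// Minimal diagnostic sketch: inspect FIPS-related properties and the provider list.
import java.security.Provider;
import java.security.Security;

public class FipsDiagnostics {
    public static void main(String[] args) {
        // System property (JVM argument) versus security properties (java.security file).
        System.out.println("com.redhat.fips (system)                    = "
                + System.getProperty("com.redhat.fips"));
        System.out.println("security.useSystemPropertiesFile (security) = "
                + Security.getProperty("security.useSystemPropertiesFile"));
        System.out.println("fips.keystore.type (security)               = "
                + Security.getProperty("fips.keystore.type"));

        System.out.println("Installed security providers:");
        for (Provider p : Security.getProviders()) {
            System.out.println("  " + p.getName() + " " + p.getVersionStr());
        }
    }
}
```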
In addition to the previously described settings, specific configurations can be applied to use NSS DB keystores in FIPS mode. These keystores are handled by the SunPKCS11 security provider and the NSS software token, which is the security provider's PKCS#11 back end. The following list details the NSS DB FIPS properties for Red Hat build of OpenJDK 17: Property Type Default value Description fips.nssdb.path System or Security sql:/etc/pki/nssdb File-system path that points to the NSS DB location. The syntax for this property is identical to the nssSecmodDirectory attribute available in the SunPKCS11 NSS configuration file. The property allows an sql: prefix to indicate that the referred NSS DB is of SQLite type. fips.nssdb.pin System or Security pin: (empty PIN) PIN (password) for the NSS DB that fips.nssdb.path points to. You can use this property to pass the NSS DB PIN in one of the following forms: pin:<value> In this situation, <value> is a clear text PIN value (for example, pin:1234abc ). env:<value> In this situation, <value> is an environment variable that contains the PIN value (for example, env:NSSDB_PIN_VAR ). file:<value> In this situation, <value> is the path to a UTF-8 encoded file that contains the PIN value in its first line (for example, file:/path/to/pin.txt ). The pin:<value> option accommodates both cases in which the PIN value is passed as a JVM argument or programmatically through a system property. Programmatic setting of the PIN value provides flexibility for applications to decide how to obtain the PIN. The file:<value> option is compatible with NSS modutil -pwfile and -newpwfile arguments, which are used for an NSS DB PIN change. Note If a cryptographic operation requires NSS DB authentication and the status is not authenticated, Red Hat build of OpenJDK 17 performs an implicit login with this PIN value. An application can perform an explicit login by invoking KeyStore::load before any cryptographic operation. Important Perform a security assessment, so that you can decide on a configuration that protects the integrity and confidentiality of the stored keys and certificates. This assessment should consider threats, contextual information, and other security measures in place, such as operating system user isolation and file-system permissions. For example, default configuration values might not be appropriate for an application storing keys and running in a multi-user environment. Use the modutil tool in RHEL to create and manage NSS DB keystores, and use certutil or keytool to import certificates and keys. Additional resources For more information about enabling FIPS mode, see Switching the system to FIPS mode .
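As noted above, an application can perform an explicit NSS DB login by invoking KeyStore::load before any cryptographic operation. The following Java sketch shows that call; it assumes FIPS mode is active, so the PKCS11 keystore type is backed by the SunPKCS11 provider and the NSS software token configured through fips.nssdb.path. Reading the PIN from the NSSDB_PIN_VAR environment variable mirrors the fips.nssdb.pin=env:NSSDB_PIN_VAR form, and the variable name is illustrative.

```java
// Minimal sketch: explicit NSS DB login through KeyStore::load in FIPS mode.
import java.security.KeyStore;

public class NssDbLogin {
    public static void main(String[] args) throws Exception {
        String pin = System.getenv("NSSDB_PIN_VAR");
        char[] password = (pin == null) ? new char[0] : pin.toCharArray();

        // Loading the PKCS#11 keystore authenticates against the NSS DB before
        // any other cryptographic operation runs.
        KeyStore keyStore = KeyStore.getInstance("PKCS11");
        keyStore.load(null, password);

        System.out.println("NSS DB keystore loaded, entries: " + keyStore.size());
    }
}
```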
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/configuring_red_hat_build_of_openjdk_17_on_rhel_with_fips/fips_settings
Chapter 5. Changing the update approval strategy
Chapter 5. Changing the update approval strategy To ensure that the storage system gets updated automatically when a new update is available in the same channel, we recommend keeping the update approval strategy set to Automatic . Changing the update approval strategy to Manual requires manual approval for each upgrade. Procedure Navigate to Operators Installed Operators . Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Click the OpenShift Data Foundation operator name. Go to the Subscription tab. Click the pencil icon to change the Update approval . Select the update approval strategy and click Save . Verification steps Verify that the Update approval shows the newly selected approval strategy below it.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/updating_openshift_data_foundation/changing-the-update-approval-strategy_rhodf
Chapter 2. Customizing permissions by creating user-defined cluster roles for cluster-scoped instances
Chapter 2. Customizing permissions by creating user-defined cluster roles for cluster-scoped instances For the default cluster-scoped instance, the Red Hat OpenShift GitOps Operator grants additional permissions for managing certain cluster-scoped resources. Consequently, as a cluster administrator, when you deploy an Argo CD as a cluster-scoped instance, the Operator creates additional cluster roles and cluster role bindings for the GitOps control plane components. These cluster roles and cluster role bindings provide the additional permissions that Argo CD requires to operate at the cluster level. If you do not want the cluster-scoped instance to have all of the Operator-given permissions and choose to add or remove permissions to cluster-wide resources, you must first disable the creation of the default cluster roles for the cluster-scoped instance. Then, you can customize permissions for the following cluster-scoped instances: Default ArgoCD instance (default cluster-scoped instance) User-defined cluster-scoped Argo CD instance This guide provides instructions with examples to help you create a user-defined cluster-scoped Argo CD instance, deploy an Argo CD application in your defined namespace that contains custom configurations for your cluster, disable the creation of the default cluster roles for the cluster-scoped instance, and customize permissions for user-defined cluster-scoped instances by creating new cluster roles and cluster role bindings for the GitOps control plane components. Note As a developer, if you are creating an Argo CD application and deploying cluster-wide resources, ensure that your cluster administrator grants the necessary permissions to them. Otherwise, after the Argo CD reconciliation, you will see an authentication error message in the application's Status field similar to the following example: Example authentication error message persistentvolumes is forbidden: User "system:serviceaccount:gitops-demo:argocd-argocd-application-controller" cannot create resource "persistentvolumes" in API group "" at the cluster scope. 2.1. Prerequisites You have installed Red Hat OpenShift GitOps 1.13.0 or a later version on your OpenShift Container Platform cluster. You have installed the OpenShift CLI ( oc ). You have installed the Red Hat OpenShift GitOps argocd CLI. You have installed a cluster-scoped Argo CD instance in your defined namespace. For example, spring-petclinic namespace. You have validated that the user-defined cluster-scoped instance is configured with the cluster roles and cluster role bindings for the following components: Argo CD Application Controller Argo CD server Argo CD ApplicationSet Controller (provided the ApplicationSet Controller is created) You have deployed a cluster-configs Argo CD application with the customclusterrole path in the spring-petclinic namespace and created the test-gitops-ns namespace and test-gitops-pv persistent volume resources. Note The cluster-configs Argo CD application must be managed by a user-defined cluster-scoped instance with the following parameters set: The selfHeal field value set to true The syncPolicy field value set to automated The Label field set to the app.kubernetes.io/part-of=argocd value The Label field set to the argocd.argoproj.io/managed-by=<user_defined_namespace> value so that the Argo CD instance in your defined namespace can manage your namespace The Label field set to the app.kubernetes.io/name=<user_defined_argocd_instance> value 2.2. 
Disabling the creation of the default cluster roles for the cluster-scoped instance To add or remove permissions to cluster-wide resources, as needed, you must disable the creation of the default cluster roles for the cluster-scoped instance by editing the YAML file of the Argo CD custom resource (CR). Procedure In the Argo CD CR, set the value of the .spec.defaultClusterScopedRoleDisabled field to true : Example Argo CD CR apiVersion: argoproj.io/v1beta1 kind: ArgoCD metadata: name: example 1 namespace: spring-petclinic 2 # ... spec: defaultClusterScopedRoleDisabled: true 3 # ... 1 The name of the cluster-scoped instance. 2 The namespace where you want to run the cluster-scoped instance. 3 The flag value that disables the creation of the default cluster roles for the cluster-scoped instance. If you want the Operator to recreate the default cluster roles and cluster role bindings for the cluster-scoped instance, set the field value to false . Sample output argocd.argoproj.io/example configured Verify that the Red Hat OpenShift GitOps Operator has deleted the default cluster roles and cluster role bindings for the GitOps control plane components by running the following commands: $ oc get ClusterRoles/<argocd_name>-<argocd_namespace>-<control_plane_component> $ oc get ClusterRoleBindings/<argocd_name>-<argocd_namespace>-<control_plane_component> Sample output No resources found The default cluster roles and cluster role bindings for the cluster-scoped instance are not created. As a cluster administrator, you can now create and customize permissions for cluster-scoped instances by creating new cluster roles and cluster role bindings for the GitOps control plane components. Additional resources Installing a user-defined Argo CD instance 2.3. Customizing permissions for cluster-scoped instances As a cluster administrator, to customize permissions for cluster-scoped instances, you must create new cluster roles and cluster role bindings for the GitOps control plane components. For example purposes, the following instructions focus only on user-defined cluster-scoped instances. Procedure Open the Administrator perspective of the web console and go to User Management Roles Create Role . Use the following ClusterRole YAML template to add rules to specify the additional permissions. Example cluster role YAML template apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: example-spring-petclinic-argocd-application-controller 1 rules: - verbs: - get - list - watch apiGroups: - '*' resources: - '*' - verbs: - '*' apiGroups: - '' resources: 2 - namespaces - persistentvolumes 1 The name of the cluster role according to the <argocd_name>-<argocd_namespace>-<control_plane_component> naming convention. 2 The resources to which you want to grant permissions at the cluster level. Click Create to add the cluster role. Find the service account used by the control plane component you are customizing permissions for, by performing the following steps: Go to Workloads Pods . From the Project list, select the project where the user-defined cluster-scoped instance is installed. Click the pod of the control plane component and go to the YAML tab. Find the spec.ServiceAccount field and note the service account. Go to User Management RoleBindings Create binding . Click Create binding . Select Binding type as Cluster-wide role binding (ClusterRoleBinding) . Enter a unique value for RoleBinding name by following the <argocd_name>-<argocd_namespace>-<control_plane_component> naming convention.
Select the newly created cluster role from the drop-down list for Role name . Select the Subject as ServiceAccount and provide the Subject namespace and name . Subject namespace : spring-petclinic Subject name : example-argocd-application-controller Note For Subject name , ensure that the value you configure is the same as the value of the spec.ServiceAccount field of the control plane component you are customizing permissions for. Click Create . You have created the required permissions for the control plane component's service account and namespace. The YAML file for the ClusterRoleBinding object looks similar to the following example: Example YAML file for a cluster role binding kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: example-spring-petclinic-argocd-application-controller subjects: - kind: ServiceAccount name: example-argocd-application-controller namespace: spring-petclinic roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: example-spring-petclinic-argocd-application-controller 2.4. Additional resources Adding permissions for cluster configuration Configuring common cluster roles by specifying user-defined cluster roles for namespace-scoped instances Customizing permissions by creating aggregated cluster roles
[ "persistentvolumes is forbidden: User \"system:serviceaccount:gitops-demo:argocd-argocd-application-controller\" cannot create resource \"persistentvolumes\" in API group \"\" at the cluster scope.", "apiVersion: argoproj.io/v1beta1 kind: ArgoCD metadata: name: example 1 namespace: spring-petclinic 2 spec: defaultClusterScopedRoleDisabled: true 3", "argocd.argoproj.io/example configured", "oc get ClusterRoles/<argocd_name>-<argocd_namespace>-<control_plane_component>", "oc get ClusterRoleBindings/<argocd_name>-<argocd_namespace>-<control_plane_component>", "No resources found", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: example-spring-petclinic-argocd-application-controller 1 rules: - verbs: - get - list - watch apiGroups: - '*' resources: - '*' - verbs: - '*' apiGroups: - '' resources: 2 - namespaces - persistentvolumes", "kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: example-spring-petclinic-argocd-application-controller subjects: - kind: ServiceAccount name: example-argocd-application-controller namespace: spring-petclinic roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: example-spring-petclinic-argocd-application-controller" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_gitops/1.15/html/declarative_cluster_configuration/customizing-permissions-by-creating-user-defined-cluster-roles-for-cluster-scoped-instances
probe::stap.pass5.end
probe::stap.pass5.end Name probe::stap.pass5.end - Finished stap pass5 (running the instrumentation) Synopsis stap.pass5.end Values session the systemtap_session variable s Description pass5.end fires just before the cleanup label
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-stap-pass5-end
4.2. Finish Configuring the Diskless Environment
4.2. Finish Configuring the Diskless Environment To use the graphical version of the Network Booting Tool , you must be running the X Window System, have root privileges, and have the system-config-netboot RPM package installed. To start the Network Booting Tool from the desktop, go to Applications (the main menu on the panel) => System Settings => Server Settings => Network Booting Service . Or, type the command system-config-netboot at a shell prompt (for example, in an XTerm or a GNOME terminal ). If starting the Network Booting Tool for the first time, select Diskless from the First Time Druid . Otherwise, select Configure => Diskless from the pull-down menu, and then click Add . A wizard appears to step you through the process: Click Forward on the first page. On the Diskless Identifier page, enter a Name and Description for the diskless environment. Click Forward . Enter the IP address or domain name of the NFS server configured in Section 4.1, "Configuring the NFS Server" as well as the directory exported as the diskless environment. Click Forward . The kernel versions installed in the diskless environment are listed. Select the kernel version to boot on the diskless system. Click Apply to finish the configuration. After clicking Apply , the diskless kernel and image file are created based on the kernel selected. They are copied to the PXE boot directory /tftpboot/linux-install/ <os-identifier> / . The directory snapshot/ is created in the same directory as the root/ directory (for example, /diskless/i386/RHEL4-AS/snapshot/ ) with a file called files in it. This file contains a list of files and directories that must be read/write for each diskless system. Do not modify this file. If additional entries must be added to the list, create a files.custom file in the same directory as the files file, and add each additional file or directory on a separate line.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/diskless_environments-finish_configuring_the_diskless_environment
Chapter 11. SelfSubjectAccessReview [authorization.k8s.io/v1]
Chapter 11. SelfSubjectAccessReview [authorization.k8s.io/v1] Description SelfSubjectAccessReview checks whether or the current user can perform an action. Not filling in a spec.namespace means "in all namespaces". Self is a special case, because users should always be able to check whether they can perform an action Type object Required spec 11.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object SelfSubjectAccessReviewSpec is a description of the access request. Exactly one of ResourceAuthorizationAttributes and NonResourceAuthorizationAttributes must be set status object SubjectAccessReviewStatus 11.1.1. .spec Description SelfSubjectAccessReviewSpec is a description of the access request. Exactly one of ResourceAuthorizationAttributes and NonResourceAuthorizationAttributes must be set Type object Property Type Description nonResourceAttributes object NonResourceAttributes includes the authorization attributes available for non-resource requests to the Authorizer interface resourceAttributes object ResourceAttributes includes the authorization attributes available for resource requests to the Authorizer interface 11.1.2. .spec.nonResourceAttributes Description NonResourceAttributes includes the authorization attributes available for non-resource requests to the Authorizer interface Type object Property Type Description path string Path is the URL path of the request verb string Verb is the standard HTTP verb 11.1.3. .spec.resourceAttributes Description ResourceAttributes includes the authorization attributes available for resource requests to the Authorizer interface Type object Property Type Description group string Group is the API Group of the Resource. "*" means all. name string Name is the name of the resource being requested for a "get" or deleted for a "delete". "" (empty) means all. namespace string Namespace is the namespace of the action being requested. Currently, there is no distinction between no namespace and all namespaces "" (empty) is defaulted for LocalSubjectAccessReviews "" (empty) is empty for cluster-scoped resources "" (empty) means "all" for namespace scoped resources from a SubjectAccessReview or SelfSubjectAccessReview resource string Resource is one of the existing resource types. "*" means all. subresource string Subresource is one of the existing resource types. "" means none. verb string Verb is a kubernetes resource API verb, like: get, list, watch, create, update, delete, proxy. "*" means all. version string Version is the API Version of the Resource. "*" means all. 11.1.4. .status Description SubjectAccessReviewStatus Type object Required allowed Property Type Description allowed boolean Allowed is required. True if the action would be allowed, false otherwise. denied boolean Denied is optional. 
True if the action would be denied, otherwise false. If both allowed is false and denied is false, then the authorizer has no opinion on whether to authorize the action. Denied may not be true if Allowed is true. evaluationError string EvaluationError is an indication that some error occurred during the authorization check. It is entirely possible to get an error and be able to continue determine authorization status in spite of it. For instance, RBAC can be missing a role, but enough roles are still present and bound to reason about the request. reason string Reason is optional. It indicates why a request was allowed or denied. 11.2. API endpoints The following API endpoints are available: /apis/authorization.k8s.io/v1/selfsubjectaccessreviews POST : create a SelfSubjectAccessReview 11.2.1. /apis/authorization.k8s.io/v1/selfsubjectaccessreviews Table 11.1. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. pretty string If 'true', then the output is pretty printed. HTTP method POST Description create a SelfSubjectAccessReview Table 11.2. Body parameters Parameter Type Description body SelfSubjectAccessReview schema Table 11.3. HTTP responses HTTP code Reponse body 200 - OK SelfSubjectAccessReview schema 201 - Created SelfSubjectAccessReview schema 202 - Accepted SelfSubjectAccessReview schema 401 - Unauthorized Empty
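The following Java sketch creates a SelfSubjectAccessReview by POSTing to the endpoint listed above with java.net.http. The API server URL, bearer token, and resource attributes are placeholder assumptions, and the example assumes the API server certificate is already trusted by the default JVM truststore; it is a sketch of the raw API call, not of any particular client library.

```java
// Minimal sketch: ask whether the current user can list pods in the "default" namespace.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SelfSubjectAccessReviewExample {
    public static void main(String[] args) throws Exception {
        String apiServer = "https://api.example.com:6443";   // placeholder API server URL
        String token = System.getenv("K8S_BEARER_TOKEN");    // placeholder bearer token

        String body = """
                {
                  "apiVersion": "authorization.k8s.io/v1",
                  "kind": "SelfSubjectAccessReview",
                  "spec": {
                    "resourceAttributes": {
                      "group": "",
                      "resource": "pods",
                      "verb": "list",
                      "namespace": "default"
                    }
                  }
                }""";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(apiServer + "/apis/authorization.k8s.io/v1/selfsubjectaccessreviews"))
                .header("Authorization", "Bearer " + token)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The returned object's status.allowed field states whether the action is permitted.
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```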
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/authorization_apis/selfsubjectaccessreview-authorization-k8s-io-v1
1.5. Cluster Configuration Considerations
1.5. Cluster Configuration Considerations When configuring a Red Hat High Availability Add-On cluster, you must take the following considerations into account: Red Hat does not support cluster deployments greater than 32 nodes for RHEL 7.7 (and later). It is possible, however, to scale beyond that limit with remote nodes running the pacemaker_remote service. For information on the pacemaker_remote service, see Section 9.4, "The pacemaker_remote Service" . The use of Dynamic Host Configuration Protocol (DHCP) for obtaining an IP address on a network interface that is utilized by the corosync daemons is not supported. The DHCP client can periodically remove and re-add an IP address to its assigned interface during address renewal. This will result in corosync detecting a connection failure, which will result in fencing activity from any other nodes in the cluster using corosync for heartbeat connectivity.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/s1-configconsider-haar
probe::stap.cache_add_mod
probe::stap.cache_add_mod Name probe::stap.cache_add_mod - Adding kernel instrumentation module to cache Synopsis stap.cache_add_mod Values dest_path the path the .ko file is going to (incl filename) source_path the path the .ko file is coming from (incl filename) Description Fires just before the file is actually moved. Note: if moving fails, cache_add_src and cache_add_nss will not fire.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-stap-cache-add-mod
Preface
Preface Open Java Development Kit (OpenJDK) is a free and open-source implementation of the Java Platform, Standard Edition (Java SE). Eclipse Temurin is available in four LTS versions: OpenJDK 8u, OpenJDK 11u, OpenJDK 17u, and OpenJDK 21u. Binary files for Eclipse Temurin are available for macOS, Microsoft Windows, and multiple Linux x86 Operating Systems including Red Hat Enterprise Linux and Ubuntu.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/eclipse_temurin_8.0.402_release_notes/pr01
Release notes
Release notes Red Hat OpenStack Platform 17.1 Release details for Red Hat OpenStack Platform 17.1 OpenStack Documentation Team Red Hat Customer Content Services [email protected] Abstract This document outlines the major features, enhancements, and known issues in this release of Red Hat OpenStack Platform (RHOSP).
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/release_notes/index
Chapter 1. Introduction to OpenShift Data Foundation Disaster Recovery
Chapter 1. Introduction to OpenShift Data Foundation Disaster Recovery Disaster recovery (DR) is the ability to recover and continue business-critical applications after natural or human-created disasters. It is a component of the overall business continuance strategy of any major organization and is designed to preserve the continuity of business operations during major adverse events. The OpenShift Data Foundation DR capability enables DR across multiple Red Hat OpenShift Container Platform clusters, and is categorized as follows: Metro-DR Metro-DR ensures business continuity during the unavailability of a data center with no data loss. In the public cloud this is similar to protecting against an Availability Zone failure. Regional-DR Regional-DR ensures business continuity during the unavailability of a geographical region, while accepting a predictable amount of data loss. In the public cloud this is similar to protecting against a region failure. Disaster Recovery with stretch cluster The stretch cluster solution ensures business continuity with no-data-loss disaster recovery protection, using OpenShift Data Foundation based synchronous replication in a single OpenShift cluster stretched across two data centers with low latency and one arbiter node. Zone failure in Metro-DR and region failure in Regional-DR are usually expressed using the terms Recovery Point Objective (RPO) and Recovery Time Objective (RTO) . RPO is a measure of how frequently you take backups or snapshots of persistent data. In practice, the RPO indicates the amount of data that will be lost or need to be reentered after an outage. RTO is the amount of downtime a business can tolerate. The RTO answers the question, "How long can it take for our system to recover after we are notified of a business disruption?" The intent of this guide is to detail the Disaster Recovery steps and commands necessary to fail over an application from one OpenShift Container Platform cluster to another and then relocate the same application to the original primary cluster.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/configuring_openshift_data_foundation_disaster_recovery_for_openshift_workloads/introduction-to-odf-dr-solutions_common
Chapter 1. Release notes for Red Hat build of Quarkus 3.8
Chapter 1. Release notes for Red Hat build of Quarkus 3.8 Release notes provide information about new features, notable technical changes, features in technology preview, bug fixes, known issues, and related advisories for Red Hat build of Quarkus 3.8. Information about upgrading and backward compatibility is also provided to help you make the transition from an earlier release. 1.1. About Red Hat build of Quarkus Red Hat build of Quarkus is a Kubernetes-native Java stack optimized for containers and Red Hat OpenShift Container Platform. Quarkus is designed to work with popular Java standards, frameworks, and libraries such as Eclipse MicroProfile, Eclipse Vert.x, Apache Camel, Apache Kafka, Hibernate ORM with Jakarta Persistence, and RESTEasy Reactive (Jakarta REST). As a developer, you can choose the Java frameworks you want for your Java applications, which you can run in Java Virtual Machine (JVM) mode or compile and run in native mode. Quarkus provides a container-first approach to building Java applications. The container-first approach facilitates the containerization and efficient execution of microservices and functions. For this reason, Quarkus applications have a smaller memory footprint and faster startup times. Quarkus also optimizes the application development process with capabilities such as unified configuration, automatic provisioning of unconfigured services, live coding, and continuous testing that gives you instant feedback on your code changes. 1.2. Differences between the Quarkus community version and Red Hat build of Quarkus As an application developer, you can access two different versions of Quarkus: the Quarkus community version and the productized version, Red Hat build of Quarkus. The following table describes the differences between the Quarkus community version and Red Hat build of Quarkus. Feature Quarkus community version Red Hat build of Quarkus version Description Access to the latest community features Yes No With the Quarkus community version, you can access the latest feature developments. Red Hat does not release Red Hat build of Quarkus to correspond with every version that the community releases. The cadence of Red Hat build of Quarkus feature releases is approximately every six months. Enterprise support from Red Hat No Yes Red Hat provides enterprise support for Red Hat build of Quarkus only. To report issues about the Quarkus community version, see quarkusio/quarkus - Issues . Access to long-term support No Yes The lifecycle for a major release of Red Hat build of Quarkus is divided into two support phases; full support and maintenance support. For information about the product lifecycle, timelines, and support policies of Red Hat build of Quarkus, log in to the Red Hat Customer Portal and see the Product lifecycles and Red Hat build of Quarkus lifecycle and support policies Knowledge Base articles. Common Vulnerabilities and Exposures (CVE) fixes and bug fixes backported to earlier releases No Yes With Red Hat build of Quarkus, selected CVE fixes and bug fixes are regularly backported to supported streams. For more information about maintenance support, see Red Hat build of Quarkus lifecycle and support policies . Tested and verified with Red Hat OpenShift Container Platform and Red Hat Enterprise Linux (RHEL) No Yes Red Hat build of Quarkus is built, tested, and verified with Red Hat OpenShift Container Platform and RHEL. 
Red Hat provides both production and development support for supported configurations and tested integrations according to your subscription agreement. For more information, see Red Hat build of Quarkus supported configurations . Built from source using secure build systems No Yes In Red Hat build of Quarkus, the core platform and all supported extensions are provided by Red Hat using secure software delivery, which means that they are built from source, scanned for security issues, and with verified license usage. Access to support for JDK and Red Hat build of Quarkus Native builder distribution No Yes Red Hat build of Quarkus supports certified OpenJDK builds and certified native executable builders. See admonition below. For more information, see Red Hat build of Quarkus supported configurations . Important Red Hat build of Quarkus supports the building of native Linux executables by using a Red Hat build of Quarkus Native builder image, which is based on Mandrel and distributed by Red Hat. For more information, see Compiling your Quarkus applications to native executables . Building native executables by using Oracle GraalVM Community Edition (CE), Mandrel community edition, or any other distributions of GraalVM is not supported for Red Hat build of Quarkus. 1.3. New features, enhancements, and technical changes This section overviews the new features, enhancements, and technical changes introduced in Red Hat build of Quarkus 3.8. 1.3.1. Core 1.3.1.1. Support for Java 21 Java 21 is now the recommended version, although Java 17 is also supported. 1.3.1.2. Support for Java 11 has been removed With this 3.8 release, support for Java 11, which was deprecated in version 3.2, has been removed. 1.3.1.3. Red Hat build of Quarkus added support for virtual threads The use of Red Hat build of Quarkus with Virtual Threads (VTs) brings the following: Enhances the management of concurrent tasks, improving scalability and resource efficiency. Enhances the imperative programming model by improving resource efficiency with virtual threads, which are inexpensive to block. Simplifies the concurrency model, streamlining code-base maintenance. Reduces thread context switching overhead, resulting in lower latency and higher throughput. Enables better multi-core processor utilization, allowing more concurrent tasks without heavy context-switching penalties. Note Virtual threads are supported only on Java 21 JVMs. For more information, see Virtual Threads section of the Oracle Java Core Libraries Developer Guide and the OpenJDK's JEP 444: Virtual Threads . Virtual thread Limitations: Libraries pinning the carrier thread might delay adoption until the Java ecosystem fully embraces virtual-thread compatibility. Lengthy computations require careful analysis to prevent monopolization of resources. Elasticity of the carrier thread pool could lead to increased memory consumption. The thread-local object polling pattern might significantly impact allocations and memory usage. Virtual threads do not inherently solve thread safety issues, requiring diligent management. 1.3.2. Data 1.3.2.1. Hibernate ORM upgraded to version 6.4 In Red Hat build of Quarkus 3.8, Hibernate Object-Relational Mapping (ORM) is upgraded to version 6.4. For more information, see the following resources: Changes that affect compatibility with earlier versions Hibernate ORM documentation 6.4 Quarkus Using Hibernate ORM and Jakarta Persistence guide 1.3.2.2. 
Hibernate Reactive operating alongside Agroal In Red Hat build of Quarkus 3.8, Hibernate Reactive can co-exist alongside Agroal, which means you can use Flyway or Liquibase in your applications while using Hibernate Reactive as the Object-Relational Mapping (ORM). Note A limitation exists in Red Hat build of Quarkus whereby you cannot have both Hibernate ORM and Hibernate Reactive in the same application. For more information, see the Quarkus Using Hibernate Reactive guide. 1.3.2.3. Hibernate Reactive upgraded to version 2.2 In Red Hat build of Quarkus 3.8, the Hibernate Reactive extension is upgraded to version 2.2, which is compatible with Hibernate ORM 6.4.0. Important The Hibernate Reactive extension is available as a Technology Preview in Red Hat build of Quarkus 3.8. For more information, see the Hibernate Reactive 2.2.0 documentation. 1.3.2.4. Hibernate Search upgraded to version 7.0 In Red Hat build of Quarkus 3.8, Hibernate Search is upgraded to version 7.0. Hibernate Search offers indexing and full-text search capabilities to your Red Hat build of Quarkus applications. Version 7.0 introduces enhancements, new features, and some notable changes to the default configuration for geo-point fields. For more details, see Changes that affect compatibility with earlier versions . To learn more about Hibernate Search 7.0, see the following resources: Quarkus Hibernate Search guide Hibernate Search 7.0.1 reference documentation 1.3.2.5. New OpenSearch Dev Service Red Hat build of Quarkus 3.8 introduces a new OpenSearch Dev Service. When you use Hibernate Search , Dev Services defaults to starting Elasticsearch or OpenSearch based on your Hibernate Search configuration. To configure Dev Services to use OpenSearch, specify the following setting: quarkus.elasticsearch.devservices.distribution=opensearch For more information, see the Configuring the image section of the Quarkus "Dev Services for Elasticsearch" guide. 1.3.3. Observability 1.3.3.1. Customization of Micrometer by using MeterRegistry Red Hat build of Quarkus 3.8 introduces many ways to customize Micrometer by using the new MeterRegistryCustomizer interface implemented as Contexts and Dependency Injection (CDI) beans. You can customize Micrometer in the following ways: By using MeterFilter instances to customize metrics emitted by MeterRegistry instances. The Micrometer extension detects MeterFilter CDI beans and uses them when initializing MeterRegistry instances. By using HttpServerMetricsTagsContributor for server HTTP requests. User code can contribute arbitrary tags based on the details of an HTTP request by providing CDI beans that implement io.quarkus.micrometer.runtime.HttpServerMetricsTagsContributor . By using MeterRegistryCustomizer for arbitrary customizations to meter registries. User code can change the configuration of any MeterRegistry that is activated by providing CDI beans that implement io.quarkus.micrometer.runtime.MeterRegistryCustomizer . For more information, see the Customizing Micrometer section of the Quarkus "Micrometer Metrics" guide. 1.3.3.2. Micrometer @MeterTag supported Micrometer defines two annotations, @Counted and @Timed , which you can add to methods. With Red Hat build of Quarkus 3.8, Micrometer can add the @MeterTag annotation to parameters of methods annotated with @Counted and @Timed . 
The @MeterTag annotation uses the ValueResolver or ValueExpressionResolver resolvers from the io.micrometer.common.annotation package to dynamically assign additional tag values to the method counter or timer. For more information, see the Quarkus Micrometer Metrics guide. 1.3.3.3. Netty metrics supported in Micrometer Red Hat build of Quarkus 3.8 introduces support for the gathering of Netty allocator metrics from the Micrometer metrics library. The quarkus.micrometer.binder.netty.enabled property is introduced, which enables the gathering of Netty metrics if Micrometer support is also enabled. Netty allocator metrics provide insights on memory allocation and usage within your Netty framework, which can help you to gain an understanding of the performance of your Red Hat build of Quarkus applications that use Netty. The following metrics are gathered: Metric Description netty.allocator.memory.used Size, in bytes, of the memory that the allocator uses netty.allocator.memory.pinned Size, in bytes, of the memory that the allocated buffer uses. netty.allocator.pooled.arenas Number of arenas for a pooled allocator netty.allocator.pooled.cache.size Size, in bytes, of the cache for a pooled allocator netty.allocator.pooled.threadlocal.caches Number of ThreadLocal caches for a pooled allocator netty.allocator.pooled.chunk.size Size, in bytes, of memory chunks for a pooled allocator netty.eventexecutor.tasks.pending Number of pending tasks in the event executor For more information about Micrometer, see the Quarkus Micrometer Metrics guide. 1.3.3.4. Replace OkHttp tracing gRPC exporter with Vert.x In Red Hat build of Quarkus 3.8, the OpenTelemetry (OTel) extension, quarkus-opentelemetry is enhanced to replace the default OTel exporter with a Red Hat build of Quarkus implementation built on top of Vert.x. This eliminates the dependency on the OkHttp library. The exporter continues to be automatically wired with Contexts and Dependency Injection (CDI), so the quarkus.otel.traces.exporter property defaults to cdi . For more information, see the Quarkus Using OpenTelemetry guide. 1.3.4. Security 1.3.4.1. Ability to split OIDC session cookies that exceed 4KB With Red Hat build of Quarkus 3.8, you can split an OpenID Connect (OIDC) session cookie into smaller cookies if its content size exceeds 4KB. Typically, a session cookie, which is encrypted by default, comprises a concatenation of three tokens, namely, ID, access, and refresh tokens. If its size is greater than 4KB, some browsers might be unable to handle it. With this update, a session cookie exceeding 4KB in size is automatically split into multiple chunks. For more information, see the Quarkus OIDC authorization code flow mechanism for protecting web applications guide. 1.3.4.2. Create OIDC SecurityIdentity instance after the HTTP request is complete With Red Hat build of Quarkus 3.8, you can create OIDC SecurityIdentity instances for authentication purposes after a HTTP request completes. With this version, the quarkus-oidc extension includes the io.quarkus.oidc.TenantIdentityProvider interface, which you can inject and call to convert a token to a SecurityIdentity instance after an HTTP request completes. For more information, see the following Quarkus resources: OIDC authorization code flow mechanism for protecting web applications guide. Authentication after an HTTP request has completed section of the "OIDC bearer token authentication" guide. 1.3.4.3. 
Customization of OIDC JavaScript request checks Red Hat build of Quarkus 3.8 introduces the OIDC JavaScriptRequestChecker bean that you can use to customize JavaScript request checks. If you use applications (SPAs) and JavaScript APIs such as Fetch or XMLHttpRequest (XHR) with Red Hat build of Quarkus web applications, you must set a header in the browser script to identify the request as a JavaScript request. However, the script engine can also set an engine-specific request header itself. With this update, you can now register a custom io.quarkus.oidc.JavaScriptRequestChecker bean, which informs Red Hat build of Quarkus if the current request is a JavaScript request, thereby helping to avoid the creation of redundant headers. 1.3.4.4. Delayed OIDC JWK resolution now supported Red Hat build of Quarkus 3.8 introduces support for delayed OIDC JSON Web Key (JWK) resolution, and you can now resolve keys the moment a token is available. This release adds the quarkus.oidc.jwks.resolve-early configuration property. By default, this property is set to true , which means that JWK keys are resolved the moment you establish an OIDC provider connection. However, you can set it to false , enabling the delayed resolution of keys simultaneously to token verification. The delayed JWK resolution uses the current token instead of the read-once approach at initialization time. For example, the token might provide information on how to resolve keys correctly. 1.3.4.5. Enhanced Security with mTLS and HTTP Restrictions When mTLS client authentication ( quarkus.http.ssl.client-auth ) is set to required , Red Hat build of Quarkus automatically disables plain HTTP ports to ensure that only secure HTTPS requests are accepted. To enable plain HTTP, configure quarkus.http.ssl.client-auth to request or set both quarkus.http.ssl.client-auth=required and quarkus.http.insecure-requests=enabled . 1.3.4.6. HTTP Permissions and Roles moved to runtime configuration Red Hat build of Quarkus has updated to allow runtime configuration of HTTP Permissions and Roles, enabling flexible security settings across profiles. This resolves the issue of native executables locking to build-time security configurations. Security can now be dynamically adjusted per profile, applicable in both JVM and native modes. 1.3.4.7. Mapping OIDC scope attribute to SecurityIdentity permissions in Bearer token authentication If you use Bearer token authentication in Red Hat build of Quarkus, you can map SecurityIdentity roles from the verified JWT access tokens. Red Hat build of Quarkus 3.8 introduces the ability to map the OIDC scope parameter to permissions on the SecurityIdentity object. For example, you can use @PermissionAllowed("orders_read") to request that JWT tokens have a scope claim with an orders_read value. For more information, see the Quarkus OIDC Bearer token authentication guide. 1.3.4.8. Observing security events by using CDI With Red Hat build of Quarkus 3.8, you can use Context and Dependency Injection (CDI) to observe authentication and authorization security events. 
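As an illustration, a minimal sketch of such an observer bean follows, using standard CDI observer methods and the event types listed below; the bean name and log messages are hypothetical:
import io.quarkus.logging.Log;
import io.quarkus.security.spi.runtime.AuthenticationFailureEvent;
import io.quarkus.security.spi.runtime.AuthorizationSuccessEvent;
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.enterprise.event.Observes;
import jakarta.enterprise.event.ObservesAsync;

@ApplicationScoped
public class SecurityEventObserver {

    // Synchronous observer: runs on the thread that fires the event.
    void onAuthenticationFailure(@Observes AuthenticationFailureEvent event) {
        Log.warnf("Authentication failed: %s", event);
    }

    // Asynchronous observer: notified on a worker thread.
    void onAuthorizationSuccess(@ObservesAsync AuthorizationSuccessEvent event) {
        Log.debugf("Authorization succeeded: %s", event);
    }
}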
The CDI observers can be either synchronous or asynchronous, and reporting of the following security events is supported: io.quarkus.security.spi.runtime.AuthenticationFailureEvent io.quarkus.security.spi.runtime.AuthenticationSuccessEvent io.quarkus.security.spi.runtime.AuthorizationFailureEvent io.quarkus.security.spi.runtime.AuthorizationSuccessEvent io.quarkus.oidc.SecurityEvent For more information, see the Observe security events section of the Quarkus "Security tips and tricks" guide. 1.3.4.9. OIDC authorization code flow nonce supported Red Hat build of Quarkus 3.8 introduces support for an OpenID Connect (OIDC) authorization code flow nonce feature. When the OIDC authorization server issues an ID token in response to an authorization request, the ID token includes a nonce claim that must match the nonce authentication request query parameter. This feature helps to mitigate replay attacks by ensuring that the ID token is returned in response to the original authorization request and is not a replayed response. 1.3.4.10. OIDC request filters supported With Red Hat build of Quarkus 3.8, you can customize OIDC client requests made by either quarkus-oidc-client or quarkus-oidc extensions to update or add new request headers by registering one or more OidcRequestFilter implementations. For example, an OIDC request filter can analyze the request body and add its digest as a new header value. For more information, see the OIDC request filters section of the Quarkus "OIDC authorization code flow mechanism for protecting web applications" guide. 1.3.4.11. New OIDC @TenantFeature annotation introduced to bind OIDC features to tenants In Red Hat build of Quarkus 3.8, a new @TenantFeature annotation is introduced to bind OpenID Connect (OIDC) features to OIDC tenants. The io.quarkus.oidc.Tenant annotation is now used for resolving tenant configuration. 1.3.4.12. OIDC token propagation supported Red Hat build of Quarkus 3.8 introduces support for OIDC token propagation. With this update, Red Hat build of Quarkus endpoints use REST clients to propagate the incoming OIDC access tokens to other secure endpoints that expect access tokens. 1.3.4.13. Role mappings for client certificates Red Hat build of Quarkus 3.8 now supports mapping the Common Name (CN) attribute from a client's X.509 certificate to roles when using the Mutual TLS (mTLS) authentication mechanism. This functionality is activated under specific conditions: If the mTLS authentication mechanism is enabled with either quarkus.http.ssl.client-auth=required or quarkus.http.ssl.client-auth=request The application.properties file references a role mappings file with the quarkus.http.auth.certificate-role-properties property. The role mapping file is expected to have the CN=role1,role2,...,roleN format and to be encoded by using UTF-8. 1.3.4.14. Support for token verification with an inlined certificate chain Red Hat build of Quarkus 3.8 introduces the verification of OIDC bearer access tokens by using the X.509 certificate chain that is inlined in the token. This means that you validate the certificate chain before you extract a public key from the leaf certificate. The leaf certificate refers to an X.509 certificate that is positioned at the end of a certificate chain. To verify the token's signature, you use this public key. 1.3.5. Tooling 1.3.5.1.
Expanded update capability with OpenRewrite quarkus update now supports OpenRewrite recipes for external Red Hat build of Quarkus extensions, expanding its capabilities beyond built-in extensions only. New recipes have been introduced, enhancing migration support for external extensions. Be aware that Red Hat provides development support for using Quarkus development tools, including the Quarkus CLI to prototype, develop, test, and deploy Red Hat build of Quarkus applications. Red Hat does not support using Quarkus development tools in production environments. 1.3.6. Web 1.3.6.1. Enhanced /info endpoint through CDI integration Applications using quarkus-info can now enrich the /info endpoint with additional data through CDI integration. This feature enhances the ability to customize and extend application diagnostics and metadata visibility. Note For more information, see the Quarkus community's CDI integration guide . 1.3.6.2. Improvements to the SSE support in REST Client Reactive With Red Hat build of Quarkus 3.8, the REST Client's Server-Sent Events (SSE) capabilities are enhanced, enabling complete event returns and filtering. These updates and new descriptions in REST Client provide developers with increased control and flexibility in managing real-time data streams. 1.3.6.3. ObjectMapper customization in REST Client Reactive Jackson With Red Hat build of Quarkus 3.8, you can customize the ObjectMapper when using the rest-client-reactive-jackson extension. You can add the custom ObjectMapper , that only the client uses, by using the annotation @ClientObjectMapper . Important For any customization action where you want to inherit the default settings, you must never modify the default object mapper defaultObjectMapper . You must create a copy instead. The defaultObjectMapper is the instance of ObjectMapper that Red Hat build of Quarkus itself configures, makes available as a CDI bean, and which RESTEasy Reactive and REST Client, among other applications, use by default. For more information, see the Customizing the ObjectMapper in REST Client Reactive Jackson section of the Quarkus "Using the REST client" guide. 1.3.6.4. Path parameter support in @TestHTTPResource The @TestHTTPResource annotation now supports path parameters. Validation as a URI string is no longer applied due to non-compliance with the URI format. 1.4. Support and compatibility You can find detailed information about the supported configurations and artifacts that are compatible with Red Hat build of Quarkus 3.8 and the high-level support lifecycle policy on the Red Hat Customer Support portal as follows: For a list of supported configurations, OpenJDK versions, and tested integrations, see Red Hat build of Quarkus Supported configurations . For a list of the supported Maven artifacts, extensions, and BOMs for Red Hat build of Quarkus, see Red Hat build of Quarkus Component details . For general availability, full support, and maintenance support dates for all Red Hat products, see Red Hat Application Services Product Update and Support Policy . 1.4.1. Product updates and support lifecycle policy In Red Hat build of Quarkus, a feature release can be either a major or a minor release that introduces new features or support. Red Hat build of Quarkus release version numbers are directly aligned with the Long-Term Support (LTS) versions of the Quarkus community project . For more information, see the Long-Term Support (LTS) for Quarkus blog post. 
The version numbering of a Red Hat build of Quarkus feature release matches the Quarkus community version on which it is based. Important Red Hat does not release a productized version of Quarkus for every version the community releases. The cadence of the Red Hat build of Quarkus feature releases is about every six months. Red Hat build of Quarkus provides full support for a feature release right up until the release of a subsequent version. When a feature release is superseded by a new version, Red Hat continues to provide a further six months of maintenance support for the release, as outlined in the following support lifecycle chart [Fig. 1]. Figure 1. Feature release cadence and support lifecycle of Red Hat build of Quarkus During the full support phase and maintenance support phase of a release, Red Hat also provides 'service-pack (SP)' updates and 'micro' releases to fix bugs and Common Vulnerabilities and Exposures (CVE). New features in subsequent feature releases of Red Hat build of Quarkus can introduce enhancements, innovations, and changes to dependencies in the underlying technologies or platforms. For a detailed summary of what is new or changed in a successive feature release, see New features, enhancements, and technical changes . While most of the features of Red Hat build of Quarkus continue to work as expected after you upgrade to the latest release, there might be some specific scenarios where you need to change your existing applications or do some extra configuration to your environment or dependencies. Therefore, before upgrading Red Hat build of Quarkus to the latest release, always review the Changes that affect compatibility with earlier versions and Deprecated components and features sections of the release notes. For detailed information about the product lifecycle, timelines, and support policies of Red Hat build of Quarkus, log in to the Red Hat Customer Portal and see the Knowledgebase article, Red Hat build of Quarkus lifecycle and support policies . 1.4.2. Tested and verified environments Red Hat build of Quarkus 3.8 is available on the following versions of Red Hat OpenShift Container Platform: 4.15, 4.12, and Red Hat Enterprise Linux 8.9. For a list of supported configurations, log in to the Red Hat Customer Portal and see the Knowledgebase solution Red Hat build of Quarkus Supported configurations . 1.4.3. Development support Red Hat provides development support for the following Red Hat build of Quarkus features, plugins, extensions, and dependencies: Features Continuous Testing Dev Services Dev UI Local development mode Remote development mode Plugins Maven Protocol Buffers Plugin 1.4.3.1. Development tools Red Hat provides development support for using Quarkus development tools, including the Quarkus CLI and the Maven and Gradle plugins, to prototype, develop, test, and deploy Red Hat build of Quarkus applications. Red Hat does not support using Quarkus development tools in production environments. For more information, see the Red Hat Knowledgebase article Development Support Scope of Coverage . 1.5. Deprecated components and features The components and features listed in this section are deprecated with Red Hat build of Quarkus 3.8. They are included and supported in this release. However, no enhancements will be made to these components and features, and they might be removed in the future. 
For a list of the components and features that are deprecated in this release, log in to the Red Hat Customer Portal and view the Red Hat build of Quarkus Component details page. 1.5.1. Deprecation of DeploymentConfig With Red Hat build of Quarkus 3.8, the DeploymentConfig object, deprecated in OpenShift, is also deprecated in Red Hat build of Quarkus. Now, Deployment is the default and preferred deployment kind for the quarkus-openshift extension. If you redeploy applications that you deployed before by using DeploymentConfig , by default, those applications use Deployment but do not remove the DeploymentConfig . This leads to a deployment of both new and old applications, so, you must remove the DeploymentConfig manually. However, if you want to continue to use DeploymentConfig , it is still possible to do so by explicitly setting quarkus.openshift.deployment-kind to DeploymentConfig . For more information, see Deploying your Red Hat build of Quarkus applications to OpenShift Container Platform . 1.5.2. Deprecation of OpenShift Service Binding Operator The OpenShift Service Binding Operator is deprecated in OpenShift Container Platform (OCP) 4.13 and later and is planned to be removed in a future OCP release. 1.5.3. Deprecation of quarkus-reactive-routes Starting with version 2.13, quarkus-reactive-routes is deprecated and is planned to be removed in a future version. SmallRye JWT no longer contains quarkus-reactive-routes ; thus, its automatic inclusion is discontinued. To maintain functionality, add quarkus-reactive-routes to your build configurations. 1.5.4. Discontinuation of quarkus-test-infinispan-client artifact The quarkus-test-infinispan-client artifact has been discontinued and is no longer part of Red Hat build of Quarkus. This change follows its redundancy, as it was not used outside the Quarkus core repository and had been replaced by Dev Services for Infinispan. 1.6. Technology previews This section lists features and extensions that are now available as a Technology Preview in Red Hat build of Quarkus 3.8. Important Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat recommends that you do not use them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about Red Hat Technology Preview features, see Technology Preview Features Scope . 1.6.1. Hibernate Search management endpoint introduced Red Hat build of Quarkus 3.8 exposes an HTTP management endpoint for Hibernate Search as a Technology Preview feature. With this feature, you can trigger mass indexing of data and other maintenance tasks. By default, this endpoint is not enabled. To enable it, set the following configuration properties to true : quarkus.management.enabled=true quarkus.hibernate-search-orm.management.enabled=true Red Hat build of Quarkus exposes the management endpoint under /q/hibernate-search/reindex on the management interface, which is exposed on port 9000 by default. For more information, see the following resources: Quarkus Hibernate Search guide Quarkus Management interface reference guide 1.6.2. List of extensions that are in technology preview RESTEasy Reactive JAXB, quarkus-resteasy-reactive-jaxb . JAXB serialization support for RESTEasy Reactive. 
This extension is not compatible with the quarkus-resteasy extension, or any of the extensions that depend on it. SmallRye Stork, quarkus-smallrye-stork . SmallRye Stork is a dynamic service discovery and selection framework for locating and selecting service instances. Elasticsearch REST client, quarkus-elasticsearch-rest-client . Connect to an Elasticsearch cluster using the REST low level client. Hibernate Reactive, quarkus-hibernate-reactive . A reactive API for Hibernate ORM, supporting non-blocking database drivers and a reactive style of interaction with the database. MongoDB client, quarkus-mongodb-client . Connect to MongoDB in either imperative or reactive style. Reactive MS SQL client, quarkus-reactive-mssql-client . Connect to the Microsoft SQL Server database using the reactive pattern. Reactive Oracle client, quarkus-reactive-oracle-client . Connect to the Oracle database using the reactive pattern. Apache Kafka Streams, quarkus-kafka-streams . Implement stream processing applications based on Apache Kafka. Kubernetes Service Binding, quarkus-kubernetes-service-binding . Read runtime configuration based on the Kubernetes Service Binding Specification. OpenShift Client, quarkus-openshift-client . Interact with OpenShift and develop OpenShift Operators. OpenID Connect Token Propagation, quarkus-oidc-token-propagation . Use a Jakarta REST Client filter to propagate an incoming Bearer access token or token acquired from Authorization Code Flow as an HTTP Authorization Bearer token. OpenID Connect Token Propagation Reactive, quarkus-oidc-token-propagation-reactive . Use Reactive REST Client to propagate an incoming Bearer access token or a token acquired from Authorization Code Flow as an HTTP Authorization Bearer token. 1.7. Changes that affect compatibility with earlier versions This section describes changes in Red Hat build of Quarkus 3.8 that affect the compatibility of applications built with earlier product versions. Review these breaking changes and take the steps required to ensure that your applications continue functioning after you update them to Red Hat build of Quarkus 3.8. To automate many of these changes, use the quarkus update command to update your projects to the latest Red Hat build of Quarkus version . 1.7.1. Core 1.7.1.1. Changes in Stork load-balancer configuration You can no longer use the configuration names stork."service-name".load-balancer and quarkus.stork."service-name".load-balancer for configuring the Stork load balancer. Instead, use quarkus.stork."service-name".load-balancer.type for configuration settings. 1.7.1.2. Dependency management update for OkHttp and Okio OkHttp and Okio have been removed from the Quarkus Platform BOM, and their versions are no longer enforced, addressing issues related to outdated dependencies. This change affects test framework dependencies and streamlines runtime dependencies. Developers using these dependencies must now specify their versions in build files. Additionally, the quarkus-test-infinispan-client artifact has been removed due to the availability of robust Dev Services support for Infinispan. 1.7.1.3. Java version requirement update Beginning with this version of Red Hat build of Quarkus, support for Java 11, which was deprecated in version 3.2, has been removed. Java 21 is now the recommended version, although Java 17 is also supported. 1.7.1.4.
JAXB limitations with collections in RESTEasy Reactive In Red Hat build of Quarkus, using RESTEasy Reactive with Java Architecture for XML Binding (JAXB) does not support using collections, arrays, and maps as parameters or return types in REST methods. To overcome this limitation of JAXB, encapsulate these types within a class annotated with @XmlRootElement . 1.7.1.5. Mandatory specification of @StaticInitSafe at build time During the static initialization phase, Red Hat build of Quarkus collects the configuration to inject in CDI beans. The collected values are then compared with their runtime initialization counterparts, and if a mismatch is detected, the application startup fails. With Red Hat build of Quarkus 3.8, you can now annotate configuration objects with the @io.quarkus.runtime.annotations.StaticInitSafe annotation to inform users that the injected configuration: is set at build time cannot be changed is safe to be used at runtime, instructing Red Hat build of Quarkus to not fail the startup on configuration mismatch 1.7.1.6. Qute: Isolated execution of tag templates by default User tags in templates are now executed in isolation by default, restricting access to the calling template's context. This update can alter data handling within tag templates, potentially impacting their current functionality. To bypass this isolation and maintain access to the parent context, include _isolated=false or _unisolated in the tag call, for example, # itemDetail item showImage=true _isolated=false . This approach allows tags to access data from the parent context as before. This change minimizes unintended data exposure from the parent context to the tag, enhancing template data integrity. However, it might necessitate updates to existing templates reliant on shared context access, representing a notable change that could affect users unfamiliar with this isolation mechanism. 1.7.1.7. Qute: Resolving type pollution issues ResultNode class is updated to be an abstract class, not an interface, and should not be user-implemented despite being in the public API. The Qute API now limits CompletionStage implementations to java.util.concurrent.CompletableFuture and io.quarkus.qute.CompletedStage by default, a restriction alterable with -Dquarkus.qute.unrestricted-completion-stage-support=true . 1.7.1.8. quarkus-rest-client extensions renamed to quarkus-resteasy-client With Red Hat build of Quarkus 3.8, the following quarkus-rest-client extensions are renamed: Old name New name quarkus-rest-client quarkus-resteasy-client quarkus-rest-client-mutiny quarkus-resteasy-client-mutiny quarkus-rest-client-jackson quarkus-resteasy-client-jackson quarkus-rest-client-jaxb quarkus-resteasy-client-jaxb quarkus-rest-client-jsonb quarkus-resteasy-client-jsonb 1.7.1.9. Removing URI validation when @TestHTTPResource is injected The @TestHTTPResource annotation now supports path parameters. Validation as a URI string is no longer applied due to non-compliance with the URI format. 1.7.1.10. Updates to GraalVM SDK 23.1.2 with dependency adjustments The GraalVM SDK version has been updated to 23.1.2 in Red Hat build of Quarkus 3.8. Developers using extensions requiring GraalVM substitutions should switch from org.graalvm.sdk:graal-sdk to org.graalvm.sdk:nativeimage to access necessary classes. For those that use org.graalvm.js:js , replace this dependency with org.graalvm.polyglot:js-community for the community version. For the enterprise version, replace this dependency with org.graalvm.polyglot:js . 
The adjustment for the graal-sdk is automated with quarkus update . However, changes to the js dependency must be made manually. Even though it is highly unlikely, this change could affect users who depend on: org.graalvm.sdk:collections org.graalvm.sdk:word 1.7.1.11. Various adjustments to QuarkusComponentTest In this release, QuarkusComponentTest has undergone several adjustments. It remains experimental and is not supported by Red Hat build of Quarkus. This experimental status indicates that the API might change at any time, reflecting feedback received. The QuarkusComponentTestExtension is now immutable, requiring programmatic registration through the simplified constructor QuarkusComponentTestExtension(Class...) or the QuarkusComponentTestExtension.builder() method. The test instance lifecycle, either Lifecycle#PER_METHOD (default) or Lifecycle#PER_CLASS , dictates when the CDI container starts and stops; PER_METHOD starts the container before each test and stops it afterward, whereas PER_CLASS starts it before all tests and stops it after all tests. This represents a change from previous versions, where the container always started before and stopped after all tests. 1.7.2. Data 1.7.2.1. Hibernate ORM upgraded to 6.4 In Red Hat build of Quarkus 3.8, Hibernate Object-Relational Mapping (ORM) was upgraded to version 6.4 and introduced the following breaking changes: Compatibility with some older database versions is dropped. For more information about supported versions, see Supported dialects . Numeric literals are now interpreted as defined in Jakarta Persistence 3.2. For more information, see the Hibernate ORM 6.4 migration guide. 1.7.2.2. Hibernate Search upgraded to 7.0 In Red Hat build of Quarkus 3.8, Hibernate Search was upgraded to version 7.0 and introduced the following breaking changes: The values accepted by the quarkus.hibernate-search-orm.coordination.entity-mapping.outbox-event.uuid-type and quarkus.hibernate-search-orm.coordination.entity-mapping.agent.uuid-type configuration properties changed: uuid-binary is deprecated in favor of binary uuid-char is deprecated in favor of char The default value for the quarkus.hibernate-search-orm.elasticsearch.query.shard-failure.ignore property changed from true to false , meaning that Hibernate Search now throws an exception if at least one shard fails during a search operation. To get the previous behavior, set this configuration property to true . Note If you define multiple backends, you must set this configuration property for each Elasticsearch backend. The complement operator (~) in the regular expression predicate was removed with no alternative to replace it. Hibernate Search dependencies no longer have an -orm6 suffix in their artifact ID; for example, applications now depend on the hibernate-search-mapper-orm module instead of hibernate-search-mapper-orm-orm6 . For more information, see the following resources: Hibernate Search documentation Hibernate Search 7.0.0.Final: Migration guide from 6.2 1.7.2.3. SQL Server Dev Services upgraded to 2022-latest Dev Services for SQL Server updated its default image from mcr.microsoft.com/mssql/server:2019-latest to mcr.microsoft.com/mssql/server:2022-latest . Users preferring the previous version can specify an alternative by using the config property detailed in the References section in the Red Hat build of Quarkus "Configure data sources" guide. 1.7.2.4.
Upgrade to Flyway adds additional dependency for Oracle users In Red Hat build of Quarkus 3.8, the Flyway extension is upgraded to Flyway 9.20.0, which delivers an additional dependency, flyway-database-oracle , for Oracle users. Oracle users must update the pom.xml file to include the flyway-database-oracle dependency. To do so, do the following: <dependency> <groupId>org.flywaydb</groupId> <artifactId>flyway-database-oracle</artifactId> </dependency> For more information, see the Quarkus Using Flyway guide. 1.7.3. Native 1.7.3.1. Strimzi OAuth support issue in the Kafka extension The Kafka extension's Strimzi OAuth support in quarkus-bom now uses io.strimzi:strimzi-kafka-oauth version 0.14.0, introducing a known issue that leads to native build failures. The error, Substitution target for `io.smallrye.reactive.kafka.graal.Target_com_jayway_jsonpath_internal_DefaultsImpl is not loaded can be bypassed by adding io.strimzi:kafka-oauth-common to your project's classpath. 1.7.4. Observability 1.7.4.1. @AddingSpanAttributes annotation added When using Opentelemetry (oTel) instrumentation with Red Hat build of Quarkus 3.8, you can now annotate a method in any Context Dependency Injection (CDI)-aware bean by using the io.opentelemetry.instrumentation.annotations.AddingSpanAttributes annotation, which does not create a new span but adds annotated method parameters to attributes in the current span. Note If you mistakenly annotate a method with both @AddingSpanAttributes and @WithSpan annotations, the @WithSpan annotation takes precedence. For more information, see the CDI section of the Quarkus "Using OpenTelemetry" guide. 1.7.4.2. quarkus-smallrye-metrics extension no longer supported With Red Hat build of Quarkus 3.8, the quarkus-smallrye-metrics extension is no longer supported. Now, it is available as a community extension only. Its use in production environments is discouraged. From Red Hat build of Quarkus 3.8, quarkus-smallrye-metrics is replaced by the fully supported quarkus-micrometer extension. 1.7.4.3. quarkus-smallrye-opentracing extension no longer supported With Red Hat build of Quarkus 3.8, SmallRye OpenTracing is no longer supported. To continue using distributed tracing, migrate your applications to SmallRye OpenTelemetry, which is now fully supported with this release and no longer a Technology Preview feature. If you still need to use quarkus-smallrye-opentracing , adjust your application to use the extensions from Quarkiverse by updating the groupId and specifying the version manually. 1.7.4.4. Refactoring of Scheduler and OpenTelemetry Tracing extensions In Red Hat build of Quarkus 3.8, integration of OpenTelemetry Tracing and the quarkus-scheduler extension has been refactored. Before this update, only @Scheduled methods had a new io.opentelemetry.api.trace.Span class, which is associated automatically when you enable tracing. That is, when the quarkus.scheduler.tracing.enabled configuration property is set to true , and the quarkus-opentelemetry extension is available. With this 3.8 release, all scheduled jobs, including those that are scheduled programmatically, have a Span associated automatically when tracing is enabled. The unique job identifier for each scheduled method is either generated, is specified by setting the io.quarkus.scheduler.Scheduled#identity attribute or with the JobDefinition method. Before this update, span names followed the <simpleclassname>.<methodName> format. 
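As an illustration, a minimal sketch of a scheduled method that sets an explicit identity follows; the class, schedule, and identity value are hypothetical. With quarkus.scheduler.tracing.enabled=true and the quarkus-opentelemetry extension present, each execution of this job gets an associated span that can be correlated with this stable identifier:
import io.quarkus.scheduler.Scheduled;
import jakarta.enterprise.context.ApplicationScoped;

@ApplicationScoped
public class ReportJobs {

    // The explicit identity gives this job a stable identifier instead of a
    // generated one; when scheduler tracing is enabled, the execution is
    // associated with a span automatically.
    @Scheduled(every = "10m", identity = "report-cleanup")
    void purgeOldReports() {
        // clean-up logic goes here
    }
}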
For more information, see the following Quarkus resources: Scheduler reference Using OpenTelemetry 1.7.5. Security 1.7.5.1. Enhanced Security with mTLS and HTTP Restrictions When mTLS client authentication ( quarkus.http.ssl.client-auth ) is set to required , Red Hat build of Quarkus automatically disables plain HTTP ports to ensure that only secure HTTPS requests are accepted. To enable plain HTTP, configure quarkus.http.ssl.client-auth to request or set both quarkus.http.ssl.client-auth=required and quarkus.http.insecure-requests=enabled . 1.7.5.2. JWT extension removes unnecessary Reactive Routes dependency The JWT extension no longer transitively depends on the Reactive Routes extension. If your application uses both JWT and Reactive Routes features but does not declare an explicit dependency on Reactive Routes, you must add this dependency. 1.7.5.3. Keycloak Authorization dropped the keycloak-adapter-core dependency The quarkus-keycloak-authorization extension no longer includes the org.keycloak:keycloak-adapter-core dependency due to its update to Keycloak 22.0.0 and its irrelevance to the extension's functionality. In future Keycloak versions, it is planned to remove the Keycloak Java adapters code. If your application requires this dependency, manually add it to your project's pom.xml . 1.7.5.4. Using CDI interceptors to resolve OIDC tenants in RESTEasy Classic no longer supported You can no longer use Context and Dependency Injection (CDI) annotations and interceptors to resolve tenant OIDC configuration for RESTEasy Classic applications. Due to security checks that are enforced before CDI interceptors and checks requiring authentication are triggered, using CDI interceptors to resolve multiple OIDC provider configuration identifiers no longer works. Use @Tenant annotation or custom io.quarkus.oidc.TenantResolver instead. For more information, see the Resolve with annotations section of the Quarkus "Using OIDC multitenancy guide". 1.7.5.5. Using OIDC @Tenant annotation to bind OIDC features to tenants no longer possible In Red Hat build of Quarkus 3.8, you must now use the quarkus.oidc.TenantFeature annotation instead of quarkus.oidc.Tenant to bind OpenID Connect (OIDC) features to OIDC tenants. The quarkus.oidc.Tenant annotation is now used for resolving tenant configuration. 1.7.5.6. Security profile flexibility enhancement Red Hat build of Quarkus 3.8 allows runtime configuration of HTTP permissions and roles, enabling flexible security settings across profiles. This resolves the issue of native executables locking to build-time security configurations. Security can now be dynamically adjusted per profile, applicable in both JVM and native modes. 1.7.6. Standards 1.7.6.1. Correction in GraphQL directive application The application of annotation-based GraphQL directives has been corrected to ensure they are only applied to the schema element types for which they are declared. For example, if a directive was declared to apply to the GraphQL element type FIELD but was erroneously applied to a different element type, it was still visible in the schema on the element where it should not be applicable, leading to an invalid schema. This was now corrected, and directives have their usage checked against their applicability declaration. If you had directives applied incorrectly in this way, they will no longer appear in the schema, and Red Hat build of Quarkus 3.8 will log a warning during the build. 1.7.7. 
OpenAPI standardizes content type defaults for POJOs and primitives This change has standardized the default content type for generating OpenAPI documentation when a @ContentType annotation is not provided. Previously, the default content type varied across different extensions, such as RESTEasy Reactive, RESTEasy Classic, Spring Web, and OpenAPI. For instance, OpenAPI always used JSON as the default, whereas RESTEasy used JSON for object types and text for primitive types. Now, all extensions have adopted uniform default settings, ensuring consistency: Primitive types are now uniformly set to text/plain . Complex POJO (Plain Old Java Object) types default to application/json . This unification ensures that while the behavior across extensions is consistent, it differentiates appropriately based on the type of data, with primitives using text/plain and POJOs using application/json . This approach does not imply that the same content type is used for all Java types but rather that all extensions now handle content types in the same manner, tailored to the nature of the data. 1.7.8. Web 1.7.8.1. Improved SSE handling in REST Client Red Hat build of Quarkus 3.8 has enhanced its REST Client's Server-Sent Events (SSE) capabilities, enabling complete event returns and filtering. These updates and new descriptions in REST Client provide developers with increased control and flexibility in managing real-time data streams. 1.7.8.2. Manual addition of the Reactive Routes dependency Until version 3.8, the Red Hat build of Quarkus SmallRye JWT automatically incorporated quarkus-reactive-routes , a feature discontinued from version 3.8 onwards. To ensure continued functionality, manually add quarkus-reactive-routes as a dependency in your build configuration. 1.8. Known issues Review the following known issues for insights into Red Hat build of Quarkus 3.8 limitations and workarounds. 1.8.1. Infinispan client extension does not work on FIPS and Native Mandrel 23.1 In native mode, while using the native builder container for Red Hat build of Quarkus 3.8 from registry.redhat.io/quarkus/mandrel-for-jdk-21-rhel8 , the Red Hat build of Quarkus Infinispan client extension does not work on Federal Information Processing Standards (FIPS)-enabled systems. Workaround: No workaround is available at this time; avoid using native mode with this native image builder on FIPS-enabled systems. 1.8.2. Native build failures with Strimzi OAuth client update to 0.14.0 The Strimzi OAuth Client encounters a known issue due to an update of the io.strimzi:strimzi-kafka-oauth dependency to 0.14.0, leading to native build failures indicated by the following error: Substitution target for io.smallrye.reactive.kafka.graal.Target_com_jayway_jsonpath_internal_DefaultsImpl is not loaded. Workaround: To work around this issue, include the io.strimzi:kafka-oauth-common dependency in the classpath. 1.8.3. Missing native library for the Kafka Streams extension on AArch64 Applications that use the quarkus-kafka-streams extension have runtime failures on AArch64 systems due to the absence of the native library librocksdbjni-linux-AArch64.so . This issue throws a java.lang.RuntimeException: librocksdbjni-linux-AArch64.so was not found inside JAR error during application startup. This error prevents the successful initialization of the RocksDB component, which is crucial for Kafka Streams applications. Workaround: No workaround is available at this time. Example java.lang.RuntimeException: librocksdbjni-linux-AArch64.so error 1.8.4.
Missing native library for the Kafka Streams extension on Microsoft Windows Applications that use the quarkus-kafka-streams extension on Microsoft Windows have runtime failures due to the absence of the native library librocksdbjni-win64.dll . This issue throws in a java.lang.RuntimeException: librocksdbjni-win64.dll was not found inside JAR error during the application startup process. This error prevents the successful initialization of the RocksDB component, which is crucial for Kafka Streams applications. Workaround: No workaround is available at this time. Example java.lang.RuntimeException: librocksdbjni-win64.dll error 1.8.5. Clarification on missing Vert.x classes during native builds During native builds, developers might get java.lang.ClassNotFoundException errors for Vert.x classes such as io.vertx.core.http.impl.Http1xServerResponse and io.vertx.core.parsetools.impl.RecordParserImpl . These errors occur when building applications, including those that use the quarkus-qpid-jms extension, without including Vert.x as a direct or transitive dependency. It is crucial to clarify that quarkus-qpid-jms does not use Vert.x directly. The issue arises from the quarkus-netty extension, which quarkus-qpid-jms uses. The quarkus-netty extension is responsible for registering these Vert.x classes for runtime initialization during native builds, without verifying their presence. This leads to the noted exceptions when there is no other extension in the build that introduces Vert.x. These ClassNotFoundException errors are logged during the build process but do not impact the functionality of the applications. They are a result of the native build process and the way dependencies are handled within Red Hat build of Quarkus, specifically through the quarkus-netty module. Example java.lang.ClassNotFoundException error Workaround: To prevent these log entries and ensure all dependencies are properly recognized, you can optionally add the quarkus-vertx extension to your project. 1.8.6. AArch64 support limitations in JVM mode testing on OpenShift The testing pipeline for JVM mode on Red Hat OpenShift Container Platform with AArch64, operational since Red Hat build of Quarkus 3.2, has a few known limitations around AArch64: Red Hat Serverless is not supported on AArch64. A feature request for support of Red Hat Serverless on OpenShift clusters running on the AArch64 architecture is tracked in SRVCOM-2472 , with plans to include support in Serverless 1.33. Red Hat AMQ Streams is not supported on AArch64. Because AMQ Streams is not yet supported on AArch64, the support for this integration has not been tested yet. This issue is currently not tracked in Red Hat's issue management system. Red Hat Single Sign-On is not supported on AArch64. Because Red Hat Single Sign-On and Red Hat build of Keycloak are not supported on AArch64 yet, integration with Red Hat build of Quarkus applications has not been tested yet. Service Binding is not supported on AArch64. Because the bound services supported in the Technology Preview of service binding integration with Red Hat build of Quarkus are not supported on AArch64 yet, this integration has not been tested yet. Additionally, the OpenShift Service Binding Operator is deprecated in OpenShift Container Platform (OCP) 4.13 and later and is planned to be removed in a future OCP release. AArch64 support is limited to the Red Hat Universal Base Image (UBI) containers and does not extend to bare-metal environments. Workaround: No workarounds are available at this time. 
1.8.7. Dependency on org.apache.maven:maven:pom:3.6.3 might cause proxy issues The dependency on org.apache.maven:maven:pom:3.6.3 might be resolved when using certain Quarkus extensions. This is not specific to the Gradle plugin but impacts any project with io.smallrye:smallrye-parent:pom:37 in its parent Project Object Model (POM) hierarchy. This dependency can cause build failures for environments behind a proxy that restricts access to org.apache.maven artifacts with version 3.6.x. None of the binary packages from Maven 3.6.3 are downloaded as dependencies of the Quarkus core framework or supported Quarkus extensions. Workaround: No workaround is available at this time. For more information, see QUARKUS-1025 - Gradle plugin drags in maven core 3.6.x . Red Hat build of Quarkus 3.8 provides increased stability and includes fixes to bugs that have a significant impact on users. To get the latest fixes for Red Hat build of Quarkus, ensure you are using the latest available version, which is 3.8.6.SP3-redhat-00002. 1.9. Updates for Red Hat build of Quarkus 3.8.6 SP3 1.9.1. Security fixes CVE-2025-1634 io.quarkus:quarkus-resteasy : Memory leak in Quarkus RESTEasy Classic when a client requests timeout CVE-2025-24970 io.netty/netty-handler : SSLHandler does not correctly validate packets, which can lead to a native crash when using native SSLEngine CVE-2025-1247 io.quarkus/quarkus-netty : Quarkus REST endpoint request parameter leakage due to shared instance 1.9.2. Advisories Before you start using and deploying Red Hat build of Quarkus 3.8.6.SP3, review the following advisory related to the release. RHSA-2025:1884 1.10. Updates for Red Hat build of Quarkus 3.8.6 SP1 1.10.1. Bug fixes To view the issues that have been resolved for this release, see Red Hat build of Quarkus 3.8.6 SP1 bug fixes . 1.10.2. Security fixes CVE-2024-7254 com.google.protobuf/protobuf : StackOverflow vulnerability in Protocol Buffers CVE-2024-40094 com.graphql-java.graphql-java : Allocation of Resources Without Limits or Throttling in GraphQL Java CVE-2021-44549 org.eclipse.angus/angus-mail : Enabling Secure Server Identity Checks for Safer SMTPS Communication CVE-2024-47561 org.apache.avro/avro : Schema parsing may trigger Remote Code Execution (RCE) 1.10.3. Advisories Before you start using and deploying Red Hat build of Quarkus 3.8.6.SP1, review the following advisory related to the release. RHBA-2024:7670 1.11. Updates for Red Hat build of Quarkus 3.8.6 Red Hat build of Quarkus 3.8 provides increased stability and includes fixes to bugs that have a significant impact on users. To get the latest fixes for Red Hat build of Quarkus, ensure you are using the latest available version, which is 3.8.6.SP3-redhat-00002. 1.11.1. Bug fixes To view the issues that have been resolved for this release, see Red Hat build of Quarkus 3.8.6 bug fixes . 1.11.2. Security fixes CVE-2024-8391 io.vertx.vertx-grpc-server : Vertx gRPC server does not limit the maximum message size CVE-2024-8391 io.vertx.vertx-grpc-client : Vertx gRPC server does not limit the maximum message size CVE-2024-3653 io.quarkus/quarkus-undertow : undertow : LearningPushHandler can lead to remote memory DoS attacks 1.11.3. Advisories Before you start using and deploying Red Hat build of Quarkus 3.8.6, review the following advisory related to the release. RHSA-2024:6437 1.12. Updates for Red Hat build of Quarkus 3.8.5 SP1 Red Hat build of Quarkus 3.8 provides increased stability and includes fixes to bugs that have a significant impact on users. 
To get the latest fixes for Red Hat build of Quarkus, ensure you are using the latest available version, which is 3.8.6.SP3-redhat-00002. 1.12.1. Bug fixes To view the issues that have been resolved for this release, see Red Hat build of Quarkus 3.8.5 SP1 bug fixes . 1.12.2. Advisories Before you start using and deploying Red Hat build of Quarkus 3.8.5.SP1, review the following advisory related to the release. RHBA-2024:4723 1.13. Updates for Red Hat build of Quarkus 3.8.5 Red Hat build of Quarkus 3.8 provides increased stability and includes fixes to bugs that have a significant impact on users. To get the latest fixes for Red Hat build of Quarkus, ensure you are using the latest available version, which is 3.8.6.SP3-redhat-00002. 1.13.1. Bug fixes To view the issues that have been resolved for this release, see Red Hat build of Quarkus 3.8.5 bug fixes . 1.13.2. Security fixes CVE-2024-34447 org.bouncycastle-bctls : org.bouncycastle : Use of incorrectly-resolved name or reference CVE-2024-30171 org.bouncycastle-bcprov-jdk18on : BouncyCastle vulnerable to a timing variant of Bleichenbacher (Marvin Attack) CVE-2024-29857 org.bouncycastle:bcprov-jdk18on : org.bouncycastle : Importing an EC certificate with crafted F2m parameters might lead to Denial of Service CVE-2024-30172 org.bouncycastle-bcprov-jdk18on : Infinite loop in ED25519 verification in the ScalarUtil class 1.13.3. Advisories Before you start using and deploying Red Hat build of Quarkus 3.8.5, review the following advisory related to the release. RHSA-2024:4326 1.14. Updates for Red Hat build of Quarkus 3.8.4 Red Hat build of Quarkus 3.8 provides increased stability and includes fixes to bugs that have a significant impact on users. To get the latest fixes for Red Hat build of Quarkus, ensure you are using the latest available version, which is 3.8.6.SP3-redhat-00002. 1.14.1. Bug fixes To view the issues that have been resolved for this release, see Red Hat build of Quarkus 3.8.4 bug fixes . 1.14.2. Security fixes CVE-2024-2700 io.quarkus/quarkus-core : Leak of local configuration properties into Quarkus applications CVE-2024-29025 io.netty/netty-codec-http : Allocation of Resources Without Limits or Throttling 1.14.3. Advisories Before you start using and deploying Red Hat build of Quarkus 3.8.4, review the following advisory related to the release. RHSA-2024:2106 1.15. Updates for Red Hat build of Quarkus 3.8.3 Red Hat build of Quarkus 3.8 provides increased stability and includes fixes to bugs that have a significant impact on users. To get the latest fixes for Red Hat build of Quarkus, ensure you are using the latest available version, which is 3.8.6.SP3-redhat-00002. 1.15.1. Bug fixes QUARKUS-2289 Unable to mount file into container on MacOS QUARKUS-3206 RabbitMQ TCP connections fail to close while using the Dev Mode Live-Reload feature of Quarkus QUARKUS-3804 Regression in 2.13.9 when using domain socket in Vert.x QUARKUS-4061 Quarkus maven plugin creates projects with non-relocated dependencies QUARKUS-4065 HTTP endpoint returns empty body when using RHBQ, but not upstream Quarkus 1.15.2. Advisories Before you start using and deploying Red Hat build of Quarkus 3.8.3, review the following advisory related to the release. RHEA-2024:2057 1.16. Additional resources Migrating applications to Red Hat build of Quarkus 3.8 guide. Getting Started with Red Hat build of Quarkus Revised on 2025-02-26 19:18:31 UTC
[ "<dependency> <groupId>org.flywaydb</groupId> <artifactId>flyway-database-oracle</artifactId> </dependency>", "09:32:54,059 INFO [app] ERROR: Failed to start application (with profile [prod]) 09:32:54,059 INFO [app] java.lang.RuntimeException: Failed to start quarkus 09:32:54,060 INFO [app] at io.quarkus.runner.ApplicationImpl.doStart(Unknown Source) 09:32:54,060 INFO [app] at io.quarkus.runtime.Application.start(Application.java:101) 09:32:54,060 INFO [app] at io.quarkus.runtime.ApplicationLifecycleManager.run(ApplicationLifecycleManager.java:111) 09:32:54,061 INFO [app] at io.quarkus.runtime.Quarkus.run(Quarkus.java:71) 09:32:54,061 INFO [app] at io.quarkus.runtime.Quarkus.run(Quarkus.java:44) 09:32:54,061 INFO [app] at io.quarkus.runtime.Quarkus.run(Quarkus.java:124) 09:32:54,062 INFO [app] at io.quarkus.runner.GeneratedMain.main(Unknown Source) 09:32:54,062 INFO [app] Caused by: java.lang.ExceptionInInitializerError 09:32:54,063 INFO [app] at io.quarkus.kafka.streams.runtime.KafkaStreamsRecorder.loadRocksDb(KafkaStreamsRecorder.java:14) 09:32:54,063 INFO [app] at io.quarkus.deployment.steps.KafkaStreamsProcessorUSDloadRocksDb1611413226.deploy_0(Unknown Source) 09:32:54,063 INFO [app] at io.quarkus.deployment.steps.KafkaStreamsProcessorUSDloadRocksDb1611413226.deploy(Unknown Source) 09:32:54,064 INFO [app] ... 7 more 09:32:54,064 INFO [app] Caused by: java.lang.RuntimeException: librocksdbjni-linux-AArch64.so was not found inside JAR. 09:32:54,065 INFO [app] at org.rocksdb.NativeLibraryLoader.loadLibraryFromJarToTemp(NativeLibraryLoader.java:118) 09:32:54,065 INFO [app] at org.rocksdb.NativeLibraryLoader.loadLibraryFromJar(NativeLibraryLoader.java:102) 09:32:54,065 INFO [app] at org.rocksdb.NativeLibraryLoader.loadLibrary(NativeLibraryLoader.java:82) 09:32:54,066 INFO [app] at org.rocksdb.RocksDB.loadLibrary(RocksDB.java:70) 09:32:54,066 INFO [app] at org.rocksdb.RocksDB.<clinit>(RocksDB.java:39) 09:32:54,067 INFO [app] ... 10 more", "13:07:08,118 INFO [app] ERROR: Failed to start application (with profile [prod]) 13:07:08,118 INFO [app] java.lang.RuntimeException: Failed to start quarkus 13:07:08,118 INFO [app] at io.quarkus.runner.ApplicationImpl.doStart(Unknown Source) 13:07:08,118 INFO [app] at io.quarkus.runtime.Application.start(Application.java:101) 13:07:08,118 INFO [app] at io.quarkus.runtime.ApplicationLifecycleManager.run(ApplicationLifecycleManager.java:111) 13:07:08,118 INFO [app] at io.quarkus.runtime.Quarkus.run(Quarkus.java:71) 13:07:08,118 INFO [app] at io.quarkus.runtime.Quarkus.run(Quarkus.java:44) 13:07:08,118 INFO [app] at io.quarkus.runtime.Quarkus.run(Quarkus.java:124) 13:07:08,118 INFO [app] at io.quarkus.runner.GeneratedMain.main(Unknown Source) 13:07:08,118 INFO [app] Caused by: java.lang.ExceptionInInitializerError 13:07:08,118 INFO [app] at io.quarkus.kafka.streams.runtime.KafkaStreamsRecorder.loadRocksDb(KafkaStreamsRecorder.java:14) 13:07:08,118 INFO [app] at io.quarkus.deployment.steps.KafkaStreamsProcessorUSDloadRocksDb1611413226.deploy_0(Unknown Source) 13:07:08,118 INFO [app] at io.quarkus.deployment.steps.KafkaStreamsProcessorUSDloadRocksDb1611413226.deploy(Unknown Source) 13:07:08,118 INFO [app] ... 11 more 13:07:08,118 INFO [app] Caused by: java.lang.RuntimeException: librocksdbjni-win64.dll was not found inside JAR. 
13:07:08,118 INFO [app] at org.rocksdb.NativeLibraryLoader.loadLibraryFromJarToTemp(NativeLibraryLoader.java:118) 13:07:08,118 INFO [app] at org.rocksdb.NativeLibraryLoader.loadLibraryFromJar(NativeLibraryLoader.java:102) 13:07:08,118 INFO [app] at org.rocksdb.NativeLibraryLoader.loadLibrary(NativeLibraryLoader.java:82) 13:07:08,118 INFO [app] at org.rocksdb.RocksDB.loadLibrary(RocksDB.java:70) 13:07:08,118 INFO [app] at org.rocksdb.RocksDB.<clinit>(RocksDB.java:39) 13:07:08,118 INFO [app] ... 14 more", "java.lang.ClassNotFoundException: io.vertx.core.http.impl.Http1xServerResponse at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:641) at java.base/jdk.internal.loader.ClassLoadersUSDAppClassLoader.loadClass(ClassLoaders.java:188) at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:526) at org.graalvm.nativeimage.builder/com.oracle.svm.hosted.NativeImageClassLoader.loadClass(NativeImageClassLoader.java:652) ... (further stack trace details) java.lang.ClassNotFoundException: io.vertx.core.parsetools.impl.RecordParserImpl at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:641) at java.base/jdk.internal.loader.ClassLoadersUSDAppClassLoader.loadClass(ClassLoaders.java:188) at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:526) at org.graalvm.nativeimage.builder/com.oracle.svm.hosted.NativeImageClassLoader.loadClass(NativeImageClassLoader.java:652) ... (further stack trace details)" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.8/html/release_notes_for_red_hat_build_of_quarkus_3.8/assembly_release-notes-quarkus_quarkus-release-notes
roxctl CLI
roxctl CLI Red Hat Advanced Cluster Security for Kubernetes 4.5 roxctl CLI Red Hat OpenShift Documentation Team
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/roxctl_cli/index
Chapter 4. Configuring CPUs on Compute nodes
Chapter 4. Configuring CPUs on Compute nodes Warning The content for this feature is available in this release as a Documentation Preview , and therefore is not fully verified by Red Hat. Use it only for testing, and do not use in a production environment. As a cloud administrator, you can configure the scheduling and placement of instances for optimal performance by creating customized flavors to target specialized workloads, including NFV and High Performance Computing (HPC). Use the following features to tune your instances for optimal CPU performance: CPU pinning : Pin virtual CPUs to physical CPUs. Emulator threads : Pin emulator threads associated with the instance to physical CPUs. CPU feature flags : Configure the standard set of CPU feature flags that are applied to instances to improve live migration compatibility across Compute nodes. 4.1. Configuring CPU pinning on Compute nodes You can configure each instance CPU process to run on a dedicated host CPU by enabling CPU pinning on the Compute nodes. When an instance uses CPU pinning, each instance vCPU process is allocated its own host pCPU that no other instance vCPU process can use. Instances that run on Compute nodes with CPU pinning enabled have a NUMA topology. Each NUMA node of the instance NUMA topology maps to a NUMA node on the host Compute node. You can configure the Compute scheduler to schedule instances with dedicated (pinned) CPUs and instances with shared (floating) CPUs on the same Compute node. To configure CPU pinning on Compute nodes that have a NUMA topology, you must complete the following: Designate Compute nodes for CPU pinning. Configure the Compute nodes to reserve host cores for pinned instance vCPU processes, floating instance vCPU processes, and host processes. Deploy the data plane. Create a flavor for launching instances that require CPU pinning. Create a flavor for launching instances that use shared, or floating, CPUs. Note Configuring CPU pinning creates an implicit NUMA topology on the instance even if a NUMA topology is not requested. Do not run NUMA and non-NUMA virtual machines (VMs) on the same hosts. 4.1.1. Prerequisites You know the NUMA topology of your Compute node. The oc command line tool is installed on your workstation. You are logged in to Red Hat OpenStack Services on OpenShift (RHOSO) as a user with cluster-admin privileges. 4.1.2. Designating and configuring Compute nodes for CPU pinning To designate Compute nodes for instances with pinned CPUs, you must create and configure a new OpenStackDataPlaneNodeSet custom resource (CR) to configure the nodes that are designated for CPU pinning. Configure CPU pinning on your Compute nodes based on the NUMA topology of the nodes. Reserve some CPU cores across all the NUMA nodes for the host processes for efficiency. Assign the remaining CPU cores to managing your instances. This procedure uses the following NUMA topology, with eight CPU cores spread across two NUMA nodes, to illustrate how to configure CPU pinning: Table 4.1. Example of NUMA Topology NUMA Node 0 NUMA Node 1 Core 0 Core 1 Core 4 Core 5 Core 2 Core 3 Core 6 Core 7 The procedure reserves cores 0 and 4 for host processes, cores 1, 3, 5 and 7 for instances that require CPU pinning, and cores 2 and 6 for floating instances that do not require CPU pinning. Note The following procedure applies to new OpenStackDataPlaneNodeSet CRs that have not yet been provisioned. 
To reconfigure an existing OpenStackDataPlaneNodeSet that has already been provisioned, you must first drain the guest instances from all the nodes in the OpenStackDataPlaneNodeSet . Note Configuring CPU pinning creates an implicit NUMA topology on the instance even if a NUMA topology is not requested. Do not run NUMA and non-NUMA virtual machines (VMs) on the same hosts. Prerequisites You have selected the OpenStackDataPlaneNodeSet CR that defines the nodes for which you want to designate and configure CPU pinning. For more information about creating an OpenStackDataPlaneNodeSet CR, see Creating the data plane in the Deploying Red Hat OpenStack Services on OpenShift guide. Procedure Create or update the ConfigMap CR named nova-extra-config.yaml and set the values of the parameters under [compute] and [default]: 1 The name of the new Compute configuration file. The nova-operator generates the default configuration file with the name 01-nova.conf . Do not use the default name, because it would override the infrastructure configuration, such as the transport_url . The nova-compute service applies every file under /etc/nova/nova.conf.d/ in lexicographical order, therefore configurations defined in later files override the same configurations defined in an earlier file. 2 Reserves physical CPU cores for the shared instances. 3 Reserves physical CPU cores for the dedicated instances. 4 Specifies the amount memory to reserve per NUMA node. For more information about creating ConfigMap objects, see Creating and using config maps . Create a new OpenStackDataPlaneDeployment CR to configure the services on the data plane nodes and deploy the data plane, and save it to a file named compute_cpu_pinning_deploy.yaml on your workstation: For more information about creating an OpenStackDataPlaneDeployment CR, see Deploying the data plane in the Deploying Red Hat OpenStack Services on OpenShift guide. In the compute_cpu_pinning_deploy.yaml , specify nodeSets to include all the OpenStackDataPlaneNodeSet CRs you want to deploy. Ensure that you include the OpenStackDataPlaneNodeSet CR that you selected as a prerequisite. That OpenStackDataPlaneNodeSet CR defines the nodes you want to designate for CPU pinning. Warning You can configure only whole node sets. Reconfiguring a subset of the nodes within a node set is not supported. If you need to reconfigure a subset of nodes within a node set, you must scale the node set down, and create a new node set from the previously removed nodes. Warning If your deployment has more than one node set, changes to the nova-extra-config.yaml ConfigMap might directly affect more than one node set, depending on how the node sets and the DataPlaneServices are configured. To check if a node set uses the nova-extra-config ConfigMap and therefore will be affected by the reconfiguration, complete the following steps: Check the services list of the node set and find the name of the DataPlaneService that points to nova. Ensure that the value of the edpmServiceType field of the DataPlaneService is set to nova . If the dataSources list of the DataPlaneService contains a configMapRef named nova-extra-config , then this node set uses this ConfigMap and therefore will be affected by the configuration changes in this ConfigMap . If some of the node sets that are affected should not be reconfigured, you must create a new DataPlaneService pointing to a separate ConfigMap for these node sets. 
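A minimal sketch of this check, assuming the default openstack namespace and that the Compute configuration is delivered by a data plane service named nova; adjust both to match your environment:

oc get openstackdataplanenodeset <nodeSet_name> -n openstack -o jsonpath='{.spec.services}{"\n"}'   # list the services that the node set uses
oc get openstackdataplaneservice nova -n openstack -o yaml                                          # inspect the referenced DataPlaneService

In the second output, confirm that edpmServiceType is set to nova and that the dataSources list contains a configMapRef named nova-extra-config before you rely on the reconfiguration applying to that node set.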
Replace <nodeSet_name> with the names of the OpenStackDataPlaneNodeSet CRs that you want to include in your data plane deployment. Save the compute_cpu_pinning_deploy.yaml deployment file. Deploy the data plane: Verify that the data plane is deployed: Access the remote shell for openstackclient and verify that the deployed Compute nodes are visible on the control plane: 4.1.3. Creating a dedicated CPU flavor for instances To enable your cloud users to create instances that have dedicated CPUs, you can create a flavor with a dedicated CPU policy for launching instances. Prerequisites Simultaneous multithreading (SMT) is configured on the host if you intend to use the required cpu_thread_policy . You can have a mix of SMT and non-SMT Compute hosts. Flavors with the require cpu_thread_policy will land on SMT hosts, and flavors with isolate will land on non-SMT. The Compute node is configured to allow CPU pinning. For more information, see Configuring CPU pinning on the Compute nodes . Procedure Create a flavor for instances that require CPU pinning: If you are not using file-backed memory, set the hw:mem_page_size property of the flavor to enable NUMA-aware memory allocation: Replace <page_size> with one of the following valid values: large : Selects the largest page size supported on the host, which may be 2 MB or 1 GB on x86_64 systems. small : (Default) Selects the smallest page size supported on the host. On x86_64 systems this is 4 kB (normal pages). any : Selects the page size by using the hw_mem_page_size set on the image. If the page size is not specified by the image, selects the largest available page size, as determined by the libvirt driver. <pagesize> : Set an explicit page size if the workload has specific requirements. Use an integer value for the page size in KB, or any standard suffix. For example: 4KB, 2MB, 2048, 1GB. Note To set hw:mem_page_size to small or any , you must have configured the amount of memory pages to reserve on each NUMA node for processes that are not instances. To request pinned CPUs, set the hw:cpu_policy property of the flavor to dedicated : Optional: To place each vCPU on thread siblings, set the hw:cpu_thread_policy property of the flavor to require : Note If the host does not have an SMT architecture or enough CPU cores with available thread siblings, scheduling fails. To prevent this, set hw:cpu_thread_policy to prefer instead of require . The prefer policy is the default policy that ensures that thread siblings are used when available. If you use hw:cpu_thread_policy=isolate , you must have SMT disabled or use a platform that does not support SMT. To verify the flavor creates an instance with dedicated CPUs, use your new flavor to launch an instance: 4.1.4. Creating a shared CPU flavor for instances To enable your cloud users to create instances that use shared, or floating, CPUs, you can create a flavor with a shared CPU policy for launching instances. Prerequisites The Compute node is configured to reserve physical CPU cores for the shared CPUs. For more information, see Configuring CPU pinning on the Compute nodes . Procedure Create a flavor for instances that do not require CPU pinning: To request floating CPUs, set the hw:cpu_policy property of the flavor to shared : 4.1.5. Creating a mixed CPU flavor for instances To enable your cloud users to create instances that have a mix of dedicated and shared CPUs, you can create a flavor with a mixed CPU policy for launching instances. 
Procedure Create a flavor for instances that require a mix of dedicated and shared CPUs: Specify which CPUs must be dedicated or shared: Replace <CPU_MASK> with the CPUs that must be either dedicated or shared: To specify dedicated CPUs, specify the CPU number or CPU range. For example, set the property to 2-3 to specify that CPUs 2 and 3 are dedicated and all the remaining CPUs are shared. To specify shared CPUs, prepend the CPU number or CPU range with a caret (^). For example, set the property to ^0-1 to specify that CPUs 0 and 1 are shared and all the remaining CPUs are dedicated. If you are not using file-backed memory, set the hw:mem_page_size property of the flavor to enable NUMA-aware memory allocation: Replace <page_size> with one of the following valid values: large : Selects the largest page size supported on the host, which may be 2 MB or 1 GB on x86_64 systems. small : (Default) Selects the smallest page size supported on the host. On x86_64 systems this is 4 kB (normal pages). any : Selects the page size by using the hw_mem_page_size set on the image. If the page size is not specified by the image, selects the largest available page size, as determined by the libvirt driver. <pagesize> : Set an explicit page size if the workload has specific requirements. Use an integer value for the page size in KB, or any standard suffix. For example: 4KB, 2MB, 2048, 1GB. Note To set hw:mem_page_size to small or any , you must have configured the amount of memory pages to reserve on each NUMA node for processes that are not instances. 4.1.6. Configuring CPU pinning on Compute nodes with simultaneous multithreading (SMT) If a Compute node supports simultaneous multithreading (SMT), group thread siblings together in either the dedicated or the shared set. Thread siblings share some common hardware which means it is possible for a process running on one thread sibling to impact the performance of the other thread sibling. For example, the host identifies four logical CPU cores in a dual core CPU with SMT: 0, 1, 2, and 3. Of these four, there are two pairs of thread siblings: Thread sibling 1: logical CPU cores 0 and 2 Thread sibling 2: logical CPU cores 1 and 3 In this scenario, do not assign logical CPU cores 0 and 1 as dedicated and 2 and 3 as shared. Instead, assign 0 and 2 as dedicated and 1 and 3 as shared. The files /sys/devices/system/cpu/cpuN/topology/thread_siblings_list , where N is the logical CPU number, contain the thread pairs. You can use the following command to identify which logical CPU cores are thread siblings: The following output indicates that logical CPU core 0 and logical CPU core 2 are threads on the same core: 4.1.7. Additional resources Discovering your NUMA node topology
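As a quick complement to the resources above, you can inspect the NUMA and SMT layout on the Compute node directly before you choose the cpu_dedicated_set and cpu_shared_set values. This is only a sketch; the output depends on your hardware:

lscpu -e=CPU,NODE,CORE,SOCKET   # one row per logical CPU with its NUMA node and physical core
numactl --hardware              # NUMA nodes, the CPUs in each node, and per-node memory

Logical CPUs that report the same CORE value are thread siblings and should be kept together in either the dedicated or the shared set, as described in the previous section.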
[ "apiVersion: v1 kind: ConfigMap metadata: name: nova-extra-config namespace: openstack data: 25-nova-cpu-pinning.conf: | 1 [compute] cpu_shared_set = 2,6 2 cpu_dedicated_set = 1,3,5,7 3 [DEFAULT] reserved_huge_pages = node:0,size:4,count:131072 4 reserved_huge_pages = node:1,size:4,count:131072", "apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneDeployment metadata: name: openstack-edpm-cpu-pinning", "apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneDeployment metadata: name: openstack-edpm-cpu-pinning spec: nodeSets: - openstack-edpm - compute-cpu-pinning - - <nodeSet_name>", "oc create -f compute_cpu_pinning_deploy.yaml", "oc get openstackdataplanenodeset NAME STATUS MESSAGE compute-cpu-pinning True Deployed", "oc rsh -n openstack openstackclient openstack hypervisor list", "openstack flavor create --ram <size_mb> --disk <size_gb> --vcpus <num_guest_vcpus> pinned_cpus", "openstack --os-compute-api=2.86 flavor set --property hw:mem_page_size=<page_size> pinned_cpus", "openstack --os-compute-api=2.86 flavor set --property hw:cpu_policy=dedicated pinned_cpus", "openstack --os-compute-api=2.86 flavor set --property hw:cpu_thread_policy=require pinned_cpus", "openstack server create --flavor pinned_cpus --image <image> pinned_cpu_instance", "openstack flavor create --ram <size_mb> --disk <size_gb> --vcpus <no_reserved_vcpus> floating_cpus", "openstack --os-compute-api=2.86 flavor set --property hw:cpu_policy=shared floating_cpus", "openstack flavor create --ram <size_mb> --disk <size_gb> --vcpus <number_of_reserved_vcpus> --property hw:cpu_policy=mixed mixed_CPUs_flavor", "openstack --os-compute-api=2.86 flavor set --property hw:cpu_dedicated_mask=<CPU_MASK> mixed_CPUs_flavor", "openstack --os-compute-api=2.86 flavor set --property hw:mem_page_size=<page_size> mixed_CPUs_flavor", "grep -H . /sys/devices/system/cpu/cpu*/topology/thread_siblings_list | sort -n -t ':' -k 2 -u", "/sys/devices/system/cpu/cpu0/topology/thread_siblings_list:0,2 /sys/devices/system/cpu/cpu2/topology/thread_siblings_list:1,3" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/configuring_the_compute_service_for_instance_creation/assembly_configuring-CPUs-on-Compute-nodes
Chapter 7. Configuring identity providers
Chapter 7. Configuring identity providers 7.1. Configuring an htpasswd identity provider Configure the htpasswd identity provider to allow users to log in to OpenShift Container Platform with credentials from an htpasswd file. To define an htpasswd identity provider, perform the following tasks: Create an htpasswd file to store the user and password information. Create a secret to represent the htpasswd file. Define an htpasswd identity provider resource that references the secret. Apply the resource to the default OAuth configuration to add the identity provider. 7.1.1. About identity providers in OpenShift Container Platform By default, only a kubeadmin user exists on your cluster. To specify an identity provider, you must create a custom resource (CR) that describes that identity provider and add it to the cluster. Note OpenShift Container Platform user names containing / , : , and % are not supported. 7.1.2. About htpasswd authentication Using htpasswd authentication in OpenShift Container Platform allows you to identify users based on an htpasswd file. An htpasswd file is a flat file that contains the user name and hashed password for each user. You can use the htpasswd utility to create this file. Warning Do not use htpasswd authentication in OpenShift Container Platform for production environments. Use htpasswd authentication only for development environments. 7.1.3. Creating the htpasswd file See one of the following sections for instructions about how to create the htpasswd file: Creating an htpasswd file using Linux Creating an htpasswd file using Windows 7.1.3.1. Creating an htpasswd file using Linux To use the htpasswd identity provider, you must generate a flat file that contains the user names and passwords for your cluster by using htpasswd . Prerequisites Have access to the htpasswd utility. On Red Hat Enterprise Linux this is available by installing the httpd-tools package. Procedure Create or update your flat file with a user name and hashed password: USD htpasswd -c -B -b </path/to/users.htpasswd> <username> <password> The command generates a hashed version of the password. For example: USD htpasswd -c -B -b users.htpasswd <username> <password> Example output Adding password for user user1 Continue to add or update credentials to the file: USD htpasswd -B -b </path/to/users.htpasswd> <user_name> <password> 7.1.3.2. Creating an htpasswd file using Windows To use the htpasswd identity provider, you must generate a flat file that contains the user names and passwords for your cluster by using htpasswd . Prerequisites Have access to htpasswd.exe . This file is included in the \bin directory of many Apache httpd distributions. Procedure Create or update your flat file with a user name and hashed password: > htpasswd.exe -c -B -b <\path\to\users.htpasswd> <username> <password> The command generates a hashed version of the password. For example: > htpasswd.exe -c -B -b users.htpasswd <username> <password> Example output Adding password for user user1 Continue to add or update credentials to the file: > htpasswd.exe -b <\path\to\users.htpasswd> <username> <password> 7.1.4. Creating the htpasswd secret To use the htpasswd identity provider, you must define a secret that contains the htpasswd user file. Prerequisites Create an htpasswd file. 
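Tip Before you store the file in a secret, you can check that the entries were written as expected. A brief sketch; the file name, user name, and password are examples only, and the -v option requires a reasonably recent httpd-tools version:

cat users.htpasswd                              # each line is <username>:<hashed_password>
htpasswd -vb users.htpasswd user1 MyPassword!   # verifies the supplied password against the stored hash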
Procedure Create a Secret object that contains the htpasswd users file: USD oc create secret generic htpass-secret --from-file=htpasswd=<path_to_users.htpasswd> -n openshift-config 1 1 The secret key containing the users file for the --from-file argument must be named htpasswd , as shown in the above command. Tip You can alternatively apply the following YAML to create the secret: apiVersion: v1 kind: Secret metadata: name: htpass-secret namespace: openshift-config type: Opaque data: htpasswd: <base64_encoded_htpasswd_file_contents> 7.1.5. Sample htpasswd CR The following custom resource (CR) shows the parameters and acceptable values for an htpasswd identity provider. htpasswd CR apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: my_htpasswd_provider 1 mappingMethod: claim 2 type: HTPasswd htpasswd: fileData: name: htpass-secret 3 1 This provider name is prefixed to provider user names to form an identity name. 2 Controls how mappings are established between this provider's identities and User objects. 3 An existing secret containing a file generated using htpasswd . Additional resources See Identity provider parameters for information on parameters, such as mappingMethod , that are common to all identity providers. 7.1.6. Adding an identity provider to your cluster After you install your cluster, add an identity provider to it so your users can authenticate. Prerequisites Create an OpenShift Container Platform cluster. Create the custom resource (CR) for your identity providers. You must be logged in as an administrator. Procedure Apply the defined CR: USD oc apply -f </path/to/CR> Note If a CR does not exist, oc apply creates a new CR and might trigger the following warning: Warning: oc apply should be used on resources created by either oc create --save-config or oc apply . In this case you can safely ignore this warning. Log in to the cluster as a user from your identity provider, entering the password when prompted. USD oc login -u <username> Confirm that the user logged in successfully, and display the user name. USD oc whoami 7.1.7. Updating users for an htpasswd identity provider You can add or remove users from an existing htpasswd identity provider. Prerequisites You have created a Secret object that contains the htpasswd user file. This procedure assumes that it is named htpass-secret . You have configured an htpasswd identity provider. This procedure assumes that it is named my_htpasswd_provider . You have access to the htpasswd utility. On Red Hat Enterprise Linux this is available by installing the httpd-tools package. You have cluster administrator privileges. Procedure Retrieve the htpasswd file from the htpass-secret Secret object and save the file to your file system: USD oc get secret htpass-secret -ojsonpath={.data.htpasswd} -n openshift-config | base64 --decode > users.htpasswd Add or remove users from the users.htpasswd file. 
To add a new user: USD htpasswd -bB users.htpasswd <username> <password> Example output Adding password for user <username> To remove an existing user: USD htpasswd -D users.htpasswd <username> Example output Deleting password for user <username> Replace the htpass-secret Secret object with the updated users in the users.htpasswd file: USD oc create secret generic htpass-secret --from-file=htpasswd=users.htpasswd --dry-run=client -o yaml -n openshift-config | oc replace -f - Tip You can alternatively apply the following YAML to replace the secret: apiVersion: v1 kind: Secret metadata: name: htpass-secret namespace: openshift-config type: Opaque data: htpasswd: <base64_encoded_htpasswd_file_contents> If you removed one or more users, you must additionally remove existing resources for each user. Delete the User object: USD oc delete user <username> Example output user.user.openshift.io "<username>" deleted Be sure to remove the user, otherwise the user can continue using their token as long as it has not expired. Delete the Identity object for the user: USD oc delete identity my_htpasswd_provider:<username> Example output identity.user.openshift.io "my_htpasswd_provider:<username>" deleted 7.1.8. Configuring identity providers using the web console Configure your identity provider (IDP) through the web console instead of the CLI. Prerequisites You must be logged in to the web console as a cluster administrator. Procedure Navigate to Administration Cluster Settings . Under the Configuration tab, click OAuth . Under the Identity Providers section, select your identity provider from the Add drop-down menu. Note You can specify multiple IDPs through the web console without overwriting existing IDPs. 7.2. Configuring a Keystone identity provider Configure the keystone identity provider to integrate your OpenShift Container Platform cluster with Keystone to enable shared authentication with an OpenStack Keystone v3 server configured to store users in an internal database. This configuration allows users to log in to OpenShift Container Platform with their Keystone credentials. 7.2.1. About identity providers in OpenShift Container Platform By default, only a kubeadmin user exists on your cluster. To specify an identity provider, you must create a custom resource (CR) that describes that identity provider and add it to the cluster. Note OpenShift Container Platform user names containing / , : , and % are not supported. 7.2.2. About Keystone authentication Keystone is an OpenStack project that provides identity, token, catalog, and policy services. You can configure the integration with Keystone so that the new OpenShift Container Platform users are based on either the Keystone user names or unique Keystone IDs. With both methods, users log in by entering their Keystone user name and password. Basing the OpenShift Container Platform users on the Keystone ID is more secure because if you delete a Keystone user and create a new Keystone user with that user name, the new user might have access to the old user's resources. 7.2.3. Creating the secret Identity providers use OpenShift Container Platform Secret objects in the openshift-config namespace to contain the client secret, client certificates, and keys. 
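Tip The key.pem and cert.pem files used in the next step must be a matching key and client certificate pair, issued by a certificate authority that your Keystone server trusts. A quick sketch for checking them before you create the secret (shown for RSA keys; the file names match the placeholders used below):

openssl x509 -in cert.pem -noout -subject -issuer -enddate    # inspect the client certificate
openssl x509 -in cert.pem -noout -modulus | openssl md5       # digest of the certificate modulus
openssl rsa -in key.pem -noout -modulus | openssl md5         # must produce the same digest as above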
Procedure Create a Secret object that contains the key and certificate by using the following command: USD oc create secret tls <secret_name> --key=key.pem --cert=cert.pem -n openshift-config Tip You can alternatively apply the following YAML to create the secret: apiVersion: v1 kind: Secret metadata: name: <secret_name> namespace: openshift-config type: kubernetes.io/tls data: tls.crt: <base64_encoded_cert> tls.key: <base64_encoded_key> 7.2.4. Creating a config map Identity providers use OpenShift Container Platform ConfigMap objects in the openshift-config namespace to contain the certificate authority bundle. These are primarily used to contain certificate bundles needed by the identity provider. Procedure Define an OpenShift Container Platform ConfigMap object containing the certificate authority by using the following command. The certificate authority must be stored in the ca.crt key of the ConfigMap object. USD oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config Tip You can alternatively apply the following YAML to create the config map: apiVersion: v1 kind: ConfigMap metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM> 7.2.5. Sample Keystone CR The following custom resource (CR) shows the parameters and acceptable values for a Keystone identity provider. Keystone CR apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: keystoneidp 1 mappingMethod: claim 2 type: Keystone keystone: domainName: default 3 url: https://keystone.example.com:5000 4 ca: 5 name: ca-config-map tlsClientCert: 6 name: client-cert-secret tlsClientKey: 7 name: client-key-secret 1 This provider name is prefixed to provider user names to form an identity name. 2 Controls how mappings are established between this provider's identities and User objects. 3 Keystone domain name. In Keystone, usernames are domain-specific. Only a single domain is supported. 4 The URL to use to connect to the Keystone server (required). This must use https. 5 Optional: Reference to an OpenShift Container Platform ConfigMap object containing the PEM-encoded certificate authority bundle to use in validating server certificates for the configured URL. 6 Optional: Reference to an OpenShift Container Platform Secret object containing the client certificate to present when making requests to the configured URL. 7 Reference to an OpenShift Container Platform Secret object containing the key for the client certificate. Required if tlsClientCert is specified. Additional resources See Identity provider parameters for information on parameters, such as mappingMethod , that are common to all identity providers. 7.2.6. Adding an identity provider to your cluster After you install your cluster, add an identity provider to it so your users can authenticate. Prerequisites Create an OpenShift Container Platform cluster. Create the custom resource (CR) for your identity providers. You must be logged in as an administrator. Procedure Apply the defined CR: USD oc apply -f </path/to/CR> Note If a CR does not exist, oc apply creates a new CR and might trigger the following warning: Warning: oc apply should be used on resources created by either oc create --save-config or oc apply . In this case you can safely ignore this warning. Log in to the cluster as a user from your identity provider, entering the password when prompted. USD oc login -u <username> Confirm that the user logged in successfully, and display the user name. 
USD oc whoami 7.3. Configuring an LDAP identity provider Configure the ldap identity provider to validate user names and passwords against an LDAPv3 server, using simple bind authentication. 7.3.1. About identity providers in OpenShift Container Platform By default, only a kubeadmin user exists on your cluster. To specify an identity provider, you must create a custom resource (CR) that describes that identity provider and add it to the cluster. Note OpenShift Container Platform user names containing / , : , and % are not supported. 7.3.2. About LDAP authentication During authentication, the LDAP directory is searched for an entry that matches the provided user name. If a single unique match is found, a simple bind is attempted using the distinguished name (DN) of the entry plus the provided password. These are the steps taken: Generate a search filter by combining the attribute and filter in the configured url with the user-provided user name. Search the directory using the generated filter. If the search does not return exactly one entry, deny access. Attempt to bind to the LDAP server using the DN of the entry retrieved from the search, and the user-provided password. If the bind is unsuccessful, deny access. If the bind is successful, build an identity using the configured attributes as the identity, email address, display name, and preferred user name. The configured url is an RFC 2255 URL, which specifies the LDAP host and search parameters to use. The syntax of the URL is: For this URL: URL component Description ldap For regular LDAP, use the string ldap . For secure LDAP (LDAPS), use ldaps instead. host:port The name and port of the LDAP server. Defaults to localhost:389 for ldap and localhost:636 for LDAPS. basedn The DN of the branch of the directory where all searches should start from. At the very least, this must be the top of your directory tree, but it could also specify a subtree in the directory. attribute The attribute to search for. Although RFC 2255 allows a comma-separated list of attributes, only the first attribute will be used, no matter how many are provided. If no attributes are provided, the default is to use uid . It is recommended to choose an attribute that will be unique across all entries in the subtree you will be using. scope The scope of the search. Can be either one or sub . If the scope is not provided, the default is to use a scope of sub . filter A valid LDAP search filter. If not provided, defaults to (objectClass=*) When doing searches, the attribute, filter, and provided user name are combined to create a search filter that looks like: For example, consider a URL of: When a client attempts to connect using a user name of bob , the resulting search filter will be (&(enabled=true)(cn=bob)) . If the LDAP directory requires authentication to search, specify a bindDN and bindPassword to use to perform the entry search. 7.3.3. Creating the LDAP secret To use the identity provider, you must define an OpenShift Container Platform Secret object that contains the bindPassword field. Procedure Create a Secret object that contains the bindPassword field: USD oc create secret generic ldap-secret --from-literal=bindPassword=<secret> -n openshift-config 1 1 The secret key containing the bindPassword for the --from-literal argument must be called bindPassword . 
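Before you store the bind password, you can confirm that the bind DN, password, and search parameters work against your directory. A hedged sketch with ldapsearch; the host, DNs, and filter are illustrative and must match the values in your configured url:

ldapsearch -H ldaps://ldaps.example.com \
  -D "cn=serviceaccount,dc=acme,dc=com" -W \
  -b "ou=users,dc=acme,dc=com" -s sub "(uid=bob)" dn mail cn

If the bind succeeds and the search returns exactly one entry, create the secret with the command above.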
Tip You can alternatively apply the following YAML to create the secret: apiVersion: v1 kind: Secret metadata: name: ldap-secret namespace: openshift-config type: Opaque data: bindPassword: <base64_encoded_bind_password> 7.3.4. Creating a config map Identity providers use OpenShift Container Platform ConfigMap objects in the openshift-config namespace to contain the certificate authority bundle. These are primarily used to contain certificate bundles needed by the identity provider. Procedure Define an OpenShift Container Platform ConfigMap object containing the certificate authority by using the following command. The certificate authority must be stored in the ca.crt key of the ConfigMap object. USD oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config Tip You can alternatively apply the following YAML to create the config map: apiVersion: v1 kind: ConfigMap metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM> 7.3.5. Sample LDAP CR The following custom resource (CR) shows the parameters and acceptable values for an LDAP identity provider. LDAP CR apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: ldapidp 1 mappingMethod: claim 2 type: LDAP ldap: attributes: id: 3 - dn email: 4 - mail name: 5 - cn preferredUsername: 6 - uid bindDN: "" 7 bindPassword: 8 name: ldap-secret ca: 9 name: ca-config-map insecure: false 10 url: "ldaps://ldaps.example.com/ou=users,dc=acme,dc=com?uid" 11 1 This provider name is prefixed to the returned user ID to form an identity name. 2 Controls how mappings are established between this provider's identities and User objects. 3 List of attributes to use as the identity. First non-empty attribute is used. At least one attribute is required. If none of the listed attribute have a value, authentication fails. Defined attributes are retrieved as raw, allowing for binary values to be used. 4 List of attributes to use as the email address. First non-empty attribute is used. 5 List of attributes to use as the display name. First non-empty attribute is used. 6 List of attributes to use as the preferred user name when provisioning a user for this identity. First non-empty attribute is used. 7 Optional DN to use to bind during the search phase. Must be set if bindPassword is defined. 8 Optional reference to an OpenShift Container Platform Secret object containing the bind password. Must be set if bindDN is defined. 9 Optional: Reference to an OpenShift Container Platform ConfigMap object containing the PEM-encoded certificate authority bundle to use in validating server certificates for the configured URL. Only used when insecure is false . 10 When true , no TLS connection is made to the server. When false , ldaps:// URLs connect using TLS, and ldap:// URLs are upgraded to TLS. This must be set to false when ldaps:// URLs are in use, as these URLs always attempt to connect using TLS. 11 An RFC 2255 URL which specifies the LDAP host and search parameters to use. Note To whitelist users for an LDAP integration, use the lookup mapping method. Before a login from LDAP would be allowed, a cluster administrator must create an Identity object and a User object for each LDAP user. Additional resources See Identity provider parameters for information on parameters, such as mappingMethod , that are common to all identity providers. 7.3.6. 
Adding an identity provider to your cluster After you install your cluster, add an identity provider to it so your users can authenticate. Prerequisites Create an OpenShift Container Platform cluster. Create the custom resource (CR) for your identity providers. You must be logged in as an administrator. Procedure Apply the defined CR: USD oc apply -f </path/to/CR> Note If a CR does not exist, oc apply creates a new CR and might trigger the following warning: Warning: oc apply should be used on resources created by either oc create --save-config or oc apply . In this case you can safely ignore this warning. Log in to the cluster as a user from your identity provider, entering the password when prompted. USD oc login -u <username> Confirm that the user logged in successfully, and display the user name. USD oc whoami 7.4. Configuring a basic authentication identity provider Configure the basic-authentication identity provider for users to log in to OpenShift Container Platform with credentials validated against a remote identity provider. Basic authentication is a generic back-end integration mechanism. 7.4.1. About identity providers in OpenShift Container Platform By default, only a kubeadmin user exists on your cluster. To specify an identity provider, you must create a custom resource (CR) that describes that identity provider and add it to the cluster. Note OpenShift Container Platform user names containing / , : , and % are not supported. 7.4.2. About basic authentication Basic authentication is a generic back-end integration mechanism that allows users to log in to OpenShift Container Platform with credentials validated against a remote identity provider. Because basic authentication is generic, you can use this identity provider for advanced authentication configurations. Important Basic authentication must use an HTTPS connection to the remote server to prevent potential snooping of the user ID and password and man-in-the-middle attacks. With basic authentication configured, users send their user name and password to OpenShift Container Platform, which then validates those credentials against a remote server by making a server-to-server request, passing the credentials as a basic authentication header. This requires users to send their credentials to OpenShift Container Platform during login. Note This only works for user name/password login mechanisms, and OpenShift Container Platform must be able to make network requests to the remote authentication server. User names and passwords are validated against a remote URL that is protected by basic authentication and returns JSON. A 401 response indicates failed authentication. A non- 200 status, or the presence of a non-empty "error" key, indicates an error: {"error":"Error message"} A 200 status with a sub (subject) key indicates success: {"sub":"userid"} 1 1 The subject must be unique to the authenticated user and must not be able to be modified. A successful response can optionally provide additional data, such as: A display name using the name key. For example: {"sub":"userid", "name": "User Name", ...} An email address using the email key. For example: {"sub":"userid", "email":"[email protected]", ...} A preferred user name using the preferred_username key. This is useful when the unique, unchangeable subject is a database key or UID, and a more human-readable name exists. This is used as a hint when provisioning the OpenShift Container Platform user for the authenticated identity. 
For example: {"sub":"014fbff9a07c", "preferred_username":"bob", ...} 7.4.3. Creating the secret Identity providers use OpenShift Container Platform Secret objects in the openshift-config namespace to contain the client secret, client certificates, and keys. Procedure Create a Secret object that contains the key and certificate by using the following command: USD oc create secret tls <secret_name> --key=key.pem --cert=cert.pem -n openshift-config Tip You can alternatively apply the following YAML to create the secret: apiVersion: v1 kind: Secret metadata: name: <secret_name> namespace: openshift-config type: kubernetes.io/tls data: tls.crt: <base64_encoded_cert> tls.key: <base64_encoded_key> 7.4.4. Creating a config map Identity providers use OpenShift Container Platform ConfigMap objects in the openshift-config namespace to contain the certificate authority bundle. These are primarily used to contain certificate bundles needed by the identity provider. Procedure Define an OpenShift Container Platform ConfigMap object containing the certificate authority by using the following command. The certificate authority must be stored in the ca.crt key of the ConfigMap object. USD oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config Tip You can alternatively apply the following YAML to create the config map: apiVersion: v1 kind: ConfigMap metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM> 7.4.5. Sample basic authentication CR The following custom resource (CR) shows the parameters and acceptable values for a basic authentication identity provider. Basic authentication CR apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: basicidp 1 mappingMethod: claim 2 type: BasicAuth basicAuth: url: https://www.example.com/remote-idp 3 ca: 4 name: ca-config-map tlsClientCert: 5 name: client-cert-secret tlsClientKey: 6 name: client-key-secret 1 This provider name is prefixed to the returned user ID to form an identity name. 2 Controls how mappings are established between this provider's identities and User objects. 3 URL accepting credentials in Basic authentication headers. 4 Optional: Reference to an OpenShift Container Platform ConfigMap object containing the PEM-encoded certificate authority bundle to use in validating server certificates for the configured URL. 5 Optional: Reference to an OpenShift Container Platform Secret object containing the client certificate to present when making requests to the configured URL. 6 Reference to an OpenShift Container Platform Secret object containing the key for the client certificate. Required if tlsClientCert is specified. Additional resources See Identity provider parameters for information on parameters, such as mappingMethod , that are common to all identity providers. 7.4.6. Adding an identity provider to your cluster After you install your cluster, add an identity provider to it so your users can authenticate. Prerequisites Create an OpenShift Container Platform cluster. Create the custom resource (CR) for your identity providers. You must be logged in as an administrator. Procedure Apply the defined CR: USD oc apply -f </path/to/CR> Note If a CR does not exist, oc apply creates a new CR and might trigger the following warning: Warning: oc apply should be used on resources created by either oc create --save-config or oc apply . In this case you can safely ignore this warning. 
Log in to the cluster as a user from your identity provider, entering the password when prompted. USD oc login -u <username> Confirm that the user logged in successfully, and display the user name. USD oc whoami 7.4.7. Example Apache HTTPD configuration for basic identity providers The basic identify provider (IDP) configuration in OpenShift Container Platform 4 requires that the IDP server respond with JSON for success and failures. You can use CGI scripting in Apache HTTPD to accomplish this. This section provides examples. Example /etc/httpd/conf.d/login.conf Example /var/www/cgi-bin/login.cgi Example /var/www/cgi-bin/fail.cgi 7.4.7.1. File requirements These are the requirements for the files you create on an Apache HTTPD web server: login.cgi and fail.cgi must be executable ( chmod +x ). login.cgi and fail.cgi must have proper SELinux contexts if SELinux is enabled: restorecon -RFv /var/www/cgi-bin , or ensure that the context is httpd_sys_script_exec_t using ls -laZ . login.cgi is only executed if your user successfully logs in per Require and Auth directives. fail.cgi is executed if the user fails to log in, resulting in an HTTP 401 response. 7.4.8. Basic authentication troubleshooting The most common issue relates to network connectivity to the backend server. For simple debugging, run curl commands on the master. To test for a successful login, replace the <user> and <password> in the following example command with valid credentials. To test an invalid login, replace them with false credentials. USD curl --cacert /path/to/ca.crt --cert /path/to/client.crt --key /path/to/client.key -u <user>:<password> -v https://www.example.com/remote-idp Successful responses A 200 status with a sub (subject) key indicates success: {"sub":"userid"} The subject must be unique to the authenticated user, and must not be able to be modified. A successful response can optionally provide additional data, such as: A display name using the name key: {"sub":"userid", "name": "User Name", ...} An email address using the email key: {"sub":"userid", "email":"[email protected]", ...} A preferred user name using the preferred_username key: {"sub":"014fbff9a07c", "preferred_username":"bob", ...} The preferred_username key is useful when the unique, unchangeable subject is a database key or UID, and a more human-readable name exists. This is used as a hint when provisioning the OpenShift Container Platform user for the authenticated identity. Failed responses A 401 response indicates failed authentication. A non- 200 status or the presence of a non-empty "error" key indicates an error: {"error":"Error message"} 7.5. Configuring a request header identity provider Configure the request-header identity provider to identify users from request header values, such as X-Remote-User . It is typically used in combination with an authenticating proxy, which sets the request header value. 7.5.1. About identity providers in OpenShift Container Platform By default, only a kubeadmin user exists on your cluster. To specify an identity provider, you must create a custom resource (CR) that describes that identity provider and add it to the cluster. Note OpenShift Container Platform user names containing / , : , and % are not supported. 7.5.2. About request header authentication A request header identity provider identifies users from request header values, such as X-Remote-User . It is typically used in combination with an authenticating proxy, which sets the request header value. 
The request header identity provider cannot be combined with other identity providers that use direct password logins, such as htpasswd, Keystone, LDAP or basic authentication. Note You can also use the request header identity provider for advanced configurations such as the community-supported SAML authentication . Note that this solution is not supported by Red Hat. For users to authenticate using this identity provider, they must access https:// <namespace_route> /oauth/authorize (and subpaths) via an authenticating proxy. To accomplish this, configure the OAuth server to redirect unauthenticated requests for OAuth tokens to the proxy endpoint that proxies to https:// <namespace_route> /oauth/authorize . To redirect unauthenticated requests from clients expecting browser-based login flows: Set the provider.loginURL parameter to the authenticating proxy URL that will authenticate interactive clients and then proxy the request to https:// <namespace_route> /oauth/authorize . To redirect unauthenticated requests from clients expecting WWW-Authenticate challenges: Set the provider.challengeURL parameter to the authenticating proxy URL that will authenticate clients expecting WWW-Authenticate challenges and then proxy the request to https:// <namespace_route> /oauth/authorize . The provider.challengeURL and provider.loginURL parameters can include the following tokens in the query portion of the URL: USD{url} is replaced with the current URL, escaped to be safe in a query parameter. For example: https://www.example.com/sso-login?then=USD{url} USD{query} is replaced with the current query string, unescaped. For example: https://www.example.com/auth-proxy/oauth/authorize?USD{query} Important As of OpenShift Container Platform 4.1, your proxy must support mutual TLS. 7.5.2.1. SSPI connection support on Microsoft Windows Important Using SSPI connection support on Microsoft Windows is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The OpenShift CLI ( oc ) supports the Security Support Provider Interface (SSPI) to allow for SSO flows on Microsft Windows. If you use the request header identity provider with a GSSAPI-enabled proxy to connect an Active Directory server to OpenShift Container Platform, users can automatically authenticate to OpenShift Container Platform by using the oc command line interface from a domain-joined Microsoft Windows computer. 7.5.3. Creating a config map Identity providers use OpenShift Container Platform ConfigMap objects in the openshift-config namespace to contain the certificate authority bundle. These are primarily used to contain certificate bundles needed by the identity provider. Procedure Define an OpenShift Container Platform ConfigMap object containing the certificate authority by using the following command. The certificate authority must be stored in the ca.crt key of the ConfigMap object. 
USD oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config Tip You can alternatively apply the following YAML to create the config map: apiVersion: v1 kind: ConfigMap metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM> 7.5.4. Sample request header CR The following custom resource (CR) shows the parameters and acceptable values for a request header identity provider. Request header CR apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: requestheaderidp 1 mappingMethod: claim 2 type: RequestHeader requestHeader: challengeURL: "https://www.example.com/challenging-proxy/oauth/authorize?USD{query}" 3 loginURL: "https://www.example.com/login-proxy/oauth/authorize?USD{query}" 4 ca: 5 name: ca-config-map clientCommonNames: 6 - my-auth-proxy headers: 7 - X-Remote-User - SSO-User emailHeaders: 8 - X-Remote-User-Email nameHeaders: 9 - X-Remote-User-Display-Name preferredUsernameHeaders: 10 - X-Remote-User-Login 1 This provider name is prefixed to the user name in the request header to form an identity name. 2 Controls how mappings are established between this provider's identities and User objects. 3 Optional: URL to redirect unauthenticated /oauth/authorize requests to, that will authenticate browser-based clients and then proxy their request to https:// <namespace_route> /oauth/authorize . The URL that proxies to https:// <namespace_route> /oauth/authorize must end with /authorize (with no trailing slash), and also proxy subpaths, in order for OAuth approval flows to work properly. USD{url} is replaced with the current URL, escaped to be safe in a query parameter. USD{query} is replaced with the current query string. If this attribute is not defined, then loginURL must be used. 4 Optional: URL to redirect unauthenticated /oauth/authorize requests to, that will authenticate clients which expect WWW-Authenticate challenges, and then proxy them to https:// <namespace_route> /oauth/authorize . USD{url} is replaced with the current URL, escaped to be safe in a query parameter. USD{query} is replaced with the current query string. If this attribute is not defined, then challengeURL must be used. 5 Reference to an OpenShift Container Platform ConfigMap object containing a PEM-encoded certificate bundle. Used as a trust anchor to validate the TLS certificates presented by the remote server. Important As of OpenShift Container Platform 4.1, the ca field is required for this identity provider. This means that your proxy must support mutual TLS. 6 Optional: list of common names ( cn ). If set, a valid client certificate with a Common Name ( cn ) in the specified list must be presented before the request headers are checked for user names. If empty, any Common Name is allowed. Can only be used in combination with ca . 7 Header names to check, in order, for the user identity. The first header containing a value is used as the identity. Required, case-insensitive. 8 Header names to check, in order, for an email address. The first header containing a value is used as the email address. Optional, case-insensitive. 9 Header names to check, in order, for a display name. The first header containing a value is used as the display name. Optional, case-insensitive. 10 Header names to check, in order, for a preferred user name, if different than the immutable identity determined from the headers specified in headers . 
The first header containing a value is used as the preferred user name when provisioning. Optional, case-insensitive. Additional resources See Identity provider parameters for information on parameters, such as mappingMethod , that are common to all identity providers. 7.5.5. Adding an identity provider to your cluster After you install your cluster, add an identity provider to it so your users can authenticate. Prerequisites Create an OpenShift Container Platform cluster. Create the custom resource (CR) for your identity providers. You must be logged in as an administrator. Procedure Apply the defined CR: USD oc apply -f </path/to/CR> Note If a CR does not exist, oc apply creates a new CR and might trigger the following warning: Warning: oc apply should be used on resources created by either oc create --save-config or oc apply . In this case you can safely ignore this warning. Log in to the cluster as a user from your identity provider, entering the password when prompted. USD oc login -u <username> Confirm that the user logged in successfully, and display the user name. USD oc whoami 7.5.6. Example Apache authentication configuration using request header This example configures an Apache authentication proxy for the OpenShift Container Platform using the request header identity provider. Custom proxy configuration Using the mod_auth_gssapi module is a popular way to configure the Apache authentication proxy using the request header identity provider; however, it is not required. Other proxies can easily be used if the following requirements are met: Block the X-Remote-User header from client requests to prevent spoofing. Enforce client certificate authentication in the RequestHeaderIdentityProvider configuration. Require the X-Csrf-Token header be set for all authentication requests using the challenge flow. Make sure only the /oauth/authorize endpoint and its subpaths are proxied; redirects must be rewritten to allow the backend server to send the client to the correct location. The URL that proxies to https://<namespace_route>/oauth/authorize must end with /authorize with no trailing slash. For example, https://proxy.example.com/login-proxy/authorize?... must proxy to https://<namespace_route>/oauth/authorize?... . Subpaths of the URL that proxies to https://<namespace_route>/oauth/authorize must proxy to subpaths of https://<namespace_route>/oauth/authorize . For example, https://proxy.example.com/login-proxy/authorize/approve?... must proxy to https://<namespace_route>/oauth/authorize/approve?... . Note The https://<namespace_route> address is the route to the OAuth server and can be obtained by running oc get route -n openshift-authentication . Configuring Apache authentication using request header This example uses the mod_auth_gssapi module to configure an Apache authentication proxy using the request header identity provider. Prerequisites Obtain the mod_auth_gssapi module from the Optional channel . You must have the following packages installed on your local machine: httpd mod_ssl mod_session apr-util-openssl mod_auth_gssapi Generate a CA for validating requests that submit the trusted header. Define an OpenShift Container Platform ConfigMap object containing the CA. This is done by running: USD oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config 1 1 The CA must be stored in the ca.crt key of the ConfigMap object. 
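If you do not already have a CA and a proxy client certificate, the following openssl commands are one possible way to create them. This is a sketch only: file names and subject values are examples, and the client certificate must include the TLS Web Client Authentication extended key usage so that it can be used with SSLProxyMachineCertificateFile :

# CA used to sign the proxy client certificate; its ca.crt goes into the ca-config-map above
openssl req -x509 -newkey rsa:4096 -nodes -days 3650 \
  -subj "/CN=request-header-ca" \
  -addext "basicConstraints=critical,CA:TRUE" \
  -keyout ca.key -out ca.crt

# Client key and certificate signing request for the proxy
openssl req -newkey rsa:4096 -nodes \
  -subj "/CN=my-auth-proxy" -keyout authproxy.key -out authproxy.csr

# Sign the request with the CA and add the clientAuth extended key usage
openssl x509 -req -in authproxy.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -days 365 -extfile <(printf "extendedKeyUsage=clientAuth") -out authproxy.crt

# mod_ssl expects the client certificate and key concatenated in a single PEM file
cat authproxy.crt authproxy.key > /etc/pki/tls/certs/authproxy.pem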
Tip You can alternatively apply the following YAML to create the config map: apiVersion: v1 kind: ConfigMap metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM> Generate a client certificate for the proxy. You can generate this certificate by using any x509 certificate tooling. The client certificate must be signed by the CA you generated for validating requests that submit the trusted header. Create the custom resource (CR) for your identity providers. Procedure This proxy uses a client certificate to connect to the OAuth server, which is configured to trust the X-Remote-User header. Create the certificate for the Apache configuration. The certificate that you specify as the SSLProxyMachineCertificateFile parameter value is the proxy's client certificate that is used to authenticate the proxy to the server. It must use TLS Web Client Authentication as the extended key type. Create the Apache configuration. Use the following template to provide your required settings and values: Important Carefully review the template and customize its contents to fit your environment. Note The https://<namespace_route> address is the route to the OAuth server and can be obtained by running oc get route -n openshift-authentication . Update the identityProviders stanza in the custom resource (CR): identityProviders: - name: requestheaderidp type: RequestHeader requestHeader: challengeURL: "https://<namespace_route>/challenging-proxy/oauth/authorize?USD{query}" loginURL: "https://<namespace_route>/login-proxy/oauth/authorize?USD{query}" ca: name: ca-config-map clientCommonNames: - my-auth-proxy headers: - X-Remote-User Verify the configuration. Confirm that you can bypass the proxy by requesting a token by supplying the correct client certificate and header: # curl -L -k -H "X-Remote-User: joe" \ --cert /etc/pki/tls/certs/authproxy.pem \ https://<namespace_route>/oauth/token/request Confirm that requests that do not supply the client certificate fail by requesting a token without the certificate: # curl -L -k -H "X-Remote-User: joe" \ https://<namespace_route>/oauth/token/request Confirm that the challengeURL redirect is active: # curl -k -v -H 'X-Csrf-Token: 1' \ https://<namespace_route>/oauth/authorize?client_id=openshift-challenging-client&response_type=token Copy the challengeURL redirect to use in the step. Run this command to show a 401 response with a WWW-Authenticate basic challenge, a negotiate challenge, or both challenges: # curl -k -v -H 'X-Csrf-Token: 1' \ <challengeURL_redirect + query> Test logging in to the OpenShift CLI ( oc ) with and without using a Kerberos ticket: If you generated a Kerberos ticket by using kinit , destroy it: # kdestroy -c cache_name 1 1 Make sure to provide the name of your Kerberos cache. Log in to the oc tool by using your Kerberos credentials: # oc login -u <username> Enter your Kerberos password at the prompt. Log out of the oc tool: # oc logout Use your Kerberos credentials to get a ticket: # kinit Enter your Kerberos user name and password at the prompt. Confirm that you can log in to the oc tool: # oc login If your configuration is correct, you are logged in without entering separate credentials. 7.6. Configuring a GitHub or GitHub Enterprise identity provider Configure the github identity provider to validate user names and passwords against GitHub or GitHub Enterprise's OAuth authentication server. OAuth facilitates a token exchange flow between OpenShift Container Platform and GitHub or GitHub Enterprise. 
You can use the GitHub integration to connect to either GitHub or GitHub Enterprise. For GitHub Enterprise integrations, you must provide the hostname of your instance and can optionally provide a ca certificate bundle to use in requests to the server. Note The following steps apply to both GitHub and GitHub Enterprise unless noted. 7.6.1. About identity providers in OpenShift Container Platform By default, only a kubeadmin user exists on your cluster. To specify an identity provider, you must create a custom resource (CR) that describes that identity provider and add it to the cluster. Note OpenShift Container Platform user names containing / , : , and % are not supported. 7.6.2. About GitHub authentication Configuring GitHub authentication allows users to log in to OpenShift Container Platform with their GitHub credentials. To prevent anyone with any GitHub user ID from logging in to your OpenShift Container Platform cluster, you can restrict access to only those in specific GitHub organizations. 7.6.3. Registering a GitHub application To use GitHub or GitHub Enterprise as an identity provider, you must register an application to use. Procedure Register an application on GitHub: For GitHub, click Settings Developer settings OAuth Apps Register a new OAuth application . For GitHub Enterprise, go to your GitHub Enterprise home page and then click Settings Developer settings Register a new application . Enter an application name, for example My OpenShift Install . Enter a homepage URL, such as https://oauth-openshift.apps.<cluster-name>.<cluster-domain> . Optional: Enter an application description. Enter the authorization callback URL, where the end of the URL contains the identity provider name : For example: Click Register application . GitHub provides a client ID and a client secret. You need these values to complete the identity provider configuration. 7.6.4. Creating the secret Identity providers use OpenShift Container Platform Secret objects in the openshift-config namespace to contain the client secret, client certificates, and keys. Procedure Create a Secret object containing a string by using the following command: USD oc create secret generic <secret_name> --from-literal=clientSecret=<secret> -n openshift-config Tip You can alternatively apply the following YAML to create the secret: apiVersion: v1 kind: Secret metadata: name: <secret_name> namespace: openshift-config type: Opaque data: clientSecret: <base64_encoded_client_secret> You can define a Secret object containing the contents of a file by using the following command: USD oc create secret generic <secret_name> --from-file=<path_to_file> -n openshift-config 7.6.5. Creating a config map Identity providers use OpenShift Container Platform ConfigMap objects in the openshift-config namespace to contain the certificate authority bundle. These are primarily used to contain certificate bundles needed by the identity provider. Note This procedure is only required for GitHub Enterprise. Procedure Define an OpenShift Container Platform ConfigMap object containing the certificate authority by using the following command. The certificate authority must be stored in the ca.crt key of the ConfigMap object. USD oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config Tip You can alternatively apply the following YAML to create the config map: apiVersion: v1 kind: ConfigMap metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM> 7.6.6. 
Sample GitHub CR The following custom resource (CR) shows the parameters and acceptable values for a GitHub identity provider. GitHub CR apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: githubidp 1 mappingMethod: claim 2 type: GitHub github: ca: 3 name: ca-config-map clientID: {...} 4 clientSecret: 5 name: github-secret hostname: ... 6 organizations: 7 - myorganization1 - myorganization2 teams: 8 - myorganization1/team-a - myorganization2/team-b 1 This provider name is prefixed to the GitHub numeric user ID to form an identity name. It is also used to build the callback URL. 2 Controls how mappings are established between this provider's identities and User objects. 3 Optional: Reference to an OpenShift Container Platform ConfigMap object containing the PEM-encoded certificate authority bundle to use in validating server certificates for the configured URL. Only for use in GitHub Enterprise with a non-publicly trusted root certificate. 4 The client ID of a registered GitHub OAuth application . The application must be configured with a callback URL of https://oauth-openshift.apps.<cluster-name>.<cluster-domain>/oauth2callback/<idp-provider-name> . 5 Reference to an OpenShift Container Platform Secret object containing the client secret issued by GitHub. 6 For GitHub Enterprise, you must provide the hostname of your instance, such as example.com . This value must match the GitHub Enterprise hostname value in in the /setup/settings file and cannot include a port number. If this value is not set, then either teams or organizations must be defined. For GitHub, omit this parameter. 7 The list of organizations. Either the organizations or teams field must be set unless the hostname field is set, or if mappingMethod is set to lookup . Cannot be used in combination with the teams field. 8 The list of teams. Either the teams or organizations field must be set unless the hostname field is set, or if mappingMethod is set to lookup . Cannot be used in combination with the organizations field. Note If organizations or teams is specified, only GitHub users that are members of at least one of the listed organizations will be allowed to log in. If the GitHub OAuth application configured in clientID is not owned by the organization, an organization owner must grant third-party access to use this option. This can be done during the first GitHub login by the organization's administrator, or from the GitHub organization settings. Additional resources See Identity provider parameters for information on parameters, such as mappingMethod , that are common to all identity providers. 7.6.7. Adding an identity provider to your cluster After you install your cluster, add an identity provider to it so your users can authenticate. Prerequisites Create an OpenShift Container Platform cluster. Create the custom resource (CR) for your identity providers. You must be logged in as an administrator. Procedure Apply the defined CR: USD oc apply -f </path/to/CR> Note If a CR does not exist, oc apply creates a new CR and might trigger the following warning: Warning: oc apply should be used on resources created by either oc create --save-config or oc apply . In this case you can safely ignore this warning. Obtain a token from the OAuth server. As long as the kubeadmin user has been removed, the oc login command provides instructions on how to access a web page where you can retrieve the token. You can also access this page from the web console by navigating to (?) 
Help Command Line Tools Copy Login Command . Log in to the cluster, passing in the token to authenticate. USD oc login --token=<token> Note This identity provider does not support logging in with a user name and password. Confirm that the user logged in successfully, and display the user name. USD oc whoami 7.7. Configuring a GitLab identity provider Configure the gitlab identity provider using GitLab.com or any other GitLab instance as an identity provider. 7.7.1. About identity providers in OpenShift Container Platform By default, only a kubeadmin user exists on your cluster. To specify an identity provider, you must create a custom resource (CR) that describes that identity provider and add it to the cluster. Note OpenShift Container Platform user names containing / , : , and % are not supported. 7.7.2. About GitLab authentication Configuring GitLab authentication allows users to log in to OpenShift Container Platform with their GitLab credentials. If you use GitLab version 7.7.0 to 11.0, you connect using the OAuth integration . If you use GitLab version 11.1 or later, you can use OpenID Connect (OIDC) to connect instead of OAuth. 7.7.3. Creating the secret Identity providers use OpenShift Container Platform Secret objects in the openshift-config namespace to contain the client secret, client certificates, and keys. Procedure Create a Secret object containing a string by using the following command: USD oc create secret generic <secret_name> --from-literal=clientSecret=<secret> -n openshift-config Tip You can alternatively apply the following YAML to create the secret: apiVersion: v1 kind: Secret metadata: name: <secret_name> namespace: openshift-config type: Opaque data: clientSecret: <base64_encoded_client_secret> You can define a Secret object containing the contents of a file by using the following command: USD oc create secret generic <secret_name> --from-file=<path_to_file> -n openshift-config 7.7.4. Creating a config map Identity providers use OpenShift Container Platform ConfigMap objects in the openshift-config namespace to contain the certificate authority bundle. These are primarily used to contain certificate bundles needed by the identity provider. Note This procedure is only required for GitHub Enterprise. Procedure Define an OpenShift Container Platform ConfigMap object containing the certificate authority by using the following command. The certificate authority must be stored in the ca.crt key of the ConfigMap object. USD oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config Tip You can alternatively apply the following YAML to create the config map: apiVersion: v1 kind: ConfigMap metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM> 7.7.5. Sample GitLab CR The following custom resource (CR) shows the parameters and acceptable values for a GitLab identity provider. GitLab CR apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: gitlabidp 1 mappingMethod: claim 2 type: GitLab gitlab: clientID: {...} 3 clientSecret: 4 name: gitlab-secret url: https://gitlab.com 5 ca: 6 name: ca-config-map 1 This provider name is prefixed to the GitLab numeric user ID to form an identity name. It is also used to build the callback URL. 2 Controls how mappings are established between this provider's identities and User objects. 3 The client ID of a registered GitLab OAuth application . 
The application must be configured with a callback URL of https://oauth-openshift.apps.<cluster-name>.<cluster-domain>/oauth2callback/<idp-provider-name> . 4 Reference to an OpenShift Container Platform Secret object containing the client secret issued by GitLab. 5 The host URL of a GitLab provider. This could either be https://gitlab.com/ or any other self hosted instance of GitLab. 6 Optional: Reference to an OpenShift Container Platform ConfigMap object containing the PEM-encoded certificate authority bundle to use in validating server certificates for the configured URL. Additional resources See Identity provider parameters for information on parameters, such as mappingMethod , that are common to all identity providers. 7.7.6. Adding an identity provider to your cluster After you install your cluster, add an identity provider to it so your users can authenticate. Prerequisites Create an OpenShift Container Platform cluster. Create the custom resource (CR) for your identity providers. You must be logged in as an administrator. Procedure Apply the defined CR: USD oc apply -f </path/to/CR> Note If a CR does not exist, oc apply creates a new CR and might trigger the following warning: Warning: oc apply should be used on resources created by either oc create --save-config or oc apply . In this case you can safely ignore this warning. Log in to the cluster as a user from your identity provider, entering the password when prompted. USD oc login -u <username> Confirm that the user logged in successfully, and display the user name. USD oc whoami 7.8. Configuring a Google identity provider Configure the google identity provider using the Google OpenID Connect integration . 7.8.1. About identity providers in OpenShift Container Platform By default, only a kubeadmin user exists on your cluster. To specify an identity provider, you must create a custom resource (CR) that describes that identity provider and add it to the cluster. Note OpenShift Container Platform user names containing / , : , and % are not supported. 7.8.2. About Google authentication Using Google as an identity provider allows any Google user to authenticate to your server. You can limit authentication to members of a specific hosted domain with the hostedDomain configuration attribute. Note Using Google as an identity provider requires users to get a token using <namespace_route>/oauth/token/request to use with command-line tools. 7.8.3. Creating the secret Identity providers use OpenShift Container Platform Secret objects in the openshift-config namespace to contain the client secret, client certificates, and keys. Procedure Create a Secret object containing a string by using the following command: USD oc create secret generic <secret_name> --from-literal=clientSecret=<secret> -n openshift-config Tip You can alternatively apply the following YAML to create the secret: apiVersion: v1 kind: Secret metadata: name: <secret_name> namespace: openshift-config type: Opaque data: clientSecret: <base64_encoded_client_secret> You can define a Secret object containing the contents of a file by using the following command: USD oc create secret generic <secret_name> --from-file=<path_to_file> -n openshift-config 7.8.4. Sample Google CR The following custom resource (CR) shows the parameters and acceptable values for a Google identity provider. 
Google CR apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: googleidp 1 mappingMethod: claim 2 type: Google google: clientID: {...} 3 clientSecret: 4 name: google-secret hostedDomain: "example.com" 5 1 This provider name is prefixed to the Google numeric user ID to form an identity name. It is also used to build the redirect URL. 2 Controls how mappings are established between this provider's identities and User objects. 3 The client ID of a registered Google project . The project must be configured with a redirect URI of https://oauth-openshift.apps.<cluster-name>.<cluster-domain>/oauth2callback/<idp-provider-name> . 4 Reference to an OpenShift Container Platform Secret object containing the client secret issued by Google. 5 A hosted domain used to restrict sign-in accounts. Optional if the lookup mappingMethod is used. If empty, any Google account is allowed to authenticate. Additional resources See Identity provider parameters for information on parameters, such as mappingMethod , that are common to all identity providers. 7.8.5. Adding an identity provider to your cluster After you install your cluster, add an identity provider to it so your users can authenticate. Prerequisites Create an OpenShift Container Platform cluster. Create the custom resource (CR) for your identity providers. You must be logged in as an administrator. Procedure Apply the defined CR: USD oc apply -f </path/to/CR> Note If a CR does not exist, oc apply creates a new CR and might trigger the following warning: Warning: oc apply should be used on resources created by either oc create --save-config or oc apply . In this case you can safely ignore this warning. Obtain a token from the OAuth server. As long as the kubeadmin user has been removed, the oc login command provides instructions on how to access a web page where you can retrieve the token. You can also access this page from the web console by navigating to (?) Help Command Line Tools Copy Login Command . Log in to the cluster, passing in the token to authenticate. USD oc login --token=<token> Note This identity provider does not support logging in with a user name and password. Confirm that the user logged in successfully, and display the user name. USD oc whoami 7.9. Configuring an OpenID Connect identity provider Configure the oidc identity provider to integrate with an OpenID Connect identity provider using an Authorization Code Flow . 7.9.1. About identity providers in OpenShift Container Platform By default, only a kubeadmin user exists on your cluster. To specify an identity provider, you must create a custom resource (CR) that describes that identity provider and add it to the cluster. Note OpenShift Container Platform user names containing / , : , and % are not supported. 7.9.2. About OpenID Connect authentication The Authentication Operator in OpenShift Container Platform requires that the configured OpenID Connect identity provider implements the OpenID Connect Discovery specification. Note ID Token and UserInfo decryptions are not supported. By default, the openid scope is requested. If required, extra scopes can be specified in the extraScopes field. Claims are read from the JWT id_token returned from the OpenID identity provider and, if specified, from the JSON returned by the UserInfo URL. At least one claim must be configured to use as the user's identity. The standard identity claim is sub . 
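For illustration only, the decoded payload of an id_token that meets these requirements might look like the following. The values are hypothetical, and the exact set of claims depends on your OpenID Connect provider and the scopes that you request:

{
  "iss": "https://www.idp-issuer.com",
  "sub": "248289761001",
  "aud": "<client_id>",
  "exp": 1712345678,
  "preferred_username": "janedoe",
  "name": "Jane Doe",
  "email": "janedoe@example.com",
  "groups": ["developers"]
}

In this example, the sub claim forms the identity; the remaining claims are candidates for the user attributes described below.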
You can also indicate which claims to use as the user's preferred user name, display name, and email address. If multiple claims are specified, the first one with a non-empty value is used. The following table lists the standard claims: Claim Description sub Short for "subject identifier." The remote identity for the user at the issuer. preferred_username The preferred user name when provisioning a user. A shorthand name that the user wants to be referred to as, such as janedoe . Typically a value that corresponding to the user's login or username in the authentication system, such as username or email. email Email address. name Display name. See the OpenID claims documentation for more information. Note Unless your OpenID Connect identity provider supports the resource owner password credentials (ROPC) grant flow, users must get a token from <namespace_route>/oauth/token/request to use with command-line tools. 7.9.3. Supported OIDC providers Red Hat tests and supports specific OpenID Connect (OIDC) providers with OpenShift Container Platform. The following OpenID Connect (OIDC) providers are tested and supported with OpenShift Container Platform. Using an OIDC provider that is not on the following list might work with OpenShift Container Platform, but the provider was not tested by Red Hat and therefore is not supported by Red Hat. Active Directory Federation Services for Windows Server Note Currently, it is not supported to use Active Directory Federation Services for Windows Server with OpenShift Container Platform when custom claims are used. GitLab Google Keycloak Microsoft identity platform (Azure Active Directory v2.0) Note Currently, it is not supported to use Microsoft identity platform when group names are required to be synced. Okta Ping Identity Red Hat Single Sign-On 7.9.4. Creating the secret Identity providers use OpenShift Container Platform Secret objects in the openshift-config namespace to contain the client secret, client certificates, and keys. Procedure Create a Secret object containing a string by using the following command: USD oc create secret generic <secret_name> --from-literal=clientSecret=<secret> -n openshift-config Tip You can alternatively apply the following YAML to create the secret: apiVersion: v1 kind: Secret metadata: name: <secret_name> namespace: openshift-config type: Opaque data: clientSecret: <base64_encoded_client_secret> You can define a Secret object containing the contents of a file by using the following command: USD oc create secret generic <secret_name> --from-file=<path_to_file> -n openshift-config 7.9.5. Creating a config map Identity providers use OpenShift Container Platform ConfigMap objects in the openshift-config namespace to contain the certificate authority bundle. These are primarily used to contain certificate bundles needed by the identity provider. Note This procedure is only required for GitHub Enterprise. Procedure Define an OpenShift Container Platform ConfigMap object containing the certificate authority by using the following command. The certificate authority must be stored in the ca.crt key of the ConfigMap object. USD oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config Tip You can alternatively apply the following YAML to create the config map: apiVersion: v1 kind: ConfigMap metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM> 7.9.6. 
Sample OpenID Connect CRs The following custom resources (CRs) show the parameters and acceptable values for an OpenID Connect identity provider. If you must specify a custom certificate bundle, extra scopes, extra authorization request parameters, or a userInfo URL, use the full OpenID Connect CR. Standard OpenID Connect CR apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: oidcidp 1 mappingMethod: claim 2 type: OpenID openID: clientID: ... 3 clientSecret: 4 name: idp-secret claims: 5 preferredUsername: - preferred_username name: - name email: - email groups: - groups issuer: https://www.idp-issuer.com 6 1 This provider name is prefixed to the value of the identity claim to form an identity name. It is also used to build the redirect URL. 2 Controls how mappings are established between this provider's identities and User objects. 3 The client ID of a client registered with the OpenID provider. The client must be allowed to redirect to https://oauth-openshift.apps.<cluster_name>.<cluster_domain>/oauth2callback/<idp_provider_name> . 4 A reference to an OpenShift Container Platform Secret object containing the client secret. 5 The list of claims to use as the identity. The first non-empty claim is used. 6 The Issuer Identifier described in the OpenID spec. Must use https without query or fragment component. Full OpenID Connect CR apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: oidcidp mappingMethod: claim type: OpenID openID: clientID: ... clientSecret: name: idp-secret ca: 1 name: ca-config-map extraScopes: 2 - email - profile extraAuthorizeParameters: 3 include_granted_scopes: "true" claims: preferredUsername: 4 - preferred_username - email name: 5 - nickname - given_name - name email: 6 - custom_email_claim - email groups: 7 - groups issuer: https://www.idp-issuer.com 1 Optional: Reference to an OpenShift Container Platform config map containing the PEM-encoded certificate authority bundle to use in validating server certificates for the configured URL. 2 Optional: The list of scopes to request, in addition to the openid scope, during the authorization token request. 3 Optional: A map of extra parameters to add to the authorization token request. 4 The list of claims to use as the preferred user name when provisioning a user for this identity. The first non-empty claim is used. 5 The list of claims to use as the display name. The first non-empty claim is used. 6 The list of claims to use as the email address. The first non-empty claim is used. 7 The list of claims to use to synchronize groups from the OpenID Connect provider to OpenShift Container Platform upon user login. The first non-empty claim is used. Additional resources See Identity provider parameters for information on parameters, such as mappingMethod , that are common to all identity providers. 7.9.7. Adding an identity provider to your cluster After you install your cluster, add an identity provider to it so your users can authenticate. Prerequisites Create an OpenShift Container Platform cluster. Create the custom resource (CR) for your identity providers. You must be logged in as an administrator. Procedure Apply the defined CR: USD oc apply -f </path/to/CR> Note If a CR does not exist, oc apply creates a new CR and might trigger the following warning: Warning: oc apply should be used on resources created by either oc create --save-config or oc apply . In this case you can safely ignore this warning. 
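Optional: After you apply the CR, you can watch the change roll out before you try to log in. The following is a minimal sketch using standard commands; exact pod names vary by cluster:

$ oc get clusteroperator authentication

$ oc get pods -n openshift-authentication

When the authentication Operator reports Available and is no longer Progressing, and the OAuth server pods in the openshift-authentication namespace have redeployed, the new identity provider configuration is active.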
Obtain a token from the OAuth server. As long as the kubeadmin user has been removed, the oc login command provides instructions on how to access a web page where you can retrieve the token. You can also access this page from the web console by navigating to (?) Help Command Line Tools Copy Login Command . Log in to the cluster, passing in the token to authenticate. USD oc login --token=<token> Note If your OpenID Connect identity provider supports the resource owner password credentials (ROPC) grant flow, you can log in with a user name and password. You might need to take steps to enable the ROPC grant flow for your identity provider. After the OIDC identity provider is configured in OpenShift Container Platform, you can log in by using the following command, which prompts for your user name and password: USD oc login -u <identity_provider_username> --server=<api_server_url_and_port> Confirm that the user logged in successfully, and display the user name. USD oc whoami 7.9.8. Configuring identity providers using the web console Configure your identity provider (IDP) through the web console instead of the CLI. Prerequisites You must be logged in to the web console as a cluster administrator. Procedure Navigate to Administration Cluster Settings . Under the Configuration tab, click OAuth . Under the Identity Providers section, select your identity provider from the Add drop-down menu. Note You can specify multiple IDPs through the web console without overwriting existing IDPs.
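When you manage the configuration from the command line instead, all providers are defined in the single identityProviders list of the cluster OAuth custom resource, so include every provider that you want to keep when you apply an update. The following is a minimal sketch of a combined configuration; the provider and secret names are taken from the htpasswd and OpenID Connect examples in this document and are placeholders for your own values:

apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: my_htpasswd_provider
    mappingMethod: claim
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpass-secret
  - name: oidcidp
    mappingMethod: claim
    type: OpenID
    openID:
      clientID: <client_id>
      clientSecret:
        name: idp-secret
      claims:
        preferredUsername:
        - preferred_username
        email:
        - email
      issuer: https://www.idp-issuer.com

Each provider keeps its own name, so the identities that it creates remain distinct even when the same person authenticates through more than one provider.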
[ "htpasswd -c -B -b </path/to/users.htpasswd> <username> <password>", "htpasswd -c -B -b users.htpasswd <username> <password>", "Adding password for user user1", "htpasswd -B -b </path/to/users.htpasswd> <user_name> <password>", "> htpasswd.exe -c -B -b <\\path\\to\\users.htpasswd> <username> <password>", "> htpasswd.exe -c -B -b users.htpasswd <username> <password>", "Adding password for user user1", "> htpasswd.exe -b <\\path\\to\\users.htpasswd> <username> <password>", "oc create secret generic htpass-secret --from-file=htpasswd=<path_to_users.htpasswd> -n openshift-config 1", "apiVersion: v1 kind: Secret metadata: name: htpass-secret namespace: openshift-config type: Opaque data: htpasswd: <base64_encoded_htpasswd_file_contents>", "apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: my_htpasswd_provider 1 mappingMethod: claim 2 type: HTPasswd htpasswd: fileData: name: htpass-secret 3", "oc apply -f </path/to/CR>", "oc login -u <username>", "oc whoami", "oc get secret htpass-secret -ojsonpath={.data.htpasswd} -n openshift-config | base64 --decode > users.htpasswd", "htpasswd -bB users.htpasswd <username> <password>", "Adding password for user <username>", "htpasswd -D users.htpasswd <username>", "Deleting password for user <username>", "oc create secret generic htpass-secret --from-file=htpasswd=users.htpasswd --dry-run=client -o yaml -n openshift-config | oc replace -f -", "apiVersion: v1 kind: Secret metadata: name: htpass-secret namespace: openshift-config type: Opaque data: htpasswd: <base64_encoded_htpasswd_file_contents>", "oc delete user <username>", "user.user.openshift.io \"<username>\" deleted", "oc delete identity my_htpasswd_provider:<username>", "identity.user.openshift.io \"my_htpasswd_provider:<username>\" deleted", "oc create secret tls <secret_name> --key=key.pem --cert=cert.pem -n openshift-config", "apiVersion: v1 kind: Secret metadata: name: <secret_name> namespace: openshift-config type: kubernetes.io/tls data: tls.crt: <base64_encoded_cert> tls.key: <base64_encoded_key>", "oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config", "apiVersion: v1 kind: ConfigMap metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM>", "apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: keystoneidp 1 mappingMethod: claim 2 type: Keystone keystone: domainName: default 3 url: https://keystone.example.com:5000 4 ca: 5 name: ca-config-map tlsClientCert: 6 name: client-cert-secret tlsClientKey: 7 name: client-key-secret", "oc apply -f </path/to/CR>", "oc login -u <username>", "oc whoami", "ldap://host:port/basedn?attribute?scope?filter", "(&(<filter>)(<attribute>=<username>))", "ldap://ldap.example.com/o=Acme?cn?sub?(enabled=true)", "oc create secret generic ldap-secret --from-literal=bindPassword=<secret> -n openshift-config 1", "apiVersion: v1 kind: Secret metadata: name: ldap-secret namespace: openshift-config type: Opaque data: bindPassword: <base64_encoded_bind_password>", "oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config", "apiVersion: v1 kind: ConfigMap metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM>", "apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: ldapidp 1 mappingMethod: claim 2 type: LDAP ldap: attributes: id: 3 - dn email: 4 - mail name: 5 - cn preferredUsername: 6 - 
uid bindDN: \"\" 7 bindPassword: 8 name: ldap-secret ca: 9 name: ca-config-map insecure: false 10 url: \"ldaps://ldaps.example.com/ou=users,dc=acme,dc=com?uid\" 11", "oc apply -f </path/to/CR>", "oc login -u <username>", "oc whoami", "{\"error\":\"Error message\"}", "{\"sub\":\"userid\"} 1", "{\"sub\":\"userid\", \"name\": \"User Name\", ...}", "{\"sub\":\"userid\", \"email\":\"[email protected]\", ...}", "{\"sub\":\"014fbff9a07c\", \"preferred_username\":\"bob\", ...}", "oc create secret tls <secret_name> --key=key.pem --cert=cert.pem -n openshift-config", "apiVersion: v1 kind: Secret metadata: name: <secret_name> namespace: openshift-config type: kubernetes.io/tls data: tls.crt: <base64_encoded_cert> tls.key: <base64_encoded_key>", "oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config", "apiVersion: v1 kind: ConfigMap metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM>", "apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: basicidp 1 mappingMethod: claim 2 type: BasicAuth basicAuth: url: https://www.example.com/remote-idp 3 ca: 4 name: ca-config-map tlsClientCert: 5 name: client-cert-secret tlsClientKey: 6 name: client-key-secret", "oc apply -f </path/to/CR>", "oc login -u <username>", "oc whoami", "<VirtualHost *:443> # CGI Scripts in here DocumentRoot /var/www/cgi-bin # SSL Directives SSLEngine on SSLCipherSuite PROFILE=SYSTEM SSLProxyCipherSuite PROFILE=SYSTEM SSLCertificateFile /etc/pki/tls/certs/localhost.crt SSLCertificateKeyFile /etc/pki/tls/private/localhost.key # Configure HTTPD to execute scripts ScriptAlias /basic /var/www/cgi-bin # Handles a failed login attempt ErrorDocument 401 /basic/fail.cgi # Handles authentication <Location /basic/login.cgi> AuthType Basic AuthName \"Please Log In\" AuthBasicProvider file AuthUserFile /etc/httpd/conf/passwords Require valid-user </Location> </VirtualHost>", "#!/bin/bash echo \"Content-Type: application/json\" echo \"\" echo '{\"sub\":\"userid\", \"name\":\"'USDREMOTE_USER'\"}' exit 0", "#!/bin/bash echo \"Content-Type: application/json\" echo \"\" echo '{\"error\": \"Login failure\"}' exit 0", "curl --cacert /path/to/ca.crt --cert /path/to/client.crt --key /path/to/client.key -u <user>:<password> -v https://www.example.com/remote-idp", "{\"sub\":\"userid\"}", "{\"sub\":\"userid\", \"name\": \"User Name\", ...}", "{\"sub\":\"userid\", \"email\":\"[email protected]\", ...}", "{\"sub\":\"014fbff9a07c\", \"preferred_username\":\"bob\", ...}", "oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config", "apiVersion: v1 kind: ConfigMap metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM>", "apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: requestheaderidp 1 mappingMethod: claim 2 type: RequestHeader requestHeader: challengeURL: \"https://www.example.com/challenging-proxy/oauth/authorize?USD{query}\" 3 loginURL: \"https://www.example.com/login-proxy/oauth/authorize?USD{query}\" 4 ca: 5 name: ca-config-map clientCommonNames: 6 - my-auth-proxy headers: 7 - X-Remote-User - SSO-User emailHeaders: 8 - X-Remote-User-Email nameHeaders: 9 - X-Remote-User-Display-Name preferredUsernameHeaders: 10 - X-Remote-User-Login", "oc apply -f </path/to/CR>", "oc login -u <username>", "oc whoami", "oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config 1", "apiVersion: v1 kind: ConfigMap 
metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM>", "LoadModule request_module modules/mod_request.so LoadModule auth_gssapi_module modules/mod_auth_gssapi.so Some Apache configurations might require these modules. LoadModule auth_form_module modules/mod_auth_form.so LoadModule session_module modules/mod_session.so Nothing needs to be served over HTTP. This virtual host simply redirects to HTTPS. <VirtualHost *:80> DocumentRoot /var/www/html RewriteEngine On RewriteRule ^(.*)USD https://%{HTTP_HOST}USD1 [R,L] </VirtualHost> <VirtualHost *:443> # This needs to match the certificates you generated. See the CN and X509v3 # Subject Alternative Name in the output of: # openssl x509 -text -in /etc/pki/tls/certs/localhost.crt ServerName www.example.com DocumentRoot /var/www/html SSLEngine on SSLCertificateFile /etc/pki/tls/certs/localhost.crt SSLCertificateKeyFile /etc/pki/tls/private/localhost.key SSLCACertificateFile /etc/pki/CA/certs/ca.crt SSLProxyEngine on SSLProxyCACertificateFile /etc/pki/CA/certs/ca.crt # It is critical to enforce client certificates. Otherwise, requests can # spoof the X-Remote-User header by accessing the /oauth/authorize endpoint # directly. SSLProxyMachineCertificateFile /etc/pki/tls/certs/authproxy.pem # To use the challenging-proxy, an X-Csrf-Token must be present. RewriteCond %{REQUEST_URI} ^/challenging-proxy RewriteCond %{HTTP:X-Csrf-Token} ^USD [NC] RewriteRule ^.* - [F,L] <Location /challenging-proxy/oauth/authorize> # Insert your backend server name/ip here. ProxyPass https://<namespace_route>/oauth/authorize AuthName \"SSO Login\" # For Kerberos AuthType GSSAPI Require valid-user RequestHeader set X-Remote-User %{REMOTE_USER}s GssapiCredStore keytab:/etc/httpd/protected/auth-proxy.keytab # Enable the following if you want to allow users to fallback # to password based authentication when they do not have a client # configured to perform kerberos authentication. GssapiBasicAuth On # For ldap: # AuthBasicProvider ldap # AuthLDAPURL \"ldap://ldap.example.com:389/ou=People,dc=my-domain,dc=com?uid?sub?(objectClass=*)\" </Location> <Location /login-proxy/oauth/authorize> # Insert your backend server name/ip here. ProxyPass https://<namespace_route>/oauth/authorize AuthName \"SSO Login\" AuthType GSSAPI Require valid-user RequestHeader set X-Remote-User %{REMOTE_USER}s env=REMOTE_USER GssapiCredStore keytab:/etc/httpd/protected/auth-proxy.keytab # Enable the following if you want to allow users to fallback # to password based authentication when they do not have a client # configured to perform kerberos authentication. 
GssapiBasicAuth On ErrorDocument 401 /login.html </Location> </VirtualHost> RequestHeader unset X-Remote-User", "identityProviders: - name: requestheaderidp type: RequestHeader requestHeader: challengeURL: \"https://<namespace_route>/challenging-proxy/oauth/authorize?USD{query}\" loginURL: \"https://<namespace_route>/login-proxy/oauth/authorize?USD{query}\" ca: name: ca-config-map clientCommonNames: - my-auth-proxy headers: - X-Remote-User", "curl -L -k -H \"X-Remote-User: joe\" --cert /etc/pki/tls/certs/authproxy.pem https://<namespace_route>/oauth/token/request", "curl -L -k -H \"X-Remote-User: joe\" https://<namespace_route>/oauth/token/request", "curl -k -v -H 'X-Csrf-Token: 1' https://<namespace_route>/oauth/authorize?client_id=openshift-challenging-client&response_type=token", "curl -k -v -H 'X-Csrf-Token: 1' <challengeURL_redirect + query>", "kdestroy -c cache_name 1", "oc login -u <username>", "oc logout", "kinit", "oc login", "https://oauth-openshift.apps.<cluster-name>.<cluster-domain>/oauth2callback/<idp-provider-name>", "https://oauth-openshift.apps.openshift-cluster.example.com/oauth2callback/github", "oc create secret generic <secret_name> --from-literal=clientSecret=<secret> -n openshift-config", "apiVersion: v1 kind: Secret metadata: name: <secret_name> namespace: openshift-config type: Opaque data: clientSecret: <base64_encoded_client_secret>", "oc create secret generic <secret_name> --from-file=<path_to_file> -n openshift-config", "oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config", "apiVersion: v1 kind: ConfigMap metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM>", "apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: githubidp 1 mappingMethod: claim 2 type: GitHub github: ca: 3 name: ca-config-map clientID: {...} 4 clientSecret: 5 name: github-secret hostname: ... 
6 organizations: 7 - myorganization1 - myorganization2 teams: 8 - myorganization1/team-a - myorganization2/team-b", "oc apply -f </path/to/CR>", "oc login --token=<token>", "oc whoami", "oc create secret generic <secret_name> --from-literal=clientSecret=<secret> -n openshift-config", "apiVersion: v1 kind: Secret metadata: name: <secret_name> namespace: openshift-config type: Opaque data: clientSecret: <base64_encoded_client_secret>", "oc create secret generic <secret_name> --from-file=<path_to_file> -n openshift-config", "oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config", "apiVersion: v1 kind: ConfigMap metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM>", "apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: gitlabidp 1 mappingMethod: claim 2 type: GitLab gitlab: clientID: {...} 3 clientSecret: 4 name: gitlab-secret url: https://gitlab.com 5 ca: 6 name: ca-config-map", "oc apply -f </path/to/CR>", "oc login -u <username>", "oc whoami", "oc create secret generic <secret_name> --from-literal=clientSecret=<secret> -n openshift-config", "apiVersion: v1 kind: Secret metadata: name: <secret_name> namespace: openshift-config type: Opaque data: clientSecret: <base64_encoded_client_secret>", "oc create secret generic <secret_name> --from-file=<path_to_file> -n openshift-config", "apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: googleidp 1 mappingMethod: claim 2 type: Google google: clientID: {...} 3 clientSecret: 4 name: google-secret hostedDomain: \"example.com\" 5", "oc apply -f </path/to/CR>", "oc login --token=<token>", "oc whoami", "oc create secret generic <secret_name> --from-literal=clientSecret=<secret> -n openshift-config", "apiVersion: v1 kind: Secret metadata: name: <secret_name> namespace: openshift-config type: Opaque data: clientSecret: <base64_encoded_client_secret>", "oc create secret generic <secret_name> --from-file=<path_to_file> -n openshift-config", "oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config", "apiVersion: v1 kind: ConfigMap metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM>", "apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: oidcidp 1 mappingMethod: claim 2 type: OpenID openID: clientID: ... 3 clientSecret: 4 name: idp-secret claims: 5 preferredUsername: - preferred_username name: - name email: - email groups: - groups issuer: https://www.idp-issuer.com 6", "apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: oidcidp mappingMethod: claim type: OpenID openID: clientID: clientSecret: name: idp-secret ca: 1 name: ca-config-map extraScopes: 2 - email - profile extraAuthorizeParameters: 3 include_granted_scopes: \"true\" claims: preferredUsername: 4 - preferred_username - email name: 5 - nickname - given_name - name email: 6 - custom_email_claim - email groups: 7 - groups issuer: https://www.idp-issuer.com", "oc apply -f </path/to/CR>", "oc login --token=<token>", "oc login -u <identity_provider_username> --server=<api_server_url_and_port>", "oc whoami" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/authentication_and_authorization/configuring-identity-providers
8.102. kdesdk
8.102. kdesdk 8.102.1. RHBA-2014:0485 - kdesdk bug fix update Updated kdesdk packages that fix two bugs are now available for Red Hat Enterprise Linux 6. The kdesdk packages contain the KDE Software Development Kit (SDK) which is a collection of applications and tools used by developers. These applications and tools include - cervisia: a CVS frontend; kate: an advanced text editor; kbugbuster: a tool to manage the KDE bug report system; kcachegrind: a browser for data produced by profiling tools (for example cachegrind); kompare: a diff tool; kuiviewer: a tool for displaying a designer's UI files; lokalize: a computer-aided translation system focusing on productivity and performance; and umbrello: a UML modeller and UML diagram tool. Bug Fixes BZ# 857002 Previously, the umbrello UML modeller used logic based on recursive calls. As a consequence if a user created a diagram that had dependency graph cycles, umbrello entered an infinite loop and terminated unexpectedly with a segmentation fault. With this update, the application logic has been changed to use stack-based parent resolution. As a result, umbrello no longer terminates in the described scenario. BZ# 908709 Prior to this update, the kompare utility hid underscore characters located at the bottom of a highlighted difference block when using certain fonts. This update fixes this bug. As a result, kompare correctly displays underscore characters in the described situation. Users of kdesdk are advised to upgrade to these updated packages, which fix these bugs.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/kdesdk
Chapter 3. Preparing networks for Red Hat OpenStack Services on OpenShift
Chapter 3. Preparing networks for Red Hat OpenStack Services on OpenShift To prepare for configuring and deploying your Red Hat OpenStack Services on OpenShift (RHOSO) environment, you must configure the Red Hat OpenShift Container Platform (RHOCP) networks on your RHOCP cluster. 3.1. Default Red Hat OpenStack Services on OpenShift networks The following physical data center networks are typically implemented for a Red Hat OpenStack Services on OpenShift (RHOSO) deployment: Control plane network: This network is used by the OpenStack Operator for Ansible SSH access to deploy and connect to the data plane nodes from the Red Hat OpenShift Container Platform (RHOCP) environment. This network is also used by data plane nodes for live migration of instances. External network: (Optional) You can configure an external network if one is required for your environment. For example, you might create an external network for any of the following purposes: To provide virtual machine instances with Internet access. To create flat provider networks that are separate from the control plane. To configure VLAN provider networks on a separate bridge from the control plane. To provide access to virtual machine instances with floating IPs on a network other than the control plane network. Internal API network: This network is used for internal communication between RHOSO components. Storage network: This network is used for block storage, RBD, NFS, FC, and iSCSI. Tenant (project) network: This network is used for data communication between virtual machine instances within the cloud deployment. Storage Management network: (Optional) This network is used by storage components. For example, Red Hat Ceph Storage uses the Storage Management network in a hyperconverged infrastructure (HCI) environment as the cluster_network to replicate data. Note For more information on Red Hat Ceph Storage network configuration, see Ceph network configuration in the Red Hat Ceph Storage Configuration Guide . The following table details the default networks used in a RHOSO deployment. If required, you can update the networks for your environment. Note By default, the control plane and external networks do not use VLANs. Networks that do not use VLANs must be placed on separate NICs. You can use a VLAN for the control plane network on new RHOSO deployments. You can also use the Native VLAN on a trunked interface as the non-VLAN network. For example, you can have the control plane and the internal API on one NIC, and the external network with no VLAN on a separate NIC. Table 3.1. Default RHOSO networks Network name VLAN CIDR NetConfig allocationRange MetalLB IPAddressPool range net-attach-def ipam range OCP worker nncp range ctlplane n/a 192.168.122.0/24 192.168.122.100 - 192.168.122.250 192.168.122.80 - 192.168.122.90 192.168.122.30 - 192.168.122.70 192.168.122.10 - 192.168.122.20 external n/a 10.0.0.0/24 10.0.0.100 - 10.0.0.250 n/a n/a internalapi 20 172.17.0.0/24 172.17.0.100 - 172.17.0.250 172.17.0.80 - 172.17.0.90 172.17.0.30 - 172.17.0.70 172.17.0.10 - 172.17.0.20 storage 21 172.18.0.0/24 172.18.0.100 - 172.18.0.250 n/a 172.18.0.30 - 172.18.0.70 172.18.0.10 - 172.18.0.20 tenant 22 172.19.0.0/24 172.19.0.100 - 172.19.0.250 n/a 172.19.0.30 - 172.19.0.70 172.19.0.10 - 172.19.0.20 storageMgmt 23 172.20.0.0/24 172.20.0.100 - 172.20.0.250 n/a 172.20.0.30 - 172.20.0.70 172.20.0.10 - 172.20.0.20 3.2. 
Preparing RHOCP for RHOSO networks The Red Hat OpenStack Services on OpenShift (RHOSO) services run as a Red Hat OpenShift Container Platform (RHOCP) workload. You use the NMState Operator to connect the worker nodes to the required isolated networks. You use the MetalLB Operator to expose internal service endpoints on the isolated networks. By default, the public service endpoints are exposed as RHOCP routes. Note The examples in the following procedures use IPv4 addresses. You can use IPv6 addresses instead of IPv4 addresses. Dual stack IPv4/6 is not available. For information about how to configure IPv6 addresses, see the following resources in the RHOCP Networking guide: Installing the Kubernetes NMState Operator Configuring MetalLB address pools 3.2.1. Preparing RHOCP with isolated network interfaces Create a NodeNetworkConfigurationPolicy ( nncp ) CR to configure the interfaces for each isolated network on each worker node in RHOCP cluster. Procedure Create a NodeNetworkConfigurationPolicy ( nncp ) CR file on your workstation, for example, openstack-nncp.yaml . Retrieve the names of the worker nodes in the RHOCP cluster: Discover the network configuration: Replace <worker_node> with the name of a worker node retrieved in step 2, for example, worker-1 . Repeat this step for each worker node. In the nncp CR file, configure the interfaces for each isolated network on each worker node in the RHOCP cluster. For information about the default physical data center networks that must be configured with network isolation, see Default Red Hat OpenStack Services on OpenShift networks . In the following example, the nncp CR configures the enp6s0 interface for worker node 1, osp-enp6s0-worker-1 , to use VLAN interfaces with IPv4 addresses for network isolation: Create the nncp CR in the cluster: Verify that the nncp CR is created: 3.2.2. Attaching service pods to the isolated networks Create a NetworkAttachmentDefinition ( net-attach-def ) custom resource (CR) for each isolated network to attach the service pods to the networks. Procedure Create a NetworkAttachmentDefinition ( net-attach-def ) CR file on your workstation, for example, openstack-net-attach-def.yaml . In the NetworkAttachmentDefinition CR file, configure a NetworkAttachmentDefinition resource for each isolated network to attach a service deployment pod to the network. The following examples create a NetworkAttachmentDefinition resource for the internalapi , storage , ctlplane , and tenant networks of type macvlan : 1 The namespace where the services are deployed. 2 The node interface name associated with the network, as defined in the nncp CR. 3 The whereabouts CNI IPAM plugin to assign IPs to the created pods from the range .30 - .70 . 4 The IP address pool range must not overlap with the MetalLB IPAddressPool range and the NetConfig allocationRange . Create the NetworkAttachmentDefinition CR in the cluster: Verify that the NetworkAttachmentDefinition CR is created: 3.2.3. Preparing RHOCP for RHOSO network VIPS The Red Hat OpenStack Services on OpenShift (RHOSO) services run as a Red Hat OpenShift Container Platform (RHOCP) workload. You must create an L2Advertisement resource to define how the Virtual IPs (VIPs) are announced, and an IPAddressPool resource to configure which IPs can be used as VIPs. In layer 2 mode, one node assumes the responsibility of advertising a service to the local network. Procedure Create an IPAddressPool CR file on your workstation, for example, openstack-ipaddresspools.yaml . 
In the IPAddressPool CR file, configure an IPAddressPool resource on the isolated network to specify the IP address ranges over which MetalLB has authority: 1 The IPAddressPool range must not overlap with the whereabouts IPAM range and the NetConfig allocationRange . For information about how to configure the other IPAddressPool resource parameters, see Configuring MetalLB address pools in the RHOCP Networking guide. Create the IPAddressPool CR in the cluster: Verify that the IPAddressPool CR is created: Create a L2Advertisement CR file on your workstation, for example, openstack-l2advertisement.yaml . In the L2Advertisement CR file, configure L2Advertisement CRs to define which node advertises a service to the local network. Create one L2Advertisement resource for each network. In the following example, each L2Advertisement CR specifies that the VIPs requested from the network address pools are announced on the interface that is attached to the VLAN: 1 The interface where the VIPs requested from the VLAN address pool are announced. For information about how to configure the other L2Advertisement resource parameters, see Configuring MetalLB with a L2 advertisement and label in the RHOCP Networking guide. Create the L2Advertisement CRs in the cluster: Verify that the L2Advertisement CRs are created: If your cluster has OVNKubernetes as the network back end, then you must enable global forwarding so that MetalLB can work on a secondary network interface. Check the network back end used by your cluster: If the back end is OVNKubernetes, then run the following command to enable global IP forwarding: 3.3. Creating the data plane network To create the data plane network, you define a NetConfig custom resource (CR) and specify all the subnets for the data plane networks. You must define at least one control plane network for your data plane. You can also define VLAN networks to create network isolation for composable networks, such as InternalAPI , Storage , and External . Each network definition must include the IP address assignment. Tip Use the following commands to view the NetConfig CRD definition and specification schema: Procedure Create a file named openstack_netconfig.yaml on your workstation. Add the following configuration to openstack_netconfig.yaml to create the NetConfig CR: In the openstack_netconfig.yaml file, define the topology for each data plane network. To use the default Red Hat OpenStack Services on OpenShift (RHOSO) networks, you must define a specification for each network. For information about the default RHOSO networks, see Default Red Hat OpenStack Services on OpenShift networks . The following example creates isolated networks for the data plane: 1 The name of the network, for example, CtlPlane . 2 The IPv4 subnet specification. 3 The name of the subnet, for example, subnet1 . 4 The NetConfig allocationRange . The allocationRange must not overlap with the MetalLB IPAddressPool range and the IP address pool range. 5 Optional: List of IP addresses from the allocation range that must not be used by data plane nodes. 6 The network VLAN. For information about the default RHOSO networks, see Default Red Hat OpenStack Services on OpenShift networks . Save the openstack_netconfig.yaml definition file. Create the data plane network: To verify that the data plane network is created, view the openstacknetconfig resource: If you see errors, check the underlying network-attach-definition and node network configuration policies:
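For example, a minimal verification sketch might look like the following. The policy name osp-enp6s0-worker-1 is an assumption carried over from the earlier nncp example; substitute the names used in your environment:

$ oc get netconfig/openstacknetconfig -n openstack

$ oc get network-attachment-definitions -n openstack

$ oc get nncp osp-enp6s0-worker-1 -o jsonpath='{.status.conditions[?(@.type=="Available")].status}'

A healthy configuration lists a NetworkAttachmentDefinition for each isolated network and reports True for the Available condition of each node network configuration policy.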
[ "oc get nodes -l node-role.kubernetes.io/worker -o jsonpath=\"{.items[*].metadata.name}\"", "oc get nns/<worker_node> -o yaml | more", "apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: osp-enp6s0-worker-1 spec: desiredState: interfaces: - description: internalapi vlan interface ipv4: address: - ip: 172.17.0.10 prefix-length: 24 enabled: true dhcp: false ipv6: enabled: false name: internalapi state: up type: vlan vlan: base-iface: enp6s0 id: 20 reorder-headers: true - description: storage vlan interface ipv4: address: - ip: 172.18.0.10 prefix-length: 24 enabled: true dhcp: false ipv6: enabled: false name: storage state: up type: vlan vlan: base-iface: enp6s0 id: 21 reorder-headers: true - description: tenant vlan interface ipv4: address: - ip: 172.19.0.10 prefix-length: 24 enabled: true dhcp: false ipv6: enabled: false name: tenant state: up type: vlan vlan: base-iface: enp6s0 id: 22 reorder-headers: true - description: Configuring enp6s0 ipv4: address: - ip: 192.168.122.10 prefix-length: 24 enabled: true dhcp: false ipv6: enabled: false mtu: 1500 name: enp6s0 state: up type: ethernet nodeSelector: kubernetes.io/hostname: worker-1 node-role.kubernetes.io/worker: \"\"", "oc apply -f openstack-nncp.yaml", "oc get nncp -w NAME STATUS REASON osp-enp6s0-worker-1 Progressing ConfigurationProgressing osp-enp6s0-worker-1 Progressing ConfigurationProgressing osp-enp6s0-worker-1 Available SuccessfullyConfigured", "apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: internalapi namespace: openstack 1 spec: config: | { \"cniVersion\": \"0.3.1\", \"name\": \"internalapi\", \"type\": \"macvlan\", \"master\": \"internalapi\", 2 \"ipam\": { 3 \"type\": \"whereabouts\", \"range\": \"172.17.0.0/24\", \"range_start\": \"172.17.0.30\", 4 \"range_end\": \"172.17.0.70\" } } --- apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: ctlplane namespace: openstack spec: config: | { \"cniVersion\": \"0.3.1\", \"name\": \"ctlplane\", \"type\": \"macvlan\", \"master\": \"enp6s0\", \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.168.122.0/24\", \"range_start\": \"192.168.122.30\", \"range_end\": \"192.168.122.70\" } } --- apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: storage namespace: openstack spec: config: | { \"cniVersion\": \"0.3.1\", \"name\": \"storage\", \"type\": \"macvlan\", \"master\": \"storage\", \"ipam\": { \"type\": \"whereabouts\", \"range\": \"172.18.0.0/24\", \"range_start\": \"172.18.0.30\", \"range_end\": \"172.18.0.70\" } } --- apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: tenant namespace: openstack spec: config: | { \"cniVersion\": \"0.3.1\", \"name\": \"tenant\", \"type\": \"macvlan\", \"master\": \"tenant\", \"ipam\": { \"type\": \"whereabouts\", \"range\": \"172.19.0.0/24\", \"range_start\": \"172.19.0.30\", \"range_end\": \"172.19.0.70\" } }", "oc apply -f openstack-net-attach-def.yaml", "oc get net-attach-def -n openstack", "apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: internalapi namespace: metallb-system spec: addresses: - 172.17.0.80-172.17.0.90 1 autoAssign: true avoidBuggyIPs: false --- apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: ctlplane spec: addresses: - 192.168.122.80-192.168.122.90 autoAssign: true avoidBuggyIPs: false --- apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: storage spec: addresses: - 
172.18.0.80-172.18.0.90 autoAssign: true avoidBuggyIPs: false --- apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: tenant spec: addresses: - 172.19.0.80-172.19.0.90 autoAssign: true avoidBuggyIPs: false", "oc apply -f openstack-ipaddresspools.yaml", "oc describe -n metallb-system IPAddressPool", "apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: internalapi namespace: metallb-system spec: ipAddressPools: - internalapi interfaces: - internalapi 1 --- apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: ctlplane namespace: metallb-system spec: ipAddressPools: - ctlplane interfaces: - enp6s0 --- apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: storage namespace: metallb-system spec: ipAddressPools: - storage interfaces: - storage --- apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: tenant namespace: metallb-system spec: ipAddressPools: - tenant interfaces: - tenant", "oc apply -f openstack-l2advertisement.yaml", "oc get -n metallb-system L2Advertisement NAME IPADDRESSPOOLS IPADDRESSPOOL SELECTORS INTERFACES ctlplane [\"ctlplane\"] [\"enp6s0\"] internalapi [\"internalapi\"] [\"internalapi\"] storage [\"storage\"] [\"storage\"] tenant [\"tenant\"] [\"tenant\"]", "oc get network.operator cluster --output=jsonpath='{.spec.defaultNetwork.type}'", "oc patch network.operator cluster -p '{\"spec\":{\"defaultNetwork\":{\"ovnKubernetesConfig\":{\"gatewayConfig\":{\"ipForwarding\": \"Global\"}}}}}' --type=merge", "oc describe crd netconfig oc explain netconfig.spec", "apiVersion: network.openstack.org/v1beta1 kind: NetConfig metadata: name: openstacknetconfig namespace: openstack", "spec: networks: - name: CtlPlane 1 dnsDomain: ctlplane.example.com subnets: 2 - name: subnet1 3 allocationRanges: 4 - end: 192.168.122.120 start: 192.168.122.100 - end: 192.168.122.200 start: 192.168.122.150 cidr: 192.168.122.0/24 gateway: 192.168.122.1 - name: InternalApi dnsDomain: internalapi.example.com subnets: - name: subnet1 allocationRanges: - end: 172.17.0.250 start: 172.17.0.100 excludeAddresses: 5 - 172.17.0.10 - 172.17.0.12 cidr: 172.17.0.0/24 vlan: 20 6 - name: External dnsDomain: external.example.com subnets: - name: subnet1 allocationRanges: - end: 10.0.0.250 start: 10.0.0.100 cidr: 10.0.0.0/24 gateway: 10.0.0.1 - name: Storage dnsDomain: storage.example.com subnets: - name: subnet1 allocationRanges: - end: 172.18.0.250 start: 172.18.0.100 cidr: 172.18.0.0/24 vlan: 21 - name: Tenant dnsDomain: tenant.example.com subnets: - name: subnet1 allocationRanges: - end: 172.19.0.250 start: 172.19.0.100 cidr: 172.19.0.0/24 vlan: 22", "oc create -f openstack_netconfig.yaml -n openstack", "oc get netconfig/openstacknetconfig -n openstack", "oc get network-attachment-definitions -n openstack oc get nncp" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/deploying_red_hat_openstack_services_on_openshift/assembly_preparing-RHOSO-networks_preparing
Chapter 7. Known issues
Chapter 7. Known issues This section describes the known issues in Red Hat OpenShift Data Foundation 4.16. 7.1. Disaster recovery Creating an application namespace for the managed clusters Application namespace needs to exist on RHACM managed clusters for disaster recovery (DR) related pre-deployment actions and hence is pre-created when an application is deployed at the RHACM hub cluster. However, if an application is deleted at the hub cluster and its corresponding namespace is deleted on the managed clusters, they reappear on the managed cluster. Workaround: openshift-dr maintains a namespace manifestwork resource in the managed cluster namespace at the RHACM hub. These resources need to be deleted after the application deletion. For example, as a cluster administrator, execute the following command on the hub cluster: ( BZ#2059669 ) ceph df reports an invalid MAX AVAIL value when the cluster is in stretch mode When a crush rule for a Red Hat Ceph Storage cluster has multiple "take" steps, the ceph df report shows the wrong maximum available size for the map. The issue will be fixed in an upcoming release. ( BZ#2100920 ) Both the DRPCs protect all the persistent volume claims created on the same namespace The namespaces that host multiple disaster recovery (DR) protected workloads, protect all the persistent volume claims (PVCs) within the namespace for each DRPlacementControl resource in the same namespace on the hub cluster that does not specify and isolate PVCs based on the workload using its spec.pvcSelector field. This results in PVCs that match the DRPlacementControl spec.pvcSelector across multiple workloads. Or, if the selector is missing across all workloads, replication management to potentially manage each PVC multiple times and cause data corruption or invalid operations based on individual DRPlacementControl actions. Workaround: Label PVCs that belong to a workload uniquely, and use the selected label as the DRPlacementControl spec.pvcSelector to disambiguate which DRPlacementControl protects and manages which subset of PVCs within a namespace. It is not possible to specify the spec.pvcSelector field for the DRPlacementControl using the user interface, hence the DRPlacementControl for such applications must be deleted and created using the command line. Result: PVCs are no longer managed by multiple DRPlacementControl resources and do not cause any operation and data inconsistencies. ( BZ#2128860 ) MongoDB pod is in CrashLoopBackoff because of permission errors reading data in cephrbd volume The OpenShift projects across different managed clusters have different security context constraints (SCC), which specifically differ in the specified UID range and/or FSGroups . This leads to certain workload pods and containers failing to start post failover or relocate operations within these projects, due to filesystem access errors in their logs. Workaround: Ensure workload projects are created on all managed clusters with the same project-level SCC labels, allowing them to use the same filesystem context when failed over or relocated. Pods will no longer fail post-DR actions on filesystem-related access errors. ( BZ#2081855 ) Disaster recovery workloads remain stuck when deleted When deleting a workload from a cluster, the corresponding pods might not terminate with events such as FailedKillPod . This might cause delay or failure in garbage collecting dependent DR resources such as the PVC , VolumeReplication , and VolumeReplicationGroup . 
It would also prevent a future deployment of the same workload to the cluster because the stale resources are not yet garbage collected. Workaround: Reboot the worker node on which the pod is currently running and stuck in a terminating state. This results in successful pod termination, and subsequently the related DR API resources are also garbage collected. ( BZ#2159791 ) Regional DR CephFS based application failover shows a warning about subscription After the application is failed over or relocated, the hub subscriptions show errors stating, "Some resources failed to deploy. Use View status YAML link to view the details." This is because the application persistent volume claims (PVCs) that use CephFS as the backing storage provisioner, are deployed using Red Hat Advanced Cluster Management for Kubernetes (RHACM) subscriptions, and are DR protected, are owned by the respective DR controllers. Workaround: There are no workarounds to rectify the errors in the subscription status. However, the subscription resources that failed to deploy can be checked to make sure they are PVCs. This ensures that the other resources do not have problems. If the only resources in the subscription that fail to deploy are the ones that are DR protected, the error can be ignored. ( BZ-2264445 ) Disabled PeerReady flag prevents changing the action to Failover The DR controller executes full reconciliation as and when needed. When a cluster becomes inaccessible, the DR controller performs a sanity check. If the workload is already relocated, this sanity check causes the PeerReady flag associated with the workload to be disabled, and the sanity check does not complete because the cluster is offline. As a result, the disabled PeerReady flag prevents you from changing the action to Failover. Workaround: Use the command-line interface to change the DR action to Failover despite the disabled PeerReady flag. ( BZ-2264765 ) Ceph becomes inaccessible and IO is paused when connection is lost between the two data centers in stretch cluster When two data centers lose connection with each other but are still connected to the Arbiter node, there is a flaw in the election logic that causes an infinite election between the monitors. As a result, the monitors are unable to elect a leader and the Ceph cluster becomes unavailable. Also, IO is paused during the connection loss. Workaround: Shut down the monitors in one of the data centers where monitors are out of quorum (you can find this by running the ceph -s command) and reset the connection scores of the remaining monitors. As a result, the monitors can form a quorum, Ceph becomes available again, and IO resumes. ( Partner BZ#2265992 ) RBD applications fail to Relocate when using stale Ceph pool IDs from replacement cluster For applications created before the new peer cluster is created, it is not possible to mount the RBD PVC because, when a peer cluster is replaced, the CephBlockPoolID mapping in the CSI configmap cannot be updated. Workaround: Update the rook-ceph-csi-mapping-config configmap with the cephBlockPoolID mapping on the peer cluster that is not replaced. This enables mounting the RBD PVC for the application. ( BZ#2267731 ) Information about lastGroupSyncTime is lost after hub recovery for the workloads which are primary on the unavailable managed cluster Applications that were previously failed over to a managed cluster do not report a lastGroupSyncTime , thereby triggering the alert VolumeSynchronizationDelay . 
This is because when the ACM hub and a managed cluster that are part of the DRPolicy are unavailable, a new ACM hub cluster is reconstructed from the backup. Workaround: If the managed cluster to which the workload was failed over is unavailable, you can still fail over to a surviving managed cluster. ( BZ#2275320 ) MCO operator reconciles the veleroNamespaceSecretKeyRef and CACertificates fields When the OpenShift Data Foundation operator is upgraded, the CACertificates and veleroNamespaceSecretKeyRef fields under s3StoreProfiles in the Ramen config are lost. Workaround: If the Ramen config has custom values for the CACertificates and veleroNamespaceSecretKeyRef fields, then set those custom values again after the upgrade is performed. ( BZ#2277941 ) Instability of the token-exchange-agent pod after upgrade The token-exchange-agent pod on the managed cluster is unstable because the old deployment resources are not cleaned up properly. This might cause the application failover action to fail. Workaround: Refer to the knowledgebase article, "token-exchange-agent" pod on managed cluster is unstable after upgrade to ODF 4.16.0 . Result: If the workaround is followed, the "token-exchange-agent" pod is stabilized and the failover action works as expected. ( BZ#2293611 ) virtualmachines.kubevirt.io resource fails restore due to MAC allocation failure on relocate When a virtual machine is relocated to the preferred cluster, it might fail to complete relocation due to unavailability of the MAC address. This happens if the virtual machine is not fully cleaned up on the preferred cluster when it is failed over to the failover cluster. Ensure that the workload is completely removed from the preferred cluster before relocating the workload. ( BZ#2295404 ) Post hub recovery, subscription app pods are not coming up after Failover Post hub recovery, the subscription application pods do not come up after failover from the primary to the secondary managed clusters, and an RBAC error occurs in the AppSub subscription resource on the managed cluster. This is due to a timing issue in the backup and restore scenario. When the application-manager pod is restarted on each managed cluster, the hub subscription and channel resources are not recreated in the new hub. As a result, the child AppSub subscription resource is reconciled with an error. Workaround: Fetch the name of the appsub using the following command: Add a new label with any value to the AppSub on the hub using the following command: If the child appsub error still exists, showing an unknown certificate issue, restart the application-manager pod on the managed cluster to which the workloads are failed over. ( BZ#2295782 ) Failover process fails when the ReplicationDestination resource has not been created yet If the user initiates a failover before the LastGroupSyncTime is updated, the failover process might fail. This failure is accompanied by an error message indicating that the ReplicationDestination does not exist. Workaround: Edit the ManifestWork for the VRG on the hub cluster. Delete the following section from the manifest: Save the changes. Applying this workaround correctly ensures that the VRG skips attempting to restore the PVC using the ReplicationDestination resource. If the PVC already exists, the application uses it as is. If the PVC does not exist, a new PVC is created. ( BZ#2283038 ) 7.2. 
Multicloud Object Gateway Multicloud Object Gateway instance fails to finish initialization Due to a timing race between the pod code running and OpenShift loading the Certificate Authority (CA) bundle into the pod, the pod is unable to communicate with the cloud storage service. As a result, the default backing store cannot be created. Workaround: Restart the Multicloud Object Gateway (MCG) operator pod: With the workaround, the backing store is reconciled and works. ( BZ#2271580 ) Upgrade to OpenShift Data Foundation 4.16 results in noobaa-db pod CrashLoopBackOff state Upgrading to OpenShift Data Foundation 4.16 from OpenShift Data Foundation 4.15 fails when the PostgreSQL upgrade fails in Multicloud Object Gateway, which always starts with PostgreSQL version 15. If there is a PostgreSQL upgrade failure, the NooBaa-db-pg-0 pod fails to start. Workaround: Refer to the knowledgebase article Recover NooBaa's PostgreSQL upgrade failure in OpenShift Data Foundation 4.16 . ( BZ#2298152 ) 7.3. Ceph Poor performance of the stretch clusters on CephFS Workloads with many small metadata operations might exhibit poor performance because of the arbitrary placement of the metadata server (MDS) on multi-site Data Foundation clusters. ( BZ#1982116 ) SELinux relabelling issue with a very high number of files When attaching volumes to pods in Red Hat OpenShift Container Platform, the pods sometimes do not start or take an excessive amount of time to start. This behavior is generic and is tied to how SELinux relabelling is handled by the Kubelet. This issue is observed with any filesystem-based volumes that have very high file counts. In OpenShift Data Foundation, the issue is seen when using CephFS based volumes with a very high number of files. There are different ways to work around this issue. Depending on your business needs, you can choose one of the workarounds from the knowledgebase solution https://access.redhat.com/solutions/6221251 . ( Jira#3327 ) Ceph reports no active mgr after workload deployment After workload deployment, the Ceph manager loses connectivity to MONs or is unable to respond to its liveness probe. This causes the OpenShift Data Foundation cluster status to report that there is "no active mgr". This causes multiple operations that use the Ceph manager for request processing to fail, for example, volume provisioning, creating CephFS snapshots, and others. To check the status of the OpenShift Data Foundation cluster, use the command oc get cephcluster -n openshift-storage . In the status output, the status.ceph.details.MGR_DOWN field will have the message "no active mgr" if your cluster has this issue. Workaround: Restart the Ceph manager pods using the following commands: After running these commands, the OpenShift Data Foundation cluster status reports a healthy cluster, with no warnings or errors regarding MGR_DOWN . ( BZ#2244873 ) 7.4. CSI Driver Automatic flattening of snapshots is not working When there is a single common parent RBD PVC, if volume snapshot, restore, and delete snapshot are performed in a sequence more than 450 times, it is no longer possible to take a volume snapshot or clone of the common parent RBD PVC. To work around this issue, instead of performing volume snapshot, restore, and delete snapshot in a sequence, you can use PVC-to-PVC clone to avoid it completely. If you hit this issue, contact customer support to perform manual flattening of the final restored PVCs so that you can continue to take volume snapshots or clones of the common parent PVC. ( BZ#2232163 ) 7.5. 
OpenShift Data Foundation console Optimize DRPC creation when multiple workloads are deployed in a single namespace When multiple applications refer to the same placement, enabling DR for any of the applications enables it for all the applications that refer to that placement. If the applications are created after the creation of the DRPC, the PVC label selector in the DRPC might not match the labels of the newer applications. Workaround: In such cases, disabling DR and enabling it again with the right label selector is recommended. ( BZ#2294704 ) Last snapshot synced is missing for appset based applications on the DR monitoring dashboard ApplicationSet type applications do not display the last volume snapshot sync time on the monitoring dashboard. Workaround: Go to the Applications navigation under the ACM perspective and filter the desired application from the list. Then, from the Data policy column popover, note the "Sync status". ( BZ#2295324 ) 7.6. OCS operator Incorrect unit for the ceph_mds_mem_rss metric in the graph When you search for the ceph_mds_mem_rss metric in the OpenShift user interface (UI), the graphs show the y-axis in megabytes (MB), because Ceph returns the ceph_mds_mem_rss metric in kilobytes (KB). This can cause confusion while comparing the results for the MDSCacheUsageHigh alert. Workaround: Use ceph_mds_mem_rss * 1000 while searching this metric in the OpenShift UI to see the y-axis of the graph in GB. This makes it easier to compare the results shown in the MDSCacheUsageHigh alert. ( BZ#2261881 ) Increasing MDS memory is erasing CPU values when pods are in CLBO state When the metadata server (MDS) memory is increased while the MDS pods are in a crash loop back off (CLBO) state, the CPU request or limit for the MDS pods is removed. As a result, the CPU request or the limit that is set for the MDS changes. Workaround: Run the oc patch command to adjust the CPU limits. For example: ( BZ#2265563 ) 7.7. Non-availability of IBM Z platform The IBM Z platform is not available with the OpenShift Data Foundation 4.16 release. IBM Z will be available with full features and functionality in an upcoming release. ( BZ#2279527 )
[ "oc delete manifestwork -n <managedCluster namespace> <drPlacementControl name>-<namespace>-ns-mw", "% oc get appsub -n <namespace of sub app>", "% oc edit appsub -n <appsub-namespace> <appsub>-subscription-1", "% oc delete pods -n open-cluster-management-agent-addon application-manager-<>-<>", "/spec/workload/manifests/0/spec/volsync", "oc delete pod noobaa-operator-<ID>", "oc scale deployment -n openshift-storage rook-ceph-mgr-a --replicas=0", "oc scale deployment -n openshift-storage rook-ceph-mgr-a --replicas=1", "oc patch -n openshift-storage storagecluster ocs-storagecluster --type merge --patch '{\"spec\": {\"resources\": {\"mds\": {\"limits\": {\"cpu\": \"3\"}, \"requests\": {\"cpu\": \"3\"}}}}}'" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/4.16_release_notes/known-issues
Appendix A. Revision History
Appendix A. Revision History Revision History Revision 6.4.0-11 Fri Jun 07 2017 David Le Sage Updates for 6.4.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/security_guide/appe-revision_history
18.8. Date & Time
18.8. Date & Time To configure time zone, date, and optionally settings for network time, select Date & Time at the Installation Summary screen. There are three ways for you to select a time zone: Using your mouse, click on the interactive map to select a specific city. A red pin appears indicating your selection. You can also scroll through the Region and City drop-down menus at the top of the screen to select your time zone. Select Etc at the bottom of the Region drop-down menu, then select your time zone in the menu adjusted to GMT/UTC, for example GMT+1 . If your city is not available on the map or in the drop-down menu, select the nearest major city in the same time zone. Alternatively you can use a Kickstart file, which will allow you to specify some additional time zones which are not available in the graphical interface. See the timezone command in timezone (required) for details. Note The list of available cities and regions comes from the Time Zone Database (tzdata) public domain, which is maintained by the Internet Assigned Numbers Authority (IANA). Red Hat cannot add cities or regions into this database. You can find more information at the official website, available at http://www.iana.org/time-zones . Specify a time zone even if you plan to use NTP (Network Time Protocol) to maintain the accuracy of the system clock. If you are connected to the network, the Network Time switch will be enabled. To set the date and time using NTP, leave the Network Time switch in the ON position and click the configuration icon to select which NTP servers Red Hat Enterprise Linux should use. To set the date and time manually, move the switch to the OFF position. The system clock should use your time zone selection to display the correct date and time at the bottom of the screen. If they are still incorrect, adjust them manually. Note that NTP servers might be unavailable at the time of installation. In such a case, enabling them will not set the time automatically. When the servers become available, the date and time will update. Once you have made your selection, click Done to return to the Installation Summary screen. Note To change your time zone configuration after you have completed the installation, visit the Date & Time section of the Settings dialog window.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/sect-date-time-configuration-s390
Chapter 10. Planning for Installation on IBM Power Systems
Chapter 10. Planning for Installation on IBM Power Systems This chapter outlines the decisions and preparations you will need to make when deciding how to proceed with the installation. 10.1. Upgrade or Install? While automated in-place upgrades are now supported, the support is currently limited to AMD64 and Intel 64 systems. If you have an existing installation of a release of Red Hat Enterprise Linux on an IBM Power Systems server, you must perform a clean install to migrate to Red Hat Enterprise Linux 7. A clean install is performed by backing up all data from the system, formatting disk partitions, performing an installation of Red Hat Enterprise Linux from installation media, and then restoring any user data.
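The clean install flow described above can be sketched, purely as an illustration, with a backup and restore step; the mount point and archive name below are assumptions, not part of this guide.
# Hypothetical backup target; back up user data before formatting the disks:
tar czf /mnt/backup/userdata.tar.gz /home
# After installing Red Hat Enterprise Linux 7, restore the archived data:
tar xzf /mnt/backup/userdata.tar.gz -C /
Back up to storage that is not formatted during the installation, such as an external drive or a network share.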
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/installation_guide/chap-installation-planning-ppc
Chapter 2. Import Cluster
Chapter 2. Import Cluster You can import and start managing a cluster once Web Administration is installed and ready. For installation instructions, see the Installing Web Administration chapter of the Red Hat Gluster Storage Web Administration Quick Start Guide. 2.1. Importing Cluster The following procedure outlines the steps to import a Gluster cluster. Procedure. Importing Cluster Log in to the Web Administration interface. Enter the username and password, and click Log In . Note The default username is admin and the default password is adminuser . Figure 2.1. Login Page In the default landing interface, a list of all the clusters ready to be imported is displayed. Locate the cluster to be imported and click Import . Figure 2.2. Import Cluster Enter a user-friendly Cluster name. By default, the Enable for all volumes option is selected. Figure 2.3. Cluster name Note If the cluster name is not provided, the system will assign a randomly generated UUID as the cluster name. However, it is advisable to enter a user-friendly cluster name to easily locate the cluster in the clusters list. Click Import to continue. The cluster import request is submitted. To view the task progress, click View Task Progress . Figure 2.4. Task Detail Navigate to the All Clusters interface view. The cluster is successfully imported and ready for use. Figure 2.5. Cluster Ready 2.1.1. Troubleshooting Import Cluster Scenario: The Import cluster UI button is disabled after a failed cluster import operation. In this scenario, when cluster import fails, the Import button is disabled. Resolution Resolve the issue by investigating why the import cluster operation failed. To see details of the failed operation, navigate to the All Clusters interface of the Web Administration environment. In the clusters list, locate the cluster that you attempted to import and click View Details next to the Import Failed status label. Examine the reason for the failed cluster import operation and resolve the issue. After resolving the issue, unmanage the cluster and then reimport the cluster. For unmanaging cluster instructions, navigate to the Unmanaging Cluster section of this Guide. 2.1.2. Volume Profiling Volume profiling enables additional telemetry information to be collected on a per-volume basis for a given cluster, which helps in troubleshooting, capacity planning, and performance tuning. Volume profiling can be enabled or disabled on a per-cluster and per-volume basis when a cluster is actively managed and monitored using the Web Administration interface. Note Enabling volume profiling results in a richer set of metrics being collected, which may cause performance degradation because system resources, for example, CPU and memory, may be used for volume profiling data collection. Volume Profiling at Cluster Level To enable or disable volume profiling at the cluster level: Log in to the Web Administration interface. From the Clusters list, locate the cluster on which to enable or disable Volume Profiling. Note The Clusters list is the default landing interface after login, and the Interface switcher is on All Clusters . At the right-hand side, next to the Dashboard button, click the vertical ellipsis. An inline menu is opened. Click Disable Profiling or Enable Profiling depending on the current state. In the example screen below, the Volume Profiling option is enabled. Click Disable Profiling to disable it. Figure 2.6. Disable Volume Profiling The disable profiling task is submitted and processed. After processing, Volume Profiling is successfully disabled. 
Figure 2.7. Disable Volume Profiling Volume Profiling at Volume Level To enable or disable volume profiling at the volume level: Log in to the Web Administration interface and select the specific cluster from the Interface switcher drop-down. After selecting the specific cluster, the left vertical navigation pane is exposed. From the navigation pane, click Volumes . The Volumes view is displayed, listing all the volumes that are part of the cluster. Figure 2.8. Volumes View Locate the volume and click Disable Profiling or Enable Profiling depending on the current state. In the example screen below, Volume Profiling is enabled. To disable volume profiling, click Disable Profiling . Figure 2.9. Disable Volume Profiling The disable profiling task is submitted and processed. After processing, Volume Profiling is successfully disabled. Figure 2.10. Disable Volume Profiling Volume Profiling Metrics When volume profiling is disabled, the following metrics are not displayed in the Grafana Dashboard. Based on the metrics you need to view, enable or disable volume profiling accordingly. For detailed information on Volume Profiling, see the Monitoring Red Hat Gluster Storage Gluster Workload chapter of the Red Hat Gluster Storage Administration Guide. Table 2.1. Volume Profiling Metrics
Grafana Dashboard Level | Dashboard Section | Panel and Metrics
Cluster Dashboard | At-a-glance | IOPS
Host Dashboard | At-a-glance | Brick IOPS
Volume Dashboard | Performance | IOPS
Volume Dashboard | Profiling Information | File Operations For Locks
Volume Dashboard | Profiling Information | Top File Operations
Volume Dashboard | Profiling Information | File Operations for Read/Write
Volume Dashboard | Profiling Information | File Operations for Inode Operations
Volume Dashboard | Profiling Information | File Operations for Entry Operations
Brick Dashboard | At-a-glance | IOPS
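For reference, the profiling that the Web Administration interface toggles corresponds to Gluster's volume profiling feature, which can also be inspected from the Gluster CLI. The commands below are a minimal sketch in which VOLNAME is a placeholder; the Web Administration workflow above remains the way to manage profiling for clusters monitored through the interface.
gluster volume profile VOLNAME start   # enable profiling for a volume
gluster volume profile VOLNAME info    # display the collected file operation statistics
gluster volume profile VOLNAME stop    # disable profiling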
null
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/monitoring_guide/import_cluster
Providing feedback on Red Hat build of Quarkus documentation
Providing feedback on Red Hat build of Quarkus documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.8/html/getting_started_with_red_hat_build_of_quarkus/proc_providing-feedback-on-red-hat-documentation_quarkus-getting-started
Configuring and managing virtualization
Configuring and managing virtualization Red Hat Enterprise Linux 8 Setting up your host, creating and administering virtual machines, and understanding virtualization features in Red Hat Enterprise Linux 8 Red Hat Customer Content Services
[ "yum module install virt", "yum install virt-install virt-viewer", "systemctl start libvirtd", "virt-host-validate [...] QEMU: Checking for device assignment IOMMU support : PASS QEMU: Checking if IOMMU is enabled by kernel : WARN (IOMMU appears to be disabled in kernel. Add intel_iommu=on to kernel cmdline arguments) LXC: Checking for Linux >= 2.6.26 : PASS [...] LXC: Checking for cgroup 'blkio' controller mount-point : PASS LXC: Checking if device /sys/fs/fuse/connections exists : FAIL (Load the 'fuse' module to enable /proc/ overrides)", "QEMU: Checking for hardware virtualization: FAIL (Only emulated CPUs are available, performance will be significantly limited)", "virt-install --graphics vnc --name demo-guest1 --memory 2048 --vcpus 2 --disk size=80 --os-variant win10 --cdrom /home/username/Downloads/Win10install.iso", "virt-install --graphics vnc --name demo-guest2 --memory 4096 --vcpus 4 --disk none --livecd --os-variant rhel8.0 --cdrom /home/username/Downloads/rhel8.iso", "virt-install --graphics vnc --name demo-guest3 --memory 2048 --vcpus 2 --os-variant rhel8.0 --import --disk /home/username/backup/disk.qcow2", "virt-install --graphics vnc --name demo-guest4 --memory 2048 --vcpus 2 --disk size=160 --os-variant rhel8.0 --location http://example.com/OS-install --initrd-inject /home/username/ks.cfg --extra-args=\"inst.ks=file:/ks.cfg console=tty0 console=ttyS0,115200n8\"", "virt-install --name demo-guest5 --memory 16384 --vcpus 16 --disk size=280 --os-variant rhel8.0 --location RHEL8.iso --graphics none --extra-args='console=ttyS0'", "virt-install --connect qemu+ssh://[email protected]/system --name demo-guest6 --memory 16384 --vcpus 16 --disk size=280 --os-variant rhel8.0 --location RHEL8.iso --graphics none --extra-args='console=ttyS0'", "{PackageManagerCommand} info libvirt-daemon-config-network Installed Packages Name : libvirt-daemon-config-network [...]", "virsh net-list --all Name State Autostart Persistent -------------------------------------------- default active yes yes", "virsh net-autostart default Network default marked as autostarted virsh net-start default Network default started", "error: failed to get network 'default' error: Network not found: no network with matching name 'default'", "{PackageManagerCommand} reinstall libvirt-daemon-config-network", "error: Failed to start network default error: internal error: Network is already in use by interface ens2", "virsh start demo-guest1 Domain 'demo-guest1' started", "virsh -c qemu+ssh://[email protected]/system start demo-guest1 [email protected]'s password: Domain 'demo-guest1' started", "virsh autostart demo-guest1 Domain ' demo-guest1 ' marked as autostarted", "mkdir -p /etc/systemd/system/libvirtd.service.d/", "touch /etc/systemd/system/libvirtd.service.d/10-network-online.conf", "[Unit] After=network-online.target", "virsh dominfo demo-guest1 Id: 2 Name: demo-guest1 UUID: e46bc81c-74e2-406e-bd7a-67042bae80d1 OS Type: hvm State: running CPU(s): 2 CPU time: 385.9s Max memory: 4194304 KiB Used memory: 4194304 KiB Persistent: yes Autostart: enable Managed save: no Security model: selinux Security DOI: 0 Security label: system_u:system_r:svirt_t:s0:c873,c919 (enforcing)", "cat /etc/systemd/system/libvirtd.service.d/10-network-online.conf [Unit] After=network-online.target", "virt-viewer guest-name", "virt-viewer --direct --connect qemu+ssh://[email protected]/system guest-name [email protected]'s password:", "yum install libvirt-nss", "passwd: compat shadow: compat group: compat hosts: files libvirt_guest dns", "ssh 
[email protected] [email protected]'s password: Last login: Mon Sep 24 12:05:36 2021 root~#", "ssh root@testguest1 root@testguest1's password: Last login: Wed Sep 12 12:05:36 2018 root~]#", "virsh list --all Id Name State ---------------------------------------------------- 2 testguest1 running - testguest2 shut off", "sudo grep GRUB_TERMINAL /etc/default/grub GRUB_TERMINAL=serial", "virsh dumpxml vm-name | grep console <console type='pty' tty='/dev/pts/2'> </console>", "cat /proc/cmdline BOOT_IMAGE=/vmlinuz-3.10.0-948.el7.x86_64 root=/dev/mapper/rhel-root ro console=tty0 console=ttyS0 ,9600n8 rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb", "grubby --update-kernel=ALL --args=\"console=ttyS0\"", "grub2-editenv - unset kernelopts", "systemctl status [email protected][email protected] - Serial Getty on ttyS0 Loaded: loaded (/usr/lib/systemd/system/[email protected]; enabled; preset: enabled)", "virsh console guest1 --safe Connected to domain 'guest1' Escape character is ^] Subscription-name Kernel 3.10.0-948.el7.x86_64 on an x86_64 localhost login:", "virsh -c qemu+ssh://root@ 192.0.2.1/system list [email protected]'s password: Id Name State --------------------------------- 1 remote-guest running", "virsh -c remote-host list [email protected]'s password: Id Name State --------------------------------- 1 remote-guest running", "vi ~/.ssh/config Host example-host-alias User root Hostname 192.0.2.1", "vi /etc/libvirt/libvirt.conf uri_aliases = [ \" example-qemu-host-alias =qemu+ssh:// example-host-alias /system\", ]", "virsh -c example-qemu-host-alias list [email protected]'s password: Id Name State ---------------------------------------- 1 example-remote-guest running", "These can be used in cases when no URI is supplied by the application (@uri_default also prevents probing of the hypervisor driver). # uri_default = \"example-qemu-host-alias\"", "virsh list [email protected]'s password: Id Name State --------------------------------- 1 example-remote-guest running", "virsh dumpxml <vm-name> | grep graphics <graphics type='vnc' ports='-1' autoport= yes listen= 127.0.0.1 > </graphics>", "virsh edit <vm-name>", "<graphics type='vnc' ports='-1' autoport= yes listen= 127.0.0.1 passwd=' <password> '>", "<graphics type='vnc' ports='-1' autoport= yes listen= 127.0.0.1 passwd=' <password> ' passwdValidTo='2025-02-01T15:30:00' >", "virsh start <vm-name>", "virt-viewer <vm-name>", "virsh shutdown demo-guest1 Domain 'demo-guest1' is being shutdown", "virsh -c qemu+ssh://[email protected]/system shutdown demo-guest1 [email protected]'s password: Domain 'demo-guest1' is being shutdown", "virsh destroy demo-guest1 Domain 'demo-guest1' destroyed", "virsh list --all Id Name State ------------------------------------------ 1 demo-guest1 shut off", "virsh undefine guest1 --remove-all-storage --nvram Domain 'guest1' has been undefined Volume 'vda'(/home/images/guest1.qcow2) removed.", "grep ^platform /proc/cpuinfo/ platform : PowerNV", "modprobe kvm_hv", "lsmod | grep kvm", "yum module install virt", "yum install virt-install", "systemctl start libvirtd", "virt-host-validate [...] QEMU: Checking if device /dev/vhost-net exists : PASS QEMU: Checking if device /dev/net/tun exists : PASS QEMU: Checking for cgroup 'memory' controller support : PASS QEMU: Checking for cgroup 'memory' controller mount-point : PASS [...] 
QEMU: Checking for cgroup 'blkio' controller support : PASS QEMU: Checking for cgroup 'blkio' controller mount-point : PASS QEMU: Checking if IOMMU is enabled by kernel : PASS", "QEMU: Checking for hardware virtualization: FAIL (Only emulated CPUs are available, performance will be significantly limited)", "qemu-kvm: Failed to allocate KVM HPT of order 33 (try smaller maxmem?): Cannot allocate memory", "grep sie /proc/cpuinfo features : esan3 zarch stfle msa ldisp eimm dfp edat etf3eh highgprs te sie", "modprobe kvm", "lsmod | grep kvm", "yum module install virt:rhel/common", "for drv in qemu network nodedev nwfilter secret storage interface; do systemctl start virtUSD{drv}d{,-ro,-admin}.socket; done", "virt-host-validate [...] QEMU: Checking if device /dev/kvm is accessible : PASS QEMU: Checking if device /dev/vhost-net exists : PASS QEMU: Checking if device /dev/net/tun exists : PASS QEMU: Checking for cgroup 'memory' controller support : PASS QEMU: Checking for cgroup 'memory' controller mount-point : PASS [...]", "QEMU: Checking for hardware virtualization: FAIL (Only emulated CPUs are available, performance will be significantly limited)", "hostnamectl | grep \"Operating System\" Operating System: Red Hat Enterprise Linux 8.5 (Ootpa) yum module list --installed [...] Advanced Virtualization for RHEL 8 IBM Z Systems (RPMs) Name Stream Profiles Summary virt av [e] common [i] Virtualization module", "yum disable virt:av", "yum module reset virt -y", "yum update", "yum module info virt Name : virt Stream : rhel [d][e][a] Version : 8050020211203195115 [...]", "<disk type='file' device='disk'> <driver name='qemu' type='qcow2'/> <source file='/path/to/qcow2'/> <target dev='vda' bus='virtio'/> <address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0000'/> <boot order='1' loadparm='2'/> </disk>", "<devices> <watchdog model='diag288' action='poweroff'/> </devices>", "pxelinux default linux label linux kernel kernel.img initrd initrd.img append ip=dhcp inst.repo=example.com/redhat/BaseOS/s390x/os/", "<cpu mode='host-model' check='partial'> <model fallback='allow'/> </cpu>", "<cpu mode='custom' match='exact' check='partial'> <model fallback='allow'>zEC12</model> <feature policy='force' name='ppa15'/> <feature policy='force' name='bpb'/> </cpu>", "touch qemuga.xml", "<channel type='unix'> <source mode='bind' path='/var/lib/libvirt/qemu/f16x86_64.agent'/> <target type='virtio' name='org.qemu.guest_agent.0'/> </channel>", "virsh attach-device <vm-name> qemuga.xml --live --config", "virsh attach-device <vm-name> qemuga.xml --config", "yum install qemu-guest-agent", "systemctl start qemu-guest-agent", "setsebool virt_qemu_ga_read_nonsecurity_files on", "setsebool virt_qemu_ga_manage_ssh on", "yum install cockpit-machines", "virsh list --all Id Name State ---------------------------------- 1 testguest1 running - testguest2 shut off - testguest3 shut off - testguest4 shut off", "virsh dominfo testguest1 Id: 1 Name: testguest1 UUID: a973666f-2f6e-415a-8949-75a7a98569e1 OS Type: hvm State: running CPU(s): 2 CPU time: 188.3s Max memory: 4194304 KiB Used memory: 4194304 KiB Persistent: yes Autostart: disable Managed save: no Security model: selinux Security DOI: 0 Security label: system_u:system_r:svirt_t:s0:c486,c538 (enforcing)", "virsh dumpxml testguest2 <domain type='kvm' id='1'> <name>testguest2</name> <uuid>a973434f-2f6e-4esa-8949-76a7a98569e1</uuid> <metadata> [...]", "virsh domblklist testguest3 Target Source --------------------------------------------------------------- vda 
/var/lib/libvirt/images/testguest3.qcow2 sda - sdb /home/username/Downloads/virt-p2v-1.36.10-1.el7.iso", "virsh domfsinfo testguest3 Mountpoint Name Type Target ------------------------------------ / dm-0 xfs /boot vda1 xfs", "virsh vcpuinfo testguest4 VCPU: 0 CPU: 3 State: running CPU time: 103.1s CPU Affinity: yyyy VCPU: 1 CPU: 0 State: running CPU time: 88.6s CPU Affinity: yyyy", "virsh net-list --all Name State Autostart Persistent --------------------------------------------- default active yes yes labnet active yes yes", "virsh net-info default Name: default UUID: c699f9f6-9202-4ca8-91d0-6b8cb9024116 Active: yes Persistent: yes Autostart: yes Bridge: virbr0", "virsh dumpxml testguest1", "<domain type='kvm'> <name>Testguest1</name> <uuid>ec6fbaa1-3eb4-49da-bf61-bb02fbec4967</uuid> <memory unit='KiB'>1048576</memory> <currentMemory unit='KiB'>1048576</currentMemory>", "<vcpu placement='static'>1</vcpu>", "<os> <type arch='x86_64' machine='pc-q35-4.1'>hvm</type> <boot dev='hd'/> </os>", "<features> <acpi/> <apic/> </features>", "<cpu mode='host-model' check='partial'/>", "<clock offset='utc'> <timer name='rtc' tickpolicy='catchup'/> <timer name='pit' tickpolicy='delay'/> <timer name='hpet' present='no'/> </clock>", "<on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>destroy</on_crash>", "<pm> <suspend-to-mem enabled='no'/> <suspend-to-disk enabled='no'/> </pm>", "<devices> <emulator>/usr/bin/qemu-kvm</emulator> <disk type='file' device='disk'> <driver name='qemu' type='qcow2'/> <source file='/var/lib/libvirt/images/Testguest.qcow2'/> <target dev='hda' bus='ide'/> </disk> <disk type='file' device='cdrom'> <driver name='qemu' type='raw'/> <target dev='hdb' bus='ide'/> <readonly/> </disk>", "<controller type='usb' index='0' model='qemu-xhci' ports='15'/> <controller type='sata' index='0'/> <controller type='pci' index='0' model='pcie-root'/> <controller type='pci' index='1' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='1' port='0x10'/> </controller> <controller type='pci' index='2' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='2' port='0x11'/> </controller> <controller type='pci' index='3' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='3' port='0x12'/> </controller> <controller type='pci' index='4' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='4' port='0x13'/> </controller> <controller type='pci' index='5' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='5' port='0x14'/> </controller> <controller type='pci' index='6' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='6' port='0x15'/> </controller> <controller type='pci' index='7' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='7' port='0x16'/> </controller> <controller type='virtio-serial' index='0'/>", "<interface type='network'> <mac address='52:54:00:65:29:21'/> <source network='default'/> <model type='rtl8139'/> </interface>", "<serial type='pty'> <target type='isa-serial' port='0'> <model name='isa-serial'/> </target> </serial> <console type='pty'> <target type='serial' port='0'/> </console> <channel type='unix'> <target type='virtio' name='org.qemu.guest_agent.0'/> <address type='virtio-serial' controller='0' bus='0' port='1'/> </channel> <channel type='spicevmc'> <target type='virtio' name='com.redhat.spice.0'/> <address type='virtio-serial' controller='0' bus='0' port='2'/> </channel>", "<input type='tablet' bus='usb'> <address type='usb' 
bus='0' port='1'/> </input> <input type='mouse' bus='ps2'/> <input type='keyboard' bus='ps2'/>", "<graphics type='spice' autoport='yes' listen='127.0.0.1'> <listen type='address' address='127.0.0.1'/> <image compression='off'/> </graphics> <graphics type='vnc' port='-1' autoport='yes' listen='127.0.0.1'> <listen type='address' address='127.0.0.1'/> </graphics>", "<sound model='ich6'> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> </sound> <video> <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> </video>", "<redirdev bus='usb' type='spicevmc'> <address type='usb' bus='0' port='1'/> </redirdev> <redirdev bus='usb' type='spicevmc'> <address type='usb' bus='0' port='2'/> </redirdev> <memballoon model='virtio'> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/> </memballoon> </devices> </domain>", "virsh managedsave demo-guest1 Domain 'demo-guest1' saved by libvirt", "virsh list --managed-save --all Id Name State ---------------------------------------------------- - demo-guest1 saved - demo-guest2 shut off", "virsh list --with-managed-save --all Id Name State ---------------------------------------------------- - demo-guest1 shut off", "virsh start demo-guest1 Domain 'demo-guest1' started", "virsh -c qemu+ssh://[email protected]/system start demo-guest1 [email protected]'s password: Domain 'demo-guest1' started", "yum install libguestfs-tools-c", "ls -la /var/lib/libvirt/images -rw-------. 1 root root 9665380352 Jul 23 14:50 a-really-important-vm.qcow2 -rw-------. 1 root root 8591507456 Jul 26 2017 an-actual-vm-that-i-use.qcow2 -rw-------. 1 root root 8591507456 Jul 26 2017 totally-not-a-fake-vm.qcow2 -rw-------. 1 root root 10739318784 Sep 20 17:57 another-vm-example.qcow2", "whoami root", "cp /var/lib/libvirt/images/a-really-important-vm.qcow2 /var/lib/libvirt/images/a-really-important-vm-original.qcow2", "virt-sysprep -a /var/lib/libvirt/images/a-really-important-vm.qcow2 [ 0.0] Examining the guest [ 7.3] Performing \"abrt-data\" [ 7.3] Performing \"backup-files\" [ 9.6] Performing \"bash-history\" [ 9.6] Performing \"blkid-tab\" [...]", "virt-diff -a /var/lib/libvirt/images/a-really-important-vm-orig.qcow2 -A /var/lib/libvirt/images/a-really-important-vm.qcow2 - - 0644 1001 /etc/group- - - 0000 797 /etc/gshadow- = - 0444 33 /etc/machine-id [...] - - 0600 409 /home/username/.bash_history - d 0700 6 /home/username/.ssh - - 0600 868 /root/.bash_history [...]", "ls -la /var/lib/libvirt/images -rw-------. 1 root root 9665380352 Jul 23 14:50 a-really-important-vm.qcow2 -rw-------. 1 root root 8591507456 Jul 26 2017 an-actual-vm-that-i-use.qcow2 -rw-------. 1 root root 8591507456 Jul 26 2017 totally-not-a-fake-vm.qcow2 -rw-------. 
1 root root 10739318784 Sep 20 17:57 another-vm-example.qcow2", "rm -f /etc/udev/rules.d/70-persistent-net.rules", "DEVICE=eth[x] BOOTPROTO=none ONBOOT=yes #NETWORK=192.0.2.0 <- REMOVE #NETMASK=255.255.255.0 <- REMOVE #IPADDR=192.0.2.1 <- REMOVE #HWADDR=xx:xx:xx:xx:xx <- REMOVE #USERCTL=no <- REMOVE # Remove any other *unique or non-desired settings, such as UUID.*", "DEVICE=eth[x] BOOTPROTO=dhcp ONBOOT=yes", "rm /etc/sysconfig/rhn/systemid", "subscription-manager unsubscribe --all # subscription-manager unregister # subscription-manager clean", "subscription-manager clean", "subscription-manager register --consumerid=71rd64fx-6216-4409-bf3a-e4b7c7bd8ac9", "rm -rf /etc/ssh/ssh_host_example", "rm /etc/lvm/devices/system.devices", "rm ~/.config/gnome-initial-setup-done", "virt-clone --original example-VM-1 --auto-clone Allocating 'example-VM-1-clone.qcow2' | 50.0 GB 00:05:37 Clone 'example-VM-1-clone' created successfully.", "virt-clone --original example-VM-2 --name example-VM-3 --file /var/lib/libvirt/images/ disk-1-example-VM-2 .qcow2 --file /var/lib/libvirt/images/ disk-2-example-VM-2 .qcow2 Allocating 'disk-1-example-VM-2-clone.qcow2' | 78.0 GB 00:05:37 Allocating 'disk-2-example-VM-2-clone.qcow2' | 80.0 GB 00:05:37 Clone 'example-VM-3' created successfully.", "virsh migrate --offline --persistent example-VM-3 qemu+ssh://[email protected]/system [email protected]'s password: scp /var/lib/libvirt/images/ <disk-1-example-VM-2-clone> .qcow2 [email protected]/ <user@remote_host.com> ://var/lib/libvirt/images/ scp /var/lib/libvirt/images/ <disk-2-example-VM-2-clone> .qcow2 [email protected]/ <user@remote_host.com> ://var/lib/libvirt/images/", "virsh list --all Id Name State --------------------------------------- - example-VM-1 shut off - example-VM-1-clone shut off", "virsh start example-VM-1-clone Domain 'example-VM-1-clone' started", "virsh domdirtyrate-calc <example_VM> 30", "virsh domstats <example_VM> --dirtyrate Domain: 'example-VM' dirtyrate.calc_status=2 dirtyrate.calc_start_time=200942 dirtyrate.calc_period=30 dirtyrate.megabytes_per_second=2", "systemctl enable --now libvirtd.service", "virsh migrate --offline --persistent <example_VM> qemu+ssh:// example-destination /system", "virsh migrate --live --persistent <example_VM> qemu+ssh:// example-destination /system", "virsh migrate --live --persistent --parallel --parallel-connections 4 <example_VM> qemu+ssh:// <example-destination> /system", "virsh migrate-setmaxdowntime <example_VM> <time_interval_in_milliseconds>", "virsh migrate --live --persistent --postcopy --timeout <time_interval_in_seconds> --timeout-postcopy <example_VM> qemu+ssh:// <example-destination> /system", "virsh migrate --live --persistent --auto-converge <example_VM> qemu+ssh:// <example-destination> /system", "virsh list --all Id Name State ---------------------------------- 10 example-VM-1 shut off", "virsh list --all Id Name State ---------------------------------- 10 example-VM-1 running", "virsh list --all Id Name State ---------------------------------- 10 example-VM-1 shut off", "virsh list --all Id Name State ---------------------------------- 10 example-VM-1 running", "virsh domdirtyrate-calc vm-name 30", "virsh domstats vm-name --dirtyrate Domain: ' vm-name ' dirtyrate.calc_status=2 dirtyrate.calc_start_time=200942 dirtyrate.calc_period=30 dirtyrate.megabytes_per_second=2", "setsebool virt_use_nfs 1", "ssh root@ example-shared-storage root@example-shared-storage's password: Last login: Mon Sep 24 12:05:36 2019 root~#", "mkdir 
/var/lib/libvirt/shared-images", "scp /var/lib/libvirt/images/ example-disk-1 .qcow2 root@ example-shared-storage :/var/lib/libvirt/shared-images/ example-disk-1 .qcow2", "/var/lib/libvirt/shared-images example-source-machine (rw,no_root_squash) example-destination-machine (rw,no\\_root_squash)", "exportfs -a", "mount example-shared-storage :/var/lib/libvirt/shared-images /var/lib/libvirt/images", "virsh domcapabilities | xmllint --xpath \"//cpu/mode[@name='host-model']\" - > domCaps-CPUs.xml", "cat domCaps-CPUs.xml <cpu> <model fallback=\"forbid\">Skylake-Client-IBRS</model> <vendor>Intel</vendor> <feature policy=\"require\" name=\"ss\"/> <feature policy=\"require\" name=\"vmx\"/> <feature policy=\"require\" name=\"pdcm\"/> <feature policy=\"require\" name=\"hypervisor\"/> <feature policy=\"require\" name=\"tsc_adjust\"/> <feature policy=\"require\" name=\"clflushopt\"/> <feature policy=\"require\" name=\"umip\"/> <feature policy=\"require\" name=\"md-clear\"/> <feature policy=\"require\" name=\"stibp\"/> <feature policy=\"require\" name=\"arch-capabilities\"/> <feature policy=\"require\" name=\"ssbd\"/> <feature policy=\"require\" name=\"xsaves\"/> <feature policy=\"require\" name=\"pdpe1gb\"/> <feature policy=\"require\" name=\"invtsc\"/> <feature policy=\"require\" name=\"ibpb\"/> <feature policy=\"require\" name=\"ibrs\"/> <feature policy=\"require\" name=\"amd-stibp\"/> <feature policy=\"require\" name=\"amd-ssbd\"/> <feature policy=\"require\" name=\"rsba\"/> <feature policy=\"require\" name=\"skip-l1dfl-vmentry\"/> <feature policy=\"require\" name=\"pschange-mc-no\"/> <feature policy=\"disable\" name=\"hle\"/> <feature policy=\"disable\" name=\"rtm\"/> </cpu>", "virsh domcapabilities | xmllint --xpath \"//cpu/mode[@name='host-model']\" - <mode name=\"host-model\" supported=\"yes\"> <model fallback=\"forbid\">IvyBridge-IBRS</model> <vendor>Intel</vendor> <feature policy=\"require\" name=\"ss\"/> <feature policy=\"require\" name=\"vmx\"/> <feature policy=\"require\" name=\"pdcm\"/> <feature policy=\"require\" name=\"pcid\"/> <feature policy=\"require\" name=\"hypervisor\"/> <feature policy=\"require\" name=\"arat\"/> <feature policy=\"require\" name=\"tsc_adjust\"/> <feature policy=\"require\" name=\"umip\"/> <feature policy=\"require\" name=\"md-clear\"/> <feature policy=\"require\" name=\"stibp\"/> <feature policy=\"require\" name=\"arch-capabilities\"/> <feature policy=\"require\" name=\"ssbd\"/> <feature policy=\"require\" name=\"xsaveopt\"/> <feature policy=\"require\" name=\"pdpe1gb\"/> <feature policy=\"require\" name=\"invtsc\"/> <feature policy=\"require\" name=\"ibpb\"/> <feature policy=\"require\" name=\"amd-ssbd\"/> <feature policy=\"require\" name=\"skip-l1dfl-vmentry\"/> <feature policy=\"require\" name=\"pschange-mc-no\"/> </mode>", "cat domCaps-CPUs.xml <cpu> <model fallback=\"forbid\">Skylake-Client-IBRS</model> <vendor>Intel</vendor> <feature policy=\"require\" name=\"ss\"/> <feature policy=\"require\" name=\"vmx\"/> <feature policy=\"require\" name=\"pdcm\"/> <feature policy=\"require\" name=\"hypervisor\"/> <feature policy=\"require\" name=\"tsc_adjust\"/> <feature policy=\"require\" name=\"clflushopt\"/> <feature policy=\"require\" name=\"umip\"/> <feature policy=\"require\" name=\"md-clear\"/> <feature policy=\"require\" name=\"stibp\"/> <feature policy=\"require\" name=\"arch-capabilities\"/> <feature policy=\"require\" name=\"ssbd\"/> <feature policy=\"require\" name=\"xsaves\"/> <feature policy=\"require\" name=\"pdpe1gb\"/> <feature policy=\"require\" 
name=\"invtsc\"/> <feature policy=\"require\" name=\"ibpb\"/> <feature policy=\"require\" name=\"ibrs\"/> <feature policy=\"require\" name=\"amd-stibp\"/> <feature policy=\"require\" name=\"amd-ssbd\"/> <feature policy=\"require\" name=\"rsba\"/> <feature policy=\"require\" name=\"skip-l1dfl-vmentry\"/> <feature policy=\"require\" name=\"pschange-mc-no\"/> <feature policy=\"disable\" name=\"hle\"/> <feature policy=\"disable\" name=\"rtm\"/> </cpu> <cpu> <model fallback=\"forbid\">IvyBridge-IBRS</model> <vendor>Intel</vendor> <feature policy=\"require\" name=\"ss\"/> <feature policy=\"require\" name=\"vmx\"/> <feature policy=\"require\" name=\"pdcm\"/> <feature policy=\"require\" name=\"pcid\"/> <feature policy=\"require\" name=\"hypervisor\"/> <feature policy=\"require\" name=\"arat\"/> <feature policy=\"require\" name=\"tsc_adjust\"/> <feature policy=\"require\" name=\"umip\"/> <feature policy=\"require\" name=\"md-clear\"/> <feature policy=\"require\" name=\"stibp\"/> <feature policy=\"require\" name=\"arch-capabilities\"/> <feature policy=\"require\" name=\"ssbd\"/> <feature policy=\"require\" name=\"xsaveopt\"/> <feature policy=\"require\" name=\"pdpe1gb\"/> <feature policy=\"require\" name=\"invtsc\"/> <feature policy=\"require\" name=\"ibpb\"/> <feature policy=\"require\" name=\"amd-ssbd\"/> <feature policy=\"require\" name=\"skip-l1dfl-vmentry\"/> <feature policy=\"require\" name=\"pschange-mc-no\"/> </cpu>", "virsh hypervisor-cpu-baseline domCaps-CPUs.xml <cpu mode='custom' match='exact'> <model fallback='forbid'>IvyBridge-IBRS</model> <vendor>Intel</vendor> <feature policy='require' name='ss'/> <feature policy='require' name='vmx'/> <feature policy='require' name='pdcm'/> <feature policy='require' name='pcid'/> <feature policy='require' name='hypervisor'/> <feature policy='require' name='arat'/> <feature policy='require' name='tsc_adjust'/> <feature policy='require' name='umip'/> <feature policy='require' name='md-clear'/> <feature policy='require' name='stibp'/> <feature policy='require' name='arch-capabilities'/> <feature policy='require' name='ssbd'/> <feature policy='require' name='xsaveopt'/> <feature policy='require' name='pdpe1gb'/> <feature policy='require' name='invtsc'/> <feature policy='require' name='ibpb'/> <feature policy='require' name='amd-ssbd'/> <feature policy='require' name='skip-l1dfl-vmentry'/> <feature policy='require' name='pschange-mc-no'/> </cpu>", "virsh edit <vm_name>", "virsh shutdown <vm_name> virsh start <vm_name>", "virt-xml --network=? --network options: [...] address.unit boot_order clearxml driver_name [...]", "virt-xml testguest --add-device --disk /var/lib/libvirt/images/newdisk.qcow2,format=qcow2,size=20 Domain 'testguest' defined successfully. Changes will take effect after the domain is fully powered off.", "virt-xml testguest2 --add-device --update --hostdev 002.004 Device hotplug successful. Domain 'testguest2' defined successfully.", "virsh dumpxml testguest [...] <hostdev mode='subsystem' type='usb' managed='yes'> <source> <vendor id='0x4146'/> <product id='0x902e'/> <address bus='2' device='4'/> </source> <alias name='hostdev0'/> <address type='usb' bus='0' port='3'/> </hostdev> [...]", "virt-xml --network=? --network options: [...] address.unit boot_order clearxml driver_name [...]", "virsh dumpxml testguest1 > testguest1.xml cat testguest1.xml <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'> <name>testguest1</name> <uuid>ede29304-fe0c-4ca4-abcd-d246481acd18</uuid> [...] 
</domain>", "virt-xml testguest --edit --cpu host-model,clearxml=yes Domain 'testguest' defined successfully.", "virsh dumpxml testguest [...] <cpu mode='host-model' check='partial'> <model fallback='allow'/> </cpu> [...]", "virsh define testguest.xml", "virsh dumpxml testguest1 > testguest1.xml cat testguest1.xml <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'> <name>testguest1</name> <uuid>ede29304-fe0c-4ca4-abcd-d246481acd18</uuid> [...] </domain>", "virt-xml testguest --remove-device --disk target=vdb Domain 'testguest' defined successfully. Changes will take effect after the domain is fully powered off.", "virt-xml testguest2 --remove-device --update --hostdev type=usb Device hotunplug successful. Domain 'testguest2' defined successfully.", "virsh define testguest.xml", "virsh edit example-VM-1", "virsh dumpxml example-VM-1 > example-VM-1 .xml", "virsh edit <example-VM-1>", "virsh dumpxml testguest1 > testguest1.xml cat testguest1.xml <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'> <name>testguest1</name> <uuid>ede29304-fe0c-4ca4-abcd-d246481acd18</uuid> [...] </domain>", "virsh define testguest1.xml", "lsusb [...] Bus 001 Device 003: ID 2567:0a2b Intel Corp. Bus 001 Device 005: ID 0407:6252 Kingston River 2.0 [...]", "virt-xml example-VM-1 --add-device --hostdev 001.005 Domain 'example-VM-1' defined successfully.", "virsh dumpxml example-VM-1 [...] <hostdev mode='subsystem' type='usb' managed='yes'> <source> <vendor id='0x0407'/> <product id='0x6252'/> <address bus='1' device='5'/> </source> <alias name='hostdev0'/> <address type='usb' bus='0' port='3'/> </hostdev> [...]", "lsusb [...] Bus 001 Device 003: ID 2567:0a2b Intel Corp. Bus 001 Device 005: ID 0407:6252 Kingston River 2.0 [...]", "virt-xml example-VM-1 --remove-device --hostdev 001.005 Domain 'example-VM-1' defined successfully.", "virt-xml testguest --add-device --smartcard mode=passthrough,type=spicevmc Domain 'testguest' defined successfully. Changes will take effect after the domain is fully powered off.", "virsh dumpxml testguest", "<smartcard mode='passthrough' type='spicevmc'/>", "virt-xml example-VM-name --add-device --disk /home/username/Downloads/example-ISO-name.iso ,device=cdrom Domain 'example-VM-name' defined successfully.", "virt-xml vmname --add-device --disk target.dev=sda,device=cdrom", "virsh dumpxml example-VM-name <disk> <source file=' USD(/home/username/Downloads/example-ISO-name.iso) '/> <target dev='sda' bus='sata'/> </disk>", "virt-xml example-VM-name --edit target=sda --disk /dev/cdrom/example-ISO-name-2.iso Domain 'example-VM-name' defined successfully.", "virsh dumpxml example-VM-name <disk> <source file=' USD(/home/username/Downloads/example-ISO-name.iso) '/> <target dev='sda' bus='sata'/> </disk>", "virt-xml example-VM-name --edit target=sda --disk path= Domain 'example-VM-name' defined successfully.", "virsh dumpxml example-VM-name <disk type='file' device='cdrom'> <driver name='qemu' type='raw'/> <target dev='sda' bus='sata'/> </disk>", "virt-xml example-VM-name --remove-device --disk target=sda Domain 'example-VM-name' defined successfully.", "lspci -v [...] 02:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01) Subsystem: Intel Corporation Gigabit ET Dual Port Server Adapter Flags: bus master, fast devsel, latency 0, IRQ 16, NUMA node 0 Memory at fcba0000 (32-bit, non-prefetchable) [size=128K] [...] 
Capabilities: [150] Alternative Routing-ID Interpretation (ARI) Capabilities: [160] Single Root I/O Virtualization (SR-IOV) Kernel driver in use: igb Kernel modules: igb [...]", "ip link set eth1 up ip link show eth1 8: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT qlen 1000 link/ether a0:36:9f:8f:3f:b8 brd ff:ff:ff:ff:ff:ff vf 0 MAC 00:00:00:00:00:00, spoof checking on, link-state auto vf 1 MAC 00:00:00:00:00:00, spoof checking on, link-state auto vf 2 MAC 00:00:00:00:00:00, spoof checking on, link-state auto vf 3 MAC 00:00:00:00:00:00, spoof checking on, link-state auto", "grubby --args=\"intel_iommu=on iommu=pt\" --update-kernel=ALL", "grubby --args=\"iommu=pt\" --update-kernel=ALL", "cat /sys/class/net/eth1/device/sriov_totalvfs 7", "echo VF-number > /sys/class/net/ network-interface /device/sriov_numvfs", "echo 2 > /sys/class/net/eth1/device/sriov_numvfs", "lspci | grep Ethernet 82:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01) 82:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01) 82:10.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01) 82:10.2 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)", "ACTION==\"add\", SUBSYSTEM==\"net\", ENV{ID_NET_DRIVER}==\"ixgbe\", ATTR{device/sriov_numvfs}=\"2\"", "virsh attach-interface testguest1 hostdev 0000:82:10.0 --managed --live --config", "yum install driverctl", "lsmod | grep vfio", "lscss -d 0.0.002c Device Subchan. DevType CU Type Use PIM PAM POM CHPIDs ---------------------------------------------------------------------- 0.0.002c 0.0.29a8 3390/0c 3990/e9 yes f0 f0 ff 02111221 00000000", "cio_ignore -r 0.0.002c", "cio_ignore=all,!condev, ! 0.0.002c", "driverctl -b css set-override 0.0.29a8 vfio_ccw", "cat nodedev.xml <device> <parent>css_0_0_29a8</parent> <capability type=\"mdev\"> <type id=\"vfio_ccw-io\"/> </capability> </device> virsh nodedev-define nodedev.xml Node device 'mdev_30820a6f_b1a5_4503_91ca_0c10ba12345a_0_0_29a8' defined from 'nodedev.xml' virsh nodedev-start mdev_30820a6f_b1a5_4503_91ca_0c10ba12345a_0_0_29a8 Device mdev_30820a6f_b1a5_4503_91ca_0c10ba12345a_0_0_29a8 started", "virsh nodedev-dumpxml mdev_30820a6f_b1a5_4503_91ca_0c10ba12345a_0_0_29a8 <device> <name>mdev_30820a6f_b1a5_4503_91ca_0c10ba12345a_0_0_29a8</name> <parent>css_0_0_29a8</parent> <capability type='mdev'> <type id='vfio_ccw-io'/> <uuid>30820a6f-b1a5-4503-91ca-0c10ba12345a</uuid> <iommuGroup number='0'/> <attr name='assign_adapter' value='0x02'/> <attr name='assign_domain' value='0x002b'/> </capability> </device>", "<hostdev mode='subsystem' type='mdev' model='vfio-ccw'> <source> <address uuid=\"30820a6f-b1a5-4503-91ca-0c10ba12345a\"/> </source> </hostdev>", "virsh nodedev-autostart mdev_30820a6f_b1a5_4503_91ca_0c10ba12345a_0_0_29a8", "virsh nodedev-info mdev_30820a6f_b1a5_4503_91ca_0c10ba12345a_0_0_29a8 Name: mdev_30820a6f_b1a5_4503_91ca_0c10ba12345a_0_0_29a8 Parent: css_0_0_0121 Active: yes Persistent: yes Autostart: yes", "virsh dumpxml vm-name <domain> [...] <hostdev mode='subsystem' type='mdev' managed='no' model='vfio-ccw'> <source> <address uuid='10620d2f-ed4d-437b-8aff-beda461541f9'/> </source> <alias name='hostdev0'/> <address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0009 '/> </hostdev> [...] 
</domain>", "lscss | grep 0.0.0009 0.0.0009 0.0.0007 3390/0c 3990/e9 f0 f0 ff 12212231 00000000", "chccwdev -e 0.0009 Setting device 0.0.0009 online Done", "lsmod | grep vfio", "lspci -nkD 0000:00:00.0 0000: 1014:04ed Kernel driver in use: ism Kernel modules: ism 0001:00:00.0 0000: 1014:04ed Kernel driver in use: ism Kernel modules: ism 0002:00:00.0 0200: 15b3:1016 Subsystem: 15b3:0062 Kernel driver in use: mlx5_core Kernel modules: mlx5_core 0003:00:00.0 0200: 15b3:1016 Subsystem: 15b3:0062 Kernel driver in use: mlx5_core Kernel modules: mlx5_core", "virsh edit vm-name", "<hostdev mode=\"subsystem\" type=\"pci\" managed=\"yes\"> <driver name=\"vfio\"/> <source> <address domain=\" 0x0003 \" bus=\" 0x00 \" slot=\" 0x00 \" function=\" 0x0 \"/> </source> <address type=\"pci\"/> </hostdev>", "<hostdev mode=\"subsystem\" type=\"pci\" managed=\"yes\"> <driver name=\"vfio\"/> <source> <address domain=\" 0x0003 \" bus=\" 0x00 \" slot=\" 0x00 \" function=\" 0x0 \"/> </source> <address type=\"pci\"> <zpci uid=\"0x0008\" fid=\"0x001807\"/> </address> </hostdev>", "virsh shutdown vm-name", "lspci -nkD | grep 0003:00:00.0 0003:00:00.0 8086:9a09 (rev 01)", "virsh vol-info --pool guest_images firstimage Name: firstimage Type: block Capacity: 20.00 GB Allocation: 20.00 GB", "virsh pool-list --all --details Name State Autostart Persistent Capacity Allocation Available default running yes yes 48.97 GiB 23.93 GiB 25.03 GiB Downloads running yes yes 175.62 GiB 62.02 GiB 113.60 GiB RHEL-Storage-Pool running yes yes 214.62 GiB 93.02 GiB 168.60 GiB", "virsh pool-capabilities | grep \"'dir' supported='yes'\"", "virsh pool-define-as guest_images_dir dir --target \"/guest_images\" Pool guest_images_dir defined", "virsh pool-build guest_images_dir Pool guest_images_dir built ls -la /guest_images total 8 drwx------. 2 root root 4096 May 31 19:38 . dr-xr-xr-x. 
25 root root 4096 May 31 19:38 ..", "virsh pool-list --all Name State Autostart ----------------------------------------- default active yes guest_images_dir inactive no", "virsh pool-start guest_images_dir Pool guest_images_dir started", "virsh pool-autostart guest_images_dir Pool guest_images_dir marked as autostarted", "virsh pool-info guest_images_dir Name: guest_images_dir UUID: c7466869-e82a-a66c-2187-dc9d6f0877d0 State: running Persistent: yes Autostart: yes Capacity: 458.39 GB Allocation: 197.91 MB Available: 458.20 GB", "virsh pool-capabilities | grep \"'disk' supported='yes'\"", "GRUB_DISABLE_OS_PROBER=true", "GRUB_OS_PROBER_SKIP_LIST=\"5ef6313a-257c-4d43@/dev/sdb1\"", "virsh pool-define-as guest_images_disk disk --source-format=gpt --source-dev=/dev/sdb --target /dev Pool guest_images_disk defined", "virsh pool-build guest_images_disk Pool guest_images_disk built", "virsh pool-list --all Name State Autostart ----------------------------------------- default active yes guest_images_disk inactive no", "virsh pool-start guest_images_disk Pool guest_images_disk started", "virsh pool-autostart guest_images_disk Pool guest_images_disk marked as autostarted", "virsh pool-info guest_images_disk Name: guest_images_disk UUID: c7466869-e82a-a66c-2187-dc9d6f0877d0 State: running Persistent: yes Autostart: yes Capacity: 458.39 GB Allocation: 197.91 MB Available: 458.20 GB", "virsh pool-capabilities | grep \"'fs' supported='yes'\"", "GRUB_DISABLE_OS_PROBER=true", "GRUB_OS_PROBER_SKIP_LIST=\"5ef6313a-257c-4d43@/dev/sdb1\"", "virsh pool-define-as guest_images_fs fs --source-dev /dev/sdc1 --target /guest_images Pool guest_images_fs defined", "virsh pool-build guest_images_fs Pool guest_images_fs built ls -la /guest_images total 8 drwx------. 2 root root 4096 May 31 19:38 . dr-xr-xr-x. 25 root root 4096 May 31 19:38 ..", "virsh pool-list --all Name State Autostart ----------------------------------------- default active yes guest_images_fs inactive no", "virsh pool-start guest_images_fs Pool guest_images_fs started", "virsh pool-autostart guest_images_fs Pool guest_images_fs marked as autostarted", "virsh pool-info guest_images_fs Name: guest_images_fs UUID: c7466869-e82a-a66c-2187-dc9d6f0877d0 State: running Persistent: yes Autostart: yes Capacity: 458.39 GB Allocation: 197.91 MB Available: 458.20 GB", "mount | grep /guest_images /dev/sdc1 on /guest_images type ext4 (rw) ls -la /guest_images total 24 drwxr-xr-x. 3 root root 4096 May 31 19:47 . dr-xr-xr-x. 25 root root 4096 May 31 19:38 .. drwx------. 
2 root root 16384 May 31 14:18 lost+found", "gluster volume status Status of volume: gluster-vol1 Gluster process Port Online Pid ------------------------------------------------------------ Brick 222.111.222.111:/gluster-vol1 49155 Y 18634 Task Status of Volume gluster-vol1 ------------------------------------------------------------ There are no active volume tasks", "setsebool virt_use_fusefs on getsebool virt_use_fusefs virt_use_fusefs --> on", "virsh pool-capabilities | grep \"'gluster' supported='yes'\"", "virsh pool-define-as --name guest_images_glusterfs --type gluster --source-host 111.222.111.222 --source-name gluster-vol1 --source-path / Pool guest_images_glusterfs defined", "virsh pool-list --all Name State Autostart -------------------------------------------- default active yes guest_images_glusterfs inactive no", "virsh pool-start guest_images_glusterfs Pool guest_images_glusterfs started", "virsh pool-autostart guest_images_glusterfs Pool guest_images_glusterfs marked as autostarted", "virsh pool-info guest_images_glusterfs Name: guest_images_glusterfs UUID: c7466869-e82a-a66c-2187-dc9d6f0877d0 State: running Persistent: yes Autostart: yes Capacity: 458.39 GB Allocation: 197.91 MB Available: 458.20 GB", "virsh pool-capabilities | grep \"'iscsi' supported='yes'\"", "virsh pool-define-as --name guest_images_iscsi --type iscsi --source-host server1.example.com --source-dev iqn.2010-05.com.example.server1:iscsirhel7guest --target /dev/disk/by-path Pool guest_images_iscsi defined", "virsh pool-list --all Name State Autostart ----------------------------------------- default active yes guest_images_iscsi inactive no", "virsh pool-start guest_images_iscsi Pool guest_images_iscsi started", "virsh pool-autostart guest_images_iscsi Pool guest_images_iscsi marked as autostarted", "virsh pool-info guest_images_iscsi Name: guest_images_iscsi UUID: c7466869-e82a-a66c-2187-dc9d6f0877d0 State: running Persistent: yes Autostart: yes Capacity: 458.39 GB Allocation: 197.91 MB Available: 458.20 GB", "virsh pool-capabilities | grep \"'logical' supported='yes'\"", "virsh pool-define-as guest_images_lvm logical --source-name lvm_vg --target /dev/lvm_vg Pool guest_images_lvm defined", "virsh pool-list --all Name State Autostart ------------------------------------------- default active yes guest_images_lvm inactive no", "virsh pool-start guest_images_lvm Pool guest_images_lvm started", "virsh pool-autostart guest_images_lvm Pool guest_images_lvm marked as autostarted", "virsh pool-info guest_images_lvm Name: guest_images_lvm UUID: c7466869-e82a-a66c-2187-dc9d6f0877d0 State: running Persistent: yes Autostart: yes Capacity: 458.39 GB Allocation: 197.91 MB Available: 458.20 GB", "virsh pool-capabilities | grep \"<value>nfs</value>\"", "virsh pool-define-as --name guest_images_netfs --type netfs --source-host='111.222.111.222' --source-path='/home/net_mount' --source-format='nfs' --target='/var/lib/libvirt/images/nfspool'", "virsh pool-list --all Name State Autostart ----------------------------------------- default active yes guest_images_netfs inactive no", "virsh pool-start guest_images_netfs Pool guest_images_netfs started", "virsh pool-autostart guest_images_netfs Pool guest_images_netfs marked as autostarted", "virsh pool-info guest_images_netfs Name: guest_images_netfs UUID: c7466869-e82a-a66c-2187-dc9d6f0877d0 State: running Persistent: yes Autostart: yes Capacity: 458.39 GB Allocation: 197.91 MB Available: 458.20 GB", "virsh pool-capabilities | grep \"'scsi' supported='yes'\"", "virsh 
pool-define-as guest_images_vhba scsi --adapter-parent scsi_host3 --adapter-wwnn 5001a4a93526d0a1 --adapter-wwpn 5001a4ace3ee047d --target /dev/disk/ Pool guest_images_vhba defined", "virsh pool-list --all Name State Autostart ----------------------------------------- default active yes guest_images_vhba inactive no", "virsh pool-start guest_images_vhba Pool guest_images_vhba started", "virsh pool-autostart guest_images_vhba Pool guest_images_vhba marked as autostarted", "virsh pool-info guest_images_vhba Name: guest_images_vhba UUID: c7466869-e82a-a66c-2187-dc9d6f0877d0 State: running Persistent: yes Autostart: yes Capacity: 458.39 GB Allocation: 197.91 MB Available: 458.20 GB", "virsh pool-list --all Name State Autostart ------------------------------------------- default active yes Downloads active yes RHEL-Storage-Pool active yes", "virsh pool-destroy Downloads Pool Downloads destroyed", "virsh pool-delete Downloads Pool Downloads deleted", "virsh pool-undefine Downloads Pool Downloads has been undefined", "virsh pool-list --all Name State Autostart ------------------------------------------- default active yes rhel-Storage-Pool active yes", "virsh pool-define ~/guest_images.xml Pool defined from guest_images_dir", "<pool type='dir'> <name>dirpool</name> <target> <path>/guest_images</path> </target> </pool>", "virsh pool-define ~/guest_images.xml Pool defined from guest_images_disk", "<pool type='disk'> <name>phy_disk</name> <source> <device path='/dev/sdb'/> <format type='gpt'/> </source> <target> <path>/dev</path> </target> </pool>", "virsh pool-define ~/guest_images.xml Pool defined from guest_images_fs", "<pool type='fs'> <name>guest_images_fs</name> <source> <device path='/dev/sdc1'/> <format type='auto'/> </source> <target> <path>/guest_images</path> </target> </pool>", "virsh pool-define ~/guest_images.xml Pool defined from guest_images_glusterfs", "<pool type='gluster'> <name>Gluster_pool</name> <source> <host name='111.222.111.222'/> <dir path='/'/> <name>gluster-vol1</name> </source> </pool>", "virsh pool-define ~/guest_images.xml Pool defined from guest_images_iscsi", "<pool type='iscsi'> <name>iSCSI_pool</name> <source> <host name='server1.example.com'/> <device path='iqn.2010-05.com.example.server1:iscsirhel7guest'/> </source> <target> <path>/dev/disk/by-path</path> </target> </pool>", "virsh pool-define ~/guest_images.xml Pool defined from guest_images_logical", "<source> <device path='/dev/sda1'/> <device path='/dev/sdb3'/> <device path='/dev/sdc2'/> </source>", "<pool type='logical'> <name>guest_images_lvm</name> <source> <device path='/dev/sdc'/> <name>libvirt_lvm</name> <format type='lvm2'/> </source> <target> <path>/dev/libvirt_lvm</path> </target> </pool>", "virsh pool-define ~/guest_images.xml Pool defined from guest_images_netfs", "<pool type='netfs'> <name>nfspool</name> <source> <host name='file_server'/> <format type='nfs'/> <dir path='/home/net_mount'/> </source> <target> <path>/var/lib/libvirt/images/nfspool</path> </target> </pool>", "virsh pool-define ~/guest_images.xml Pool defined from guest_images_vhba", "<pool type='scsi'> <name>vhbapool_host3</name> <source> <adapter type='fc_host' wwnn='5001a4a93526d0a1' wwpn='5001a4ace3ee047d'/> </source> <target> <path>/dev/disk/by-path</path> </target> </pool>", "<pool type='scsi'> <name>vhbapool_host3</name> <source> <adapter type='fc_host' parent='scsi_host3' wwnn='5001a4a93526d0a1' wwpn='5001a4ace3ee047d'/> </source> <target> <path>/dev/disk/by-path</path> </target> </pool>", "virsh vol-list --pool 
RHEL-Storage-Pool --details Name Path Type Capacity Allocation --------------------------------------------------------------------------------------------- .bash_history /home/VirtualMachines/.bash_history file 18.70 KiB 20.00 KiB .bash_logout /home/VirtualMachines/.bash_logout file 18.00 B 4.00 KiB .bash_profile /home/VirtualMachines/.bash_profile file 193.00 B 4.00 KiB .bashrc /home/VirtualMachines/.bashrc file 1.29 KiB 4.00 KiB .git-prompt.sh /home/VirtualMachines/.git-prompt.sh file 15.84 KiB 16.00 KiB .gitconfig /home/VirtualMachines/.gitconfig file 167.00 B 4.00 KiB RHEL_Volume.qcow2 /home/VirtualMachines/RHEL8_Volume.qcow2 file 60.00 GiB 13.93 GiB", "virsh vol-info --pool RHEL-Storage-Pool --vol RHEL_Volume.qcow2 Name: RHEL_Volume.qcow2 Type: file Capacity: 60.00 GiB Allocation: 13.93 GiB", "virsh pool-list --details Name State Autostart Persistent Capacity Allocation Available -------------------------------------------------------------------------------------------- default running yes yes 48.97 GiB 36.34 GiB 12.63 GiB Downloads running yes yes 175.92 GiB 121.20 GiB 54.72 GiB VM-disks running yes yes 175.92 GiB 121.20 GiB 54.72 GiB", "virsh vol-create-as --pool guest-images-fs --name vm-disk1 --capacity 20 --format qcow2", "<disk type='volume' device='disk'> <driver name='qemu' type='qcow2'/> <source pool='guest-images-fs' volume='vm-disk1'/> <target dev='hdk' bus='ide'/> </disk>", "<disk type='network' device='disk'> <driver name='qemu' type='raw'/> <source protocol='gluster' name='Volume1/Image'> <host name='example.org' port='6000'/> </source> <target dev='vda' bus='virtio'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> </disk>", "<disk type='block' device='disk'> <driver name='qemu' type='raw'/> <source dev='/dev/mapper/mpatha' /> <target dev='sda' bus='scsi'/> </disk>", "<disk type='network' device='disk'> <driver name='qemu' type='raw'/> <source protocol='rbd' name='pool/image'> <host name='mon1.example.org' port='6321'/> </source> <target dev='vdc' bus='virtio'/> </disk>", "virsh attach-device --config testguest1 ~/vm-disk1.xml", "virsh vol-list --pool RHEL-SP Name Path --------------------------------------------------------------- .bash_history /home/VirtualMachines/.bash_history .bash_logout /home/VirtualMachines/.bash_logout .bash_profile /home/VirtualMachines/.bash_profile .bashrc /home/VirtualMachines/.bashrc .git-prompt.sh /home/VirtualMachines/.git-prompt.sh .gitconfig /home/VirtualMachines/.gitconfig vm-disk1 /home/VirtualMachines/vm-disk1", "virsh vol-wipe --pool RHEL-SP vm-disk1 Vol vm-disk1 wiped", "virsh vol-delete --pool RHEL-SP vm-disk1 Vol vm-disk1 deleted", "virsh vol-list --pool RHEL-SP Name Path --------------------------------------------------------------- .bash_history /home/VirtualMachines/.bash_history .bash_logout /home/VirtualMachines/.bash_logout .bash_profile /home/VirtualMachines/.bash_profile .bashrc /home/VirtualMachines/.bashrc .git-prompt.sh /home/VirtualMachines/.git-prompt.sh .gitconfig /home/VirtualMachines/.gitconfig", "qemu-img create -f <format> <image-name> <size>", "qemu-img create -f qcow2 test-image 30G Formatting 'test-img', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=32212254720 lazy_refcounts=off refcount_bits=16", "qemu-img info <test-img> image: test-img file format: qcow2 virtual size: 30 GiB (32212254720 bytes) disk size: 196 KiB cluster_size: 65536 Format specific information: compat: 1.1 compression type: zlib lazy refcounts: false refcount bits: 16 corrupt: 
false extended l2: false", "qemu-img check <test-name.qcow2> No errors were found on the image. 327434/327680 = 99.92% allocated, 0.00% fragmented, 0.00% compressed clusters Image end offset: 21478375424", "167 errors were found on the image. Data may be corrupted, or further writes to the image may corrupt it. 453368 leaked clusters were found on the image. This means waste of disk space, but no harm to data. 259 internal errors have occurred during the check. Image end offset: 21478375424", "qemu-img check -r all <test-name.qcow2> [...] 122 errors were found on the image. Data may be corrupted, or further writes to the image may corrupt it. 250 internal errors have occurred during the check. Image end offset: 27071414272", "virsh domblklist <vm-name> Target Source ---------------------------------------------------------- vda /home/username/disk-images/ example-image.qcow2", "cp <example-image.qcow2> <example-image-backup.qcow2>", "qemu-img resize <example-image.qcow2> +10G", "qemu-img info <converted-image.qcow2> image: converted-image.qcow2 file format: qcow2 virtual size: 30 GiB (32212254720 bytes) disk size: 196 KiB cluster_size: 65536 Format specific information: compat: 1.1 compression type: zlib lazy refcounts: false refcount bits: 16 corrupt: false extended l2: false", "qemu-img convert -f raw <original-image.img> -O qcow2 <converted-image.qcow2>", "qemu-img info <converted-image.qcow2> image: converted-image.qcow2 file format: qcow2 virtual size: 30 GiB (32212254720 bytes) disk size: 196 KiB cluster_size: 65536 Format specific information: compat: 1.1 compression type: zlib lazy refcounts: false refcount bits: 16 corrupt: false extended l2: false", "<secret ephemeral='no' private='yes'> <description>Passphrase for the iSCSI example.com server</description> <usage type='iscsi'> <target>iscsirhel7secret</target> </usage> </secret>", "virsh secret-define secret.xml", "virsh secret-list UUID Usage -------------------------------------------------------------- 2d7891af-20be-4e5e-af83-190e8a922360 iscsi iscsirhel7secret", "virsh secret-set-value --interactive 2d7891af-20be-4e5e-af83-190e8a922360 Enter new value for secret: Secret value set", "<pool type='iscsi'> <name>iscsirhel7pool</name> <source> <host name='192.0.2.1'/> <device path='iqn.2010-05.com.example.server1:iscsirhel7guest'/> <auth type='chap' username='_example-user_'> <secret usage='iscsirhel7secret'/> </auth> </source> <target> <path>/dev/disk/by-path</path> </target> </pool>", "<auth username='redhat'> <secret type='iscsi' usage='iscsirhel7secret'/> </auth>", "virsh pool-destroy iscsirhel7pool virsh pool-start iscsirhel7pool", "virsh nodedev-list --cap vports scsi_host3 scsi_host4", "virsh nodedev-dumpxml scsi_host3", "<device> <name>scsi_host3</name> <path>/sys/devices/pci0000:00/0000:00:04.0/0000:10:00.0/host3</path> <parent>pci_0000_10_00_0</parent> <capability type='scsi_host'> <host>3</host> <unique_id>0</unique_id> <capability type='fc_host'> <wwnn>20000000c9848140</wwnn> <wwpn>10000000c9848140</wwpn> <fabric_wwn>2002000573de9a81</fabric_wwn> </capability> <capability type='vport_ops'> <max_vports>127</max_vports> <vports>0</vports> </capability> </capability> </device>", "<device> <parent>scsi_host3</parent> <capability type='scsi_host'> <capability type='fc_host'> </capability> </capability> </device>", "<device> <name>vhba</name> <parent wwnn='20000000c9848140' wwpn='10000000c9848140'/> <capability type='scsi_host'> <capability type='fc_host'> </capability> </capability> </device>", "virsh nodedev-create 
vhba_host3 Node device scsi_host5 created from vhba_host3.xml", "virsh nodedev-dumpxml scsi_host5 <device> <name>scsi_host5</name> <path>/sys/devices/pci0000:00/0000:00:04.0/0000:10:00.0/host3/vport-3:0-0/host5</path> <parent>scsi_host3</parent> <capability type='scsi_host'> <host>5</host> <unique_id>2</unique_id> <capability type='fc_host'> <wwnn>5001a4a93526d0a1</wwnn> <wwpn>5001a4ace3ee047d</wwpn> <fabric_wwn>2002000573de9a81</fabric_wwn> </capability> </capability> </device>", "grubby --args=\"intel_iommu=on iommu_pt\" --update-kernel DEFAULT", "grubby --args=\"iommu=pt\" --update-kernel DEFAULT", "lspci -Dnn | grep VGA 0000:02:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK106GL [Quadro K4000] [ 10de:11fa ] (rev a1)", "grubby --args=\"pci-stub.ids=10de:11fa\" --update-kernel DEFAULT", "virsh nodedev-dumpxml pci_0000_02_00_0", "<device> <name>pci_0000_02_00_0</name> <path>/sys/devices/pci0000:00/0000:00:03.0/0000:02:00.0</path> <parent>pci_0000_00_03_0</parent> <driver> <name>pci-stub</name> </driver> <capability type='pci'> <domain>0</domain> <bus>2</bus> <slot>0</slot> <function>0</function> <product id='0x11fa'>GK106GL [Quadro K4000]</product> <vendor id='0x10de'>NVIDIA Corporation</vendor> <iommuGroup number='13'> <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/> <address domain='0x0000' bus='0x02' slot='0x00' function='0x1'/> </iommuGroup> <pci-express> <link validity='cap' port='0' speed='8' width='16'/> <link validity='sta' speed='2.5' width='16'/> </pci-express> </capability> </device>", "driverctl set-override 0000:02:00.1 vfio-pci", "<hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/> </source> </hostdev>", "virsh attach-device System1 --file /home/GPU-Assign.xml --persistent Device attached successfully.", "lshw -C display *-display description: 3D controller product: GP104GL [Tesla P4] vendor: NVIDIA Corporation physical id: 0 bus info: pci@0000:01:00.0 version: a1 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress cap_list configuration: driver=vfio-pci latency=0 resources: irq:16 memory:f6000000-f6ffffff memory:e0000000-efffffff memory:f0000000-f1ffffff", "blacklist nouveau options nouveau modeset=0", "dracut --force reboot", "lsmod | grep nvidia_vgpu_vfio nvidia_vgpu_vfio 45011 0 nvidia 14333621 10 nvidia_vgpu_vfio mdev 20414 2 vfio_mdev,nvidia_vgpu_vfio vfio 32695 3 vfio_mdev,nvidia_vgpu_vfio,vfio_iommu_type1 systemctl status nvidia-vgpu-mgr.service nvidia-vgpu-mgr.service - NVIDIA vGPU Manager Daemon Loaded: loaded (/usr/lib/systemd/system/nvidia-vgpu-mgr.service; enabled; vendor preset: disabled) Active: active (running) since Fri 2018-03-16 10:17:36 CET; 5h 8min ago Main PID: 1553 (nvidia-vgpu-mgr) [...]", "uuidgen 30820a6f-b1a5-4503-91ca-0c10ba58692a", "<device> <parent>pci_0000_01_00_0</parent> <capability type=\"mdev\"> <type id=\"nvidia-63\"/> <uuid>30820a6f-b1a5-4503-91ca-0c10ba58692a</uuid> </capability> </device>", "virsh nodedev-define vgpu-test.xml Node device mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0 created from vgpu-test.xml", "virsh nodedev-list --cap mdev --inactive mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0", "virsh nodedev-start mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0 Device mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0 started", "virsh nodedev-list --cap mdev mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0", "virsh nodedev-autostart 
mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0 Device mdev_d196754e_d8ed_4f43_bf22_684ed698b08b_0000_9b_00_0 marked as autostarted", "<hostdev mode='subsystem' type='mdev' managed='no' model='vfio-pci' display='on'> <source> <address uuid='30820a6f-b1a5-4503-91ca-0c10ba58692a'/> </source> </hostdev>", "virsh nodedev-info mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0 Name: virsh nodedev-autostart mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0 Parent: pci_0000_01_00_0 Active: yes Persistent: yes Autostart: yes", "lspci -d 10de: -k 07:00.0 VGA compatible controller: NVIDIA Corporation GV100GL [Tesla V100 SXM2 32GB] (rev a1) Subsystem: NVIDIA Corporation Device 12ce Kernel driver in use: nvidia Kernel modules: nouveau, nvidia_drm, nvidia", "virsh nodedev-list --cap mdev mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0", "virsh nodedev-destroy mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0 Destroyed node device 'mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0'", "virsh nodedev-info mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0 Name: virsh nodedev-autostart mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0 Parent: pci_0000_01_00_0 Active: no Persistent: yes Autostart: yes", "<hostdev mode='subsystem' type='mdev' managed='no' model='vfio-pci'> <source> <address uuid='30820a6f-b1a5-4503-91ca-0c10ba58692a'/> </source> </hostdev>", "virsh nodedev-undefine mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0 Undefined node device 'mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0'", "virsh nodedev-list --cap mdev --inactive mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0", "virsh nodedev-list --cap mdev", "virsh nodedev-list --cap mdev_types pci_0000_5b_00_0 pci_0000_9b_00_0", "virsh nodedev-dumpxml pci_0000_9b_00_0 <device> <name>pci_0000_9b_00_0</name> <path>/sys/devices/pci0000:9a/0000:9a:00.0/0000:9b:00.0</path> <parent>pci_0000_9a_00_0</parent> <driver> <name>nvidia</name> </driver> <capability type='pci'> <class>0x030000</class> <domain>0</domain> <bus>155</bus> <slot>0</slot> <function>0</function> <product id='0x1e30'>TU102GL [Quadro RTX 6000/8000]</product> <vendor id='0x10de'>NVIDIA Corporation</vendor> <capability type='mdev_types'> <type id='nvidia-346'> <name>GRID RTX6000-12C</name> <deviceAPI>vfio-pci</deviceAPI> <availableInstances>2</availableInstances> </type> <type id='nvidia-439'> <name>GRID RTX6000-3A</name> <deviceAPI>vfio-pci</deviceAPI> <availableInstances>8</availableInstances> </type> [...] <type id='nvidia-440'> <name>GRID RTX6000-4A</name> <deviceAPI>vfio-pci</deviceAPI> <availableInstances>6</availableInstances> </type> <type id='nvidia-261'> <name>GRID RTX6000-8Q</name> <deviceAPI>vfio-pci</deviceAPI> <availableInstances>3</availableInstances> </type> </capability> <iommuGroup number='216'> <address domain='0x0000' bus='0x9b' slot='0x00' function='0x3'/> <address domain='0x0000' bus='0x9b' slot='0x00' function='0x1'/> <address domain='0x0000' bus='0x9b' slot='0x00' function='0x2'/> <address domain='0x0000' bus='0x9b' slot='0x00' function='0x0'/> </iommuGroup> <numa node='2'/> <pci-express> <link validity='cap' port='0' speed='8' width='16'/> <link validity='sta' speed='2.5' width='8'/> </pci-express> </capability> </device>", "ip addr show virbr0 3: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN link/ether 1b:c4:94:cf:fd:17 brd ff:ff:ff:ff:ff:ff inet 192.0.2.1/24 brd 192.0.2.255 scope global virbr0", "ip addr [...] 
enp0s25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 54:ee:75:49:dc:46 brd ff:ff:ff:ff:ff:ff inet 192.0.2.1/24 brd 192.0.2.255 scope global dynamic noprefixroute enp0s25", "virt-xml testguest --edit --network bridge=bridge0 Domain 'testguest' defined successfully.", "virsh start testguest", "ip link show master bridge0 2: enp0s25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bridge0 state UP mode DEFAULT group default qlen 1000 link/ether 54:ee:75:49:dc:46 brd ff:ff:ff:ff:ff:ff 10: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bridge0 state UNKNOWN mode DEFAULT group default qlen 1000 link/ether fe:54:00:89:15:40 brd ff:ff:ff:ff:ff:ff", "ip addr [...] enp0s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 52:54:00:09:15:46 brd ff:ff:ff:ff:ff:ff inet 192.0.2.1/24 brd 192.0.2.255 scope global dynamic noprefixroute enp0s0", "ssh [email protected] [email protected]'s password: Last login: Mon Sep 24 12:05:36 2019 root~#*", "ip addr [...] enp0s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 52:54:00:09:15:46 brd ff:ff:ff:ff:ff:ff inet 192.0.2.1/24 brd 192.0.2.255 scope global dynamic noprefixroute enp0s0", "ssh [email protected] [email protected]'s password: Last login: Mon Sep 24 12:05:36 2019 root~#*", "chmod -R a+r /var/lib/tftpboot", "chown -R nobody: /var/lib/tftpboot", "chcon -R --reference /usr/sbin/dnsmasq /var/lib/tftpboot chcon -R --reference /usr/libexec/libvirt_leaseshelper /var/lib/tftpboot", "virsh net-destroy default", "virsh net-edit default", "<ip address='192.0.2.1' netmask='255.255.255.0'> <tftp root='/var/lib/tftpboot'/> <dhcp> <range start='192.0.2.2' end='192.0.2.254' /> <bootp file=' example-pxelinux '/> </dhcp> </ip>", "virsh net-start default", "virsh net-list Name State Autostart Persistent --------------------------------------------------- default active no no", "virt-install --pxe --network network=default --memory 2048 --vcpus 2 --disk size=10", "<interface type='network' > <mac address='52:54:00:66:79:14'/> <source network='default'/> <target dev='vnet0'/> <alias name='net0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> <boot order='1'/> </interface>", "virt-install --pxe --network bridge=breth0 --memory 2048 --vcpus 2 --disk size=10", "<interface type='bridge' > <mac address='52:54:00:5a:ad:cb'/> <source bridge='breth0'/> <target dev='vnet0'/> <alias name='net0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> <boot order='1'/> </interface>", "yum install nfs-utils -y", "mkdir shared-files", "virsh domifaddr testguest1 Name MAC address Protocol Address ---------------------------------------------------------------- vnet0 52:53:00:84:57:90 ipv4 192.0.2.2/24 virsh domifaddr testguest2 Name MAC address Protocol Address ---------------------------------------------------------------- vnet1 52:53:00:65:29:21 ipv4 192.0.2.3/24", "/home/<username>/Downloads/<shared_directory>/ <VM1-IP(options)> <VM2-IP(options)>", "/usr/local/shared-files/ 192.0.2.2(rw,sync) 192.0.2.3(rw,sync)", "exportfs -a", "systemctl start nfs-server", "ip addr 5: virbr0: [BROADCAST,MULTICAST,UP,LOWER_UP] mtu 1500 qdisc noqueue state UP group default qlen 1000 link/ether 52:54:00:32:ff:a5 brd ff:ff:ff:ff:ff:ff inet 192.0.2.1/24 brd 192.0.2.255 scope global virbr0 valid_lft forever preferred_lft forever", "mount 
192.0.2.1:/usr/local/shared-files /mnt/host-share", "Install-WindowsFeature NFS-Client", "Enable-WindowsOptionalFeature -FeatureName ServicesForNFS-ClientOnly, ClientForNFS-Infrastructure -Online -NoRestart", "C:\\Windows\\system32\\mount.exe -o anon \\\\192.0.2.1\\usr\\local\\shared-files Z:", "ls <mount_point> shared-file1 shared-file2 shared-file3", "getenforce Enforcing", "yum install edk2-ovmf", "virt-install --name rhel8sb --memory 4096 --vcpus 4 --os-variant rhel8.0 --boot uefi,nvram_template=/usr/share/OVMF/OVMF_VARS.secboot.fd --disk boot_order=2,size=10 --disk boot_order=1,device=cdrom,bus=scsi,path=/images/RHEL-8.0-installation.iso", "mokutil --sb-state SecureBoot enabled", "ls /usr/share/polkit-1/actions | grep libvirt ls /usr/share/polkit-1/rules.d | grep libvirt", "sed -i 's/#access_drivers = \\[ \"polkit\" \\]/access_drivers = \\[ \"polkit\" \\]/' /etc/libvirt/libvirtd.conf", "systemctl restart libvirtd", "virsh -c qemu:///system list --all Id Name State -------------------------------", "getsebool -a | grep virt [...] virt_sandbox_use_netlink --> off virt_sandbox_use_sys_admin --> off virt_transition_userdomain --> off virt_use_comm --> off virt_use_execmem --> off virt_use_fusefs --> off [...]", "grep facilities /proc/cpuinfo | grep 158", "ls /sys/firmware | grep uv", "virsh domcapabilities | grep unpack <feature policy='require' name='unpack'/>", "virsh dumpxml <vm_name> | grep \"<cpu mode='host-model'/>\"", "yum install guestfs-tools", "grubby --update-kernel=ALL --args=\"prot_virt=1\"", "[...] </memballoon> </devices> <launchSecurity type=\"s390-pv\"/> </domain>", "touch ~/secure-parameters", "ls /boot/loader/entries -l [...] -rw-r--r--. 1 root root 281 Oct 9 15:51 3ab27a195c2849429927b00679db15c1-4.18.0-240.el8.s390x.conf", "cat /boot/loader/entries/3ab27a195c2849429927b00679db15c1-4.18.0-240.el8.s390x.conf | grep options options root=/dev/mapper/rhel-root crashkernel=auto rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap", "echo \"root=/dev/mapper/rhel-root crashkernel=auto rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap swiotlb=262144\" > ~/secure-parameters", "genprotimg -i /boot/vmlinuz-4.18.0-240.el8.s390x -r /boot/initramfs-4.18.0-240.el8.s390x.img -p ~/secure-parameters -k HKD-8651-00020089A8.crt -o /boot/secure-image", "cat /boot/loader/entries/3ab27a195c2849429927b00679db15c1-4.18.0-240.el8.s390x.conf title Red Hat Enterprise Linux 8.3 version 4.18.0-240.el8.s390x linux /boot/secure-image [...]", "zipl -V", "shred /boot/vmlinuz-4.18.0-240.el8.s390x shred /boot/initramfs-4.18.0-240.el8.s390x.img shred secure-parameters", "#!/usr/bin/bash echo \"USD(cat /proc/cmdline) swiotlb=262144\" > parmfile cat > ./HKD.crt << EOF -----BEGIN CERTIFICATE----- 1234569901234569901234569901234569901234569901234569901234569900 1234569901234569901234569901234569901234569901234569901234569900 1234569901234569901234569901234569901234569901234569901234569900 1234569901234569901234569901234569901234569901234569901234569900 1234569901234569901234569901234569901234569901234569901234569900 1234569901234569901234569901234569901234569901234569901234569900 1234569901234569901234569901234569901234569901234569901234569900 1234569901234569901234569901234569901234569901234569901234569900 1234569901234569901234569901234569901234569901234569901234569900 1234569901234569901234569901234569901234569901234569901234569900 1234569901234569901234569901234569901234569901234569901234569900 1234569901234569901234569901234569901234569901234569901234569900 1234569901234569901234569901234569901234569901234569901234569900 
1234569901234569901234569901234569901234569901234569901234569900 1234569901234569901234569901234569901234569901234569901234569900 1234569901234569901234569901234569901234569901234569901234569900 1234569901234569901234569901234569901234569901234569901234569900 1234569901234569901234569901234569901234569901234569901234569900 1234569901234569901234569901234569901234569901234569901234569900 1234569901234569901234569901234569901234569901234569901234569900 1234569901234569901234569901234569901234569901234569901234569900 1234569901234569901234569901234569901234569901234569901234569900 1234569901234569901234569901234569901234569901234569901234569900 1234569901234569901234569901234569901234569901234569901234569900 1234569901234569901234569901234569901234569901234569901234569900 xLPRGYwhmXzKDg== -----END CERTIFICATE----- EOF version=USD(uname -r) kernel=/boot/vmlinuz-USDversion initrd=/boot/initramfs-USDversion.img genprotimg -k ./HKD.crt -p ./parmfile -i USDkernel -r USDinitrd -o /boot/secure-linux --no-verify cat >> /etc/zipl.conf<< EOF [secure] target=/boot image=/boot/secure-linux EOF zipl -V shutdown -h now", "virt-customize -a <vm_image_path> --selinux-relabel --firstboot <script_path>", "virsh dumpxml vm-name [...] <cpu mode='host-model'/> <devices> <disk type='file' device='disk'> <driver name='qemu' type='qcow2' cache='none' io='native'> <source file='/var/lib/libvirt/images/secure-guest.qcow2'/> <target dev='vda' bus='virtio'/> </disk> <interface type='network'> <source network='default'/> <model type='virtio'/> </interface> <console type='pty'/> <memballoon model='none'/> </devices> <launchSecurity type=\"s390-pv\"/> </domain>", "lszcrypt -V CARD.DOMAIN TYPE MODE STATUS REQUESTS PENDING HWTYPE QDEPTH FUNCTIONS DRIVER -------------------------------------------------------------------------------------------- 05 CEX5C CCA-Coproc online 1 0 11 08 S--D--N-- cex4card 05.0004 CEX5C CCA-Coproc online 1 0 11 08 S--D--N-- cex4queue 05.00ab CEX5C CCA-Coproc online 1 0 11 08 S--D--N-- cex4queue", "lsmod | grep vfio_ap vfio_ap 24576 0 [...]", "modprobe vfio_ap", "lszdev --list-types ap Cryptographic Adjunct Processor (AP) device", "echo \"obase=10; ibase=16; 04\" | bc 4 echo \"obase=10; ibase=16; AB\" | bc 171", "chzdev -t ap apmask=-5 aqmask=-4,-171", "lszcrypt -V CARD.DOMAIN TYPE MODE STATUS REQUESTS PENDING HWTYPE QDEPTH FUNCTIONS DRIVER -------------------------------------------------------------------------------------------- 05 CEX5C CCA-Coproc - 1 0 11 08 S--D--N-- cex4card 05.0004 CEX5C CCA-Coproc - 1 0 11 08 S--D--N-- vfio_ap 05.00ab CEX5C CCA-Coproc - 1 0 11 08 S--D--N-- vfio_ap", "vim vfio_ap.xml <device> <parent>ap_matrix</parent> <capability type=\"mdev\"> <type id=\"vfio_ap-passthrough\"/> <attr name='assign_adapter' value='0x05'/> <attr name='assign_domain' value='0x0004'/> <attr name='assign_domain' value='0x00ab'/> <attr name='assign_control_domain' value='0x00ab'/> </capability> </device>", "virsh nodedev-define vfio_ap.xml Node device 'mdev_8f9c4a73_1411_48d2_895d_34db9ac18f85_matrix' defined from 'vfio_ap.xml'", "virsh nodedev-start mdev_8f9c4a73_1411_48d2_895d_34db9ac18f85_matrix Device mdev_8f9c4a73_1411_48d2_895d_34db9ac18f85_matrix started", "cat /sys/devices/vfio_ap/matrix/mdev_supported_types/vfio_ap-passthrough/devices/ 669d9b23-fe1b-4ecb-be08-a2fabca99b71 /matrix 05.0004 05.00ab", "virsh nodedev-dumpxml mdev_8f9c4a73_1411_48d2_895d_34db9ac18f85_matrix <device> <name>mdev_8f9c4a73_1411_48d2_895d_34db9ac18f85_matrix</name> <parent>ap_matrix</parent> <capability 
type='mdev'> <type id='vfio_ap-passthrough'/> <uuid>8f9c4a73-1411-48d2-895d-34db9ac18f85</uuid> <iommuGroup number='0'/> <attr name='assign_adapter' value='0x05'/> <attr name='assign_domain' value='0x0004'/> <attr name='assign_domain' value='0x00ab'/> <attr name='assign_control_domain' value='0x00ab'/> </capability> </device>", "vim crypto-dev.xml", "<hostdev mode='subsystem' type='mdev' managed='no' model='vfio-ap'> <source> <address uuid='8f9c4a73-1411-48d2-895d-34db9ac18f85'/> </source> </hostdev>", "virsh attach-device testguest1 crypto-dev.xml --live --config", "lszcrypt -V CARD.DOMAIN TYPE MODE STATUS REQUESTS PENDING HWTYPE QDEPTH FUNCTIONS DRIVER -------------------------------------------------------------------------------------------- 05 CEX5C CCA-Coproc online 1 0 11 08 S--D--N-- cex4card 05.0004 CEX5C CCA-Coproc online 1 0 11 08 S--D--N-- cex4queue 05.00ab CEX5C CCA-Coproc online 1 0 11 08 S--D--N-- cex4queue", "lszcrypt -d C DOMAIN 00 01 02 03 04 05 06 07 08 09 0a 0b 0c 0d 0e 0f ------------------------------------------------------ 00 . . . . U . . . . . . . . . . . 10 . . . . . . . . . . . . . . . . 20 . . . . . . . . . . . . . . . . 30 . . . . . . . . . . . . . . . . 40 . . . . . . . . . . . . . . . . 50 . . . . . . . . . . . . . . . . 60 . . . . . . . . . . . . . . . . 70 . . . . . . . . . . . . . . . . 80 . . . . . . . . . . . . . . . . 90 . . . . . . . . . . . . . . . . a0 . . . . . . . . . . . B . . . . b0 . . . . . . . . . . . . . . . . c0 . . . . . . . . . . . . . . . . d0 . . . . . . . . . . . . . . . . e0 . . . . . . . . . . . . . . . . f0 . . . . . . . . . . . . . . . . ------------------------------------------------------ C: Control domain U: Usage domain B: Both (Control + Usage domain)", "{PackageManagerCommand} install edk2-ovmf", "{PackageManagerCommand} install swtpm libtpms", "<devices> [...] <tpm model='tpm-crb'> <backend type='emulator' version='2.0'/> </tpm> [...] </devices>", "Your device meets the requirements for standard hardware security.", "tuned-adm list Available profiles: - balanced - General non-specialized TuneD profile - desktop - Optimize for the desktop use-case [...] - virtual-guest - Optimize for running inside a virtual guest - virtual-host - Optimize for running KVM guests Current active profile: balanced", "tuned-adm profile selected-profile", "tuned-adm profile virtual-host", "tuned-adm profile virtual-guest", "tuned-adm active Current active profile: virtual-host", "tuned-adm verify Verification succeeded, current system settings match the preset profile. See tuned log file ('/var/log/tuned/tuned.log') for details.", "virsh dumpxml testguest | grep memballoon <memballoon model='virtio'> </memballoon>", "virsh dominfo testguest Max memory: 2097152 KiB Used memory: 2097152 KiB", "virsh dumpxml testguest | grep memballoon <memballoon model='virtio'> </memballoon>", "virsh dominfo testguest Max memory: 2097152 KiB Used memory: 2097152 KiB", "virt-xml testguest --edit --memory memory=4096,currentMemory=4096 Domain 'testguest' defined successfully. 
Changes will take effect after the domain is fully powered off.", "virsh setmem testguest --current 2048", "virsh dominfo testguest Max memory: 4194304 KiB Used memory: 2097152 KiB", "virsh domstats --balloon testguest Domain: 'testguest' balloon.current=365624 balloon.maximum=4194304 balloon.swap_in=0 balloon.swap_out=0 balloon.major_fault=306 balloon.minor_fault=156117 balloon.unused=3834448 balloon.available=4035008 balloon.usable=3746340 balloon.last-update=1587971682 balloon.disk_caches=75444 balloon.hugetlb_pgalloc=0 balloon.hugetlb_pgfail=0 balloon.rss=1005456", "virsh edit testguest", "<memoryBacking> <hugepages> <page size='1' unit='GiB'/> </hugepages> </memoryBacking>", "cat /proc/meminfo | grep Huge HugePages_Total: 4 HugePages_Free: 2 HugePages_Rsvd: 1 Hugepagesize: 1024000 kB", "<domain> [...] <blkiotune> <weight>800</weight> <device> <path>/dev/sda</path> <weight>1000</weight> </device> <device> <path>/dev/sdb</path> <weight>500</weight> </device> </blkiotune> [...] </domain>", "virsh blkiotune VM-name --device-weights device , I/O-weight", "virsh blkiotune testguest1 --device-weights /dev/sda, 500", "virsh blkiotune testguest1 Block I/O tuning parameters for domain testguest1: weight : 800 device_weight : [ {\"sda\": 500}, ]", "virsh domblklist rollin-coal Target Source ------------------------------------------------ vda /var/lib/libvirt/images/rollin-coal.qcow2 sda - sdb /home/horridly-demanding-processes.iso", "lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT zram0 252:0 0 4G 0 disk [SWAP] nvme0n1 259:0 0 238.5G 0 disk ├─nvme0n1p1 259:1 0 600M 0 part /boot/efi ├─nvme0n1p2 259:2 0 1G 0 part /boot └─nvme0n1p3 259:3 0 236.9G 0 part └─luks-a1123911-6f37-463c-b4eb-fxzy1ac12fea 253:0 0 236.9G 0 crypt /home", "virsh blkiotune VM-name --parameter device , limit", "virsh blkiotune rollin-coal --device-read-iops-sec /dev/nvme0n1p3,1000 --device-write-iops-sec /dev/nvme0n1p3,1000 --device-write-bytes-sec /dev/nvme0n1p3,52428800 --device-read-bytes-sec /dev/nvme0n1p3,52428800", "virsh edit <example_vm>", "<disk type='block' device='disk'> <driver name='qemu' type='raw' queues='N' /> <source dev='/dev/sda'/> <target dev='vda' bus='virtio'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> </disk>", "<controller type='scsi' index='0' model='virtio-scsi'> <driver queues='N' /> </controller>", "virsh edit <testguest1> <domain type='kvm'> <name>testguest1</name> <vcpu placement='static'>8</vcpu> <iothreads>1</iothreads> </domain>", "virsh edit <testguest1> <domain type='kvm'> <name>testguest1</name> <devices> <disk type='file' device='disk'> <driver name='qemu' type='raw' cache='none' io='native' iothread='1' /> <source file='/var/lib/libvirt/images/test-disk.raw'/> <target dev='vda' bus='virtio'/> <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/> </disk> </devices> </domain>", "virsh edit <testguest1> <domain type='kvm'> <name>testguest1</name> <devices> <controller type='scsi' index='0' model='virtio-scsi'> <driver iothread='1' /> <address type='pci' domain='0x0000' bus='0x00' slot='0x0b' function='0x0'/> </controller> </devices> </domain>", "virsh edit <vm_name>", "<domain type='kvm'> <name>testguest1</name> <devices> <disk type='file' device='disk'> <driver name='qemu' type='raw' cache='none' io='native' iothread='1'/> <source file='/var/lib/libvirt/images/test-disk.raw'/> <target dev='vda' bus='virtio'/> <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/> </disk> </devices> </domain>", "virt-xml testguest1 
--edit --cpu host-model", "virsh vcpucount testguest maximum config 4 maximum live 2 current config 2 current live 1", "virsh setvcpus testguest 8 --maximum --config", "virsh setvcpus testguest 4 --live", "virsh setvcpus testguest 1 --config", "virsh vcpucount testguest maximum config 8 maximum live 4 current config 1 current live 4", "virsh nodeinfo CPU model: x86_64 CPU(s): 48 CPU frequency: 1200 MHz CPU socket(s): 1 Core(s) per socket: 12 Thread(s) per core: 2 NUMA cell(s): 2 Memory size: 67012964 KiB", "yum install numactl", "virt-xml testguest5 --edit --vcpus placement=auto virt-xml testguest5 --edit --numatune mode=preferred", "echo 1 > /proc/sys/kernel/numa_balancing", "systemctl start numad", "numactl --hardware available: 2 nodes (0-1) node 0 size: 18156 MB node 0 free: 9053 MB node 1 size: 18180 MB node 1 free: 6853 MB node distances: node 0 1 0: 10 20 1: 20 10", "virsh edit <testguest6> <domain type='kvm'> <name>testguest6</name> <vcpu placement='static'>16</vcpu> <cpu ...> <numa> <cell id='0' cpus='0-7' memory='16' unit='GiB'/> <cell id='1' cpus='8-15' memory='16' unit='GiB'/> </numa> </domain>", "lscpu -p=node,cpu Node,CPU 0,0 0,1 0,2 0,3 0,4 0,5 0,6 0,7 1,0 1,1 1,2 1,3 1,4 1,5 1,6 1,7", "lscpu -p=node,cpu Node,CPU 0,0 0,1 0,2 0,3", "virsh vcpupin testguest6 0 1 virsh vcpupin testguest6 1 3 virsh vcpupin testguest6 2 5 virsh vcpupin testguest6 3 7", "virsh vcpupin testguest6 VCPU CPU Affinity ---------------------- 0 1 1 3 2 5 3 7", "virsh emulatorpin testguest6 2,4 virsh emulatorpin testguest6 emulator: CPU Affinity ---------------------------------- *: 2,4", "virsh schedinfo <vm_name> Scheduler : posix cpu_shares : 0 vcpu_period : 0 vcpu_quota : 0 emulator_period: 0 emulator_quota : 0 global_period : 0 global_quota : 0 iothread_period: 0 iothread_quota : 0", "virsh schedinfo <vm_name> --set vcpu_period=100000", "virsh schedinfo <vm_name> --set vcpu_quota=50000", "virsh schedinfo <vm_name> Scheduler : posix cpu_shares : 2048 vcpu_period : 100000 vcpu_quota : 50000", "virsh schedinfo <vm_name> Scheduler : posix cpu_shares : 1024 vcpu_period : 0 vcpu_quota : 0 emulator_period: 0 emulator_quota : 0 global_period : 0 global_quota : 0 iothread_period: 0 iothread_quota : 0", "virsh schedinfo <vm_name> --set cpu_shares=2048 Scheduler : posix cpu_shares : 2048 vcpu_period : 0 vcpu_quota : 0 emulator_period: 0 emulator_quota : 0 global_period : 0 global_quota : 0 iothread_period: 0 iothread_quota : 0", "systemctl stop ksm systemctl stop ksmtuned", "systemctl disable ksm Removed /etc/systemd/system/multi-user.target.wants/ksm.service. 
systemctl disable ksmtuned Removed /etc/systemd/system/multi-user.target.wants/ksmtuned.service.", "echo 2 > /sys/kernel/mm/ksm/run", "lsmod | grep vhost vhost_net 32768 1 vhost 53248 1 vhost_net tap 24576 1 vhost_net tun 57344 6 vhost_net", "modprobe vhost_net", "<interface type='network'> <source network='default'/> <model type='virtio'/> <driver name='vhost' queues='N'/> </interface>", "ethtool -C tap0 rx-frames 64", "yum install perf", "perf kvm stat report Analyze events for all VMs, all VCPUs: VM-EXIT Samples Samples% Time% Min Time Max Time Avg time EXTERNAL_INTERRUPT 365634 31.59% 18.04% 0.42us 58780.59us 204.08us ( +- 0.99% ) MSR_WRITE 293428 25.35% 0.13% 0.59us 17873.02us 1.80us ( +- 4.63% ) PREEMPTION_TIMER 276162 23.86% 0.23% 0.51us 21396.03us 3.38us ( +- 5.19% ) PAUSE_INSTRUCTION 189375 16.36% 11.75% 0.72us 29655.25us 256.77us ( +- 0.70% ) HLT 20440 1.77% 69.83% 0.62us 79319.41us 14134.56us ( +- 0.79% ) VMCALL 12426 1.07% 0.03% 1.02us 5416.25us 8.77us ( +- 7.36% ) EXCEPTION_NMI 27 0.00% 0.00% 0.69us 1.34us 0.98us ( +- 3.50% ) EPT_MISCONFIG 5 0.00% 0.00% 5.15us 10.85us 7.88us ( +- 11.67% ) Total Samples:1157497, Total events handled time:413728274.66us.", "numastat -c qemu-kvm Per-node process memory usage (in MBs) PID Node 0 Node 1 Node 2 Node 3 Node 4 Node 5 Node 6 Node 7 Total --------------- ------ ------ ------ ------ ------ ------ ------ ------ ----- 51722 (qemu-kvm) 68 16 357 6936 2 3 147 598 8128 51747 (qemu-kvm) 245 11 5 18 5172 2532 1 92 8076 53736 (qemu-kvm) 62 432 1661 506 4851 136 22 445 8116 53773 (qemu-kvm) 1393 3 1 2 12 0 0 6702 8114 --------------- ------ ------ ------ ------ ------ ------ ------ ------ ----- Total 1769 463 2024 7462 10037 2672 169 7837 32434", "numastat -c qemu-kvm Per-node process memory usage (in MBs) PID Node 0 Node 1 Node 2 Node 3 Node 4 Node 5 Node 6 Node 7 Total --------------- ------ ------ ------ ------ ------ ------ ------ ------ ----- 51747 (qemu-kvm) 0 0 7 0 8072 0 1 0 8080 53736 (qemu-kvm) 0 0 7 0 0 0 8113 0 8120 53773 (qemu-kvm) 0 0 7 0 0 0 1 8110 8118 59065 (qemu-kvm) 0 0 8050 0 0 0 0 0 8051 --------------- ------ ------ ------ ------ ------ ------ ------ ------ ----- Total 0 0 8072 0 8072 0 8114 8110 32368", "--disk path= /usr/share/virtio-win/virtio-win.iso ,device=cdrom", "--os-variant win10", "osinfo-query os", "--boot uefi --tpm model=tpm-crb,backend.type=emulator,backend.version=2.0", "virsh edit windows-vm", "<os firmware='efi' > <type arch='x86_64' machine='pc-q35-6.2'>hvm</type> <boot dev='hd'/> </os>", "<devices> <tpm model='tpm-crb'> <backend type='emulator' version='2.0'/> </tpm> </devices>", "subscription-manager refresh All local data refreshed", "yum install -y virtio-win", "yum upgrade -y virtio-win", "ls /usr/share/virtio-win/ drivers/ guest-agent/ virtio-win-1.9.9.iso virtio-win.iso", "virt-xml WindowsVM --add-device --disk virtio-win.iso,device=cdrom Domain 'WindowsVM' defined successfully.", "C:\\WINDOWS\\system32\\netsh dump > backup.txt", "C:\\WINDOWS\\system32\\msiexec.exe /i X :\\virtio-win-gt-x86.msi /passive /norestart", "C:\\WINDOWS\\system32\\netsh -f backup.txt", "virsh edit windows-vm", "<features> [...] <hyperv> <relaxed state='on'/> <vapic state='on'/> <spinlocks state='on' retries='8191'/> <vpindex state='on'/> <runtime state='on' /> <synic state='on'/> <stimer state='on'> <direct state='on'/> </stimer> <frequencies state='on'/> </hyperv> [...] 
</features>", "<clock offset='localtime'> <timer name='hypervclock' present='yes'/> </clock>", "<hyperv> <relaxed state='on'/> <vapic state='on'/> <spinlocks state='on' retries='8191'/> <vpindex state='on'/> <runtime state='on' /> <synic state='on'/> <stimer state='on'> <direct state='on'/> </stimer> <frequencies state='on'/> </hyperv> <clock offset='localtime'> <timer name='hypervclock' present='yes'/> </clock>", "bcdedit /set useplatformclock No", "{PackageManagerCommand} install edk2-ovmf", "{PackageManagerCommand} install swtpm libtpms", "<devices> [...] <tpm model='tpm-crb'> <backend type='emulator' version='2.0'/> </tpm> [...] </devices>", "Your device meets the requirements for standard hardware security.", "*modprobe kvm hpage=1 nested=1* modprobe: ERROR: could not insert 'kvm': Invalid argument *dmesg |tail -1* [90226.508366] kvm-s390: A KVM host that supports nesting cannot back its KVM guests with huge pages", "cat /sys/module/kvm_intel/parameters/nested", "modprobe -r kvm_intel", "modprobe kvm_intel nested=1", "options kvm_intel nested=1", "virsh edit Intel-L1", "<cpu mode='host-passthrough' />", "<cpu mode ='custom' match ='exact' check='partial'> <model fallback='allow'> Haswell-noTSX </model> <feature policy='require' name='vmx'/> </cpu>", "cat /sys/module/kvm_amd/parameters/nested", "modprobe -r kvm_amd", "modprobe kvm_amd nested=1", "options kvm_amd nested=1", "virsh edit AMD-L1", "<cpu mode='host-passthrough' />", "<cpu mode=\"custom\" match=\"exact\" check=\"none\"> <model fallback=\"allow\"> EPYC-IBPB </model> <feature policy=\"require\" name=\"svm\"/> </cpu>", "cat /sys/module/kvm/parameters/nested", "modprobe -r kvm", "modprobe kvm nested=1", "options kvm nested=1", "cat /sys/module/kvm_hv/parameters/nested", "modprobe -r kvm_hv", "modprobe kvm_hv nested=1", "options kvm_hv nested=1", "<nested-hv state='on'/>", "log_filters=\"3:remote 4:event 3:util.json 3:rpc\" log_outputs=\"1:file:/var/log/libvirt/libvirt.log\"", "systemctl restart libvirtd.service", "virt-admin daemon-log-filters >> virt-filters-backup", "virt-admin daemon-log-filters \"3:remote 4:event 3:util.json 3:rpc\"", "virt-admin daemon-log-outputs \"1:file:/var/log/libvirt/libvirt.log\"", "virt-admin daemon-log-filters Logging filters:", "virsh dump lander1 /core/file/gargantua.file --memory-only Domain 'lander1' dumped to /core/file/gargantua.file", "crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M", "pgrep libvirt 22014 22025", "gstack 22014 Thread 3 (Thread 0x7f33edaf7700 (LWP 22017)): #0 0x00007f33f81aef21 in poll () from /lib64/libc.so.6 #1 0x00007f33f89059b6 in g_main_context_iterate.isra () from /lib64/libglib-2.0.so.0 #2 0x00007f33f8905d72 in g_main_loop_run () from /lib64/libglib-2.0.so.0", "virsh dumpxml VM-name | grep machine=", "/usr/libexec/qemu-kvm -M help", "<domain type='qemu'>", "<domain type='kvm'>", "<disk type=\"block\" device=\"lun\">", "<hostdev mode='subsystem' type='scsi'>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html-single/configuring_and_managing_virtualization/index
Developing C and C++ applications in RHEL 9
Developing C and C++ applications in RHEL 9 Red Hat Enterprise Linux 9 Setting up a developer workstation, and developing and debugging C and C++ applications in Red Hat Enterprise Linux 9 Red Hat Customer Content Services
[ "subscription-manager repos --enable rhel-9-for-USD(uname -i)-baseos-debug-rpms subscription-manager repos --enable rhel-9-for-USD(uname -i)-baseos-source-rpms subscription-manager repos --enable rhel-9-for-USD(uname -i)-appstream-debug-rpms subscription-manager repos --enable rhel-9-for-USD(uname -i)-appstream-source-rpms", "dnf install git", "git config --global user.name \" Full Name \" git config --global user.email \" [email protected] \"", "git config --global core.editor command", "man git man gittutorial man gittutorial-2", "dnf group install \"Development Tools\"", "dnf install llvm-toolset", "dnf install gcc-gfortran", "dnf install gdb valgrind systemtap ltrace strace", "dnf install dnf-utils", "stap-prep", "dnf install perf papi pcp-zeroconf valgrind strace sysstat systemtap", "stap-prep", "systemctl enable pmcd && systemctl start pmcd", "man gcc", "gcc -c source.c another_source.c", "gcc ... -g", "man gcc", "man gcc", "gcc ... -O2 -g -Wall -Wl,-z,now,-z,relro -fstack-protector-strong -fstack-clash-protection -D_FORTIFY_SOURCE=2", "gcc ... -Walloc-zero -Walloca-larger-than -Wextra -Wformat-security -Wvla-larger-than", "gcc ... objfile.o another_object.o ... -o executable-file", "mkdir hello-c cd hello-c", "#include <stdio.h> int main() { printf(\"Hello, World!\\n\"); return 0; }", "gcc hello.c -o helloworld", "./helloworld Hello, World!", "mkdir hello-c cd hello-c", "#include <stdio.h> int main() { printf(\"Hello, World!\\n\"); return 0; }", "gcc -c hello.c", "gcc hello.o -o helloworld", "./helloworld Hello, World!", "mkdir hello-cpp cd hello-cpp", "#include <iostream> int main() { std::cout << \"Hello, World!\\n\"; return 0; }", "g++ hello.cpp -o helloworld", "./helloworld Hello, World!", "mkdir hello-cpp cd hello-cpp", "#include <iostream> int main() { std::cout << \"Hello, World!\\n\"; return 0; }", "g++ -c hello.cpp", "g++ hello.o -o helloworld", "./helloworld Hello, World!", "gcc ... -l foo", "gcc ... -I include_path", "gcc ... -L library_path -l foo", "gcc ... -I header_path -c", "gcc ... -L library_path -l foo", "./program", "gcc ... -L library_path -l foo", "gcc ... -L library_path -l foo -Wl,-rpath= library_path", "./program", "export LD_LIBRARY_PATH= library_path :USDLD_LIBRARY_PATH ./program", "man ld.so", "cat /etc/ld.so.conf", "ldconfig -v", "gcc ... path/to/libfoo.a", "gcc ... -Wl,-Bstatic -l first -Wl,-Bdynamic -l second", "gcc ... -l foo", "objdump -p somelibrary | grep SONAME", "gcc ... -c -fPIC some_file.c", "gcc -shared -o libfoo.so.x.y -Wl,-soname, libfoo.so.x some_file.o", "cp libfoo.so.x.y /usr/lib64", "ln -s libfoo.so.x.y libfoo.so.x ln -s libfoo.so.x libfoo.so", "gcc -c source_file.c", "ar rcs lib foo .a source_file.o", "nm libfoo.a", "gcc ... 
-l foo", "man ar", "all: hello hello: hello.o gcc hello.o -o hello hello.o: hello.c gcc -c hello.c -o hello.o", "CC=gcc CFLAGS=-c -Wall SOURCE=hello.c OBJ=USD(SOURCE:.c=.o) EXE=hello all: USD(SOURCE) USD(EXE) USD(EXE): USD(OBJ) USD(CC) USD(OBJ) -o USD@ %.o: %.c USD(CC) USD(CFLAGS) USD< -o USD@ clean: rm -rf USD(OBJ) USD(EXE)", "mkdir hellomake cd hellomake", "#include <stdio.h> int main(int argc, char *argv[]) { printf(\"Hello, World!\\n\"); return 0; }", "CC=gcc CFLAGS=-c -Wall SOURCE=hello.c OBJ=USD(SOURCE:.c=.o) EXE=hello all: USD(SOURCE) USD(EXE) USD(EXE): USD(OBJ) USD(CC) USD(OBJ) -o USD@ %.o: %.c USD(CC) USD(CFLAGS) USD< -o USD@ clean: rm -rf USD(OBJ) USD(EXE)", "make gcc -c -Wall hello.c -o hello.o gcc hello.o -o hello", "./hello Hello, World!", "make clean rm -rf hello.o hello", "man make info make", "gcc ... -g", "man gcc", "gdb -q /bin/ls Reading symbols from /bin/ls...Reading symbols from .gnu_debugdata for /usr/bin/ls...(no debugging symbols found)...done. (no debugging symbols found)...done. Missing separate debuginfos, use: dnf debuginfo-install coreutils-8.30-6.el8.x86_64 (gdb)", "(gdb) q", "dnf debuginfo-install coreutils-8.30-6.el8.x86_64", "which less /usr/bin/less", "locate libz | grep so /usr/lib64/libz.so.1 /usr/lib64/libz.so.1.2.11", "dnf install mlocate updatedb", "rpm -qf /usr/lib64/libz.so.1.2.7 zlib-1.2.11-10.el8.x86_64", "debuginfo-install zlib-1.2.11-10.el8.x86_64", "gdb program", "ps -C program -o pid h pid", "gdb -p pid", "(gdb) shell ps -C program -o pid h pid", "(gdb) attach pid", "(gdb) file path/to/program", "(gdb) help info", "(gdb) br file:line", "(gdb) br line", "(gdb) br function_name", "(gdb) br file:line if condition", "(gdb) info br", "(gdb) delete number", "(gdb) clear file:line", "(gdb) watch expression", "(gdb) rwatch expression", "(gdb) awatch expression", "(gdb) info br", "(gdb) delete num", "strace -fvttTyy -s 256 -e trace= call program", "ps -C program (...) strace -fvttTyy -s 256 -e trace= call -p pid", "strace ... |& tee your_log_file.log", "man strace", "ltrace -f -l library -e function program", "ps -C program (...) ltrace -f -l library -e function -p pid program", "ltrace ... |& tee your_log_file.log", "man ltrace", "ps -aux", "stap /usr/share/systemtap/examples/process/strace.stp -x pid", "(gdb) catch syscall syscall-name", "(gdb) r", "(gdb) c", "(gdb) catch signal signal-type", "(gdb) r", "(gdb) c", "ulimit -a", "DumpCore=yes DefaultLimitCORE=infinity", "systemctl daemon-reexec", "ulimit -c unlimited", "dnf install sos", "sosreport", "coredumpctl list executable-name coredumpctl dump executable-name > /path/to/file-for-export", "eu-unstrip -n --core= ./core.9814 0x400000+0x207000 2818b2009547f780a5639c904cded443e564973e@0x400284 /usr/bin/sleep /usr/lib/debug/bin/sleep.debug [exe] 0x7fff26fff000+0x1000 1e2a683b7d877576970e4275d41a6aaec280795e@0x7fff26fff340 . 
- linux-vdso.so.1 0x35e7e00000+0x3b6000 374add1ead31ccb449779bc7ee7877de3377e5ad@0x35e7e00280 /usr/lib64/libc-2.14.90.so /usr/lib/debug/lib64/libc-2.14.90.so.debug libc.so.6 0x35e7a00000+0x224000 3ed9e61c2b7e707ce244816335776afa2ad0307d@0x35e7a001d8 /usr/lib64/ld-2.14.90.so /usr/lib/debug/lib64/ld-2.14.90.so.debug ld-linux-x86-64.so.2", "eu-readelf -n executable_file", "gdb -e executable_file -c core_file", "(gdb) symbol-file program.debug", "sysctl kernel.core_pattern", "kernel.core_pattern = |/usr/lib/systemd/systemd-coredump", "pgrep -a executable-name-fragment", "PID command-line", "pgrep -a bc 5459 bc", "kill -ABRT PID", "coredumpctl list PID", "coredumpctl list 5459 TIME PID UID GID SIG COREFILE EXE Thu 2019-11-07 15:14:46 CET 5459 1000 1000 6 present /usr/bin/bc", "coredumpctl info PID", "coredumpctl debug PID", "Missing separate debuginfos, use: dnf debuginfo-install bc-1.07.1-5.el8.x86_64", "coredumpctl dump PID > /path/to/file_for_export", "ps -C some-program", "gcore -o filename pid", "sosreport", "(gdb) set use-coredump-filter off", "(gdb) set dump-excluded-mappings on", "(gdb) gcore core-file", "dnf install gcc-toolset- N", "dnf list available gcc-toolset- N -\\*", "dnf install package_name", "dnf install gcc-toolset-13-annobin-annocheck gcc-toolset-13-binutils-devel", "dnf remove gcc-toolset- N \\*", "scl enable gcc-toolset- N tool", "scl enable gcc-toolset- N bash", "scl enable gcc-toolset-12 'gcc -lsomelib objfile.o'", "scl enable gcc-toolset-12 'gcc objfile.o -lsomelib'", "scl enable gcc-toolset-12 'ld -lsomelib objfile.o'", "scl enable gcc-toolset-12 'ld objfile.o -lsomelib'", "cc1: fatal error: inaccessible plugin file opt/rh/gcc-toolset-12/root/usr/lib/gcc/ architecture -linux-gnu/12/plugin/gcc-annobin.so expanded from short plugin name gcc-annobin: No such file or directory", "cd /opt/rh/gcc-toolset-12/root/usr/lib/gcc/ architecture -linux-gnu/12/plugin ln -s annobin.so gcc-annobin.so", "scl enable gcc-toolset-13 'gcc -lsomelib objfile.o'", "scl enable gcc-toolset-13 'gcc objfile.o -lsomelib'", "scl enable gcc-toolset-13 'ld -lsomelib objfile.o'", "scl enable gcc-toolset-13 'ld objfile.o -lsomelib'", "cc1: fatal error: inaccessible plugin file opt/rh/gcc-toolset-13/root/usr/lib/gcc/ architecture -linux-gnu/13/plugin/gcc-annobin.so expanded from short plugin name gcc-annobin: No such file or directory", "cd /opt/rh/gcc-toolset-13/root/usr/lib/gcc/ architecture -linux-gnu/13/plugin ln -s annobin.so gcc-annobin.so", "scl enable gcc-toolset-14 'gcc -lsomelib objfile.o'", "scl enable gcc-toolset-14 'gcc objfile.o -lsomelib'", "scl enable gcc-toolset-14 'ld -lsomelib objfile.o'", "scl enable gcc-toolset-14 'ld objfile.o -lsomelib'", "cc1: fatal error: inaccessible plugin file opt/rh/gcc-toolset-14/root/usr/lib/gcc/ architecture -linux-gnu/14/plugin/gcc-annobin.so expanded from short plugin name gcc-annobin: No such file or directory", "cd /opt/rh/gcc-toolset-14/root/usr/lib/gcc/ architecture -linux-gnu/14/plugin ln -s annobin.so gcc-annobin.so", "podman login registry.redhat.io Username: username Password: ********", "podman pull registry.redhat.io/rhel8/gcc-toolset- <toolset_version> -toolchain", "podman images", "podman run -it image_name /bin/bash", "podman login registry.redhat.io Username: username Password: ********", "podman pull registry.redhat.io/rhel9/gcc-toolset-14-toolchain", "podman run -it registry.redhat.io/rhel9/gcc-toolset-14-toolchain /bin/bash", "bash-4.4USD gcc -v gcc version 14.2.1 20240801 (Red Hat 14.2.1-1) (GCC)", "bash-4.4USD rpm -qa", "gcc 
-fplugin=annobin", "gcc -iplugindir= /path/to/directory/containing/annobin/", "gcc --print-file-name=plugin", "clang -fplugin= /path/to/directory/containing/annobin/", "gcc -fplugin=annobin -fplugin-arg-annobin- option file-name", "gcc -fplugin=annobin -fplugin-arg-annobin-verbose file-name", "clang -fplugin= /path/to/directory/containing/annobin/ -Xclang -plugin-arg-annobin -Xclang option file-name", "clang -fplugin=/usr/lib64/clang/10/lib/annobin.so -Xclang -plugin-arg-annobin -Xclang verbose file-name", "annocheck file-name", "annocheck directory-name", "annocheck rpm-package-name", "annocheck rpm-package-name --debug-rpm debuginfo-rpm", "annocheck --enable-built-by", "annocheck --enable-notes", "annocheck --section-size= name", "annocheck --enable-notes --disable-hardened file-name", "objcopy --merge-notes file-name", "cc1: fatal error: inaccessible plugin file opt/rh/gcc-toolset-12/root/usr/lib/gcc/ architecture -linux-gnu/12/plugin/gcc-annobin.so expanded from short plugin name gcc-annobin: No such file or directory", "cd /opt/rh/gcc-toolset-12/root/usr/lib/gcc/ architecture -linux-gnu/12/plugin ln -s annobin.so gcc-annobin.so", "#include <signal.h> #include <stdio.h> static const char *strsig (int sig) { return sys_siglist[sig]; } int main (int argc, char *argv[]) { printf (\"%s\\n\", strsig (SIGINT)); return 0; }", "#include <signal.h> #include <stdio.h> #include <string.h> static const char *strsig (int sig) { return strsignal(sig); } int main (int argc, char *argv[]) { printf (\"%s\\n\", strsig (SIGINT)); return 0; }", "#define _GNU_SOURCE #include <signal.h> #include <stdio.h> #include <string.h> static const char *strsig (int sig) { const char *r = sigdescr_np (sig); return r == NULL ? \"Unknown signal\" : r; } int main (int argc, char *argv[]) { printf (\"%s\\n\", strsig (SIGINT)); printf (\"%s\\n\", strsig (-1)); return 0; }", "#include <stdio.h> #include <errno.h> static const char *strerr (int err) { if (err < 0 || err > sys_nerr) return \"Unknown\"; return sys_errlist[err]; } int main (int argc, char *argv[]) { printf (\"%s\\n\", strerr (-1)); printf (\"%s\\n\", strerr (EINVAL)); return 0; }", "#include <stdio.h> #include <errno.h> static const char *strerr (int err) { return strerror (err); } int main (int argc, char *argv[]) { printf (\"%s\\n\", strerr (-1)); printf (\"%s\\n\", strerr (EINVAL)); return 0; }", "#define _GNU_SOURCE #include <stdio.h> #include <errno.h> #include <string.h> static const char *strerr (int err) { const char *r = strerrordesc_np (err); return r == NULL ? \"Unknown error\" : r; } int main (int argc, char *argv[]) { printf (\"%s\\n\", strerr (-1)); printf (\"%s\\n\", strerr (EINVAL)); return 0; }" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html-single/developing_c_and_cpp_applications_in_rhel_9/index
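The command strings above show the individual steps for building and consuming a shared library (compiling with -fPIC, linking with -shared and a soname, and resolving the library at run time with -rpath or LD_LIBRARY_PATH) only in isolation. The short sketch below ties those steps together end to end; it is a minimal illustration under assumed, hypothetical names (hello_lib.c, main.c, libdemo, and a local working directory instead of /usr/lib64), not a procedure taken from the documentation itself.

mkdir libdemo && cd libdemo

# Hypothetical library source: a single add() function.
cat > hello_lib.c << 'EOF'
int add(int a, int b) { return a + b; }
EOF

# Hypothetical consumer program that calls add().
cat > main.c << 'EOF'
#include <stdio.h>
int add(int a, int b);
int main(void) { printf("%d\n", add(2, 3)); return 0; }
EOF

# Build position-independent object code, then the shared object with a soname.
gcc -c -fPIC hello_lib.c
gcc -shared -o libdemo.so.1.0 -Wl,-soname,libdemo.so.1 hello_lib.o
ln -s libdemo.so.1.0 libdemo.so.1
ln -s libdemo.so.1 libdemo.so

# Link the consumer against the library and embed an rpath so the dynamic
# loader can find it at run time without copying anything to /usr/lib64.
gcc main.c -o demo -L. -ldemo -Wl,-rpath,"$PWD"
./demo    # prints 5

Installing libdemo.so.1.0 under /usr/lib64 and recreating the version symlinks there, as the copy and ln commands in the list above do for libfoo, would make the embedded rpath unnecessary because the dynamic loader searches the directories registered via ld.so.conf and ldconfig by default.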
B.4. Sample lvm.conf File
B.4. Sample lvm.conf File The following is a sample lvm.conf configuration file. Your configuration file may differ slightly from this one. Note: You can generate an lvm.conf file with all of the default values set and with the comments included by running the following command:
[ "lvmconfig --type default --withcomments", "This is an example configuration file for the LVM2 system. It contains the default settings that would be used if there was no /etc/lvm/lvm.conf file. # Refer to 'man lvm.conf' for further information including the file layout. # Refer to 'man lvm.conf' for information about how settings configured in this file are combined with built-in values and command line options to arrive at the final values used by LVM. # Refer to 'man lvmconfig' for information about displaying the built-in and configured values used by LVM. # If a default value is set in this file (not commented out), then a new version of LVM using this file will continue using that value, even if the new version of LVM changes the built-in default value. # To put this file in a different directory and override /etc/lvm set the environment variable LVM_SYSTEM_DIR before running the tools. # N.B. Take care that each setting only appears once if uncommenting example settings in this file. Configuration section config. How LVM configuration settings are handled. config { # Configuration option config/checks. # If enabled, any LVM configuration mismatch is reported. # This implies checking that the configuration key is understood by # LVM and that the value of the key is the proper type. If disabled, # any configuration mismatch is ignored and the default value is used # without any warning (a message about the configuration key not being # found is issued in verbose mode only). checks = 1 # Configuration option config/abort_on_errors. # Abort the LVM process if a configuration mismatch is found. abort_on_errors = 0 # Configuration option config/profile_dir. # Directory where LVM looks for configuration profiles. profile_dir = \"/etc/lvm/profile\" } Configuration section devices. How LVM uses block devices. devices { # Configuration option devices/dir. # Directory in which to create volume group device nodes. # Commands also accept this as a prefix on volume group names. # This configuration option is advanced. dir = \"/dev\" # Configuration option devices/scan. # Directories containing device nodes to use with LVM. # This configuration option is advanced. scan = [ \"/dev\" ] # Configuration option devices/obtain_device_list_from_udev. # Obtain the list of available devices from udev. # This avoids opening or using any inapplicable non-block devices or # subdirectories found in the udev directory. Any device node or # symlink not managed by udev in the udev directory is ignored. This # setting applies only to the udev-managed device directory; other # directories will be scanned fully. LVM needs to be compiled with # udev support for this setting to apply. obtain_device_list_from_udev = 1 # Configuration option devices/external_device_info_source. # Select an external device information source. # Some information may already be available in the system and LVM can # use this information to determine the exact type or use of devices it # processes. Using an existing external device information source can # speed up device processing as LVM does not need to run its own native # routines to acquire this information. For example, this information # is used to drive LVM filtering like MD component detection, multipath # component detection, partition detection and others. # # Accepted values: # none # No external device information source is used. # udev # Reuse existing udev database records. Applicable only if LVM is # compiled with udev support. 
# external_device_info_source = \"none\" # Configuration option devices/preferred_names. # Select which path name to display for a block device. # If multiple path names exist for a block device, and LVM needs to # display a name for the device, the path names are matched against # each item in this list of regular expressions. The first match is # used. Try to avoid using undescriptive /dev/dm-N names, if present. # If no preferred name matches, or if preferred_names are not defined, # the following built-in preferences are applied in order until one # produces a preferred name: # Prefer names with path prefixes in the order of: # /dev/mapper, /dev/disk, /dev/dm-*, /dev/block. # Prefer the name with the least number of slashes. # Prefer a name that is a symlink. # Prefer the path with least value in lexicographical order. # # Example # preferred_names = [ \"^/dev/mpath/\", \"^/dev/mapper/mpath\", \"^/dev/[hs]d\" ] # preferred_names = [ \"^/dev/mpath/\", \"^/dev/mapper/mpath\", \"^/dev/[hs]d\" ] # Configuration option devices/filter. # Limit the block devices that are used by LVM commands. # This is a list of regular expressions used to accept or reject block # device path names. Each regex is delimited by a vertical bar '|' # (or any character) and is preceded by 'a' to accept the path, or # by 'r' to reject the path. The first regex in the list to match the # path is used, producing the 'a' or 'r' result for the device. # When multiple path names exist for a block device, if any path name # matches an 'a' pattern before an 'r' pattern, then the device is # accepted. If all the path names match an 'r' pattern first, then the # device is rejected. Unmatching path names do not affect the accept # or reject decision. If no path names for a device match a pattern, # then the device is accepted. Be careful mixing 'a' and 'r' patterns, # as the combination might produce unexpected results (test changes.) # Run vgscan after changing the filter to regenerate the cache. # See the use_lvmetad comment for a special case regarding filters. # # Example # Accept every block device: # filter = [ \"a|.*/|\" ] # Reject the cdrom drive: # filter = [ \"r|/dev/cdrom|\" ] # Work with just loopback devices, e.g. for testing: # filter = [ \"a|loop|\", \"r|.*|\" ] # Accept all loop devices and ide drives except hdc: # filter = [ \"a|loop|\", \"r|/dev/hdc|\", \"a|/dev/ide|\", \"r|.*|\" ] # Use anchors to be very specific: # filter = [ \"a|^/dev/hda8USD|\", \"r|.*/|\" ] # # This configuration option has an automatic default value. # filter = [ \"a|.*/|\" ] # Configuration option devices/global_filter. # Limit the block devices that are used by LVM system components. # Because devices/filter may be overridden from the command line, it is # not suitable for system-wide device filtering, e.g. udev and lvmetad. # Use global_filter to hide devices from these LVM system components. # The syntax is the same as devices/filter. Devices rejected by # global_filter are not opened by LVM. # This configuration option has an automatic default value. # global_filter = [ \"a|.*/|\" ] # Configuration option devices/cache_dir. # Directory in which to store the device cache file. # The results of filtering are cached on disk to avoid rescanning dud # devices (which can take a very long time). By default this cache is # stored in a file named .cache. It is safe to delete this file; the # tools regenerate it. 
If obtain_device_list_from_udev is enabled, the # list of devices is obtained from udev and any existing .cache file # is removed. cache_dir = \"/etc/lvm/cache\" # Configuration option devices/cache_file_prefix. # A prefix used before the .cache file name. See devices/cache_dir. cache_file_prefix = \"\" # Configuration option devices/write_cache_state. # Enable/disable writing the cache file. See devices/cache_dir. write_cache_state = 1 # Configuration option devices/types. # List of additional acceptable block device types. # These are of device type names from /proc/devices, followed by the # maximum number of partitions. # # Example # types = [ \"fd\", 16 ] # # This configuration option is advanced. # This configuration option does not have a default value defined. # Configuration option devices/sysfs_scan. # Restrict device scanning to block devices appearing in sysfs. # This is a quick way of filtering out block devices that are not # present on the system. sysfs must be part of the kernel and mounted.) sysfs_scan = 1 # Configuration option devices/multipath_component_detection. # Ignore devices that are components of DM multipath devices. multipath_component_detection = 1 # Configuration option devices/md_component_detection. # Ignore devices that are components of software RAID (md) devices. md_component_detection = 1 # Configuration option devices/fw_raid_component_detection. # Ignore devices that are components of firmware RAID devices. # LVM must use an external_device_info_source other than none for this # detection to execute. fw_raid_component_detection = 0 # Configuration option devices/md_chunk_alignment. # Align PV data blocks with md device's stripe-width. # This applies if a PV is placed directly on an md device. md_chunk_alignment = 1 # Configuration option devices/default_data_alignment. # Default alignment of the start of a PV data area in MB. # If set to 0, a value of 64KiB will be used. # Set to 1 for 1MiB, 2 for 2MiB, etc. # This configuration option has an automatic default value. # default_data_alignment = 1 # Configuration option devices/data_alignment_detection. # Detect PV data alignment based on sysfs device information. # The start of a PV data area will be a multiple of minimum_io_size or # optimal_io_size exposed in sysfs. minimum_io_size is the smallest # request the device can perform without incurring a read-modify-write # penalty, e.g. MD chunk size. optimal_io_size is the device's # preferred unit of receiving I/O, e.g. MD stripe width. # minimum_io_size is used if optimal_io_size is undefined (0). # If md_chunk_alignment is enabled, that detects the optimal_io_size. # This setting takes precedence over md_chunk_alignment. data_alignment_detection = 1 # Configuration option devices/data_alignment. # Alignment of the start of a PV data area in KiB. # If a PV is placed directly on an md device and md_chunk_alignment or # data_alignment_detection are enabled, then this setting is ignored. # Otherwise, md_chunk_alignment and data_alignment_detection are # disabled if this is set. Set to 0 to use the default alignment or the # page size, if larger. data_alignment = 0 # Configuration option devices/data_alignment_offset_detection. # Detect PV data alignment offset based on sysfs device information. # The start of a PV aligned data area will be shifted by the # alignment_offset exposed in sysfs. This offset is often 0, but may # be non-zero. 
Certain 4KiB sector drives that compensate for windows # partitioning will have an alignment_offset of 3584 bytes (sector 7 # is the lowest aligned logical block, the 4KiB sectors start at # LBA -1, and consequently sector 63 is aligned on a 4KiB boundary). # pvcreate --dataalignmentoffset will skip this detection. data_alignment_offset_detection = 1 # Configuration option devices/ignore_suspended_devices. # Ignore DM devices that have I/O suspended while scanning devices. # Otherwise, LVM waits for a suspended device to become accessible. # This should only be needed in recovery situations. ignore_suspended_devices = 0 # Configuration option devices/ignore_lvm_mirrors. # Do not scan 'mirror' LVs to avoid possible deadlocks. # This avoids possible deadlocks when using the 'mirror' segment type. # This setting determines whether LVs using the 'mirror' segment type # are scanned for LVM labels. This affects the ability of mirrors to # be used as physical volumes. If this setting is enabled, it is # impossible to create VGs on top of mirror LVs, i.e. to stack VGs on # mirror LVs. If this setting is disabled, allowing mirror LVs to be # scanned, it may cause LVM processes and I/O to the mirror to become # blocked. This is due to the way that the mirror segment type handles # failures. In order for the hang to occur, an LVM command must be run # just after a failure and before the automatic LVM repair process # takes place, or there must be failures in multiple mirrors in the # same VG at the same time with write failures occurring moments before # a scan of the mirror's labels. The 'mirror' scanning problems do not # apply to LVM RAID types like 'raid1' which handle failures in a # different way, making them a better choice for VG stacking. ignore_lvm_mirrors = 1 # Configuration option devices/disable_after_error_count. # Number of I/O errors after which a device is skipped. # During each LVM operation, errors received from each device are # counted. If the counter of a device exceeds the limit set here, # no further I/O is sent to that device for the remainder of the # operation. Setting this to 0 disables the counters altogether. disable_after_error_count = 0 # Configuration option devices/require_restorefile_with_uuid. # Allow use of pvcreate --uuid without requiring --restorefile. require_restorefile_with_uuid = 1 # Configuration option devices/pv_min_size. # Minimum size in KiB of block devices which can be used as PVs. # In a clustered environment all nodes must use the same value. # Any value smaller than 512KiB is ignored. The previous built-in # value was 512. pv_min_size = 2048 # Configuration option devices/issue_discards. # Issue discards to PVs that are no longer used by an LV. # Discards are sent to an LV's underlying physical volumes when the LV # is no longer using the physical volumes' space, e.g. lvremove, # lvreduce. Discards inform the storage that a region is no longer # used. Storage that supports discards advertise the protocol-specific # way discards should be issued by the kernel (TRIM, UNMAP, or # WRITE SAME with UNMAP bit set). Not all storage will support or # benefit from discards, but SSDs and thinly provisioned LUNs # generally do. If enabled, discards will only be issued if both the # storage and kernel provide support. issue_discards = 0 # Configuration option devices/allow_changes_with_duplicate_pvs. # Allow VG modification while a PV appears on multiple devices. 
# When a PV appears on multiple devices, LVM attempts to choose the # best device to use for the PV. If the devices represent the same # underlying storage, the choice has minimal consequence. If the # devices represent different underlying storage, the wrong choice # can result in data loss if the VG is modified. Disabling this # setting is the safest option because it prevents modifying a VG # or activating LVs in it while a PV appears on multiple devices. # Enabling this setting allows the VG to be used as usual even with # uncertain devices. allow_changes_with_duplicate_pvs = 0 } Configuration section allocation. How LVM selects space and applies properties to LVs. allocation { # Configuration option allocation/cling_tag_list. # Advise LVM which PVs to use when searching for new space. # When searching for free space to extend an LV, the 'cling' allocation # policy will choose space on the same PVs as the last segment of the # existing LV. If there is insufficient space and a list of tags is # defined here, it will check whether any of them are attached to the # PVs concerned and then seek to match those PV tags between existing # extents and new extents. # # Example # Use the special tag \"@*\" as a wildcard to match any PV tag: # cling_tag_list = [ \"@*\" ] # LVs are mirrored between two sites within a single VG, and # PVs are tagged with either @site1 or @site2 to indicate where # they are situated: # cling_tag_list = [ \"@site1\", \"@site2\" ] # # This configuration option does not have a default value defined. # Configuration option allocation/maximise_cling. # Use a previous allocation algorithm. # Changes made in version 2.02.85 extended the reach of the 'cling' # policies to detect more situations where data can be grouped onto # the same disks. This setting can be used to disable the changes # and revert to the previous algorithm. maximise_cling = 1 # Configuration option allocation/use_blkid_wiping. # Use blkid to detect existing signatures on new PVs and LVs. # The blkid library can detect more signatures than the native LVM # detection code, but may take longer. LVM needs to be compiled with # blkid wiping support for this setting to apply. LVM native detection # code is currently able to recognize: MD device signatures, # swap signature, and LUKS signatures. To see the list of signatures # recognized by blkid, check the output of the 'blkid -k' command. use_blkid_wiping = 1 # Configuration option allocation/wipe_signatures_when_zeroing_new_lvs. # Look for and erase any signatures while zeroing a new LV. # The --wipesignatures option overrides this setting. # Zeroing is controlled by the -Z/--zero option, and if not specified, # zeroing is used by default if possible. Zeroing simply overwrites the # first 4KiB of a new LV with zeroes and does no signature detection or # wiping. Signature wiping goes beyond zeroing and detects exact types # and positions of signatures within the whole LV. It provides a # cleaner LV after creation as all known signatures are wiped. The LV # is not claimed incorrectly by other tools because of old signatures # from previous use. The number of signatures that LVM can detect # depends on the detection code that is selected (see # use_blkid_wiping.) Wiping each detected signature must be confirmed. # When this setting is disabled, signatures on new LVs are not detected # or erased unless the --wipesignatures option is used directly. wipe_signatures_when_zeroing_new_lvs = 1 # Configuration option allocation/mirror_logs_require_separate_pvs. 
# Mirror logs and images will always use different PVs. # The default setting changed in version 2.02.85. mirror_logs_require_separate_pvs = 0 # Configuration option allocation/raid_stripe_all_devices. # Stripe across all PVs when RAID stripes are not specified. # If enabled, all PVs in the VG or on the command line are used for raid0/4/5/6/10 # when the command does not specify the number of stripes to use. # This was the default behaviour until release 2.02.162. # This configuration option has an automatic default value. # raid_stripe_all_devices = 0 # Configuration option allocation/cache_pool_metadata_require_separate_pvs. # Cache pool metadata and data will always use different PVs. cache_pool_metadata_require_separate_pvs = 0 # Configuration option allocation/cache_mode. # The default cache mode used for new cache. # # Accepted values: # writethrough # Data blocks are immediately written from the cache to disk. # writeback # Data blocks are written from the cache back to disk after some # delay to improve performance. # # This setting replaces allocation/cache_pool_cachemode. # This configuration option has an automatic default value. # cache_mode = \"writethrough\" # Configuration option allocation/cache_policy. # The default cache policy used for new cache volume. # Since kernel 4.2 the default policy is smq (Stochastic multique), # otherwise the older mq (Multiqueue) policy is selected. # This configuration option does not have a default value defined. # Configuration section allocation/cache_settings. # Settings for the cache policy. # See documentation for individual cache policies for more info. # This configuration section has an automatic default value. # cache_settings { # } # Configuration option allocation/cache_pool_chunk_size. # The minimal chunk size in KiB for cache pool volumes. # Using a chunk_size that is too large can result in wasteful use of # the cache, where small reads and writes can cause large sections of # an LV to be mapped into the cache. However, choosing a chunk_size # that is too small can result in more overhead trying to manage the # numerous chunks that become mapped into the cache. The former is # more of a problem than the latter in most cases, so the default is # on the smaller end of the spectrum. Supported values range from # 32KiB to 1GiB in multiples of 32. # This configuration option does not have a default value defined. # Configuration option allocation/thin_pool_metadata_require_separate_pvs. # Thin pool metdata and data will always use different PVs. thin_pool_metadata_require_separate_pvs = 0 # Configuration option allocation/thin_pool_zero. # Thin pool data chunks are zeroed before they are first used. # Zeroing with a larger thin pool chunk size reduces performance. # This configuration option has an automatic default value. # thin_pool_zero = 1 # Configuration option allocation/thin_pool_discards. # The discards behaviour of thin pool volumes. # # Accepted values: # ignore # nopassdown # passdown # # This configuration option has an automatic default value. # thin_pool_discards = \"passdown\" # Configuration option allocation/thin_pool_chunk_size_policy. # The chunk size calculation policy for thin pool volumes. # # Accepted values: # generic # If thin_pool_chunk_size is defined, use it. Otherwise, calculate # the chunk size based on estimation and device hints exposed in # sysfs - the minimum_io_size. The chunk size is always at least # 64KiB. # performance # If thin_pool_chunk_size is defined, use it. 
Otherwise, calculate # the chunk size for performance based on device hints exposed in # sysfs - the optimal_io_size. The chunk size is always at least # 512KiB. # # This configuration option has an automatic default value. # thin_pool_chunk_size_policy = \"generic\" # Configuration option allocation/thin_pool_chunk_size. # The minimal chunk size in KiB for thin pool volumes. # Larger chunk sizes may improve performance for plain thin volumes, # however using them for snapshot volumes is less efficient, as it # consumes more space and takes extra time for copying. When unset, # lvm tries to estimate chunk size starting from 64KiB. Supported # values are in the range 64KiB to 1GiB. # This configuration option does not have a default value defined. # Configuration option allocation/physical_extent_size. # Default physical extent size in KiB to use for new VGs. # This configuration option has an automatic default value. # physical_extent_size = 4096 } Configuration section log. How LVM log information is reported. log { # Configuration option log/report_command_log. # Enable or disable LVM log reporting. # If enabled, LVM will collect a log of operations, messages, # per-object return codes with object identification and associated # error numbers (errnos) during LVM command processing. Then the # log is either reported solely or in addition to any existing # reports, depending on LVM command used. If it is a reporting command # (e.g. pvs, vgs, lvs, lvm fullreport), then the log is reported in # addition to any existing reports. Otherwise, there's only log report # on output. For all applicable LVM commands, you can request that # the output has only log report by using --logonly command line # option. Use log/command_log_cols and log/command_log_sort settings # to define fields to display and sort fields for the log report. # You can also use log/command_log_selection to define selection # criteria used each time the log is reported. # This configuration option has an automatic default value. # report_command_log = 0 # Configuration option log/command_log_sort. # List of columns to sort by when reporting command log. # See <lvm command> --logonly --configreport log -o help # for the list of possible fields. # This configuration option has an automatic default value. # command_log_sort = \"log_seq_num\" # Configuration option log/command_log_cols. # List of columns to report when reporting command log. # See <lvm command> --logonly --configreport log -o help # for the list of possible fields. # This configuration option has an automatic default value. # command_log_cols = \"log_seq_num,log_type,log_context,log_object_type,log_object_name,log_object_id,log_object_group,log_object_group_id,log_message,log_errno,log_ret_code\" # Configuration option log/command_log_selection. # Selection criteria used when reporting command log. # You can define selection criteria that are applied each # time log is reported. This way, it is possible to control the # amount of log that is displayed on output and you can select # only parts of the log that are important for you. To define # selection criteria, use fields from log report. See also # <lvm command> --logonly --configreport log -S help for the # list of possible fields and selection operators. You can also # define selection criteria for log report on command line directly # using <lvm command> --configreport log -S <selection criteria> # which has precedence over log/command_log_selection setting. 
# For more information about selection criteria in general, see # lvm(8) man page. # This configuration option has an automatic default value. # command_log_selection = \"!(log_type=status && message=success)\" # Configuration option log/verbose. # Controls the messages sent to stdout or stderr. verbose = 0 # Configuration option log/silent. # Suppress all non-essential messages from stdout. # This has the same effect as -qq. When enabled, the following commands # still produce output: dumpconfig, lvdisplay, lvmdiskscan, lvs, pvck, # pvdisplay, pvs, version, vgcfgrestore -l, vgdisplay, vgs. # Non-essential messages are shifted from log level 4 to log level 5 # for syslog and lvm2_log_fn purposes. # Any 'yes' or 'no' questions not overridden by other arguments are # suppressed and default to 'no'. silent = 0 # Configuration option log/syslog. # Send log messages through syslog. syslog = 1 # Configuration option log/file. # Write error and debug log messages to a file specified here. # This configuration option does not have a default value defined. # Configuration option log/overwrite. # Overwrite the log file each time the program is run. overwrite = 0 # Configuration option log/level. # The level of log messages that are sent to the log file or syslog. # There are 6 syslog-like log levels currently in use: 2 to 7 inclusive. # 7 is the most verbose (LOG_DEBUG). level = 0 # Configuration option log/indent. # Indent messages according to their severity. indent = 1 # Configuration option log/command_names. # Display the command name on each line of output. command_names = 0 # Configuration option log/prefix. # A prefix to use before the log message text. # (After the command name, if selected). # Two spaces allows you to see/grep the severity of each message. # To make the messages look similar to the original LVM tools use: # indent = 0, command_names = 1, prefix = \" -- \" prefix = \" \" # Configuration option log/activation. # Log messages during activation. # Don't use this in low memory situations (can deadlock). activation = 0 # Configuration option log/debug_classes. # Select log messages by class. # Some debugging messages are assigned to a class and only appear in # debug output if the class is listed here. Classes currently # available: memory, devices, activation, allocation, lvmetad, # metadata, cache, locking, lvmpolld. Use \"all\" to see everything. debug_classes = [ \"memory\", \"devices\", \"activation\", \"allocation\", \"lvmetad\", \"metadata\", \"cache\", \"locking\", \"lvmpolld\", \"dbus\" ] } Configuration section backup. How LVM metadata is backed up and archived. In LVM, a 'backup' is a copy of the metadata for the current system, and an 'archive' contains old metadata configurations. They are stored in a human readable text format. backup { # Configuration option backup/backup. # Maintain a backup of the current metadata configuration. # Think very hard before turning this off! backup = 1 # Configuration option backup/backup_dir. # Location of the metadata backup files. # Remember to back up this directory regularly! backup_dir = \"/etc/lvm/backup\" # Configuration option backup/archive. # Maintain an archive of old metadata configurations. # Think very hard before turning this off. archive = 1 # Configuration option backup/archive_dir. # Location of the metdata archive files. # Remember to back up this directory regularly! archive_dir = \"/etc/lvm/archive\" # Configuration option backup/retain_min. # Minimum number of archives to keep. 
retain_min = 10 # Configuration option backup/retain_days. # Minimum number of days to keep archive files. retain_days = 30 } Configuration section shell. Settings for running LVM in shell (readline) mode. shell { # Configuration option shell/history_size. # Number of lines of history to store in ~/.lvm_history. history_size = 100 } Configuration section global. Miscellaneous global LVM settings. global { # Configuration option global/umask. # The file creation mask for any files and directories created. # Interpreted as octal if the first digit is zero. umask = 077 # Configuration option global/test. # No on-disk metadata changes will be made in test mode. # Equivalent to having the -t option on every command. test = 0 # Configuration option global/units. # Default value for --units argument. units = \"h\" # Configuration option global/si_unit_consistency. # Distinguish between powers of 1024 and 1000 bytes. # The LVM commands distinguish between powers of 1024 bytes, # e.g. KiB, MiB, GiB, and powers of 1000 bytes, e.g. KB, MB, GB. # If scripts depend on the old behaviour, disable this setting # temporarily until they are updated. si_unit_consistency = 1 # Configuration option global/suffix. # Display unit suffix for sizes. # This setting has no effect if the units are in human-readable form # (global/units = \"h\") in which case the suffix is always displayed. suffix = 1 # Configuration option global/activation. # Enable/disable communication with the kernel device-mapper. # Disable to use the tools to manipulate LVM metadata without # activating any logical volumes. If the device-mapper driver # is not present in the kernel, disabling this should suppress # the error messages. activation = 1 # Configuration option global/fallback_to_lvm1. # Try running LVM1 tools if LVM cannot communicate with DM. # This option only applies to 2.4 kernels and is provided to help # switch between device-mapper kernels and LVM1 kernels. The LVM1 # tools need to be installed with .lvm1 suffices, e.g. vgscan.lvm1. # They will stop working once the lvm2 on-disk metadata format is used. # This configuration option has an automatic default value. # fallback_to_lvm1 = 1 # Configuration option global/format. # The default metadata format that commands should use. # The -M 1|2 option overrides this setting. # # Accepted values: # lvm1 # lvm2 # # This configuration option has an automatic default value. # format = \"lvm2\" # Configuration option global/format_libraries. # Shared libraries that process different metadata formats. # If support for LVM1 metadata was compiled as a shared library use # format_libraries = \"liblvm2format1.so\" # This configuration option does not have a default value defined. # Configuration option global/segment_libraries. # This configuration option does not have a default value defined. # Configuration option global/proc. # Location of proc filesystem. # This configuration option is advanced. proc = \"/proc\" # Configuration option global/etc. # Location of /etc system configuration directory. etc = \"/etc\" # Configuration option global/locking_type. # Type of locking to use. # # Accepted values: # 0 # Turns off locking. Warning: this risks metadata corruption if # commands run concurrently. # 1 # LVM uses local file-based locking, the standard mode. # 2 # LVM uses the external shared library locking_library. # 3 # LVM uses built-in clustered locking with clvmd. # This is incompatible with lvmetad. If use_lvmetad is enabled, # LVM prints a warning and disables lvmetad use. 
# 4 # LVM uses read-only locking which forbids any operations that # might change metadata. # 5 # Offers dummy locking for tools that do not need any locks. # You should not need to set this directly; the tools will select # when to use it instead of the configured locking_type. # Do not use lvmetad or the kernel device-mapper driver with this # locking type. It is used by the --readonly option that offers # read-only access to Volume Group metadata that cannot be locked # safely because it belongs to an inaccessible domain and might be # in use, for example a virtual machine image or a disk that is # shared by a clustered machine. # locking_type = 3 # Configuration option global/wait_for_locks. # When disabled, fail if a lock request would block. wait_for_locks = 1 # Configuration option global/fallback_to_clustered_locking. # Attempt to use built-in cluster locking if locking_type 2 fails. # If using external locking (type 2) and initialisation fails, with # this enabled, an attempt will be made to use the built-in clustered # locking. Disable this if using a customised locking_library. fallback_to_clustered_locking = 1 # Configuration option global/fallback_to_local_locking. # Use locking_type 1 (local) if locking_type 2 or 3 fail. # If an attempt to initialise type 2 or type 3 locking failed, perhaps # because cluster components such as clvmd are not running, with this # enabled, an attempt will be made to use local file-based locking # (type 1). If this succeeds, only commands against local VGs will # proceed. VGs marked as clustered will be ignored. fallback_to_local_locking = 1 # Configuration option global/locking_dir. # Directory to use for LVM command file locks. # Local non-LV directory that holds file-based locks while commands are # in progress. A directory like /tmp that may get wiped on reboot is OK. locking_dir = \"/run/lock/lvm\" # Configuration option global/prioritise_write_locks. # Allow quicker VG write access during high volume read access. # When there are competing read-only and read-write access requests for # a volume group's metadata, instead of always granting the read-only # requests immediately, delay them to allow the read-write requests to # be serviced. Without this setting, write access may be stalled by a # high volume of read-only requests. This option only affects # locking_type 1 viz. local file-based locking. prioritise_write_locks = 1 # Configuration option global/library_dir. # Search this directory first for shared libraries. # This configuration option does not have a default value defined. # Configuration option global/locking_library. # The external locking library to use for locking_type 2. # This configuration option has an automatic default value. # locking_library = \"liblvm2clusterlock.so\" # Configuration option global/abort_on_internal_errors. # Abort a command that encounters an internal error. # Treat any internal errors as fatal errors, aborting the process that # encountered the internal error. Please only enable for debugging. abort_on_internal_errors = 0 # Configuration option global/detect_internal_vg_cache_corruption. # Internal verification of VG structures. # Check if CRC matches when a parsed VG is used multiple times. This # is useful to catch unexpected changes to cached VG structures. # Please only enable for debugging. detect_internal_vg_cache_corruption = 0 # Configuration option global/metadata_read_only. # No operations that change on-disk metadata are permitted. 
# Additionally, read-only commands that encounter metadata in need of # repair will still be allowed to proceed exactly as if the repair had # been performed (except for the unchanged vg_seqno). Inappropriate # use could mess up your system, so seek advice first! metadata_read_only = 0 # Configuration option global/mirror_segtype_default. # The segment type used by the short mirroring option -m. # The --type mirror|raid1 option overrides this setting. # # Accepted values: # mirror # The original RAID1 implementation from LVM/DM. It is # characterized by a flexible log solution (core, disk, mirrored), # and by the necessity to block I/O while handling a failure. # There is an inherent race in the dmeventd failure handling logic # with snapshots of devices using this type of RAID1 that in the # worst case could cause a deadlock. (Also see # devices/ignore_lvm_mirrors.) # raid1 # This is a newer RAID1 implementation using the MD RAID1 # personality through device-mapper. It is characterized by a # lack of log options. (A log is always allocated for every # device and they are placed on the same device as the image, # so no separate devices are required.) This mirror # implementation does not require I/O to be blocked while # handling a failure. This mirror implementation is not # cluster-aware and cannot be used in a shared (active/active) # fashion in a cluster. # mirror_segtype_default = \"raid1\" # Configuration option global/raid10_segtype_default. # The segment type used by the -i -m combination. # The --type raid10|mirror option overrides this setting. # The --stripes/-i and --mirrors/-m options can both be specified # during the creation of a logical volume to use both striping and # mirroring for the LV. There are two different implementations. # # Accepted values: # raid10 # LVM uses MD's RAID10 personality through DM. This is the # preferred option. # mirror # LVM layers the 'mirror' and 'stripe' segment types. The layering # is done by creating a mirror LV on top of striped sub-LVs, # effectively creating a RAID 0+1 array. The layering is suboptimal # in terms of providing redundancy and performance. # raid10_segtype_default = \"raid10\" # Configuration option global/sparse_segtype_default. # The segment type used by the -V -L combination. # The --type snapshot|thin option overrides this setting. # The combination of -V and -L options creates a sparse LV. There are # two different implementations. # # Accepted values: # snapshot # The original snapshot implementation from LVM/DM. It uses an old # snapshot that mixes data and metadata within a single COW # storage volume and performs poorly when the size of stored data # passes hundreds of MB. # thin # A newer implementation that uses thin provisioning. It has a # bigger minimal chunk size (64KiB) and uses a separate volume for # metadata. It has better performance, especially when more data # is used. It also supports full snapshots. # sparse_segtype_default = \"thin\" # Configuration option global/lvdisplay_shows_full_device_path. # Enable this to reinstate the previous lvdisplay name format. # The default format for displaying LV names in lvdisplay was changed # in version 2.02.89 to show the LV name and path separately. # Previously this was always shown as /dev/vgname/lvname even when that # was never a valid path in the /dev filesystem. # This configuration option has an automatic default value. # lvdisplay_shows_full_device_path = 0 # Configuration option global/use_lvmetad. 
# Use lvmetad to cache metadata and reduce disk scanning. # When enabled (and running), lvmetad provides LVM commands with VG # metadata and PV state. LVM commands then avoid reading this # information from disks which can be slow. When disabled (or not # running), LVM commands fall back to scanning disks to obtain VG # metadata. lvmetad is kept updated via udev rules which must be set # up for LVM to work correctly. (The udev rules should be installed # by default.) Without a proper udev setup, changes in the system's # block device configuration will be unknown to LVM, and ignored # until a manual 'pvscan --cache' is run. If lvmetad was running # while use_lvmetad was disabled, it must be stopped, use_lvmetad # enabled, and then started. When using lvmetad, LV activation is # switched to an automatic, event-based mode. In this mode, LVs are # activated based on incoming udev events that inform lvmetad when # PVs appear on the system. When a VG is complete (all PVs present), # it is auto-activated. The auto_activation_volume_list setting # controls which LVs are auto-activated (all by default.) # When lvmetad is updated (automatically by udev events, or directly # by pvscan --cache), devices/filter is ignored and all devices are # scanned by default. lvmetad always keeps unfiltered information # which is provided to LVM commands. Each LVM command then filters # based on devices/filter. This does not apply to other, non-regexp, # filtering settings: component filters such as multipath and MD # are checked during pvscan --cache. To filter a device and prevent # scanning from the LVM system entirely, including lvmetad, use # devices/global_filter. use_lvmetad = 0 # Configuration option global/lvmetad_update_wait_time. # The number of seconds a command will wait for lvmetad update to finish. # After waiting for this period, a command will not use lvmetad, and # will revert to disk scanning. # This configuration option has an automatic default value. # lvmetad_update_wait_time = 10 # Configuration option global/use_lvmlockd. # Use lvmlockd for locking among hosts using LVM on shared storage. # Applicable only if LVM is compiled with lockd support in which # case there is also lvmlockd(8) man page available for more # information. use_lvmlockd = 0 # Configuration option global/lvmlockd_lock_retries. # Retry lvmlockd lock requests this many times. # Applicable only if LVM is compiled with lockd support # This configuration option has an automatic default value. # lvmlockd_lock_retries = 3 # Configuration option global/sanlock_lv_extend. # Size in MiB to extend the internal LV holding sanlock locks. # The internal LV holds locks for each LV in the VG, and after enough # LVs have been created, the internal LV needs to be extended. lvcreate # will automatically extend the internal LV when needed by the amount # specified here. Setting this to 0 disables the automatic extension # and can cause lvcreate to fail. Applicable only if LVM is compiled # with lockd support # This configuration option has an automatic default value. # sanlock_lv_extend = 256 # Configuration option global/thin_check_executable. # The full path to the thin_check command. # LVM uses this command to check that a thin metadata device is in a # usable state. When a thin pool is activated and after it is # deactivated, this command is run. Activation will only proceed if # the command has an exit status of 0. Set to \"\" to skip this check. # (Not recommended.) Also see thin_check_options. 
# (See package device-mapper-persistent-data or thin-provisioning-tools) # This configuration option has an automatic default value. # thin_check_executable = \"/usr/sbin/thin_check\" # Configuration option global/thin_dump_executable. # The full path to the thin_dump command. # LVM uses this command to dump thin pool metadata. # (See package device-mapper-persistent-data or thin-provisioning-tools) # This configuration option has an automatic default value. # thin_dump_executable = \"/usr/sbin/thin_dump\" # Configuration option global/thin_repair_executable. # The full path to the thin_repair command. # LVM uses this command to repair a thin metadata device if it is in # an unusable state. Also see thin_repair_options. # (See package device-mapper-persistent-data or thin-provisioning-tools) # This configuration option has an automatic default value. # thin_repair_executable = \"/usr/sbin/thin_repair\" # Configuration option global/thin_check_options. # List of options passed to the thin_check command. # With thin_check version 2.1 or newer you can add the option # --ignore-non-fatal-errors to let it pass through ignorable errors # and fix them later. With thin_check version 3.2 or newer you should # include the option --clear-needs-check-flag. # This configuration option has an automatic default value. # thin_check_options = [ \"-q\", \"--clear-needs-check-flag\" ] # Configuration option global/thin_repair_options. # List of options passed to the thin_repair command. # This configuration option has an automatic default value. # thin_repair_options = [ \"\" ] # Configuration option global/thin_disabled_features. # Features to not use in the thin driver. # This can be helpful for testing, or to avoid using a feature that is # causing problems. Features include: block_size, discards, # discards_non_power_2, external_origin, metadata_resize, # external_origin_extend, error_if_no_space. # # Example # thin_disabled_features = [ \"discards\", \"block_size\" ] # # This configuration option does not have a default value defined. # Configuration option global/cache_disabled_features. # Features to not use in the cache driver. # This can be helpful for testing, or to avoid using a feature that is # causing problems. Features include: policy_mq, policy_smq. # # Example # cache_disabled_features = [ \"policy_smq\" ] # # This configuration option does not have a default value defined. # Configuration option global/cache_check_executable. # The full path to the cache_check command. # LVM uses this command to check that a cache metadata device is in a # usable state. When a cached LV is activated and after it is # deactivated, this command is run. Activation will only proceed if the # command has an exit status of 0. Set to \"\" to skip this check. # (Not recommended.) Also see cache_check_options. # (See package device-mapper-persistent-data or thin-provisioning-tools) # This configuration option has an automatic default value. # cache_check_executable = \"/usr/sbin/cache_check\" # Configuration option global/cache_dump_executable. # The full path to the cache_dump command. # LVM uses this command to dump cache pool metadata. # (See package device-mapper-persistent-data or thin-provisioning-tools) # This configuration option has an automatic default value. # cache_dump_executable = \"/usr/sbin/cache_dump\" # Configuration option global/cache_repair_executable. # The full path to the cache_repair command. # LVM uses this command to repair a cache metadata device if it is in # an unusable state. 
Also see cache_repair_options. # (See package device-mapper-persistent-data or thin-provisioning-tools) # This configuration option has an automatic default value. # cache_repair_executable = \"/usr/sbin/cache_repair\" # Configuration option global/cache_check_options. # List of options passed to the cache_check command. # With cache_check version 5.0 or newer you should include the option # --clear-needs-check-flag. # This configuration option has an automatic default value. # cache_check_options = [ \"-q\", \"--clear-needs-check-flag\" ] # Configuration option global/cache_repair_options. # List of options passed to the cache_repair command. # This configuration option has an automatic default value. # cache_repair_options = [ \"\" ] # Configuration option global/system_id_source. # The method LVM uses to set the local system ID. # Volume Groups can also be given a system ID (by vgcreate, vgchange, # or vgimport.) A VG on shared storage devices is accessible only to # the host with a matching system ID. See 'man lvmsystemid' for # information on limitations and correct usage. # # Accepted values: # none # The host has no system ID. # lvmlocal # Obtain the system ID from the system_id setting in the 'local' # section of an lvm configuration file, e.g. lvmlocal.conf. # uname # Set the system ID from the hostname (uname) of the system. # System IDs beginning localhost are not permitted. # machineid # Use the contents of the machine-id file to set the system ID. # Some systems create this file at installation time. # See 'man machine-id' and global/etc. # file # Use the contents of another file (system_id_file) to set the # system ID. # system_id_source = \"none\" # Configuration option global/system_id_file. # The full path to the file containing a system ID. # This is used when system_id_source is set to 'file'. # Comments starting with the character # are ignored. # This configuration option does not have a default value defined. # Configuration option global/use_lvmpolld. # Use lvmpolld to supervise long running LVM commands. # When enabled, control of long running LVM commands is transferred # from the original LVM command to the lvmpolld daemon. This allows # the operation to continue independent of the original LVM command. # After lvmpolld takes over, the LVM command displays the progress # of the ongoing operation. lvmpolld itself runs LVM commands to # manage the progress of ongoing operations. lvmpolld can be used as # a native systemd service, which allows it to be started on demand, # and to use its own control group. When this option is disabled, LVM # commands will supervise long running operations by forking themselves. # Applicable only if LVM is compiled with lvmpolld support. use_lvmpolld = 1 # Configuration option global/notify_dbus. # Enable D-Bus notification from LVM commands. # When enabled, an LVM command that changes PVs, changes VG metadata, # or changes the activation state of an LV will send a notification. notify_dbus = 1 } Configuration section activation. activation { # Configuration option activation/checks. # Perform internal checks of libdevmapper operations. # Useful for debugging problems with activation. Some of the checks may # be expensive, so it's best to use this only when there seems to be a # problem. checks = 0 # Configuration option activation/udev_sync. # Use udev notifications to synchronize udev and LVM. # The --nodevsync option overrides this setting. 
# When disabled, LVM commands will not wait for notifications from # udev, but continue irrespective of any possible udev processing in # the background. Only use this if udev is not running or has rules # that ignore the devices LVM creates. If enabled when udev is not # running, and LVM processes are waiting for udev, run the command # 'dmsetup udevcomplete_all' to wake them up. udev_sync = 1 # Configuration option activation/udev_rules. # Use udev rules to manage LV device nodes and symlinks. # When disabled, LVM will manage the device nodes and symlinks for # active LVs itself. Manual intervention may be required if this # setting is changed while LVs are active. udev_rules = 1 # Configuration option activation/verify_udev_operations. # Use extra checks in LVM to verify udev operations. # This enables additional checks (and if necessary, repairs) on entries # in the device directory after udev has completed processing its # events. Useful for diagnosing problems with LVM/udev interactions. verify_udev_operations = 0 # Configuration option activation/retry_deactivation. # Retry failed LV deactivation. # If LV deactivation fails, LVM will retry for a few seconds before # failing. This may happen because a process run from a quick udev rule # temporarily opened the device. retry_deactivation = 1 # Configuration option activation/missing_stripe_filler. # Method to fill missing stripes when activating an incomplete LV. # Using 'error' will make inaccessible parts of the device return I/O # errors on access. You can instead use a device path, in which case, # that device will be used in place of missing stripes. Using anything # other than 'error' with mirrored or snapshotted volumes is likely to # result in data corruption. # This configuration option is advanced. missing_stripe_filler = \"error\" # Configuration option activation/use_linear_target. # Use the linear target to optimize single stripe LVs. # When disabled, the striped target is used. The linear target is an # optimised version of the striped target that only handles a single # stripe. use_linear_target = 1 # Configuration option activation/reserved_stack. # Stack size in KiB to reserve for use while devices are suspended. # Insufficent reserve risks I/O deadlock during device suspension. reserved_stack = 64 # Configuration option activation/reserved_memory. # Memory size in KiB to reserve for use while devices are suspended. # Insufficent reserve risks I/O deadlock during device suspension. reserved_memory = 8192 # Configuration option activation/process_priority. # Nice value used while devices are suspended. # Use a high priority so that LVs are suspended # for the shortest possible time. process_priority = -18 # Configuration option activation/volume_list. # Only LVs selected by this list are activated. # If this list is defined, an LV is only activated if it matches an # entry in this list. If this list is undefined, it imposes no limits # on LV activation (all are allowed). # # Accepted values: # vgname # The VG name is matched exactly and selects all LVs in the VG. # vgname/lvname # The VG name and LV name are matched exactly and selects the LV. # @tag # Selects an LV if the specified tag matches a tag set on the LV # or VG. # @* # Selects an LV if a tag defined on the host is also set on the LV # or VG. See tags/hosttags. If any host tags exist but volume_list # is not defined, a default single-entry list containing '@*' # is assumed. 
# # Example # volume_list = [ \"vg1\", \"vg2/lvol1\", \"@tag1\", \"@*\" ] # # This configuration option does not have a default value defined. # Configuration option activation/auto_activation_volume_list. # Only LVs selected by this list are auto-activated. # This list works like volume_list, but it is used only by # auto-activation commands. It does not apply to direct activation # commands. If this list is defined, an LV is only auto-activated # if it matches an entry in this list. If this list is undefined, it # imposes no limits on LV auto-activation (all are allowed.) If this # list is defined and empty, i.e. \"[]\", then no LVs are selected for # auto-activation. An LV that is selected by this list for # auto-activation, must also be selected by volume_list (if defined) # before it is activated. Auto-activation is an activation command that # includes the 'a' argument: --activate ay or -a ay. The 'a' (auto) # argument for auto-activation is meant to be used by activation # commands that are run automatically by the system, as opposed to LVM # commands run directly by a user. A user may also use the 'a' flag # directly to perform auto-activation. Also see pvscan(8) for more # information about auto-activation. # # Accepted values: # vgname # The VG name is matched exactly and selects all LVs in the VG. # vgname/lvname # The VG name and LV name are matched exactly and selects the LV. # @tag # Selects an LV if the specified tag matches a tag set on the LV # or VG. # @* # Selects an LV if a tag defined on the host is also set on the LV # or VG. See tags/hosttags. If any host tags exist but volume_list # is not defined, a default single-entry list containing '@*' # is assumed. # # Example # auto_activation_volume_list = [ \"vg1\", \"vg2/lvol1\", \"@tag1\", \"@*\" ] # # This configuration option does not have a default value defined. # Configuration option activation/read_only_volume_list. # LVs in this list are activated in read-only mode. # If this list is defined, each LV that is to be activated is checked # against this list, and if it matches, it is activated in read-only # mode. This overrides the permission setting stored in the metadata, # e.g. from --permission rw. # # Accepted values: # vgname # The VG name is matched exactly and selects all LVs in the VG. # vgname/lvname # The VG name and LV name are matched exactly and selects the LV. # @tag # Selects an LV if the specified tag matches a tag set on the LV # or VG. # @* # Selects an LV if a tag defined on the host is also set on the LV # or VG. See tags/hosttags. If any host tags exist but volume_list # is not defined, a default single-entry list containing '@*' # is assumed. # # Example # read_only_volume_list = [ \"vg1\", \"vg2/lvol1\", \"@tag1\", \"@*\" ] # # This configuration option does not have a default value defined. # Configuration option activation/raid_region_size. # Size in KiB of each raid or mirror synchronization region. # For raid or mirror segment types, this is the amount of data that is # copied at once when initializing, or moved at once by pvmove. raid_region_size = 512 # Configuration option activation/error_when_full. # Return errors if a thin pool runs out of space. # The --errorwhenfull option overrides this setting. # When enabled, writes to thin LVs immediately return an error if the # thin pool is out of data space. When disabled, writes to thin LVs # are queued if the thin pool is out of space, and processed when the # thin pool data space is extended. 
New thin pools are assigned the # behavior defined here. # This configuration option has an automatic default value. # error_when_full = 0 # Configuration option activation/readahead. # Setting to use when there is no readahead setting in metadata. # # Accepted values: # none # Disable readahead. # auto # Use default value chosen by kernel. # readahead = \"auto\" # Configuration option activation/raid_fault_policy. # Defines how a device failure in a RAID LV is handled. # This includes LVs that have the following segment types: # raid1, raid4, raid5*, and raid6*. # If a device in the LV fails, the policy determines the steps # performed by dmeventd automatically, and the steps perfomed by the # manual command lvconvert --repair --use-policies. # Automatic handling requires dmeventd to be monitoring the LV. # # Accepted values: # warn # Use the system log to warn the user that a device in the RAID LV # has failed. It is left to the user to run lvconvert --repair # manually to remove or replace the failed device. As long as the # number of failed devices does not exceed the redundancy of the LV # (1 device for raid4/5, 2 for raid6), the LV will remain usable. # allocate # Attempt to use any extra physical volumes in the VG as spares and # replace faulty devices. # raid_fault_policy = \"warn\" # Configuration option activation/mirror_image_fault_policy. # Defines how a device failure in a 'mirror' LV is handled. # An LV with the 'mirror' segment type is composed of mirror images # (copies) and a mirror log. A disk log ensures that a mirror LV does # not need to be re-synced (all copies made the same) every time a # machine reboots or crashes. If a device in the LV fails, this policy # determines the steps perfomed by dmeventd automatically, and the steps # performed by the manual command lvconvert --repair --use-policies. # Automatic handling requires dmeventd to be monitoring the LV. # # Accepted values: # remove # Simply remove the faulty device and run without it. If the log # device fails, the mirror would convert to using an in-memory log. # This means the mirror will not remember its sync status across # crashes/reboots and the entire mirror will be re-synced. If a # mirror image fails, the mirror will convert to a non-mirrored # device if there is only one remaining good copy. # allocate # Remove the faulty device and try to allocate space on a new # device to be a replacement for the failed device. Using this # policy for the log is fast and maintains the ability to remember # sync state through crashes/reboots. Using this policy for a # mirror device is slow, as it requires the mirror to resynchronize # the devices, but it will preserve the mirror characteristic of # the device. This policy acts like 'remove' if no suitable device # and space can be allocated for the replacement. # allocate_anywhere # Not yet implemented. Useful to place the log device temporarily # on the same physical volume as one of the mirror images. This # policy is not recommended for mirror devices since it would break # the redundant nature of the mirror. This policy acts like # 'remove' if no suitable device and space can be allocated for the # replacement. # mirror_image_fault_policy = \"remove\" # Configuration option activation/mirror_log_fault_policy. # Defines how a device failure in a 'mirror' log LV is handled. # The mirror_image_fault_policy description for mirrored LVs also # applies to mirrored log LVs. 
mirror_log_fault_policy = \"allocate\" # Configuration option activation/snapshot_autoextend_threshold. # Auto-extend a snapshot when its usage exceeds this percent. # Setting this to 100 disables automatic extension. # The minimum value is 50 (a smaller value is treated as 50.) # Also see snapshot_autoextend_percent. # Automatic extension requires dmeventd to be monitoring the LV. # # Example # Using 70% autoextend threshold and 20% autoextend size, when a 1G # snapshot exceeds 700M, it is extended to 1.2G, and when it exceeds # 840M, it is extended to 1.44G: # snapshot_autoextend_threshold = 70 # snapshot_autoextend_threshold = 100 # Configuration option activation/snapshot_autoextend_percent. # Auto-extending a snapshot adds this percent extra space. # The amount of additional space added to a snapshot is this # percent of its current size. # # Example # Using 70% autoextend threshold and 20% autoextend size, when a 1G # snapshot exceeds 700M, it is extended to 1.2G, and when it exceeds # 840M, it is extended to 1.44G: # snapshot_autoextend_percent = 20 # snapshot_autoextend_percent = 20 # Configuration option activation/thin_pool_autoextend_threshold. # Auto-extend a thin pool when its usage exceeds this percent. # Setting this to 100 disables automatic extension. # The minimum value is 50 (a smaller value is treated as 50.) # Also see thin_pool_autoextend_percent. # Automatic extension requires dmeventd to be monitoring the LV. # # Example # Using 70% autoextend threshold and 20% autoextend size, when a 1G # thin pool exceeds 700M, it is extended to 1.2G, and when it exceeds # 840M, it is extended to 1.44G: # thin_pool_autoextend_threshold = 70 # thin_pool_autoextend_threshold = 100 # Configuration option activation/thin_pool_autoextend_percent. # Auto-extending a thin pool adds this percent extra space. # The amount of additional space added to a thin pool is this # percent of its current size. # # Example # Using 70% autoextend threshold and 20% autoextend size, when a 1G # thin pool exceeds 700M, it is extended to 1.2G, and when it exceeds # 840M, it is extended to 1.44G: # thin_pool_autoextend_percent = 20 # thin_pool_autoextend_percent = 20 # Configuration option activation/mlock_filter. # Do not mlock these memory areas. # While activating devices, I/O to devices being (re)configured is # suspended. As a precaution against deadlocks, LVM pins memory it is # using so it is not paged out, and will not require I/O to reread. # Groups of pages that are known not to be accessed during activation # do not need to be pinned into memory. Each string listed in this # setting is compared against each line in /proc/self/maps, and the # pages corresponding to lines that match are not pinned. On some # systems, locale-archive was found to make up over 80% of the memory # used by the process. # # Example # mlock_filter = [ \"locale/locale-archive\", \"gconv/gconv-modules.cache\" ] # # This configuration option is advanced. # This configuration option does not have a default value defined. # Configuration option activation/use_mlockall. # Use the old behavior of mlockall to pin all memory. # Prior to version 2.02.62, LVM used mlockall() to pin the whole # process's memory while activating devices. use_mlockall = 0 # Configuration option activation/monitoring. # Monitor LVs that are activated. # The --ignoremonitoring option overrides this setting. # When enabled, LVM will ask dmeventd to monitor activated LVs. monitoring = 1 # Configuration option activation/polling_interval. 
# Check pvmove or lvconvert progress at this interval (seconds). # When pvmove or lvconvert must wait for the kernel to finish # synchronising or merging data, they check and report progress at # intervals of this number of seconds. If this is set to 0 and there # is only one thing to wait for, there are no progress reports, but # the process is awoken immediately once the operation is complete. polling_interval = 15 # Configuration option activation/auto_set_activation_skip. # Set the activation skip flag on new thin snapshot LVs. # The --setactivationskip option overrides this setting. # An LV can have a persistent 'activation skip' flag. The flag causes # the LV to be skipped during normal activation. The lvchange/vgchange # -K option is required to activate LVs that have the activation skip # flag set. When this setting is enabled, the activation skip flag is # set on new thin snapshot LVs. # This configuration option has an automatic default value. # auto_set_activation_skip = 1 # Configuration option activation/activation_mode. # How LVs with missing devices are activated. # The --activationmode option overrides this setting. # # Accepted values: # complete # Only allow activation of an LV if all of the Physical Volumes it # uses are present. Other PVs in the Volume Group may be missing. # degraded # Like complete, but additionally RAID LVs of segment type raid1, # raid4, raid5, radid6 and raid10 will be activated if there is no # data loss, i.e. they have sufficient redundancy to present the # entire addressable range of the Logical Volume. # partial # Allows the activation of any LV even if a missing or failed PV # could cause data loss with a portion of the LV inaccessible. # This setting should not normally be used, but may sometimes # assist with data recovery. # activation_mode = \"degraded\" # Configuration option activation/lock_start_list. # Locking is started only for VGs selected by this list. # The rules are the same as those for volume_list. # This configuration option does not have a default value defined. # Configuration option activation/auto_lock_start_list. # Locking is auto-started only for VGs selected by this list. # The rules are the same as those for auto_activation_volume_list. # This configuration option does not have a default value defined. } Configuration section metadata. This configuration section has an automatic default value. metadata { # Configuration option metadata/check_pv_device_sizes. # Check device sizes are not smaller than corresponding PV sizes. # If device size is less than corresponding PV size found in metadata, # there is always a risk of data loss. If this option is set, then LVM # issues a warning message each time it finds that the device size is # less than corresponding PV size. You should not disable this unless # you are absolutely sure about what you are doing! # This configuration option is advanced. # This configuration option has an automatic default value. # check_pv_device_sizes = 1 # Configuration option metadata/record_lvs_history. # When enabled, LVM keeps history records about removed LVs in # metadata. The information that is recorded in metadata for # historical LVs is reduced when compared to original # information kept in metadata for live LVs. Currently, this # feature is supported for thin and thin snapshot LVs only. # This configuration option has an automatic default value. # record_lvs_history = 0 # Configuration option metadata/lvs_history_retention_time. 
# Retention time in seconds after which a record about individual # historical logical volume is automatically destroyed. # A value of 0 disables this feature. # This configuration option has an automatic default value. # lvs_history_retention_time = 0 # Configuration option metadata/pvmetadatacopies. # Number of copies of metadata to store on each PV. # The --pvmetadatacopies option overrides this setting. # # Accepted values: # 2 # Two copies of the VG metadata are stored on the PV, one at the # front of the PV, and one at the end. # 1 # One copy of VG metadata is stored at the front of the PV. # 0 # No copies of VG metadata are stored on the PV. This may be # useful for VGs containing large numbers of PVs. # # This configuration option is advanced. # This configuration option has an automatic default value. # pvmetadatacopies = 1 # Configuration option metadata/vgmetadatacopies. # Number of copies of metadata to maintain for each VG. # The --vgmetadatacopies option overrides this setting. # If set to a non-zero value, LVM automatically chooses which of the # available metadata areas to use to achieve the requested number of # copies of the VG metadata. If you set a value larger than the the # total number of metadata areas available, then metadata is stored in # them all. The value 0 (unmanaged) disables this automatic management # and allows you to control which metadata areas are used at the # individual PV level using pvchange --metadataignore y|n. # This configuration option has an automatic default value. # vgmetadatacopies = 0 # Configuration option metadata/pvmetadatasize. # Approximate number of sectors to use for each metadata copy. # VGs with large numbers of PVs or LVs, or VGs containing complex LV # structures, may need additional space for VG metadata. The metadata # areas are treated as circular buffers, so unused space becomes filled # with an archive of the most recent previous versions of the metadata. # This configuration option has an automatic default value. # pvmetadatasize = 255 # Configuration option metadata/pvmetadataignore. # Ignore metadata areas on a new PV. # The --metadataignore option overrides this setting. # If metadata areas on a PV are ignored, LVM will not store metadata # in them. # This configuration option is advanced. # This configuration option has an automatic default value. # pvmetadataignore = 0 # Configuration option metadata/stripesize. # This configuration option is advanced. # This configuration option has an automatic default value. # stripesize = 64 # Configuration option metadata/dirs. # Directories holding live copies of text format metadata. # These directories must not be on logical volumes! # It's possible to use LVM with a couple of directories here, # preferably on different (non-LV) filesystems, and with no other # on-disk metadata (pvmetadatacopies = 0). Or this can be in addition # to on-disk metadata areas. The feature was originally added to # simplify testing and is not supported under low memory situations - # the machine could lock up. Never edit any files in these directories # by hand unless you are absolutely sure you know what you are doing! # Use the supplied toolset to make changes (e.g. vgcfgrestore). # # Example # dirs = [ \"/etc/lvm/metadata\", \"/mnt/disk2/lvm/metadata2\" ] # # This configuration option is advanced. # This configuration option does not have a default value defined. } Configuration section report. LVM report command output formatting. This configuration section has an automatic default value. 
report { # Configuration option report/output_format. # Format of LVM command's report output. # If there is more than one report per command, then the format # is applied for all reports. You can also change output format # directly on command line using --reportformat option which # has precedence over log/output_format setting. # Accepted values: # basic # Original format with columns and rows. If there is more than # one report per command, each report is prefixed with report's # name for identification. # json # JSON format. # This configuration option has an automatic default value. # output_format = \"basic\" # Configuration option report/compact_output. # Do not print empty values for all report fields. # If enabled, all fields that don't have a value set for any of the # rows reported are skipped and not printed. Compact output is # applicable only if report/buffered is enabled. If you need to # compact only specified fields, use compact_output=0 and define # report/compact_output_cols configuration setting instead. # This configuration option has an automatic default value. # compact_output = 0 # Configuration option report/compact_output_cols. # Do not print empty values for specified report fields. # If defined, specified fields that don't have a value set for any # of the rows reported are skipped and not printed. Compact output # is applicable only if report/buffered is enabled. If you need to # compact all fields, use compact_output=1 instead in which case # the compact_output_cols setting is then ignored. # This configuration option has an automatic default value. # compact_output_cols = \"\" # Configuration option report/aligned. # Align columns in report output. # This configuration option has an automatic default value. # aligned = 1 # Configuration option report/buffered. # Buffer report output. # When buffered reporting is used, the report's content is appended # incrementally to include each object being reported until the report # is flushed to output which normally happens at the end of command # execution. Otherwise, if buffering is not used, each object is # reported as soon as its processing is finished. # This configuration option has an automatic default value. # buffered = 1 # Configuration option report/headings. # Show headings for columns on report. # This configuration option has an automatic default value. # headings = 1 # Configuration option report/separator. # A separator to use on report after each field. # This configuration option has an automatic default value. # separator = \" \" # Configuration option report/list_item_separator. # A separator to use for list items when reported. # This configuration option has an automatic default value. # list_item_separator = \",\" # Configuration option report/prefixes. # Use a field name prefix for each field reported. # This configuration option has an automatic default value. # prefixes = 0 # Configuration option report/quoted. # Quote field values when using field name prefixes. # This configuration option has an automatic default value. # quoted = 1 # Configuration option report/colums_as_rows. # Output each column as a row. # If set, this also implies report/prefixes=1. # This configuration option has an automatic default value. # colums_as_rows = 0 # Configuration option report/binary_values_as_numeric. # Use binary values 0 or 1 instead of descriptive literal values. 
# For columns that have exactly two valid values to report # (not counting the 'unknown' value which denotes that the # value could not be determined). # This configuration option has an automatic default value. # binary_values_as_numeric = 0 # Configuration option report/time_format. # Set time format for fields reporting time values. # Format specification is a string which may contain special character # sequences and ordinary character sequences. Ordinary character # sequences are copied verbatim. Each special character sequence is # introduced by the '%' character and such sequence is then # substituted with a value as described below. # # Accepted values: # %a # The abbreviated name of the day of the week according to the # current locale. # %A # The full name of the day of the week according to the current # locale. # %b # The abbreviated month name according to the current locale. # %B # The full month name according to the current locale. # %c # The preferred date and time representation for the current # locale (alt E) # %C # The century number (year/100) as a 2-digit integer. (alt E) # %d # The day of the month as a decimal number (range 01 to 31). # (alt O) # %D # Equivalent to %m/%d/%y. (For Americans only. Americans should # note that in other countries%d/%m/%y is rather common. This # means that in international context this format is ambiguous and # should not be used. # %e # Like %d, the day of the month as a decimal number, but a leading # zero is replaced by a space. (alt O) # %E # Modifier: use alternative local-dependent representation if # available. # %F # Equivalent to %Y-%m-%d (the ISO 8601 date format). # %G # The ISO 8601 week-based year with century as adecimal number. # The 4-digit year corresponding to the ISO week number (see %V). # This has the same format and value as %Y, except that if the # ISO week number belongs to the previous or next year, that year # is used instead. # %g # Like %G, but without century, that is, with a 2-digit year # (00-99). # %h # Equivalent to %b. # %H # The hour as a decimal number using a 24-hour clock # (range 00 to 23). (alt O) # %I # The hour as a decimal number using a 12-hour clock # (range 01 to 12). (alt O) # %j # The day of the year as a decimal number (range 001 to 366). # %k # The hour (24-hour clock) as a decimal number (range 0 to 23); # single digits are preceded by a blank. (See also %H.) # %l # The hour (12-hour clock) as a decimal number (range 1 to 12); # single digits are preceded by a blank. (See also %I.) # %m # The month as a decimal number (range 01 to 12). (alt O) # %M # The minute as a decimal number (range 00 to 59). (alt O) # %O # Modifier: use alternative numeric symbols. # %p # Either \"AM\" or \"PM\" according to the given time value, # or the corresponding strings for the current locale. Noon is # treated as \"PM\" and midnight as \"AM\". # %P # Like %p but in lowercase: \"am\" or \"pm\" or a corresponding # string for the current locale. # %r # The time in a.m. or p.m. notation. In the POSIX locale this is # equivalent to %I:%M:%S %p. # %R # The time in 24-hour notation (%H:%M). For a version including # the seconds, see %T below. # %s # The number of seconds since the Epoch, # 1970-01-01 00:00:00 +0000 (UTC) # %S # The second as a decimal number (range 00 to 60). (The range is # up to 60 to allow for occasional leap seconds.) (alt O) # %t # A tab character. # %T # The time in 24-hour notation (%H:%M:%S). # %u # The day of the week as a decimal, range 1 to 7, Monday being 1. # See also %w. 
(alt O) # %U # The week number of the current year as a decimal number, # range 00 to 53, starting with the first Sunday as the first # day of week 01. See also %V and %W. (alt O) # %V # The ISO 8601 week number of the current year as a decimal number, # range 01 to 53, where week 1 is the first week that has at least # 4 days in the new year. See also %U and %W. (alt O) # %w # The day of the week as a decimal, range 0 to 6, Sunday being 0. # See also %u. (alt O) # %W # The week number of the current year as a decimal number, # range 00 to 53, starting with the first Monday as the first day # of week 01. (alt O) # %x # The preferred date representation for the current locale without # the time. (alt E) # %X # The preferred time representation for the current locale without # the date. (alt E) # %y # The year as a decimal number without a century (range 00 to 99). # (alt E, alt O) # %Y # The year as a decimal number including the century. (alt E) # %z # The +hhmm or -hhmm numeric timezone (that is, the hour and minute # offset from UTC). # %Z # The timezone name or abbreviation. # %% # A literal '%' character. # # This configuration option has an automatic default value. # time_format = \"%Y-%m-%d %T %z\" # Configuration option report/devtypes_sort. # List of columns to sort by when reporting 'lvm devtypes' command. # See 'lvm devtypes -o help' for the list of possible fields. # This configuration option has an automatic default value. # devtypes_sort = \"devtype_name\" # Configuration option report/devtypes_cols. # List of columns to report for 'lvm devtypes' command. # See 'lvm devtypes -o help' for the list of possible fields. # This configuration option has an automatic default value. # devtypes_cols = \"devtype_name,devtype_max_partitions,devtype_description\" # Configuration option report/devtypes_cols_verbose. # List of columns to report for 'lvm devtypes' command in verbose mode. # See 'lvm devtypes -o help' for the list of possible fields. # This configuration option has an automatic default value. # devtypes_cols_verbose = \"devtype_name,devtype_max_partitions,devtype_description\" # Configuration option report/lvs_sort. # List of columns to sort by when reporting 'lvs' command. # See 'lvs -o help' for the list of possible fields. # This configuration option has an automatic default value. # lvs_sort = \"vg_name,lv_name\" # Configuration option report/lvs_cols. # List of columns to report for 'lvs' command. # See 'lvs -o help' for the list of possible fields. # This configuration option has an automatic default value. # lvs_cols = \"lv_name,vg_name,lv_attr,lv_size,pool_lv,origin,data_percent,metadata_percent,move_pv,mirror_log,copy_percent,convert_lv\" # Configuration option report/lvs_cols_verbose. # List of columns to report for 'lvs' command in verbose mode. # See 'lvs -o help' for the list of possible fields. # This configuration option has an automatic default value. # lvs_cols_verbose = \"lv_name,vg_name,seg_count,lv_attr,lv_size,lv_major,lv_minor,lv_kernel_major,lv_kernel_minor,pool_lv,origin,data_percent,metadata_percent,move_pv,copy_percent,mirror_log,convert_lv,lv_uuid,lv_profile\" # Configuration option report/vgs_sort. # List of columns to sort by when reporting 'vgs' command. # See 'vgs -o help' for the list of possible fields. # This configuration option has an automatic default value. # vgs_sort = \"vg_name\" # Configuration option report/vgs_cols. # List of columns to report for 'vgs' command. # See 'vgs -o help' for the list of possible fields. 
# This configuration option has an automatic default value. # vgs_cols = \"vg_name,pv_count,lv_count,snap_count,vg_attr,vg_size,vg_free\" # Configuration option report/vgs_cols_verbose. # List of columns to report for 'vgs' command in verbose mode. # See 'vgs -o help' for the list of possible fields. # This configuration option has an automatic default value. # vgs_cols_verbose = \"vg_name,vg_attr,vg_extent_size,pv_count,lv_count,snap_count,vg_size,vg_free,vg_uuid,vg_profile\" # Configuration option report/pvs_sort. # List of columns to sort by when reporting 'pvs' command. # See 'pvs -o help' for the list of possible fields. # This configuration option has an automatic default value. # pvs_sort = \"pv_name\" # Configuration option report/pvs_cols. # List of columns to report for 'pvs' command. # See 'pvs -o help' for the list of possible fields. # This configuration option has an automatic default value. # pvs_cols = \"pv_name,vg_name,pv_fmt,pv_attr,pv_size,pv_free\" # Configuration option report/pvs_cols_verbose. # List of columns to report for 'pvs' command in verbose mode. # See 'pvs -o help' for the list of possible fields. # This configuration option has an automatic default value. # pvs_cols_verbose = \"pv_name,vg_name,pv_fmt,pv_attr,pv_size,pv_free,dev_size,pv_uuid\" # Configuration option report/segs_sort. # List of columns to sort by when reporting 'lvs --segments' command. # See 'lvs --segments -o help' for the list of possible fields. # This configuration option has an automatic default value. # segs_sort = \"vg_name,lv_name,seg_start\" # Configuration option report/segs_cols. # List of columns to report for 'lvs --segments' command. # See 'lvs --segments -o help' for the list of possible fields. # This configuration option has an automatic default value. # segs_cols = \"lv_name,vg_name,lv_attr,stripes,segtype,seg_size\" # Configuration option report/segs_cols_verbose. # List of columns to report for 'lvs --segments' command in verbose mode. # See 'lvs --segments -o help' for the list of possible fields. # This configuration option has an automatic default value. # segs_cols_verbose = \"lv_name,vg_name,lv_attr,seg_start,seg_size,stripes,segtype,stripesize,chunksize\" # Configuration option report/pvsegs_sort. # List of columns to sort by when reporting 'pvs --segments' command. # See 'pvs --segments -o help' for the list of possible fields. # This configuration option has an automatic default value. # pvsegs_sort = \"pv_name,pvseg_start\" # Configuration option report/pvsegs_cols. # List of columns to sort by when reporting 'pvs --segments' command. # See 'pvs --segments -o help' for the list of possible fields. # This configuration option has an automatic default value. # pvsegs_cols = \"pv_name,vg_name,pv_fmt,pv_attr,pv_size,pv_free,pvseg_start,pvseg_size\" # Configuration option report/pvsegs_cols_verbose. # List of columns to sort by when reporting 'pvs --segments' command in verbose mode. # See 'pvs --segments -o help' for the list of possible fields. # This configuration option has an automatic default value. # pvsegs_cols_verbose = \"pv_name,vg_name,pv_fmt,pv_attr,pv_size,pv_free,pvseg_start,pvseg_size,lv_name,seg_start_pe,segtype,seg_pe_ranges\" # Configuration option report/vgs_cols_full. # List of columns to report for lvm fullreport's 'vgs' subreport. # See 'vgs -o help' for the list of possible fields. # This configuration option has an automatic default value. # vgs_cols_full = \"vg_all\" # Configuration option report/pvs_cols_full. 
# List of columns to report for lvm fullreport's 'vgs' subreport. # See 'pvs -o help' for the list of possible fields. # This configuration option has an automatic default value. # pvs_cols_full = \"pv_all\" # Configuration option report/lvs_cols_full. # List of columns to report for lvm fullreport's 'lvs' subreport. # See 'lvs -o help' for the list of possible fields. # This configuration option has an automatic default value. # lvs_cols_full = \"lv_all\" # Configuration option report/pvsegs_cols_full. # List of columns to report for lvm fullreport's 'pvseg' subreport. # See 'pvs --segments -o help' for the list of possible fields. # This configuration option has an automatic default value. # pvsegs_cols_full = \"pvseg_all,pv_uuid,lv_uuid\" # Configuration option report/segs_cols_full. # List of columns to report for lvm fullreport's 'seg' subreport. # See 'lvs --segments -o help' for the list of possible fields. # This configuration option has an automatic default value. # segs_cols_full = \"seg_all,lv_uuid\" # Configuration option report/vgs_sort_full. # List of columns to sort by when reporting lvm fullreport's 'vgs' subreport. # See 'vgs -o help' for the list of possible fields. # This configuration option has an automatic default value. # vgs_sort_full = \"vg_name\" # Configuration option report/pvs_sort_full. # List of columns to sort by when reporting lvm fullreport's 'vgs' subreport. # See 'pvs -o help' for the list of possible fields. # This configuration option has an automatic default value. # pvs_sort_full = \"pv_name\" # Configuration option report/lvs_sort_full. # List of columns to sort by when reporting lvm fullreport's 'lvs' subreport. # See 'lvs -o help' for the list of possible fields. # This configuration option has an automatic default value. # lvs_sort_full = \"vg_name,lv_name\" # Configuration option report/pvsegs_sort_full. # List of columns to sort by when reporting for lvm fullreport's 'pvseg' subreport. # See 'pvs --segments -o help' for the list of possible fields. # This configuration option has an automatic default value. # pvsegs_sort_full = \"pv_uuid,pvseg_start\" # Configuration option report/segs_sort_full. # List of columns to sort by when reporting lvm fullreport's 'seg' subreport. # See 'lvs --segments -o help' for the list of possible fields. # This configuration option has an automatic default value. # segs_sort_full = \"lv_uuid,seg_start\" # Configuration option report/mark_hidden_devices. # Use brackets [] to mark hidden devices. # This configuration option has an automatic default value. # mark_hidden_devices = 1 # Configuration option report/two_word_unknown_device. # Use the two words 'unknown device' in place of '[unknown]'. # This is displayed when the device for a PV is not known. # This configuration option has an automatic default value. # two_word_unknown_device = 0 } Configuration section dmeventd. Settings for the LVM event daemon. dmeventd { # Configuration option dmeventd/mirror_library. # The library dmeventd uses when monitoring a mirror device. # libdevmapper-event-lvm2mirror.so attempts to recover from # failures. It removes failed devices from a volume group and # reconfigures a mirror as necessary. If no mirror library is # provided, mirrors are not monitored through dmeventd. mirror_library = \"libdevmapper-event-lvm2mirror.so\" # Configuration option dmeventd/raid_library. # This configuration option has an automatic default value. 
# raid_library = \"libdevmapper-event-lvm2raid.so\" # Configuration option dmeventd/snapshot_library. # The library dmeventd uses when monitoring a snapshot device. # libdevmapper-event-lvm2snapshot.so monitors the filling of snapshots # and emits a warning through syslog when the usage exceeds 80%. The # warning is repeated when 85%, 90% and 95% of the snapshot is filled. snapshot_library = \"libdevmapper-event-lvm2snapshot.so\" # Configuration option dmeventd/thin_library. # The library dmeventd uses when monitoring a thin device. # libdevmapper-event-lvm2thin.so monitors the filling of a pool # and emits a warning through syslog when the usage exceeds 80%. The # warning is repeated when 85%, 90% and 95% of the pool is filled. thin_library = \"libdevmapper-event-lvm2thin.so\" # Configuration option dmeventd/executable. # The full path to the dmeventd binary. # This configuration option has an automatic default value. # executable = \"/usr/sbin/dmeventd\" } Configuration section tags. Host tag settings. This configuration section has an automatic default value. tags { # Configuration option tags/hosttags. # Create a host tag using the machine name. # The machine name is nodename returned by uname(2). # This configuration option has an automatic default value. # hosttags = 0 # Configuration section tags/<tag>. # Replace this subsection name with a custom tag name. # Multiple subsections like this can be created. The '@' prefix for # tags is optional. This subsection can contain host_list, which is a # list of machine names. If the name of the local machine is found in # host_list, then the name of this subsection is used as a tag and is # applied to the local machine as a 'host tag'. If this subsection is # empty (has no host_list), then the subsection name is always applied # as a 'host tag'. # # Example # The host tag foo is given to all hosts, and the host tag # bar is given to the hosts named machine1 and machine2. # tags { foo { } bar { host_list = [ \"machine1\", \"machine2\" ] } } # # This configuration section has variable name. # This configuration section has an automatic default value. # tag { # Configuration option tags/<tag>/host_list. # A list of machine names. # These machine names are compared to the nodename returned # by uname(2). If the local machine name matches an entry in # this list, the name of the subsection is applied to the # machine as a 'host tag'. # This configuration option does not have a default value defined. # } }" ]
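The options in the sections above can be inspected and overridden from the command line without editing lvm.conf. The commands below are a minimal sketch of that workflow, assuming a placeholder volume group named vg0; the exact report fields available depend on the installed LVM version.
# Print the current or built-in default value of a setting, addressed as section/option.
lvmconfig activation/raid_fault_policy
lvmconfig --type default activation/monitoring
# Per-command overrides of settings discussed above.
lvs --reportformat json vg0                  # overrides report/output_format for this run
vgchange -ay --activationmode degraded vg0   # overrides activation/activation_mode for this run
# Report the usage figures that the snapshot and thin pool autoextend thresholds are compared against.
lvs -o lv_name,data_percent,metadata_percent,snap_percent vg0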
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/logical_volume_manager_administration/lvmconf_file
12.2. Installing from a Different Source
12.2. Installing from a Different Source You can install Red Hat Enterprise Linux from the ISO images stored on hard disk, or from a network using NFS, FTP, HTTP, or HTTPS methods. Experienced users frequently use one of these methods because it is often faster to read data from a hard disk or network server than from a DVD. The following table summarizes the different boot methods and recommended installation methods to use with each: Table 12.1. Boot Methods and Installation Sources Boot method Installation source Full installation media (DVD) The boot media itself Minimal boot media (CD or DVD) Full installation DVD ISO image or the installation tree extracted from this image, placed in a network location or on a hard drive Network boot Full installation DVD ISO image or the installation tree extracted from this image, placed in a network location
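For example, when booting from minimal boot media or over the network, the installation program can be pointed at one of these sources with the inst.repo= boot option. The labels, paths, and host names below are placeholders; see the boot options appendix for the full syntax.
inst.repo=hd:LABEL=RHEL7:/rhel-server-dvd.iso      # ISO image stored on a local hard drive
inst.repo=nfs:server.example.com:/exports/rhel7    # installation tree or ISO exported over NFS
inst.repo=http://server.example.com/rhel7/         # installation tree served over HTTP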
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/sect-installing-alternate-source-ppc
7.118. libqb
7.118. libqb 7.118.1. RHBA-2013:0323 - libqb bug fix and enhancement update Updated libqb packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The libqb packages provide a library with the primary purpose of providing high-performance, reusable client-server features, such as high-performance logging, tracing, inter-process communication, and polling. Note The libqb packages have been upgraded to upstream version 0.14.2, which provides a number of bug fixes and enhancements over the previous version. (BZ#845275) Bug Fix BZ# 869446 Previously, a timeout argument given to the qb_ipcc_recv() API function was not passed to poll() while waiting for a reply. Consequently, this function could consume nearly 100% CPU resources and affect the pacemaker utility. This bug has been fixed by passing the timeout value to poll() in qb_ipcc_recv(). As a result, the timeout period is honored as expected and pacemaker works correctly in such a case. All libqb users are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. 7.118.2. RHBA-2013:1431 - libqb bug fix and enhancement update Updated libqb packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The libqb packages provide a library with the primary purpose of providing high-performance, reusable client-server features, such as high-performance logging, tracing, inter-process communication, and polling. Note The libqb packages have been upgraded to upstream version 0.16.0, which provides a number of bug fixes and enhancements over the previous version. One of the notable changes fixes a bug in the qb_log_from_external_source() function that caused Pacemaker's policy engine to terminate unexpectedly. The engine now works as expected. (BZ# 1001491 ) Users of libqb are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/libqb
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Tell us how we can make it better. Providing documentation feedback in Jira Use the Create Issue form to provide feedback on the documentation for Red Hat OpenStack Services on OpenShift (RHOSO) or earlier releases of Red Hat OpenStack Platform (RHOSP). When you create an issue for RHOSO or RHOSP documents, the issue is recorded in the RHOSO Jira project, where you can track the progress of your feedback. To complete the Create Issue form, ensure that you are logged in to Jira. If you do not have a Red Hat Jira account, you can create an account at https://issues.redhat.com . Click the following link to open a Create Issue page: Create Issue Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create .
null
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/configuring_the_bare_metal_provisioning_service/proc_providing-feedback-on-red-hat-documentation
22.5. Basic Example: Performing a Recovery
22.5. Basic Example: Performing a Recovery An administrator, John Smith, has to create a disaster recovery plan for his directory deployment. Example Corp. has three physical offices, in San Francisco, Dallas, and Arlington. Each site has 10 servers which replicate to each other locally, and then one server at each site replicates to another server at the other two sites. Each site has business-critical customer data stored in its directory, as well as human resources data. Several external applications require access to the data to perform operations like billing. John Smith's first step is to perform a site survey. He is looking for three things: what his directory usage is (clients that access it and traffic loads across the sites), what his current assets are, and what assets he may need to acquire. This is much like the initial site survey he performed when deploying Red Hat Directory Server. His next step is identifying potential disaster scenarios. Two of the three sites are highly vulnerable to natural disasters (San Francisco and Dallas). All three sites could face normal interruptions, like outages for power or Internet access. Additionally, since each site supplies its own local data, each site is vulnerable to losing a server instance or machine. John Smith then breaks his disaster recovery plan into three parts: Plan A covers losing a single instance of Directory Server Plan B covers some kind of data corruption or attack Plan C covers losing an entire office For plans A and B, John Smith decides to use a hot recovery to immediately switch functionality from a single instance to the backup. Each server is backed up daily, using a cron job, and then the archive is copied over and restored on a virtual machine. The virtual machine is kept on a different subnet, but can be switched over immediately if its peer ever goes offline. John Smith uses simple SNMP traps to track each Directory Server instance's availability. Plan C is more extensive. Along with replication between sites and the local backups, he decides to mail a physical copy of each site's backup, for every local instance, once a week to the other two colocation facilities. He also keeps a spare server with adequate Internet access and software licenses to restore an entire site, using virtual machines, at one of the other colocation facilities. He designates the Arlington site as the primary recovery location because that is where most of the IT staff is located, then San Francisco, and last Dallas, based on the distribution of personnel. For every event, the IT administrators at all three sites will be notified, and the manager assumes the responsibility of setting up the virtual machines, restoring the Directory Server instances from the physical backups, and rerouting client traffic. John Smith schedules a quarterly review and update of the plan to account for any new hardware or application changes. Once a year, all three sites have to run through the procedure of recovering and deploying the other two sites, according to the procedures in Disaster Plan C.
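A sketch of the nightly backup job described in plans A and B, assuming a Red Hat Directory Server instance named slapd-example; the schedule, paths, and standby host are illustrative, and the archive would be restored on the standby with the matching dsctl bak2db operation.
# /etc/cron.d/dirsrv-backup (hypothetical file name)
30 2 * * * root dsctl slapd-example db2bak /var/lib/dirsrv/slapd-example/bak
45 2 * * * root rsync -a /var/lib/dirsrv/slapd-example/bak/ standby.example.com:/var/lib/dirsrv/slapd-example/bak/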
null
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/recovering-ds
5.9. Configuring Fencing Levels
5.9. Configuring Fencing Levels Pacemaker supports fencing nodes with multiple devices through a feature called fencing topologies. To implement topologies, create the individual devices as you normally would and then define one or more fencing levels in the fencing topology section in the configuration. Each level is attempted in ascending numeric order, starting at 1. If a device fails, processing terminates for the current level. No further devices in that level are exercised and the next level is attempted instead. If all devices are successfully fenced, then that level has succeeded and no other levels are tried. The operation is finished when a level has passed (success), or all levels have been attempted (failed). Use the following command to add a fencing level to a node. The devices are given as a comma-separated list of stonith ids, which are attempted for the node at that level. The following command lists all of the fencing levels that are currently configured. In the following example, there are two fence devices configured for node rh7-2 : an ilo fence device called my_ilo and an apc fence device called my_apc . These commands set up fence levels so that if the device my_ilo fails and is unable to fence the node, then Pacemaker will attempt to use the device my_apc . This example also shows the output of the pcs stonith level command after the levels are configured. The following command removes the fence level for the specified node and devices. If no nodes or devices are specified, then the fence level you specify is removed from all nodes. The following command clears the fence levels on the specified node or stonith id. If you do not specify a node or stonith id, all fence levels are cleared. If you specify more than one stonith id, they must be separated by a comma and no spaces, as in the following example. The following command verifies that all fence devices and nodes specified in fence levels exist. As of Red Hat Enterprise Linux 7.4, you can specify nodes in fencing topology by a regular expression applied on a node name and by a node attribute and its value. For example, the following commands configure nodes node1 , node2 , and node3 to use fence devices apc1 and apc2 , and nodes node4 , node5 , and node6 to use fence devices apc3 and apc4 . The following commands yield the same results by using node attribute matching.
[ "pcs stonith level add level node devices", "pcs stonith level", "pcs stonith level add 1 rh7-2 my_ilo pcs stonith level add 2 rh7-2 my_apc pcs stonith level Node: rh7-2 Level 1 - my_ilo Level 2 - my_apc", "pcs stonith level remove level [ node_id ] [ stonith_id ] ... [ stonith_id ]", "pcs stonith level clear [ node | stonith_id (s)]", "pcs stonith level clear dev_a,dev_b", "pcs stonith level verify", "pcs stonith level add 1 \"regexp%node[1-3]\" apc1,apc2 pcs stonith level add 1 \"regexp%node[4-6]\" apc3,apc4", "pcs node attribute node1 rack=1 pcs node attribute node2 rack=1 pcs node attribute node3 rack=1 pcs node attribute node4 rack=2 pcs node attribute node5 rack=2 pcs node attribute node6 rack=2 pcs stonith level add 1 attrib%rack=1 apc1,apc2 pcs stonith level add 1 attrib%rack=2 apc3,apc4" ]
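As a rough end-to-end sketch, the sequence below creates the two devices from the example, registers them as levels 1 and 2, and verifies the topology. The fence agent names and options are placeholders; the options you must supply depend on the agent in use (see the agent's metadata with pcs stonith describe).
pcs stonith create my_ilo fence_ilo pcmk_host_list="rh7-2" ipaddr="ilo.example.com" login="admin" passwd="secret"
pcs stonith create my_apc fence_apc_snmp pcmk_host_list="rh7-2" ipaddr="apc.example.com"
pcs stonith level add 1 rh7-2 my_ilo
pcs stonith level add 2 rh7-2 my_apc
pcs stonith level verify
pcs stonith fence rh7-2    # optional: confirm that the node can actually be fenced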
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/s1-fencelevels-HAAR
Chapter 10. Tasks
Chapter 10. Tasks Table 10.1. Tasks Subcommand Description and tasks task List all tasks: Monitor progress of a running task:
[ "hammer task list", "hammer task progress --id task_ID" ]
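For instance, you might narrow the list to running tasks before following one of them; the search string below is illustrative and task_ID is taken from the list output.
hammer task list --search "state = running"
hammer task progress --id task_ID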
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/hammer_cheat_sheet/tasks-1
Chapter 3. Configuring external listeners
Chapter 3. Configuring external listeners Use an external listener to expose your AMQ Streams Kafka cluster to a client outside an OpenShift environment. Specify the connection type to expose Kafka in the external listener configuration. nodeport uses NodePort type Services loadbalancer uses Loadbalancer type Services ingress uses Kubernetes Ingress and the NGINX Ingress Controller for Kubernetes route uses OpenShift Routes and the HAProxy router For more information on listener configuration, see GenericKafkaListener schema reference . Note route is only supported on OpenShift Additional resources Accessing Apache Kafka in Strimzi 3.1. Accessing Kafka using node ports This procedure describes how to access an AMQ Streams Kafka cluster from an external client using node ports. To connect to a broker, you need a hostname and port number for the Kafka bootstrap address , as well as the certificate used for authentication. Prerequisites An OpenShift cluster A running Cluster Operator Procedure Configure a Kafka resource with an external listener set to the nodeport type. For example: apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # ... listeners: - name: external port: 9094 type: nodeport tls: true authentication: type: tls # ... # ... zookeeper: # ... Create or update the resource. oc apply -f KAFKA-CONFIG-FILE NodePort type services are created for each Kafka broker, as well as an external bootstrap service . The bootstrap service routes external traffic to the Kafka brokers. Node addresses used for connection are propagated to the status of the Kafka custom resource. The cluster CA certificate to verify the identity of the kafka brokers is also created with the same name as the Kafka resource. Retrieve the bootstrap address you can use to access the Kafka cluster from the status of the Kafka resource. oc get kafka KAFKA-CLUSTER-NAME -o=jsonpath='{.status.listeners[?(@.type=="external")].bootstrapServers}{"\n"}' If TLS encryption is enabled, extract the public certificate of the broker certification authority. oc get secret KAFKA-CLUSTER-NAME -cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt Use the extracted certificate in your Kafka client to configure TLS connection. If you enabled any authentication, you will also need to configure SASL or TLS authentication. 3.2. Accessing Kafka using loadbalancers This procedure describes how to access an AMQ Streams Kafka cluster from an external client using loadbalancers. To connect to a broker, you need the address of the bootstrap loadbalancer , as well as the certificate used for TLS encryption. Prerequisites An OpenShift cluster A running Cluster Operator Procedure Configure a Kafka resource with an external listener set to the loadbalancer type. For example: apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # ... listeners: - name: external port: 9094 type: loadbalancer tls: true # ... # ... zookeeper: # ... Create or update the resource. oc apply -f KAFKA-CONFIG-FILE loadbalancer type services and loadbalancers are created for each Kafka broker, as well as an external bootstrap service . The bootstrap service routes external traffic to all Kafka brokers. DNS names and IP addresses used for connection are propagated to the status of each service. The cluster CA certificate to verify the identity of the kafka brokers is also created with the same name as the Kafka resource. Retrieve the address of the bootstrap service you can use to access the Kafka cluster from the status of the Kafka resource. 
oc get kafka KAFKA-CLUSTER-NAME -o=jsonpath='{.status.listeners[?(@.type=="external")].bootstrapServers}{"\n"}' If TLS encryption is enabled, extract the public certificate of the broker certification authority. oc get secret KAFKA-CLUSTER-NAME -cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt Use the extracted certificate in your Kafka client to configure TLS connection. If you enabled any authentication, you will also need to configure SASL or TLS authentication. 3.3. Accessing Kafka using ingress This procedure shows how to access an AMQ Streams Kafka cluster from an external client outside of OpenShift using Nginx Ingress. To connect to a broker, you need a hostname (advertised address) for the Ingress bootstrap address , as well as the certificate used for authentication. For access using Ingress, the port is always 443. TLS passthrough Kafka uses a binary protocol over TCP, but the NGINX Ingress Controller for Kubernetes is designed to work with the HTTP protocol. To be able to pass the Kafka connections through the Ingress, AMQ Streams uses the TLS passthrough feature of the NGINX Ingress Controller for Kubernetes . Ensure TLS passthrough is enabled in your NGINX Ingress Controller for Kubernetes deployment. Because it is using the TLS passthrough functionality, TLS encryption cannot be disabled when exposing Kafka using Ingress . For more information about enabling TLS passthrough, see TLS passthrough documentation . Prerequisites OpenShift cluster Deployed NGINX Ingress Controller for Kubernetes with TLS passthrough enabled A running Cluster Operator Procedure Configure a Kafka resource with an external listener set to the ingress type. Specify the Ingress hosts for the bootstrap service and Kafka brokers. For example: apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # ... listeners: - name: external port: 9094 type: ingress tls: true authentication: type: tls configuration: 1 bootstrap: host: bootstrap.myingress.com brokers: - broker: 0 host: broker-0.myingress.com - broker: 1 host: broker-1.myingress.com - broker: 2 host: broker-2.myingress.com # ... zookeeper: # ... 1 Ingress hosts for the bootstrap service and Kafka brokers. Create or update the resource. oc apply -f KAFKA-CONFIG-FILE ClusterIP type services are created for each Kafka broker, as well as an additional bootstrap service . These services are used by the Ingress controller to route traffic to the Kafka brokers. An Ingress resource is also created for each service to expose them using the Ingress controller. The Ingress hosts are propagated to the status of each service. The cluster CA certificate to verify the identity of the kafka brokers is also created with the same name as the Kafka resource. Use the address for the bootstrap host you specified in the configuration and port 443 ( BOOTSTRAP-HOST:443 ) in your Kafka client as the bootstrap address to connect to the Kafka cluster. Extract the public certificate of the broker certificate authority. oc get secret KAFKA-CLUSTER-NAME -cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt Use the extracted certificate in your Kafka client to configure the TLS connection. If you enabled any authentication, you will also need to configure SASL or TLS authentication. 3.4. Accessing Kafka using OpenShift routes This procedure describes how to access an AMQ Streams Kafka cluster from an external client outside of OpenShift using routes. 
To connect to a broker, you need a hostname for the route bootstrap address , as well as the certificate used for TLS encryption. For access using routes, the port is always 443. Prerequisites An OpenShift cluster A running Cluster Operator Procedure Configure a Kafka resource with an external listener set to the route type. For example: apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: labels: app: my-cluster name: my-cluster namespace: myproject spec: kafka: # ... listeners: - name: listener1 port: 9094 type: route tls: true # ... # ... zookeeper: # ... Warning An OpenShift Route address comprises the name of the Kafka cluster, the name of the listener, and the name of the namespace it is created in. For example, my-cluster-kafka-listener1-bootstrap-myproject ( CLUSTER-NAME -kafka- LISTENER-NAME -bootstrap- NAMESPACE ). Be careful that the whole length of the address does not exceed a maximum limit of 63 characters. Create or update the resource. oc apply -f KAFKA-CONFIG-FILE ClusterIP type services are created for each Kafka broker, as well as an external bootstrap service . The services route the traffic from the OpenShift Routes to the Kafka brokers. An OpenShift Route resource is also created for each service to expose them using the HAProxy load balancer. DNS addresses used for connection are propagated to the status of each service. The cluster CA certificate to verify the identity of the kafka brokers is also created with the same name as the Kafka resource. Retrieve the address of the bootstrap service you can use to access the Kafka cluster from the status of the Kafka resource. oc get kafka KAFKA-CLUSTER-NAME -o=jsonpath='{.status.listeners[?(@.type=="external")].bootstrapServers}{"\n"}' Extract the public certificate of the broker certification authority. oc get secret KAFKA-CLUSTER-NAME -cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt Use the extracted certificate in your Kafka client to configure TLS connection. If you enabled any authentication, you will also need to configure SASL or TLS authentication.
[ "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # listeners: - name: external port: 9094 type: nodeport tls: true authentication: type: tls # # zookeeper: #", "apply -f KAFKA-CONFIG-FILE", "get kafka KAFKA-CLUSTER-NAME -o=jsonpath='{.status.listeners[?(@.type==\"external\")].bootstrapServers}{\"\\n\"}'", "get secret KAFKA-CLUSTER-NAME -cluster-ca-cert -o jsonpath='{.data.ca\\.crt}' | base64 -d > ca.crt", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # listeners: - name: external port: 9094 type: loadbalancer tls: true # # zookeeper: #", "apply -f KAFKA-CONFIG-FILE", "get kafka KAFKA-CLUSTER-NAME -o=jsonpath='{.status.listeners[?(@.type==\"external\")].bootstrapServers}{\"\\n\"}'", "get secret KAFKA-CLUSTER-NAME -cluster-ca-cert -o jsonpath='{.data.ca\\.crt}' | base64 -d > ca.crt", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # listeners: - name: external port: 9094 type: ingress tls: true authentication: type: tls configuration: 1 bootstrap: host: bootstrap.myingress.com brokers: - broker: 0 host: broker-0.myingress.com - broker: 1 host: broker-1.myingress.com - broker: 2 host: broker-2.myingress.com # zookeeper: #", "apply -f KAFKA-CONFIG-FILE", "get secret KAFKA-CLUSTER-NAME -cluster-ca-cert -o jsonpath='{.data.ca\\.crt}' | base64 -d > ca.crt", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: labels: app: my-cluster name: my-cluster namespace: myproject spec: kafka: # listeners: - name: listener1 port: 9094 type: route tls: true # # zookeeper: #", "apply -f KAFKA-CONFIG-FILE", "get kafka KAFKA-CLUSTER-NAME -o=jsonpath='{.status.listeners[?(@.type==\"external\")].bootstrapServers}{\"\\n\"}'", "get secret KAFKA-CLUSTER-NAME -cluster-ca-cert -o jsonpath='{.data.ca\\.crt}' | base64 -d > ca.crt" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q2/html/using_amq_streams_on_openshift/assembly-configuring-external-listeners-str
Chapter 4. Configuring identity providers
Chapter 4. Configuring identity providers After your OpenShift Dedicated cluster is created, you must configure identity providers to determine how users log in to access the cluster. 4.1. Understanding identity providers OpenShift Dedicated includes a built-in OAuth server. Developers and administrators obtain OAuth access tokens to authenticate themselves to the API. As an administrator, you can configure OAuth to specify an identity provider after you install your cluster. Configuring identity providers allows users to log in and access the cluster. 4.1.1. Supported identity providers You can configure the following types of identity providers: Identity provider Description GitHub or GitHub Enterprise Configure a GitHub identity provider to validate usernames and passwords against GitHub or GitHub Enterprise's OAuth authentication server. GitLab Configure a GitLab identity provider to use GitLab.com or any other GitLab instance as an identity provider. Google Configure a Google identity provider using Google's OpenID Connect integration . LDAP Configure an LDAP identity provider to validate usernames and passwords against an LDAPv3 server, using simple bind authentication. OpenID Connect Configure an OpenID Connect (OIDC) identity provider to integrate with an OIDC identity provider using an Authorization Code Flow . htpasswd Configure an htpasswd identity provider for a single, static administration user. You can log in to the cluster as the user to troubleshoot issues. Important The htpasswd identity provider option is included only to enable the creation of a single, static administration user. htpasswd is not supported as a general-use identity provider for OpenShift Dedicated. For the steps to configure the single user, see Configuring an htpasswd identity provider . 4.1.2. Identity provider parameters The following parameters are common to all identity providers: Parameter Description name The provider name is prefixed to provider user names to form an identity name. mappingMethod Defines how new identities are mapped to users when they log in. Enter one of the following values: claim The default value. Provisions a user with the identity's preferred user name. Fails if a user with that user name is already mapped to another identity. lookup Looks up an existing identity, user identity mapping, and user, but does not automatically provision users or identities. This allows cluster administrators to set up identities and users manually, or using an external process. Using this method requires you to manually provision users. add Provisions a user with the identity's preferred user name. If a user with that user name already exists, the identity is mapped to the existing user, adding to any existing identity mappings for the user. Required when multiple identity providers are configured that identify the same set of users and map to the same user names. Note When adding or changing identity providers, you can map identities from the new provider to existing users by setting the mappingMethod parameter to add . 4.2. Configuring a GitHub identity provider Configure a GitHub identity provider to validate user names and passwords against GitHub or GitHub Enterprise's OAuth authentication server and access your OpenShift Dedicated cluster. OAuth facilitates a token exchange flow between OpenShift Dedicated and GitHub or GitHub Enterprise. Warning Configuring GitHub authentication allows users to log in to OpenShift Dedicated with their GitHub credentials. 
To prevent anyone with any GitHub user ID from logging in to your OpenShift Dedicated cluster, you must restrict access to only those in specific GitHub organizations or teams. Prerequisites The OAuth application must be created directly within the GitHub organization settings by the GitHub organization administrator. GitHub organizations or teams are set up in your GitHub account. Procedure From OpenShift Cluster Manager , navigate to the Cluster List page and select the cluster that you need to configure identity providers for. Click the Access control tab. Click Add identity provider . Note You can also click the Add Oauth configuration link in the warning message displayed after cluster creation to configure your identity providers. Select GitHub from the drop-down menu. Enter a unique name for the identity provider. This name cannot be changed later. An OAuth callback URL is automatically generated in the provided field. You will use this to register the GitHub application. For example: Register an application on GitHub . Return to OpenShift Dedicated and select a mapping method from the drop-down menu. Claim is recommended in most cases. Enter the Client ID and Client secret provided by GitHub. Enter a hostname . A hostname must be entered when using a hosted instance of GitHub Enterprise. Optional: You can use a certificate authority (CA) file to validate server certificates for the configured GitHub Enterprise URL. Click Browse to locate and attach a CA file to the identity provider. Select Use organizations or Use teams to restrict access to a particular GitHub organization or a GitHub team. Enter the name of the organization or team you would like to restrict access to. Click Add more to specify multiple organizations or teams that users can be a member of. Click Confirm . Verification The configured identity provider is now visible on the Access control tab of the Cluster List page. 4.3. Configuring a GitLab identity provider Configure a GitLab identity provider to use GitLab.com or any other GitLab instance as an identity provider. Prerequisites If you use GitLab version 7.7.0 to 11.0, you connect using the OAuth integration . If you use GitLab version 11.1 or later, you can use OpenID Connect (OIDC) to connect instead of OAuth. Procedure From OpenShift Cluster Manager , navigate to the Cluster List page and select the cluster that you need to configure identity providers for. Click the Access control tab. Click Add identity provider . Note You can also click the Add Oauth configuration link in the warning message displayed after cluster creation to configure your identity providers. Select GitLab from the drop-down menu. Enter a unique name for the identity provider. This name cannot be changed later. An OAuth callback URL is automatically generated in the provided field. You will provide this URL to GitLab. For example: Add a new application in GitLab . Return to OpenShift Dedicated and select a mapping method from the drop-down menu. Claim is recommended in most cases. Enter the Client ID and Client secret provided by GitLab. Enter the URL of your GitLab provider. Optional: You can use a certificate authority (CA) file to validate server certificates for the configured GitLab URL. Click Browse to locate and attach a CA file to the identity provider. Click Confirm . Verification The configured identity provider is now visible on the Access control tab of the Cluster List page. 4.4. 
Configuring a Google identity provider Configure a Google identity provider to allow users to authenticate with their Google credentials. Warning Using Google as an identity provider allows any Google user to authenticate to your server. You can limit authentication to members of a specific hosted domain with the hostedDomain configuration attribute. Procedure From OpenShift Cluster Manager , navigate to the Cluster List page and select the cluster that you need to configure identity providers for. Click the Access control tab. Click Add identity provider . Note You can also click the Add Oauth configuration link in the warning message displayed after cluster creation to configure your identity providers. Select Google from the drop-down menu. Enter a unique name for the identity provider. This name cannot be changed later. An OAuth callback URL is automatically generated in the provided field. You will provide this URL to Google. For example: Configure a Google identity provider using Google's OpenID Connect integration . Return to OpenShift Dedicated and select a mapping method from the drop-down menu. Claim is recommended in most cases. Enter the Client ID of a registered Google project and the Client secret issued by Google. Enter a hosted domain to restrict users to a Google Apps domain. Click Confirm . Verification The configured identity provider is now visible on the Access control tab of the Cluster List page. 4.5. Configuring an LDAP identity provider Configure the LDAP identity provider to validate user names and passwords against an LDAPv3 server, using simple bind authentication. Prerequisites When configuring an LDAP identity provider, you will need to enter a configured LDAP URL . The configured URL is an RFC 2255 URL, which specifies the LDAP host and search parameters to use. The syntax of the URL is: URL component Description ldap For regular LDAP, use the string ldap . For secure LDAP (LDAPS), use ldaps instead. host:port The name and port of the LDAP server. Defaults to localhost:389 for ldap and localhost:636 for LDAPS. basedn The DN of the branch of the directory where all searches should start from. At the very least, this must be the top of your directory tree, but it could also specify a subtree in the directory. attribute The attribute to search for. Although RFC 2255 allows a comma-separated list of attributes, only the first attribute will be used, no matter how many are provided. If no attributes are provided, the default is to use uid . It is recommended to choose an attribute that will be unique across all entries in the subtree you will be using. scope The scope of the search. Can be either one or sub . If the scope is not provided, the default is to use a scope of sub . filter A valid LDAP search filter. If not provided, defaults to (objectClass=*) . When doing searches, the attribute, filter, and provided user name are combined to create a search filter that looks like: Important If the LDAP directory requires authentication to search, specify a bindDN and bindPassword to use to perform the entry search. Procedure From OpenShift Cluster Manager , navigate to the Cluster List page and select the cluster that you need to configure identity providers for. Click the Access control tab. Click Add identity provider . Note You can also click the Add Oauth configuration link in the warning message displayed after cluster creation to configure your identity providers. Select LDAP from the drop-down menu. Enter a unique name for the identity provider.
This name cannot be changed later. Select a mapping method from the drop-down menu. Claim is recommended in most cases. Enter an LDAP URL to specify the LDAP search parameters to use. Optional: Enter a Bind DN and Bind password . Enter the attributes that will map LDAP attributes to identities. Enter an ID attribute whose value should be used as the user ID. Click Add more to add multiple ID attributes. Optional: Enter a Preferred username attribute whose value should be used as the display name. Click Add more to add multiple preferred username attributes. Optional: Enter an Email attribute whose value should be used as the email address. Click Add more to add multiple email attributes. Optional: Click Show advanced Options to add a certificate authority (CA) file to your LDAP identity provider to validate server certificates for the configured URL. Click Browse to locate and attach a CA file to the identity provider. Optional: Under the advanced options, you can choose to make the LDAP provider Insecure . If you select this option, a CA file cannot be used. Important If you are using an insecure LDAP connection (ldap:// or port 389), then you must check the Insecure option in the configuration wizard. Click Confirm . Verification The configured identity provider is now visible on the Access control tab of the Cluster List page. 4.6. Configuring an OpenID identity provider Configure an OpenID identity provider to integrate with an OpenID Connect identity provider using an Authorization Code Flow . Important The Authentication Operator in OpenShift Dedicated requires that the configured OpenID Connect identity provider implements the OpenID Connect Discovery specification. Claims are read from the JWT id_token returned from the OpenID identity provider and, if specified, from the JSON returned by the Issuer URL. At least one claim must be configured to use as the user's identity. You can also indicate which claims to use as the user's preferred user name, display name, and email address. If multiple claims are specified, the first one with a non-empty value is used. The standard claims are: Claim Description preferred_username The preferred user name when provisioning a user. A shorthand name that the user wants to be referred to as, such as janedoe . Typically a value corresponding to the user's login or username in the authentication system, such as username or email. email Email address. name Display name. See the OpenID claims documentation for more information. Prerequisites Before you configure OpenID Connect, check the installation prerequisites for any Red Hat product or service you want to use with your OpenShift Dedicated cluster. Procedure From OpenShift Cluster Manager , navigate to the Cluster List page and select the cluster that you need to configure identity providers for. Click the Access control tab. Click Add identity provider . Note You can also click the Add Oauth configuration link in the warning message displayed after cluster creation to configure your identity providers. Select OpenID from the drop-down menu. Enter a unique name for the identity provider. This name cannot be changed later. An OAuth callback URL is automatically generated in the provided field. For example: Register a new OpenID Connect client in the OpenID identity provider by following the steps to create an authorization request . Return to OpenShift Dedicated and select a mapping method from the drop-down menu. Claim is recommended in most cases.
Enter a Client ID and Client secret provided from OpenID. Enter an Issuer URL . This is the URL that the OpenID provider asserts as the Issuer Identifier. It must use the https scheme with no URL query parameters or fragments. Enter an Email attribute whose value should be used as the email address. Click Add more to add multiple email attributes. Enter a Name attribute whose value should be used as the preferred username. Click Add more to add multiple preferred usernames. Enter a Preferred username attribute whose value should be used as the display name. Click Add more to add multiple display names. Optional: Click Show advanced Options to add a certificate authority (CA) file to your OpenID identity provider. Optional: Under the advanced options, you can add Additional scopes . By default, the OpenID scope is requested. Click Confirm . Verification The configured identity provider is now visible on the Access control tab of the Cluster List page. 4.7. Configuring an htpasswd identity provider Configure an htpasswd identity provider to create a single, static user with cluster administration privileges. You can log in to your cluster as the user to troubleshoot issues. Important The htpasswd identity provider option is included only to enable the creation of a single, static administration user. htpasswd is not supported as a general-use identity provider for OpenShift Dedicated. Procedure From OpenShift Cluster Manager , navigate to the Cluster List page and select your cluster. Select Access control Identity providers . Click Add identity provider . Select HTPasswd from the Identity Provider drop-down menu. Add a unique name in the Name field for the identity provider. Use the suggested username and password for the static user, or create your own. Note The credentials defined in this step are not visible after you select Add in the following step. If you lose the credentials, you must recreate the identity provider and define the credentials again. Select Add to create the htpasswd identity provider and the single, static user. Grant the static user permission to manage the cluster: Under Access control Cluster Roles and Access , select Add user . Enter the User ID of the static user that you created in the preceding step. Select Add user to grant the administration privileges to the user. Verification The configured htpasswd identity provider is visible on the Access control Identity providers page. Note After creating the identity provider, synchronization usually completes within two minutes. You can log in to the cluster as the user after the htpasswd identity provider becomes available. The single, administrative user is visible on the Access control Cluster Roles and Access page. The administration group membership of the user is also displayed. 4.8. Accessing your cluster After you have configured your identity providers, users can access the cluster from Red Hat OpenShift Cluster Manager. Prerequisites You logged in to OpenShift Cluster Manager . You created an OpenShift Dedicated cluster. You configured an identity provider for your cluster. You added your user account to the configured identity provider. Procedure From OpenShift Cluster Manager , click on the cluster you want to access. Click Open Console . Click on your identity provider and provide your credentials to log into the cluster. Click Open console to open the web console for your cluster. Click on your identity provider and provide your credentials to log in to the cluster. 
Complete any authorization requests that are presented by your provider.
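As a recap of the LDAP behavior described in Section 4.5, the identity provider binds to the directory and searches for the user entry with a filter built from the URL components. The standalone JNDI sketch below only illustrates that flow; it is not part of the OpenShift Dedicated configuration, and the host, base DN, bind DN, credentials, and user name are placeholder assumptions.

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;

public class LdapBindCheck {
    public static void main(String[] args) throws Exception {
        String providerUrl = "ldaps://ldap.example.com:636";   // host:port from the LDAP URL
        String baseDn = "ou=users,dc=example,dc=com";          // basedn
        String attribute = "uid";                              // attribute (defaults to uid)
        String filter = "(objectClass=*)";                     // filter (defaults to (objectClass=*))
        String username = "jdoe";                              // user name supplied at login

        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, providerUrl);            // CA must be trusted by the JVM for ldaps
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, "cn=search,dc=example,dc=com"); // bindDN
        env.put(Context.SECURITY_CREDENTIALS, "bind-password");             // bindPassword

        // The entry search combines the filter, attribute, and user name into
        // (&(<filter>)(<attribute>=<username>)), as described in Section 4.5.
        String searchFilter = String.format("(&%s(%s=%s))", filter, attribute, username);

        SearchControls controls = new SearchControls();
        controls.setSearchScope(SearchControls.SUBTREE_SCOPE); // scope "sub"

        DirContext ctx = new InitialDirContext(env);
        NamingEnumeration<SearchResult> results = ctx.search(baseDn, searchFilter, controls);
        System.out.println("Entry found: " + results.hasMore());
        ctx.close();
    }
}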
[ "https://oauth-openshift.apps.<cluster_name>.<cluster_domain>/oauth2callback/<idp_provider_name>", "https://oauth-openshift.apps.openshift-cluster.example.com/oauth2callback/github", "https://oauth-openshift.apps.<cluster_name>.<cluster_domain>/oauth2callback/<idp_provider_name>", "https://oauth-openshift.apps.openshift-cluster.example.com/oauth2callback/gitlab", "https://oauth-openshift.apps.<cluster_name>.<cluster_domain>/oauth2callback/<idp_provider_name>", "https://oauth-openshift.apps.openshift-cluster.example.com/oauth2callback/google", "ldap://host:port/basedn?attribute?scope?filter", "(&(<filter>)(<attribute>=<username>))", "https://oauth-openshift.apps.<cluster_name>.<cluster_domain>/oauth2callback/<idp_provider_name>", "https://oauth-openshift.apps.openshift-cluster.example.com/oauth2callback/openid" ]
https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/authentication_and_authorization/sd-configuring-identity-providers
Chapter 58. Implementing the Interceptors Processing Logic
Chapter 58. Implementing the Interceptors Processing Logic Abstract Interceptors are straightforward to implement. The bulk of their processing logic is in the handleMessage() method. This method receives the message data and manipulates it as needed. Developers may also want to add some special logic to handle fault processing cases. 58.1. Interceptor Flow Figure 58.1, "Flow through an interceptor" shows the process flow through an interceptor. Figure 58.1. Flow through an interceptor In normal message processing, only the handleMessage() method is called. The handleMessage() method is where the interceptor's message processing logic is placed. If an error occurs in the handleMessage() method of the interceptor, or any subsequent interceptor in the interceptor chain, the handleFault() method is called. The handleFault() method is useful for cleaning up after an interceptor in the event of an error. It can also be used to alter the fault message. 58.2. Processing messages Overview In normal message processing, an interceptor's handleMessage() method is invoked. It receives the message data as a Message object. Along with the actual contents of the message, the Message object may contain a number of properties related to the message or the message processing state. The exact contents of the Message object depend on the interceptors preceding the current interceptor in the chain. Getting the message contents The Message interface provides two methods that can be used to extract the message contents: public <T> T getContent(java.lang.Class<T> format); The getContent() method returns the content of the message in an object of the specified class. If the contents are not available as an instance of the specified class, null is returned. The list of available content types is determined by the interceptor's location on the interceptor chain and the direction of the interceptor chain. public Collection<Attachment> getAttachments(); The getAttachments() method returns a Java Collection object containing any binary attachments associated with the message. The attachments are stored in org.apache.cxf.message.Attachment objects. Attachment objects provide methods for managing the binary data. Important Attachments are only available after the attachment processing interceptors have executed. Determining the message's direction The direction of a message can be determined by querying the message exchange. The message exchange stores the inbound message and the outbound message in separate properties. [3] The message exchange associated with a message is retrieved using the message's getExchange() method. As shown in Example 58.1, "Getting the message exchange" , getExchange() does not take any parameters and returns the message exchange as an org.apache.cxf.message.Exchange object. Example 58.1. Getting the message exchange Exchange getExchange(); The Exchange object has four methods, shown in Example 58.2, "Getting messages from a message exchange" , for getting the messages associated with an exchange. Each method either returns the message as an org.apache.cxf.message.Message object or returns null if the message does not exist. Example 58.2. Getting messages from a message exchange Message getInMessage(); Message getInFaultMessage(); Message getOutMessage(); Message getOutFaultMessage(); Example 58.3, "Checking the direction of a message chain" shows code for determining if the current message is outbound.
The method gets the message exchange and checks to see if the current message is the same as the exchange's outbound message. It also checks the current message against the exchange's outbound fault message, to catch error messages on the outbound fault interceptor chain. Example 58.3. Checking the direction of a message chain Example Example 58.4, "Example message processing method" shows code for an interceptor that processes zip-compressed messages. It checks the direction of the message and then performs the appropriate actions. Example 58.4. Example message processing method 58.3. Unwinding after an error Overview When an error occurs during the execution of an interceptor chain, the runtime stops traversing the interceptor chain and unwinds the chain by calling the handleFault() method of any interceptors in the chain that have already been executed. The handleFault() method can be used to clean up any resources used by an interceptor during normal message processing. It can also be used to roll back any actions that should only stand if message processing completes successfully. In cases where the fault message will be passed on to an outbound fault processing interceptor chain, the handleFault() method can also be used to add information to the fault message. Getting the message payload The handleFault() method receives the same Message object as the handleMessage() method used in normal message processing. Getting the message contents from the Message object is described in the section called "Getting the message contents" . Example Example 58.5, "Handling an unwinding interceptor chain" shows code used to ensure that the original XML stream is placed back into the message when the interceptor chain is unwound. Example 58.5. Handling an unwinding interceptor chain [3] It also stores inbound and outbound faults separately.
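Pulling these pieces together, the following is a minimal skeleton of a custom interceptor offered as an illustrative sketch rather than an example from this guide; the class name, the choice of the RECEIVE phase, and the logging are assumptions.

import java.io.InputStream;
import org.apache.cxf.interceptor.Fault;
import org.apache.cxf.message.Exchange;
import org.apache.cxf.message.Message;
import org.apache.cxf.phase.AbstractPhaseInterceptor;
import org.apache.cxf.phase.Phase;

public class AuditingInterceptor extends AbstractPhaseInterceptor<Message> {

    public AuditingInterceptor() {
        // The phase passed to the superclass determines where in the chain this interceptor runs.
        super(Phase.RECEIVE);
    }

    @Override
    public void handleMessage(Message message) throws Fault {
        Exchange exchange = message.getExchange();
        boolean outbound = message == exchange.getOutMessage()
                || message == exchange.getOutFaultMessage();

        // Contents are only available in the formats produced by earlier interceptors;
        // at the RECEIVE phase an InputStream is typical.
        InputStream is = message.getContent(InputStream.class);
        System.out.println("direction=" + (outbound ? "out" : "in")
                + ", hasStream=" + (is != null));
    }

    @Override
    public void handleFault(Message message) {
        // Called while the chain unwinds after an error in handleMessage() of this
        // or a later interceptor; clean up or adjust the fault message here.
        System.out.println("Unwinding after a fault for exchange " + message.getExchange());
    }
}

Because handleFault() is only invoked on interceptors that have already run, any state a real interceptor needs to restore here is normally stashed on the message or exchange during handleMessage(), as Example 58.5 does with the original XMLStreamWriter.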
[ "public static boolean isOutbound() { Exchange exchange = message.getExchange(); return message != null && exchange != null && (message == exchange.getOutMessage() || message == exchange.getOutFaultMessage()); }", "import java.io.IOException; import java.io.InputStream; import java.util.zip.GZIPInputStream; import org.apache.cxf.message.Message; import org.apache.cxf.phase.AbstractPhaseInterceptor; import org.apache.cxf.phase.Phase; public class StreamInterceptor extends AbstractPhaseInterceptor<Message> { public void handleMessage(Message message) { boolean isOutbound = false; isOutbound = message == message.getExchange().getOutMessage() || message == message.getExchange().getOutFaultMessage(); if (!isOutbound) { try { InputStream is = message.getContent(InputStream.class); GZIPInputStream zipInput = new GZIPInputStream(is); message.setContent(InputStream.class, zipInput); } catch (IOException ioe) { ioe.printStackTrace(); } } else { // zip the outbound message } } }", "@Override public void handleFault(SoapMessage message) { super.handleFault(message); XMLStreamWriter writer = (XMLStreamWriter)message.get(ORIGINAL_XML_WRITER); if (writer != null) { message.setContent(XMLStreamWriter.class, writer); } }" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/cxfinterceptorimpl
Chapter 2. Driver Toolkit
Chapter 2. Driver Toolkit Learn about the Driver Toolkit and how you can use it as a base image for driver containers for enabling special software and hardware devices on OpenShift Container Platform deployments. 2.1. About the Driver Toolkit Background The Driver Toolkit is a container image in the OpenShift Container Platform payload used as a base image on which you can build driver containers. The Driver Toolkit image includes the kernel packages commonly required as dependencies to build or install kernel modules, as well as a few tools needed in driver containers. The version of these packages will match the kernel version running on the Red Hat Enterprise Linux CoreOS (RHCOS) nodes in the corresponding OpenShift Container Platform release. Driver containers are container images used for building and deploying out-of-tree kernel modules and drivers on container operating systems like RHCOS. Kernel modules and drivers are software libraries running with a high level of privilege in the operating system kernel. They extend the kernel functionalities or provide the hardware-specific code required to control new devices. Examples include hardware devices like Field Programmable Gate Arrays (FPGA) or GPUs, and software-defined storage (SDS) solutions, such as Lustre parallel file systems, which require kernel modules on client machines. Driver containers are the first layer of the software stack used to enable these technologies on Kubernetes. The list of kernel packages in the Driver Toolkit includes the following and their dependencies: kernel-core kernel-devel kernel-headers kernel-modules kernel-modules-extra In addition, the Driver Toolkit also includes the corresponding real-time kernel packages: kernel-rt-core kernel-rt-devel kernel-rt-modules kernel-rt-modules-extra The Driver Toolkit also has several tools that are commonly needed to build and install kernel modules, including: elfutils-libelf-devel kmod binutils kabi-dw kernel-abi-whitelists dependencies for the above Purpose Prior to the Driver Toolkit's existence, users would install kernel packages in a pod or build config on OpenShift Container Platform using entitled builds or by installing from the kernel RPMs in the host's machine-os-content . The Driver Toolkit simplifies the process by removing the entitlement step, and avoids the privileged operation of accessing the machine-os-content in a pod. The Driver Toolkit can also be used by partners who have access to pre-released OpenShift Container Platform versions to prebuild driver containers for their hardware devices for future OpenShift Container Platform releases. The Driver Toolkit is also used by Kernel Module Management (KMM), which is currently available as a community Operator on OperatorHub. KMM supports out-of-tree and third-party kernel drivers and the support software for the underlying operating system. Users can create modules for KMM to build and deploy a driver container, as well as support software like a device plugin or metrics. Modules can include a build config to build a driver container based on the Driver Toolkit, or KMM can deploy a prebuilt driver container. 2.2. Pulling the Driver Toolkit container image The driver-toolkit image is available from the Container images section of the Red Hat Ecosystem Catalog and in the OpenShift Container Platform release payload. The image corresponding to the most recent minor release of OpenShift Container Platform will be tagged with the version number in the catalog.
The image URL for a specific release can be found using the oc adm CLI command. 2.2.1. Pulling the Driver Toolkit container image from registry.redhat.io Instructions for pulling the driver-toolkit image from registry.redhat.io with podman or in OpenShift Container Platform can be found on the Red Hat Ecosystem Catalog . The driver-toolkit image for the latest minor release is tagged with the minor release version on registry.redhat.io , for example: registry.redhat.io/openshift4/driver-toolkit-rhel8:v4.12 . 2.2.2. Finding the Driver Toolkit image URL in the payload Prerequisites You obtained the image pull secret from the Red Hat OpenShift Cluster Manager . You installed the OpenShift CLI ( oc ). Procedure Use the oc adm command to extract the image URL of the driver-toolkit corresponding to a certain release: For an x86 image, enter the following command: $ oc adm release info quay.io/openshift-release-dev/ocp-release:4.12.z-x86_64 --image-for=driver-toolkit For an ARM image, enter the following command: $ oc adm release info quay.io/openshift-release-dev/ocp-release:4.12.z-aarch64 --image-for=driver-toolkit Example output quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fd84aee79606178b6561ac71f8540f404d518ae5deff45f6d6ac8f02636c7f4 Obtain this image by using a valid pull secret, such as the pull secret required to install OpenShift Container Platform: $ podman pull --authfile=path/to/pullsecret.json quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:<SHA> 2.3. Using the Driver Toolkit As an example, the Driver Toolkit can be used as the base image for building a very simple kernel module called simple-kmod . Note The Driver Toolkit includes the necessary dependencies, openssl , mokutil , and keyutils , needed to sign a kernel module. However, in this example, the simple-kmod kernel module is not signed and therefore cannot be loaded on systems with Secure Boot enabled. 2.3.1. Build and run the simple-kmod driver container on a cluster Prerequisites You have a running OpenShift Container Platform cluster. You set the Image Registry Operator state to Managed for your cluster. You installed the OpenShift CLI ( oc ). You are logged into the OpenShift CLI as a user with cluster-admin privileges. Procedure Create a namespace. For example: $ oc new-project simple-kmod-demo The YAML defines an ImageStream for storing the simple-kmod driver container image, and a BuildConfig for building the container. Save this YAML as 0000-buildconfig.yaml.template .
apiVersion: image.openshift.io/v1 kind: ImageStream metadata: labels: app: simple-kmod-driver-container name: simple-kmod-driver-container namespace: simple-kmod-demo spec: {} --- apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: labels: app: simple-kmod-driver-build name: simple-kmod-driver-build namespace: simple-kmod-demo spec: nodeSelector: node-role.kubernetes.io/worker: "" runPolicy: "Serial" triggers: - type: "ConfigChange" - type: "ImageChange" source: dockerfile: | ARG DTK FROM ${DTK} as builder ARG KVER WORKDIR /build/ RUN git clone https://github.com/openshift-psap/simple-kmod.git WORKDIR /build/simple-kmod RUN make all install KVER=${KVER} FROM registry.redhat.io/ubi8/ubi-minimal ARG KVER # Required for installing `modprobe` RUN microdnf install kmod COPY --from=builder /lib/modules/${KVER}/simple-kmod.ko /lib/modules/${KVER}/ COPY --from=builder /lib/modules/${KVER}/simple-procfs-kmod.ko /lib/modules/${KVER}/ RUN depmod ${KVER} strategy: dockerStrategy: buildArgs: - name: KMODVER value: DEMO # $ oc adm release info quay.io/openshift-release-dev/ocp-release:<cluster version>-x86_64 --image-for=driver-toolkit - name: DTK value: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:34864ccd2f4b6e385705a730864c04a40908e57acede44457a783d739e377cae - name: KVER value: 4.18.0-372.26.1.el8_6.x86_64 output: to: kind: ImageStreamTag name: simple-kmod-driver-container:demo Substitute the correct driver toolkit image for the OpenShift Container Platform version you are running in place of "DRIVER_TOOLKIT_IMAGE" with the following commands. $ OCP_VERSION=$(oc get clusterversion/version -ojsonpath={.status.desired.version}) $ DRIVER_TOOLKIT_IMAGE=$(oc adm release info $OCP_VERSION --image-for=driver-toolkit) $ sed "s#DRIVER_TOOLKIT_IMAGE#${DRIVER_TOOLKIT_IMAGE}#" 0000-buildconfig.yaml.template > 0000-buildconfig.yaml Create the image stream and build config with: $ oc create -f 0000-buildconfig.yaml After the builder pod completes successfully, deploy the driver container image as a DaemonSet . The driver container must run with the privileged security context in order to load the kernel modules on the host. The following YAML file contains the RBAC rules and the DaemonSet for running the driver container. Save this YAML as 1000-drivercontainer.yaml .
apiVersion: v1 kind: ServiceAccount metadata: name: simple-kmod-driver-container --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: simple-kmod-driver-container rules: - apiGroups: - security.openshift.io resources: - securitycontextconstraints verbs: - use resourceNames: - privileged --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: simple-kmod-driver-container roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: simple-kmod-driver-container subjects: - kind: ServiceAccount name: simple-kmod-driver-container userNames: - system:serviceaccount:simple-kmod-demo:simple-kmod-driver-container --- apiVersion: apps/v1 kind: DaemonSet metadata: name: simple-kmod-driver-container spec: selector: matchLabels: app: simple-kmod-driver-container template: metadata: labels: app: simple-kmod-driver-container spec: serviceAccount: simple-kmod-driver-container serviceAccountName: simple-kmod-driver-container containers: - image: image-registry.openshift-image-registry.svc:5000/simple-kmod-demo/simple-kmod-driver-container:demo name: simple-kmod-driver-container imagePullPolicy: Always command: [sleep, infinity] lifecycle: postStart: exec: command: ["modprobe", "-v", "-a" , "simple-kmod", "simple-procfs-kmod"] preStop: exec: command: ["modprobe", "-r", "-a" , "simple-kmod", "simple-procfs-kmod"] securityContext: privileged: true nodeSelector: node-role.kubernetes.io/worker: "" Create the RBAC rules and daemon set: $ oc create -f 1000-drivercontainer.yaml After the pods are running on the worker nodes, verify that the simple_kmod kernel module is loaded successfully on the host machines with lsmod . Verify that the pods are running: $ oc get pod -n simple-kmod-demo Example output NAME READY STATUS RESTARTS AGE simple-kmod-driver-build-1-build 0/1 Completed 0 6m simple-kmod-driver-container-b22fd 1/1 Running 0 40s simple-kmod-driver-container-jz9vn 1/1 Running 0 40s simple-kmod-driver-container-p45cc 1/1 Running 0 40s Execute the lsmod command in the driver container pod: $ oc exec -it pod/simple-kmod-driver-container-p45cc -- lsmod | grep simple Example output simple_procfs_kmod 16384 0 simple_kmod 16384 0 2.4. Additional resources For more information about configuring registry storage for your cluster, see Image Registry Operator in OpenShift Container Platform .
[ "oc adm release info quay.io/openshift-release-dev/ocp-release:4.12.z-x86_64 --image-for=driver-toolkit", "oc adm release info quay.io/openshift-release-dev/ocp-release:4.12.z-aarch64 --image-for=driver-toolkit", "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fd84aee79606178b6561ac71f8540f404d518ae5deff45f6d6ac8f02636c7f4", "podman pull --authfile=path/to/pullsecret.json quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:<SHA>", "oc new-project simple-kmod-demo", "apiVersion: image.openshift.io/v1 kind: ImageStream metadata: labels: app: simple-kmod-driver-container name: simple-kmod-driver-container namespace: simple-kmod-demo spec: {} --- apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: labels: app: simple-kmod-driver-build name: simple-kmod-driver-build namespace: simple-kmod-demo spec: nodeSelector: node-role.kubernetes.io/worker: \"\" runPolicy: \"Serial\" triggers: - type: \"ConfigChange\" - type: \"ImageChange\" source: dockerfile: | ARG DTK FROM USD{DTK} as builder ARG KVER WORKDIR /build/ RUN git clone https://github.com/openshift-psap/simple-kmod.git WORKDIR /build/simple-kmod RUN make all install KVER=USD{KVER} FROM registry.redhat.io/ubi8/ubi-minimal ARG KVER # Required for installing `modprobe` RUN microdnf install kmod COPY --from=builder /lib/modules/USD{KVER}/simple-kmod.ko /lib/modules/USD{KVER}/ COPY --from=builder /lib/modules/USD{KVER}/simple-procfs-kmod.ko /lib/modules/USD{KVER}/ RUN depmod USD{KVER} strategy: dockerStrategy: buildArgs: - name: KMODVER value: DEMO # USD oc adm release info quay.io/openshift-release-dev/ocp-release:<cluster version>-x86_64 --image-for=driver-toolkit - name: DTK value: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:34864ccd2f4b6e385705a730864c04a40908e57acede44457a783d739e377cae - name: KVER value: 4.18.0-372.26.1.el8_6.x86_64 output: to: kind: ImageStreamTag name: simple-kmod-driver-container:demo", "OCP_VERSION=USD(oc get clusterversion/version -ojsonpath={.status.desired.version})", "DRIVER_TOOLKIT_IMAGE=USD(oc adm release info USDOCP_VERSION --image-for=driver-toolkit)", "sed \"s#DRIVER_TOOLKIT_IMAGE#USD{DRIVER_TOOLKIT_IMAGE}#\" 0000-buildconfig.yaml.template > 0000-buildconfig.yaml", "oc create -f 0000-buildconfig.yaml", "apiVersion: v1 kind: ServiceAccount metadata: name: simple-kmod-driver-container --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: simple-kmod-driver-container rules: - apiGroups: - security.openshift.io resources: - securitycontextconstraints verbs: - use resourceNames: - privileged --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: simple-kmod-driver-container roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: simple-kmod-driver-container subjects: - kind: ServiceAccount name: simple-kmod-driver-container userNames: - system:serviceaccount:simple-kmod-demo:simple-kmod-driver-container --- apiVersion: apps/v1 kind: DaemonSet metadata: name: simple-kmod-driver-container spec: selector: matchLabels: app: simple-kmod-driver-container template: metadata: labels: app: simple-kmod-driver-container spec: serviceAccount: simple-kmod-driver-container serviceAccountName: simple-kmod-driver-container containers: - image: image-registry.openshift-image-registry.svc:5000/simple-kmod-demo/simple-kmod-driver-container:demo name: simple-kmod-driver-container imagePullPolicy: Always command: [sleep, infinity] lifecycle: postStart: exec: command: [\"modprobe\", \"-v\", \"-a\" , \"simple-kmod\", \"simple-procfs-kmod\"] preStop: exec: 
command: [\"modprobe\", \"-r\", \"-a\" , \"simple-kmod\", \"simple-procfs-kmod\"] securityContext: privileged: true nodeSelector: node-role.kubernetes.io/worker: \"\"", "oc create -f 1000-drivercontainer.yaml", "oc get pod -n simple-kmod-demo", "NAME READY STATUS RESTARTS AGE simple-kmod-driver-build-1-build 0/1 Completed 0 6m simple-kmod-driver-container-b22fd 1/1 Running 0 40s simple-kmod-driver-container-jz9vn 1/1 Running 0 40s simple-kmod-driver-container-p45cc 1/1 Running 0 40s", "oc exec -it pod/simple-kmod-driver-container-p45cc -- lsmod | grep simple", "simple_procfs_kmod 16384 0 simple_kmod 16384 0" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/specialized_hardware_and_driver_enablement/driver-toolkit