Chapter 8. Adding the Red Hat Ceph Storage Dashboard to an overcloud deployment | Chapter 8. Adding the Red Hat Ceph Storage Dashboard to an overcloud deployment Red Hat Ceph Storage Dashboard is disabled by default, but you can enable it in your overcloud with the Red Hat OpenStack Platform director. The Ceph Dashboard is a built-in, web-based Ceph management and monitoring application that administers various aspects and objects in your cluster. Red Hat Ceph Storage Dashboard comprises the following components: The Ceph Dashboard manager module provides the user interface and embeds the platform front end, Grafana. Prometheus, the monitoring plugin. Alertmanager sends alerts to the Dashboard. Node Exporters export cluster data to the Dashboard. Note This feature is supported with Ceph Storage 4.1 or later. For more information about how to determine the version of Ceph Storage installed on your system, see Red Hat Ceph Storage releases and corresponding Ceph package versions . Note The Red Hat Ceph Storage Dashboard is always colocated on the same nodes as the other Ceph manager components. Note If you want to add Ceph Dashboard during your initial overcloud deployment, complete the procedures in this chapter before you deploy your initial overcloud in Section 7.2, "Initiating overcloud deployment" . The following diagram shows the architecture of Ceph Dashboard on Red Hat OpenStack Platform: For more information about the Dashboard and its features and limitations, see Dashboard features in the Red Hat Ceph Storage Dashboard Guide . TLS everywhere with Ceph Dashboard The Dashboard front end is fully integrated with the TLS everywhere framework. You can enable TLS everywhere provided that you have the required environment files and they are included in the overcloud deploy command. This triggers the certificate request for both Grafana and the Ceph Dashboard, and the generated certificate and key files are passed to ceph-ansible during the overcloud deployment. For instructions and more information about how to enable TLS for the Dashboard as well as for other OpenStack services, see the following locations in the Advanced Overcloud Customization guide: Enabling SSL/TLS on Overcloud Public Endpoints . Enabling SSL/TLS on Internal and Public Endpoints with Identity Management . Note The port to reach the Ceph Dashboard remains the same even in the TLS-everywhere context. 8.1. Including the necessary containers for the Ceph Dashboard Before you can add the Ceph Dashboard templates to your overcloud, you must include the necessary containers by using the containers-prepare-parameter.yaml file. To generate the containers-prepare-parameter.yaml file to prepare your container images, complete the following steps: Procedure Log in to your undercloud host as the stack user. Generate the default container image preparation file: Edit the containers-prepare-parameter.yaml file and make the modifications to suit your requirements. The following example containers-prepare-parameter.yaml file contains the image locations and tags related to the Dashboard services, including Grafana, Prometheus, Alertmanager, and Node Exporter. Edit the values depending on your specific scenario: For more information about registry and image configuration with the containers-prepare-parameter.yaml file, see Container image preparation parameters in the Transitioning to Containerized Services guide. 8.2. Deploying Ceph Dashboard Include the ceph-dashboard environment file to deploy the Ceph Dashboard. 
Note If you want to deploy Ceph Dashboard with a composable network, see Section 8.3, "Deploying Ceph Dashboard with a composable network" . Note The Ceph Dashboard admin user role is set to read-only mode by default. To change the Ceph Dashboard admin default mode, see Section 8.4, "Changing the default permissions" . Procedure Log in to the undercloud node as the stack user. Optional: The Ceph Dashboard network is set by default to the provisioning network. If you want to deploy the Ceph Dashboard and access it through a different network, create an environment file, for example: ceph_dashboard_network_override.yaml . Set CephDashboardNetwork to one of the existing overcloud routed networks, for example external : Important Changing the CephDashboardNetwork value to access the Ceph Dashboard from a different network is not supported after the initial deployment. Include the following environment files in the openstack overcloud deploy command. Include all environment files that are part of your deployment, and the ceph_dashboard_network_override.yaml file if you chose to change the default network: Replace <overcloud_environment_files> with the list of environment files that are part of your deployment. Result The resulting deployment comprises an external stack with the grafana, prometheus, alertmanager, and node-exporter containers. The Ceph Dashboard manager module is the back end for this stack, and it embeds the grafana layouts to provide Ceph cluster-specific metrics to the end users. 8.3. Deploying Ceph Dashboard with a composable network You can deploy the Ceph Dashboard on a composable network instead of on the default Provisioning network. This eliminates the need to expose the Ceph Dashboard service on the Provisioning network. When you deploy the Dashboard on a composable network, you can also implement separate authorization profiles. You must choose which network to use before you deploy because you can apply the Dashboard to a new network only when you first deploy the overcloud. Use the following procedure to choose a composable network before you deploy. Procedure Log in to the undercloud as the stack user. Generate the Controller-specific role to include the Dashboard composable network: Result A new ControllerStorageDashboard role is generated inside the roles_data.yaml file that is defined as the output of the command. You must include this file in the template list when you use the overcloud deploy command. Note The ControllerStorageDashboard role does not contain CephNFS or network_data_dashboard.yaml . Director provides a network environment file where the composable network is defined. The default location of this file is /usr/share/openstack-tripleo-heat-templates/network_data_dashboard.yaml . You must include this file in the overcloud template list when you use the overcloud deploy command. Include the following environment files, with all environment files that are part of your deployment, in the openstack overcloud deploy command: Replace <overcloud_environment_files> with the list of environment files that are part of your deployment. Result The resulting deployment comprises an external stack with the grafana, prometheus, alertmanager, and node-exporter containers. The Ceph Dashboard manager module is the back end for this stack, and it embeds the grafana layouts to provide Ceph cluster-specific metrics to the end users. 8.4. Changing the default permissions The Ceph Dashboard admin user role is set to read-only mode by default for safe monitoring of the Ceph cluster. 
To permit an admin user to have elevated privileges so that they can alter elements of the Ceph cluster with the Dashboard, you can use the CephDashboardAdminRO parameter to change the default admin permissions. Warning A user with full permissions might alter elements of your cluster that director configures. This can cause a conflict with director-configured options when you run a stack update. To avoid this problem, do not alter director-configured options with Ceph Dashboard, for example, the attributes of Ceph OSP pools. Procedure Log in to the undercloud as the stack user. Create the following ceph_dashboard_admin.yaml environment file: Run the overcloud deploy command to update the existing stack and include the environment file you created with all other environment files that are part of your existing deployment: Replace <existing_overcloud_environment_files> with the list of environment files that are part of your existing deployment. 8.5. Accessing Ceph Dashboard To test that Ceph Dashboard is running correctly, complete the following verification steps to access it and check that the data it displays from the Ceph cluster is correct. Procedure Log in to the undercloud node as the stack user. Retrieve the dashboard admin login credentials: Retrieve the VIP address to access the Ceph Dashboard: Use a web browser to point to the front-end VIP and access the Dashboard. Director configures and exposes the Dashboard on the provisioning network, so you can use the VIP that you retrieved to access the Dashboard directly on TCP port 8444. Ensure that the following conditions are met: The web client host is layer 2 connected to the provisioning network. The provisioning network is properly routed or proxied, and it can be reached from the web client host. If these conditions are not met, you can still open an SSH tunnel to reach the Dashboard VIP on the overcloud: Replace <dashboard_vip> with the IP address of the control plane VIP that you retrieved. To access the Dashboard, go to http://localhost:8444 in a web browser and log in with the following details: The default user that ceph-ansible creates: admin . The password in /var/lib/mistral/overcloud/ceph-ansible/group_vars/all.yml . Results You can access the Ceph Dashboard. The numbers and graphs that the Dashboard displays reflect the same cluster status that the CLI command, ceph -s , returns. For more information about the Red Hat Ceph Storage Dashboard, see the Red Hat Ceph Storage Administration Guide | [
"sudo openstack tripleo container image prepare default --local-push-destination --output-env-file containers-prepare-parameter.yaml",
"parameter_defaults: ContainerImagePrepare: - push_destination: true set: ceph_alertmanager_image: ose-prometheus-alertmanager ceph_alertmanager_namespace: registry.redhat.io/openshift4 ceph_alertmanager_tag: v4.6 ceph_grafana_image: rhceph-4-dashboard-rhel8 ceph_grafana_namespace: registry.redhat.io/rhceph ceph_grafana_tag: 4 ceph_image: rhceph-4-rhel8 ceph_namespace: registry.redhat.io/rhceph ceph_node_exporter_image: ose-prometheus-node-exporter ceph_node_exporter_namespace: registry.redhat.io/openshift4 ceph_node_exporter_tag: v4.6 ceph_prometheus_image: ose-prometheus ceph_prometheus_namespace: registry.redhat.io/openshift4 ceph_prometheus_tag: v4.6 ceph_tag: latest",
"parameter_defaults: ServiceNetMap: CephDashboardNetwork: external",
"openstack overcloud deploy --templates -e <overcloud_environment_files> -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-dashboard.yaml -e ceph_dashboard_network_override.yaml",
"openstack overcloud roles generate -o /home/stack/roles_data_dashboard.yaml ControllerStorageDashboard Compute BlockStorage ObjectStorage CephStorage",
"openstack overcloud deploy --templates -r /home/stack/roles_data.yaml -n /usr/share/openstack-tripleo-heat-templates/network_data_dashboard.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/network-environment.yaml -e <overcloud_environment_files> -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-dashboard.yaml",
"parameter_defaults: CephDashboardAdminRO: false",
"openstack overcloud deploy --templates -e <existing_overcloud_environment_files> -e ceph_dashboard_admin.yml",
"[stack@undercloud ~]USD grep dashboard_admin_password /var/lib/mistral/overcloud/ceph-ansible/group_vars/all.yml",
"[stack@undercloud-0 ~]USD grep dashboard_frontend_vip /var/lib/mistral/overcloud/ceph-ansible/group_vars/all.yml",
"client_hostUSD ssh -L 8444:<dashboard_vip>:8444 stack@<your undercloud>"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/deploying_an_overcloud_with_containerized_red_hat_ceph/adding-ceph-dashboard |
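As an illustration of the Section 8.5 verification flow, the following minimal shell sketch retrieves the Dashboard credentials and VIP from the default ceph-ansible paths shown above and checks that the Dashboard answers on TCP port 8444. It assumes a deployment without TLS everywhere (plain HTTP), and the <dashboard_vip> and <undercloud_host> values are placeholders:

# Run on the undercloud as the stack user.
grep dashboard_admin_password /var/lib/mistral/overcloud/ceph-ansible/group_vars/all.yml
grep dashboard_frontend_vip /var/lib/mistral/overcloud/ceph-ansible/group_vars/all.yml

# Quick reachability check against the front-end VIP (HTTP, because TLS everywhere is not enabled in this sketch).
curl -sI http://<dashboard_vip>:8444 | head -n 1

# If the provisioning network is not reachable from your workstation, tunnel the port
# through the undercloud and browse to http://localhost:8444 instead.
ssh -L 8444:<dashboard_vip>:8444 stack@<undercloud_host>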
Chapter 116. KafkaMirrorMakerSpec schema reference | Chapter 116. KafkaMirrorMakerSpec schema reference Used in: KafkaMirrorMaker Full list of KafkaMirrorMakerSpec schema properties Configures Kafka MirrorMaker. 116.1. include Use the include property to configure a list of topics that Kafka MirrorMaker mirrors from the source to the target Kafka cluster. The property allows any regular expression from the simplest case with a single topic name to complex patterns. For example, you can mirror topics A and B using A|B or all topics using * . You can also pass multiple regular expressions separated by commas to the Kafka MirrorMaker. 116.2. KafkaMirrorMakerConsumerSpec and KafkaMirrorMakerProducerSpec Use the KafkaMirrorMakerConsumerSpec and KafkaMirrorMakerProducerSpec to configure source (consumer) and target (producer) clusters. Kafka MirrorMaker always works together with two Kafka clusters (source and target). To establish a connection, the bootstrap servers for the source and the target Kafka clusters are specified as comma-separated lists of HOSTNAME:PORT pairs. Each comma-separated list contains one or more Kafka brokers or a Service pointing to Kafka brokers specified as a HOSTNAME:PORT pair. 116.3. logging Kafka MirrorMaker has its own configurable logger: mirrormaker.root.logger MirrorMaker uses the Apache log4j logger implementation. Use the logging property to configure loggers and logger levels. You can set the log levels by specifying the logger and level directly (inline) or use a custom (external) ConfigMap. If a ConfigMap is used, you set logging.valueFrom.configMapKeyRef.name property to the name of the ConfigMap containing the external logging configuration. Inside the ConfigMap, the logging configuration is described using log4j.properties . Both logging.valueFrom.configMapKeyRef.name and logging.valueFrom.configMapKeyRef.key properties are mandatory. A ConfigMap using the exact logging configuration specified is created with the custom resource when the Cluster Operator is running, then recreated after each reconciliation. If you do not specify a custom ConfigMap, default logging settings are used. If a specific logger value is not set, upper-level logger settings are inherited for that logger. For more information about log levels, see Apache logging services . Here we see examples of inline and external logging. The inline logging specifies the root logger level. You can also set log levels for specific classes or loggers by adding them to the loggers property. apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker spec: # ... logging: type: inline loggers: mirrormaker.root.logger: INFO log4j.logger.org.apache.kafka.clients.NetworkClient: TRACE log4j.logger.org.apache.kafka.common.network.Selector: DEBUG # ... Note Setting a log level to DEBUG may result in a large amount of log output and may have performance implications. apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker spec: # ... logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: mirror-maker-log4j.properties # ... Garbage collector (GC) Garbage collector logging can also be enabled (or disabled) using the jvmOptions property . 116.4. KafkaMirrorMakerSpec schema properties Property Description version The Kafka MirrorMaker version. Defaults to 3.5.0. Consult the documentation to understand the process required to upgrade or downgrade the version. string replicas The number of pods in the Deployment . integer image The docker image for the pods. 
string consumer Configuration of source cluster. KafkaMirrorMakerConsumerSpec producer Configuration of target cluster. KafkaMirrorMakerProducerSpec resources CPU and memory resources to reserve. For more information, see the external documentation for core/v1 resourcerequirements . ResourceRequirements whitelist The whitelist property has been deprecated, and should now be configured using spec.include . List of topics which are included for mirroring. This option allows any regular expression using Java-style regular expressions. Mirroring two topics named A and B is achieved by using the expression A|B . Or, as a special case, you can mirror all topics using the regular expression * . You can also specify multiple regular expressions separated by commas. string include List of topics which are included for mirroring. This option allows any regular expression using Java-style regular expressions. Mirroring two topics named A and B is achieved by using the expression A|B . Or, as a special case, you can mirror all topics using the regular expression * . You can also specify multiple regular expressions separated by commas. string jvmOptions JVM Options for pods. JvmOptions logging Logging configuration for MirrorMaker. The type depends on the value of the logging.type property within the given object, which must be one of [inline, external]. InlineLogging , ExternalLogging metricsConfig Metrics configuration. The type depends on the value of the metricsConfig.type property within the given object, which must be one of [jmxPrometheusExporter]. JmxPrometheusExporterMetrics tracing The configuration of tracing in Kafka MirrorMaker. The type depends on the value of the tracing.type property within the given object, which must be one of [jaeger, opentelemetry]. JaegerTracing , OpenTelemetryTracing template Template to specify how Kafka MirrorMaker resources, Deployments and Pods , are generated. KafkaMirrorMakerTemplate livenessProbe Pod liveness checking. Probe readinessProbe Pod readiness checking. Probe | [
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker spec: # logging: type: inline loggers: mirrormaker.root.logger: INFO log4j.logger.org.apache.kafka.clients.NetworkClient: TRACE log4j.logger.org.apache.kafka.common.network.Selector: DEBUG #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker spec: # logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: mirror-maker-log4j.properties #"
]
| https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-KafkaMirrorMakerSpec-reference |
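A minimal KafkaMirrorMaker sketch pulls the pieces described above together: the consumer section points at the source cluster, the producer section at the target cluster, and include selects the mirrored topics. The cluster bootstrap addresses, group ID, and topic pattern below are placeholders, not values taken from this reference:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker
metadata:
  name: my-mirror-maker
spec:
  version: 3.5.0
  replicas: 1
  consumer:
    # Source cluster: comma-separated HOSTNAME:PORT bootstrap list
    bootstrapServers: source-cluster-kafka-bootstrap:9092
    groupId: my-mirror-maker-group
  producer:
    # Target cluster
    bootstrapServers: target-cluster-kafka-bootstrap:9092
  # Mirror topics A and B; use "*" to mirror all topics
  include: "topic-a|topic-b"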
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.8/html/configuring_your_red_hat_build_of_quarkus_applications_by_using_a_properties_file/making-open-source-more-inclusive |
7.4. Block I/O Tuning Techniques | 7.4. Block I/O Tuning Techniques This section describes more techniques for tuning block I/O performance in virtualized environments. 7.4.1. Disk I/O Throttling When several virtual machines are running simultaneously, they can interfere with system performance by using excessive disk I/O. Disk I/O throttling in KVM provides the ability to set a limit on disk I/O requests sent from virtual machines to the host machine. This can prevent a virtual machine from over-utilizing shared resources and impacting the performance of other virtual machines. Disk I/O throttling can be useful in various situations, for example when guest virtual machines belonging to different customers are running on the same host, or when quality of service guarantees are given for different guests. Disk I/O throttling can also be used to simulate slower disks. I/O throttling can be applied independently to each block device attached to a guest and supports limits on throughput and I/O operations. Use the virsh blkdeviotune command to set I/O limits for a virtual machine: Device specifies a unique target name ( <target dev='name'/> ) or source file ( <source file='name'/> ) for one of the disk devices attached to the virtual machine. Use the virsh domblklist command for a list of disk device names. Optional parameters include: total-bytes-sec The total throughput limit in bytes per second. read-bytes-sec The read throughput limit in bytes per second. write-bytes-sec The write throughput limit in bytes per second. total-iops-sec The total I/O operations limit per second. read-iops-sec The read I/O operations limit per second. write-iops-sec The write I/O operations limit per second. For example, to throttle vda on virtual_machine to 1000 I/O operations per second and 50 MB per second throughput, run this command: 7.4.2. Multi-Queue virtio-scsi Multi-queue virtio-scsi provides improved storage performance and scalability in the virtio-scsi driver. It enables each virtual CPU to have a separate queue and interrupt to use without affecting other vCPUs. 7.4.2.1. Configuring Multi-Queue virtio-scsi Multi-queue virtio-scsi is disabled by default on Red Hat Enterprise Linux 7. To enable multi-queue virtio-scsi support in the guest, add the following to the guest XML configuration, where N is the total number of vCPU queues: | [
"virsh blkdeviotune virtual_machine device --parameter limit",
"virsh blkdeviotune virtual_machine vda --total-iops-sec 1000 --total-bytes-sec 52428800",
"<controller type='scsi' index='0' model='virtio-scsi'> <driver queues=' N ' /> </controller>"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_tuning_and_optimization_guide/sect-Virtualization_Tuning_Optimization_Guide-BlockIO-Techniques |
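To illustrate the procedure end to end, the following sketch lists a guest's disk devices, applies the throttling example from above, and reads the limits back for verification; the domain and device names are only examples:

# Find the target device names for the guest.
virsh domblklist virtual_machine

# Throttle vda to 1000 I/O operations per second and 50 MB per second throughput.
virsh blkdeviotune virtual_machine vda --total-iops-sec 1000 --total-bytes-sec 52428800

# Calling blkdeviotune without limit options prints the current settings.
virsh blkdeviotune virtual_machine vda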
Chapter 8. Managing virtual machines in the web console | Chapter 8. Managing virtual machines in the web console To manage virtual machines in a graphical interface on a RHEL 9 host, you can use the Virtual Machines pane in the RHEL 9 web console. 8.1. Overview of virtual machine management by using the web console The RHEL 9 web console is a web-based interface for system administration. As one of its features, the web console provides a graphical view of virtual machines (VMs) on the host system, and makes it possible to create, access, and configure these VMs. Note that to use the web console to manage your VMs on RHEL 9, you must first install a web console plug-in for virtualization. Next steps For instructions on enabling VM management in your web console, see Setting up the web console to manage virtual machines . For a comprehensive list of VM management actions that the web console provides, see Virtual machine management features available in the web console . 8.2. Setting up the web console to manage virtual machines Before using the RHEL 9 web console to manage virtual machines (VMs), you must install the web console virtual machine plug-in on the host. Prerequisites You have installed the RHEL 9 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . Procedure Install the cockpit-machines plug-in. Verification Log in to the RHEL 9 web console. For details, see Logging in to the web console . If the installation was successful, Virtual Machines appears in the web console side menu. Additional resources Managing systems by using the RHEL 9 web console 8.3. Renaming virtual machines by using the web console You might need to rename an existing virtual machine (VM) to avoid naming conflicts or to assign a new unique name based on your use case. To rename the VM, you can use the RHEL web console. Prerequisites You have installed the RHEL 9 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . The web console VM plug-in is installed on your system . The VM is shut down. Procedure In the Virtual Machines interface, click the Menu button ... of the VM that you want to rename. A drop-down menu appears with controls for various VM operations. Click Rename . The Rename a VM dialog appears. In the New name field, enter a name for the VM. Click Rename . Verification Check that the new VM name has appeared in the Virtual Machines interface. 8.4. Virtual machine management features available in the web console By using the RHEL 9 web console, you can perform the following actions to manage the virtual machines (VMs) on your system. Table 8.1. 
VM management tasks that you can perform in the RHEL 9 web console Task For details, see Create a VM and install it with a guest operating system Creating virtual machines and installing guest operating systems by using the web console Delete a VM Deleting virtual machines by using the web console Start, shut down, and restart the VM Starting virtual machines by using the web console and Shutting down and restarting virtual machines by using the web console Connect to and interact with a VM using a variety of consoles Interacting with virtual machines by using the web console View a variety of information about the VM Viewing virtual machine information by using the web console Adjust the host memory allocated to a VM Adding and removing virtual machine memory by using the web console Manage network connections for the VM Using the web console for managing virtual machine network interfaces Manage the VM storage available on the host and attach virtual disks to the VM Managing storage for virtual machines by using the web console Configure the virtual CPU settings of the VM Managing virtual CPUs by using the web console Live migrate a VM Live migrating a virtual machine by using the web console Manage host devices Managing host devices by using the web console Manage virtual optical drives Managing virtual optical drives Attach watchdog device Attaching a watchdog device to a virtual machine by using the web console | [
"dnf install cockpit-machines"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_virtualization/managing-virtual-machines-in-the-web-console_configuring-and-managing-virtualization |
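The setup procedure in Section 8.2 can be condensed into a short shell sketch. It assumes the web console packages are available from your enabled RHEL 9 repositories and that you have root privileges; the host name is a placeholder:

# Install the web console and its virtualization plug-in.
sudo dnf install -y cockpit cockpit-machines

# Make sure the web console service is running.
sudo systemctl enable --now cockpit.socket

# Then log in at https://<host>:9090 and check that "Virtual Machines" appears in the side menu.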
Chapter 11. Clair security scanner | Chapter 11. Clair security scanner Clair v4 (Clair) is an open source application that leverages static code analysis for parsing image content and reporting vulnerabilities affecting the content. Clair is packaged with Quay.io, is automatically enabled, and is managed by the Red Hat Quay development team. For Quay.io users, images are automatically indexed after they are pushed to your repository. Reports are then fetched from Clair, which matches images against its CVE database to report security information. This process happens automatically on Quay.io, and manual rescans are not required. 11.1. About Clair Clair uses Common Vulnerability Scoring System (CVSS) data from the National Vulnerability Database (NVD), a United States government repository of security-related information that includes known vulnerabilities and security issues in various software components and systems, to enrich vulnerability data. Using scores from the NVD provides Clair the following benefits: Data synchronization . Clair can periodically synchronize its vulnerability database with the NVD. This ensures that it has the latest vulnerability data. Matching and enrichment . Clair compares the metadata and identifiers of vulnerabilities it discovers in container images with the data from the NVD. This process involves matching the unique identifiers, such as Common Vulnerabilities and Exposures (CVE) IDs, to the entries in the NVD. When a match is found, Clair can enrich its vulnerability information with additional details from NVD, such as severity scores, descriptions, and references. Severity Scores . The NVD assigns severity scores to vulnerabilities, such as the Common Vulnerability Scoring System (CVSS) score, to indicate the potential impact and risk associated with each vulnerability. By incorporating NVD's severity scores, Clair can provide more context on the seriousness of the vulnerabilities it detects. If Clair finds vulnerabilities from NVD, a detailed and standardized assessment of the severity and potential impact of vulnerabilities detected within container images is reported to users on the UI. CVSS enrichment data provides Clair the following benefits: Vulnerability prioritization . By utilizing CVSS scores, users can prioritize vulnerabilities based on their severity, helping them address the most critical issues first. Assess Risk . CVSS scores can help Clair users understand the potential risk a vulnerability poses to their containerized applications. Communicate Severity . CVSS scores provide Clair users a standardized way to communicate the severity of vulnerabilities across teams and organizations. Inform Remediation Strategies . CVSS enrichment data can guide Quay.io users in developing appropriate remediation strategies. Compliance and Reporting . Integrating CVSS data into reports generated by Clair can help organizations demonstrate their commitment to addressing security vulnerabilities and complying with industry standards and regulations. 11.1.1. Clair vulnerability databases Clair uses the following vulnerability databases to report issues in your images: Ubuntu Oval database Debian Security Tracker Red Hat Enterprise Linux (RHEL) Oval database SUSE Oval database Oracle Oval database Alpine SecDB database VMware Photon OS database Amazon Web Services (AWS) UpdateInfo Open Source Vulnerability (OSV) Database 11.1.2. 
Clair supported dependencies Clair supports identifying and managing the following dependencies: Java Golang Python Ruby This means that it can analyze and report on the third-party libraries and packages that a project in these languages relies on to work correctly. When an image that contains packages from a language unsupported by Clair is pushed to your repository, a vulnerability scan cannot be performed on those packages. Users do not receive an analysis or security report for unsupported dependencies or packages. As a result, the following consequences should be considered: Security risks . Dependencies or packages that are not scanned for vulnerabilities might pose security risks to your organization. Compliance issues . If your organization has specific security or compliance requirements, unscanned or partially scanned container images might result in non-compliance with certain regulations. Note Scanned images are indexed, and a vulnerability report is created, but it might omit data from certain unsupported languages. For example, if your container image contains a Lua application, feedback from Clair is not provided because Clair does not detect it. It can detect other languages used in the container image, and shows detected CVEs for those languages. As a result, images are fully scanned based on what is supported by Clair. 11.2. Viewing Clair security scans by using the UI You can view Clair security scans on the UI. Procedure Navigate to a repository and click Tags in the navigation pane. This page shows the results of the security scan. To reveal more information about multi-architecture images, click See Child Manifests to see the list of manifests in extended view. Click a relevant link under See Child Manifests , for example, 1 Unknown to be redirected to the Security Scanner page. The Security Scanner page provides information for the tag, such as which CVEs the image is susceptible to, and what remediation options you might have available. Note Image scanning only lists vulnerabilities found by Clair security scanner. What users do about the vulnerabilities that are uncovered is up to them. 11.3. Clair severity mapping Clair offers a comprehensive approach to vulnerability assessment and management. One of its essential features is the normalization of security databases' severity strings. This process streamlines the assessment of vulnerability severities by mapping them to a predefined set of values. Through this mapping, clients can efficiently react to vulnerability severities without the need to decipher the intricacies of each security database's unique severity strings. These mapped severity strings align with those found within the respective security databases, ensuring consistency and accuracy in vulnerability assessment. 11.3.1. Clair severity strings Clair alerts users with the following severity strings: Unknown Negligible Low Medium High Critical These severity strings are similar to the strings found within the relevant security database. Alpine mapping Alpine SecDB database does not provide severity information. All vulnerability severities will be Unknown. Alpine Severity → Clair Severity: * → Unknown. AWS mapping AWS UpdateInfo database provides severity information. AWS Severity → Clair Severity: low → Low, medium → Medium, important → High, critical → Critical. Debian mapping Debian Oval database provides severity information. 
Debian Severity → Clair Severity: * → Unknown, Unimportant → Low, Low → Low, Medium → Medium, High → High, Critical → Critical. Oracle mapping Oracle Oval database provides severity information. Oracle Severity → Clair Severity: N/A → Unknown, LOW → Low, MODERATE → Medium, IMPORTANT → High, CRITICAL → Critical. RHEL mapping RHEL Oval database provides severity information. RHEL Severity → Clair Severity: None → Unknown, Low → Low, Moderate → Medium, Important → High, Critical → Critical. SUSE mapping SUSE Oval database provides severity information. Severity → Clair Severity: None → Unknown, Low → Low, Moderate → Medium, Important → High, Critical → Critical. Ubuntu mapping Ubuntu Oval database provides severity information. Severity → Clair Severity: Untriaged → Unknown, Negligible → Negligible, Low → Low, Medium → Medium, High → High, Critical → Critical. OSV mapping Table 11.1. CVSSv3 Base Score → Clair Severity: 0.0 → Negligible, 0.1-3.9 → Low, 4.0-6.9 → Medium, 7.0-8.9 → High, 9.0-10.0 → Critical. Table 11.2. CVSSv2 Base Score → Clair Severity: 0.0-3.9 → Low, 4.0-6.9 → Medium, 7.0-10 → High. | null | https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/about_quay_io/clair-vulnerability-scanner |
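For quick local triage, the OSV CVSSv3 thresholds in Table 11.1 can be reproduced in a few lines of shell. This is an illustrative helper only, not part of Clair itself:

# Map a CVSSv3 base score to the Clair severity string, following Table 11.1.
cvss3_to_clair_severity() {
  awk -v s="$1" 'BEGIN {
    if (s == 0)          print "Negligible"
    else if (s <= 3.9)   print "Low"
    else if (s <= 6.9)   print "Medium"
    else if (s <= 8.9)   print "High"
    else                 print "Critical"
  }'
}

cvss3_to_clair_severity 7.5   # prints "High"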
Chapter 1. Installation methods | Chapter 1. Installation methods You can install OpenShift Container Platform on Amazon Web Services (AWS) using installer-provisioned or user-provisioned infrastructure. The default installation type uses installer-provisioned infrastructure, where the installation program provisions the underlying infrastructure for the cluster. You can also install OpenShift Container Platform on infrastructure that you provision. If you do not use infrastructure that the installation program provisions, you must manage and maintain the cluster resources yourself. You can also install OpenShift Container Platform on a single node, which is a specialized installation method that is ideal for edge computing environments. 1.1. Installing a cluster on installer-provisioned infrastructure You can install a cluster on AWS infrastructure that is provisioned by the OpenShift Container Platform installation program, by using one of the following methods: Installing a cluster quickly on AWS : You can install OpenShift Container Platform on AWS infrastructure that is provisioned by the OpenShift Container Platform installation program. You can install a cluster quickly by using the default configuration options. Installing a customized cluster on AWS : You can install a customized cluster on AWS infrastructure that the installation program provisions. The installation program allows for some customization to be applied at the installation stage. Many other customization options are available post-installation . Installing a cluster on AWS with network customizations : You can customize your OpenShift Container Platform network configuration during installation, so that your cluster can coexist with your existing IP address allocations and adhere to your network requirements. Installing a cluster on AWS in a restricted network : You can install OpenShift Container Platform on AWS on installer-provisioned infrastructure by using an internal mirror of the installation release content. You can use this method to install a cluster that does not require an active internet connection to obtain the software components. Installing a cluster on an existing Virtual Private Cloud : You can install OpenShift Container Platform on an existing AWS Virtual Private Cloud (VPC). You can use this installation method if you have constraints set by the guidelines of your company, such as limits when creating new accounts or infrastructure. Installing a private cluster on an existing VPC : You can install a private cluster on an existing AWS VPC. You can use this method to deploy OpenShift Container Platform on an internal network that is not visible to the internet. Installing a cluster on AWS into a government or secret region : OpenShift Container Platform can be deployed into AWS regions that are specifically designed for US government agencies at the federal, state, and local level, as well as contractors, educational institutions, and other US customers that must run sensitive workloads in the cloud. 1.2. Installing a cluster on user-provisioned infrastructure You can install a cluster on AWS infrastructure that you provision, by using one of the following methods: Installing a cluster on AWS infrastructure that you provide : You can install OpenShift Container Platform on AWS infrastructure that you provide. You can use the provided CloudFormation templates to create stacks of AWS resources that represent each of the components required for an OpenShift Container Platform installation. 
Installing a cluster on AWS in a restricted network with user-provisioned infrastructure : You can install OpenShift Container Platform on AWS infrastructure that you provide by using an internal mirror of the installation release content. You can use this method to install a cluster that does not require an active internet connection to obtain the software components. You can also use this installation method to ensure that your clusters only use container images that satisfy your organizational controls on external content. While you can install OpenShift Container Platform by using the mirrored content, your cluster still requires internet access to use the AWS APIs. 1.3. Installing a cluster on a single node Installing OpenShift Container Platform on a single node alleviates some of the requirements for high availability and large scale clusters. However, you must address the requirements for installing on a single node , and the additional requirements for installing single-node OpenShift on a cloud provider . After addressing the requirements for single node installation, use the Installing a customized cluster on AWS procedure to install the cluster. The installing single-node OpenShift manually section contains an exemplary install-config.yaml file when installing an OpenShift Container Platform cluster on a single node. 1.4. Additional resources Installation process | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on_aws/preparing-to-install-on-aws |
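As a quick orientation, and not a substitute for the linked procedures, the installer-provisioned flow described above typically comes down to a couple of openshift-install commands. The installation directory is a placeholder and AWS credentials are assumed to be configured already:

# Default installer-provisioned infrastructure: answer the prompts and let the
# installation program provision the underlying AWS resources.
openshift-install create cluster --dir=<installation_directory> --log-level=info

# For the customized, network-customized, or restricted-network methods, generate
# install-config.yaml first, edit it, then create the cluster from the same directory.
openshift-install create install-config --dir=<installation_directory>
openshift-install create cluster --dir=<installation_directory>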
Chapter 78. subnet | Chapter 78. subnet This chapter describes the commands under the subnet command. 78.1. subnet create Create a subnet Usage: Table 78.1. Positional arguments Value Summary <name> New subnet name Table 78.2. Command arguments Value Summary -h, --help Show this help message and exit --project <project> Owner's project (name or id) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --subnet-pool <subnet-pool> Subnet pool from which this subnet will obtain a cidr (Name or ID) --use-prefix-delegation USE_PREFIX_DELEGATION Use prefix-delegation if ip is ipv6 format and ip would be delegated externally --use-default-subnet-pool Use default subnet pool for --ip-version --prefix-length <prefix-length> Prefix length for subnet allocation from subnet pool --subnet-range <subnet-range> Subnet range in cidr notation (required if --subnet- pool is not specified, optional otherwise) --dhcp Enable dhcp (default) --no-dhcp Disable dhcp --dns-publish-fixed-ip Enable publishing fixed ips in dns --no-dns-publish-fixed-ip Disable publishing fixed ips in dns (default) --gateway <gateway> Specify a gateway for the subnet. the three options are: <ip-address>: Specific IP address to use as the gateway, auto : Gateway address should automatically be chosen from within the subnet itself, none : This subnet will not use a gateway, e.g.: --gateway 192.168.9.1, --gateway auto, --gateway none (default is auto ). --ip-version {4,6} Ip version (default is 4). note that when subnet pool is specified, IP version is determined from the subnet pool and this option is ignored. --ipv6-ra-mode {dhcpv6-stateful,dhcpv6-stateless,slaac} Ipv6 ra (router advertisement) mode, valid modes: [dhcpv6-stateful, dhcpv6-stateless, slaac] --ipv6-address-mode {dhcpv6-stateful,dhcpv6-stateless,slaac} Ipv6 address mode, valid modes: [dhcpv6-stateful, dhcpv6-stateless, slaac] --network-segment <network-segment> Network segment to associate with this subnet (name or ID) --network <network> Network this subnet belongs to (name or id) --description <description> Set subnet description --allocation-pool start=<ip-address>,end=<ip-address> Allocation pool ip addresses for this subnet e.g.: start=192.168.199.2,end=192.168.199.254 (repeat option to add multiple IP addresses) --dns-nameserver <dns-nameserver> Dns server for this subnet (repeat option to set multiple DNS servers) --host-route destination=<subnet>,gateway=<ip-address> Additional route for this subnet e.g.: destination=10.10.0.0/16,gateway=192.168.71.254 destination: destination subnet (in CIDR notation) gateway: nexthop IP address (repeat option to add multiple routes) --service-type <service-type> Service type for this subnet e.g.: network:floatingip_agent_gateway. Must be a valid device owner value for a network port (repeat option to set multiple service types) --tag <tag> Tag to be added to the subnet (repeat option to set multiple tags) --no-tag No tags associated with the subnet Table 78.3. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 78.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 78.5. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 78.6. 
Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 78.2. subnet delete Delete subnet(s) Usage: Table 78.7. Positional arguments Value Summary <subnet> Subnet(s) to delete (name or id) Table 78.8. Command arguments Value Summary -h, --help Show this help message and exit 78.3. subnet list List subnets Usage: Table 78.9. Command arguments Value Summary -h, --help Show this help message and exit --long List additional fields in output --ip-version <ip-version> List only subnets of given ip version in output. Allowed values for IP version are 4 and 6. --dhcp List subnets which have dhcp enabled --no-dhcp List subnets which have dhcp disabled --service-type <service-type> List only subnets of a given service type in output e.g.: network:floatingip_agent_gateway. Must be a valid device owner value for a network port (repeat option to list multiple service types) --project <project> List only subnets which belong to a given project in output (name or ID) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --network <network> List only subnets which belong to a given network in output (name or ID) --gateway <gateway> List only subnets of given gateway ip in output --name <name> List only subnets of given name in output --subnet-range <subnet-range> List only subnets of given subnet range (in cidr notation) in output e.g.: --subnet-range 10.10.0.0/16 --tags <tag>[,<tag>,... ] List subnets which have all given tag(s) (comma- separated list of tags) --any-tags <tag>[,<tag>,... ] List subnets which have any given tag(s) (comma- separated list of tags) --not-tags <tag>[,<tag>,... ] Exclude subnets which have all given tag(s) (comma- separated list of tags) --not-any-tags <tag>[,<tag>,... ] Exclude subnets which have any given tag(s) (comma- separated list of tags) Table 78.10. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 78.11. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 78.12. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 78.13. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 78.4. subnet pool create Create subnet pool Usage: Table 78.14. 
Positional arguments Value Summary <name> Name of the new subnet pool Table 78.15. Command arguments Value Summary -h, --help Show this help message and exit --pool-prefix <pool-prefix> Set subnet pool prefixes (in cidr notation) (repeat option to set multiple prefixes) --default-prefix-length <default-prefix-length> Set subnet pool default prefix length --min-prefix-length <min-prefix-length> Set subnet pool minimum prefix length --max-prefix-length <max-prefix-length> Set subnet pool maximum prefix length --project <project> Owner's project (name or id) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --address-scope <address-scope> Set address scope associated with the subnet pool (name or ID), prefixes must be unique across address scopes --default Set this as a default subnet pool --no-default Set this as a non-default subnet pool --share Set this subnet pool as shared --no-share Set this subnet pool as not shared --description <description> Set subnet pool description --default-quota <num-ip-addresses> Set default per-project quota for this subnet pool as the number of IP addresses that can be allocated from the subnet pool --tag <tag> Tag to be added to the subnet pool (repeat option to set multiple tags) --no-tag No tags associated with the subnet pool Table 78.16. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 78.17. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 78.18. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 78.19. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 78.5. subnet pool delete Delete subnet pool(s) Usage: Table 78.20. Positional arguments Value Summary <subnet-pool> Subnet pool(s) to delete (name or id) Table 78.21. Command arguments Value Summary -h, --help Show this help message and exit 78.6. subnet pool list List subnet pools Usage: Table 78.22. Command arguments Value Summary -h, --help Show this help message and exit --long List additional fields in output --share List subnet pools shared between projects --no-share List subnet pools not shared between projects --default List subnet pools used as the default external subnet pool --no-default List subnet pools not used as the default external subnet pool --project <project> List subnet pools according to their project (name or ID) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --name <name> List only subnet pools of given name in output --address-scope <address-scope> List only subnet pools of given address scope in output (name or ID) --tags <tag>[,<tag>,... ] List subnet pools which have all given tag(s) (comma- separated list of tags) --any-tags <tag>[,<tag>,... 
] List subnet pools which have any given tag(s) (comma- separated list of tags) --not-tags <tag>[,<tag>,... ] Exclude subnet pools which have all given tag(s) (Comma-separated list of tags) --not-any-tags <tag>[,<tag>,... ] Exclude subnet pools which have any given tag(s) (Comma-separated list of tags) Table 78.23. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 78.24. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 78.25. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 78.26. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 78.7. subnet pool set Set subnet pool properties Usage: Table 78.27. Positional arguments Value Summary <subnet-pool> Subnet pool to modify (name or id) Table 78.28. Command arguments Value Summary -h, --help Show this help message and exit --name <name> Set subnet pool name --pool-prefix <pool-prefix> Set subnet pool prefixes (in cidr notation) (repeat option to set multiple prefixes) --default-prefix-length <default-prefix-length> Set subnet pool default prefix length --min-prefix-length <min-prefix-length> Set subnet pool minimum prefix length --max-prefix-length <max-prefix-length> Set subnet pool maximum prefix length --address-scope <address-scope> Set address scope associated with the subnet pool (name or ID), prefixes must be unique across address scopes --no-address-scope Remove address scope associated with the subnet pool --default Set this as a default subnet pool --no-default Set this as a non-default subnet pool --description <description> Set subnet pool description --default-quota <num-ip-addresses> Set default per-project quota for this subnet pool as the number of IP addresses that can be allocated from the subnet pool --tag <tag> Tag to be added to the subnet pool (repeat option to set multiple tags) --no-tag Clear tags associated with the subnet pool. specify both --tag and --no-tag to overwrite current tags 78.8. subnet pool show Display subnet pool details Usage: Table 78.29. Positional arguments Value Summary <subnet-pool> Subnet pool to display (name or id) Table 78.30. Command arguments Value Summary -h, --help Show this help message and exit Table 78.31. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 78.32. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 78.33. 
Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 78.34. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 78.9. subnet pool unset Unset subnet pool properties Usage: Table 78.35. Positional arguments Value Summary <subnet-pool> Subnet pool to modify (name or id) Table 78.36. Command arguments Value Summary -h, --help Show this help message and exit --tag <tag> Tag to be removed from the subnet pool (repeat option to remove multiple tags) --all-tag Clear all tags associated with the subnet pool 78.10. subnet set Set subnet properties Usage: Table 78.37. Positional arguments Value Summary <subnet> Subnet to modify (name or id) Table 78.38. Command arguments Value Summary -h, --help Show this help message and exit --name <name> Updated name of the subnet --dhcp Enable dhcp --no-dhcp Disable dhcp --dns-publish-fixed-ip Enable publishing fixed ips in dns --no-dns-publish-fixed-ip Disable publishing fixed ips in dns --gateway <gateway> Specify a gateway for the subnet. the options are: <ip-address>: Specific IP address to use as the gateway, none : This subnet will not use a gateway, e.g.: --gateway 192.168.9.1, --gateway none. --network-segment <network-segment> Network segment to associate with this subnet (name or ID). It is only allowed to set the segment if the current value is None , the network must also have only one segment and only one subnet can exist on the network. --description <description> Set subnet description --tag <tag> Tag to be added to the subnet (repeat option to set multiple tags) --no-tag Clear tags associated with the subnet. specify both --tag and --no-tag to overwrite current tags --allocation-pool start=<ip-address>,end=<ip-address> Allocation pool ip addresses for this subnet e.g.: start=192.168.199.2,end=192.168.199.254 (repeat option to add multiple IP addresses) --no-allocation-pool Clear associated allocation-pools from the subnet. Specify both --allocation-pool and --no-allocation- pool to overwrite the current allocation pool information. --dns-nameserver <dns-nameserver> Dns server for this subnet (repeat option to set multiple DNS servers) --no-dns-nameservers Clear existing information of dns nameservers. specify both --dns-nameserver and --no-dns-nameserver to overwrite the current DNS Nameserver information. --host-route destination=<subnet>,gateway=<ip-address> Additional route for this subnet e.g.: destination=10.10.0.0/16,gateway=192.168.71.254 destination: destination subnet (in CIDR notation) gateway: nexthop IP address (repeat option to add multiple routes) --no-host-route Clear associated host-routes from the subnet. specify both --host-route and --no-host-route to overwrite the current host route information. --service-type <service-type> Service type for this subnet e.g.: network:floatingip_agent_gateway. Must be a valid device owner value for a network port (repeat option to set multiple service types) 78.11. subnet show Display subnet details Usage: Table 78.39. Positional arguments Value Summary <subnet> Subnet to display (name or id) Table 78.40. Command arguments Value Summary -h, --help Show this help message and exit Table 78.41. 
Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 78.42. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 78.43. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 78.44. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 78.12. subnet unset Unset subnet properties Usage: Table 78.45. Positional arguments Value Summary <subnet> Subnet to modify (name or id) Table 78.46. Command arguments Value Summary -h, --help Show this help message and exit --allocation-pool start=<ip-address>,end=<ip-address> Allocation pool ip addresses to be removed from this subnet e.g.: start=192.168.199.2,end=192.168.199.254 (repeat option to unset multiple allocation pools) --gateway Remove gateway ip from this subnet --dns-nameserver <dns-nameserver> Dns server to be removed from this subnet (repeat option to unset multiple DNS servers) --host-route destination=<subnet>,gateway=<ip-address> Route to be removed from this subnet e.g.: destination=10.10.0.0/16,gateway=192.168.71.254 destination: destination subnet (in CIDR notation) gateway: nexthop IP address (repeat option to unset multiple host routes) --service-type <service-type> Service type to be removed from this subnet e.g.: network:floatingip_agent_gateway. Must be a valid device owner value for a network port (repeat option to unset multiple service types) --tag <tag> Tag to be removed from the subnet (repeat option to remove multiple tags) --all-tag Clear all tags associated with the subnet | [
"openstack subnet create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--project <project>] [--project-domain <project-domain>] [--subnet-pool <subnet-pool> | --use-prefix-delegation USE_PREFIX_DELEGATION | --use-default-subnet-pool] [--prefix-length <prefix-length>] [--subnet-range <subnet-range>] [--dhcp | --no-dhcp] [--dns-publish-fixed-ip | --no-dns-publish-fixed-ip] [--gateway <gateway>] [--ip-version {4,6}] [--ipv6-ra-mode {dhcpv6-stateful,dhcpv6-stateless,slaac}] [--ipv6-address-mode {dhcpv6-stateful,dhcpv6-stateless,slaac}] [--network-segment <network-segment>] --network <network> [--description <description>] [--allocation-pool start=<ip-address>,end=<ip-address>] [--dns-nameserver <dns-nameserver>] [--host-route destination=<subnet>,gateway=<ip-address>] [--service-type <service-type>] [--tag <tag> | --no-tag] <name>",
"openstack subnet delete [-h] <subnet> [<subnet> ...]",
"openstack subnet list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--long] [--ip-version <ip-version>] [--dhcp | --no-dhcp] [--service-type <service-type>] [--project <project>] [--project-domain <project-domain>] [--network <network>] [--gateway <gateway>] [--name <name>] [--subnet-range <subnet-range>] [--tags <tag>[,<tag>,...]] [--any-tags <tag>[,<tag>,...]] [--not-tags <tag>[,<tag>,...]] [--not-any-tags <tag>[,<tag>,...]]",
"openstack subnet pool create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] --pool-prefix <pool-prefix> [--default-prefix-length <default-prefix-length>] [--min-prefix-length <min-prefix-length>] [--max-prefix-length <max-prefix-length>] [--project <project>] [--project-domain <project-domain>] [--address-scope <address-scope>] [--default | --no-default] [--share | --no-share] [--description <description>] [--default-quota <num-ip-addresses>] [--tag <tag> | --no-tag] <name>",
"openstack subnet pool delete [-h] <subnet-pool> [<subnet-pool> ...]",
"openstack subnet pool list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--long] [--share | --no-share] [--default | --no-default] [--project <project>] [--project-domain <project-domain>] [--name <name>] [--address-scope <address-scope>] [--tags <tag>[,<tag>,...]] [--any-tags <tag>[,<tag>,...]] [--not-tags <tag>[,<tag>,...]] [--not-any-tags <tag>[,<tag>,...]]",
"openstack subnet pool set [-h] [--name <name>] [--pool-prefix <pool-prefix>] [--default-prefix-length <default-prefix-length>] [--min-prefix-length <min-prefix-length>] [--max-prefix-length <max-prefix-length>] [--address-scope <address-scope> | --no-address-scope] [--default | --no-default] [--description <description>] [--default-quota <num-ip-addresses>] [--tag <tag>] [--no-tag] <subnet-pool>",
"openstack subnet pool show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <subnet-pool>",
"openstack subnet pool unset [-h] [--tag <tag> | --all-tag] <subnet-pool>",
"openstack subnet set [-h] [--name <name>] [--dhcp | --no-dhcp] [--dns-publish-fixed-ip | --no-dns-publish-fixed-ip] [--gateway <gateway>] [--network-segment <network-segment>] [--description <description>] [--tag <tag>] [--no-tag] [--allocation-pool start=<ip-address>,end=<ip-address>] [--no-allocation-pool] [--dns-nameserver <dns-nameserver>] [--no-dns-nameservers] [--host-route destination=<subnet>,gateway=<ip-address>] [--no-host-route] [--service-type <service-type>] <subnet>",
"openstack subnet show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <subnet>",
"openstack subnet unset [-h] [--allocation-pool start=<ip-address>,end=<ip-address>] [--gateway] [--dns-nameserver <dns-nameserver>] [--host-route destination=<subnet>,gateway=<ip-address>] [--service-type <service-type>] [--tag <tag> | --all-tag] <subnet>"
]
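As a practical illustration of the subnet set, subnet show, subnet unset, and subnet pool unset usage strings listed above, the following sketch combines several of the documented options. The subnet name demo-subnet, the pool name demo-pool, the tag names, and the 192.168.199.0/24 addresses are placeholder values for illustration, not values defined by this reference:

# Add an allocation pool, a DNS server, and two tags to an existing subnet
openstack subnet set \
    --allocation-pool start=192.168.199.100,end=192.168.199.150 \
    --dns-nameserver 192.168.199.4 \
    --tag finance --tag prod \
    demo-subnet

# Confirm the result
openstack subnet show -f yaml demo-subnet

# Remove only the DNS server and one of the tags again
openstack subnet unset \
    --dns-nameserver 192.168.199.4 \
    --tag finance \
    demo-subnet

# Clear every tag from a subnet pool in one step
openstack subnet pool unset --all-tag demo-pool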
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/command_line_interface_reference/subnet |
Chapter 2. Protect a service application by using OpenID Connect (OIDC) Bearer token authentication | Chapter 2. Protect a service application by using OpenID Connect (OIDC) Bearer token authentication Use the Quarkus OpenID Connect (OIDC) extension to secure a Jakarta REST application with Bearer token authentication. The bearer tokens are issued by OIDC and OAuth 2.0 compliant authorization servers, such as Keycloak . For more information about OIDC Bearer token authentication, see the Quarkus OpenID Connect (OIDC) Bearer token authentication guide. If you want to protect web applications by using OIDC Authorization Code Flow authentication, see the OpenID Connect authorization code flow mechanism for protecting web applications guide. 2.1. Prerequisites To complete this guide, you need: Roughly 15 minutes An IDE JDK 17+ installed with JAVA_HOME configured appropriately Apache Maven 3.8.6 or later A working container runtime (Docker or Podman ) Optionally the Quarkus CLI if you want to use it Optionally Mandrel or GraalVM installed and configured appropriately if you want to build a native executable (or Docker if you use a native container build) The jq command-line processor tool 2.2. Architecture This example shows how you can build a simple microservice that offers two endpoints: /api/users/me /api/admin These endpoints are protected and can only be accessed if a client sends a bearer token along with the request, which must be valid (for example, signature, expiration, and audience) and trusted by the microservice. A Keycloak server issues the bearer token and represents the subject for which the token was issued. Because it is an OAuth 2.0 authorization server, the token also references the client acting on the user's behalf. Any user with a valid token can access the /api/users/me endpoint. As a response, it returns a JSON document with user details obtained from the information in the token. The /api/admin endpoint is protected with RBAC (Role-Based Access Control), which only users with the admin role can access. At this endpoint, the @RolesAllowed annotation is used to enforce the access constraint declaratively. 2.3. Solution Follow the instructions in the sections and create the application step by step. You can also go straight to the completed example. You can clone the Git repository by running the command git clone https://github.com/quarkusio/quarkus-quickstarts.git -b 3.15 , or you can download an archive . The solution is located in the security-openid-connect-quickstart directory . 2.4. Create the Maven project You can either create a new Maven project with the oidc extension or you can add the extension to an existing Maven project. Complete one of the following commands: To create a new Maven project, use the following command: Using the Quarkus CLI: quarkus create app org.acme:security-openid-connect-quickstart \ --extension='oidc,rest-jackson' \ --no-code cd security-openid-connect-quickstart To create a Gradle project, add the --gradle or --gradle-kotlin-dsl option. For more information about how to install and use the Quarkus CLI, see the Quarkus CLI guide. Using Maven: mvn com.redhat.quarkus.platform:quarkus-maven-plugin:3.15.1:create \ -DprojectGroupId=org.acme \ -DprojectArtifactId=security-openid-connect-quickstart \ -Dextensions='oidc,rest-jackson' \ -DnoCode cd security-openid-connect-quickstart To create a Gradle project, add the -DbuildTool=gradle or -DbuildTool=gradle-kotlin-dsl option. 
For Windows users: If using cmd, (don't use backward slash \ and put everything on the same line) If using Powershell, wrap -D parameters in double quotes e.g. "-DprojectArtifactId=security-openid-connect-quickstart" If you already have your Quarkus project configured, you can add the oidc extension to your project by running the following command in your project base directory: Using the Quarkus CLI: quarkus extension add oidc Using Maven: ./mvnw quarkus:add-extension -Dextensions='oidc' Using Gradle: ./gradlew addExtension --extensions='oidc' This will add the following to your build file: Using Maven: <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-oidc</artifactId> </dependency> Using Gradle: implementation("io.quarkus:quarkus-oidc") 2.5. Write the application Implement the /api/users/me endpoint as shown in the following example, which is a regular Jakarta REST resource: package org.acme.security.openid.connect; import jakarta.annotation.security.RolesAllowed; import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import org.jboss.resteasy.reactive.NoCache; import io.quarkus.security.identity.SecurityIdentity; @Path("/api/users") public class UsersResource { @Inject SecurityIdentity securityIdentity; @GET @Path("/me") @RolesAllowed("user") @NoCache public User me() { return new User(securityIdentity); } public static class User { private final String userName; User(SecurityIdentity securityIdentity) { this.userName = securityIdentity.getPrincipal().getName(); } public String getUserName() { return userName; } } } Implement the /api/admin endpoint as shown in the following example: package org.acme.security.openid.connect; import jakarta.annotation.security.RolesAllowed; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import jakarta.ws.rs.core.MediaType; @Path("/api/admin") public class AdminResource { @GET @RolesAllowed("admin") @Produces(MediaType.TEXT_PLAIN) public String admin() { return "granted"; } } Note The main difference in this example is that the @RolesAllowed annotation is used to verify that only users granted the admin role can access the endpoint. Injection of the SecurityIdentity is supported in both @RequestScoped and @ApplicationScoped contexts. 2.6. Configure the application Configure the Quarkus OpenID Connect (OIDC) extension by setting the following configuration properties in the src/main/resources/application.properties file. %prod.quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus quarkus.oidc.client-id=backend-service quarkus.oidc.credentials.secret=secret # Tell Dev Services for Keycloak to import the realm file # This property is not effective when running the application in JVM or native modes quarkus.keycloak.devservices.realm-path=quarkus-realm.json Where: %prod.quarkus.oidc.auth-server-url sets the base URL of the OpenID Connect (OIDC) server. The %prod. profile prefix ensures that Dev Services for Keycloak launches a container when you run the application in development (dev) mode. For more information, see the Run the application in dev mode section. quarkus.oidc.client-id sets a client id that identifies the application. quarkus.oidc.credentials.secret sets the client secret, which is used by the client_secret_basic authentication method. For more information, see the Quarkus OpenID Connect (OIDC) configuration properties guide. 2.7. 
Start and configure the Keycloak server Put the realm configuration file on the classpath ( target/classes directory) so that it gets imported automatically when running in dev mode. You do not need to do this if you have already built a complete solution , in which case, this realm file is added to the classpath during the build. Note Do not start the Keycloak server when you run the application in dev mode; Dev Services for Keycloak will start a container. For more information, see the Run the application in dev mode section. To start a Keycloak server, you can use Docker to run the following command: docker run --name keycloak -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=admin -p 8180:8080 quay.io/keycloak/keycloak:{keycloak.version} start-dev Where the keycloak.version is set to version 25.0.6 or later. You can access your Keycloak server at localhost:8180 . To access the Keycloak Administration console, log in as the admin user by using the following login credentials: Username: admin Password: admin Import the realm configuration file from the upstream community repository to create a new realm. For more information, see the Keycloak documentation about creating and configuring a new realm . 2.8. Run the application in dev mode To run the application in dev mode, run the following commands: Using the Quarkus CLI: quarkus dev Using Maven: ./mvnw quarkus:dev Using Gradle: ./gradlew --console=plain quarkusDev Dev Services for Keycloak will start a Keycloak container and import a quarkus-realm.json . Open a Dev UI , which you can find at /q/dev-ui . Then, in an OpenID Connect card, click the Keycloak provider link . When prompted to log in to a Single Page Application provided by OpenID Connect Dev UI , do the following steps: Log in as alice (password: alice ), who has a user role. Accessing /api/admin returns a 403 status code. Accessing /api/users/me returns a 200 status code. Log out and log in again as admin (password: admin ), who has both admin and user roles. Accessing /api/admin returns a 200 status code. Accessing /api/users/me returns a 200 status code. 2.9. Run the Application in JVM mode When you are done with dev mode, you can run the application as a standard Java application. Compile the application: Using the Quarkus CLI: quarkus build Using Maven: ./mvnw install Using Gradle: ./gradlew build Run the application: java -jar target/quarkus-app/quarkus-run.jar 2.10. Run the application in native mode You can compile this same demo as-is into native mode without any modifications. This implies that you no longer need to install a JVM on your production environment. The runtime technology is included in the produced binary and optimized to run with minimal resources required. Compilation takes a bit longer, so this step is disabled by default. Build your application again by enabling the native profile: Using the Quarkus CLI: quarkus build --native Using Maven: ./mvnw install -Dnative Using Gradle: ./gradlew build -Dquarkus.native.enabled=true After waiting a little while, you run the following binary directly: ./target/security-openid-connect-quickstart-1.0.0-SNAPSHOT-runner 2.11. Test the application For information about testing your application in dev mode, see the preceding Run the application in dev mode section. You can test the application launched in JVM or native modes with curl . 
Because the application uses Bearer token authentication, you must first obtain an access token from the Keycloak server to access the application resources: export access_token=$(\ curl --insecure -X POST http://localhost:8180/realms/quarkus/protocol/openid-connect/token \ --user backend-service:secret \ -H 'content-type: application/x-www-form-urlencoded' \ -d 'username=alice&password=alice&grant_type=password' | jq --raw-output '.access_token' \ ) The preceding example obtains an access token for the user alice. Any user can access the http://localhost:8080/api/users/me endpoint, which returns a JSON payload with details about the user. curl -v -X GET \ http://localhost:8080/api/users/me \ -H "Authorization: Bearer "$access_token Only users with the admin role can access the http://localhost:8080/api/admin endpoint. If you try to access this endpoint with the previously-issued access token, you get a 403 response from the server. curl -v -X GET \ http://localhost:8080/api/admin \ -H "Authorization: Bearer "$access_token To access the admin endpoint, obtain a token for the admin user: export access_token=$(\ curl --insecure -X POST http://localhost:8180/realms/quarkus/protocol/openid-connect/token \ --user backend-service:secret \ -H 'content-type: application/x-www-form-urlencoded' \ -d 'username=admin&password=admin&grant_type=password' | jq --raw-output '.access_token' \ ) For information about writing integration tests that depend on Dev Services for Keycloak, see the Dev Services for Keycloak section of the "OpenID Connect (OIDC) Bearer token authentication" guide. 2.12. References OIDC configuration properties OpenID Connect (OIDC) Bearer token authentication Keycloak Documentation OpenID Connect JSON Web Token OpenID Connect and OAuth2 Client and Filters Reference Guide Dev Services for Keycloak Sign and encrypt JWT tokens with SmallRye JWT Build Combining authentication mechanisms Quarkus Security overview | [
"quarkus create app org.acme:security-openid-connect-quickstart --extension='oidc,rest-jackson' --no-code cd security-openid-connect-quickstart",
"mvn com.redhat.quarkus.platform:quarkus-maven-plugin:3.15.1:create -DprojectGroupId=org.acme -DprojectArtifactId=security-openid-connect-quickstart -Dextensions='oidc,rest-jackson' -DnoCode cd security-openid-connect-quickstart",
"quarkus extension add oidc",
"./mvnw quarkus:add-extension -Dextensions='oidc'",
"./gradlew addExtension --extensions='oidc'",
"<dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-oidc</artifactId> </dependency>",
"implementation(\"io.quarkus:quarkus-oidc\")",
"package org.acme.security.openid.connect; import jakarta.annotation.security.RolesAllowed; import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import org.jboss.resteasy.reactive.NoCache; import io.quarkus.security.identity.SecurityIdentity; @Path(\"/api/users\") public class UsersResource { @Inject SecurityIdentity securityIdentity; @GET @Path(\"/me\") @RolesAllowed(\"user\") @NoCache public User me() { return new User(securityIdentity); } public static class User { private final String userName; User(SecurityIdentity securityIdentity) { this.userName = securityIdentity.getPrincipal().getName(); } public String getUserName() { return userName; } } }",
"package org.acme.security.openid.connect; import jakarta.annotation.security.RolesAllowed; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import jakarta.ws.rs.core.MediaType; @Path(\"/api/admin\") public class AdminResource { @GET @RolesAllowed(\"admin\") @Produces(MediaType.TEXT_PLAIN) public String admin() { return \"granted\"; } }",
"%prod.quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus quarkus.oidc.client-id=backend-service quarkus.oidc.credentials.secret=secret Tell Dev Services for Keycloak to import the realm file This property is not effective when running the application in JVM or native modes quarkus.keycloak.devservices.realm-path=quarkus-realm.json",
"docker run --name keycloak -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=admin -p 8180:8080 quay.io/keycloak/keycloak:{keycloak.version} start-dev",
"quarkus dev",
"./mvnw quarkus:dev",
"./gradlew --console=plain quarkusDev",
"quarkus build",
"./mvnw install",
"./gradlew build",
"java -jar target/quarkus-app/quarkus-run.jar",
"quarkus build --native",
"./mvnw install -Dnative",
"./gradlew build -Dquarkus.native.enabled=true",
"./target/security-openid-connect-quickstart-1.0.0-SNAPSHOT-runner",
"export access_token=USD( curl --insecure -X POST http://localhost:8180/realms/quarkus/protocol/openid-connect/token --user backend-service:secret -H 'content-type: application/x-www-form-urlencoded' -d 'username=alice&password=alice&grant_type=password' | jq --raw-output '.access_token' )",
"curl -v -X GET http://localhost:8080/api/users/me -H \"Authorization: Bearer \"USDaccess_token",
"curl -v -X GET http://localhost:8080/api/admin -H \"Authorization: Bearer \"USDaccess_token",
"export access_token=USD( curl --insecure -X POST http://localhost:8180/realms/quarkus/protocol/openid-connect/token --user backend-service:secret -H 'content-type: application/x-www-form-urlencoded' -d 'username=admin&password=admin&grant_type=password' | jq --raw-output '.access_token' )"
]
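The expected status codes from the testing steps can also be checked non-interactively. The helper below is an illustrative sketch rather than part of the quickstart; it assumes the Keycloak container started above is reachable on port 8180, the application is running on port 8080, and curl and jq are installed:

# Fetch a token for the given user and report the HTTP status of /api/admin.
check_admin() {
    user="$1"; pass="$2"
    token=$(curl -s --insecure -X POST \
        http://localhost:8180/realms/quarkus/protocol/openid-connect/token \
        --user backend-service:secret \
        -H 'content-type: application/x-www-form-urlencoded' \
        -d "username=${user}&password=${pass}&grant_type=password" | jq -r '.access_token')
    status=$(curl -s -o /dev/null -w '%{http_code}' \
        -H "Authorization: Bearer ${token}" \
        http://localhost:8080/api/admin)
    echo "${user}: /api/admin returned ${status}"
}

check_admin alice alice   # expected 403: alice only has the user role
check_admin admin admin   # expected 200: admin has the admin role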
| https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.15/html/openid_connect_oidc_authentication/security-oidc-bearer-token-authentication-tutorial |
Appendix A. Testing Scripts Available with Directory Server | Appendix A. Testing Scripts Available with Directory Server Red Hat Directory Server provides a script which you can use to test Directory Server performance in different stress or load conditions. The test script simulates different environments which allow administrators to assess configuration or machine changes before putting them in production. The ldclt script is located in the /usr/bin directory. A.1. ldclt (Load Stress Tests) The LDAP client script ( ldclt ) establishes multiple client connections to a server, under user-defined scenarios, to load-test the Directory Server. Client operations include directory adds, searches, modifies, modRDNs, and deletes, as well as setup operations like generating LDIF files. Operations can be randomized - binding and unbinding as random users, performing random tasks - to simulate more realistic usage environments for the directory. The ldclt tool measures the completion time of continuously-repeated operations to measure Directory Server performance. Using multiple threads makes it possible to test performance under high loads. Each test performs the same type of LDAP operation, but with different settings (like different user credentials, different attribute types or sizes, and different target subtrees). Along with defining the LDAP operation variables, administrators can control the thread performance in order to set a specific load on the server. The ldclt tool is specifically intended to be used for automated tests, so its options are extensive, flexible, and easily scripted, even for complex test operations. Note Remember that ldclt is a load test, and therefore uses a significant amount of system resources. The tool uses a minimum of 8 MB of memory. Depending on the numbers of threads, types of operations, and other configuration settings, it can use much more memory. Depending on the type of operations and the directory data used for those operations, ldclt may set its own resource limits. For information on managing system resource limits, see the man pages for ulimit and getrlimit . The ldclt utility is located in the /usr/bin directory. A.1.1. Syntax ldclt -q -Q -v -V -E max_errors -b base_DN -h host -p port -t timeout -D bind_DN -w password -o SASL_options -e execution_params -a max_pending -n number_of_threads -i inactivity_times -N number_of_samples -I error_code -T total_number_of_operations -r low_range -R high_range -f filter -s scope -S consumer -P supplier_port -W wait_time -Z certificate_file A.1.2. ldclt Options Table A.1. ldclt Options Option Description -a max_pending_ops Runs the tool in asynchronous mode with a defined maximum number of pending operations. -b base_dn Gives the base DN to use for running the LDAP operation tests. If not given, the default value is dc=example,dc=com . -D bind_dn Gives the bind DN for the ldclt utility to use to connect to the server. -E max_errors Sets the maximum number of errors that are allowed to occur in test LDAP operations before the tool exits. The default is 1000. -e execution_params Specifies the type of operation and other test environment parameters to use for the tests. The possible values for -e are listed in Table A.2, "Execution Parameters" . This option can accept multiple values, in a comma-separated list. -f filter Gives an LDAP search filter to use for search testing. -h Specifies the host name or IP address of the Directory Server to run tests against. If a host is not specified, ldclt uses the local host.
-I error_code Tells ldclt to ignore any errors encountered that match a certain response code. For example, -I 89 tells the tool to ignore error code 89. -i inactivity_times Sets a number of intervals that the tool can be inactive before exiting. By default, this setting is 3, which translates into 30 seconds (each operations interval being 10 seconds long). -N number_of_samples Sets the number of iterations to run, meaning how many ten-second test periods to run. By default, this is infinite and the tool only exits when it is manually stopped. -n number_of_threads Sets the number of threads to run simultaneously for operations. The default value is 10. -o SASL_option Tells the tool to connect to the server using SASL and gives the SASL mechanism to use. The format is -o saslOption=value . saslOption can have one of six values: * mech, the SASL authentication mechanism * authid, the user who is binding to the server (Kerberos principal) * authzid, a proxy authorization (ignored by the server since proxy authorization is not supported) * secProp, the security properties * realm, the Kerberos realm * flags The expected values depend on the supported mechanism. The -o can be used multiple times to pass all of the required SASL information for the mechanism. For example: -o "mech=DIGEST-MD5" -o "authzid=test_user" -o "authid=test_user" -P supplier_port Gives the port to use to connect to a supplier server for replication testing. The default, if one is not given, is 16000. -p port Gives the server port number of the Directory Server instance that is being tested. -Q Runs the tool in "super" quiet mode. This ignores any errors that are encountered in operations run by ldclt . -q Runs the tool in quiet mode. -R number Sets the high number for a range. -r number Sets the low number of a range. -S consumer_name Gives the host name of a consumer server to connect to run replication tests. -s scope Gives the search scope. As with ldapsearch , the values can be subtree, one, or base. -T ops_per_thread Sets a maximum number of operations allowed per thread. -t timeout Sets a timeout period for LDAP operations. The default is 30 seconds. -V Runs the tool in very verbose mode. -v Runs the tool in verbose mode. -W wait_time Sets a time, in seconds, for the ldclt tool to wait after one operation finishes to start the next operation. The default is 0, which means there is no wait time. -w password Gives the password to use, with the -D identity, to bind to the Directory Server for testing. -Z /path/to/cert.db Enables TLS for the test connections and points to the file to use as the certificate database. The -e option sets execution parameters for the ldclt test operations. Multiple parameters can be configured, in a comma-separated list. For example: Table A.2. Execution Parameters Parameter Description abandon Initiates abandon operations for asynchronous search requests. add Adds entries to the directory ( ldapadd ). append Appends entries to the end of the LDIF file generated with the genldif option. ascii Generates ASCII 7-bit strings. attreplace= name:mask Run modify operations that replace an attribute ( name ) in an existing entry. attrlist= name:name:name Specifies a list of attributes to return in a search operation. attrsonly= # Used with search operations, to set whether to read the attribute values. The possible values are 0 (read values) or 1 (do not read values). bindeach Tells the ldclt tool to bind with each operation it attempts.
bindonly Tells the ldclt tool to only run bind/unbind operations. No other operation is performed. close Tells the tool to close the connection rather than perform an unbind operation. cltcertname= name Gives the name of the TLS client certificate to use for TLS connections. commoncounter Makes all threads opened by the ldclt tool to share the same counter. counteach Tells the tool to count each operation, not only successful ones. delete Initiates delete operations. deref Adds the dereference control to search operations ( esearch ). With adds, this tells ldclt to add the secretary attribute to new entries, to allow dereference searches. dontsleeponserverdown Causes the tool to loop very fast if server down. emailPerson This adds the emailPerson object class to generated entries. This is only valid with the add operation ( -e add ). esearch Performs an exact search. genldif= filename Generates an LDIF file to use with the operations. imagesdir= path Gives a location for images to use with tests. incr Enables incremental values. inetOrgPerson This adds the inetOrgPerson object class to generated entries. This is only valid with the add operation ( -e add ). keydbfile= file Contains the path and file name of the key database to use with TLS connections. keydbpin= password Contains the token password to access the key database. noglobalstats Tells the tool not to print periodical global statistics. noloop Does not loop the incremental numbers. object= filename Builds entry objects from an input file. person This adds the person object class to generated entries. This is only valid with the add operation ( -e add ). random Tells the ldclt utility to use all random elements, such as random filters and random base DNS. randomattrlist= name:name:name Tells the ldclt utility to select random attributes from the given list. randombase Tells the ldclt utility to select a random base DN from the directory. randombaselow= value Sets the low value for the random generator. randombasehigh= value Sets the high value for the random generator. randombinddn Tells the ldclt utility to use a random bind DN. randombinddnfromfile= file Tells the ldclt utility to use a random bind DN, selected from a file. Each entry in the file must have the appropriate DN-password pair. randombinddnlow= value Sets the low value for the random generator. randombinddnhigh= value Sets the high value for the random generator. rdn= attrname:value Gives an RDN to use as the search filter. This is used instead of the -f filter. referral= value Sets the referral behavior for operations. There are three options: on (allow referrals), off (disallow referrals), or rebind (attempt to connect again). smoothshutdown Tells the ldclt utility not to shut down its main thread until the worker threads exit. string Tells the ldclt utility to create random strings rather than random numbers. v2 Tells the ldclt utility to use LDAPv2 for test operations. withnewparent Performs a modRDN operation, renaming an entry with newparent set as an argument. randomauthid Uses a random SASL authentication ID. randomauthidlow= value Sets the low value for a random SASL authentication ID. randomauthidhigh= value Sets the high value for the random SASL authentication ID. A.1.3. Results from ldclt ldclt continuously runs whatever operation is specified, over the specified number of threads. By default, it prints the performance statistics to the screen every ten (10) seconds. 
The results show the average number of operations per thread and per second and then the total number of operations that were run in that ten-second window. For example: ldclt prints cumulative averages and totals every 15 minutes and when the tool is exited. Some operations (like adds) and using verbose output options like -v or -V output additional data to the screen. The kind of information depends on the type of operation, but it generally shows the thread performing the operation and the plug-ins called by the operation. For example: Most errors are handled by ldclt without interrupting the test. Any fatal errors that are encountered are listed with the tool's exit status and returned in the cumulative total. Any LDAP operations errors that occur are handled within the thread. A connection error kills the thread without affecting the overall test. The ldclt utility does count the number of times each LDAP error is encountered; if the total number of errors that are logged hits more than 1000 (by default), then the script itself will error out. The way that ldclt responds to LDAP errors can be configured. Using the -E option sets a different threshold for the script to error out after encountering LDAP errors. Using the -I option tells the script to ignore the specified LDAP error codes in all threads. Changing the error exit limit and ignoring certain error codes can allow you to tweak and improve test scripts or test configuration. A.1.4. Exiting ldclt and ldclt Exit Codes The ldclt command runs indefinitely. The script can stop itself in a handful of situations, like encountering a fatal runtime or initialization error, hitting the limit of LDAP errors, having all threads die, or hitting the operation or time limit. The statistics for the run are not displayed until the command completes, either through the script exiting or by a user terminating the script. There are two ways to interrupt the ldclt script. Hitting control-backslash (kbd:[^\]) or kill -3 prints the current statistics without exiting the script. Hitting control-C ( ^C ) or kill -2 exits the script and prints the global statistics. When the ldclt script exits or is interrupted, it returns an exit code along with the statistics and error information. Table A.3. ldclt Exit Codes Exit Code Description 0 Success (no errors). 1 An operation encountered a serious fatal error. 2 There was an error in the parameters passed with the tool. 3 The tool hit the maximum number of LDAP errors. 4 The tool could not bind to the Directory Server instance. 5 The tool could not load the TLS libraries to connect over TLS. 6 There was a multithreading (mutex) error. 7 There was an initialization problem. 8 The tool hit a resource limit, such as a memory allocation error. 99 The script encountered an unknown error. A.1.5. Usage Scenarios These provide general examples of using ldclt to test Directory Server. Test scripts with more complex examples are available in the ldclt source files. You can download this file from the 389 Directory Server project: https://github.com/389ds/389-ds-base/tree/master/ldap/servers/slapd/tools/ldclt/examples Every ldclt command requires a set of execution parameters (which varies depending on the type of test) and connection parameters (which are the same for every type of operation). For example: When ldclt runs, it first prints all of the configured parameters for that test. A.1.5.1. Generating LDIFs The ldclt tool itself can be used to generate LDIF files that can be used for testing. 
Note When generating an LDIF file, the ldclt tool does not attempt to connect to a server or run any operations. Generating an LDIF file requires a basic template file that the tool uses to create entries ( -e object ), and then a specified output file ( -e genldif ). The template file can give explicit values for entry attributes or can use variables. If you want a simple way to supply unique values for entry attributes, the /usr/share/dirsrv/data directory contains three data files to generate surnames, first names, and organizational units. These lists of values can be used to create test users and directory trees ( dbgen-FamilyNames , dbgen-GivenNames , and dbgen-OrgUnits , respectively). These files can be used with the rndfromfile , incrfromfile , or incrfromfilenoloop options. The basic format of the template file is: The variable can be any letter from A to H. The possible keywords are listed in Table A.4, "ldclt Template LDIF File Keywords" Some variables and keywords can be passed with the -e object option and other available parameters (like rdn ). Table A.4. ldclt Template LDIF File Keywords Keyword Description Format RNDN Generates a random value within the specified range (low - high) and of the given length. RNDN(low;high;length) RNDFROMFILE Pulls a random value from any of the ones available in the specified file. RNDFROMFILE(filename) INCRN Creates sequential values within the specified range (low - high) and of the given length. INCRN(low;high;length) INCRNOLOOP Creates sequential values within the specified range (low - high) and of the given length - without looping through the incremental range. INCRNOLOOP(low;high;length) INCRFROMFILE Creates values by incrementing through the values in the specified file. INCRFROMFILE(filename) INCRFROMFILENOLOOP Creates values by incrementing through the values in the file, without looping back through the values. INCRFROMFILENOLOOP(filename) RNDS Generates random values of a given length. RNDS(length) For example, this template file pulls names from sample files in the /usr/share/dirsrv/data and builds other attributes dynamically. Example A.1. Example Template File The ldclt command, then, uses that template to build an LDIF file with 100,000 entries: A.1.5.2. Adding Entries The ldclt tool can add entries that match either of two templates: person inetorgperson The -f filter sets the format of the naming attribute for the user entries. For example, -f "cn=MrXXXXX" creates a name like -f "cn=Mr01234" . Using the person or inetorgperson parameter with -f creates a basic entry. More complex entries (which are good for search and modify testing) can be created using the rdn parameter and an object file. The full range of options for the entries is covered in Section A.1.5.1, "Generating LDIFs" . The rdn and object parameters provide the format for the entries to add or edit in the directory. The rdn execution parameter takes a keyword pattern (as listed in Table A.4, "ldclt Template LDIF File Keywords" ) and draws its entry pool from the entries listed in a text file. The ldclt tool creates entries in a numeric sequence. That means that the method of adding those entries and of counting the sequence have to be defined as well. 
Some possible options for this include: -r and -R to set the numeric range for entries incr or random to set the method of assigning numbers (these are only used with -f) -r and -R to set the numeric range for entries noloop, to stop the add operations when it hits the end of the range rather than looping back Example A.2. Adding Entries The add operation can also be used to build a directory tree for more complex testing. Whenever an entry is added to the directory that belongs to a non-existent branch, the ldclt tool automatically creates that branch entry. Note The first time that an entry is added that is the child of non-existent branch, the branch entry is added to the directory. However, the entry itself is not added. Subsequent entries will be added to the new branch. For a branch entry to be added automatically, its naming attribute must be cn , o , or ou . Example A.3. Creating the Directory Tree A.1.5.3. Search Operations The most basic ldclt search test simply looks for all entries within the given base DN. This uses two execution parameters: esearch and random . Example A.4. Basic Search Operation Important A search that returns all entries can use a large amount of memory per thread, as much as 1 GB. ldclt is designed to perform searches that return one entry. The search results can be expanded to return attributes contained in the entries. ( Section A.1.5.1, "Generating LDIFs" has information on generating entries that contain multiple attributes.) To return a specific list of attributes for entries, use the attrlist execution parameter and a colon-separated list of attributes. Example A.5. Searching for a List of Attributes Alternatively, the ldclt search operation can return attribute values for attributes randomly selected from the search list. The list is given in the randomattrlist execution parameter with a colon-separated list of attributes. Example A.6. Searching for a List of Random Attributes The filter used to match entries can target other entry attributes, not just naming attributes. It depends on the attributes in the generated LDIF. Example A.7. Searches with Alternate Filters The search operation can also use the RDN-style filter to search for entries. The rdn and object execution parameters provide the format for the entries to add or edit in the directory. The rdn execution parameter takes a keyword pattern (as listed in Table A.4, "ldclt Template LDIF File Keywords" ) and draws its entry pool from the entries listed in a text file. Example A.8. Searches with RDN Filters A.1.5.4. Modify Operations The attreplace execution parameter replaces specific attributes in the entries. The modify operation uses the RDN filter to search for the entries to update. The rdn and object parameters provide the format for the entries to add or edit in the directory. The rdn execution parameter takes a keyword pattern (as listed in Table A.4, "ldclt Template LDIF File Keywords" ) and draws its entry pool from the entries listed in a text file. Example A.9. Modify Operation A.1.5.5. modrdn Operations The ldclt command supports two kinds of modrdn operations: Renaming entries Moving an entry to a new parent The ldclt utility creates the new entry name or parent from a randomly-selected DN. The basic rename operation requires three execution parameters: rename rdn=' pattern ' object= file The rdn and object parameters provide the format for the entries to add or edit in the directory. 
The rdn execution parameter takes a keyword pattern (as listed in Table A.4, "ldclt Template LDIF File Keywords" ) and draws its entry pool from the entries listed in a text file. Example A.10. Simple Rename Operation Using the withnewparent execution parameter renames the entry and moves it beneath a new parent entry. If the parent entry does not exist, then the ldclt tool creates it. [3] Example A.11. Renaming an Entry and Moving to a New Parent A.1.5.6. Delete Operations The ldclt delete operation is exactly the reverse of the add operation. As with the add, delete operations can remove entries in several different ways: Randomly ( -e delete,random ) RDN-ranges ( -e delete,rdn= [ pattern ]) Sequentially ( -e delete,incr ) Random deletes are configured to occur within the specified range of entries. This requires the following options: -e delete,random -r and -R for the range bounds -f for the filter to match the entries Example A.12. Random Delete Operations RDN-based deletes use the rdn execution parameter with a keyword (as listed in Table A.4, "ldclt Template LDIF File Keywords" ) and draws its entry pool from the entries listed in a text file. This format requires three execution parameters: -e delete -e rdn=' pattern ' -e object=' file ' Example A.13. RDN-Based Delete Operations The last delete operation format is much like the random delete format, only it moves sequentially through the given range, rather than randomly: -e delete,incr -r and -R for the range bounds -f for the filter to match the entries Example A.14. Sequential Delete Operations A.1.5.7. Bind Operations By default, each ldclt thread binds once to the server and then runs all of its operations in a single session. The -e bindeach can be used with any other operation to instruct the ldclt tool to bind for each operation and then unbind before initiating the operation. To test only bind and unbind operations, use the -e bindeach,bindonly execution parameters and no other operation information. For example: The bind operation can specify a single user to use for testing by using the -D and -w user name-password pair in the connection parameters. Note Use the -e close option with the bind parameters to test the affect that dropping connections has on the Directory Server, instead of unbinding cleanly. Example A.15. Bind Only and Close Tests There are also execution parameters which can be used to select a random bind identity from a given file ( randombinddnfromfile ) or using a DN selected randomly from within a range ( -e randombinddn,randombinddnlow=X,randombinddnhigh=Y ). Example A.16. Random Binds from Identities in a File Binding with a random identity is useful if identities have been added from a generated LDIF or using -e add , where the accounts were added in a range. The ldclt tool can autogenerate values using X as a variable and incrementing through the specified range. Example A.17. Random Binds from Random Base DN A.1.5.8. Running Operations on Random Base DNs Any operation can be run against randomly-selected base DNs. The trio of randombase parameters set the range of organizational units to select from. A variable in the -b base entry sets the format of the base DN. A.1.5.9. TLS Authentication Every operation can be run over TLS to test secure authentication and performance for secure connections. There are two parameters required for TLS authentication. 
The connection parameters, -Z , which gives the path to the security databases for the Directory Server The execution parameters, cltcertname , keydbfile , and keydbpin , which contains the information that the server will prompt to access the TLS databases For example, this runs bind tests over TLS: A.1.5.10. Abandon Operations The -e abandon parameter opens and then cancels operations on the server. This can be run by itself or with other types of operations (like -e add or -e esearch ). [3] As with the add operation, the first time that the parent is referenced by the tool, the parent entry is created, but the entry which prompted the add operation is not created. | [
"-e add,bindeach,genldif=/var/lib/dirsrv/slapd- instance /ldif/generated.ldif,inetOrgPerson",
"ldclt[ process_id ] Average rate: number_of_ops /thr ( number_of_ops /sec), total: total_number_of_ops",
"ldclt[22774]: Average rate: 10298.20/thr (15447.30/sec), total: 154473",
"ldclt[22774]: Global average rate: 821203.00/thr (16424.06/sec), total: 12318045 ldclt[22774]: Global number times \"no activity\" reports: never ldclt[22774]: Global no error occurs during this session. Catch SIGINT - exit ldclt[22774]: Ending at Wed Feb 24 18:39:38 2010 ldclt[22774]: Exit status 0 - No problem during execution.",
"ldclt -b ou=people,dc=example,dc=com -D \"cn=Directory Manager\" -w secret12 -e add,person,incr,noloop,commoncounter -r90000 -R99999 -f \"cn=testXXXXX\" -V ldclt[11176]: T002: After ldap_simple_bind_s (cn=Directory Manager, secret12) ldclt[11176]: T002: incremental mode:filter=\"cn=test00009\" ldclt[11176]: T002: tttctx->bufFilter=\"cn=test00009\" ldclt[11176]: T002: attrs[0]=(\"objectclass\" , \"person\") ldclt[11176]: T002: attrs[1]=(\"cn\" , \"test00009\") ldclt[11176]: T002: attrs[2]=(\"sn\" , \"toto sn\") ldclt[11176]: Average rate: 195.00/thr ( 195.00/sec), total: 1950 ldclt[10627]: Global average rate: 238.80/thr (238.80/sec), total: 2388 ldclt[10627]: Global number times \"no activity\" reports: never ldclt[10627]: Global no error occurs during this session. Catch SIGINT - exit ldclt[10627]: Ending at Tue Feb 23 11:46:04 2010 ldclt[10627]: Exit status 0 - No problem during execution.",
"Global no error occurs during this session.",
"ldclt -e execution_parameters -h localhost -p 389 -D \"cn=Directory Manager\" -w secret -b \"ou=people,dc=example,dc=com\"",
"Process ID = 1464 Host to connect = localhost Port number = 389 Bind DN = cn=Directory Manager Passwd = secret Referral = on Base DN = ou=people,dc=example,dc=com Filter = \"cn=MrXXX\" Max times inactive = 3 Max allowed errors = 1000 Number of samples = -1 Number of threads = 10 Total op. req. = -1 Running mode = 0xa0000009 Running mode = quiet verbose random exact_search LDAP oper. timeout = 30 sec Sampling interval = 10 sec Scope = subtree Attrsonly = 0 Values range = [0 , 1000000] Filter's head = \"cn=Mr\" Filter's tail = \"\"",
"comment attribute : string | variable=keyword(value)",
"-e object=inet.txt,rdn='uid:[A=INCRNNOLOOP(0;99999;5)]'",
"objectclass: inetOrgPerson sn: [B=RNDFROMFILE(/usr/share/dirsrv/data/dbgen-FamilyNames)] cn: [C=RNDFROMFILE(/usr/share/dirsrv/data/dbgen-GivenNames)] [B] password: test[A] description: user id [A] mail: [C].[B]@example.com telephonenumber: (555) [RNDN(0;999;3)]-[RNDN(0;9999;4)]",
"ldclt -b \"ou=people,dc=csb\" -e object=inet.txt,rdn='uid:[A=INCRNNOLOOP(0;99999;5)]' -e genldif=100Kinet.ldif,commoncounter",
"objectclass: person sn: ex sn cn: Mr01234",
"-e rdn='uid:[A=INCRNNOLOOP(0;99999;5)]',object=inet.txt",
"ldclt -b ou=people,dc=example,dc=com -D \"cn=Directory Manager\" -w secret -e add,person,incr,noloop,commoncounter -r0 -R99999 -f \"cn=MrXXXXX\" -v -q",
"ldclt -b ou=DeptXXX,dc=example,dc=com -D \"cn=Directory Manager\" -w secret -e add,person,incr,noloop,commoncounter -r0 -R99999 -f \"cn=MrXXXXX\" -v -q",
"ldclt -h localhost -p 389 -D \"cn=Directory Manager\" -w secret -b \"ou=people,dc=example,dc=com\" -f uid=testXXXXX -e esearch,random -r0 -R99999 -I 32",
"ldclt -h localhost -p 389 -b \"ou=people,dc=example,dc=com\" -f uid=XXXXX -e esearch,random -r0 -R99999 -I 32 -e attrlist=cn:mail",
"ldclt -h localhost -p 389 -b \"ou=people,dc=example,dc=com\" -f uid=XXXXX -e esearch,random -r0 -R99999 -I 32 -e randomattrlist=cn:sn:ou:uid:mail:mobile:description",
"ldclt -h localhost -p 389 -b \"ou=people,dc=example,dc=com\" -f [email protected] -e esearch,random -r0 -R99999 -I 32 -e randomattrlist=cn:sn:ou:uid:mail:mobile:description",
"ldclt -h localhost -p 389 -b \"ou=people,dc=example,dc=com\" -e rdn='mail:[RNDN(0;99999;5)]@example.com',object=\"inet.txt\" -e attrlist=cn:telephonenumber",
"ldclt -h localhost -p 389 -D \"cn=Directory Manager\" -w secret -b \"ou=people,dc=example,dc=com\" -e rdn='uid:[RNDN(0;99999;5)]' -I 32 -e attreplace='description: random modify XXXXX'",
"ldclt -h localhost -p 389 -D \"cn=Directory Manager\" -w secret -b \"ou=people,dc=example,dc=com\" -I 32 -I 68 -e rename,rdn='uid:[RNDN(0;999;5)]',object=\"inet.txt\"",
"ldclt -h localhost -p 389 -D \"cn=Directory Manager\" -w secret12 -b \"ou=DeptXXX,dc=example,dc-com\" -I 32 -I 68 -e rename,withnewparent,rdn='uid:Mr[RNDN(0;99999;5)]',object=\"inet.txt\"",
"ldclt -b \"ou=people,dc=example,dc=com\" -D \"cn=Directory Manager\" -w secret -e delete,random -r0 -R99999 -f \"uid=XXXXXX\" -I 32 -v -q",
"ldclt -b \"ou=people,dc=example,dc=com\" -D \"cn=Directory Manager\" -w secret -e delete,rdn='uid:[INCRNNOLOOP(0;99999;5)]',object=\"inet.txt\" -I 32 -v -q",
"ldclt -b \"ou=people,dc=example,dc=com\" -D \"cn=Directory Manager\" -w secret -e delete,incr -r0 -R99999 -f \"uid=XXXXXX\" -I 32 -v -q",
"-e add,bindeach",
"ldclt -h localhost -p 389 -b \"ou=people,dc=example,dc=com\" -e bindeach,bindonly -e bind_info",
"ldclt -h localhost -p 389 -D \"cn=Directory Manager\" -w secret -e bindeach,bindonly,close",
"ldclt -h localhost -p 389 -e bindeach,bindonly -e randombinddnfromfile=/tmp/testbind.txt",
"ldclt -h localhost -p 389 -e bindeach,bindonly -D \"uid=XXXXX,dc=example,dc=com\" -w testXXXXX -e randombinddn,randombinddnlow=0,randombinddnhigh=99999",
"-b \"ou=DeptXXX,dc=example,dc=com\" -e randombase,randombaselow=0,randombasehigh=999",
"ldclt -h host -p port -e bindeach,bindonly -Z certPath -e cltcertname= certName ,keydbfile= filename ,keydbpin= password",
"ldclt -e abandon -h localhost -p 389 -D \"cn=Directory Manager\" -w secret -v -q -b \"ou=people,dc=example,dc=com\""
]
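Putting the scenarios above together, a small automation wrapper might populate a test subtree, run a timed search load, and interpret the exit status against Table A.3. The following is an illustrative sketch only; the host, port, bind credentials, suffix, entry count, and naming pattern are placeholders for your own test environment:

# Populate ou=people with 10,000 person entries named cn=test0000 ... cn=test9999
ldclt -h localhost -p 389 -D "cn=Directory Manager" -w secret \
    -b "ou=people,dc=example,dc=com" \
    -e add,person,incr,noloop,commoncounter -r0 -R9999 -f "cn=testXXXX" -q

# Run a random exact-match search load for ten 10-second sampling intervals,
# ignoring "no such object" (error 32) results
ldclt -h localhost -p 389 -b "ou=people,dc=example,dc=com" \
    -f "cn=testXXXX" -e esearch,random -r0 -R9999 -N 10 -I 32
rc=$?

# Translate the exit status using Table A.3
case $rc in
    0) echo "ldclt finished without problems" ;;
    3) echo "ldclt hit the maximum number of LDAP errors" ;;
    4) echo "ldclt could not bind to the Directory Server instance" ;;
    *) echo "ldclt exited with status $rc (see Table A.3)" ;;
esac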
| https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/configuration_command_and_file_reference/testing-tools |
Chapter 3. Kernel | Chapter 3. Kernel The kernel shipped in Red Hat Enterprise Linux 6.2 includes several hundred bug fixes for, and enhancements to, the Linux kernel. For details concerning every bug fixed and every enhancement added to the kernel for this release, refer to the kernel section of the Red Hat Enterprise Linux 6.2 Technical Notes. Using open-iscsi to manage the qla4xxx discovery and login process Prior to Red Hat Enterprise Linux 6.2, the qla4xxx adapter firmware managed discovery and login to iSCSI targets. A new feature in Red Hat Enterprise Linux 6.2 allows you to use open-iscsi to manage the qla4xxx discovery and login process. This can result in a more uniform management process. This new feature is enabled by default. The qla4xxx iSCSI firmware settings are accessible via: This feature may be disabled by setting the module ql4xdisablesysfsboot=1 parameter as follows: Set the parameter in the /etc/modprobe.d file: Reload the qla4xxx module either by executing the following set of commands: or, if you are booted off the qla4xxx device, by rebooting your system. When booted off a qla4xxx device, upgrading from Red Hat Enterprise Linux 6.1 to Red Hat Enterprise Linux 6.2 will cause the system to fail to boot up with the new kernel. For more information on this known issue, refer to the Technical Notes . kexec kdump support on additional file systems Kdump (a kexec-based crash dumping mechanism) now supports dumping of the core on the following file systems on Red Hat Enterprise Linux 6: Btrfs (Note that this file system is a Technology Preview) ext4 XFS (Note that XFS is a layer product and must be installed to enable this feature) pkgtemp merged with coretemp The pkgtemp module has been merged with the coretemp module. The pkgtemp module is now deprecated. The coretemp module now supports all the features it previously did plus the features that were supported by the pkgtemp module. The coretemp previously only provided per core temperatures, while the pkgtemp module provided the temperatures of the CPU package. In Red Hat Enterprise Linux 6.2, the coretemp module allows you to read the temperatures of the cores, the uncore, and the package. It is advisable to adjust any scripts using either of these modules. Lockless dispatching of SCSI driver queuecommand functions In Red Hat Enterprise Linux 6.2, the SCSI midlayer supports optional lockless dispatching of SCSI driver queuecommand functions. This is a backport of the upstream SCSI lock pushdown commit. The backport retains binary compatibility with Red Hat Enterprise Linux 6.0 and Red Hat Enterprise Linux 6.1. Retaining binary compatibility requires divergence from the equivalent upstream SCSI lock pushdown mechanism. A previously unused flag in the scsi_host_template structure is used by SCSI drivers to indicate to the SCSI midlayer that driver queuecommand will be dispatched without the SCSI host bus lock held. The default behavior is that the Scsi_Host lock will be held during a driver queuecommand dispatch. Setting the scsi_host_template lockless bit prior to scsi_host_alloc will cause the driver queuecommand function to be dispatched without the Scsi_Host lock being held. In such a case, the responsibility for any lock protection required is pushed down into the driver queuecommand code path. 
SCSI Drivers updated to use lockless queuecommand in Red Hat Enterprise Linux 6.2 are listed below: iscsi_iser be2iscsi bnx2fc bnx2i cxgb3i cxgb4i fcoe (software fcoe) qla2xxx qla4xxx Support for Fiber Channel over Ethernet (FCoE) target mode Red Hat Enterprise Linux 6.2 includes support for Fiber Channel over Ethernet (FCoE) target mode, as a Technology Preview . This kernel feature is configurable via targetadmin , supplied by the fcoe-target-utils package. FCoE is designed to be used on a network supporting Data Center Bridging (DCB). Further details are available in the dcbtool(8) and targetadmin(8) man pages. Important This feature uses the new SCSI target layer, which falls under this Technology Preview, and should not be used independently from the FCoE target support. This package contains the AGPL license. Support for the crashkernel=auto boot parameter In Red Hat Enterprise Linux 6.1, with BZ# 605786 , the crashkernel=auto boot parameter was deprecated. However, in Red Hat Enterprise Linux 6.2, support for crashkernel=auto is continued on all Red Hat Enterprise Linux 6 systems. Support for MD RAID in user space The mdadm and mdmon utilities have been updated to support Array Auto-Rebuild, RAID Level Migrations, RAID 5 support limitation, and SAS-SATA drive roaming. Flush request merge Red Hat Enterprise Linux 6.2 supports merging of flush requests to assist devices which are slow to perform a flush. UV2 Hub Support Red Hat Enterprise Linux 6.2 adds UV2 Hub support. UV2 is the UVhub chip that is the successor to the current UV1 hub chip. UV2 uses the HARP hub chip that is currently in development. UV2 provides support for new Intel sockets. It provides new features to improve performance. UV2 is being designed to support 64 TB of memory in a Single System Image (SSI). Additionally, the node controller MMRs have been updated for UV systems. acpi_rsdp boot parameter Red Hat Enterprise Linux 6.2 introduces the acpi_rsdp boot parameter for kdump to pass an ACPI RSDP address, so that the kdump kernel can boot without Extensible Firmware Interface (EFI). QETH driver improvements The following enhancements have been added to the QETH network device driver: Support for af_iucv HiperSockets transport Support for forced signal adapter indications Support for asynchronous delivery of storage blocks New Ethernet Protocol ID added to the if_ether module CPACF algorithms Support for the new CPACF (CP Assist for Cryptographic Function) algorithms, supported by IBM zEnterprise 196, has been added. The new hardware accelerated algorithms are: CTR mode for AES CTR mode for DES and 3DES XTS mode for AES with key lengths of 128 and 256 bits GHASH message digest for GCM mode Red Hat Enterprise Linux 6.2 supports conditional resource-reallocation through the pci=realloc kernel parameter. This feature provides an interim solution for adding a dynamically reallocatable PCI resource without causing any regressions. It disables dynamic reallocation by default, but adds the ability to enable it through the pci=realloc kernel command line parameter. PCI improvements Dynamic reallocation is disabled by default. It can be enabled with the pci=realloc kernel command line parameter. In addition, bridge resources have been updated to provide larger ranges in the PCI assign unassigned call. SMEP Red Hat Enterprise Linux 6.2 enables SMEP (Supervision Mode Execution Protection) in the kernel. 
SMEP provides an enforcement mechanism that allows the system to declare that code residing in user pages is not intended to be executed while the CPU is in supervisor mode. This requirement is then enforced by the CPU. This feature can prevent attacks that rely on executing code from user-mode pages while the CPU is in supervisor mode, irrespective of the vulnerability in the system code. Enhanced fast string instructions Support for enhanced fast string REP MOVSB / STOSB instructions for the latest Intel platform has been added. USB 3.0 xHCI The USB 3.0 xHCI host side driver has been updated to add split-hub support, allowing the xHCI host controller to act as an external USB 3.0 hub by registering a USB 3.0 roothub and a USB 2.0 roothub. ACPI, APEI, and EINJ parameter support The ACPI, APEI, and EINJ parameter support is now disabled by default. pstore Red Hat Enterprise Linux 6.2 adds support for pstore , a file system interface for platform-dependent persistent storage. PCIe AER error information printing Support for printk based APEI (ACPI Platform Error Interface) hardware error reporting has been added, providing a way to unify errors from various sources and send them to the system console. ioatdma driver The ioatdma driver ( dma engine driver) has been updated to support Intel processors with a dma engine. 8250 PCI serial driver Support for the Digi/IBM PCIe 2-port Async EIA-232 Adapter has been added to the 8250 PCI serial driver. Additionally, EEH (Enhanced Error Handling) support for the Digi/IBM PCIe 2-port Async EIA-232 Adapter has been added to the 8250 PCI serial driver. ARI support ARI (Alternative Routing-ID Interpretation) support, a PCIe v2 feature, has been added to Red Hat Enterprise Linux 6.2. PCIe OBFF PCIe OBFF (Optimized Buffer Flush/Fill) enable/disable support has been added for Intel's latest platform. OBFF provides devices with information on interrupts and memory activity and their potentially reduced power impact, ultimately improving energy efficiency. Capture oops/panic reports to NVRAM In Red Hat Enterprise Linux 6.2, the kernel is enabled to capture kernel oops/panic reports from the dmesg buffer into NVRAM on PowerPC architectures. MXM driver The MXM driver, responsible for handling graphics switching on NVIDIA platforms, has been backported to Red Hat Enterprise Linux 6.2. Page coalescing Red Hat Enterprise Linux 6.2 introduces page coalescing, a feature on IBM Power servers which allows for coalescing identical pages between logical partitions. L3 cache partitioning Support for L3 Cache Partitioning has been added to the latest AMD family CPUs. thinkpad_acpi module The thinkpad_acpi module has been updated to add support for new ThinkPad models. C-State support Latest Intel processor C-State support has been added to intel_idle . IOMMU warnings Red Hat Enterprise Linux 6.2 now displays warnings for IOMMU (Input/Output Memory Management Unit) on AMD systems. Logging to dmesg during boot Logging of board, system, and BIOS information to dmesg during boot has been added. IBM PowerPC support cputable entries have been added to the kernel, providing support for the latest IBM PowerPC processor family. VPHN The VPHN (Virtual Processor Home Node) feature has been disabled on IBM System p.
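For illustration, the following shell commands are one way to confirm on a running system that two of the features noted above (SMEP and intel_idle C-State support) are active. The sysfs paths and the smep CPU flag name are assumptions based on common kernel conventions rather than part of this release note.
# Check whether the processor advertises the SMEP capability
grep -m1 -o -w smep /proc/cpuinfo || echo "SMEP not reported by this CPU"
# Confirm which cpuidle driver is managing processor C-States
cat /sys/devices/system/cpu/cpuidle/current_driver
# List the C-States exposed for the first CPU (state names vary by processor model)
cat /sys/devices/system/cpu/cpu0/cpuidle/state*/name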
Driver support for latest Intel chipset The following drivers now support the latest Intel chipset: i2c-i801 SMBus driver ahci AHCI-mode SATA driver ata_piix IDE-mode SATA driver TCO Watchdog driver LPC Controller driver exec-shield On IBM PowerPC systems, the exec-shield value in sysctl or in the /proc/sys/kernel/exec-shield parameter is no longer enforced. kdump on PPC64 Additional checks and fixes have been added to support kdump on 64-bit PowerPC and 64-bit IBM POWER Series systems. UV MMTIMER module The UV MMTIMER module ( uv_mmtimer ) has been enabled on SGI platforms. The uv_mmtimer module allows direct userland access to the UV system's real time clock which is synchronized across all hubs. IB700 module Support for the IB700 module has been added in Red Hat Enterprise Linux 6.2. Override PCIe AER Mask Registers The aer_mask_override module parameter has been added, providing a way to override the corrected or uncorrected masks for a PCI device. The mask will have the bit corresponding to the status passed into the aer_inject() function. USB 3.0 host controller support on PPC64 USB 3.0 host controller support has been added to 64-bit PowerPC and 64-bit IBM POWER Series systems. Out-of-Memory (OOM) killer improvements An improved upstream Out-of-Memory (OOM) killer implementation has been backported to Red Hat Enterprise Linux 6.2. The improvements include: Processes which are about to exit are preferred by the OOM killer. The OOM kill process also kills the children of the selected processes. A heuristic has been added to kill the forkbomb processes. The oom_score_adj /proc tunable parameter adds the value stored in each process's oom_score_adj variable to that process's badness score; the value can be adjusted via /proc . This allows the attractiveness of each process to the OOM killer to be adjusted from user space; setting it to -1000 disables OOM kills of that process entirely, while setting it to +1000 marks the process as the OOM killer's primary kill target. For more information on the new implementation, refer to http://lwn.net/Articles/391222/ . zram driver Red Hat Enterprise Linux 6.2 provides an updated zram driver (which creates generic RAM based compressed block devices). taskstat utility In Red Hat Enterprise Linux 6.2, the taskstat utility in the kernel, which prints the status of ASET tasks, has been enhanced by providing microsecond CPU time resolution for the top utility to use. perf utility Red Hat Enterprise Linux 6.2 updates the perf utility to upstream version 3.1 along with the kernel upgrade to version 3.1. Refer to BZ# 725524 for newly supported kernel features provided by the perf utility. The updated version of the perf utility includes: Added cgroup support Added handling of /proc/sys/kernel/kptr_restrict Added more cache-miss percentage printouts Added the -d -d and -d -d -d options to show more CPU events Added the --sync/-S option Added support for the PERF_TYPE_RAW parameter Added more documentation about the -f/--fields option The python-perf package has been added for python binding support. OProfile support Red Hat Enterprise Linux 6.2 adds OProfile support for the latest Intel processors. IRQ counting The number of interrupt requests (IRQs) is now maintained in a single sum-of-all-IRQs counter, reducing the cost of the look-up in the /proc/stat file. Scheduling improvement Red Hat Enterprise Linux 6.2 introduces a scheduling improvement where a buddy hint is provided to the scheduler on the sleep and preempt paths. This hint enhancement helps workloads of multiple tasks in multiple task groups.
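As a minimal sketch of the oom_score_adj interface described above, the following commands show how the tunable can be read and adjusted from user space; the PID 1234 is a placeholder.
# Inspect the current OOM adjustment and resulting badness score of a process (1234 is a placeholder PID)
cat /proc/1234/oom_score_adj
cat /proc/1234/oom_score
# Exempt a critical process from the OOM killer entirely
echo -1000 > /proc/1234/oom_score_adj
# Mark an expendable process as the preferred OOM kill target
echo 1000 > /proc/1234/oom_score_adj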
Transparent Huge Page improvement In Red Hat Enterprise Linux 6.2, Transparent Huge Pages are now supported in several places by the kernel: The system calls of mremap , mincore , and mprotect /proc tunable parameters: /proc/<pid>/smaps and /proc/vmstat Additionally, Transparent Huge Pages add some compaction improvements. XTS AES256 self-tests Red Hat Enterprise Linux 6.2 adds XTS (XEX-based Tweaked CodeBook) AES256 self-tests to meet the FIPS-140 requirements. SELinux netfilter packet drops Previously, the SELinux netfilter hooks returned NF_DROP if they dropped a packet. In Red Hat Enterprise Linux 6.2, a drop in the netfilter hooks is signaled as a permanent fatal error and is not transient. By doing this, the error is passed back up the stack, and in some situations applications will get a faster indication that something went wrong. LSM hook In Red Hat Enterprise Linux 6.2, the remount mount options ( mount -o remount ) are passed to a new LSM hook. Default mode for UEFI systems Red Hat Enterprise Linux 6.0 and 6.1 defaulted to running UEFI systems in a physical addressing mode. Red Hat Enterprise Linux 6.2 defaults to running UEFI systems in a virtual addressing mode. The previous behavior may be obtained by passing the physefi kernel parameter. Default method for kdumping over SSH In Red Hat Enterprise Linux 6, the default core_collector method for kdumping the core over SSH has been changed from scp to makedumpfile , which helps shrink the size of the core file when copying over the network link, resulting in faster copying. If you require the old full-size vmcore file, specify the following in the /etc/kdump.conf file: | [
"~]# iscsiadm -m fw",
"~]# echo \"options qla4xxx ql4xdisablesysfsboot=1\" >> /etc/modprobe.d/qla4xxx.conf",
"~]# rmmod qla4xxx ~]# modprobe qla4xxx",
"core_collector /usr/bin/scp"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_release_notes/kernel |
Chapter 8. Scaling storage of IBM Z or IBM LinuxONE OpenShift Data Foundation cluster | Chapter 8. Scaling storage of IBM Z or IBM LinuxONE OpenShift Data Foundation cluster 8.1. Scaling up storage by adding capacity to your OpenShift Data Foundation nodes on IBM Z or IBM LinuxONE infrastructure You can add storage capacity and performance to your configured Red Hat OpenShift Data Foundation worker nodes. Note Flexible scaling features get enabled at the time of deployment and cannot be enabled or disabled later on. Prerequisites A running OpenShift Data Foundation Platform. Administrative privileges on the OpenShift Web Console. To scale using a storage class other than the one provisioned during deployment, first define an additional storage class. See Creating storage classes and pools for details. Procedure Add additional hardware resources with zFCP disks. List all the disks. Example output: A SCSI disk is represented as a zfcp-lun with the structure <device-id>:<wwpn>:<lun-id> in the ID section. The first disk is used for the operating system. The device ID for the new disk can be the same. Append a new SCSI disk. Note The device ID for the new disk must be the same as the disk to be replaced. The new disk is identified with its WWPN and LUN ID. List all the FCP devices to verify the new disk is configured. Navigate to the OpenShift Web Console. Click Operators on the left navigation bar. Select Installed Operators . In the window, click OpenShift Data Foundation Operator. In the top navigation bar, scroll right and click the Storage Systems tab. Click the Action menu (...) next to the visible list to extend the options menu. Select Add Capacity from the options menu. The Raw Capacity field shows the size set during storage class creation. The total amount of storage consumed is three times this amount, because OpenShift Data Foundation uses a replica count of 3. Click Add . To check the status, navigate to Storage → Data Foundation and verify that Storage System in the Status card has a green tick. Verification steps Verify the Raw Capacity card. In the OpenShift Web Console, click Storage → Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Block and File tab, check the Raw Capacity card. Note that the capacity increases based on your selections. Note The raw capacity does not take replication into account and shows the full capacity. Verify that the new OSDs and their corresponding new Persistent Volume Claims (PVCs) are created. To view the state of the newly created OSDs: Click Workloads → Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. To view the state of the PVCs: Click Storage → Persistent Volume Claims from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. Identify the nodes where the new OSD pods are running. <OSD-pod-name> Is the name of the OSD pod. For example: Example output: For each of the nodes identified in the previous step, do the following: Create a debug pod and open a chroot environment for the selected host(s). <node-name> Is the name of the node.
Check for the crypt keyword beside the ocs-deviceset names. Important Cluster reduction is supported only with the Red Hat Support Team's assistance. 8.2. Scaling out storage capacity on an IBM Z or IBM LinuxONE cluster 8.2.1. Adding a node using a local storage device You can add nodes to increase the storage capacity when existing worker nodes are already running at their maximum supported OSDs or when there are not enough resources to add new OSDs on the existing nodes. Add nodes in multiples of 3, each of them in different failure domains. Though it is recommended to add nodes in multiples of 3, you have the flexibility to add one node at a time in a flexible scaling deployment. See the Knowledgebase article Verify if flexible scaling is enabled . Note OpenShift Data Foundation does not support heterogeneous disk size and types. The new nodes to be added should have the disk of the same type and size which was used during initial OpenShift Data Foundation deployment. Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Procedure Depending on the type of infrastructure, perform the following steps: Get a new machine with the required infrastructure. See Platform requirements . Create a new OpenShift Container Platform worker node using the new machine. Check for certificate signing requests (CSRs) that are in Pending state. Approve all the required CSRs for the new node. <Certificate_Name> Is the name of the CSR. Click Compute → Nodes , confirm if the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From User interface For the new node, click Action Menu (...) → Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From Command line interface Apply the OpenShift Data Foundation label to the new node. <new_node_name> Is the name of the new node. Click Operators → Installed Operators from the OpenShift Web Console. From the Project drop-down list, make sure to select the project where the Local Storage Operator is installed. Click Local Storage . Click the Local Volume Discovery tab. Beside the LocalVolumeDiscovery , click Action menu (...) → Edit Local Volume Discovery . In the YAML, add the hostname of the new node in the values field under the node selector. Click Save . Click the Local Volume Sets tab. Beside the LocalVolumeSet , click Action menu (...) → Edit Local Volume Set . In the YAML, add the hostname of the new node in the values field under the node selector . Click Save . Note It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them. Verification steps Execute the following command in the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads → Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* 8.2.2. Scaling up storage capacity To scale up storage capacity, see Scaling up storage capacity on a cluster . | [
"lszdev",
"TYPE ID ON PERS NAMES zfcp-host 0.0.8204 yes yes zfcp-lun 0.0.8204:0x102107630b1b5060:0x4001402900000000 yes no sda sg0 zfcp-lun 0.0.8204:0x500407630c0b50a4:0x3002b03000000000 yes yes sdb sg1 qeth 0.0.bdd0:0.0.bdd1:0.0.bdd2 yes no encbdd0 generic-ccw 0.0.0009 yes no",
"chzdev -e 0.0.8204:0x400506630b1b50a4:0x3001301a00000000",
"lszdev zfcp-lun TYPE ID ON PERS NAMES zfcp-lun 0.0.8204:0x102107630b1b5060:0x4001402900000000 yes no sda sg0 zfcp-lun 0.0.8204:0x500507630b1b50a4:0x4001302a00000000 yes yes sdb sg1 zfcp-lun 0.0.8204:0x400506630b1b50a4:0x3001301a00000000 yes yes sdc sg2",
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/ <OSD-pod-name>",
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm",
"NODE compute-1",
"oc debug node/ <node-name>",
"chroot /host",
"lsblk",
"oc get csr",
"oc adm certificate approve <Certificate_Name>",
"oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/scaling_storage/scaling_storage_of_ibm_z_or_ibm_linuxone_openshift_data_foundation_cluster |
Chapter 16. Logging Tapset | Chapter 16. Logging Tapset This family of functions is used to send simple message strings to various destinations. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/logging.stp |
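As a brief, hedged illustration of this tapset, the following one-liner can be run from a shell on a system with SystemTap installed; log() and warn() are representative functions of the logging family, and println() is a general SystemTap printing statement used here only for contrast.
# Exercise the logging tapset functions from a one-line SystemTap script
stap -e 'probe begin { log("informational message"); warn("warning message"); println("plain line on stdout"); exit() }'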
Chapter 6. Persisting message data | Chapter 6. Persisting message data AMQ Broker has two options for persisting (that is, storing ) message data: Persisting messages in journals This is the default option. Journal-based persistence is a high-performance option that writes messages to journals on the file system. Persisting messages in a database This option uses a Java Database Connectivity (JDBC) connection to persist messages to a database of your choice. Alternatively, you can also configure the broker not to persist any message data. For more information, see Section 6.3, "Disabling persistence" . The broker uses a different solution for persisting large messages outside the message journal. See Chapter 8, Handling large messages for more information. The broker can also be configured to page messages to disk in low-memory situations. See Section 7.1, "Configuring message paging" for more information. Note For current information regarding which databases and network file systems are supported by AMQ Broker, see Red Hat AMQ 7 Supported Configurations on the Red Hat Customer Portal. 6.1. Persisting message data in journals A broker journal is a set of append-only files on disk. Each file is pre-created to a fixed size and initially filled with padding. As messaging operations are performed on the broker, records are appended to the end of the journal. Appending records allows the broker to minimize disk head movement and random access operations, which are typically the slowest operations on a disk. When one journal file is full, the broker creates a new one. The journal file size is configurable, minimizing the number of disk cylinders used by each file. Modern disk topologies are complex, however, and the broker cannot control which cylinder(s) the file is mapped to. Therefore, journal file sizing is difficult to control precisely. Other persistence-related features that the broker uses are: A garbage collection algorithm that determines whether a particular journal file is still in use. If the journal file is no longer in use, the broker can reclaim the file for reuse. A compaction algorithm that removes dead space from the journal and compresses the data. This results in the journal using fewer files on disk. Support for local transactions. Support for Extended Architecture (XA) transactions when using JMS clients. Most of the journal is written in Java. However, interaction with the actual file system is abstracted, so that you can use different, pluggable implementations. AMQ Broker includes the following implementations: NIO NIO (New I/O) uses standard Java NIO to interface with the file system. This provides extremely good performance and runs on any platform with a Java 6 or later runtime. For more information about Java NIO, see Java NIO . AIO AIO (Asynchronous I/O) uses a thin native wrapper to talk to the Linux Asynchronous I/O Library ( libaio ). With AIO, the broker is called back after the data has made it to disk, avoiding explicit syncs altogether. By default, the broker tries to use an AIO journal, and falls back to using NIO if AIO is not available. AIO typically provides even better performance than Java NIO. To learn how to install libaio , see Section 6.1.1, "Installing the Linux Asynchronous I/O Library" . The procedures in the sub-sections that follow show how to configure the broker for journal-based persistence. 6.1.1. Installing the Linux Asynchronous I/O Library Red Hat recommends using the AIO journal (instead of NIO) for better persistence performance.
Note It is not possible to use the AIO journal with other operating systems or earlier versions of the Linux kernel. To use the AIO journal, you must install the Linux Asynchronous I/O Library ( libaio ). To install libaio , use the yum command, as shown below: 6.1.2. Configuring journal-based persistence The following procedure describes how to review the default configuration that the broker uses for journal-based persistence. You can use this description to adjust your configuration as needed. Open the <broker_instance_dir> /etc/broker.xml configuration file. By default, the broker is configured to use journal-based persistence, as shown below. <configuration> <core> ... <persistence-enabled>true</persistence-enabled> <journal-type>ASYNCIO</journal-type> <bindings-directory>./data/bindings</bindings-directory> <journal-directory>./data/journal</journal-directory> <journal-datasync>true</journal-datasync> <journal-min-files>2</journal-min-files> <journal-pool-files>-1</journal-pool-files> <journal-device-block-size>4096</journal-device-block-size> <journal-file-size>10M</journal-file-size> <journal-buffer-timeout>12000</journal-buffer-timeout> <journal-max-io>4096</journal-max-io> ... </core> </configuration> persistence-enabled If the value of this parameter is set to true , the broker uses the file-based journal for message persistence. journal-type Type of journal to use. If set to ASYNCIO , the broker first attempts to use AIO. If AIO is not found, the broker uses NIO. bindings-directory File system location of the bindings journal. The default value is relative to the <broker_instance_dir> directory. journal-directory File system location of the message journal. The default value is relative to the <broker_instance_dir> directory. journal-datasync If the value of this parameter is set to true , the broker uses the fdatasync function to confirm disk writes. journal-min-files Number of journal files to initially create when the broker starts. journal-pool-files Number of files to keep after reclaiming unused files. The default value of -1 means that no files are deleted during cleanup. journal-device-block-size Maximum size, in bytes, of the data blocks used by the journal on your storage device. The default value is 4096 bytes. journal-file-size Maximum size, in bytes, of each journal file in the specified journal directory. When this limit is reached, the broker starts a new file. This parameter also supports byte notation (for example, K, M, G), or the binary equivalents (Ki, Mi, Gi). If this parameter is not explicitly specified in your configuration, the default value is 10485760 bytes (10MiB). journal-buffer-timeout Specifies how often, in nanoseconds, the broker flushes the journal buffer. AIO typically uses a higher flush rate than NIO, so the broker maintains different default values for both NIO and AIO. If this parameter is not explicitly specified in your configuration, the default value for NIO is 3333333 nanoseconds (that is, 300 times per second). The default value for AIO is 50000 nanoseconds (that is, 2000 times per second). journal-max-io Maximum number of write requests that can be in the IO queue at any one time. If the queue becomes full, the broker blocks further writes until space is available. If you are using NIO, this value should always be 1 . If you are using AIO and this parameter is not explicitly specified in your configuration, the default value is 500 .
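As a quick, hedged way to review these settings on an existing instance, the following commands check that libaio is available for the ASYNCIO journal type and print the journal elements currently set in broker.xml ; the grep pattern is illustrative and only matches parameters that are explicitly present in the file.
# Confirm that the Linux Asynchronous I/O library is installed so the ASYNCIO journal type can be used
rpm -q libaio
# Review the journal-related elements explicitly configured for this broker instance
grep -E '<journal-type>|<journal-file-size>|<journal-buffer-timeout>|<journal-max-io>' <broker_instance_dir>/etc/broker.xml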
Based on the preceding descriptions, adjust your persistence configuration as needed for your storage device. Additional resources To learn about all of the parameters that are available for configuring journal-based persistence, see Appendix E, Messaging Journal Configuration Elements . 6.1.3. About the bindings journal The bindings journal is used to store bindings-related data, such as the set of queues deployed on the broker and their attributes. It also stores data such as ID sequence counters. The bindings journal always uses NIO because it is typically low throughput when compared to the message journal. Files on this journal are prefixed with activemq-bindings . Each file also has an extension of .bindings and a default size of 1048576 bytes. To configure the bindings journal, include the following parameters in the core element of the <broker_instance_dir> /etc/broker.xml configuration file. bindings-directory Directory for the bindings journal. The default value is <broker_instance_dir> /data/bindings . create-bindings-dir If the value of this parameter is set to true , the broker automatically creates the bindings directory in the location specified in bindings-directory , if it does not already exist. The default value is true . 6.1.4. About the JMS journal The JMS journal stores all JMS-related data, including JMS queues, topics, and connection factories, as well as any JNDI bindings for these resources. Any JMS resources created via the management API are persisted to this journal, but any resources configured via configuration files are not. The broker creates the JMS journal only if JMS is being used. Files in the JMS journal are prefixed with activemq-jms . Each file also has an extension of .jms and a default size of 1048576 bytes. The JMS journal shares its configuration with the bindings journal. Additional resources For more information about the bindings journal, see Section 6.1.3, "About the bindings journal" . 6.1.5. Compacting journal files AMQ Broker includes a compaction algorithm that removes dead space from the journal and compresses the data so that it takes up less disk space. The following sub-sections show how to: Configure the broker to automatically compact journal files when certain criteria are met Run the compaction process manually from the command-line interface 6.1.5.1. Configuring journal file compaction The broker uses the following criteria to determine when to start compaction: The number of files created for the journal. The percentage of live data in the journal files. After the configured values for both of these criteria are reached, the compaction process parses the journal and removes all dead records. Consequently, the journal comprises fewer files. The following procedure shows how to configure the broker for journal file compaction. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Within the core element, add the journal-compact-min-files and journal-compact-percentage parameters and specify values. For example: <configuration> <core> ... <journal-compact-min-files>15</journal-compact-min-files> <journal-compact-percentage>25</journal-compact-percentage> ... </core> </configuration> journal-compact-min-files The minimum number of journal files that the broker has to create before compaction begins. The default value is 10 . Setting the value to 0 disables compaction. You should take care when disabling compaction, because the size of the journal can grow indefinitely. 
journal-compact-percentage The percentage of live data in the journal files. When less than this percentage is considered live data (and the configured value of journal-compact-min-files has also been reached), compaction begins. The default value is 30 . 6.1.5.2. Running compaction from the command-line interface The following procedure shows how to use the command-line interface (CLI) to compact journal files. Procedure As the owner of the <broker_instance_dir> directory, stop the broker. The example below shows the user amq-broker . (Optional) Run the following CLI command to get a full list of parameters for the data tool. By default, the tool uses settings found in <broker_instance_dir> /etc/broker.xml . Run the following CLI command to compact the data. After the tool has successfully compacted the data, restart the broker. Additional resources AMQ Broker includes a number of CLI commands for managing your journal files. See command-line Tools in the Appendix for more information. 6.1.6. Disabling the disk write cache Most disks contain hardware write caches. A write cache can increase the apparent performance of the disk because writes are lazily written to the disk later. By default, many systems ship with disk write cache enabled. This means that even after syncing from the operating system, there is no guarantee that the data has actually made it to disk. Therefore, if a failure occurs, critical data can be lost. Some more expensive disks have non-volatile or battery-backed write caches that do not necessarily lose data in the event of failure, but you should test them. If your disk does not have such features, you should ensure that write cache is disabled. Be aware that disabling disk write cache can negatively affect performance. The following procedure shows how to disable the disk write cache on Linux and Windows. Procedure On Linux, to manage the disk write cache settings, use the tools hdparm (for IDE disks) or sdparm or sginfo (for SCSI/SATA disks). On Windows, to manage the disk write cache settings, right-click the disk. Select Properties .
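For example, on Linux the write cache of a drive can typically be inspected and turned off with hdparm as shown below; /dev/sda is a placeholder device, and not every drive or controller honors these flags.
# Query the current write-cache setting of the drive
hdparm -W /dev/sda
# Disable the drive's write cache
hdparm -W 0 /dev/sda
# Query again to confirm the change took effect
hdparm -W /dev/sda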
For example: <configuration> <core> <store> <database-store> <jdbc-connection-url>jdbc:oracle:data/oracle/database-store;create=true</jdbc-connection-url> <jdbc-user>ENC(5493dd76567ee5ec269d11823973462f)</jdbc-user> <jdbc-password>ENC(56a0db3b71043054269d11823973462f)</jdbc-password> <bindings-table-name>BINDINGS_TABLE</bindings-table-name> <message-table-name>MESSAGE_TABLE</message-table-name> <large-message-table-name>LARGE_MESSAGES_TABLE</large-message-table-name> <page-store-table-name>PAGE_STORE_TABLE</page-store-table-name> <node-manager-store-table-name>NODE_MANAGER_TABLE</node-manager-store-table-name> <jdbc-driver-class-name>oracle.jdbc.driver.OracleDriver</jdbc-driver-class-name> <jdbc-network-timeout>10000</jdbc-network-timeout> <jdbc-lock-renew-period>2000</jdbc-lock-renew-period> <jdbc-lock-expiration>20000</jdbc-lock-expiration> <jdbc-journal-sync-period>5</jdbc-journal-sync-period> </database-store> </store> </core> </configuration> jdbc-connection-url Full JDBC connection URL for your database server. The connection URL should include all configuration parameters and the database name. jdbc-user Encrypted user name for your database server. For more information about encrypting user names and passwords for use in configuration files, see Section 5.9, "Encrypting passwords in configuration files" . jdbc-password Encrypted password for your database server. For more information about encrypting user names and passwords for use in configuration files, see Section 5.9, "Encrypting passwords in configuration files" . bindings-table-name Name of the table in which bindings data is stored. Specifying a table name enables you to share a single database between multiple servers, without interference. message-table-name Name of the table in which message data is stored. Specifying this table name enables you to share a single database between multiple servers, without interference. large-message-table-name Name of the table in which large messages and related data are persisted. In addition, if a client streams a large message in chunks, the chunks are stored in this table. Specifying this table name enables you to share a single database between multiple servers, without interference. page-store-table-name Name of the table in which paged store directory information is stored. Specifying this table name enables you to share a single database between multiple servers, without interference. node-manager-store-table-name Name of the table in which the shared store high-availability (HA) locks for live and backup brokers and other HA-related data is stored on the broker server. Specifying this table name enables you to share a single database between multiple servers, without interference. Each live-backup pair that uses shared store HA must use the same table name. You cannot share the same table between multiple (and unrelated) live-backup pairs. jdbc-driver-class-name Fully-qualified class name of the JDBC database driver. For information about supported databases, see Red Hat AMQ 7 Supported Configurations on the Red Hat Customer Portal. jdbc-network-timeout JDBC network connection timeout, in milliseconds. The default value is 20000 milliseconds. When using a JDBC for shared store HA, it is recommended to set the timeout to a value less than or equal to jdbc-lock-expiration . jdbc-lock-renew-period Length, in milliseconds, of the renewal period for the current JDBC lock. When this time elapses, the broker can renew the lock. 
It is recommended to set a value that is several times smaller than the value of jdbc-lock-expiration . This gives the broker sufficient time to extend the lease and also gives the broker time to try to renew the lock in the event of a connection problem. The default value is 2000 milliseconds. jdbc-lock-expiration Time, in milliseconds, that the current JDBC lock is considered owned (that is, acquired or renewed), even if the value of jdbc-lock-renew-period has elapsed. The broker periodically tries to renew a lock that it owns according to the value of jdbc-lock-renew-period . If the broker fails to renew the lock (for example, due to a connection problem) the broker keeps trying to renew the lock until the value of jdbc-lock-expiration has passed since the lock was last successfully acquired or renewed. An exception to the renewal behavior described above is when another broker acquires the lock. This can happen if there is a time misalignment between the Database Management System (DBMS) and the brokers, or if there is a long pause for garbage collection. In this case, the broker that originally owned the lock considers the lock lost and does not try to renew it. After the expiration time elapses, if the JDBC lock has not been renewed by the broker that currently owns it, another broker can establish a JDBC lock. The default value of jdbc-lock-expiration is 20000 milliseconds. jdbc-journal-sync-period Duration, in milliseconds, for which the broker journal synchronizes with JDBC. The default value is 5 milliseconds. 6.2.2. Configuring JDBC connection pooling If you have configured the broker for JDBC persistence, the broker uses a JDBC connection to store messages and bindings data in database tables. In the event of a JDBC connection failure, and provided that there is no active connection activity (such as a database read or write) when the failure occurs, the broker stays running and tries to re-establish the database connection. To achieve this, AMQ Broker uses JDBC connection pooling . In general, a connection pool provides a set of open connections to a specified database that can be shared between multiple applications. For a broker, if the connection between a broker and the database fails, the broker attempts to reconnect to the database using a different connection from the pool. The pool tests the new connection before the broker receives it. The following example shows how to configure JDBC connection pooling. Important If you do not explicitly configure JDBC connection pooling, the broker uses connection pooling with a default configuration. The default configuration uses values from your existing JDBC configuration. For more information, see Default connection pooling configuration . Prerequisites This example builds on the example for configuring JDBC persistence. See Section 6.2.1, "Configuring JDBC persistence" To enable connection pooling, AMQ Broker uses the Apache Commons DBCP package. Before configuring JDBC connection pooling for the broker, you should be familiar with what this package provides. For more information, see: Overview of Apache Commons DBCP Apache Commons DBCP Configuration Parameters Procedure Open the <broker-instance-dir> /etc/broker.xml configuration file. Within the database-store element that you previously added for your JDBC configuration, remove the jdbc-driver-class-name , jdbc-connection-url , jdbc-user , jdbc-password , parameters. Later in this procedure, you will replace these with corresponding DBCP configuration parameters. 
Note If you do not explicitly remove the preceding parameters, the corresponding DBCP parameters that you add later in this procedure take precedence. Within the database-store element, add a data-source-properties element. For example: <store> <database-store> <data-source-properties> </data-source-properties> <bindings-table-name>BINDINGS</bindings-table-name> <message-table-name>MESSAGES</message-table-name> <large-message-table-name>LARGE_MESSAGES</large-message-table-name> <page-store-table-name>PAGE_STORE</page-store-table-name> <node-manager-store-table-name>NODE_MANAGER_STORE</node-manager-store-table-name> <jdbc-network-timeout>10000</jdbc-network-timeout> <jdbc-lock-renew-period>2000</jdbc-lock-renew-period> <jdbc-lock-expiration>20000</jdbc-lock-expiration> <jdbc-journal-sync-period>5</jdbc-journal-sync-period> </database-store> </store> Within the new data-source-properties element, add DBCP data source properties for connection pooling. Specify key-value pairs. For example: <store> <database-store> <data-source-properties> <data-source-property key="driverClassName" value="com.mysql.jdbc.Driver" /> <data-source-property key="url" value="jdbc:mysql://localhost:3306/artemis" /> <data-source-property key="username" value="ENC(5493dd76567ee5ec269d1182397346f)"/> <data-source-property key="password" value="ENC(56a0db3b71043054269d1182397346f)"/> <data-source-property key="poolPreparedStatements" value="true" /> <data-source-property key="maxTotal" value="-1" /> </data-source-properties> <bindings-table-name>BINDINGS</bindings-table-name> <message-table-name>MESSAGES</message-table-name> <large-message-table-name>LARGE_MESSAGES</large-message-table-name> <page-store-table-name>PAGE_STORE</page-store-table-name> <node-manager-store-table-name>NODE_MANAGER_STORE</node-manager-store-table-name> <jdbc-network-timeout>10000</jdbc-network-timeout> <jdbc-lock-renew-period>2000</jdbc-lock-renew-period> <jdbc-lock-expiration>20000</jdbc-lock-expiration> <jdbc-journal-sync-period>5</jdbc-journal-sync-period> </database-store> </store> driverClassName Fully-qualified class name of the JDBC database driver. url Full JDBC connection URL for your database server. username Encrypted user name for your database server. You can also specify this value as unencrypted, plain text. For more information about encrypting user names and passwords for use in configuration files, see Section 5.9, "Encrypting passwords in configuration files" . password Encrypted password for your database server. You can also specify this value as unencrypted, plain text. For more information about encrypting user names and passwords for use in configuration files, see Section 5.9, "Encrypting passwords in configuration files" . poolPreparedStatements When the value of this parameter is set to true , the pool can have an unlimited number of cached prepared statements. This reduces initialization costs. maxTotal Maximum number of connections in the pool. When the value of this parameter is set to -1 , there is no limit. If you do not explicitly configure JDBC connection pooling, the broker uses connection pooling with a default configuration. The default configuration is described in the table. Table 6.1. 
Default connection pooling configuration The default DBCP configuration parameters and their values are as follows: driverClassName defaults to the value of the existing jdbc-driver-class-name parameter; url defaults to the value of the existing jdbc-connection-url parameter; username defaults to the value of the existing jdbc-user parameter; password defaults to the value of the existing jdbc-password parameter; poolPreparedStatements defaults to true ; maxTotal defaults to -1 . Note Reconnection works only if no client is actively sending messages to the broker. If there is an attempt to write to the database tables during reconnection, the broker fails and shuts down. Additional resources For information about databases supported by AMQ Broker, see Red Hat AMQ 7 Supported Configurations on the Red Hat Customer Portal. To learn about all of the configuration options available in the Apache Commons DBCP package, see Apache Commons DBCP Configuration Parameters . 6.3. Disabling persistence In some situations, it might be a requirement that a messaging system does not store any data. In these situations you can disable persistence on the broker. The following procedure shows how to disable persistence. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Within the core element, set the value of the persistence-enabled parameter to false . <configuration> <core> ... <persistence-enabled>false</persistence-enabled> ... </core> </configuration> No message data, bindings data, large message data, duplicate ID cache, or paging data is persisted. | [
"install libaio",
"<configuration> <core> <persistence-enabled>true</persistence-enabled> <journal-type>ASYNCIO</journal-type> <bindings-directory>./data/bindings</bindings-directory> <journal-directory>./data/journal</journal-directory> <journal-datasync>true</journal-datasync> <journal-min-files>2</journal-min-files> <journal-pool-files>-1</journal-pool-files> <journal-device-block-size>4096</journal-device-block-size> <journal-file-size>10M</journal-file-size> <journal-buffer-timeout>12000</journal-buffer-timeout> <journal-max-io>4096</journal-max-io> </core> </configuration>",
"<configuration> <core> <journal-compact-min-files>15</journal-compact-min-files> <journal-compact-percentage>25</journal-compact-percentage> </core> </configuration>",
"su - amq-broker cd <broker_instance_dir> /bin ./artemis stop",
"./artemis help data compact.",
"./artemis data compact.",
"./artemis run",
"<configuration> <core> <store> <database-store> </database-store> </store> </core> </configuration>",
"<configuration> <core> <store> <database-store> <jdbc-connection-url>jdbc:oracle:data/oracle/database-store;create=true</jdbc-connection-url> <jdbc-user>ENC(5493dd76567ee5ec269d11823973462f)</jdbc-user> <jdbc-password>ENC(56a0db3b71043054269d11823973462f)</jdbc-password> <bindings-table-name>BINDINGS_TABLE</bindings-table-name> <message-table-name>MESSAGE_TABLE</message-table-name> <large-message-table-name>LARGE_MESSAGES_TABLE</large-message-table-name> <page-store-table-name>PAGE_STORE_TABLE</page-store-table-name> <node-manager-store-table-name>NODE_MANAGER_TABLE</node-manager-store-table-name> <jdbc-driver-class-name>oracle.jdbc.driver.OracleDriver</jdbc-driver-class-name> <jdbc-network-timeout>10000</jdbc-network-timeout> <jdbc-lock-renew-period>2000</jdbc-lock-renew-period> <jdbc-lock-expiration>20000</jdbc-lock-expiration> <jdbc-journal-sync-period>5</jdbc-journal-sync-period> </database-store> </store> </core> </configuration>",
"<store> <database-store> <data-source-properties> </data-source-properties> <bindings-table-name>BINDINGS</bindings-table-name> <message-table-name>MESSAGES</message-table-name> <large-message-table-name>LARGE_MESSAGES</large-message-table-name> <page-store-table-name>PAGE_STORE</page-store-table-name> <node-manager-store-table-name>NODE_MANAGER_STORE</node-manager-store-table-name> <jdbc-network-timeout>10000</jdbc-network-timeout> <jdbc-lock-renew-period>2000</jdbc-lock-renew-period> <jdbc-lock-expiration>20000</jdbc-lock-expiration> <jdbc-journal-sync-period>5</jdbc-journal-sync-period> </database-store> </store>",
"<store> <database-store> <data-source-properties> <data-source-property key=\"driverClassName\" value=\"com.mysql.jdbc.Driver\" /> <data-source-property key=\"url\" value=\"jdbc:mysql://localhost:3306/artemis\" /> <data-source-property key=\"username\" value=\"ENC(5493dd76567ee5ec269d1182397346f)\"/> <data-source-property key=\"password\" value=\"ENC(56a0db3b71043054269d1182397346f)\"/> <data-source-property key=\"poolPreparedStatements\" value=\"true\" /> <data-source-property key=\"maxTotal\" value=\"-1\" /> </data-source-properties> <bindings-table-name>BINDINGS</bindings-table-name> <message-table-name>MESSAGES</message-table-name> <large-message-table-name>LARGE_MESSAGES</large-message-table-name> <page-store-table-name>PAGE_STORE</page-store-table-name> <node-manager-store-table-name>NODE_MANAGER_STORE</node-manager-store-table-name> <jdbc-network-timeout>10000</jdbc-network-timeout> <jdbc-lock-renew-period>2000</jdbc-lock-renew-period> <jdbc-lock-expiration>20000</jdbc-lock-expiration> <jdbc-journal-sync-period>5</jdbc-journal-sync-period> </database-store> </store>",
"<configuration> <core> <persistence-enabled>false</persistence-enabled> </core> </configuration>"
]
| https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/configuring_amq_broker/assembly-br-persisting-message-data_configuring |
Chapter 1. SSO with Kerberos Deeper Dive | Chapter 1. SSO with Kerberos Deeper Dive 1.1. What are SSO and Kerberos? A basic background of single sign-on (SSO) and Kerberos is provided in the Single Sign-On section of the JBoss EAP Security Architecture guide. 1.2. Kerberos Components Kerberos itself is a network protocol that enables authentication for users of client/server applications through the use of secret-key cryptography. Kerberos is usually used for authenticating desktop users on networks, but through the use of some additional tools, it can be used to authenticate users to web applications and to provide SSO for a set of web applications. This essentially allows users who have already authenticated on their desktop network to seamlessly access secured resources in web applications without having to reauthenticate. This concept is known as desktop-based SSO since the user is being authenticated using a desktop-based authentication mechanism, and their authentication token or ticket is being used by the web application as well. This differs from other SSO mechanisms such as browser-based SSO, which authenticates users and issues tokens all through the browser. The Kerberos protocol defines several components that it uses in authentication and authorization: Tickets A ticket is a form of a security token that Kerberos uses for issuing and making authentication and authorization decisions about principals. Authentication Service The authentication service (AS) challenges principals to log in when they first log into the network. The authentication service is responsible for issuing a ticket granting ticket (TGT), which is needed for authenticating against the ticket granting service and subsequent access to secured services and resources. Ticket Granting Service The ticket granting service (TGS) is responsible for issuing service tickets and specific session information to principals and the target server they are attempting to access. This is based on the TGT and destination information provided by the principal. This service ticket and session information is then used to establish a connection to the destination and access the desired secured service or resource. Key Distribution Center The key distribution center (KDC) is the component that houses both the TGS and AS. The KDC, along with the client, or principal, and server, or secured service, are the three pieces required to perform Kerberos authentication. Ticket Granting Ticket A ticket granting ticket (TGT) is a type of ticket issued to a principal by the AS. The TGT is granted once a principal successfully authenticates against the AS using their username and password. The TGT is cached locally by the client, but is encrypted such that only the KDC can read it and is unreadable by the client. This allows the AS to securely store authorization data and other information in the TGT for use by the TGS and enables the TGS to make authorization decisions using this data. Service Ticket A service ticket (ST) is a type of ticket issued to a principal by the TGS based on their TGT and the intended destination. The principal provides the TGS with their TGT and the intended destination, and the TGS verifies the principal has access to the destination based on the authorization data in the TGT. If successful, the TGS issues an ST to the client for both the client as well as the destination server which is the server containing the secured service or resource. This grants the client access to the destination server. 
The ST, which is cached by the client and readable by both the client and server, also contains session information that allows the client and server to communicate securely. Note There is a tight relationship between Kerberos and the DNS settings of the network. For instance, certain assumptions are made when clients access the KDC based on the name of the host it is running on. As a result, it is important that all DNS settings in addition to the Kerberos settings are properly configured to ensure that clients can connect. 1.3. Additional Components In addition to the Kerberos components, several other items are needed to enable Kerberos SSO with JBoss EAP. 1.3.1. SPNEGO Simple and Protected GSSAPI Negotiation Mechanism (SPNEGO) provides a mechanism for extending a Kerberos-based single sign-on environment for use in web applications. SPNEGO is an authentication method used by a client application to authenticate itself to the server. This technology is used when the client application and server are trying to communicate with each other, but neither are sure of the authentication protocol the other supports. SPNEGO determines the common GSSAPI mechanisms between the client application and the server and then dispatches all further security operations to it. When an application on a client computer, such as a web browser, attempts to access a protected page on the web server, the server responds that authorization is required. The application then requests a service ticket from the Kerberos KDC. After the ticket is obtained, the application wraps it in a request formatted for SPNEGO, and sends it back to the web application, through the browser. The web container running the deployed web application unpacks the request and attempts to authenticate the ticket. Upon successful authentication, access is granted. SPNEGO works with all types of Kerberos providers, including the Kerberos service included in Red Hat Enterprise Linux and the Kerberos server, which is an integral part of Microsoft Active Directory. 1.3.2. JBoss Negotiation JBoss Negotiation is a framework that ships with JBoss EAP that provides an authenticator and Jakarta Authentication login module to support SPNEGO in JBoss EAP. JBoss Negotiation is only used with the legacy security subsystem and legacy core management authentication. For more information on Jakarta Authentication login modules, please see the Declarative Security and Jakarta Authentication and Security Domains sections of the JBoss EAP Security Architecture guide. Note When using JBoss Negotiation to secure certain applications, such as REST web services, one or more sessions may be created and left open for the timeout period, which defaults to 30 minutes, when a client makes a request. This differs from the expected behavior of securing an application using basic authentication, which would leave no open sessions. JBoss Negotiation is implemented to use sessions to maintain the state of the negotiation/connection so the creation of these sessions is expected behavior. 1.4. Kerberos Integration Kerberos is integrated with many operating systems including Linux distributions such as Red Hat Enterprise Linux. Kerberos is also an integral part of Microsoft Active Directory and is supported by Red Hat Directory Server and Red Hat IDM. 1.5. How Does Kerberos Provide SSO for JBoss EAP? Kerberos provides desktop-based SSO by issuing tickets from a KDC for use by the client and server. 
JBoss EAP can integrate with this existing process by using those same tickets in its own authentication and authorization process. Before trying to understand how JBoss EAP can reuse those tickets, it is best to first understand in greater detail how these tickets are issued as well as how authentication and authorization works with Kerberos in desktop-based SSO without JBoss EAP. 1.5.1. Authentication and Authorization with Kerberos in Desktop-Based SSO To provide authentication and authorization, Kerberos relies on a third party, the KDC, to provide authentication and authorization decisions for clients accessing servers. These decisions happen in three steps: Authentication exchange. When a principal first accesses the network or attempts to access a secured service without a ticket granting ticket (TGT), they are challenged to authenticate against the authentication service (AS) with their credentials. The AS validates the user's provided credentials against the configured identity store, and upon successful authentication, the principal is issued a TGT which is cached by the client. The TGT also contains some session information so future communication between the client and KDC is secured. Ticket granting, or authorization, exchange. Once the principal has been issued a TGT, they may attempt to access secured services or resources. The principal sends a request to the ticket granting service (TGS), passing the TGT it was issued by the KDC and requesting a service ticket (ST) for a specific destination. The TGS checks the TGT provided by the principal and verifies they have proper permissions to access the requested resource. If successful, the TGS issues an ST for the principal to access that specific destination. The TGS also creates session information for both the client as well as the destination server to allow for secure communication between the two. This session information is encrypted separately such that the client and server can only decrypt its own session information using long-term keys separately provided by the KDC to each, from transactions. The TGS then responds to the client with the ST which includes the session information for both the client and server. Accessing the server. Now that the principal has an ST for the secured service as well as a mechanism for secure communication to that server, the client may now establish a connection and attempt to access the secured resource. The client starts by passing the ST to the destination server. This ST contains the server component of the session information which it received from the TGS for that destination. The server attempts to decrypt the session information passed to it by the client, using its long-term key from the KDC. If it succeeds, the client has been successfully authenticated to the server and the server is also considered authenticated to the client. At this point, trust has been established and secured communication between the client and server may proceed. Note Despite the fact that unauthorized principals cannot actually use a TGT, a principal will only be issued a TGT after they first successfully authenticate with the AS. Not only does this ensure that only properly authorized principals are ever issued a TGT, it also reduces the ability for unauthorized third parties to obtain TGTs in an attempt to compromise or exploit them, for example using offline dictionary or brute-force attacks. 1.5.2. 
Kerberos and JBoss EAP JBoss EAP can integrate with an existing Kerberos desktop-based SSO environment to allow for those same tickets to provide access to web applications hosted on JBoss EAP instances. In a typical setup, a JBoss EAP instance would be configured to use Kerberos authentication with SPNEGO using either the legacy security subsystem or the elytron subsystem. An application, configured to use SPNEGO authentication, is deployed to that JBoss EAP instance. A user logs in to a desktop, which is governed by Kerberos, and completes an authentication exchange with the KDC. The user then attempts to access a secured resource in the deployed application on that JBoss EAP instance directly using a web browser. JBoss EAP responds that authorization is required to access the secured resource. The web browser obtains the user's TGT ticket and then performs the ticket granting, or authorization, exchange with the KDC to validate the user and obtain a service ticket. Once the ST is returned to the browser, it wraps the ST in a request formatted for SPNEGO, and sends it back to the web application running on JBoss EAP. JBoss EAP then unpacks the SPNEGO request and performs the authentication using the either the legacy security subsystem or elytron subsystem. If the authentication succeeds, the user is granted access to the secured resource. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/how_to_set_up_sso_with_kerberos/krb_sso_intro |
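The SPNEGO exchange described in this chapter can be exercised from a client shell before any application-level debugging. The realm, host names, and application path below are placeholders for illustration, not values defined by this guide:

# Authentication exchange: obtain and cache a TGT from the KDC
kinit [email protected]

# Confirm the TGT is in the local credential cache
klist

# Let curl perform the SPNEGO negotiation against a protected page on JBoss EAP.
# curl requests the HTTP/ service ticket on your behalf and wraps it in a SPNEGO token.
curl --negotiate -u : -v http://eap.example.com:8080/secured-app/

# A second klist should now also show a service ticket such as HTTP/[email protected]
klist

If the final request returns the protected page instead of a 401 response, the ticket granting and SPNEGO steps are working end to end.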
Chapter 25. Using Ansible to manage IdM service vaults: storing and retrieving secrets | Chapter 25. Using Ansible to manage IdM service vaults: storing and retrieving secrets This section shows how an administrator can use the ansible-freeipa vault module to securely store a service secret in a centralized location. The vault used in the example is asymmetric, which means that to use it, the administrator needs to perform the following steps: Generate a private key using, for example, the openssl utility. Generate a public key based on the private key. The service secret is encrypted with the public key when an administrator archives it into the vault. Afterwards, a service instance hosted on a specific machine in the domain retrieves the secret using the private key. Only the service and the administrator are allowed to access the secret. If the secret is compromised, the administrator can replace it in the service vault and then redistribute it to those individual service instances that have not been compromised. Prerequisites The Key Recovery Authority (KRA) Certificate System component has been installed on one or more of the servers in your IdM domain. For details, see Installing the Key Recovery Authority in IdM . This section includes these procedures: Ensuring the presence of an asymmetric service vault in IdM using Ansible Storing an IdM service secret in an asymmetric vault using Ansible Retrieving a service secret for an IdM service using Ansible Changing an IdM service vault secret when compromised using Ansible In the procedures: admin is the administrator who manages the service password. private-key-to-an-externally-signed-certificate.pem is the file containing the service secret, in this case a private key to an externally signed certificate. Do not confuse this private key with the private key used to retrieve the secret from the vault. secret_vault is the vault created to store the service secret. HTTP/webserver1.idm.example.com is the service that is the owner of the vault. HTTP/webserver2.idm.example.com and HTTP/webserver3.idm.example.com are the vault member services. service-public.pem is the service public key used to encrypt the secret stored in secret_vault . service-private.pem is the service private key used to decrypt the secret stored in secret_vault . 25.1. Ensuring the presence of an asymmetric service vault in IdM using Ansible Follow this procedure to use an Ansible playbook to create a service vault container with one or more private vaults to securely store sensitive information. In the example used in the procedure below, the administrator creates an asymmetric vault named secret_vault . This ensures that the vault members have to authenticate using a private key to retrieve the secret in the vault. The vault members will be able to retrieve the file from any IdM client. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.14 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You know the IdM administrator password.
Procedure Navigate to the /usr/share/doc/ansible-freeipa/playbooks/vault directory: Obtain the public key of the service instance. For example, using the openssl utility: Generate the service-private.pem private key. Generate the service-public.pem public key based on the private key. Optional: Create an inventory file if it does not exist, for example inventory.file : Open your inventory file and define the IdM server that you want to configure in the [ipaserver] section. For example, to instruct Ansible to configure server.idm.example.com , enter: Make a copy of the ensure-asymmetric-vault-is-present.yml Ansible playbook file. For example: Open the ensure-asymmetric-vault-is-present-copy.yml file for editing. Add a task that copies the service-public.pem public key from the Ansible controller to the server.idm.example.com server. Modify the rest of the file by setting the following variables in the ipavault task section: Set the ipaadmin_password variable to the IdM administrator password. Define the name of the vault using the name variable, for example secret_vault . Set the vault_type variable to asymmetric . Set the service variable to the principal of the service that owns the vault, for example HTTP/webserver1.idm.example.com . Set the public_key_file to the location of your public key. This is the modified Ansible playbook file for the current example: Save the file. Run the playbook: 25.2. Adding member services to an asymmetric vault using Ansible Follow this procedure to use an Ansible playbook to add member services to a service vault so that they can all retrieve the secret stored in the vault. In the example used in the procedure below, the IdM administrator adds the HTTP/webserver2.idm.example.com and HTTP/webserver3.idm.example.com service principals to the secret_vault vault that is owned by HTTP/webserver1.idm.example.com . Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.14 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You know the IdM administrator password. You have created an asymmetric vault to store the service secret. Procedure Navigate to the /usr/share/doc/ansible-freeipa/playbooks/vault directory: Optional: Create an inventory file if it does not exist, for example inventory.file : Open your inventory file and define the IdM server that you want to configure in the [ipaserver] section. For example, to instruct Ansible to configure server.idm.example.com , enter: Make a copy of the data-archive-in-asymmetric-vault.yml Ansible playbook file. For example: Open the data-archive-in-asymmetric-vault-copy.yml file for editing. Modify the file by setting the following variables in the ipavault task section: Set the ipaadmin_password variable to the IdM administrator password. Set the name variable to the name of the vault, for example secret_vault . Set the service variable to the service owner of the vault, for example HTTP/webserver1.idm.example.com . Define the services that you want to have access to the vault secret using the services variable. Set the action variable to member . 
This the modified Ansible playbook file for the current example: Save the file. Run the playbook: 25.3. Storing an IdM service secret in an asymmetric vault using Ansible Follow this procedure to use an Ansible playbook to store a secret in a service vault so that it can be later retrieved by the service. In the example used in the procedure below, the administrator stores a PEM file with the secret in an asymmetric vault named secret_vault . This ensures that the service will have to authenticate using a private key to retrieve the secret from the vault. The vault members will be able to retrieve the file from any IdM client. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.14 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You know the IdM administrator password. You have created an asymmetric vault to store the service secret. The secret is stored locally on the Ansible controller, for example in the /usr/share/doc/ansible-freeipa/playbooks/vault/private-key-to-an-externally-signed-certificate.pem file. Procedure Navigate to the /usr/share/doc/ansible-freeipa/playbooks/vault directory: Optional: Create an inventory file if it does not exist, for example inventory.file : Open your inventory file and define the IdM server that you want to configure in the [ipaserver] section. For example, to instruct Ansible to configure server.idm.example.com , enter: Make a copy of the data-archive-in-asymmetric-vault.yml Ansible playbook file. For example: Open the data-archive-in-asymmetric-vault-copy.yml file for editing. Modify the file by setting the following variables in the ipavault task section: Set the ipaadmin_password variable to the IdM administrator password. Set the name variable to the name of the vault, for example secret_vault . Set the service variable to the service owner of the vault, for example HTTP/webserver1.idm.example.com . Set the in variable to "{{ lookup('file', 'private-key-to-an-externally-signed-certificate.pem') | b64encode }}" . This ensures that Ansible retrieves the file with the private key from the working directory on the Ansible controller rather than from the IdM server. Set the action variable to member . This the modified Ansible playbook file for the current example: Save the file. Run the playbook: 25.4. Retrieving a service secret for an IdM service using Ansible Follow this procedure to use an Ansible playbook to retrieve a secret from a service vault on behalf of the service. In the example used in the procedure below, running the playbook retrieves a PEM file with the secret from an asymmetric vault named secret_vault , and stores it in the specified location on all the hosts listed in the Ansible inventory file as ipaservers . The services authenticate to IdM using keytabs, and they authenticate to the vault using a private key. You can retrieve the file on behalf of the service from any IdM client on which ansible-freeipa is installed. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.14 or later. 
You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You know the IdM administrator password. You have created an asymmetric vault to store the service secret. You have archived the secret in the vault . You have stored the private key used to retrieve the service vault secret in the location specified by the private_key_file variable on the Ansible controller. Procedure Navigate to the /usr/share/doc/ansible-freeipa/playbooks/vault directory: Optional: Create an inventory file if it does not exist, for example inventory.file : Open your inventory file and define the following hosts: Define your IdM server in the [ipaserver] section. Define the hosts onto which you want to retrieve the secret in the [webservers] section. For example, to instruct Ansible to retrieve the secret to webserver1.idm.example.com , webserver2.idm.example.com , and webserver3.idm.example.com , enter: Make a copy of the retrieve-data-asymmetric-vault.yml Ansible playbook file. For example: Open the retrieve-data-asymmetric-vault-copy.yml file for editing. Modify the file by setting the following variables in the ipavault task section: Set the ipaadmin_password variable to your IdM administrator password. Set the name variable to the name of the vault, for example secret_vault . Set the service variable to the service owner of the vault, for example HTTP/webserver1.idm.example.com . Set the private_key_file variable to the location of the private key used to retrieve the service vault secret. Set the out variable to the location on the IdM server where you want to retrieve the private-key-to-an-externally-signed-certificate.pem secret, for example the current working directory. Set the action variable to member . This the modified Ansible playbook file for the current example: Add a section to the playbook that retrieves the data file from the IdM server to the Ansible controller: Add a section to the playbook that transfers the retrieved private-key-to-an-externally-signed-certificate.pem file from the Ansible controller on to the webservers listed in the webservers section of the inventory file: Save the file. Run the playbook: 25.5. Changing an IdM service vault secret when compromised using Ansible Follow this procedure to reuse an Ansible playbook to change the secret stored in a service vault when a service instance has been compromised. The scenario in the following example assumes that on webserver3.idm.example.com , the retrieved secret has been compromised, but not the key to the asymmetric vault storing the secret. In the example, the administrator reuses the Ansible playbooks used when storing a secret in an asymmetric vault and retrieving a secret from the asymmetric vault onto IdM hosts . At the start of the procedure, the IdM administrator stores a new PEM file with a new secret in the asymmetric vault, adapts the inventory file so as not to retrieve the new secret on to the compromised web server, webserver3.idm.example.com , and then re-runs the two procedures. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.14 or later. 
You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You know the IdM administrator password. You have created an asymmetric vault to store the service secret. You have generated a new httpd key for the web services running on IdM hosts to replace the compromised old key. The new httpd key is stored locally on the Ansible controller, for example in the /usr/share/doc/ansible-freeipa/playbooks/vault/private-key-to-an-externally-signed-certificate.pem file. Procedure Navigate to the /usr/share/doc/ansible-freeipa/playbooks/vault directory: Open your inventory file and make sure that the following hosts are defined correctly: The IdM server in the [ipaserver] section. The hosts onto which you want to retrieve the secret in the [webservers] section. For example, to instruct Ansible to retrieve the secret to webserver1.idm.example.com and webserver2.idm.example.com , enter: Important Make sure that the list does not contain the compromised webserver, in the current example webserver3.idm.example.com . Make a copy of the data-archive-in-asymmetric-vault.yml Ansible playbook file, for example: Open the data-archive-in-asymmetric-vault-copy.yml file for editing. Modify the file by setting the following variables in the ipavault task section: Set the ipaadmin_password variable to the IdM administrator password. Set the name variable to the name of the vault, for example secret_vault . Set the service variable to the service owner of the vault, for example HTTP/webserver.idm.example.com . Set the in variable to "{{ lookup('file', 'new-private-key-to-an-externally-signed-certificate.pem') | b64encode }}" . This ensures that Ansible retrieves the file with the private key from the working directory on the Ansible controller rather than from the IdM server. Set the action variable to member . This the modified Ansible playbook file for the current example: Save the file. Run the playbook: Open the retrieve-data-asymmetric-vault-copy.yml file for editing. Modify the file by setting the following variables in the ipavault task section: Set the ipaadmin_password variable to your IdM administrator password. Set the name variable to the name of the vault, for example secret_vault . Set the service variable to the service owner of the vault, for example HTTP/webserver1.idm.example.com . Set the private_key_file variable to the location of the private key used to retrieve the service vault secret. Set the out variable to the location on the IdM server where you want to retrieve the new-private-key-to-an-externally-signed-certificate.pem secret, for example the current working directory. Set the action variable to member . This the modified Ansible playbook file for the current example: Add a section to the playbook that retrieves the data file from the IdM server to the Ansible controller: Add a section to the playbook that transfers the retrieved new-private-key-to-an-externally-signed-certificate.pem file from the Ansible controller on to the webservers listed in the webservers section of the inventory file: Save the file. Run the playbook: 25.6. 
Additional resources See the README-vault.md Markdown file in the /usr/share/doc/ansible-freeipa/ directory. See the sample playbooks in the /usr/share/doc/ansible-freeipa/playbooks/vault/ directory. | [
"cd /usr/share/doc/ansible-freeipa/playbooks/vault",
"openssl genrsa -out service-private.pem 2048 Generating RSA private key, 2048 bit long modulus .+++ ...........................................+++ e is 65537 (0x10001)",
"openssl rsa -in service-private.pem -out service-public.pem -pubout writing RSA key",
"touch inventory.file",
"[ipaserver] server.idm.example.com",
"cp ensure-asymmetric-vault-is-present.yml ensure-asymmetric-service-vault-is-present-copy.yml",
"--- - name: Tests hosts: ipaserver gather_facts: false vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Copy public key to ipaserver. copy: src: /path/to/service-public.pem dest: /usr/share/doc/ansible-freeipa/playbooks/vault/service-public.pem mode: 0600 - name: Add data to vault, from a LOCAL file. ipavault: ipaadmin_password: \"{{ ipaadmin_password }}\" name: secret_vault vault_type: asymmetric service: HTTP/webserver1.idm.example.com public_key_file: /usr/share/doc/ansible-freeipa/playbooks/vault/service-public.pem",
"ansible-playbook --vault-password-file=password_file -v -i inventory.file ensure-asymmetric-service-vault-is-present-copy.yml",
"cd /usr/share/doc/ansible-freeipa/playbooks/vault",
"touch inventory.file",
"[ipaserver] server.idm.example.com",
"cp data-archive-in-asymmetric-vault.yml add-services-to-an-asymmetric-vault.yml",
"--- - name: Tests hosts: ipaserver gather_facts: false vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - ipavault: ipaadmin_password: \"{{ ipaadmin_password }}\" name: secret_vault service: HTTP/webserver1.idm.example.com services: - HTTP/webserver2.idm.example.com - HTTP/webserver3.idm.example.com action: member",
"ansible-playbook --vault-password-file=password_file -v -i inventory.file add-services-to-an-asymmetric-vault.yml",
"cd /usr/share/doc/ansible-freeipa/playbooks/vault",
"touch inventory.file",
"[ipaserver] server.idm.example.com",
"cp data-archive-in-asymmetric-vault.yml data-archive-in-asymmetric-vault-copy.yml",
"--- - name: Tests hosts: ipaserver gather_facts: false vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - ipavault: ipaadmin_password: \"{{ ipaadmin_password }}\" name: secret_vault service: HTTP/webserver1.idm.example.com in: \"{{ lookup('file', 'private-key-to-an-externally-signed-certificate.pem') | b64encode }}\" action: member",
"ansible-playbook --vault-password-file=password_file -v -i inventory.file data-archive-in-asymmetric-vault-copy.yml",
"cd /usr/share/doc/ansible-freeipa/playbooks/vault",
"touch inventory.file",
"[ipaserver] server.idm.example.com [webservers] webserver1.idm.example.com webserver2.idm.example.com webserver3.idm.example.com",
"cp retrieve-data-asymmetric-vault.yml retrieve-data-asymmetric-vault-copy.yml",
"--- - name: Retrieve data from vault hosts: ipaserver become: no gather_facts: false vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Retrieve data from the service vault ipavault: ipaadmin_password: \"{{ ipaadmin_password }}\" name: secret_vault service: HTTP/webserver1.idm.example.com vault_type: asymmetric private_key: \"{{ lookup('file', 'service-private.pem') | b64encode }}\" out: private-key-to-an-externally-signed-certificate.pem state: retrieved",
"--- - name: Retrieve data from vault hosts: ipaserver become: no gather_facts: false tasks: [...] - name: Retrieve data file fetch: src: private-key-to-an-externally-signed-certificate.pem dest: ./ flat: true mode: 0600",
"--- - name: Send data file to webservers become: no gather_facts: no hosts: webservers tasks: - name: Send data to webservers copy: src: private-key-to-an-externally-signed-certificate.pem dest: /etc/pki/tls/private/httpd.key mode: 0444",
"ansible-playbook --vault-password-file=password_file -v -i inventory.file retrieve-data-asymmetric-vault-copy.yml",
"cd /usr/share/doc/ansible-freeipa/playbooks/vault",
"[ipaserver] server.idm.example.com [webservers] webserver1.idm.example.com webserver2.idm.example.com",
"cp data-archive-in-asymmetric-vault.yml data-archive-in-asymmetric-vault-copy.yml",
"--- - name: Tests hosts: ipaserver gather_facts: false vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - ipavault: ipaadmin_password: \"{{ ipaadmin_password }}\" name: secret_vault service: HTTP/webserver.idm.example.com in: \"{{ lookup('file', 'new-private-key-to-an-externally-signed-certificate.pem') | b64encode }}\" action: member",
"ansible-playbook --vault-password-file=password_file -v -i inventory.file data-archive-in-asymmetric-vault-copy.yml",
"--- - name: Retrieve data from vault hosts: ipaserver become: no gather_facts: false vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Retrieve data from the service vault ipavault: ipaadmin_password: \"{{ ipaadmin_password }}\" name: secret_vault service: HTTP/webserver1.idm.example.com vault_type: asymmetric private_key: \"{{ lookup('file', 'service-private.pem') | b64encode }}\" out: new-private-key-to-an-externally-signed-certificate.pem state: retrieved",
"--- - name: Retrieve data from vault hosts: ipaserver become: true gather_facts: false tasks: [...] - name: Retrieve data file fetch: src: new-private-key-to-an-externally-signed-certificate.pem dest: ./ flat: true mode: 0600",
"--- - name: Send data file to webservers become: true gather_facts: no hosts: webservers tasks: - name: Send data to webservers copy: src: new-private-key-to-an-externally-signed-certificate.pem dest: /etc/pki/tls/private/httpd.key mode: 0444",
"ansible-playbook --vault-password-file=password_file -v -i inventory.file retrieve-data-asymmetric-vault-copy.yml"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/using_ansible_to_install_and_manage_identity_management/using-ansible-to-manage-idm-service-vaults-storing-and-retrieving-secrets_using-ansible-to-install-and-manage-identity-management |
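As a quick sanity check after running these playbooks, the vault and the distributed key can be inspected from the command line. The commands below are a verification sketch; the certificate path is an assumption and is not created by the playbooks in this chapter:

# On an IdM client, confirm the vault exists and is owned by the web service
kinit admin
ipa vault-show secret_vault --service HTTP/webserver1.idm.example.com

# On a web server that received the key, compare the public-key digest of the
# retrieved private key with the digest of the externally signed certificate
openssl pkey -in /etc/pki/tls/private/httpd.key -pubout -outform der | sha256sum
openssl x509 -in /etc/pki/tls/certs/httpd.crt -noout -pubkey | openssl pkey -pubin -pubout -outform der | sha256sum

Matching digests indicate that the key stored in secret_vault corresponds to the certificate the web server presents.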
11.2. Configuring an ACME Database | 11.2. Configuring an ACME Database This section describes how to configure a database for the ACME responder. The database configuration is located at /etc/pki/pki-tomcat/acme/database.conf . You can configure the database via the command line using the pki-server acme-database-mod command. Invoking this command without any parameters launches an interactive mode, for example: Invoking the command with the --type parameter creates a new configuration based on the specified type. Invoking the command with other parameters updates the specified parameters. Certain ACME configuration properties are stored in the database, enabling you to configure all ACME responders in the cluster consistently. By default, the ACME responder directly accesses the database when retrieving or updating the ACME configuration properties, which may increase the load on the database. Some databases might provide an ACME configuration monitor to reduce this load. 11.2.1. Configuring a DS Database You can configure the ACME responder to use a DS database. A sample DS database configuration is available at /usr/share/pki/acme/database/ds/database.conf . To configure a DS database: First, add the ACME DS schema by importing the /usr/share/pki/acme/database/ds/schema.ldif file with the following command: Next, prepare an LDIF file to create the ACME subtree. A sample LDIF file is available at /usr/share/pki/acme/database/ds/create.ldif . This example uses dc=acme,dc=pki,dc=example,dc=com as the base DN. Import the LDIF file using the ldapadd command: Copy the sample database configuration file from /usr/share/pki/acme/database/ds/database.conf into the /etc/pki/pki-tomcat/acme directory, or execute the following command to customize some of the parameters: Customize the configuration as needed: In a standalone ACME deployment, the database.conf should look like the following: In a shared CA and ACME deployment, the database.conf should look like the following: The DS database provides an ACME configuration monitor using search persistence. You can enable it by setting the following parameter: monitor.enabled=true | [
"pki-server acme-database-mod The current value is displayed in the square brackets. To keep the current value, simply press Enter. To change the current value, enter the new value. To remove the current value, enter a blank space. Enter the type of the database. Available types: ds, in-memory, ldap, openldap, postgresql. Database Type: ds Enter the location of the LDAP server (e.g. ldap://localhost.localdomain:389). Server URL [ldap://localhost.localdomain:389]: Enter the authentication type. Available types: BasicAuth, SslClientAuth. Authentication Type [BasicAuth]: Enter the bind DN. Bind DN [cn=Directory Manager]: Enter the bind password. Bind Password [********]: Enter the base DN for the ACME subtree. Base DN [dc=acme,dc=pki,dc=example,dc=com]:",
"ldapmodify -h USDHOSTNAME -x -D \"cn=Directory Manager\" -w Secret.123 -f /usr/share/pki/acme/database/ds/schema.ldif",
"ldapadd -h USDHOSTNAME -x -D \"cn=Directory Manager\" -w Secret.123 -f /usr/share/pki/acme/database/ds/create.ldif",
"pki-server acme-database-mod --type ds -DbindPassword=Secret.123",
"class=org.example.acme.database.DSDatabase url=ldap://<hostname>:389 authType=BasicAuth bindDN=cn=Directory Manager bindPassword=Secret.123 baseDN=dc=acme,dc=pki,dc=example,dc=com",
"class=org.example.acme.database.DSDatabase configFile=conf/ca/CS.cfg baseDN=dc=acme,dc=pki,dc=example,dc=com"
]
| https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/configuring_acme |
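After the database is configured, a few commands can confirm that the responder can actually use it. These steps are a sketch rather than part of the documented procedure; the ACME directory URL assumes the default deployment path:

# Verify that the ACME base entry exists in the DS instance
ldapsearch -x -H ldap://$HOSTNAME -D "cn=Directory Manager" -w Secret.123 -b "dc=acme,dc=pki,dc=example,dc=com" -s base dn

# Restart the PKI server so the responder re-reads /etc/pki/pki-tomcat/acme/database.conf
pki-server restart

# Check that the responder answers on its directory endpoint
curl -s http://$HOSTNAME:8080/acme/directory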
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_the_amq_spring_boot_starter/making-open-source-more-inclusive |
Chapter 8. Disabling anonymous binds | Chapter 8. Disabling anonymous binds If a user attempts to connect to Directory Server without supplying any credentials, this operation is called anonymous bind . Anonymous binds simplify searches and read operations, such as finding a phone number in the directory by not requiring users to authenticate first. However, anonymous binds can also be a security risk, because users without an account are able to access the data. Warning By default, anonymous binds are enabled in Directory Server for search and read operations. This allows unauthorized access to user entries as well as configuration entries, such as the root directory server entry (DSE). 8.1. Disabling anonymous binds using the command line To increase the security, you can disable anonymous binds. Procedure Set the nsslapd-allow-anonymous-access configuration parameter to off : # dsconf -D " cn=Directory Manager " ldap://server.example.com config replace nsslapd-allow-anonymous-access=off Verification Run a search without specifying a user account: # ldapsearch -H ldap://server.example.com -b " dc=example,dc=com " -x ldap_bind: Inappropriate authentication (48) additional info: Anonymous access is not allowed 8.2. Disabling anonymous binds using the web console To increase the security, you can disable anonymous binds. Prerequisites You are logged in to the instance in the web console. Procedure Navigate to Server Server Settings Advanced Settings . Set the Allow Anonymous Access parameter to off . Click Save . Verification Run a search without specifying a user account: # ldapsearch -H ldap://server.example.com -b " dc=example,dc=com " -x ldap_bind: Inappropriate authentication (48) additional info: Anonymous access is not allowed | [
"dsconf -D \" cn=Directory Manager \" ldap://server.example.com config replace nsslapd-allow-anonymous-access=off",
"ldapsearch -H ldap://server.example.com -b \" dc=example,dc=com \" -x ldap_bind: Inappropriate authentication (48) additional info: Anonymous access is not allowed",
"ldapsearch -H ldap://server.example.com -b \" dc=example,dc=com \" -x ldap_bind: Inappropriate authentication (48) additional info: Anonymous access is not allowed"
]
| https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/user_management_and_authentication/assembly_disabling-anonymous-binds |
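To confirm that legitimate clients are unaffected, an authenticated bind can be tested alongside the anonymous one. The user DN below is an example entry, not one created in this chapter:

# An authenticated simple bind still succeeds after anonymous access is disabled
ldapsearch -H ldap://server.example.com -x -D "uid=user_name,ou=People,dc=example,dc=com" -W -b "dc=example,dc=com" "(uid=user_name)"

# Review the current value of the setting
dsconf -D "cn=Directory Manager" ldap://server.example.com config get nsslapd-allow-anonymous-access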
26.3. Installing a CA Certificate Manually | 26.3. Installing a CA Certificate Manually To install a new certificate to IdM, use the ipa-cacert-manage install command. For example, the command allows you to change the current certificate when it is nearing its expiration date. Run the ipa-cacert-manage install command, and specify the path to the file containing the certificate. The command accepts PEM-formatted certificate files: The certificate is now present in the LDAP certificate store. Run the ipa-certupdate utility on all servers and clients to update them with the information about the new certificate from LDAP. You must run ipa-certupdate on every server and client separately. Important Always run ipa-certupdate after manually installing a certificate. If you do not, the certificate will not be distributed to the other machines. The ipa-cacert-manage install command can take the following options: -n gives the nickname of the certificate; the default value is the subject name of the certificate -t specifies the trust flags for the certificate in the certutil format; the default value is C,, . For information about the format in which to specify the trust flags, see the ipa-cacert-manage (1) man page. | [
"ipa-cacert-manage install /etc/group/cert.pem"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/manual-cert-install |
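Putting the options together, a typical invocation might look like the following; the file path and nickname are placeholders:

# Install a PEM-formatted CA certificate with an explicit nickname and trust flags
ipa-cacert-manage install /root/external-ca.pem -n ExternalCA -t C,,

# Distribute the certificate; run this on every IdM server and client
ipa-certupdate

# Confirm that the certificate reached the local IdM NSS database
certutil -L -d /etc/ipa/nssdb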
Getting Started with Red Hat OpenShift API Management | Getting Started with Red Hat OpenShift API Management Red Hat OpenShift API Management 1 Getting started with your Red Hat OpenShift API Management installation. Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_openshift_api_management/1/html/getting_started_with_red_hat_openshift_api_management/index |
Appendix B. iSCSI Gateway Variables | Appendix B. iSCSI Gateway Variables iSCSI Gateway General Variables seed_monitor Purpose Each iSCSI gateway needs access to the Ceph storage cluster for RADOS and RBD calls. This means the iSCSI gateway must have an appropriate /etc/ceph/ directory defined. The seed_monitor host is used to populate the iSCSI gateway's /etc/ceph/ directory. gateway_keyring Purpose Define a custom keyring name. perform_system_checks Purpose This is a Boolean value that checks for multipath and LVM configuration settings on each iSCSI gateway. It must be set to true for at least the first run to ensure the multipathd daemon and LVM are configured properly. iSCSI Gateway RBD-TARGET-API Variables api_user Purpose The user name for the API. The default is admin . api_password Purpose The password for using the API. The default is admin . api_port Purpose The TCP port number for using the API. The default is 5000 . api_secure Purpose Value can be true or false . The default is false . loop_delay Purpose Controls the sleeping interval in seconds for polling the iSCSI management object. The default value is 1 . trusted_ip_list Purpose A list of IPv4 or IPv6 addresses that have access to the API. By default, only the iSCSI gateway hosts have access. | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/block_device_guide/iscsi-gateway-variables_block |
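The variables above are typically collected in an Ansible variables file for the gateway group. The file name, grouping, and values below are a hypothetical sketch, not a tested configuration:

# Example group_vars file for the iSCSI gateways; adjust every value for your cluster
cat > group_vars/iscsigws.yml << 'EOF'
seed_monitor: mon01.example.com
gateway_keyring: ceph.client.igw.keyring
perform_system_checks: true

# rbd-target-api settings
api_user: admin
api_password: redhat
api_port: 5000
api_secure: false
loop_delay: 1
trusted_ip_list: 192.168.122.101,192.168.122.102
EOF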
7.85. ipset | 7.85. ipset 7.85.1. RHBA-2015:1353 - ipset bug fix update Updated ipset packages that fix one bug are now available for Red Hat Enterprise Linux 6. The ipset packages provide IP sets, a framework inside the Linux 2.4.x and 2.6.x kernel, which can be administered by the ipset utility. Depending on the type, an IP set can currently store IP addresses, TCP/UDP port numbers or IP addresses with MAC addresses in a way that ensures high speed when matching an entry against a set. Bug Fix BZ# 1121665 When the user was trying to create a program using the ipset library, linking failed with an undefined reference to the ipset_port_usage() function. With this update, ipset_port_usage() is now provided by the library and a program using the ipset library is now compiled successfully. Users of ipset are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-ipset |
Chapter 24. Managing Instance Groups | Chapter 24. Managing Instance Groups An Instance Group enables you to group instances in a clustered environment. Policies dictate how instance groups behave and how jobs are executed. The following view displays the capacity levels based on policy algorithms: Additional resources For more information about the policy or rules associated with instance groups, see the Instance Groups section of the Automation controller Administration Guide . For more information on connecting your instance group to a container, see Container Groups . 24.1. Creating an instance group Use the following procedure to create a new instance group. Procedure From the navigation panel, select Administration Instance Groups . Select Add from the Add instance group list. Enter the appropriate details into the following fields: Name : Names must be unique and must not be named "controller". Policy instance minimum : Enter the minimum number of instances to automatically assign to this group when new instances come online. Policy instance percentage : Use the slider to select a minimum percentage of instances to automatically assign to this group when new instances come online. Note Policy instance fields are not required to create a new instance group. If you do not specify values, then the Policy instance minimum and Policy instance percentage default to 0. Max concurrent jobs : Specify the maximum number of jobs that can run concurrently in this instance group. Max forks : Specify the maximum number of forks that can be consumed concurrently in this instance group. Note The default value of 0 for Max concurrent jobs and Max forks denotes no limit. For more information, see Instance group capacity limits in the Automation controller Administration Guide . Click Save . When you have successfully created the instance group, the Details tab of the newly created instance group remains, enabling you to review and edit your instance group information. This is the same screen that opens when you click the Edit icon from the Instance Groups list view. You can also edit Instances and review Jobs associated with this instance group: 24.1.1. Associating instances to an instance group Procedure Select the Instances tab of the Instance Groups window. Click Associate . Click the checkbox next to one or more available instances from the list to select the instances you want to associate with the instance group: In the following example, the instances added to the instance group display along with information about their capacity: 24.1.2. Viewing jobs associated with an instance group Procedure Select the Jobs tab of the Instance Group window. Click the arrow icon next to a job to expand the view and show details about each job. Each job displays the following details: The job status The ID and name The type of job The time it started and completed Who started the job and applicable resources associated with it, such as the template, inventory, project, and execution environment Additional resources The instances are run in accordance with instance group policies. For more information, see Instance Group Policies in the Automation controller Administration Guide . | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/automation_controller_user_guide/controller-instance-groups
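The same instance group can also be created without the UI by calling the controller REST API. The endpoint and field names below mirror the UI labels and are shown as an illustration rather than a documented contract; adjust the host and credentials for your environment:

# Create an instance group through the API
curl -k -u admin:password -H "Content-Type: application/json" \
     -X POST https://controller.example.com/api/v2/instance_groups/ \
     -d '{"name": "edge-workers", "policy_instance_minimum": 2, "policy_instance_percentage": 50, "max_concurrent_jobs": 0, "max_forks": 0}'

# List instance groups to confirm the result
curl -k -u admin:password https://controller.example.com/api/v2/instance_groups/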
Chapter 5. Using the Operator on restricted networks | Chapter 5. Using the Operator on restricted networks With 1.10.4, AMQ Interconnect is supported on restricted networks. See Deploying AMQ Interconnect on OpenShift for instructions on deploying AMQ Interconnect in a restricted environment. | null | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/release_notes_for_amq_interconnect_1.10/using_the_operator_on_restricted_networks |
14.7. Managing the SELinux Policies for Subsystems | 14.7. Managing the SELinux Policies for Subsystems SELinux is a collection of mandatory access control rules which are enforced across a system to restrict unauthorized access and tampering. For more information about SELinux, see the Using SELinux guide for Red Hat Enterprise Linux 8 . 14.7.1. About SELinux Basically, SELinux identifies objects on a system, which can be files, directories, users, processes, sockets, or any other thing on a Linux host. These objects correspond to the Linux API objects. Each object is then mapped to a security context , which defines the type of object it is and how it is allowed to function on the Linux server. System processes run within SELinux domains. Each domain has a set of rules that defines how the SELinux domain interacts with other SELinux objects on the system. This set of rules, then, determines which resources a process may access and what operations it may perform on those resources. For Certificate System, each subsystem type runs within a specific domain for that subsystem type. Every instance of that subsystem type belongs to the same SELinux domain, regardless of how many instances are on the system For example, if there are three CAs installed on a server, all three belong to the http_port_t SELinux domain. The rules and definitions for all the subsystems comprise the overall Certificate System SELinux policy. Certificate System SELinux policies are already configured when the subsystems are installed, and all SELinux policies are updated every time a subsystem is added with pkispawn or removed with pkidestroy . The Certificate System subsystems run with SELinux set in enforcing mode, meaning that Certificate System operations can be successfully performed even when all SELinux rules are required to be followed. By default, the Certificate System subsystems run confined by SELinux policies. 14.7.2. Viewing SELinux Policies for Subsystems All Certificate System policies are are part of the system SELinux policy. The configured policies can be viewed using the SELinux Administration GUI, which you can get by installing the policycoreutils-gui package. Either run the system-config-selinux command or open the utility by accessing Applications Other SELinux Management for the main system menu. To check the version of the Certificate System SELinux policy installed, click the Policy Module section in the left bar. To view the policies set on the individual files and processes, click the File Labeling section. To view the policies for the port assignments for the subsystems, click the Network Port section. 14.7.3. Relabeling nCipher netHSM Contexts The nCipher netHSM software does not come with its own SELinux policy, so the Certificate System contains a default netHSM policy, shown in Example 14.1, "netHSM SELinux Policy" . Example 14.1. netHSM SELinux Policy # default labeling for nCipher /opt/nfast/scripts/init.d/(.*) gen_context(system_u:object_r:initrc_exec_t,s0) /opt/nfast/sbin/init.d-ncipher gen_context(system_u:object_r:initrc_exec_t,s0) /opt/nfast(/.*)? gen_context(system_u:object_r:pki_common_t, s0) /dev/nfast(/.*)? gen_context(system_u:object_r:pki_common_dev_t,0) Other rules allow the pki_*_t domain to talk to files that are labeled pki_common_t and pki_common_dev_t . 
If any of the nCipher configuration is changed (even if it is in the default directory, /opt/nfast ), run the restorecon command to make sure all files are properly labeled: If the nCipher software is installed in a different location or if a different HSM is used, the default Certificate System HSM policy needs to be relabelled using semanage . | [
"default labeling for nCipher /opt/nfast/scripts/init.d/(.*) gen_context(system_u:object_r:initrc_exec_t,s0) /opt/nfast/sbin/init.d-ncipher gen_context(system_u:object_r:initrc_exec_t,s0) /opt/nfast(/.*)? gen_context(system_u:object_r:pki_common_t, s0) /dev/nfast(/.*)? gen_context(system_u:object_r:pki_common_dev_t,0)",
"restorecon -R /dev/nfast restorecon -R /opt/nfast"
]
| https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/managing-the-selinux-policies |
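For example, if the nCipher software were installed under a non-default prefix, the relabelling could look like the following; the /opt/nfast-alt path is only an illustration:

# Map the alternate installation directory to the SELinux type used by the default policy
semanage fcontext -a -t pki_common_t "/opt/nfast-alt(/.*)?"

# Apply the new labels and confirm the resulting contexts
restorecon -R /opt/nfast-alt
ls -Z /opt/nfast-alt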
19.2. Installation and Deployment | 19.2. Installation and Deployment Installation Guide The Installation Guide documents relevant information regarding the installation of Red Hat Enterprise Linux 7. This book also covers advanced installation methods such as kickstart, PXE installations, and installations over VNC, as well as common post-installation tasks. System Administrator's Guide The System Administrator's Guide provides information about deploying, configuring, and administering Red Hat Enterprise Linux 7. Storage Administration Guide The Storage Administration Guide provides instructions on how to effectively manage storage devices and file systems on Red Hat Enterprise Linux 7. It is intended for use by system administrators with intermediate experience in Red Hat Enterprise Linux. Global File System 2 The Global File System 2 book provides information about configuring and maintaining Red Hat GFS2 (Global File System 2) in Red Hat Enterprise Linux 7. Logical Volume Manager Administration The Logical Volume Manager Administration guide describes the LVM logical volume manager and provides information on running LVM in a clustered environment. Kernel Crash Dump Guide The Kernel Crash Dump Guide documents how to configure, test, and use the kdump crash recovery service available in Red Hat Enterprise Linux 7. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.0_release_notes/sect-red_hat_enterprise_linux-7.0_release_notes-documentation-installation_and_deployment |
Chapter 3. Updating the overcloud | Chapter 3. Updating the overcloud After you update the undercloud, you can update the overcloud by running the overcloud and container image preparation commands, and updating your nodes. The control plane API is fully available during a minor update. Prerequisites You have updated the undercloud node to the latest version. For more information, see Chapter 2, Updating the undercloud . If you use a local set of core templates in your stack user home directory, ensure that you update the templates and use the recommended workflow in Understanding heat templates in the Director Installation and Usage guide. You must update the local copy before you upgrade the overcloud. Procedure To update the overcloud, you must complete the following procedures: Section 3.1, "Running the overcloud update preparation" Section 3.2, "Running the container image preparation" Section 3.3, "Optional: Updating the ovn-controller container on all overcloud servers" Section 3.4, "Updating all Controller nodes" Section 3.5, "Updating all Compute nodes" Section 3.6, "Updating all HCI Compute nodes" Section 3.8, "Updating all Ceph Storage nodes" Section 3.9, "Updating the Red Hat Ceph Storage cluster" Section 3.10, "Performing online database updates" Section 3.11, "Re-enabling fencing in the overcloud" 3.1. Running the overcloud update preparation To prepare the overcloud for the update process, you must run the openstack overcloud update prepare command, which updates the overcloud plan to Red Hat OpenStack Platform (RHOSP) 17.0 and prepares the nodes for the update. Prerequisites If you use a Ceph subscription and have configured director to use the overcloud-minimal image for Ceph storage nodes, you must ensure that in the roles_data.yaml role definition file, the rhsm_enforce parameter is set to False . If you rendered custom NIC templates, you must regenerate the templates with the updated version of the openstack-tripleo-heat-templates collection to avoid incompatibility with the overcloud version. For more information about custom NIC templates, see Custom network interface templates in the Director Installation and Usage guide. Note For distributed compute node (edge) architectures with OVN deployments, you must complete this procedure for each stack with Compute, DistributedCompute, or DistributedComputeHCI nodes before proceeding with section Updating the ovn-controller container on all overcloud servers . Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file: Run the update preparation command: USD openstack overcloud update prepare \ --templates \ --stack <stack_name> \ -r <roles_data_file> \ -n <network_data_file> \ -e <environment_file> \ -e <environment_file> \ ... Include the following options relevant to your environment: If the name of your overcloud stack is different to the default name overcloud , include the --stack option in the update preparation command and replace <stack_name> with the name of your stack. If you use your own custom roles, use the -r option to include the custom roles ( <roles_data_file> ) file. If you use custom networks, use the -n option to include your composable network in the ( <network_data_file> ) file. If you deploy a high availability cluster, include the --ntp-server option in the update preparation command, or include the NtpServer parameter and value in your environment file. Include any custom configuration environment files with the -e option. 
Wait until the update preparation process completes. 3.2. Running the container image preparation Before you can update the overcloud, you must prepare all container image configurations that are required for your environment and pull the latest RHOSP 17.0 container images to your undercloud. To complete the container image preparation, you must run the openstack overcloud external-update run command against tasks that have the container_image_prepare tag. Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file: Run the openstack overcloud external-update run command against tasks that have the container_image_prepare tag: USD openstack overcloud external-update run --stack <stack_name> --tags container_image_prepare If the name of your overcloud stack is different from the default stack name overcloud , set your stack name with the --stack option and replace <stack_name> with the name of your stack. 3.3. Optional: Updating the ovn-controller container on all overcloud servers If you deployed your overcloud with the Modular Layer 2 Open Virtual Network mechanism driver (ML2/OVN), update the ovn-controller container to the latest RHOSP 17.0 version. The update occurs on every overcloud server that runs the ovn-controller container. The following procedure updates the ovn-controller containers on servers that are assigned the Compute role before it updates the ovn-northd service on servers that are assigned the Controller role. For distributed compute node (edge) architectures, you must complete this procedure for each stack with Compute, DistributedCompute, or DistributedComputeHCI nodes before proceeding with section Updating all Controller nodes . If you accidentally updated the ovn-northd service before following this procedure, you might not be able to connect to your virtual machines or create new virtual machines or virtual networks. The following procedure restores connectivity. Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file: Run the openstack overcloud external-update run command against the tasks that have the ovn tag: USD openstack overcloud external-update run --stack <stack_name> --tags ovn If the name of your overcloud stack is different from the default stack name overcloud , set your stack name with the --stack option and replace <stack_name> with the name of your stack. Wait until the ovn-controller container update completes. 3.4. Updating all Controller nodes Update all the Controller nodes to the latest RHOSP 17.0 version. Run the openstack overcloud update run command and include the --limit Controller option to restrict operations to the Controller nodes only. The control plane API is fully available during the minor update. Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file: Run the update command: USD openstack overcloud update run --stack <stack_name> --limit Controller If the name of your overcloud stack is different from the default stack name overcloud , set your stack name with the --stack option and replace <stack_name> with the name of your stack. Wait until the Controller node update completes. 3.5. Updating all Compute nodes Update all Compute nodes to the latest RHOSP 17.0 version. To update Compute nodes, run the openstack overcloud update run command and include the --limit Compute option to restrict operations to the Compute nodes only. 
Parallelization considerations When you update a large number of Compute nodes, to improve performance, you can run multiple update tasks in the background and configure each task to update a separate group of 20 nodes. For example, if you have 80 Compute nodes in your deployment, you can run the following commands to update the Compute nodes in parallel: USD openstack overcloud update run -y --limit 'Compute[0:19]' > update-compute-0-19.log 2>&1 & USD openstack overcloud update run -y --limit 'Compute[20:39]' > update-compute-20-39.log 2>&1 & USD openstack overcloud update run -y --limit 'Compute[40:59]' > update-compute-40-59.log 2>&1 & USD openstack overcloud update run -y --limit 'Compute[60:79]' > update-compute-60-79.log 2>&1 & This method of partitioning the nodes space is random and you do not have control over which nodes are updated. The selection of nodes is based on the inventory file that you generate when you run the tripleo-ansible-inventory command. To update specific Compute nodes, list the nodes that you want to update in a batch separated by a comma: Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file: Run the update command: USD openstack overcloud update run --stack <stack_name> --limit Compute If the name of your overcloud stack is different from the default stack name overcloud , set your stack name with the --stack option and replace <stack_name> with the name of your stack. Wait until the Compute node update completes. 3.6. Updating all HCI Compute nodes Update the Hyperconverged Infrastructure (HCI) Compute nodes to the latest RHOSP 17.0 version. Prerequisites On a Ceph Monitor or Controller node that is running the ceph-mon service, check that the Red Hat Ceph Storage cluster status is healthy and the pg status is active+clean : USD sudo cephadm -- shell ceph status If the Ceph cluster is healthy, it returns a status of HEALTH_OK . If the Ceph cluster status is unhealthy, it returns a status of HEALTH_WARN or HEALTH_ERR . For troubleshooting guidance, see the Red Hat Ceph Storage 5 Troubleshooting Guide . Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file: Run the update command: USD openstack overcloud update run --stack <stack_name> --limit ComputeHCI If the name of your overcloud stack is different from the default stack name overcloud , set your stack name with the --stack option and replace <stack_name> with the name of your stack. Wait until the node update completes. 3.7. Updating all DistributedComputeHCI nodes Update roles specific to distributed compute node architecture. When you upgrade distributed compute nodes, update DistributedComputeHCI nodes first, and then update DistributedComputeHCIScaleOut nodes. Prerequisites On a Ceph Monitor or Controller node that is running the ceph-mon service, check that the Red Hat Ceph Storage cluster status is healthy and the pg status is active+clean : USD sudo cephadm -- shell ceph status If the Ceph cluster is healthy, it returns a status of HEALTH_OK . If the Ceph cluster status is unhealthy, it returns a status of HEALTH_WARN or HEALTH_ERR . For troubleshooting guidance, see the Red Hat Ceph Storage 5 Troubleshooting Guide . Procedure Log in to the undercloud host as the stack user. 
Source the stackrc undercloud credentials file: Run the update command: If the name of your overcloud stack is different from the default stack name overcloud , set your stack name with the --stack option and replace <stack_name> with the name of your stack. Wait until the DistributedComputeHCI node update completes. Use the same process to update DistributedComputeHCIScaleOut nodes. 3.8. Updating all Ceph Storage nodes Update the Red Hat Ceph Storage nodes to the latest RHOSP 17.0 version. Important RHOSP 17.0 is supported on RHEL 9.0. However, hosts that are mapped to the Ceph Storage role update to the latest major RHEL release. For more information, see Red Hat Ceph Storage: Supported configurations . Prerequisites On a Ceph Monitor or Controller node that is running the ceph-mon service, check that the Red Hat Ceph Storage cluster status is healthy and the pg status is active+clean : USD sudo cephadm -- shell ceph status If the Ceph cluster is healthy, it returns a status of HEALTH_OK . If the Ceph cluster status is unhealthy, it returns a status of HEALTH_WARN or HEALTH_ERR . For troubleshooting guidance, see the Red Hat Ceph Storage 5 Troubleshooting Guide . Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file: Run the update command: USD openstack overcloud update run --stack <stack_name> --limit CephStorage If the name of your overcloud stack is different from the default stack name overcloud , set your stack name with the --stack option and replace <stack_name> with the name of your stack. Wait until the node update completes. 3.9. Updating the Red Hat Ceph Storage cluster Update the Red Hat Ceph Storage cluster to the latest RHOSP 17.0 version by using the cephadm command. Prerequisites Complete the container image preparation in Section 3.2, "Running the container image preparation" . Procedure Log in to a Controller node. Check the health of the cluster: USD sudo cephadm shell -- ceph health Note If the Ceph Storage cluster is healthy, the command returns a result of HEALTH_OK . If the command returns a different result, review the status of the cluster and contact Red Hat support before continuing the update. For more information, see Upgrade a Red Hat Ceph Storage cluster using cephadm in the Red Hat Ceph Storage 5 Upgrade Guide . Optional: Check which images should be included in the Ceph Storage cluster update: USD openstack tripleo container image list -f value | awk -F '//' '/ceph/ {print USD2}' Update the cluster to the latest Red Hat Ceph Storage version: USD sudo cephadm shell -- ceph orch upgrade start --image <image_name>: <version> Replace <image_name> with the name of the Ceph Storage cluster image. Replace <version> with the target version to which you are updating the Ceph Storage cluster. Wait until the Ceph Storage container update completes. To monitor the status of the update, run the following command: sudo cephadm shell -- ceph orch upgrade status 3.10. Performing online database updates Some overcloud components require an online update or migration of their databases tables. To perform online database updates, run the openstack overcloud external-update run command against tasks that have the online_upgrade tag. Online database updates apply to the following components: OpenStack Block Storage (cinder) OpenStack Compute (nova) Procedure Log in to the undercloud host as the stack user. 
Source the stackrc undercloud credentials file: Run the openstack overcloud external-update run command against tasks that use the online_upgrade tag: USD openstack overcloud external-update run --stack <stack_name> --tags online_upgrade 3.11. Re-enabling fencing in the overcloud Before you updated the overcloud, you disabled fencing in Section 1.6, "Disabling fencing in the overcloud" . After you update the overcloud, re-enable fencing to protect your data if a node fails. Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file: Log in to a Controller node and run the Pacemaker command to re-enable fencing: USD ssh tripleo-admin@<controller_ip> "sudo pcs property set stonith-enabled=true" Replace <controller_ip> with the IP address of a Controller node. You can find the IP addresses of your Controller nodes with the openstack server list command. In the fencing.yaml environment file, set the EnableFencing parameter to true . Additional Resources Fencing Controller nodes with STONITH | [
"source ~/stackrc",
"openstack overcloud update prepare --templates --stack <stack_name> -r <roles_data_file> -n <network_data_file> -e <environment_file> -e <environment_file>",
"source ~/stackrc",
"openstack overcloud external-update run --stack <stack_name> --tags container_image_prepare",
"source ~/stackrc",
"openstack overcloud external-update run --stack <stack_name> --tags ovn",
"source ~/stackrc",
"openstack overcloud update run --stack <stack_name> --limit Controller",
"openstack overcloud update run -y --limit 'Compute[0:19]' > update-compute-0-19.log 2>&1 & openstack overcloud update run -y --limit 'Compute[20:39]' > update-compute-20-39.log 2>&1 & openstack overcloud update run -y --limit 'Compute[40:59]' > update-compute-40-59.log 2>&1 & openstack overcloud update run -y --limit 'Compute[60:79]' > update-compute-60-79.log 2>&1 &",
"openstack overcloud update run --limit <Compute0>,<Compute1>,<Compute2>,<Compute3>",
"source ~/stackrc",
"openstack overcloud update run --stack <stack_name> --limit Compute",
"sudo cephadm -- shell ceph status",
"source ~/stackrc",
"openstack overcloud update run --stack <stack_name> --limit ComputeHCI",
"sudo cephadm -- shell ceph status",
"source ~/stackrc",
"openstack overcloud update run --stack <stack_name> --limit DistributedComputeHCI",
"sudo cephadm -- shell ceph status",
"source ~/stackrc",
"openstack overcloud update run --stack <stack_name> --limit CephStorage",
"sudo cephadm shell -- ceph health",
"openstack tripleo container image list -f value | awk -F '//' '/ceph/ {print USD2}'",
"sudo cephadm shell -- ceph orch upgrade start --image <image_name>: <version>",
"sudo cephadm shell -- ceph orch upgrade status",
"source ~/stackrc",
"openstack overcloud external-update run --stack <stack_name> --tags online_upgrade",
"source ~/stackrc",
"ssh tripleo-admin@<controller_ip> \"sudo pcs property set stonith-enabled=true\""
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/keeping_red_hat_openstack_platform_updated/assembly_updating-the-overcloud_keeping-updated |
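The parallel Compute update commands shown above hard-code four batches of 20 nodes. The following shell sketch generalizes that pattern; it assumes the stackrc credentials file has already been sourced, the default stack name overcloud, and that the node count and batch size (both illustrative here) are adjusted to your deployment before use.

```bash
#!/bin/bash
# Sketch: run "openstack overcloud update run" against Compute nodes in
# parallel batches and wait for every batch to finish.
# Assumes: stackrc already sourced, default stack name, and Ansible-style
# inclusive slices such as Compute[0:19].

TOTAL_NODES=80   # adjust to the number of Compute nodes in your deployment
BATCH_SIZE=20    # nodes updated per background task

pids=()
for ((start = 0; start < TOTAL_NODES; start += BATCH_SIZE)); do
    end=$((start + BATCH_SIZE - 1))
    log="update-compute-${start}-${end}.log"
    echo "Updating Compute[${start}:${end}] (log: ${log})"
    openstack overcloud update run -y --limit "Compute[${start}:${end}]" > "${log}" 2>&1 &
    pids+=("$!")
done

# Wait for all background tasks and report any batch that failed.
failed=0
for pid in "${pids[@]}"; do
    wait "${pid}" || failed=1
done

if [ "${failed}" -eq 0 ]; then
    echo "All Compute batches completed."
else
    echo "One or more batches failed; check the update-compute-*.log files." >&2
fi
```

Similarly, the Ceph Storage cluster update in section 3.9 asks you to wait for the container update and monitor it with ceph orch upgrade status. A small polling loop such as the one below can do the waiting; the "in_progress" field it checks reflects the usual JSON output of that command and is an assumption to verify against your Ceph release.

```bash
# Poll the orchestrator until the upgrade no longer reports as in progress.
while sudo cephadm shell -- ceph orch upgrade status | grep -q '"in_progress": true'; do
    sleep 60
done
sudo cephadm shell -- ceph orch upgrade status   # print the final status
```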
Chapter 1. Red Hat Software Collections 3.6 | Chapter 1. Red Hat Software Collections 3.6 This chapter serves as an overview of the Red Hat Software Collections 3.6 content set. It provides a list of components and their descriptions, sums up changes in this version, documents relevant compatibility information, and lists known issues. 1.1. About Red Hat Software Collections For certain applications, more recent versions of some software components are often needed in order to use their latest new features. Red Hat Software Collections is a Red Hat offering that provides a set of dynamic programming languages, database servers, and various related packages that are either more recent than their equivalent versions included in the base Red Hat Enterprise Linux system, or are available for this system for the first time. Red Hat Software Collections 3.6 is available for Red Hat Enterprise Linux 7; selected previously released components also for Red Hat Enterprise Linux 6. For a complete list of components that are distributed as part of Red Hat Software Collections and a brief summary of their features, see Section 1.2, "Main Features" . Red Hat Software Collections does not replace the default system tools provided with Red Hat Enterprise Linux 6 or Red Hat Enterprise Linux 7. Instead, a parallel set of tools is installed in the /opt/ directory and can be optionally enabled per application by the user using the supplied scl utility. The default versions of Perl or PostgreSQL, for example, remain those provided by the base Red Hat Enterprise Linux system. Note In Red Hat Enterprise Linux 8, similar components are provided as Application Streams . All Red Hat Software Collections components are fully supported under Red Hat Enterprise Linux Subscription Level Agreements, are functionally complete, and are intended for production use. Important bug fix and security errata are issued to Red Hat Software Collections subscribers in a similar manner to Red Hat Enterprise Linux for at least two years from the release of each major version. In each major release stream, each version of a selected component remains backward compatible. For detailed information about length of support for individual components, refer to the Red Hat Software Collections Product Life Cycle document. 1.1.1. Red Hat Developer Toolset Red Hat Developer Toolset is a part of Red Hat Software Collections, included as a separate Software Collection. For more information about Red Hat Developer Toolset, refer to the Red Hat Developer Toolset Release Notes and the Red Hat Developer Toolset User Guide . 1.2. Main Features Table 1.1, "Red Hat Software Collections Components" lists components that are supported at the time of the Red Hat Software Collections 3.6 release. Table 1.1. Red Hat Software Collections Components Component Software Collection Description Red Hat Developer Toolset 10.0 devtoolset-10 Red Hat Developer Toolset is designed for developers working on the Red Hat Enterprise Linux platform. It provides current versions of the GNU Compiler Collection , GNU Debugger , and other development, debugging, and performance monitoring tools. For a complete list of components, see the Red Hat Developer Toolset Components table in the Red Hat Developer Toolset User Guide . Perl 5.26.3 [a] rh-perl526 A release of Perl, a high-level programming language that is commonly used for system administration utilities and web programming. 
The rh-perl526 Software Collection provides additional utilities, scripts, and database connectors for MySQL and PostgreSQL . It includes the DateTime Perl module and the mod_perl Apache httpd module, which is supported only with the httpd24 Software Collection. Additionally, it provides the cpanm utility for easy installation of CPAN modules. The rh-perl526 packaging is aligned with upstream; the perl526-perl package installs also core modules, while the interpreter is provided by the perl-interpreter package. Perl 5.30.1 [a] rh-perl530 A release of Perl, a high-level programming language that is commonly used for system administration utilities and web programming. The rh-perl530 Software Collection provides additional utilities, scripts, and database connectors for MySQL , PostgreSQL , and SQLite . It includes the DateTime Perl module and the mod_perl Apache httpd module, which is supported only with the httpd24 Software Collection. Additionally, it provides the cpanm utility for easy installation of CPAN modules, the LWP::UserAgent module for communicating with the HTTP servers, and the LWP::Protocol::https module for securing the communication. The rh-perl530 packaging is aligned with upstream; the perl530-perl package installs also core modules, while the interpreter is provided by the perl-interpreter package. PHP 7.3.20 [a] rh-php73 A release of PHP 7.3 with PEAR 1.10.9, APCu 5.1.17, and the Xdebug extension. Python 2.7.18 python27 A release of Python 2.7 with a number of additional utilities. This Python version provides various features and enhancements, including an ordered dictionary type, faster I/O operations, and improved forward compatibility with Python 3. The python27 Software Collections contains the Python 2.7.13 interpreter , a set of extension libraries useful for programming web applications and mod_wsgi (only supported with the httpd24 Software Collection), MySQL and PostgreSQL database connectors, and numpy and scipy . Python 3.8.6 [a] rh-python38 The rh-python38 Software Collection contains Python 3.8, which introduces new Python modules, such as contextvars , dataclasses , or importlib.resources , new language features, improved developer experience, and performance improvements . In addition, a set of popular extension libraries is provided, including mod_wsgi (supported only together with the httpd24 Software Collection), numpy , scipy , and the psycopg2 PostgreSQL database connector. Ruby 2.5.5 [a] rh-ruby25 A release of Ruby 2.5. This version provides multiple performance improvements and new features, for example, simplified usage of blocks with the rescue , else , and ensure keywords, a new yield_self method, support for branch coverage and method coverage measurement, new Hash#slice and Hash#transform_keys methods . Ruby 2.5.0 maintains source-level backward compatibility with Ruby 2.4. Ruby 2.6.2 [a] rh-ruby26 A release of Ruby 2.6. This version provides multiple performance improvements and new features, such as endless ranges, the Binding#source_location method, and the USDSAFE process global state . Ruby 2.6.0 maintains source-level backward compatibility with Ruby 2.5. Ruby 2.7.1 [a] rh-ruby27 A release of Ruby 2.7. This version provides multiple performance improvements and new features, such as Compaction GC or command-line interface for the LALR(1) parser generator, and an enhancement to REPL. Ruby 2.7 maintains source-level backward compatibility with Ruby 2.6. 
MariaDB 10.3.27 [a] rh-mariadb103 A release of MariaDB, an alternative to MySQL for users of Red Hat Enterprise Linux. For all practical purposes, MySQL is binary compatible with MariaDB and can be replaced with it without any data conversions. This version introduces system-versioned tables, invisible columns, a new instant ADD COLUMN operation for InnoDB , and a JDBC connector for MariaDB and MySQL . MongoDB 3.6.3 [a] rh-mongodb36 A release of MongoDB, a cross-platform document-oriented database system classified as a NoSQL database. This release introduces change streams, retryable writes, and JSON Schema , as well as other features. MySQL 8.0.21 [a] rh-mysql80 A release of the MySQL server, which introduces a number of new security and account management features and enhancements. PostgreSQL 10.15 [a] rh-postgresql10 A release of PostgreSQL, which includes a significant performance improvement and a number of new features, such as logical replication using the publish and subscribe keywords, or stronger password authentication based on the SCRAM-SHA-256 mechanism . PostgreSQL 12.5 [a] rh-postgresql12 A release of PostgreSQL, which provides the pgaudit extension, various enhancements to partitioning and parallelism, support for the SQL/JSON path language, and performance improvements. Node.js 10.21.0 [a] rh-nodejs10 A release of Node.js, which provides multiple API enhancements and new features, including V8 engine version 6.6, full N-API support , and stability improvements. Node.js 12.19.1 [a] rh-nodejs12 A release of Node.js with V8 engine version 7.6, support for ES6 modules, and improved support for native modules. Node.js 14.15.0 [a] rh-nodejs14 A release of Node.js with V8 version 8.3, a new experimental WebAssembly System Interface (WASI), and a new experimental Async Local Storage API. nginx 1.16.1 [a] rh-nginx116 A release of nginx, a web and proxy server with a focus on high concurrency, performance, and low memory usage. This version introduces numerous updates related to SSL, several new directives and parameters, and various enhancements. nginx 1.18.0 [a] rh-nginx118 A release of nginx, a web and proxy server with a focus on high concurrency, performance, and low memory usage. This version introduces enhancements to HTTP request rate and connection limiting, and a new auth_delay directive . In addition, support for new variables has been added to multiple directives. Apache httpd 2.4.34 [a] httpd24 A release of the Apache HTTP Server (httpd), including a high performance event-based processing model, enhanced SSL module and FastCGI support . The mod_auth_kerb , mod_auth_mellon , and ModSecurity modules are also included. Varnish Cache 5.2.1 [a] rh-varnish5 A release of Varnish Cache, a high-performance HTTP reverse proxy. This version includes the shard director, experimental HTTP/2 support, and improvements to Varnish configuration through separate VCL files and VCL labels. Varnish Cache 6.0.6 [a] rh-varnish6 A release of Varnish Cache, a high-performance HTTP reverse proxy. This version includes support for Unix Domain Sockets (both for clients and for back-end servers), new level of the VCL language ( vcl 4.1 ), and improved HTTP/2 support . Maven 3.6.1 [a] rh-maven36 A release of Maven, a software project management and comprehension tool. This release provides various enhancements and bug fixes. Git 2.18.4 [a] rh-git218 A release of Git, a distributed revision control system with a decentralized architecture. 
As opposed to centralized version control systems with a client-server model, Git ensures that each working copy of a Git repository is its exact copy with complete revision history. This version includes the Large File Storage (LFS) extension . Git 2.27.0 [a] rh-git227 A release of Git, a distributed revision control system with a decentralized architecture. This version introduces numerous enhancements; for example, the git checkout command split into git switch and git restore , and changed behavior of the git rebase command . In addition, Git Large File Storage (LFS) has been updated to version 2.11.0. Redis 5.0.5 [a] rh-redis5 A release of Redis 5.0, a persistent key-value database . Redis now provides redis-trib , a cluster management tool . HAProxy 1.8.24 [a] rh-haproxy18 A release of HAProxy 1.8, a reliable, high-performance network load balancer for TCP and HTTP-based applications. JDK Mission Control 7.1.1 [a] rh-jmc This Software Collection includes JDK Mission Control (JMC) , a powerful profiler for HotSpot JVMs. JMC provides an advanced set of tools for efficient and detailed analysis of extensive data collected by the JDK Flight Recorder. JMC requires JDK version 8 or later to run. Target Java applications must run with at least OpenJDK version 11 so that JMC can access JDK Flight Recorder features. The rh-jmc Software Collection requires the rh-maven35 Software Collection. [a] This Software Collection is available only for Red Hat Enterprise Linux 7 Previously released Software Collections remain available in the same distribution channels. All Software Collections, including retired components, are listed in the Table 1.2, "All Available Software Collections" . Software Collections that are no longer supported are marked with an asterisk ( * ). See the Red Hat Software Collections Product Life Cycle document for information on the length of support for individual components. For detailed information regarding previously released components, refer to the Release Notes for earlier versions of Red Hat Software Collections. Table 1.2. All Available Software Collections Component Software Collection Availability Architectures supported on RHEL7 Components New in Red Hat Software Collections 3.6 Red Hat Developer Toolset 10.0 devtoolset-10 RHEL7 x86_64, s390x, ppc64, ppc64le Git 2.27.0 rh-git227 RHEL7 x86_64, s390x, ppc64le nginx 1.18.0 rh-nginx118 RHEL7 x86_64, s390x, ppc64le Node.js 14.15.0 rh-nodejs14 RHEL7 x86_64, s390x, ppc64le Table 1.2. All Available Software Collections Components Updated in Red Hat Software Collections 3.6 Apache httpd 2.4.34 httpd24 RHEL7 x86_64, s390x, aarch64, ppc64le PHP 7.3.20 rh-php73 RHEL7 x86_64, s390x, aarch64, ppc64le HAProxy 1.8.24 rh-haproxy18 RHEL7 x86_64 Perl 5.30.1 rh-perl530 RHEL7 x86_64, s390x, aarch64, ppc64le Ruby 2.5.5 rh-ruby25 RHEL7 x86_64, s390x, aarch64, ppc64le Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 3.5 Red Hat Developer Toolset 9.1 devtoolset-9 RHEL7 x86_64, s390x, aarch64, ppc64, ppc64le Python 3.8.6 rh-python38 RHEL7 x86_64, s390x, aarch64, ppc64le Ruby 2.7.1 rh-ruby27 RHEL7 x86_64, s390x, aarch64, ppc64le JDK Mission Control 7.1.1 rh-jmc RHEL7 x86_64 Varnish Cache 6.0.6 rh-varnish6 RHEL7 x86_64, s390x, aarch64, ppc64le Apache httpd 2.4.34 (the last update for RHEL6) httpd24 (RHEL6)* RHEL6 x86_64 Table 1.2. 
All Available Software Collections Components Last Updated in Red Hat Software Collections 3.4 Node.js 12.19.1 rh-nodejs12 RHEL7 x86_64, s390x, aarch64, ppc64le nginx 1.16.1 rh-nginx116 RHEL7 x86_64, s390x, aarch64, ppc64le PostgreSQL 12.5 rh-postgresql12 RHEL7 x86_64, s390x, aarch64, ppc64le Maven 3.6.1 rh-maven36 RHEL7 x86_64, s390x, aarch64, ppc64le Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 3.3 Red Hat Developer Toolset 8.1 devtoolset-8 * RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64, ppc64le MariaDB 10.3.27 rh-mariadb103 RHEL7 x86_64, s390x, aarch64, ppc64le Redis 5.0.5 rh-redis5 RHEL7 x86_64, s390x, aarch64, ppc64le Ruby 2.6.2 rh-ruby26 RHEL7 x86_64, s390x, aarch64, ppc64le Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 3.2 PHP 7.2.24 rh-php72 * RHEL7 x86_64, s390x, aarch64, ppc64le MySQL 8.0.21 rh-mysql80 RHEL7 x86_64, s390x, aarch64, ppc64le Node.js 10.21.0 rh-nodejs10 RHEL7 x86_64, s390x, aarch64, ppc64le nginx 1.14.1 rh-nginx114 * RHEL7 x86_64, s390x, aarch64, ppc64le Git 2.18.4 rh-git218 RHEL7 x86_64, s390x, aarch64, ppc64le Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 3.1 Red Hat Developer Toolset 7.1 devtoolset-7 * RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64, ppc64le Perl 5.26.3 rh-perl526 RHEL7 x86_64, s390x, aarch64, ppc64le MongoDB 3.6.3 rh-mongodb36 RHEL7 x86_64, s390x, aarch64, ppc64le Varnish Cache 5.2.1 rh-varnish5 RHEL7 x86_64, s390x, aarch64, ppc64le PostgreSQL 10.15 rh-postgresql10 RHEL7 x86_64, s390x, aarch64, ppc64le PHP 7.0.27 rh-php70 * RHEL6, RHEL7 x86_64 MySQL 5.7.24 rh-mysql57 * RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 3.0 PHP 7.1.8 rh-php71 * RHEL7 x86_64, s390x, aarch64, ppc64le nginx 1.12.1 rh-nginx112 * RHEL7 x86_64, s390x, aarch64, ppc64le Python 3.6.12 rh-python36 * RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le Maven 3.5.0 rh-maven35 * RHEL7 x86_64, s390x, aarch64, ppc64le MariaDB 10.2.22 rh-mariadb102 * RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le PostgreSQL 9.6.19 rh-postgresql96 * RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le MongoDB 3.4.9 rh-mongodb34 * RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le Node.js 8.11.4 rh-nodejs8 * RHEL7 x86_64, s390x, aarch64, ppc64le Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 2.4 Red Hat Developer Toolset 6.1 devtoolset-6 * RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64, ppc64le Scala 2.10.6 rh-scala210 * RHEL7 x86_64 nginx 1.10.2 rh-nginx110 * RHEL6, RHEL7 x86_64 Node.js 6.11.3 rh-nodejs6 * RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le Ruby 2.4.6 rh-ruby24 * RHEL6, RHEL7 x86_64 Ruby on Rails 5.0.1 rh-ror50 * RHEL6, RHEL7 x86_64 Eclipse 4.6.3 rh-eclipse46 * RHEL7 x86_64 Python 2.7.18 python27 RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le Thermostat 1.6.6 rh-thermostat16 * RHEL6, RHEL7 x86_64 Maven 3.3.9 rh-maven33 * RHEL6, RHEL7 x86_64 Common Java Packages rh-java-common * RHEL6, RHEL7 x86_64 Table 1.2. 
All Available Software Collections Components Last Updated in Red Hat Software Collections 2.3 Git 2.9.3 rh-git29 * RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le Redis 3.2.4 rh-redis32 * RHEL6, RHEL7 x86_64 Perl 5.24.0 rh-perl524 * RHEL6, RHEL7 x86_64 Python 3.5.1 rh-python35 * RHEL6, RHEL7 x86_64 MongoDB 3.2.10 rh-mongodb32 * RHEL6, RHEL7 x86_64 Ruby 2.3.8 rh-ruby23 * RHEL6, RHEL7 x86_64 PHP 5.6.25 rh-php56 * RHEL6, RHEL7 x86_64 Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 2.2 Red Hat Developer Toolset 4.1 devtoolset-4 * RHEL6, RHEL7 x86_64 MariaDB 10.1.29 rh-mariadb101 * RHEL6, RHEL7 x86_64 MongoDB 3.0.11 upgrade collection rh-mongodb30upg * RHEL6, RHEL7 x86_64 Node.js 4.6.2 rh-nodejs4 * RHEL6, RHEL7 x86_64 PostgreSQL 9.5.14 rh-postgresql95 * RHEL6, RHEL7 x86_64 Ruby on Rails 4.2.6 rh-ror42 * RHEL6, RHEL7 x86_64 MongoDB 2.6.9 rh-mongodb26 * RHEL6, RHEL7 x86_64 Thermostat 1.4.4 thermostat1 * RHEL6, RHEL7 x86_64 Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 2.1 Varnish Cache 4.0.3 rh-varnish4 * RHEL6, RHEL7 x86_64 nginx 1.8.1 rh-nginx18 * RHEL6, RHEL7 x86_64 Node.js 0.10 nodejs010 * RHEL6, RHEL7 x86_64 Maven 3.0.5 maven30 * RHEL6, RHEL7 x86_64 V8 3.14.5.10 v8314 * RHEL6, RHEL7 x86_64 Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 2.0 Red Hat Developer Toolset 3.1 devtoolset-3 * RHEL6, RHEL7 x86_64 Perl 5.20.1 rh-perl520 * RHEL6, RHEL7 x86_64 Python 3.4.2 rh-python34 * RHEL6, RHEL7 x86_64 Ruby 2.2.9 rh-ruby22 * RHEL6, RHEL7 x86_64 Ruby on Rails 4.1.5 rh-ror41 * RHEL6, RHEL7 x86_64 MariaDB 10.0.33 rh-mariadb100 * RHEL6, RHEL7 x86_64 MySQL 5.6.40 rh-mysql56 * RHEL6, RHEL7 x86_64 PostgreSQL 9.4.14 rh-postgresql94 * RHEL6, RHEL7 x86_64 Passenger 4.0.50 rh-passenger40 * RHEL6, RHEL7 x86_64 PHP 5.4.40 php54 * RHEL6, RHEL7 x86_64 PHP 5.5.21 php55 * RHEL6, RHEL7 x86_64 nginx 1.6.2 nginx16 * RHEL6, RHEL7 x86_64 DevAssistant 0.9.3 devassist09 * RHEL6, RHEL7 x86_64 Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 1 Git 1.9.4 git19 * RHEL6, RHEL7 x86_64 Perl 5.16.3 perl516 * RHEL6, RHEL7 x86_64 Python 3.3.2 python33 * RHEL6, RHEL7 x86_64 Ruby 1.9.3 ruby193 * RHEL6, RHEL7 x86_64 Ruby 2.0.0 ruby200 * RHEL6, RHEL7 x86_64 Ruby on Rails 4.0.2 ror40 * RHEL6, RHEL7 x86_64 MariaDB 5.5.53 mariadb55 * RHEL6, RHEL7 x86_64 MongoDB 2.4.9 mongodb24 * RHEL6, RHEL7 x86_64 MySQL 5.5.52 mysql55 * RHEL6, RHEL7 x86_64 PostgreSQL 9.2.18 postgresql92 * RHEL6, RHEL7 x86_64 Legend: RHEL6 - Red Hat Enterprise Linux 6 RHEL7 - Red Hat Enterprise Linux 7 x86_64 - AMD64 and Intel 64 architectures s390x - 64-bit IBM Z aarch64 - The 64-bit ARM architecture ppc64 - IBM POWER, big endian ppc64le - IBM POWER, little endian * - Retired component; this Software Collection is no longer supported The tables above list the latest versions available through asynchronous updates. Note that Software Collections released in Red Hat Software Collections 2.0 and later include a rh- prefix in their names. Eclipse is available as a part of the Red Hat Developer Tools offering. 1.3. Changes in Red Hat Software Collections 3.6 1.3.1. Overview Architectures The Red Hat Software Collections offering contains packages for Red Hat Enterprise Linux 7 running on AMD64 and Intel 64 architectures; certain earlier Software Collections are available also for Red Hat Enterprise Linux 6. 
In addition, Red Hat Software Collections 3.6 supports the following architectures on Red Hat Enterprise Linux 7: 64-bit IBM Z IBM POWER, little endian For a full list of components and their availability, see Table 1.2, "All Available Software Collections" . New Software Collections Red Hat Software Collections 3.6 adds the following new Software Collections: devtoolset-10 - see Section 1.3.2, "Changes in Red Hat Developer Toolset" rh-git227 - see Section 1.3.3, "Changes in Git" rh-nginx118 - see Section 1.3.4, "Changes in nginx" rh-nodejs14 - see Section 1.3.5, "Changes in Node.js" All new Software Collections are available only for Red Hat Enterprise Linux 7. Updated Software Collections The following components has been updated in Red Hat Software Collections 3.6: httpd24 - see Section 1.3.6, "Changes in Apache httpd" rh-perl530 - see Section 1.3.7, "Changes in Perl" rh-php73 - see Section 1.3.8, "Changes in PHP" rh-haproxy18 - see Section 1.3.9, "Changes in HAProxy" rh-ruby25 - see Section 1.3.10, "Changes in Ruby" Red Hat Software Collections Container Images The following container images are new in Red Hat Software Collections 3.6: rhscl/devtoolset-10-toolchain-rhel7 rhscl/devtoolset-10-perftools-rhel7 rhscl/nginx-118-rhel7 rhscl/nodejs-14-rhel7 The following container image has been updated in Red Hat Software Collections 3.6 rhscl/httpd-24-rhel7 rhscl/php-73-rhel7 rhscl/perl-530-rhel7 rhscl/ruby-25-rhel7 For more information about Red Hat Software Collections container images, see Section 3.4, "Red Hat Software Collections Container Images" . 1.3.2. Changes in Red Hat Developer Toolset The following components have been upgraded in Red Hat Developer Toolset 10.0 compared to the release of Red Hat Developer Toolset: GCC to version 10.2.1 binutils to version 2.35 GDB to version 9.2 strace to version 5.7 SystemTap to version 4.3 OProfile to version 1.4.0 Valgrind to version 3.16.1 elfutils to version 0.180 annobin to version 9.23 For detailed information on changes in 10.0, see the Red Hat Developer Toolset User Guide . 1.3.3. Changes in Git The new rh-git227 Software Collection includes Git 2.27.0 , which provides numerous bug fixes and new features compared to the rh-git218 Collection released with Red Hat Software Collections 3.2. Notable changes in this release include: The git checkout command has been split into two separate commands: git switch for managing branches git restore for managing changes within the directory tree The behavior of the git rebase command is now based on the merge workflow by default rather than the patch+apply workflow. To preserve the behavior, set the rebase.backend configuration variable to apply . The git difftool command can now be used also outside a repository. Four new configuration variables, {author,committer}.{name,email} , have been introduced to override user.{name,email} in more specific cases. Several new options have been added that enable users to configure SSL for communication with proxies. Handling of commits with log messages in non-UTF-8 character encoding has been improved in the git fast-export and git fast-import utilities. Git Large File Storage (LFS) has been updated to version 2.11.0. For detailed list of further enhancements, bug fixes, and backward compatibility notes related to Git 2.27.0 , see the upstream release notes . See also the Git manual page for version 2.27.0. 1.3.4. 
Changes in nginx The new rh-nginx118 Software Collection introduces nginx 1.18.0 , which provides a number of bug and security fixes, new features and enhancements over version 1.16. Notable changes include: Enhancements to HTTP request rate and connection limiting have been implemented. For example, the limit_rate and limit_rate_after directives now support variables, including new USDlimit_req_status and USDlimit_conn_status variables. In addition, dry-run mode has been added for the limit_conn_dry_run and limit_req_dry_run directives. A new auth_delay directive has been added, which enables delayed processing of unauthorized requests. The following directives now support variables: grpc_pass , proxy_upload_rate , and proxy_download_rate . Additional PROXY protocol variables have been added, namely USDproxy_protocol_server_addr and USDproxy_protocol_server_port . rh-nginx118 uses the rh-perl530 Software Collection for Perl integration. For more information regarding changes in nginx , refer to the upstream release notes . For migration instructions, see Section 5.5, "Migrating to nginx 1.18" . 1.3.5. Changes in Node.js The new rh-nodejs14 Software Collection provides Node.js 14.15.0 , which is the most recent Long Term Support (LTS) version. Notable enhancements in this release include: The V8 engine has been upgraded to version 8.3. A new experimental WebAssembly System Interface (WASI) has been implemented. A new experimental Async Local Storage API has been introduced. The diagnostic report feature is now stable. The streams APIs have been hardened. Experimental modules warnings have been removed. Stability has been improved. For detailed changes in Node.js 14.15.0, see the upstream release notes and upstream documentation . 1.3.6. Changes in Apache httpd The httpd24 Software Collection has been updated to provide multiple security and bug fixes. In addition, the ProxyRemote configuration directive has been enhanced to optionally take username and password credentials, which are used to authenticate to the remote proxy using HTTP Basic authentication. This feature has been backported from httpd 2.5 . For details, see the upstream documentation . 1.3.7. Changes in Perl The rh-perl530-perl-CGI package has been added to the rh-perl530 Software Collection. The rh-perl530-perl-CGI package provides a Perl module that implements Common Gateway Interface (CGI) for running scripts written in the Perl language. 1.3.8. Changes in PHP The rh-php73 Software Collection has been updated to version 7.3.20, which provides multiple security and bug fixes. 1.3.9. Changes in HAProxy The rh-haproxy18 Software Collection has been updated with a bug fix. 1.3.10. Changes in Ruby The rh-ruby25 Software Collection has been updated with a bug fix. 1.4. Compatibility Information Red Hat Software Collections 3.6 is available for all supported releases of Red Hat Enterprise Linux 7 on AMD64 and Intel 64 architectures, 64-bit IBM Z, and IBM POWER, little endian. Certain previously released components are available also for the 64-bit ARM architecture. For a full list of available components, see Table 1.2, "All Available Software Collections" . 1.5. Known Issues rh-ruby27 component, BZ# 1836201 When a custom script requires the Psych YAML parser and afterwards uses the Gem.load_yaml method, running the script fails with the following error message: To work around this problem, add the gem 'psych' line to the script somewhere above the require 'psych' line: ... gem 'psych' ... 
require 'psych' Gem.load_yaml multiple components, BZ# 1716378 Certain files provided by the Software Collections debuginfo packages might conflict with the corresponding debuginfo package files from the base Red Hat Enterprise Linux system or from other versions of Red Hat Software Collections components. For example, the python27-python-debuginfo package files might conflict with the corresponding files from the python-debuginfo package installed on the core system. Similarly, files from the httpd24-mod_auth_mellon-debuginfo package might conflict with similar files provided by the base system mod_auth_mellon-debuginfo package. To work around this problem, uninstall the base system debuginfo package prior to installing the Software Collection debuginfo package. rh-mysql80 component, BZ# 1646363 The mysql-connector-java database connector does not work with the MySQL 8.0 server. To work around this problem, use the mariadb-java-client database connector from the rh-mariadb103 Software Collection. rh-mysql80 component, BZ# 1646158 The default character set has been changed to utf8mb4 in MySQL 8.0 but this character set is unsupported by the php-mysqlnd database connector. Consequently, php-mysqlnd fails to connect in the default configuration. To work around this problem, specify a known character set as a parameter of the MySQL server configuration. For example, modify the /etc/opt/rh/rh-mysql80/my.cnf.d/mysql-server.cnf file to read: httpd24 component, BZ# 1429006 Since httpd 2.4.27 , the mod_http2 module is no longer supported with the default prefork Multi-Processing Module (MPM). To enable HTTP/2 support, edit the configuration file at /opt/rh/httpd24/root/etc/httpd/conf.modules.d/00-mpm.conf and switch to the event or worker MPM. Note that the HTTP/2 server-push feature does not work on the 64-bit ARM architecture, 64-bit IBM Z, and IBM POWER, little endian. httpd24 component, BZ# 1327548 The mod_ssl module does not support the ALPN protocol on Red Hat Enterprise Linux 6, or on Red Hat Enterprise Linux 7.3 and earlier. Consequently, clients that support upgrading TLS connections to HTTP/2 only using ALPN are limited to HTTP/1.1 support. httpd24 component, BZ# 1224763 When using the mod_proxy_fcgi module with FastCGI Process Manager (PHP-FPM), httpd uses port 8000 for the FastCGI protocol by default instead of the correct port 9000 . To work around this problem, specify the correct port explicitly in configuration. httpd24 component, BZ# 1382706 When SELinux is enabled, the LD_LIBRARY_PATH environment variable is not passed through to CGI scripts invoked by httpd . As a consequence, in some cases it is impossible to invoke executables from Software Collections enabled in the /opt/rh/httpd24/service-environment file from CGI scripts run by httpd . To work around this problem, set LD_LIBRARY_PATH as desired from within the CGI script. httpd24 component Compiling external applications against the Apache Portable Runtime (APR) and APR-util libraries from the httpd24 Software Collection is not supported. The LD_LIBRARY_PATH environment variable is not set in httpd24 because it is not required by any application in this Software Collection. python27 component, BZ# 1330489 The python27-python-pymongo package has been updated to version 3.2.1. Note that this version is not fully compatible with the previously shipped version 2.5.2. 
scl-utils component In Red Hat Enterprise Linux 7.5 and earlier, due to an architecture-specific macro bug in the scl-utils package, the <collection>/root/usr/lib64/ directory does not have the correct package ownership on the 64-bit ARM architecture and on IBM POWER, little endian. As a consequence, this directory is not removed when a Software Collection is uninstalled. To work around this problem, manually delete <collection>/root/usr/lib64/ when removing a Software Collection. maven component When the user has installed both the Red Hat Enterprise Linux system version of maven-local package and the rh-maven*-maven-local package, XMvn , a tool used for building Java RPM packages, run from the Maven Software Collection tries to read the configuration file from the base system and fails. To work around this problem, uninstall the maven-local package from the base Red Hat Enterprise Linux system. perl component It is impossible to install more than one mod_perl.so library. As a consequence, it is not possible to use the mod_perl module from more than one Perl Software Collection. postgresql component The rh-postgresql9* packages for Red Hat Enterprise Linux 6 do not provide the sepgsql module as this feature requires installation of libselinux version 2.0.99, which is not available in Red Hat Enterprise Linux 6. httpd , mariadb , mongodb , mysql , nodejs , perl , php , python , ruby , and ror components, BZ# 1072319 When uninstalling the httpd24 , rh-mariadb* , rh-mongodb* , rh-mysql* , rh-nodejs* , rh-perl* , rh-php* , python27 , rh-python* , rh-ruby* , or rh-ror* packages, the order of uninstalling can be relevant due to ownership of dependent packages. As a consequence, some directories and files might not be removed properly and might remain on the system. mariadb , mysql components, BZ# 1194611 Since MariaDB 10 and MySQL 5.6 , the rh-mariadb*-mariadb-server and rh-mysql*-mysql-server packages no longer provide the test database by default. Although this database is not created during initialization, the grant tables are prefilled with the same values as when test was created by default. As a consequence, upon a later creation of the test or test_* databases, these databases have less restricted access rights than is default for new databases. Additionally, when running benchmarks, the run-all-tests script no longer works out of the box with example parameters. You need to create a test database before running the tests and specify the database name in the --database parameter. If the parameter is not specified, test is taken by default but you need to make sure the test database exist. mariadb , mysql , postgresql , mongodb components Red Hat Software Collections contains the MySQL 5.7 , MySQL 8.0 , MariaDB 10.2 , MariaDB 10.3 , PostgreSQL 9.6 , PostgreSQL 10 , PostgreSQL 12 , MongoDB 3.4 , and MongoDB 3.6 databases. The core Red Hat Enterprise Linux 6 provides earlier versions of the MySQL and PostgreSQL databases (client library and daemon). The core Red Hat Enterprise Linux 7 provides earlier versions of the MariaDB and PostgreSQL databases (client library and daemon). Client libraries are also used in database connectors for dynamic languages, libraries, and so on. The client library packaged in the Red Hat Software Collections database packages in the PostgreSQL component is not supposed to be used, as it is included only for purposes of server utilities and the daemon. Users are instead expected to use the system library and the database connectors provided with the core system. 
A protocol, which is used between the client library and the daemon, is stable across database versions, so, for example, using the PostgreSQL 9.2 client library with the PostgreSQL 9.4 or 9.5 daemon works as expected. The core Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7 do not include the client library for MongoDB . In order to use this client library for your application, you should use the client library from Red Hat Software Collections and always use the scl enable ... call every time you run an application linked against this MongoDB client library. mariadb , mysql , mongodb components MariaDB, MySQL, and MongoDB do not make use of the /opt/ provider / collection /root prefix when creating log files. Note that log files are saved in the /var/opt/ provider / collection /log/ directory, not in /opt/ provider / collection /root/var/log/ . Other Notes rh-ruby* , rh-python* , rh-php* components Using Software Collections on a read-only NFS has several limitations. Ruby gems cannot be installed while the rh-ruby* Software Collection is on a read-only NFS. Consequently, for example, when the user tries to install the ab gem using the gem install ab command, an error message is displayed, for example: The same problem occurs when the user tries to update or install gems from an external source by running the bundle update or bundle install commands. When installing Python packages on a read-only NFS using the Python Package Index (PyPI), running the pip command fails with an error message similar to this: Installing packages from PHP Extension and Application Repository (PEAR) on a read-only NFS using the pear command fails with the error message: This is an expected behavior. httpd component Language modules for Apache are supported only with the Red Hat Software Collections version of Apache httpd and not with the Red Hat Enterprise Linux system versions of httpd . For example, the mod_wsgi module from the rh-python35 Collection can be used only with the httpd24 Collection. all components Since Red Hat Software Collections 2.0, configuration files, variable data, and runtime data of individual Collections are stored in different directories than in versions of Red Hat Software Collections. coreutils , util-linux , screen components Some utilities, for example, su , login , or screen , do not export environment settings in all cases, which can lead to unexpected results. It is therefore recommended to use sudo instead of su and set the env_keep environment variable in the /etc/sudoers file. Alternatively, you can run commands in a reverse order; for example: instead of When using tools like screen or login , you can use the following command to preserve the environment settings: source /opt/rh/<collection_name>/enable python component When the user tries to install more than one scldevel package from the python27 and rh-python* Software Collections, a transaction check error message is returned. This is an expected behavior because the user can install only one set of the macro files provided by the packages ( %scl_python , %scl_ prefix _python ). php component When the user tries to install more than one scldevel package from the rh-php* Software Collections, a transaction check error message is returned. This is an expected behavior because the user can install only one set of the macro files provided by the packages ( %scl_php , %scl_ prefix _php ). 
ruby component When the user tries to install more than one scldevel package from the rh-ruby* Software Collections, a transaction check error message is returned. This is an expected behavior because the user can install only one set of the macro files provided by the packages ( %scl_ruby , %scl_ prefix _ruby ). perl component When the user tries to install more than one scldevel package from the rh-perl* Software Collections, a transaction check error message is returned. This is an expected behavior because the user can install only one set of the macro files provided by the packages ( %scl_perl , %scl_ prefix _perl ). nginx component When the user tries to install more than one scldevel package from the rh-nginx* Software Collections, a transaction check error message is returned. This is an expected behavior because the user can install only one set of the macro files provided by the packages ( %scl_nginx , %scl_ prefix _nginx ). 1.6. Deprecated Functionality httpd24 component, BZ# 1434053 Previously, in an SSL/TLS configuration requiring name-based SSL virtual host selection, the mod_ssl module rejected requests with a 400 Bad Request error, if the host name provided in the Host: header did not match the host name provided in a Server Name Indication (SNI) header. Such requests are no longer rejected if the configured SSL/TLS security parameters are identical between the selected virtual hosts, in-line with the behavior of upstream mod_ssl . | [
"superclass mismatch for class Mark (TypeError)",
"gem 'psych' require 'psych' Gem.load_yaml",
"[mysqld] character-set-server=utf8",
"ERROR: While executing gem ... (Errno::EROFS) Read-only file system @ dir_s_mkdir - /opt/rh/rh-ruby22/root/usr/local/share/gems",
"Read-only file system: '/opt/rh/rh-python34/root/usr/lib/python3.4/site-packages/ipython-3.1.0.dist-info'",
"Cannot install, php_dir for channel \"pear.php.net\" is not writeable by the current user",
"su -l postgres -c \"scl enable rh-postgresql94 psql\"",
"scl enable rh-postgresql94 bash su -l postgres -c psql"
]
| https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/3.6_release_notes/chap-rhscl |
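Several of the notes above (for example the screen/login caveat and the PostgreSQL example) rely on enabling a Software Collection before using it. As a brief illustration, the following shell sketch shows the usual enablement patterns; rh-python38 is used only as an example collection, and the python3 command name is an assumption about what that collection provides.

```bash
# Run a single command with a Software Collection enabled (quoted form).
scl enable rh-python38 'python3 --version'

# Start an interactive shell with the collection active for the session.
scl enable rh-python38 bash

# For tools such as screen or login that do not preserve the environment,
# source the collection's enable script instead, as noted above.
source /opt/rh/rh-python38/enable
python3 --version
```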
Chapter 355. Twitter Streaming Component | Chapter 355. Twitter Streaming Component Available as of Camel version 2.10 The Twitter Streaming component consumes twitter statuses using Streaming API. 355.1. Component Options The Twitter Streaming component supports 9 options, which are listed below. Name Description Default Type accessToken (security) The access token String accessTokenSecret (security) The access token secret String consumerKey (security) The consumer key String consumerSecret (security) The consumer secret String httpProxyHost (proxy) The http proxy host which can be used for the camel-twitter. String httpProxyUser (proxy) The http proxy user which can be used for the camel-twitter. String httpProxyPassword (proxy) The http proxy password which can be used for the camel-twitter. String httpProxyPort (proxy) The http proxy port which can be used for the camel-twitter. int resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean 355.2. Endpoint Options The Twitter Streaming endpoint is configured using URI syntax: with the following path and query parameters: 355.2.1. Path Parameters (1 parameters): Name Description Default Type streamingType Required The streaming type to consume. StreamingType 355.2.2. Query Parameters (43 parameters): Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean sendEmptyMessageWhenIdle (consumer) If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. false boolean type (consumer) Endpoint type to use. Only streaming supports event type. polling EndpointType distanceMetric (consumer) Used by the non-stream geography search, to search by radius using the configured metrics. The unit can either be mi for miles, or km for kilometers. You need to configure all the following options: longitude, latitude, radius, and distanceMetric. km String exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern extendedMode (consumer) Used for enabling full text from twitter (eg receive tweets that contains more than 140 characters). true boolean latitude (consumer) Used by the non-stream geography search to search by latitude. You need to configure all the following options: longitude, latitude, radius, and distanceMetric. Double locations (consumer) Bounding boxes, created by pairs of lat/lons. Can be used for streaming/filter. A pair is defined as lat,lon. And multiple paris can be separated by semi colon. String longitude (consumer) Used by the non-stream geography search to search by longitude. You need to configure all the following options: longitude, latitude, radius, and distanceMetric. 
Double pollStrategy (consumer) A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel. PollingConsumerPoll Strategy radius (consumer) Used by the non-stream geography search to search by radius. You need to configure all the following options: longitude, latitude, radius, and distanceMetric. Double twitterStream (consumer) To use a custom instance of TwitterStream TwitterStream synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean count (filter) Limiting number of results per page. 5 Integer filterOld (filter) Filter out old tweets, that has previously been polled. This state is stored in memory only, and based on last tweet id. true boolean keywords (filter) Can be used for a streaming filter. Multiple values can be separated with comma. String lang (filter) The lang string ISO_639-1 which will be used for searching String numberOfPages (filter) The number of pages result which you want camel-twitter to consume. 1 Integer sinceId (filter) The last tweet id which will be used for pulling the tweets. It is useful when the camel route is restarted after a long running. 1 long userIds (filter) To filter by user ids for streaming/filter. Multiple values can be separated by comma. String backoffErrorThreshold (scheduler) The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in. int backoffIdleThreshold (scheduler) The number of subsequent idle polls that should happen before the backoffMultipler should kick-in. int backoffMultiplier (scheduler) To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured. int delay (scheduler) Milliseconds before the poll. 30000 long greedy (scheduler) If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the run polled 1 or more messages. false boolean initialDelay (scheduler) Milliseconds before the first poll starts. You can also specify time values using units, such as 60s (60 seconds), 5m30s (5 minutes and 30 seconds), and 1h (1 hour). 1000 long runLoggingLevel (scheduler) The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that. TRACE LoggingLevel scheduledExecutorService (scheduler) Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool. ScheduledExecutor Service scheduler (scheduler) To use a cron scheduler from either camel-spring or camel-quartz2 component none ScheduledPollConsumer Scheduler schedulerProperties (scheduler) To configure additional properties when using a custom scheduler or any of the Quartz2, Spring based scheduler. Map startScheduler (scheduler) Whether the scheduler should be auto started. true boolean timeUnit (scheduler) Time unit for initialDelay and delay options. MILLISECONDS TimeUnit useFixedDelay (scheduler) Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details. 
true boolean sortById (sort) Sorts by id, so the oldest are first, and newest last. true boolean httpProxyHost (proxy) The http proxy host which can be used for the camel-twitter. Can also be configured on the TwitterComponent level instead. String httpProxyPassword (proxy) The http proxy password which can be used for the camel-twitter. Can also be configured on the TwitterComponent level instead. String httpProxyPort (proxy) The http proxy port which can be used for the camel-twitter. Can also be configured on the TwitterComponent level instead. Integer httpProxyUser (proxy) The http proxy user which can be used for the camel-twitter. Can also be configured on the TwitterComponent level instead. String accessToken (security) The access token. Can also be configured on the TwitterComponent level instead. String accessTokenSecret (security) The access secret. Can also be configured on the TwitterComponent level instead. String consumerKey (security) The consumer key. Can also be configured on the TwitterComponent level instead. String consumerSecret (security) The consumer secret. Can also be configured on the TwitterComponent level instead. String 355.3. Spring Boot Auto-Configuration The component supports 10 options, which are listed below. Name Description Default Type camel.component.twitter-streaming.access-token The access token String camel.component.twitter-streaming.access-token-secret The access token secret String camel.component.twitter-streaming.consumer-key The consumer key String camel.component.twitter-streaming.consumer-secret The consumer secret String camel.component.twitter-streaming.enabled Whether to enable auto configuration of the twitter-streaming component. This is enabled by default. Boolean camel.component.twitter-streaming.http-proxy-host The http proxy host which can be used for the camel-twitter. String camel.component.twitter-streaming.http-proxy-password The http proxy password which can be used for the camel-twitter. String camel.component.twitter-streaming.http-proxy-port The http proxy port which can be used for the camel-twitter. Integer camel.component.twitter-streaming.http-proxy-user The http proxy user which can be used for the camel-twitter. String camel.component.twitter-streaming.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean | [
"twitter-streaming:streamingType"
]
| https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/twitter-streaming-component |
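As a usage sketch of the endpoint URI and options documented above, the following Camel route (Java DSL) consumes filtered statuses. The filter streaming type and the {{...}} property placeholders are assumptions for illustration; the type, keywords, and security options correspond to the endpoint parameters listed in the tables.

```java
import org.apache.camel.builder.RouteBuilder;

// Minimal sketch of a route consuming the Twitter Streaming filter feed.
public class TwitterStreamingRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("twitter-streaming:filter"
                + "?type=event"
                + "&keywords=apache,camel"
                + "&consumerKey={{twitter.consumerKey}}"
                + "&consumerSecret={{twitter.consumerSecret}}"
                + "&accessToken={{twitter.accessToken}}"
                + "&accessTokenSecret={{twitter.accessTokenSecret}}")
            .log("Received status: ${body}");
    }
}
```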
Chapter 2. Installation | Chapter 2. Installation This chapter guides you through the steps to install AMQ JMS Pool in your environment. 2.1. Prerequisites You must have a subscription to access AMQ release files and repositories. To build programs with AMQ JMS Pool, you must install Apache Maven . To use AMQ JMS Pool, you must install Java. 2.2. Using the Red Hat Maven repository Configure your Maven environment to download the client library from the Red Hat Maven repository. Procedure Add the Red Hat repository to your Maven settings or POM file. For example configuration files, see Section B.1, "Using the online repository" . <repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository> Add the library dependency to your POM file. <dependency> <groupId>org.messaginghub</groupId> <artifactId>pooled-jms</artifactId> <version>2.0.0.redhat-00001</version> </dependency> The client is now available in your Maven project. 2.3. Installing a local Maven repository As an alternative to the online repository, AMQ JMS Pool can be installed to your local filesystem as a file-based Maven repository. Procedure Use your subscription to download the AMQ Clients 2.10.0 JMS Pool Maven repository .zip file. Extract the file contents into a directory of your choosing. On Linux or UNIX, use the unzip command to extract the file contents. USD unzip amq-clients-2.10.0-jms-pool-maven-repository.zip On Windows, right-click the .zip file and select Extract All . Configure Maven to use the repository in the maven-repository directory inside the extracted install directory. For more information, see Section B.2, "Using a local repository" . 2.4. Installing the examples Procedure Use the git clone command to clone the source repository to a local directory named pooled-jms : USD git clone https://github.com/messaginghub/pooled-jms.git pooled-jms Change to the pooled-jms directory and use the git checkout command to switch to the 2.0.0 branch: USD cd pooled-jms USD git checkout 2.0.0 The resulting local directory is referred to as <source-dir> throughout this document. | [
"<repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository>",
"<dependency> <groupId>org.messaginghub</groupId> <artifactId>pooled-jms</artifactId> <version>2.0.0.redhat-00001</version> </dependency>",
"unzip amq-clients-2.10.0-jms-pool-maven-repository.zip",
"git clone https://github.com/messaginghub/pooled-jms.git pooled-jms",
"cd pooled-jms git checkout 2.0.0"
]
| https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_the_amq_jms_pool_library/installation |
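Once the dependency above is on the classpath, the pool is used by wrapping an existing JMS ConnectionFactory. The following Java sketch assumes the javax.jms API generation shipped with this release and uses Qpid JMS as the underlying factory purely as an example; the broker URL and pool size are illustrative, not prescribed by this guide.

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;

import org.apache.qpid.jms.JmsConnectionFactory;
import org.messaginghub.pooled.jms.JmsPoolConnectionFactory;

public class PooledJmsExample {
    public static void main(String[] args) throws Exception {
        // Any provider's ConnectionFactory can be wrapped; Qpid JMS is one example.
        ConnectionFactory delegate = new JmsConnectionFactory("amqp://localhost:5672");

        JmsPoolConnectionFactory pool = new JmsPoolConnectionFactory();
        pool.setConnectionFactory(delegate);
        pool.setMaxConnections(10); // upper bound on pooled connections

        Connection connection = pool.createConnection();
        try {
            // Use the connection as usual; close() returns it to the pool.
        } finally {
            connection.close();
        }

        pool.stop(); // release pooled resources when the application shuts down
    }
}
```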
Chapter 8. Planning your environment according to object maximums | Chapter 8. Planning your environment according to object maximums Consider the following tested object maximums when you plan your OpenShift Container Platform cluster. These guidelines are based on the largest possible cluster. For smaller clusters, the maximums are lower. There are many factors that influence the stated thresholds, including the etcd version or storage data format. Important These guidelines apply to OpenShift Container Platform with software-defined networking (SDN), not Open Virtual Network (OVN). In most cases, exceeding these numbers results in lower overall performance. It does not necessarily mean that the cluster will fail. Warning Clusters that experience rapid change, such as those with many starting and stopping pods, can have a lower practical maximum size than documented. 8.1. OpenShift Container Platform tested cluster maximums for major releases Tested Cloud Platforms for OpenShift Container Platform 3.x: Red Hat OpenStack Platform (RHOSP), Amazon Web Services and Microsoft Azure. Tested Cloud Platforms for OpenShift Container Platform 4.x: Amazon Web Services, Microsoft Azure and Google Cloud Platform. Maximum type 3.x tested maximum 4.x tested maximum Number of nodes 2,000 2,000 [1] Number of pods [2] 150,000 150,000 Number of pods per node 250 500 [3] Number of pods per core There is no default value. There is no default value. Number of namespaces [4] 10,000 10,000 Number of builds 10,000 (Default pod RAM 512 Mi) - Pipeline Strategy 10,000 (Default pod RAM 512 Mi) - Source-to-Image (S2I) build strategy Number of pods per namespace [5] 25,000 25,000 Number of routes and back ends per Ingress Controller 2,000 per router 2,000 per router Number of secrets 80,000 80,000 Number of config maps 90,000 90,000 Number of services [6] 10,000 10,000 Number of services per namespace 5,000 5,000 Number of back-ends per service 5,000 5,000 Number of deployments per namespace [5] 2,000 2,000 Number of build configs 12,000 12,000 Number of custom resource definitions (CRD) There is no default value. 512 [7] Pause pods were deployed to stress the control plane components of OpenShift Container Platform at 2000 node scale. The pod count displayed here is the number of test pods. The actual number of pods depends on the application's memory, CPU, and storage requirements. This was tested on a cluster with 100 worker nodes with 500 pods per worker node. The default maxPods is still 250. To get to 500 maxPods , the cluster must be created with a maxPods set to 500 using a custom kubelet config. If you need 500 user pods, you need a hostPrefix of 22 because there are 10-15 system pods already running on the node. The maximum number of pods with attached persistent volume claims (PVC) depends on storage backend from where PVC are allocated. In our tests, only OpenShift Data Foundation v4 (OCS v4) was able to satisfy the number of pods per node discussed in this document. When there are a large number of active projects, etcd might suffer from poor performance if the keyspace grows excessively large and exceeds the space quota. Periodic maintenance of etcd, including defragmentation, is highly recommended to free etcd storage. There are a number of control loops in the system that must iterate over all objects in a given namespace as a reaction to some changes in state. 
Having a large number of objects of a given type in a single namespace can make those loops expensive and slow down processing given state changes. The limit assumes that the system has enough CPU, memory, and disk to satisfy the application requirements. Each service port and each service back-end has a corresponding entry in iptables. The number of back-ends of a given service impact the size of the endpoints objects, which impacts the size of data that is being sent all over the system. OpenShift Container Platform has a limit of 512 total custom resource definitions (CRD), including those installed by OpenShift Container Platform, products integrating with OpenShift Container Platform and user created CRDs. If there are more than 512 CRDs created, then there is a possibility that oc commands requests may be throttled. Note Red Hat does not provide direct guidance on sizing your OpenShift Container Platform cluster. This is because determining whether your cluster is within the supported bounds of OpenShift Container Platform requires careful consideration of all the multidimensional factors that limit the cluster scale. 8.2. OpenShift Container Platform environment and configuration on which the cluster maximums are tested 8.2.1. AWS cloud platform Node Flavor vCPU RAM(GiB) Disk type Disk size(GiB)/IOS Count Region Control plane/etcd [1] r5.4xlarge 16 128 gp3 220 3 us-west-2 Infra [2] m5.12xlarge 48 192 gp3 100 3 us-west-2 Workload [3] m5.4xlarge 16 64 gp3 500 [4] 1 us-west-2 Compute m5.2xlarge 8 32 gp3 100 3/25/250/500 [5] us-west-2 gp3 disks with a baseline performance of 3000 IOPS and 125 MiB per second are used for control plane/etcd nodes because etcd is latency sensitive. gp3 volumes do not use burst performance. Infra nodes are used to host Monitoring, Ingress, and Registry components to ensure they have enough resources to run at large scale. Workload node is dedicated to run performance and scalability workload generators. Larger disk size is used so that there is enough space to store the large amounts of data that is collected during the performance and scalability test run. Cluster is scaled in iterations and performance and scalability tests are executed at the specified node counts. 8.2.2. IBM Power platform Node vCPU RAM(GiB) Disk type Disk size(GiB)/IOS Count Control plane/etcd [1] 16 32 io1 120 / 10 IOPS per GiB 3 Infra [2] 16 64 gp2 120 2 Workload [3] 16 256 gp2 120 [4] 1 Compute 16 64 gp2 120 2 to 100 [5] io1 disks with 120 / 10 IOPS per GiB are used for control plane/etcd nodes as etcd is I/O intensive and latency sensitive. Infra nodes are used to host Monitoring, Ingress, and Registry components to ensure they have enough resources to run at large scale. Workload node is dedicated to run performance and scalability workload generators. Larger disk size is used so that there is enough space to store the large amounts of data that is collected during the performance and scalability test run. Cluster is scaled in iterations. 8.2.3. IBM Z platform Node vCPU [4] RAM(GiB) [5] Disk type Disk size(GiB)/IOS Count Control plane/etcd [1,2] 8 32 ds8k 300 / LCU 1 3 Compute [1,3] 8 32 ds8k 150 / LCU 2 4 nodes (scaled to 100/250/500 pods per node) Nodes are distributed between two logical control units (LCUs) to optimize disk I/O load of the control plane/etcd nodes as etcd is I/O intensive and latency sensitive. Etcd I/O demand should not interfere with other workloads. Four compute nodes are used for the tests running several iterations with 100/250/500 pods at the same time. 
First, idling pods were used to evaluate whether pods can be instantiated. Next, a network and CPU demanding client/server workload was used to evaluate the stability of the system under stress. Client and server pods were pairwise deployed and each pair was spread over two compute nodes. No separate workload node was used. The workload simulates a microservice workload between two compute nodes. Physical number of processors used is six Integrated Facilities for Linux (IFLs). Total physical memory used is 512 GiB. 8.3. How to plan your environment according to tested cluster maximums Important Oversubscribing the physical resources on a node affects resource guarantees the Kubernetes scheduler makes during pod placement. Learn what measures you can take to avoid memory swapping. Some of the tested maximums are stretched only in a single dimension. They will vary when many objects are running on the cluster. The numbers noted in this documentation are based on Red Hat's test methodology, setup, configuration, and tunings. These numbers can vary based on your own individual setup and environments. While planning your environment, determine how many pods are expected to fit per node: The current maximum number of pods per node is 250. However, the number of pods that fit on a node is dependent on the application itself. Consider the application's memory, CPU, and storage requirements, as described in How to plan your environment according to application requirements . Example scenario If you want to scope your cluster for 2200 pods per cluster, you would need at least five nodes, assuming that there are 500 maximum pods per node: If you increase the number of nodes to 20, then the pod distribution changes to 110 pods per node: Where: 8.4. How to plan your environment according to application requirements Consider an example application environment: Pod type Pod quantity Max memory CPU cores Persistent storage apache 100 500 MB 0.5 1 GB node.js 200 1 GB 1 1 GB postgresql 100 1 GB 2 10 GB JBoss EAP 100 1 GB 1 1 GB Extrapolated requirements: 550 CPU cores, 450GB RAM, and 1.4TB storage. Instance size for nodes can be modulated up or down, depending on your preference. Nodes are often resource overcommitted. In this deployment scenario, you can choose to run additional smaller nodes or fewer larger nodes to provide the same amount of resources. Factors such as operational agility and cost-per-instance should be considered. Node type Quantity CPUs RAM (GB) Nodes (option 1) 100 4 16 Nodes (option 2) 50 8 32 Nodes (option 3) 25 16 64 Some applications lend themselves well to overcommitted environments, and some do not. Most Java applications and applications that use huge pages are examples of applications that would not allow for overcommitment. That memory cannot be used for other applications. In the example above, the environment would be roughly 30 percent overcommitted, a common ratio. The application pods can access a service either by using environment variables or DNS. If environment variables are used, the kubelet injects the variables for each active service when a pod is run on a node. A cluster-aware DNS server watches the Kubernetes API for new services and creates a set of DNS records for each one. If DNS is enabled throughout your cluster, then all pods should automatically be able to resolve services by their DNS name. Service discovery using DNS can be used if you must go beyond 5000 services.
When using environment variables for service discovery, the argument list exceeds the allowed length after 5000 services in a namespace, then the pods and deployments will start failing. Disable the service links in the deployment's service specification file to overcome this: --- apiVersion: template.openshift.io/v1 kind: Template metadata: name: deployment-config-template creationTimestamp: annotations: description: This template will create a deploymentConfig with 1 replica, 4 env vars and a service. tags: '' objects: - apiVersion: apps.openshift.io/v1 kind: DeploymentConfig metadata: name: deploymentconfigUSD{IDENTIFIER} spec: template: metadata: labels: name: replicationcontrollerUSD{IDENTIFIER} spec: enableServiceLinks: false containers: - name: pauseUSD{IDENTIFIER} image: "USD{IMAGE}" ports: - containerPort: 8080 protocol: TCP env: - name: ENVVAR1_USD{IDENTIFIER} value: "USD{ENV_VALUE}" - name: ENVVAR2_USD{IDENTIFIER} value: "USD{ENV_VALUE}" - name: ENVVAR3_USD{IDENTIFIER} value: "USD{ENV_VALUE}" - name: ENVVAR4_USD{IDENTIFIER} value: "USD{ENV_VALUE}" resources: {} imagePullPolicy: IfNotPresent capabilities: {} securityContext: capabilities: {} privileged: false restartPolicy: Always serviceAccount: '' replicas: 1 selector: name: replicationcontrollerUSD{IDENTIFIER} triggers: - type: ConfigChange strategy: type: Rolling - apiVersion: v1 kind: Service metadata: name: serviceUSD{IDENTIFIER} spec: selector: name: replicationcontrollerUSD{IDENTIFIER} ports: - name: serviceportUSD{IDENTIFIER} protocol: TCP port: 80 targetPort: 8080 clusterIP: '' type: ClusterIP sessionAffinity: None status: loadBalancer: {} parameters: - name: IDENTIFIER description: Number to append to the name of resources value: '1' required: true - name: IMAGE description: Image to use for deploymentConfig value: gcr.io/google-containers/pause-amd64:3.0 required: false - name: ENV_VALUE description: Value to use for environment variables generate: expression from: "[A-Za-z0-9]{255}" required: false labels: template: deployment-config-template The number of application pods that can run in a namespace is dependent on the number of services and the length of the service name when the environment variables are used for service discovery. ARG_MAX on the system defines the maximum argument length for a new process and it is set to 2097152 KiB by default. The Kubelet injects environment variables in to each pod scheduled to run in the namespace including: <SERVICE_NAME>_SERVICE_HOST=<IP> <SERVICE_NAME>_SERVICE_PORT=<PORT> <SERVICE_NAME>_PORT=tcp://<IP>:<PORT> <SERVICE_NAME>_PORT_<PORT>_TCP=tcp://<IP>:<PORT> <SERVICE_NAME>_PORT_<PORT>_TCP_PROTO=tcp <SERVICE_NAME>_PORT_<PORT>_TCP_PORT=<PORT> <SERVICE_NAME>_PORT_<PORT>_TCP_ADDR=<ADDR> The pods in the namespace will start to fail if the argument length exceeds the allowed value and the number of characters in a service name impacts it. For example, in a namespace with 5000 services, the limit on the service name is 33 characters, which enables you to run 5000 pods in the namespace. | [
"required pods per cluster / pods per node = total number of nodes needed",
"2200 / 500 = 4.4",
"2200 / 20 = 110",
"required pods per cluster / total number of nodes = expected pods per node",
"--- apiVersion: template.openshift.io/v1 kind: Template metadata: name: deployment-config-template creationTimestamp: annotations: description: This template will create a deploymentConfig with 1 replica, 4 env vars and a service. tags: '' objects: - apiVersion: apps.openshift.io/v1 kind: DeploymentConfig metadata: name: deploymentconfigUSD{IDENTIFIER} spec: template: metadata: labels: name: replicationcontrollerUSD{IDENTIFIER} spec: enableServiceLinks: false containers: - name: pauseUSD{IDENTIFIER} image: \"USD{IMAGE}\" ports: - containerPort: 8080 protocol: TCP env: - name: ENVVAR1_USD{IDENTIFIER} value: \"USD{ENV_VALUE}\" - name: ENVVAR2_USD{IDENTIFIER} value: \"USD{ENV_VALUE}\" - name: ENVVAR3_USD{IDENTIFIER} value: \"USD{ENV_VALUE}\" - name: ENVVAR4_USD{IDENTIFIER} value: \"USD{ENV_VALUE}\" resources: {} imagePullPolicy: IfNotPresent capabilities: {} securityContext: capabilities: {} privileged: false restartPolicy: Always serviceAccount: '' replicas: 1 selector: name: replicationcontrollerUSD{IDENTIFIER} triggers: - type: ConfigChange strategy: type: Rolling - apiVersion: v1 kind: Service metadata: name: serviceUSD{IDENTIFIER} spec: selector: name: replicationcontrollerUSD{IDENTIFIER} ports: - name: serviceportUSD{IDENTIFIER} protocol: TCP port: 80 targetPort: 8080 clusterIP: '' type: ClusterIP sessionAffinity: None status: loadBalancer: {} parameters: - name: IDENTIFIER description: Number to append to the name of resources value: '1' required: true - name: IMAGE description: Image to use for deploymentConfig value: gcr.io/google-containers/pause-amd64:3.0 required: false - name: ENV_VALUE description: Value to use for environment variables generate: expression from: \"[A-Za-z0-9]{255}\" required: false labels: template: deployment-config-template"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/scalability_and_performance/planning-your-environment-according-to-object-maximums |
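The sizing formulas and the ARG_MAX discussion above can be worked through with a short shell sketch. The pod counts, service name length, and per-service byte overhead below are illustrative assumptions, not measured values; only getconf ARG_MAX reflects the actual host.

#!/usr/bin/env bash
# Sketch: apply the node-count formulas from this chapter to a target pod count.
required_pods=2200
max_pods_per_node=500

# required pods per cluster / pods per node = total number of nodes needed
nodes_needed=$(( (required_pods + max_pods_per_node - 1) / max_pods_per_node ))
echo "Nodes needed for ${required_pods} pods at ${max_pods_per_node} pods/node: ${nodes_needed}"

# required pods per cluster / total number of nodes = expected pods per node
total_nodes=20
echo "Pods per node with ${total_nodes} nodes: $(( required_pods / total_nodes ))"

# Loose estimate of the environment-variable payload the kubelet injects per pod:
# roughly 7 variables per service, each carrying the service name plus some fixed text.
services=5000
name_len=33
bytes_per_service=$(( 7 * (name_len + 40) ))   # assumption for illustration only
echo "Approx. env bytes for ${services} services: $(( services * bytes_per_service ))"
echo "ARG_MAX on this host: $(getconf ARG_MAX)"

Comparing the printed estimate with ARG_MAX shows why long service names, or service counts beyond a few thousand, push a namespace toward the failure mode described above and why DNS-based service discovery becomes the better choice.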
Chapter 4. Network considerations | Chapter 4. Network considerations Review the strategies for redirecting your application network traffic after migration. 4.1. DNS considerations The DNS domain of the target cluster is different from the domain of the source cluster. By default, applications get FQDNs of the target cluster after migration. To preserve the source DNS domain of migrated applications, select one of the two options described below. 4.1.1. Isolating the DNS domain of the target cluster from the clients You can allow the clients' requests sent to the DNS domain of the source cluster to reach the DNS domain of the target cluster without exposing the target cluster to the clients. Procedure Place an exterior network component, such as an application load balancer or a reverse proxy, between the clients and the target cluster. Update the application FQDN on the source cluster in the DNS server to return the IP address of the exterior network component. Configure the network component to send requests received for the application in the source domain to the load balancer in the target cluster domain. Create a wildcard DNS record for the *.apps.source.example.com domain that points to the IP address of the load balancer of the source cluster. Create a DNS record for each application that points to the IP address of the exterior network component in front of the target cluster. A specific DNS record has higher priority than a wildcard record, so no conflict arises when the application FQDN is resolved. Note The exterior network component must terminate all secure TLS connections. If the connections pass through to the target cluster load balancer, the FQDN of the target application is exposed to the client and certificate errors occur. The applications must not return links referencing the target cluster domain to the clients. Otherwise, parts of the application might not load or work properly. 4.1.2. Setting up the target cluster to accept the source DNS domain You can set up the target cluster to accept requests for a migrated application in the DNS domain of the source cluster. Procedure For both non-secure HTTP access and secure HTTPS access, perform the following steps: Create a route in the target cluster's project that is configured to accept requests addressed to the application's FQDN in the source cluster: USD oc expose svc <app1-svc> --hostname <app1.apps.source.example.com> \ -n <app1-namespace> With this new route in place, the server accepts any request for that FQDN and sends it to the corresponding application pods. In addition, when you migrate the application, another route is created in the target cluster domain. Requests reach the migrated application using either of these hostnames. Create a DNS record with your DNS provider that points the application's FQDN in the source cluster to the IP address of the default load balancer of the target cluster. This will redirect traffic away from your source cluster to your target cluster. The FQDN of the application resolves to the load balancer of the target cluster. The default Ingress Controller router accept requests for that FQDN because a route for that hostname is exposed. For secure HTTPS access, perform the following additional step: Replace the x509 certificate of the default Ingress Controller created during the installation process with a custom certificate. Configure this certificate to include the wildcard DNS domains for both the source and target clusters in the subjectAltName field. 
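One way to produce such a certificate is sketched below. The domain names, file names, and secret name are placeholders, and the exact signing step depends on your CA; the openssl -addext option requires OpenSSL 1.1.1 or later. See the linked procedure for the complete, supported steps.

# Sketch: request a certificate whose subjectAltName covers both wildcard domains.
openssl req -new -newkey rsa:4096 -nodes \
  -keyout custom-ingress.key -out custom-ingress.csr \
  -subj "/CN=*.apps.target.example.com" \
  -addext "subjectAltName=DNS:*.apps.source.example.com,DNS:*.apps.target.example.com"

# After the CA signs the CSR (include the full chain in custom-ingress.crt),
# store it and point the default Ingress Controller at it:
oc create secret tls custom-ingress-cert \
  --cert=custom-ingress.crt --key=custom-ingress.key -n openshift-ingress
oc patch ingresscontroller.operator default -n openshift-ingress-operator \
  --type=merge -p '{"spec":{"defaultCertificate":{"name":"custom-ingress-cert"}}}'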
The new certificate is valid for securing connections made using either DNS domain. Additional resources See Replacing the default ingress certificate for more information. 4.2. Network traffic redirection strategies After a successful migration, you must redirect network traffic of your stateless applications from the source cluster to the target cluster. The strategies for redirecting network traffic are based on the following assumptions: The application pods are running on both the source and target clusters. Each application has a route that contains the source cluster hostname. The route with the source cluster hostname contains a CA certificate. For HTTPS, the target router CA certificate contains a Subject Alternative Name for the wildcard DNS record of the source cluster. Consider the following strategies and select the one that meets your objectives. Redirecting all network traffic for all applications at the same time Change the wildcard DNS record of the source cluster to point to the target cluster router's virtual IP address (VIP). This strategy is suitable for simple applications or small migrations. Redirecting network traffic for individual applications Create a DNS record for each application with the source cluster hostname pointing to the target cluster router's VIP. This DNS record takes precedence over the source cluster wildcard DNS record. Redirecting network traffic gradually for individual applications Create a proxy that can direct traffic to both the source cluster router's VIP and the target cluster router's VIP, for each application. Create a DNS record for each application with the source cluster hostname pointing to the proxy. Configure the proxy entry for the application to route a percentage of the traffic to the target cluster router's VIP and the rest of the traffic to the source cluster router's VIP. Gradually increase the percentage of traffic that you route to the target cluster router's VIP until all the network traffic is redirected. User-based redirection of traffic for individual applications Using this strategy, you can filter TCP/IP headers of user requests to redirect network traffic for predefined groups of users. This allows you to test the redirection process on specific populations of users before redirecting the entire network traffic. Create a proxy that can direct traffic to both the source cluster router's VIP and the target cluster router's VIP, for each application. Create a DNS record for each application with the source cluster hostname pointing to the proxy. Configure the proxy entry for the application to route traffic matching a given header pattern, such as test customers , to the target cluster router's VIP and the rest of the traffic to the source cluster router's VIP. Redirect traffic to the target cluster router's VIP in stages until all the traffic is on the target cluster router's VIP. | [
"oc expose svc <app1-svc> --hostname <app1.apps.source.example.com> -n <app1-namespace>"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/migrating_from_version_3_to_4/planning-considerations-3-4 |
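The gradual redirection strategy described above needs a proxy that can weight traffic between the two router VIPs. A minimal HAProxy sketch in TCP pass-through mode is shown below; the VIP addresses, weights, port, and single-file configuration layout are illustrative assumptions, and production setups usually split the configuration and add health checks. Increasing the target-router weight over successive reloads shifts traffic gradually.

# Sketch: append a weighted pass-through proxy entry for one application.
cat >> /etc/haproxy/haproxy.cfg <<'EOF'
frontend app1_fe
    mode tcp
    bind *:443
    default_backend app1_be

backend app1_be
    mode tcp
    balance roundrobin
    server source_router 192.0.2.10:443 weight 90
    server target_router 198.51.100.10:443 weight 10
EOF
systemctl reload haproxy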
10.8. Configuring VLAN switchport mode | 10.8. Configuring VLAN switchport mode Red Hat Enterprise Linux machines are often used as routers and enable an advanced VLAN configuration on their network interfaces. You need to set switchport mode when the Ethernet interface is connected to a switch and there are VLANs running over the physical interface. A Red Hat Enterprise Linux server or workstation is usually connected to only one VLAN, which makes switchport mode access suitable, and the default setting. In certain scenarios, multiple tagged VLANs use the same physical link, that is Ethernet between the switch and Red Hat Enterprise Linux machine, which requires switchport mode trunk to be configured on both ends. For example, when a Red Hat Enterprise Linux machine is used as a router, the machine needs to forward tagged packets from the various VLANs behind the router to the switch over the same physical Ethernet, and maintain separation between those VLANs. With the setup described, for example, in Section 10.3, "Configure 802.1Q VLAN Tagging Using the Command Line Tool, nmcli" , use the Cisco switchport mode trunk . If you only set an IP address on an interface, use Cisco switchport mode access . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/networking_guide/sec-configure_802_1q_vlan_tagging-configuring-vlan-switchpport-mode |
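For the trunk scenario described above, the Red Hat Enterprise Linux end of the link might be configured as in the following nmcli sketch. The interface name, VLAN IDs, and addresses are placeholders; the matching Cisco port would use switchport mode trunk with an appropriate allowed VLAN list.

# Sketch: RHEL end of an 802.1Q trunk carrying VLANs 10 and 20 over enp1s0.
nmcli connection add type vlan con-name vlan10 ifname enp1s0.10 dev enp1s0 id 10 \
    ipv4.method manual ipv4.addresses 192.0.2.1/24
nmcli connection add type vlan con-name vlan20 ifname enp1s0.20 dev enp1s0 id 20 \
    ipv4.method manual ipv4.addresses 198.51.100.1/24
nmcli connection up vlan10
nmcli connection up vlan20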
3.5.4. Host Storage | 3.5.4. Host Storage Disk images can be stored on a range of local and remote storage technologies connected to the host. Image files Image files can only be stored on a host file system. The image files can be stored on a local file system, such as ext4 or xfs, or a network file system, such as NFS. Tools such as libguestfs can manage, back up, and monitor files. Disk image formats on KVM include: raw Raw image files contain the contents of the disk with no additional metadata. Raw files can either be pre-allocated or sparse, if the host file system allows it. Sparse files allocate host disk space on demand, and are therefore a form of thin provisioning. Pre-allocated files are fully provisioned but have higher performance than sparse files. Raw files are desirable when disk I/O performance is critical and transferring the image file over a network is rarely necessary. qcow2 qcow2 image files offer a number of advanced disk image features, including backing files, snapshots, compression, and encryption. They can be used to instantiate virtual machines from template images. qcow2 files are typically more efficient to transfer over a network, because only sectors written by the virtual machine are allocated in the image. LVM volumes Logical volumes (LVs) can be used for disk images and managed using the system's LVM tools. LVM offers higher performance than file systems because of its simpler block storage model. LVM thin provisioning offers snapshots and efficient space usage for LVM volumes, and can be used as an alternative to migrating to qcow2. Host devices Host devices such as physical CD-ROMs, raw disks, and logical unit numbers (LUNs) can be presented to the guest. This enables a guest to use storage area network (SAN) or iSCSI LUNs, as well as local CD-ROM media, with good performance. Host devices can be used when storage management is done on a SAN instead of on hosts. Distributed storage systems Gluster volumes can be used as disk images. This enables high-performance clustered storage over the network. Red Hat Enterprise Linux 6.5 and above includes native support for creating virtual machines with GlusterFS. This enables a KVM host to boot virtual machine images from GlusterFS volumes, and to use images from a GlusterFS volume as data disks for virtual machines. When compared to GlusterFS FUSE, the native support in KVM delivers higher performance. Note For more information on storage and virtualization, refer to the Red Hat Enterprise Linux 6 Virtualization Administration Guide . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_getting_started_guide/sec-virtualization_getting_started-products-storage-host-devices |
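The image-file formats described above map onto a handful of common qemu-img operations, sketched below. Paths and sizes are illustrative, and /var/lib/libvirt/images is only the usual default storage pool location.

# Sketch: create and convert disk images in the formats discussed above.
qemu-img create -f raw /var/lib/libvirt/images/guest1.img 20G        # sparse raw file
fallocate -l 20G /var/lib/libvirt/images/guest2.img                  # pre-allocated raw file
qemu-img create -f qcow2 /var/lib/libvirt/images/guest3.qcow2 20G    # qcow2 image
qemu-img create -f qcow2 -b /var/lib/libvirt/images/template.qcow2 \
    /var/lib/libvirt/images/clone1.qcow2                             # clone backed by a template
qemu-img convert -f raw -O qcow2 \
    /var/lib/libvirt/images/guest1.img /var/lib/libvirt/images/guest1.qcow2
qemu-img info /var/lib/libvirt/images/guest3.qcow2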
Chapter 10. Log storage | Chapter 10. Log storage 10.1. About log storage You can use an internal Loki or Elasticsearch log store on your cluster for storing logs, or you can use a ClusterLogForwarder custom resource (CR) to forward logs to an external store. 10.1.1. Log storage types Loki is a horizontally scalable, highly available, multi-tenant log aggregation system offered as an alternative to Elasticsearch as a log store for the logging. Elasticsearch indexes incoming log records completely during ingestion. Loki only indexes a few fixed labels during ingestion and defers more complex parsing until after the logs have been stored. This means Loki can collect logs more quickly. 10.1.1.1. About the Elasticsearch log store The logging Elasticsearch instance is optimized and tested for short term storage, approximately seven days. If you want to retain your logs over a longer term, it is recommended you move the data to a third-party storage system. Elasticsearch organizes the log data from Fluentd into datastores, or indices , then subdivides each index into multiple pieces called shards , which it spreads across a set of Elasticsearch nodes in an Elasticsearch cluster. You can configure Elasticsearch to make copies of the shards, called replicas , which Elasticsearch also spreads across the Elasticsearch nodes. The ClusterLogging custom resource (CR) allows you to specify how the shards are replicated to provide data redundancy and resilience to failure. You can also specify how long the different types of logs are retained using a retention policy in the ClusterLogging CR. Note The number of primary shards for the index templates is equal to the number of Elasticsearch data nodes. The Red Hat OpenShift Logging Operator and companion OpenShift Elasticsearch Operator ensure that each Elasticsearch node is deployed using a unique deployment that includes its own storage volume. You can use a ClusterLogging custom resource (CR) to increase the number of Elasticsearch nodes, as needed. See the Elasticsearch documentation for considerations involved in configuring storage. Note A highly-available Elasticsearch environment requires at least three Elasticsearch nodes, each on a different host. Role-based access control (RBAC) applied on the Elasticsearch indices enables the controlled access of the logs to the developers. Administrators can access all logs and developers can access only the logs in their projects. 10.1.2. Querying log stores You can query Loki by using the LogQL log query language . 10.1.3. Additional resources Loki components documentation Loki Object Storage documentation 10.2. Installing log storage You can use the OpenShift CLI ( oc ) or the OpenShift Container Platform web console to deploy a log store on your OpenShift Container Platform cluster. Note The Logging 5.9 release does not contain an updated version of the OpenShift Elasticsearch Operator. If you currently use the OpenShift Elasticsearch Operator released with Logging 5.8, it will continue to work with Logging until the EOL of Logging 5.8. As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator. For more information on the Logging lifecycle dates, see Platform Agnostic Operators . 10.2.1. Deploying a Loki log store You can use the Loki Operator to deploy an internal Loki log store on your OpenShift Container Platform cluster. 
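As a brief aside on the LogQL note in section 10.1.2: once the LokiStack described in the following procedures is running and its gateway is exposed through a route, a query from the CLI might look like the sketch below. The route name, the application tenant path, and the label names are assumptions and can differ in your environment.

# Sketch: run a LogQL query against the LokiStack gateway.
TOKEN=$(oc whoami -t)
GATEWAY=$(oc get route -n openshift-logging logging-loki -o jsonpath='{.spec.host}')

curl -G -k -H "Authorization: Bearer ${TOKEN}" \
  "https://${GATEWAY}/api/logs/v1/application/loki/api/v1/query_range" \
  --data-urlencode 'query={kubernetes_namespace_name="my-app"} |= "error"' \
  --data-urlencode 'limit=20'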
After install the Loki Operator, you must configure Loki object storage by creating a secret, and create a LokiStack custom resource (CR). 10.2.1.1. Loki deployment sizing Sizing for Loki follows the format of 1x.<size> where the value 1x is number of instances and <size> specifies performance capabilities. Important It is not possible to change the number 1x for the deployment size. Table 10.1. Loki sizing 1x.demo 1x.extra-small 1x.small 1x.medium Data transfer Demo use only 100GB/day 500GB/day 2TB/day Queries per second (QPS) Demo use only 1-25 QPS at 200ms 25-50 QPS at 200ms 25-75 QPS at 200ms Replication factor None 2 2 2 Total CPU requests None 14 vCPUs 34 vCPUs 54 vCPUs Total CPU requests if using the ruler None 16 vCPUs 42 vCPUs 70 vCPUs Total memory requests None 31Gi 67Gi 139Gi Total memory requests if using the ruler None 35Gi 83Gi 171Gi Total disk requests 40Gi 430Gi 430Gi 590Gi Total disk requests if using the ruler 80Gi 750Gi 750Gi 910Gi 10.2.1.2. Installing the Loki Operator by using the OpenShift Container Platform web console To install and configure logging on your OpenShift Container Platform cluster, additional Operators must be installed. This can be done from the Operator Hub within the web console. OpenShift Container Platform Operators use custom resources (CR) to manage applications and their components. High-level configuration and settings are provided by the user within a CR. The Operator translates high-level directives into low-level actions, based on best practices embedded within the Operator's logic. A custom resource definition (CRD) defines a CR and lists all the configurations available to users of the Operator. Installing an Operator creates the CRDs, which are then used to generate CRs. Prerequisites You have access to a supported object store (AWS S3, Google Cloud Storage, Azure, Swift, Minio, OpenShift Data Foundation). You have administrator permissions. You have access to the OpenShift Container Platform web console. Procedure In the OpenShift Container Platform web console Administrator perspective, go to Operators OperatorHub . Type Loki Operator in the Filter by keyword field. Click Loki Operator in the list of available Operators, and then click Install . Important The Community Loki Operator is not supported by Red Hat. Select stable or stable-x.y as the Update channel . Note The stable channel only provides updates to the most recent release of logging. To continue receiving updates for prior releases, you must change your subscription channel to stable-x.y , where x.y represents the major and minor version of logging you have installed. For example, stable-5.7 . The Loki Operator must be deployed to the global operator group namespace openshift-operators-redhat , so the Installation mode and Installed Namespace are already selected. If this namespace does not already exist, it is created for you. Select Enable operator-recommended cluster monitoring on this namespace. This option sets the openshift.io/cluster-monitoring: "true" label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace. For Update approval select Automatic , then click Install . If the approval strategy in the subscription is set to Automatic , the update process initiates as soon as a new Operator version is available in the selected channel. If the approval strategy is set to Manual , you must manually approve pending updates. Verification Go to Operators Installed Operators . 
Make sure the openshift-logging project is selected. In the Status column, verify that you see green checkmarks with InstallSucceeded and the text Up to date . Note An Operator might display a Failed status before the installation finishes. If the Operator install completes with an InstallSucceeded message, refresh the page. 10.2.1.3. Creating a secret for Loki object storage by using the web console To configure Loki object storage, you must create a secret. You can create a secret by using the OpenShift Container Platform web console. Prerequisites You have administrator permissions. You have access to the OpenShift Container Platform web console. You installed the Loki Operator. Procedure Go to Workloads Secrets in the Administrator perspective of the OpenShift Container Platform web console. From the Create drop-down list, select From YAML . Create a secret that uses the access_key_id and access_key_secret fields to specify your credentials and the bucketnames , endpoint , and region fields to define the object storage location. AWS is used in the following example: Example Secret object apiVersion: v1 kind: Secret metadata: name: logging-loki-s3 namespace: openshift-logging stringData: access_key_id: AKIAIOSFODNN7EXAMPLE access_key_secret: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY bucketnames: s3-bucket-name endpoint: https://s3.eu-central-1.amazonaws.com region: eu-central-1 Additional resources Loki object storage 10.2.1.4. Creating a LokiStack custom resource by using the web console You can create a LokiStack custom resource (CR) by using the OpenShift Container Platform web console. Prerequisites You have administrator permissions. You have access to the OpenShift Container Platform web console. You installed the Loki Operator. Procedure Go to the Operators Installed Operators page. Click the All instances tab. From the Create new drop-down list, select LokiStack . Select YAML view , and then use the following template to create a LokiStack CR: apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging spec: size: 1x.small 2 storage: schemas: - version: v12 effectiveDate: '2022-06-01' secret: name: logging-loki-s3 3 type: s3 4 storageClassName: <storage_class_name> 5 tenants: mode: openshift-logging 6 1 Use the name logging-loki . 2 Specify the deployment size. In the logging 5.8 and later versions, the supported size options for production instances of Loki are 1x.extra-small , 1x.small , or 1x.medium . Important It is not possible to change the number 1x for the deployment size. 3 Specify the secret used for your log storage. 4 Specify the corresponding storage type. 5 Enter the name of a storage class for temporary storage. For best performance, specify a storage class that allocates block storage. Available storage classes for your cluster can be listed by using the oc get storageclasses command. 6 LokiStack defaults to running in multi-tenant mode, which cannot be modified. One tenant is provided for each log type: audit, infrastructure, and application logs. This enables access control for individual users and user groups to different log streams. Click Create . 10.2.1.5. Installing Loki Operator by using the CLI To install and configure logging on your OpenShift Container Platform cluster, additional Operators must be installed. This can be done from the OpenShift Container Platform CLI. OpenShift Container Platform Operators use custom resources (CR) to manage applications and their components. 
High-level configuration and settings are provided by the user within a CR. The Operator translates high-level directives into low-level actions, based on best practices embedded within the Operator's logic. A custom resource definition (CRD) defines a CR and lists all the configurations available to users of the Operator. Installing an Operator creates the CRDs, which are then used to generate CRs. Prerequisites You have administrator permissions. You installed the OpenShift CLI ( oc ). You have access to a supported object store. For example: AWS S3, Google Cloud Storage, Azure, Swift, Minio, or OpenShift Data Foundation. Procedure Create a Subscription object: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat 1 spec: channel: stable 2 name: loki-operator source: redhat-operators 3 sourceNamespace: openshift-marketplace 1 You must specify the openshift-operators-redhat namespace. 2 Specify stable , or stable-5.<y> as the channel. 3 Specify redhat-operators . If your OpenShift Container Platform cluster is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object you created when you configured the Operator Lifecycle Manager (OLM). Apply the Subscription object: USD oc apply -f <filename>.yaml 10.2.1.6. Creating a secret for Loki object storage by using the CLI To configure Loki object storage, you must create a secret. You can do this by using the OpenShift CLI ( oc ). Prerequisites You have administrator permissions. You installed the Loki Operator. You installed the OpenShift CLI ( oc ). Procedure Create a secret in the directory that contains your certificate and key files by running the following command: USD oc create secret generic -n openshift-logging <your_secret_name> \ --from-file=tls.key=<your_key_file> --from-file=tls.crt=<your_crt_file> --from-file=ca-bundle.crt=<your_bundle_file> --from-literal=username=<your_username> --from-literal=password=<your_password> Note Use generic or opaque secrets for best results. Verification Verify that a secret was created by running the following command: USD oc get secrets Additional resources Loki object storage 10.2.1.7. Creating a LokiStack custom resource by using the CLI You can create a LokiStack custom resource (CR) by using the OpenShift CLI ( oc ). Prerequisites You have administrator permissions. You installed the Loki Operator. You installed the OpenShift CLI ( oc ). Procedure Create a LokiStack CR: Example LokiStack CR apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: size: 1x.small 1 storage: schemas: - version: v12 effectiveDate: "2022-06-01" secret: name: logging-loki-s3 2 type: s3 3 storageClassName: <storage_class_name> 4 tenants: mode: openshift-logging 5 1 Specify the deployment size. In the logging 5.8 and later versions, the supported size options for production instances of Loki are 1x.extra-small , 1x.small , or 1x.medium . Important It is not possible to change the number 1x for the deployment size. 2 Specify the name of your log store secret. 3 Specify the type of your log store secret. 4 Specify the name of a storage class for temporary storage. For best performance, specify a storage class that allocates block storage. Available storage classes for your cluster can be listed by using the oc get storageclasses command. 5 LokiStack defaults to running in multi-tenant mode, which cannot be modified. 
One tenant is provided for each log type: audit, infrastructure, and application logs. This enables access control for individual users and user groups to different log streams. Apply the LokiStack CR by running the following command: USD oc apply -f <filename>.yaml Verification Verify the installation by listing the pods in the openshift-logging project by running the following command and observing the output: USD oc get pods -n openshift-logging Confirm that you see several pods for components of the logging, similar to the following list: Example output NAME READY STATUS RESTARTS AGE cluster-logging-operator-78fddc697-mnl82 1/1 Running 0 14m collector-6cglq 2/2 Running 0 45s collector-8r664 2/2 Running 0 45s collector-8z7px 2/2 Running 0 45s collector-pdxl9 2/2 Running 0 45s collector-tc9dx 2/2 Running 0 45s collector-xkd76 2/2 Running 0 45s logging-loki-compactor-0 1/1 Running 0 8m2s logging-loki-distributor-b85b7d9fd-25j9g 1/1 Running 0 8m2s logging-loki-distributor-b85b7d9fd-xwjs6 1/1 Running 0 8m2s logging-loki-gateway-7bb86fd855-hjhl4 2/2 Running 0 8m2s logging-loki-gateway-7bb86fd855-qjtlb 2/2 Running 0 8m2s logging-loki-index-gateway-0 1/1 Running 0 8m2s logging-loki-index-gateway-1 1/1 Running 0 7m29s logging-loki-ingester-0 1/1 Running 0 8m2s logging-loki-ingester-1 1/1 Running 0 6m46s logging-loki-querier-f5cf9cb87-9fdjd 1/1 Running 0 8m2s logging-loki-querier-f5cf9cb87-fp9v5 1/1 Running 0 8m2s logging-loki-query-frontend-58c579fcb7-lfvbc 1/1 Running 0 8m2s logging-loki-query-frontend-58c579fcb7-tjf9k 1/1 Running 0 8m2s logging-view-plugin-79448d8df6-ckgmx 1/1 Running 0 46s 10.2.2. Loki object storage The Loki Operator supports AWS S3 , as well as other S3 compatible object stores such as Minio and OpenShift Data Foundation . Azure , GCS , and Swift are also supported. The recommended nomenclature for Loki storage is logging-loki- <your_storage_provider> . The following table shows the type values within the LokiStack custom resource (CR) for each storage provider. For more information, see the section on your storage provider. Table 10.2. Secret type quick reference Storage provider Secret type value AWS s3 Azure azure Google Cloud gcs Minio s3 OpenShift Data Foundation s3 Swift swift 10.2.2.1. AWS storage Prerequisites You installed the Loki Operator. You installed the OpenShift CLI ( oc ). You created a bucket on AWS. You created an AWS IAM Policy and IAM User . Procedure Create an object storage secret with the name logging-loki-aws by running the following command: USD oc create secret generic logging-loki-aws \ --from-literal=bucketnames="<bucket_name>" \ --from-literal=endpoint="<aws_bucket_endpoint>" \ --from-literal=access_key_id="<aws_access_key_id>" \ --from-literal=access_key_secret="<aws_access_key_secret>" \ --from-literal=region="<aws_region_of_your_bucket>" 10.2.2.2. Azure storage Prerequisites You installed the Loki Operator. You installed the OpenShift CLI ( oc ). You created a bucket on Azure. Procedure Create an object storage secret with the name logging-loki-azure by running the following command: USD oc create secret generic logging-loki-azure \ --from-literal=container="<azure_container_name>" \ --from-literal=environment="<azure_environment>" \ 1 --from-literal=account_name="<azure_account_name>" \ --from-literal=account_key="<azure_account_key>" 1 Supported environment values are AzureGlobal , AzureChinaCloud , AzureGermanCloud , or AzureUSGovernment . 10.2.2.3. Google Cloud Platform storage Prerequisites You installed the Loki Operator. 
You installed the OpenShift CLI ( oc ). You created a project on Google Cloud Platform (GCP). You created a bucket in the same project. You created a service account in the same project for GCP authentication. Procedure Copy the service account credentials received from GCP into a file called key.json . Create an object storage secret with the name logging-loki-gcs by running the following command: USD oc create secret generic logging-loki-gcs \ --from-literal=bucketname="<bucket_name>" \ --from-file=key.json="<path/to/key.json>" 10.2.2.4. Minio storage Prerequisites You installed the Loki Operator. You installed the OpenShift CLI ( oc ). You have Minio deployed on your cluster. You created a bucket on Minio. Procedure Create an object storage secret with the name logging-loki-minio by running the following command: USD oc create secret generic logging-loki-minio \ --from-literal=bucketnames="<bucket_name>" \ --from-literal=endpoint="<minio_bucket_endpoint>" \ --from-literal=access_key_id="<minio_access_key_id>" \ --from-literal=access_key_secret="<minio_access_key_secret>" 10.2.2.5. OpenShift Data Foundation storage Prerequisites You installed the Loki Operator. You installed the OpenShift CLI ( oc ). You deployed OpenShift Data Foundation . You configured your OpenShift Data Foundation cluster for object storage . Procedure Create an ObjectBucketClaim custom resource in the openshift-logging namespace: apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: loki-bucket-odf namespace: openshift-logging spec: generateBucketName: loki-bucket-odf storageClassName: openshift-storage.noobaa.io Get bucket properties from the associated ConfigMap object by running the following command: BUCKET_HOST=USD(oc get -n openshift-logging configmap loki-bucket-odf -o jsonpath='{.data.BUCKET_HOST}') BUCKET_NAME=USD(oc get -n openshift-logging configmap loki-bucket-odf -o jsonpath='{.data.BUCKET_NAME}') BUCKET_PORT=USD(oc get -n openshift-logging configmap loki-bucket-odf -o jsonpath='{.data.BUCKET_PORT}') Get bucket access key from the associated secret by running the following command: ACCESS_KEY_ID=USD(oc get -n openshift-logging secret loki-bucket-odf -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d) SECRET_ACCESS_KEY=USD(oc get -n openshift-logging secret loki-bucket-odf -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d) Create an object storage secret with the name logging-loki-odf by running the following command: USD oc create -n openshift-logging secret generic logging-loki-odf \ --from-literal=access_key_id="<access_key_id>" \ --from-literal=access_key_secret="<secret_access_key>" \ --from-literal=bucketnames="<bucket_name>" \ --from-literal=endpoint="https://<bucket_host>:<bucket_port>" 10.2.2.6. Swift storage Prerequisites You installed the Loki Operator. You installed the OpenShift CLI ( oc ). You created a bucket on Swift. 
Procedure Create an object storage secret with the name logging-loki-swift by running the following command: USD oc create secret generic logging-loki-swift \ --from-literal=auth_url="<swift_auth_url>" \ --from-literal=username="<swift_usernameclaim>" \ --from-literal=user_domain_name="<swift_user_domain_name>" \ --from-literal=user_domain_id="<swift_user_domain_id>" \ --from-literal=user_id="<swift_user_id>" \ --from-literal=password="<swift_password>" \ --from-literal=domain_id="<swift_domain_id>" \ --from-literal=domain_name="<swift_domain_name>" \ --from-literal=container_name="<swift_container_name>" You can optionally provide project-specific data, region, or both by running the following command: USD oc create secret generic logging-loki-swift \ --from-literal=auth_url="<swift_auth_url>" \ --from-literal=username="<swift_usernameclaim>" \ --from-literal=user_domain_name="<swift_user_domain_name>" \ --from-literal=user_domain_id="<swift_user_domain_id>" \ --from-literal=user_id="<swift_user_id>" \ --from-literal=password="<swift_password>" \ --from-literal=domain_id="<swift_domain_id>" \ --from-literal=domain_name="<swift_domain_name>" \ --from-literal=container_name="<swift_container_name>" \ --from-literal=project_id="<swift_project_id>" \ --from-literal=project_name="<swift_project_name>" \ --from-literal=project_domain_id="<swift_project_domain_id>" \ --from-literal=project_domain_name="<swift_project_domain_name>" \ --from-literal=region="<swift_region>" 10.2.3. Deploying an Elasticsearch log store You can use the OpenShift Elasticsearch Operator to deploy an internal Elasticsearch log store on your OpenShift Container Platform cluster. Note The Logging 5.9 release does not contain an updated version of the OpenShift Elasticsearch Operator. If you currently use the OpenShift Elasticsearch Operator released with Logging 5.8, it will continue to work with Logging until the EOL of Logging 5.8. As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator. For more information on the Logging lifecycle dates, see Platform Agnostic Operators . 10.2.3.1. Storage considerations for Elasticsearch A persistent volume is required for each Elasticsearch deployment configuration. On OpenShift Container Platform this is achieved using persistent volume claims (PVCs). Note If you use a local volume for persistent storage, do not use a raw block volume, which is described with volumeMode: block in the LocalVolume object. Elasticsearch cannot use raw block volumes. The OpenShift Elasticsearch Operator names the PVCs using the Elasticsearch resource name. Fluentd ships any logs from systemd journal and /var/log/containers/*.log to Elasticsearch. Elasticsearch requires sufficient memory to perform large merge operations. If it does not have enough memory, it becomes unresponsive. To avoid this problem, evaluate how much application log data you need, and allocate approximately double that amount of free storage capacity. By default, when storage capacity is 85% full, Elasticsearch stops allocating new data to the node. At 90%, Elasticsearch attempts to relocate existing shards from that node to other nodes if possible. But if no nodes have a free capacity below 85%, Elasticsearch effectively rejects creating new indices and becomes RED. Note These low and high watermark values are Elasticsearch defaults in the current release. You can modify these default values. 
Although the alerts use the same default values, you cannot change these values in the alerts. 10.2.3.2. Installing the OpenShift Elasticsearch Operator by using the web console The OpenShift Elasticsearch Operator creates and manages the Elasticsearch cluster used by OpenShift Logging. Prerequisites Elasticsearch is a memory-intensive application. Each Elasticsearch node needs at least 16GB of memory for both memory requests and limits, unless you specify otherwise in the ClusterLogging custom resource. The initial set of OpenShift Container Platform nodes might not be large enough to support the Elasticsearch cluster. You must add additional nodes to the OpenShift Container Platform cluster to run with the recommended or higher memory, up to a maximum of 64GB for each Elasticsearch node. Elasticsearch nodes can operate with a lower memory setting, though this is not recommended for production environments. Ensure that you have the necessary persistent storage for Elasticsearch. Note that each Elasticsearch node requires its own storage volume. Note If you use a local volume for persistent storage, do not use a raw block volume, which is described with volumeMode: block in the LocalVolume object. Elasticsearch cannot use raw block volumes. Procedure In the OpenShift Container Platform web console, click Operators OperatorHub . Click OpenShift Elasticsearch Operator from the list of available Operators, and click Install . Ensure that the All namespaces on the cluster is selected under Installation mode . Ensure that openshift-operators-redhat is selected under Installed Namespace . You must specify the openshift-operators-redhat namespace. The openshift-operators namespace might contain Community Operators, which are untrusted and could publish a metric with the same name as OpenShift Container Platform metric, which would cause conflicts. Select Enable operator recommended cluster monitoring on this namespace . This option sets the openshift.io/cluster-monitoring: "true" label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace. Select stable-5.x as the Update channel . Select an Update approval strategy: The Automatic strategy allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available. The Manual strategy requires a user with appropriate credentials to approve the Operator update. Click Install . Verification Verify that the OpenShift Elasticsearch Operator installed by switching to the Operators Installed Operators page. Ensure that OpenShift Elasticsearch Operator is listed in all projects with a Status of Succeeded . 10.2.3.3. Installing the OpenShift Elasticsearch Operator by using the CLI You can use the OpenShift CLI ( oc ) to install the OpenShift Elasticsearch Operator. Prerequisites Ensure that you have the necessary persistent storage for Elasticsearch. Note that each Elasticsearch node requires its own storage volume. Note If you use a local volume for persistent storage, do not use a raw block volume, which is described with volumeMode: block in the LocalVolume object. Elasticsearch cannot use raw block volumes. Elasticsearch is a memory-intensive application. By default, OpenShift Container Platform installs three Elasticsearch nodes with memory requests and limits of 16 GB. This initial set of three OpenShift Container Platform nodes might not have enough memory to run Elasticsearch within your cluster. 
If you experience memory issues that are related to Elasticsearch, add more Elasticsearch nodes to your cluster rather than increasing the memory on existing nodes. You have administrator permissions. You have installed the OpenShift CLI ( oc ). Procedure Create a Namespace object as a YAML file: apiVersion: v1 kind: Namespace metadata: name: openshift-operators-redhat 1 annotations: openshift.io/node-selector: "" labels: openshift.io/cluster-monitoring: "true" 2 1 You must specify the openshift-operators-redhat namespace. To prevent possible conflicts with metrics, configure the Prometheus Cluster Monitoring stack to scrape metrics from the openshift-operators-redhat namespace and not the openshift-operators namespace. The openshift-operators namespace might contain community Operators, which are untrusted and could publish a metric with the same name as metric, which would cause conflicts. 2 String. You must specify this label as shown to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace. Apply the Namespace object by running the following command: USD oc apply -f <filename>.yaml Create an OperatorGroup object as a YAML file: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-operators-redhat namespace: openshift-operators-redhat 1 spec: {} 1 You must specify the openshift-operators-redhat namespace. Apply the OperatorGroup object by running the following command: USD oc apply -f <filename>.yaml Create a Subscription object to subscribe the namespace to the OpenShift Elasticsearch Operator: Example Subscription apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: elasticsearch-operator namespace: openshift-operators-redhat 1 spec: channel: stable-x.y 2 installPlanApproval: Automatic 3 source: redhat-operators 4 sourceNamespace: openshift-marketplace name: elasticsearch-operator 1 You must specify the openshift-operators-redhat namespace. 2 Specify stable , or stable-x.y as the channel. See the following note. 3 Automatic allows the Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available. Manual requires a user with appropriate credentials to approve the Operator update. 4 Specify redhat-operators . If your OpenShift Container Platform cluster is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object created when you configured the Operator Lifecycle Manager (OLM). Note Specifying stable installs the current version of the latest stable release. Using stable with installPlanApproval: "Automatic" automatically upgrades your Operators to the latest stable major and minor release. Specifying stable-x.y installs the current minor version of a specific major release. Using stable-x.y with installPlanApproval: "Automatic" automatically upgrades your Operators to the latest stable minor release within the major release. Apply the subscription by running the following command: USD oc apply -f <filename>.yaml The OpenShift Elasticsearch Operator is installed to the openshift-operators-redhat namespace and copied to each project in the cluster. 
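If you set installPlanApproval to Manual in the subscription above, the installation pauses until the pending install plan is approved. A minimal sketch is shown below; the install plan name is a placeholder taken from the output of the first command.

# Sketch: find and approve a pending install plan for a Manual approval strategy.
oc get installplan -n openshift-operators-redhat

oc patch installplan install-abcde -n openshift-operators-redhat \
  --type merge -p '{"spec":{"approved":true}}'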
Verification Run the following command: USD oc get csv -n --all-namespaces Observe the output and confirm that pods for the OpenShift Elasticsearch Operator exist in each namespace Example output NAMESPACE NAME DISPLAY VERSION REPLACES PHASE default elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded kube-node-lease elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded kube-public elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded kube-system elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded non-destructive-test elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded openshift-apiserver-operator elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded openshift-apiserver elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded ... 10.2.4. Configuring log storage You can configure which log storage type your logging uses by modifying the ClusterLogging custom resource (CR). Prerequisites You have administrator permissions. You have installed the OpenShift CLI ( oc ). You have installed the Red Hat OpenShift Logging Operator and an internal log store that is either the LokiStack or Elasticsearch. You have created a ClusterLogging CR. Note The Logging 5.9 release does not contain an updated version of the OpenShift Elasticsearch Operator. If you currently use the OpenShift Elasticsearch Operator released with Logging 5.8, it will continue to work with Logging until the EOL of Logging 5.8. As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator. For more information on the Logging lifecycle dates, see Platform Agnostic Operators . Procedure Modify the ClusterLogging CR logStore spec: ClusterLogging CR example apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: # ... spec: # ... logStore: type: <log_store_type> 1 elasticsearch: 2 nodeCount: <integer> resources: {} storage: {} redundancyPolicy: <redundancy_type> 3 lokistack: 4 name: {} # ... 1 Specify the log store type. This can be either lokistack or elasticsearch . 2 Optional configuration options for the Elasticsearch log store. 3 Specify the redundancy type. This value can be ZeroRedundancy , SingleRedundancy , MultipleRedundancy , or FullRedundancy . 4 Optional configuration options for LokiStack. Example ClusterLogging CR to specify LokiStack as the log store apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: managementState: Managed logStore: type: lokistack lokistack: name: logging-loki # ... Apply the ClusterLogging CR by running the following command: USD oc apply -f <filename>.yaml 10.3. Configuring the LokiStack log store In logging documentation, LokiStack refers to the logging supported combination of Loki and web proxy with OpenShift Container Platform authentication integration. LokiStack's proxy uses OpenShift Container Platform authentication to enforce multi-tenancy. Loki refers to the log store as either the individual component or an external store. 10.3.1. 
Creating a new group for the cluster-admin user role Important Querying application logs for multiple namespaces as a cluster-admin user, where the sum total of characters of all of the namespaces in the cluster is greater than 5120, results in the error Parse error: input size too long (XXXX > 5120) . For better control over access to logs in LokiStack, make the cluster-admin user a member of the cluster-admin group. If the cluster-admin group does not exist, create it and add the desired users to it. Use the following procedure to create a new group for users with cluster-admin permissions. Procedure Enter the following command to create a new group: USD oc adm groups new cluster-admin Enter the following command to add the desired user to the cluster-admin group: USD oc adm groups add-users cluster-admin <username> Enter the following command to add cluster-admin user role to the group: USD oc adm policy add-cluster-role-to-group cluster-admin cluster-admin 10.3.2. LokiStack behavior during cluster restarts In logging version 5.8 and newer versions, when an OpenShift Container Platform cluster is restarted, LokiStack ingestion and the query path continue to operate within the available CPU and memory resources available for the node. This means that there is no downtime for the LokiStack during OpenShift Container Platform cluster updates. This behavior is achieved by using PodDisruptionBudget resources. The Loki Operator provisions PodDisruptionBudget resources for Loki, which determine the minimum number of pods that must be available per component to ensure normal operations under certain conditions. Additional resources Pod disruption budgets Kubernetes documentation 10.3.3. Configuring Loki to tolerate node failure In the logging 5.8 and later versions, the Loki Operator supports setting pod anti-affinity rules to request that pods of the same component are scheduled on different available nodes in the cluster. Affinity is a property of pods that controls the nodes on which they prefer to be scheduled. Anti-affinity is a property of pods that prevents a pod from being scheduled on a node. In OpenShift Container Platform, pod affinity and pod anti-affinity allow you to constrain which nodes your pod is eligible to be scheduled on based on the key-value labels on other pods. The Operator sets default, preferred podAntiAffinity rules for all Loki components, which includes the compactor , distributor , gateway , indexGateway , ingester , querier , queryFrontend , and ruler components. You can override the preferred podAntiAffinity settings for Loki components by configuring required settings in the requiredDuringSchedulingIgnoredDuringExecution field: Example user settings for the ingester component apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: # ... template: ingester: podAntiAffinity: # ... requiredDuringSchedulingIgnoredDuringExecution: 1 - labelSelector: matchLabels: 2 app.kubernetes.io/component: ingester topologyKey: kubernetes.io/hostname # ... 1 The stanza to define a required rule. 2 The key-value pair (label) that must be matched to apply the rule. Additional resources PodAntiAffinity v1 core Kubernetes documentation Assigning Pods to Nodes Kubernetes documentation Placing pods relative to other pods using affinity and anti-affinity rules 10.3.4. 
Zone aware data replication In the logging 5.8 and later versions, the Loki Operator offers support for zone-aware data replication through pod topology spread constraints. Enabling this feature enhances reliability and safeguards against log loss in the event of a single zone failure. When configuring the deployment size as 1x.extra.small , 1x.small , or 1x.medium, the replication.factor field is automatically set to 2. To ensure proper replication, you need to have at least as many availability zones as the replication factor specifies. While it is possible to have more availability zones than the replication factor, having fewer zones can lead to write failures. Each zone should host an equal number of instances for optimal operation. Example LokiStack CR with zone replication enabled apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: replicationFactor: 2 1 replication: factor: 2 2 zones: - maxSkew: 1 3 topologyKey: topology.kubernetes.io/zone 4 1 Deprecated field, values entered are overwritten by replication.factor . 2 This value is automatically set when deployment size is selected at setup. 3 The maximum difference in number of pods between any two topology domains. The default is 1, and you cannot specify a value of 0. 4 Defines zones in the form of a topology key that corresponds to a node label. 10.3.4.1. Recovering Loki pods from failed zones In OpenShift Container Platform a zone failure happens when specific availability zone resources become inaccessible. Availability zones are isolated areas within a cloud provider's data center, aimed at enhancing redundancy and fault tolerance. If your OpenShift Container Platform cluster isn't configured to handle this, a zone failure can lead to service or data loss. Loki pods are part of a StatefulSet , and they come with Persistent Volume Claims (PVCs) provisioned by a StorageClass object. Each Loki pod and its PVCs reside in the same zone. When a zone failure occurs in a cluster, the StatefulSet controller automatically attempts to recover the affected pods in the failed zone. Warning The following procedure will delete the PVCs in the failed zone, and all data contained therein. To avoid complete data loss the replication factor field of the LokiStack CR should always be set to a value greater than 1 to ensure that Loki is replicating. Prerequisites Logging version 5.8 or later. Verify your LokiStack CR has a replication factor greater than 1. Zone failure detected by the control plane, and nodes in the failed zone are marked by cloud provider integration. The StatefulSet controller automatically attempts to reschedule pods in a failed zone. Because the associated PVCs are also in the failed zone, automatic rescheduling to a different zone does not work. You must manually delete the PVCs in the failed zone to allow successful re-creation of the stateful Loki Pod and its provisioned PVC in the new zone. Procedure List the pods in Pending status by running the following command: oc get pods --field-selector status.phase==Pending -n openshift-logging Example oc get pods output NAME READY STATUS RESTARTS AGE 1 logging-loki-index-gateway-1 0/1 Pending 0 17m logging-loki-ingester-1 0/1 Pending 0 16m logging-loki-ruler-1 0/1 Pending 0 16m 1 These pods are in Pending status because their corresponding PVCs are in the failed zone. 
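Before deleting any PVCs, it can help to confirm which availability zone actually failed and that the Pending pods map to that zone. A minimal check, assuming your nodes carry the standard topology.kubernetes.io/zone label and that your storage driver records node affinity on the persistent volumes; the <pv_name> placeholder comes from the VOLUME column of the oc get pvc output:
# Show each node's zone and status; nodes in the failed zone typically report NotReady
oc get nodes -L topology.kubernetes.io/zone
# List the PVCs together with the persistent volumes they are bound to
oc get pvc -n openshift-logging
# Inspect the node affinity of a bound persistent volume to see which zone it is pinned to
oc describe pv <pv_name> | grep -A 3 'Node Affinity'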
List the PVCs in Pending status by running the following command: oc get pvc -o=json -n openshift-logging | jq '.items[] | select(.status.phase == "Pending") | .metadata.name' -r Example oc get pvc output storage-logging-loki-index-gateway-1 storage-logging-loki-ingester-1 wal-logging-loki-ingester-1 storage-logging-loki-ruler-1 wal-logging-loki-ruler-1 Delete the PVC(s) for a pod by running the following command: oc delete pvc __<pvc_name>__ -n openshift-logging Then delete the pod(s) by running the following command: oc delete pod __<pod_name>__ -n openshift-logging Once these objects have been successfully deleted, they should automatically be rescheduled in an available zone. 10.3.4.1.1. Troubleshooting PVC in a terminating state The PVCs might hang in the terminating state without being deleted, if PVC metadata finalizers are set to kubernetes.io/pv-protection . Removing the finalizers should allow the PVCs to delete successfully. Remove the finalizer for each PVC by running the command below, then retry deletion. oc patch pvc __<pvc_name>__ -p '{"metadata":{"finalizers":null}}' -n openshift-logging Additional resources Topology spread constraints Kubernetes documentation Kubernetes storage documentation . Controlling pod placement by using pod topology spread constraints 10.3.5. Fine grained access for Loki logs In logging 5.8 and later, the Red Hat OpenShift Logging Operator does not grant all users access to logs by default. As an administrator, you must configure your users' access unless the Operator was upgraded and prior configurations are in place. Depending on your configuration and need, you can configure fine grain access to logs using the following: Cluster wide policies Namespace scoped policies Creation of custom admin groups As an administrator, you need to create the role bindings and cluster role bindings appropriate for your deployment. The Red Hat OpenShift Logging Operator provides the following cluster roles: cluster-logging-application-view grants permission to read application logs. cluster-logging-infrastructure-view grants permission to read infrastructure logs. cluster-logging-audit-view grants permission to read audit logs. If you have upgraded from a prior version, an additional cluster role logging-application-logs-reader and associated cluster role binding logging-all-authenticated-application-logs-reader provide backward compatibility, allowing any authenticated user read access in their namespaces. Note Users with access by namespace must provide a namespace when querying application logs. 10.3.5.1. Cluster wide access Cluster role binding resources reference cluster roles, and set permissions cluster wide. Example ClusterRoleBinding kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: logging-all-application-logs-reader roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-logging-application-view 1 subjects: 2 - kind: Group name: system:authenticated apiGroup: rbac.authorization.k8s.io 1 Additional ClusterRoles are cluster-logging-infrastructure-view , and cluster-logging-audit-view . 2 Specifies the users or groups this object applies to. 10.3.5.2. Namespaced access RoleBinding resources can be used with ClusterRole objects to define the namespace a user or group has access to logs for. 
Example RoleBinding kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: allow-read-logs namespace: log-test-0 1 roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-logging-application-view subjects: - kind: User apiGroup: rbac.authorization.k8s.io name: testuser-0 1 Specifies the namespace this RoleBinding applies to. 10.3.5.3. Custom admin group access If you have a large deployment with several users who require broader permissions, you can create a custom group using the adminGroup field. Users who are members of any group specified in the adminGroups field of the LokiStack CR are considered administrators. Administrator users have access to all application logs in all namespaces, if they also get assigned the cluster-logging-application-view role. Example LokiStack CR apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: tenants: mode: openshift-logging 1 openshift: adminGroups: 2 - cluster-admin - custom-admin-group 3 1 Custom admin groups are only available in this mode. 2 Entering an empty list [] value for this field disables admin groups. 3 Overrides the default groups ( system:cluster-admins , cluster-admin , dedicated-admin ) Additional resources Using RBAC to define and apply permissions 10.3.6. Enabling stream-based retention with Loki With Logging version 5.6 and higher, you can configure retention policies based on log streams. Rules for these may be set globally, per tenant, or both. If you configure both, tenant rules apply before global rules. Important If there is no retention period defined on the s3 bucket or in the LokiStack custom resource (CR), then the logs are not pruned and they stay in the s3 bucket forever, which might fill up the s3 storage. Note Although logging version 5.9 and higher supports schema v12, v13 is recommended. To enable stream-based retention, create a LokiStack CR: Example global stream-based retention apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: 1 retention: 2 days: 20 streams: - days: 4 priority: 1 selector: '{kubernetes_namespace_name=~"test.+"}' 3 - days: 1 priority: 1 selector: '{log_type="infrastructure"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: "2020-10-11" version: v11 secret: name: logging-loki-s3 type: aws storageClassName: standard tenants: mode: openshift-logging 1 Sets retention policy for all log streams. Note: This field does not impact the retention period for stored logs in object storage. 2 Retention is enabled in the cluster when this block is added to the CR. 3 Contains the LogQL query used to define the log stream. Example per-tenant stream-based retention apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: retention: days: 20 tenants: 1 application: retention: days: 1 streams: - days: 4 selector: '{kubernetes_namespace_name=~"test.+"}' 2 infrastructure: retention: days: 5 streams: - days: 1 selector: '{kubernetes_namespace_name=~"openshift-cluster.+"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: "2020-10-11" version: v11 secret: name: logging-loki-s3 type: aws storageClassName: standard tenants: mode: openshift-logging 1 Sets retention policy by tenant. Valid tenant types are application , audit , and infrastructure . 
2 Contains the LogQL query used to define the log stream. Apply the LokiStack CR: USD oc apply -f <filename>.yaml Note This is not for managing the retention for stored logs. Global retention periods for stored logs to a supported maximum of 30 days is configured with your object storage. 10.3.7. Troubleshooting Loki rate limit errors If the Log Forwarder API forwards a large block of messages that exceeds the rate limit to Loki, Loki generates rate limit ( 429 ) errors. These errors can occur during normal operation. For example, when adding the logging to a cluster that already has some logs, rate limit errors might occur while the logging tries to ingest all of the existing log entries. In this case, if the rate of addition of new logs is less than the total rate limit, the historical data is eventually ingested, and the rate limit errors are resolved without requiring user intervention. In cases where the rate limit errors continue to occur, you can fix the issue by modifying the LokiStack custom resource (CR). Important The LokiStack CR is not available on Grafana-hosted Loki. This topic does not apply to Grafana-hosted Loki servers. Conditions The Log Forwarder API is configured to forward logs to Loki. Your system sends a block of messages that is larger than 2 MB to Loki. For example: "values":[["1630410392689800468","{\"kind\":\"Event\",\"apiVersion\":\ ....... ...... ...... ...... \"received_at\":\"2021-08-31T11:46:32.800278+00:00\",\"version\":\"1.7.4 1.6.0\"}},\"@timestamp\":\"2021-08-31T11:46:32.799692+00:00\",\"viaq_index_name\":\"audit-write\",\"viaq_msg_id\":\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\",\"log_type\":\"audit\"}"]]}]} After you enter oc logs -n openshift-logging -l component=collector , the collector logs in your cluster show a line containing one of the following error messages: 429 Too Many Requests Ingestion rate limit exceeded Example Vector error message 2023-08-25T16:08:49.301780Z WARN sink{component_kind="sink" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true Example Fluentd error message 2023-08-30 14:52:15 +0000 [warn]: [default_loki_infra] failed to flush the buffer. retry_times=2 next_retry_time=2023-08-30 14:52:19 +0000 chunk="604251225bf5378ed1567231a1c03b8b" error_class=Fluent::Plugin::LokiOutput::LogPostError error="429 Too Many Requests Ingestion rate limit exceeded for user infrastructure (limit: 4194304 bytes/sec) while attempting to ingest '4082' lines totaling '7820025' bytes, reduce log volume or contact your Loki administrator to see if the limit can be increased\n" The error is also visible on the receiving end. For example, in the LokiStack ingester pod: Example Loki ingester error message level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err="rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream Procedure Update the ingestionBurstSize and ingestionRate fields in the LokiStack CR: apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: ingestion: ingestionBurstSize: 16 1 ingestionRate: 8 2 # ... 
1 The ingestionBurstSize field defines the maximum local rate-limited sample size per distributor replica in MB. This value is a hard limit. Set this value to at least the maximum logs size expected in a single push request. Single requests that are larger than the ingestionBurstSize value are not permitted. 2 The ingestionRate field is a soft limit on the maximum amount of ingested samples per second in MB. Rate limit errors occur if the rate of logs exceeds the limit, but the collector retries sending the logs. As long as the total average is lower than the limit, the system recovers and errors are resolved without user intervention. 10.3.8. Configuring Loki to tolerate memberlist creation failure In an OpenShift cluster, administrators generally use a non-private IP network range. As a result, the LokiStack memberlist configuration fails because, by default, it only uses private IP networks. As an administrator, you can select the pod network for the memberlist configuration. You can modify the LokiStack CR to use the podIP in the hashRing spec. To configure the LokiStack CR, use the following command: USD oc patch LokiStack logging-loki -n openshift-logging --type=merge -p '{"spec": {"hashRing":{"memberlist":{"instanceAddrType":"podIP","type": "memberlist"}}}}' Example LokiStack to include podIP apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: # ... hashRing: type: memberlist memberlist: instanceAddrType: podIP # ... 10.3.9. Additional resources Loki components documentation Loki Query Language (LogQL) documentation Grafana Dashboard documentation Loki Object Storage documentation Loki Operator IngestionLimitSpec documentation Loki Storage Schema documentation 10.4. Configuring the Elasticsearch log store You can use Elasticsearch 6 to store and organize log data. You can make modifications to your log store, including: Storage for your Elasticsearch cluster Shard replication across data nodes in the cluster, from full replication to no replication External access to Elasticsearch data 10.4.1. Configuring log storage You can configure which log storage type your logging uses by modifying the ClusterLogging custom resource (CR). Prerequisites You have administrator permissions. You have installed the OpenShift CLI ( oc ). You have installed the Red Hat OpenShift Logging Operator and an internal log store that is either the LokiStack or Elasticsearch. You have created a ClusterLogging CR. Note The Logging 5.9 release does not contain an updated version of the OpenShift Elasticsearch Operator. If you currently use the OpenShift Elasticsearch Operator released with Logging 5.8, it will continue to work with Logging until the EOL of Logging 5.8. As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator. For more information on the Logging lifecycle dates, see Platform Agnostic Operators . Procedure Modify the ClusterLogging CR logStore spec: ClusterLogging CR example apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: # ... spec: # ... logStore: type: <log_store_type> 1 elasticsearch: 2 nodeCount: <integer> resources: {} storage: {} redundancyPolicy: <redundancy_type> 3 lokistack: 4 name: {} # ... 1 Specify the log store type. This can be either lokistack or elasticsearch . 2 Optional configuration options for the Elasticsearch log store. 3 Specify the redundancy type. 
This value can be ZeroRedundancy , SingleRedundancy , MultipleRedundancy , or FullRedundancy . 4 Optional configuration options for LokiStack. Example ClusterLogging CR to specify LokiStack as the log store apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: managementState: Managed logStore: type: lokistack lokistack: name: logging-loki # ... Apply the ClusterLogging CR by running the following command: USD oc apply -f <filename>.yaml 10.4.2. Forwarding audit logs to the log store In a logging deployment, container and infrastructure logs are forwarded to the internal log store defined in the ClusterLogging custom resource (CR) by default. Audit logs are not forwarded to the internal log store by default because this does not provide secure storage. You are responsible for ensuring that the system to which you forward audit logs is compliant with your organizational and governmental regulations, and is properly secured. If this default configuration meets your needs, you do not need to configure a ClusterLogForwarder CR. If a ClusterLogForwarder CR exists, logs are not forwarded to the internal log store unless a pipeline is defined that contains the default output. Procedure To use the Log Forward API to forward audit logs to the internal Elasticsearch instance: Create or edit a YAML file that defines the ClusterLogForwarder CR object: Create a CR to send all log types to the internal Elasticsearch instance. You can use the following example without making any changes: apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: pipelines: 1 - name: all-to-default inputRefs: - infrastructure - application - audit outputRefs: - default 1 A pipeline defines the type of logs to forward using the specified output. The default output forwards logs to the internal Elasticsearch instance. Note You must specify all three types of logs in the pipeline: application, infrastructure, and audit. If you do not specify a log type, those logs are not stored and will be lost. If you have an existing ClusterLogForwarder CR, add a pipeline to the default output for the audit logs. You do not need to define the default output. For example: apiVersion: "logging.openshift.io/v1" kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - name: elasticsearch-insecure type: "elasticsearch" url: http://elasticsearch-insecure.messaging.svc.cluster.local insecure: true - name: elasticsearch-secure type: "elasticsearch" url: https://elasticsearch-secure.messaging.svc.cluster.local secret: name: es-audit - name: secureforward-offcluster type: "fluentdForward" url: https://secureforward.offcluster.com:24224 secret: name: secureforward pipelines: - name: container-logs inputRefs: - application outputRefs: - secureforward-offcluster - name: infra-logs inputRefs: - infrastructure outputRefs: - elasticsearch-insecure - name: audit-logs inputRefs: - audit outputRefs: - elasticsearch-secure - default 1 1 This pipeline sends the audit logs to the internal Elasticsearch instance in addition to an external instance. Additional resources About log collection and forwarding 10.4.3. Configuring log retention time You can configure a retention policy that specifies how long the default Elasticsearch log store keeps indices for each of the three log sources: infrastructure logs, application logs, and audit logs. 
To configure the retention policy, you set a maxAge parameter for each log source in the ClusterLogging custom resource (CR). The CR applies these values to the Elasticsearch rollover schedule, which determines when Elasticsearch deletes the rolled-over indices. Elasticsearch rolls over an index, moving the current index and creating a new index, when an index matches any of the following conditions: The index is older than the rollover.maxAge value in the Elasticsearch CR. The index size is greater than 40 GB x the number of primary shards. The index doc count is greater than 40960 KB x the number of primary shards. Elasticsearch deletes the rolled-over indices based on the retention policy you configure. If you do not create a retention policy for any log sources, logs are deleted after seven days by default. Prerequisites The Red Hat OpenShift Logging Operator and the OpenShift Elasticsearch Operator must be installed. Procedure To configure the log retention time: Edit the ClusterLogging CR to add or modify the retentionPolicy parameter: apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" ... spec: managementState: "Managed" logStore: type: "elasticsearch" retentionPolicy: 1 application: maxAge: 1d infra: maxAge: 7d audit: maxAge: 7d elasticsearch: nodeCount: 3 ... 1 Specify the time that Elasticsearch should retain each log source. Enter an integer and a time designation: weeks(w), hours(h/H), minutes(m) and seconds(s). For example, 1d for one day. Logs older than the maxAge are deleted. By default, logs are retained for seven days. You can verify the settings in the Elasticsearch custom resource (CR). For example, the Red Hat OpenShift Logging Operator updated the following Elasticsearch CR to configure a retention policy that includes settings to roll over active indices for the infrastructure logs every eight hours and the rolled-over indices are deleted seven days after rollover. OpenShift Container Platform checks every 15 minutes to determine if the indices need to be rolled over. apiVersion: "logging.openshift.io/v1" kind: "Elasticsearch" metadata: name: "elasticsearch" spec: ... indexManagement: policies: 1 - name: infra-policy phases: delete: minAge: 7d 2 hot: actions: rollover: maxAge: 8h 3 pollInterval: 15m 4 ... 1 For each log source, the retention policy indicates when to delete and roll over logs for that source. 2 When OpenShift Container Platform deletes the rolled-over indices. This setting is the maxAge you set in the ClusterLogging CR. 3 The index age for OpenShift Container Platform to consider when rolling over the indices. This value is determined from the maxAge you set in the ClusterLogging CR. 4 When OpenShift Container Platform checks if the indices should be rolled over. This setting is the default and cannot be changed. Note Modifying the Elasticsearch CR is not supported. All changes to the retention policies must be made in the ClusterLogging CR. The OpenShift Elasticsearch Operator deploys a cron job to roll over indices for each mapping using the defined policy, scheduled using the pollInterval . USD oc get cronjob Example output NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE elasticsearch-im-app */15 * * * * False 0 <none> 4s elasticsearch-im-audit */15 * * * * False 0 <none> 4s elasticsearch-im-infra */15 * * * * False 0 <none> 4s 10.4.4. Configuring CPU and memory requests for the log store Each component specification allows for adjustments to both the CPU and memory requests. 
You should not have to manually adjust these values as the OpenShift Elasticsearch Operator sets values sufficient for your environment. Note In large-scale clusters, the default memory limit for the Elasticsearch proxy container might not be sufficient, causing the proxy container to be OOMKilled. If you experience this issue, increase the memory requests and limits for the Elasticsearch proxy. Each Elasticsearch node can operate with a lower memory setting though this is not recommended for production deployments. For production use, you should have no less than the default 16Gi allocated to each pod. Preferably you should allocate as much as possible, up to 64Gi per pod. Prerequisites The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: USD oc edit ClusterLogging instance apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" .... spec: logStore: type: "elasticsearch" elasticsearch: 1 resources: limits: 2 memory: "32Gi" requests: 3 cpu: "1" memory: "16Gi" proxy: 4 resources: limits: memory: 100Mi requests: memory: 100Mi 1 Specify the CPU and memory requests for Elasticsearch as needed. If you leave these values blank, the OpenShift Elasticsearch Operator sets default values that should be sufficient for most deployments. The default values are 16Gi for the memory request and 1 for the CPU request. 2 The maximum amount of resources a pod can use. 3 The minimum resources required to schedule a pod. 4 Specify the CPU and memory requests for the Elasticsearch proxy as needed. If you leave these values blank, the OpenShift Elasticsearch Operator sets default values that are sufficient for most deployments. The default values are 256Mi for the memory request and 100m for the CPU request. When adjusting the amount of Elasticsearch memory, the same value should be used for both requests and limits . For example: resources: limits: 1 memory: "32Gi" requests: 2 cpu: "8" memory: "32Gi" 1 The maximum amount of the resource. 2 The minimum amount required. Kubernetes generally adheres the node configuration and does not allow Elasticsearch to use the specified limits. Setting the same value for the requests and limits ensures that Elasticsearch can use the memory you want, assuming the node has the memory available. 10.4.5. Configuring replication policy for the log store You can define how Elasticsearch shards are replicated across data nodes in the cluster. Prerequisites The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: USD oc -n openshift-logging edit ClusterLogging instance apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" .... spec: logStore: type: "elasticsearch" elasticsearch: redundancyPolicy: "SingleRedundancy" 1 1 Specify a redundancy policy for the shards. The change is applied upon saving the changes. FullRedundancy . Elasticsearch fully replicates the primary shards for each index to every data node. This provides the highest safety, but at the cost of the highest amount of disk required and the poorest performance. MultipleRedundancy . Elasticsearch fully replicates the primary shards for each index to half of the data nodes. This provides a good tradeoff between safety and performance. SingleRedundancy . Elasticsearch makes one copy of the primary shards for each index. 
Logs are always available and recoverable as long as at least two data nodes exist. Better performance than MultipleRedundancy, when using 5 or more nodes. You cannot apply this policy on deployments of single Elasticsearch node. ZeroRedundancy . Elasticsearch does not make copies of the primary shards. Logs might be unavailable or lost in the event a node is down or fails. Use this mode when you are more concerned with performance than safety, or have implemented your own disk/PVC backup/restore strategy. Note The number of primary shards for the index templates is equal to the number of Elasticsearch data nodes. 10.4.6. Scaling down Elasticsearch pods Reducing the number of Elasticsearch pods in your cluster can result in data loss or Elasticsearch performance degradation. If you scale down, you should scale down by one pod at a time and allow the cluster to re-balance the shards and replicas. After the Elasticsearch health status returns to green , you can scale down by another pod. Note If your Elasticsearch cluster is set to ZeroRedundancy , you should not scale down your Elasticsearch pods. 10.4.7. Configuring persistent storage for the log store Elasticsearch requires persistent storage. The faster the storage, the faster the Elasticsearch performance. Warning Using NFS storage as a volume or a persistent volume (or via NAS such as Gluster) is not supported for Elasticsearch storage, as Lucene relies on file system behavior that NFS does not supply. Data corruption and other problems can occur. Prerequisites The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. Procedure Edit the ClusterLogging CR to specify that each data node in the cluster is bound to a Persistent Volume Claim. apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" # ... spec: logStore: type: "elasticsearch" elasticsearch: nodeCount: 3 storage: storageClassName: "gp2" size: "200G" This example specifies each data node in the cluster is bound to a Persistent Volume Claim that requests "200G" of AWS General Purpose SSD (gp2) storage. Note If you use a local volume for persistent storage, do not use a raw block volume, which is described with volumeMode: block in the LocalVolume object. Elasticsearch cannot use raw block volumes. 10.4.8. Configuring the log store for emptyDir storage You can use emptyDir with your log store, which creates an ephemeral deployment in which all of a pod's data is lost upon restart. Note When using emptyDir, if log storage is restarted or redeployed, you will lose data. Prerequisites The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. Procedure Edit the ClusterLogging CR to specify emptyDir: spec: logStore: type: "elasticsearch" elasticsearch: nodeCount: 3 storage: {} 10.4.9. Performing an Elasticsearch rolling cluster restart Perform a rolling restart when you change the elasticsearch config map or any of the elasticsearch-* deployment configurations. Also, a rolling restart is recommended if the nodes on which an Elasticsearch pod runs requires a reboot. Prerequisites The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. 
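Before you begin the restart, it can be useful to capture the current cluster health and the list of Elasticsearch pods as a baseline to compare against after each step. A minimal check that reuses the es_util helper shown in the procedure below; <any_es_pod_in_the_cluster> is one of the pods returned by the second command:
# Record the current cluster health; note the status value (green, yellow, or red) and the node count
oc exec <any_es_pod_in_the_cluster> -c elasticsearch -n openshift-logging -- es_util --query=_cluster/health?pretty=true
# Record the current set of Elasticsearch pods
oc get pods -l component=elasticsearch -n openshift-logging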
Procedure To perform a rolling cluster restart: Change to the openshift-logging project: Get the names of the Elasticsearch pods: Scale down the collector pods so they stop sending new logs to Elasticsearch: USD oc -n openshift-logging patch daemonset/collector -p '{"spec":{"template":{"spec":{"nodeSelector":{"logging-infra-collector": "false"}}}}}' Perform a shard synced flush using the OpenShift Container Platform es_util tool to ensure there are no pending operations waiting to be written to disk prior to shutting down: USD oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query="_flush/synced" -XPOST For example: Example output Prevent shard balancing when purposely bringing down nodes using the OpenShift Container Platform es_util tool: For example: Example output {"acknowledged":true,"persistent":{"cluster":{"routing":{"allocation":{"enable":"primaries"}}}},"transient": After the command is complete, for each deployment you have for an ES cluster: By default, the OpenShift Container Platform Elasticsearch cluster blocks rollouts to their nodes. Use the following command to allow rollouts and allow the pod to pick up the changes: For example: Example output A new pod is deployed. After the pod has a ready container, you can move on to the deployment. Example output NAME READY STATUS RESTARTS AGE elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6k 2/2 Running 0 22h elasticsearch-cdm-5ceex6ts-2-f799564cb-l9mj7 2/2 Running 0 22h elasticsearch-cdm-5ceex6ts-3-585968dc68-k7kjr 2/2 Running 0 22h After the deployments are complete, reset the pod to disallow rollouts: For example: Example output Check that the Elasticsearch cluster is in a green or yellow state: Note If you performed a rollout on the Elasticsearch pod you used in the commands, the pod no longer exists and you need a new pod name here. For example: 1 Make sure this parameter value is green or yellow before proceeding. If you changed the Elasticsearch configuration map, repeat these steps for each Elasticsearch pod. After all the deployments for the cluster have been rolled out, re-enable shard balancing: For example: Example output { "acknowledged" : true, "persistent" : { }, "transient" : { "cluster" : { "routing" : { "allocation" : { "enable" : "all" } } } } } Scale up the collector pods so they send new logs to Elasticsearch. USD oc -n openshift-logging patch daemonset/collector -p '{"spec":{"template":{"spec":{"nodeSelector":{"logging-infra-collector": "true"}}}}}' 10.4.10. Exposing the log store service as a route By default, the log store that is deployed with logging is not accessible from outside the logging cluster. You can enable a route with re-encryption termination for external access to the log store service for those tools that access its data. Externally, you can access the log store by creating a reencrypt route, your OpenShift Container Platform token and the installed log store CA certificate. Then, access a node that hosts the log store service with a cURL request that contains: The Authorization: Bearer USD{token} The Elasticsearch reencrypt route and an Elasticsearch API request . 
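After you create the route as described in the following procedure, typical read-only requests look like the sketch below; the token and routeES variables are set exactly as in that procedure, and the _cat endpoints shown are standard Elasticsearch 6 APIs:
# Check the overall cluster health through the exposed route
curl -tlsv1.2 --insecure -H "Authorization: Bearer ${token}" "https://${routeES}/_cat/health?v"
# List the indices, their health, and their sizes
curl -tlsv1.2 --insecure -H "Authorization: Bearer ${token}" "https://${routeES}/_cat/indices?v"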
Internally, you can access the log store service using the log store cluster IP, which you can get by using either of the following commands: USD oc get service elasticsearch -o jsonpath={.spec.clusterIP} -n openshift-logging Example output 172.30.183.229 USD oc get service elasticsearch -n openshift-logging Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE elasticsearch ClusterIP 172.30.183.229 <none> 9200/TCP 22h You can check the cluster IP address with a command similar to the following: USD oc exec elasticsearch-cdm-oplnhinv-1-5746475887-fj2f8 -n openshift-logging -- curl -tlsv1.2 --insecure -H "Authorization: Bearer USD{token}" "https://172.30.183.229:9200/_cat/health" Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 29 100 29 0 0 108 0 --:--:-- --:--:-- --:--:-- 108 Prerequisites The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. You must have access to the project to be able to access to the logs. Procedure To expose the log store externally: Change to the openshift-logging project: USD oc project openshift-logging Extract the CA certificate from the log store and write to the admin-ca file: USD oc extract secret/elasticsearch --to=. --keys=admin-ca Example output admin-ca Create the route for the log store service as a YAML file: Create a YAML file with the following: apiVersion: route.openshift.io/v1 kind: Route metadata: name: elasticsearch namespace: openshift-logging spec: host: to: kind: Service name: elasticsearch tls: termination: reencrypt destinationCACertificate: | 1 1 Add the log store CA certifcate or use the command in the step. You do not have to set the spec.tls.key , spec.tls.certificate , and spec.tls.caCertificate parameters required by some reencrypt routes. Run the following command to add the log store CA certificate to the route YAML you created in the step: USD cat ./admin-ca | sed -e "s/^/ /" >> <file-name>.yaml Create the route: USD oc create -f <file-name>.yaml Example output route.route.openshift.io/elasticsearch created Check that the Elasticsearch service is exposed: Get the token of this service account to be used in the request: USD token=USD(oc whoami -t) Set the elasticsearch route you created as an environment variable. USD routeES=`oc get route elasticsearch -o jsonpath={.spec.host}` To verify the route was successfully created, run the following command that accesses Elasticsearch through the exposed route: curl -tlsv1.2 --insecure -H "Authorization: Bearer USD{token}" "https://USD{routeES}" The response appears similar to the following: Example output { "name" : "elasticsearch-cdm-i40ktba0-1", "cluster_name" : "elasticsearch", "cluster_uuid" : "0eY-tJzcR3KOdpgeMJo-MQ", "version" : { "number" : "6.8.1", "build_flavor" : "oss", "build_type" : "zip", "build_hash" : "Unknown", "build_date" : "Unknown", "build_snapshot" : true, "lucene_version" : "7.7.0", "minimum_wire_compatibility_version" : "5.6.0", "minimum_index_compatibility_version" : "5.0.0" }, "<tagline>" : "<for search>" } 10.4.11. Removing unused components if you do not use the default Elasticsearch log store As an administrator, in the rare case that you forward logs to a third-party log store and do not use the default Elasticsearch log store, you can remove several unused components from your logging cluster. 
In other words, if you do not use the default Elasticsearch log store, you can remove the internal Elasticsearch logStore and Kibana visualization components from the ClusterLogging custom resource (CR). Removing these components is optional but saves resources. Prerequisites Verify that your log forwarder does not send log data to the default internal Elasticsearch cluster. Inspect the ClusterLogForwarder CR YAML file that you used to configure log forwarding. Verify that it does not have an outputRefs element that specifies default . For example: outputRefs: - default Warning If the ClusterLogForwarder CR forwards log data to the internal Elasticsearch cluster and you remove the logStore component from the ClusterLogging CR, the internal Elasticsearch cluster is no longer present to store the log data, which can cause data loss. Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: USD oc edit ClusterLogging instance If they are present, remove the logStore and visualization stanzas from the ClusterLogging CR. Preserve the collection stanza of the ClusterLogging CR. The result should look similar to the following example: apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" namespace: "openshift-logging" spec: managementState: "Managed" collection: type: "fluentd" fluentd: {} Verify that the collector pods are redeployed: USD oc get pods -l component=collector -n openshift-logging
"apiVersion: v1 kind: Secret metadata: name: logging-loki-s3 namespace: openshift-logging stringData: access_key_id: AKIAIOSFODNN7EXAMPLE access_key_secret: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY bucketnames: s3-bucket-name endpoint: https://s3.eu-central-1.amazonaws.com region: eu-central-1",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging spec: size: 1x.small 2 storage: schemas: - version: v12 effectiveDate: '2022-06-01' secret: name: logging-loki-s3 3 type: s3 4 storageClassName: <storage_class_name> 5 tenants: mode: openshift-logging 6",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat 1 spec: channel: stable 2 name: loki-operator source: redhat-operators 3 sourceNamespace: openshift-marketplace",
"oc apply -f <filename>.yaml",
"oc create secret generic -n openshift-logging <your_secret_name> --from-file=tls.key=<your_key_file> --from-file=tls.crt=<your_crt_file> --from-file=ca-bundle.crt=<your_bundle_file> --from-literal=username=<your_username> --from-literal=password=<your_password>",
"oc get secrets",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: size: 1x.small 1 storage: schemas: - version: v12 effectiveDate: \"2022-06-01\" secret: name: logging-loki-s3 2 type: s3 3 storageClassName: <storage_class_name> 4 tenants: mode: openshift-logging 5",
"oc apply -f <filename>.yaml",
"oc get pods -n openshift-logging",
"NAME READY STATUS RESTARTS AGE cluster-logging-operator-78fddc697-mnl82 1/1 Running 0 14m collector-6cglq 2/2 Running 0 45s collector-8r664 2/2 Running 0 45s collector-8z7px 2/2 Running 0 45s collector-pdxl9 2/2 Running 0 45s collector-tc9dx 2/2 Running 0 45s collector-xkd76 2/2 Running 0 45s logging-loki-compactor-0 1/1 Running 0 8m2s logging-loki-distributor-b85b7d9fd-25j9g 1/1 Running 0 8m2s logging-loki-distributor-b85b7d9fd-xwjs6 1/1 Running 0 8m2s logging-loki-gateway-7bb86fd855-hjhl4 2/2 Running 0 8m2s logging-loki-gateway-7bb86fd855-qjtlb 2/2 Running 0 8m2s logging-loki-index-gateway-0 1/1 Running 0 8m2s logging-loki-index-gateway-1 1/1 Running 0 7m29s logging-loki-ingester-0 1/1 Running 0 8m2s logging-loki-ingester-1 1/1 Running 0 6m46s logging-loki-querier-f5cf9cb87-9fdjd 1/1 Running 0 8m2s logging-loki-querier-f5cf9cb87-fp9v5 1/1 Running 0 8m2s logging-loki-query-frontend-58c579fcb7-lfvbc 1/1 Running 0 8m2s logging-loki-query-frontend-58c579fcb7-tjf9k 1/1 Running 0 8m2s logging-view-plugin-79448d8df6-ckgmx 1/1 Running 0 46s",
"oc create secret generic logging-loki-aws --from-literal=bucketnames=\"<bucket_name>\" --from-literal=endpoint=\"<aws_bucket_endpoint>\" --from-literal=access_key_id=\"<aws_access_key_id>\" --from-literal=access_key_secret=\"<aws_access_key_secret>\" --from-literal=region=\"<aws_region_of_your_bucket>\"",
"oc create secret generic logging-loki-azure --from-literal=container=\"<azure_container_name>\" --from-literal=environment=\"<azure_environment>\" \\ 1 --from-literal=account_name=\"<azure_account_name>\" --from-literal=account_key=\"<azure_account_key>\"",
"oc create secret generic logging-loki-gcs --from-literal=bucketname=\"<bucket_name>\" --from-file=key.json=\"<path/to/key.json>\"",
"oc create secret generic logging-loki-minio --from-literal=bucketnames=\"<bucket_name>\" --from-literal=endpoint=\"<minio_bucket_endpoint>\" --from-literal=access_key_id=\"<minio_access_key_id>\" --from-literal=access_key_secret=\"<minio_access_key_secret>\"",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: loki-bucket-odf namespace: openshift-logging spec: generateBucketName: loki-bucket-odf storageClassName: openshift-storage.noobaa.io",
"BUCKET_HOST=USD(oc get -n openshift-logging configmap loki-bucket-odf -o jsonpath='{.data.BUCKET_HOST}') BUCKET_NAME=USD(oc get -n openshift-logging configmap loki-bucket-odf -o jsonpath='{.data.BUCKET_NAME}') BUCKET_PORT=USD(oc get -n openshift-logging configmap loki-bucket-odf -o jsonpath='{.data.BUCKET_PORT}')",
"ACCESS_KEY_ID=USD(oc get -n openshift-logging secret loki-bucket-odf -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d) SECRET_ACCESS_KEY=USD(oc get -n openshift-logging secret loki-bucket-odf -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d)",
"oc create -n openshift-logging secret generic logging-loki-odf --from-literal=access_key_id=\"<access_key_id>\" --from-literal=access_key_secret=\"<secret_access_key>\" --from-literal=bucketnames=\"<bucket_name>\" --from-literal=endpoint=\"https://<bucket_host>:<bucket_port>\"",
"oc create secret generic logging-loki-swift --from-literal=auth_url=\"<swift_auth_url>\" --from-literal=username=\"<swift_usernameclaim>\" --from-literal=user_domain_name=\"<swift_user_domain_name>\" --from-literal=user_domain_id=\"<swift_user_domain_id>\" --from-literal=user_id=\"<swift_user_id>\" --from-literal=password=\"<swift_password>\" --from-literal=domain_id=\"<swift_domain_id>\" --from-literal=domain_name=\"<swift_domain_name>\" --from-literal=container_name=\"<swift_container_name>\"",
"oc create secret generic logging-loki-swift --from-literal=auth_url=\"<swift_auth_url>\" --from-literal=username=\"<swift_usernameclaim>\" --from-literal=user_domain_name=\"<swift_user_domain_name>\" --from-literal=user_domain_id=\"<swift_user_domain_id>\" --from-literal=user_id=\"<swift_user_id>\" --from-literal=password=\"<swift_password>\" --from-literal=domain_id=\"<swift_domain_id>\" --from-literal=domain_name=\"<swift_domain_name>\" --from-literal=container_name=\"<swift_container_name>\" --from-literal=project_id=\"<swift_project_id>\" --from-literal=project_name=\"<swift_project_name>\" --from-literal=project_domain_id=\"<swift_project_domain_id>\" --from-literal=project_domain_name=\"<swift_project_domain_name>\" --from-literal=region=\"<swift_region>\"",
"apiVersion: v1 kind: Namespace metadata: name: openshift-operators-redhat 1 annotations: openshift.io/node-selector: \"\" labels: openshift.io/cluster-monitoring: \"true\" 2",
"oc apply -f <filename>.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-operators-redhat namespace: openshift-operators-redhat 1 spec: {}",
"oc apply -f <filename>.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: elasticsearch-operator namespace: openshift-operators-redhat 1 spec: channel: stable-x.y 2 installPlanApproval: Automatic 3 source: redhat-operators 4 sourceNamespace: openshift-marketplace name: elasticsearch-operator",
"oc apply -f <filename>.yaml",
"oc get csv -n --all-namespaces",
"NAMESPACE NAME DISPLAY VERSION REPLACES PHASE default elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded kube-node-lease elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded kube-public elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded kube-system elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded non-destructive-test elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded openshift-apiserver-operator elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded openshift-apiserver elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: spec: logStore: type: <log_store_type> 1 elasticsearch: 2 nodeCount: <integer> resources: {} storage: {} redundancyPolicy: <redundancy_type> 3 lokistack: 4 name: {}",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: managementState: Managed logStore: type: lokistack lokistack: name: logging-loki",
"oc apply -f <filename>.yaml",
"oc adm groups new cluster-admin",
"oc adm groups add-users cluster-admin <username>",
"oc adm policy add-cluster-role-to-group cluster-admin cluster-admin",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: ingester: podAntiAffinity: # requiredDuringSchedulingIgnoredDuringExecution: 1 - labelSelector: matchLabels: 2 app.kubernetes.io/component: ingester topologyKey: kubernetes.io/hostname",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: replicationFactor: 2 1 replication: factor: 2 2 zones: - maxSkew: 1 3 topologyKey: topology.kubernetes.io/zone 4",
"get pods --field-selector status.phase==Pending -n openshift-logging",
"NAME READY STATUS RESTARTS AGE 1 logging-loki-index-gateway-1 0/1 Pending 0 17m logging-loki-ingester-1 0/1 Pending 0 16m logging-loki-ruler-1 0/1 Pending 0 16m",
"get pvc -o=json -n openshift-logging | jq '.items[] | select(.status.phase == \"Pending\") | .metadata.name' -r",
"storage-logging-loki-index-gateway-1 storage-logging-loki-ingester-1 wal-logging-loki-ingester-1 storage-logging-loki-ruler-1 wal-logging-loki-ruler-1",
"delete pvc __<pvc_name>__ -n openshift-logging",
"delete pod __<pod_name>__ -n openshift-logging",
"patch pvc __<pvc_name>__ -p '{\"metadata\":{\"finalizers\":null}}' -n openshift-logging",
"kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: logging-all-application-logs-reader roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-logging-application-view 1 subjects: 2 - kind: Group name: system:authenticated apiGroup: rbac.authorization.k8s.io",
"kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: allow-read-logs namespace: log-test-0 1 roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-logging-application-view subjects: - kind: User apiGroup: rbac.authorization.k8s.io name: testuser-0",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: tenants: mode: openshift-logging 1 openshift: adminGroups: 2 - cluster-admin - custom-admin-group 3",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: 1 retention: 2 days: 20 streams: - days: 4 priority: 1 selector: '{kubernetes_namespace_name=~\"test.+\"}' 3 - days: 1 priority: 1 selector: '{log_type=\"infrastructure\"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: \"2020-10-11\" version: v11 secret: name: logging-loki-s3 type: aws storageClassName: standard tenants: mode: openshift-logging",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: retention: days: 20 tenants: 1 application: retention: days: 1 streams: - days: 4 selector: '{kubernetes_namespace_name=~\"test.+\"}' 2 infrastructure: retention: days: 5 streams: - days: 1 selector: '{kubernetes_namespace_name=~\"openshift-cluster.+\"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: \"2020-10-11\" version: v11 secret: name: logging-loki-s3 type: aws storageClassName: standard tenants: mode: openshift-logging",
"oc apply -f <filename>.yaml",
"\"values\":[[\"1630410392689800468\",\"{\\\"kind\\\":\\\"Event\\\",\\\"apiVersion\\\": .... ... ... ... \\\"received_at\\\":\\\"2021-08-31T11:46:32.800278+00:00\\\",\\\"version\\\":\\\"1.7.4 1.6.0\\\"}},\\\"@timestamp\\\":\\\"2021-08-31T11:46:32.799692+00:00\\\",\\\"viaq_index_name\\\":\\\"audit-write\\\",\\\"viaq_msg_id\\\":\\\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\\\",\\\"log_type\\\":\\\"audit\\\"}\"]]}]}",
"429 Too Many Requests Ingestion rate limit exceeded",
"2023-08-25T16:08:49.301780Z WARN sink{component_kind=\"sink\" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true",
"2023-08-30 14:52:15 +0000 [warn]: [default_loki_infra] failed to flush the buffer. retry_times=2 next_retry_time=2023-08-30 14:52:19 +0000 chunk=\"604251225bf5378ed1567231a1c03b8b\" error_class=Fluent::Plugin::LokiOutput::LogPostError error=\"429 Too Many Requests Ingestion rate limit exceeded for user infrastructure (limit: 4194304 bytes/sec) while attempting to ingest '4082' lines totaling '7820025' bytes, reduce log volume or contact your Loki administrator to see if the limit can be increased\\n\"",
"level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err=\"rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: ingestion: ingestionBurstSize: 16 1 ingestionRate: 8 2",
"oc patch LokiStack logging-loki -n openshift-logging --type=merge -p '{\"spec\": {\"hashRing\":{\"memberlist\":{\"instanceAddrType\":\"podIP\",\"type\": \"memberlist\"}}}}'",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: hashRing: type: memberlist memberlist: instanceAddrType: podIP",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: spec: logStore: type: <log_store_type> 1 elasticsearch: 2 nodeCount: <integer> resources: {} storage: {} redundancyPolicy: <redundancy_type> 3 lokistack: 4 name: {}",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: managementState: Managed logStore: type: lokistack lokistack: name: logging-loki",
"oc apply -f <filename>.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: pipelines: 1 - name: all-to-default inputRefs: - infrastructure - application - audit outputRefs: - default",
"apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - name: elasticsearch-insecure type: \"elasticsearch\" url: http://elasticsearch-insecure.messaging.svc.cluster.local insecure: true - name: elasticsearch-secure type: \"elasticsearch\" url: https://elasticsearch-secure.messaging.svc.cluster.local secret: name: es-audit - name: secureforward-offcluster type: \"fluentdForward\" url: https://secureforward.offcluster.com:24224 secret: name: secureforward pipelines: - name: container-logs inputRefs: - application outputRefs: - secureforward-offcluster - name: infra-logs inputRefs: - infrastructure outputRefs: - elasticsearch-insecure - name: audit-logs inputRefs: - audit outputRefs: - elasticsearch-secure - default 1",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" retentionPolicy: 1 application: maxAge: 1d infra: maxAge: 7d audit: maxAge: 7d elasticsearch: nodeCount: 3",
"apiVersion: \"logging.openshift.io/v1\" kind: \"Elasticsearch\" metadata: name: \"elasticsearch\" spec: indexManagement: policies: 1 - name: infra-policy phases: delete: minAge: 7d 2 hot: actions: rollover: maxAge: 8h 3 pollInterval: 15m 4",
"oc get cronjob",
"NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE elasticsearch-im-app */15 * * * * False 0 <none> 4s elasticsearch-im-audit */15 * * * * False 0 <none> 4s elasticsearch-im-infra */15 * * * * False 0 <none> 4s",
"oc edit ClusterLogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" . spec: logStore: type: \"elasticsearch\" elasticsearch: 1 resources: limits: 2 memory: \"32Gi\" requests: 3 cpu: \"1\" memory: \"16Gi\" proxy: 4 resources: limits: memory: 100Mi requests: memory: 100Mi",
"resources: limits: 1 memory: \"32Gi\" requests: 2 cpu: \"8\" memory: \"32Gi\"",
"oc -n openshift-logging edit ClusterLogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" . spec: logStore: type: \"elasticsearch\" elasticsearch: redundancyPolicy: \"SingleRedundancy\" 1",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: storageClassName: \"gp2\" size: \"200G\"",
"spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: {}",
"oc project openshift-logging",
"oc get pods -l component=elasticsearch",
"oc -n openshift-logging patch daemonset/collector -p '{\"spec\":{\"template\":{\"spec\":{\"nodeSelector\":{\"logging-infra-collector\": \"false\"}}}}}'",
"oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=\"_flush/synced\" -XPOST",
"oc exec -c elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=\"_flush/synced\" -XPOST",
"{\"_shards\":{\"total\":4,\"successful\":4,\"failed\":0},\".security\":{\"total\":2,\"successful\":2,\"failed\":0},\".kibana_1\":{\"total\":2,\"successful\":2,\"failed\":0}}",
"oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"primaries\" } }'",
"oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"primaries\" } }'",
"{\"acknowledged\":true,\"persistent\":{\"cluster\":{\"routing\":{\"allocation\":{\"enable\":\"primaries\"}}}},\"transient\":",
"oc rollout resume deployment/<deployment-name>",
"oc rollout resume deployment/elasticsearch-cdm-0-1",
"deployment.extensions/elasticsearch-cdm-0-1 resumed",
"oc get pods -l component=elasticsearch-",
"NAME READY STATUS RESTARTS AGE elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6k 2/2 Running 0 22h elasticsearch-cdm-5ceex6ts-2-f799564cb-l9mj7 2/2 Running 0 22h elasticsearch-cdm-5ceex6ts-3-585968dc68-k7kjr 2/2 Running 0 22h",
"oc rollout pause deployment/<deployment-name>",
"oc rollout pause deployment/elasticsearch-cdm-0-1",
"deployment.extensions/elasticsearch-cdm-0-1 paused",
"oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=_cluster/health?pretty=true",
"oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=_cluster/health?pretty=true",
"{ \"cluster_name\" : \"elasticsearch\", \"status\" : \"yellow\", 1 \"timed_out\" : false, \"number_of_nodes\" : 3, \"number_of_data_nodes\" : 3, \"active_primary_shards\" : 8, \"active_shards\" : 16, \"relocating_shards\" : 0, \"initializing_shards\" : 0, \"unassigned_shards\" : 1, \"delayed_unassigned_shards\" : 0, \"number_of_pending_tasks\" : 0, \"number_of_in_flight_fetch\" : 0, \"task_max_waiting_in_queue_millis\" : 0, \"active_shards_percent_as_number\" : 100.0 }",
"oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"all\" } }'",
"oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"all\" } }'",
"{ \"acknowledged\" : true, \"persistent\" : { }, \"transient\" : { \"cluster\" : { \"routing\" : { \"allocation\" : { \"enable\" : \"all\" } } } } }",
"oc -n openshift-logging patch daemonset/collector -p '{\"spec\":{\"template\":{\"spec\":{\"nodeSelector\":{\"logging-infra-collector\": \"true\"}}}}}'",
"oc get service elasticsearch -o jsonpath={.spec.clusterIP} -n openshift-logging",
"172.30.183.229",
"oc get service elasticsearch -n openshift-logging",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE elasticsearch ClusterIP 172.30.183.229 <none> 9200/TCP 22h",
"oc exec elasticsearch-cdm-oplnhinv-1-5746475887-fj2f8 -n openshift-logging -- curl -tlsv1.2 --insecure -H \"Authorization: Bearer USD{token}\" \"https://172.30.183.229:9200/_cat/health\"",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 29 100 29 0 0 108 0 --:--:-- --:--:-- --:--:-- 108",
"oc project openshift-logging",
"oc extract secret/elasticsearch --to=. --keys=admin-ca",
"admin-ca",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: elasticsearch namespace: openshift-logging spec: host: to: kind: Service name: elasticsearch tls: termination: reencrypt destinationCACertificate: | 1",
"cat ./admin-ca | sed -e \"s/^/ /\" >> <file-name>.yaml",
"oc create -f <file-name>.yaml",
"route.route.openshift.io/elasticsearch created",
"token=USD(oc whoami -t)",
"routeES=`oc get route elasticsearch -o jsonpath={.spec.host}`",
"curl -tlsv1.2 --insecure -H \"Authorization: Bearer USD{token}\" \"https://USD{routeES}\"",
"{ \"name\" : \"elasticsearch-cdm-i40ktba0-1\", \"cluster_name\" : \"elasticsearch\", \"cluster_uuid\" : \"0eY-tJzcR3KOdpgeMJo-MQ\", \"version\" : { \"number\" : \"6.8.1\", \"build_flavor\" : \"oss\", \"build_type\" : \"zip\", \"build_hash\" : \"Unknown\", \"build_date\" : \"Unknown\", \"build_snapshot\" : true, \"lucene_version\" : \"7.7.0\", \"minimum_wire_compatibility_version\" : \"5.6.0\", \"minimum_index_compatibility_version\" : \"5.0.0\" }, \"<tagline>\" : \"<for search>\" }",
"outputRefs: - default",
"oc edit ClusterLogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: \"openshift-logging\" spec: managementState: \"Managed\" collection: type: \"fluentd\" fluentd: {}",
"oc get pods -l component=collector -n openshift-logging"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/logging/log-storage |
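The LokiStack ingestion-limit snippet in the list above corresponds to a resource of roughly the following shape. This is a minimal sketch that assumes the logging-loki instance in the openshift-logging namespace used throughout these examples; the values 16 and 8 are the illustrative numbers carried over from that snippet (its callouts 1 and 2), not tuning recommendations:

apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  limits:
    global:
      ingestion:
        ingestionBurstSize: 16  # callout 1 in the snippet above
        ingestionRate: 8        # callout 2 in the snippet above

Raising these two limits is the adjustment that addresses the 429 "Too Many Requests / Ingestion rate limit exceeded" messages shown earlier in the list; the change is applied with oc apply -f <filename>.yaml as in the other examples.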
Chapter 42. PodDisruptionBudgetTemplate schema reference | Chapter 42. PodDisruptionBudgetTemplate schema reference Used in: CruiseControlTemplate , KafkaBridgeTemplate , KafkaClusterTemplate , KafkaConnectTemplate , KafkaMirrorMakerTemplate , ZookeeperClusterTemplate Full list of PodDisruptionBudgetTemplate schema properties A PodDisruptionBudget (PDB) is an OpenShift resource that ensures high availability by specifying the minimum number of pods that must be available during planned maintenance or upgrades. Streams for Apache Kafka creates a PDB for every new StrimziPodSet or Deployment . By default, the PDB allows only one pod to be unavailable at any given time. You can increase the number of unavailable pods allowed by changing the default value of the maxUnavailable property. StrimziPodSet custom resources manage pods using a custom controller that cannot use the maxUnavailable value directly. Instead, the maxUnavailable value is automatically converted to a minAvailable value when creating the PDB resource, which effectively serves the same purpose, as illustrated in the following examples: If there are three broker pods and the maxUnavailable property is set to 1 in the Kafka resource, the minAvailable setting is 2 , allowing one pod to be unavailable. If there are three broker pods and the maxUnavailable property is set to 0 (zero), the minAvailable setting is 3 , requiring all three broker pods to be available and allowing zero pods to be unavailable. Example PodDisruptionBudget template configuration # ... template: podDisruptionBudget: metadata: labels: key1: label1 key2: label2 annotations: key1: label1 key2: label2 maxUnavailable: 1 # ... 42.1. PodDisruptionBudgetTemplate schema properties Property Property type Description metadata MetadataTemplate Metadata to apply to the PodDisruptionBudgetTemplate resource. maxUnavailable integer Maximum number of unavailable pods to allow automatic Pod eviction. A Pod eviction is allowed when the maxUnavailable number of pods or fewer are unavailable after the eviction. Setting this value to 0 prevents all voluntary evictions, so the pods must be evicted manually. Defaults to 1. | [
"template: podDisruptionBudget: metadata: labels: key1: label1 key2: label2 annotations: key1: label1 key2: label2 maxUnavailable: 1"
]
| https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-poddisruptionbudgettemplate-reference |
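To make the maxUnavailable-to-minAvailable conversion described above concrete, the PodDisruptionBudget created for a three-broker cluster with maxUnavailable: 1 ends up with minAvailable: 2. The following is a minimal sketch of such a resource; the resource name and selector label are illustrative assumptions rather than the exact values Streams for Apache Kafka generates, and the labels and annotations are the ones from the template example:

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-cluster-kafka  # assumed name, typically derived from the cluster name
  labels:
    key1: label1
    key2: label2
  annotations:
    key1: label1
    key2: label2
spec:
  minAvailable: 2  # 3 broker pods with maxUnavailable: 1 -> 2 pods must remain available
  selector:
    matchLabels:
      strimzi.io/name: my-cluster-kafka  # assumed selector; inspect the generated PDB on your cluster to confirm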
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate and prioritize your feedback regarding our documentation. Provide as much detail as possible, so that your request can be quickly addressed. Prerequisites You are logged in to the Red Hat Customer Portal. Procedure To provide feedback, perform the following steps: Click the following link: Create Issue . Describe the issue or enhancement in the Summary text box. Provide details about the issue or requested enhancement in the Description text box. Type your name in the Reporter text box. Click the Create button. This action creates a documentation ticket and routes it to the appropriate documentation team. Thank you for taking the time to provide feedback. | null | https://docs.redhat.com/en/documentation/cost_management_service/1-latest/html/using_cost_models/proc-providing-feedback-on-redhat-documentation |
Chapter 5. Management of Ceph daemons | Chapter 5. Management of Ceph daemons As a storage administrator, you can manage Ceph daemons on the Red Hat Ceph Storage dashboard. 5.1. Daemon actions The Red Hat Ceph Storage dashboard allows you to start, stop, restart, and redeploy daemons. Note These actions are supported on all daemons except monitor and manager daemons. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. At least one daemon is configured in the storage cluster. Procedure You can manage daemons two ways. From the Services page: Log in to the dashboard. From the Cluster drop-down menu, select Services . View the details of the service with the daemon to perform the action on by clicking the Expand/Collapse icon on its row. In Details , select the drop down to the desired daemon to perform Start , Stop , Restart , or Redeploy . Figure 5.1. Managing daemons From the Hosts page: Log in to the dashboard. From the Cluster drop-down menu, select Hosts . From the Hosts List , select the host with the daemon to perform the action on. In the Daemon tab of the host, click the daemon. Use the drop down at the top to perform Start , Stop , Restart , or Redeploy . Figure 5.2. Managing daemons | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/dashboard_guide/management-of-ceph-daemons |
Chapter 1. Migration toolkit for containers overview | Chapter 1. Migration toolkit for containers overview You can migrate stateful application workloads between OpenShift Container Platform 4 clusters at the granularity of a namespace by using the Migration Toolkit for Containers (MTC). To learn more about MTC see understanding MTC . Note If you are migrating from OpenShift Container Platform 3, see about migrating from OpenShift Container Platform 3 to 4 and installing the legacy Migration Toolkit for Containers Operator on OpenShift Container Platform 3 . 1.1. Installing MTC You must install the Migration Toolkit for Containers Operator that is compatible for your OpenShift Container Platform version: OpenShift Container Platform 4.6 and later versions: Install the Migration Toolkit for Containers Operator by using Operator Lifecycle Manager (OLM) . OpenShift Container Platform 4.5 and earlier versions: Manually install the legacy Migration Toolkit for Containers Operator . Then you configure object storage to use as a replication repository . 1.2. Upgrading MTC You can upgrade the MTC by using OLM. 1.3. Reviewing MTC checklists Before you migrate your application workloads with the Migration Toolkit for Containers (MTC), review the premigration checklists . 1.4. Migrating applications You can migrate your applications by using the MTC web console or the command line . 1.5. Advanced migration options You can automate your migrations and modify the MigPlan and MigrationController custom resources in order to perform large-scale migrations and to improve performance. You can check the following items: Create a registry route for direct image migration Configuring proxies Migrating an application by using the MTC API Running a state migration Creating migration hooks Editing, excluding, and mapping migrated resources Configuring the migration controller for large migrations 1.6. Troubleshooting migrations You can perform the following troubleshooting tasks: Viewing plan resources Viewing the migration plan aggregated log file Using the migration log reader Accessing performance metrics Using the must-gather tool Using the Velero CLI to debug Backup and Restore CRs Debugging a partial migration failure Using MTC custom resources for troubleshooting Checking common issues and concerns 1.7. Rolling back a migration You can roll back a migration by using the MTC web console, the CLI or manually. 1.8. Uninstalling MTC and deleting resources You can uninstall the MTC and delete its resources to clean up the cluster. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/migration_toolkit_for_containers/migration-toolkit-for-containers-overview |
Chapter 3. Bug fixes | Chapter 3. Bug fixes 3.1. Allow Ports in Git Provider Endpoint for Personal Access Tokens With this release, you can provide ports in the URL for Git Provider Endpoint when adding Personal Access Tokens on the User Dashboard. Previously, it was not possible due to strict validation. Additional resources CRW-7125 3.2. If persistHome is enabled, the token in .kube/config isn't renewed Before this release, when the spec.devEnvironments.persistUserHome option was enabled, the token in .kube/config was not renewed automatically during a workspace restart. You can find more details about automatic token injection in the official documentation . Additional resources CRW-7126 3.3. Keep projects when restarting a workspace from local devfile Previously, PROJECTS_ROOT and PROJECT_SOURCE environment variables were not correctly set after using the Restart Workspace from Local Devfile functionality. The defect has been fixed in this release. Additional resources CRW-7127 3.4. Inconsistency in the behaviour of the USDPATH environment variable within Devfile Previously, when commands were executed using the command definition in the devfile, they had a different USDPATH compared to commands launched in containers defined within the components section. The defect has been fixed in this release. Additional resources CRW-7130 3.5. User-provided environment variables can't reference USDPROJECT_ROOT or USDPROJECT_SOURCE Previously, users were not able to reference the USDPROJECT_ROOT or USDPROJECT_SOURCE environment variables in their devfile environment variables . This issue has now been fixed in this release. Additional resources CRW-7131 3.6. Workspace status flickering during startup Previously, during a workspace startup, the status could have been unexpectedly changed to 'Stopped' even though the workspace started successfully. The defect has been fixed in this release, and the status changes are ignored during workspace startup. Additional resources CRW-7132 3.7. Starting a new workspace with a clone of the specified branch doesn't work correctly if the repository has no`devfile.yaml` Previously, starting a new workspace with a clone of a specified branch didn't work correctly if the repository didn't have devfile.yaml . Instead, the default branch was always cloned after the cloud development environment (CDE) startup. The defect has been fixed in this release. Additional resources CRW-7133 3.8. Branch detection for Microsoft Azure does not work on the User Dashboard Before this release, branch detection for Microsoft Azure repositories was not working on the User Dashboard. The defect has been fixed in this release. Additional resources CRW-7134 3.9. Workspace start page goes to cyclic reload if refresh token mode is applied Previously, using the experimental CHE_FORCE_REFRESH_PERSONAL_ACCESS_TOKEN feature could result in the cyclic reload sequence during cloud development environment (CDE) startup. The defect has been fixed in this release. Learn more about the CHE_FORCE_REFRESH_PERSONAL_ACCESS_TOKEN feature in the official documentation . Additional resources CRW-7137 3.10. SSH key added by pasting the key strings in the dashboard is invalid Before this release, there was an issue with adding an SSH key by manually pasting the key strings in the dashboard. After saving the SSH key and starting the workspace, the project would not be cloned with the following error message: "Could not read from remote repository. 
Please make sure you have the correct access rights and the repository exists." With this release, the issue has been fixed. Additional resources CRW-7153 3.11. Extension 'ms-python.python' CANNOT use API proposal: terminalShellIntegration Before this release, installing the latest Python extension (v2024.14.0) would fail with the following error message: "Extension 'ms-python.python' CANNOT use API proposal: terminalShellIntegration". With this release, the issue has been fixed. Additional resources CRW-7201 3.12. Opening links not possible in the Visual Studio Code - Open Source ("Code - OSS") Before this release, it was not possible to open links in Visual Studio Code - Open Source ("Code - OSS"). With this release, the issue has been fixed. Additional resources CRW-7247 | null | https://docs.redhat.com/en/documentation/red_hat_openshift_dev_spaces/3.16/html/3.16.0_release_notes_and_known_issues/bug-fixes |
Appendix C. Using AMQ Broker with the examples | Appendix C. Using AMQ Broker with the examples The AMQ Core Protocol JMS examples require a running message broker with a queue named exampleQueue . Use the procedures below to install and start the broker and define the queue. C.1. Installing the broker Follow the instructions in Getting Started with AMQ Broker to install the broker and create a broker instance . Enable anonymous access. The following procedures refer to the location of the broker instance as <broker-instance-dir> . C.2. Starting the broker Procedure Use the artemis run command to start the broker. USD <broker-instance-dir> /bin/artemis run Check the console output for any critical errors logged during startup. The broker logs Server is now live when it is ready. USD example-broker/bin/artemis run __ __ ____ ____ _ /\ | \/ |/ __ \ | _ \ | | / \ | \ / | | | | | |_) |_ __ ___ | | _____ _ __ / /\ \ | |\/| | | | | | _ <| '__/ _ \| |/ / _ \ '__| / ____ \| | | | |__| | | |_) | | | (_) | < __/ | /_/ \_\_| |_|\___\_\ |____/|_| \___/|_|\_\___|_| Red Hat AMQ <version> 2020-06-03 12:12:11,807 INFO [org.apache.activemq.artemis.integration.bootstrap] AMQ101000: Starting ActiveMQ Artemis Server ... 2020-06-03 12:12:12,336 INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live ... C.3. Creating a queue In a new terminal, use the artemis queue command to create a queue named exampleQueue . USD <broker-instance-dir> /bin/artemis queue create --name exampleQueue --address exampleQueue --auto-create-address --anycast You are prompted to answer a series of yes or no questions. Answer N for no to all of them. Once the queue is created, the broker is ready for use with the example programs. C.4. Stopping the broker When you are done running the examples, use the artemis stop command to stop the broker. USD <broker-instance-dir> /bin/artemis stop Revised on 2021-08-24 14:25:38 UTC | [
"<broker-instance-dir> /bin/artemis run",
"example-broker/bin/artemis run __ __ ____ ____ _ /\\ | \\/ |/ __ \\ | _ \\ | | / \\ | \\ / | | | | | |_) |_ __ ___ | | _____ _ __ / /\\ \\ | |\\/| | | | | | _ <| '__/ _ \\| |/ / _ \\ '__| / ____ \\| | | | |__| | | |_) | | | (_) | < __/ | /_/ \\_\\_| |_|\\___\\_\\ |____/|_| \\___/|_|\\_\\___|_| Red Hat AMQ <version> 2020-06-03 12:12:11,807 INFO [org.apache.activemq.artemis.integration.bootstrap] AMQ101000: Starting ActiveMQ Artemis Server 2020-06-03 12:12:12,336 INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live",
"<broker-instance-dir> /bin/artemis queue create --name exampleQueue --address exampleQueue --auto-create-address --anycast",
"<broker-instance-dir> /bin/artemis stop"
]
| https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_the_amq_core_protocol_jms_client/using_the_broker_with_the_examples |
Chapter 121. KafkaMirrorMakerProducerSpec schema reference | Chapter 121. KafkaMirrorMakerProducerSpec schema reference Used in: KafkaMirrorMakerSpec Full list of KafkaMirrorMakerProducerSpec schema properties Configures a MirrorMaker producer. 121.1. abortOnSendFailure Use the producer.abortOnSendFailure property to configure how to handle message send failure from the producer. By default, if an error occurs when sending a message from Kafka MirrorMaker to a Kafka cluster: The Kafka MirrorMaker container is terminated in OpenShift. The container is then recreated. If the abortOnSendFailure option is set to false , message sending errors are ignored. 121.2. config Use the producer.config properties to configure Kafka options for the producer as keys. The values can be one of the following JSON types: String Number Boolean Exceptions You can specify and configure the options listed in the Apache Kafka configuration documentation for producers . However, Streams for Apache Kafka takes care of configuring and managing options related to the following, which cannot be changed: Kafka cluster bootstrap address Security (encryption, authentication, and authorization) Interceptors Properties with the following prefixes cannot be set: bootstrap.servers interceptor.classes sasl. security. ssl. If the config property contains an option that cannot be changed, it is disregarded, and a warning message is logged to the Cluster Operator log file. All other supported options are forwarded to MirrorMaker, including the following exceptions to the options configured by Streams for Apache Kafka: Any ssl configuration for supported TLS versions and cipher suites Important The Cluster Operator does not validate keys or values in the config object provided. If an invalid configuration is provided, the MirrorMaker cluster might not start or might become unstable. In this case, fix the configuration so that the Cluster Operator can roll out the new configuration to all MirrorMaker nodes. 121.3. KafkaMirrorMakerProducerSpec schema properties Property Property type Description bootstrapServers string A list of host:port pairs for establishing the initial connection to the Kafka cluster. abortOnSendFailure boolean Flag to set the MirrorMaker to exit on a failed send. Default value is true . authentication KafkaClientAuthenticationTls , KafkaClientAuthenticationScramSha256 , KafkaClientAuthenticationScramSha512 , KafkaClientAuthenticationPlain , KafkaClientAuthenticationOAuth Authentication configuration for connecting to the cluster. config map The MirrorMaker producer config. Properties with the following prefixes cannot be set: ssl., bootstrap.servers, sasl., security., interceptor.classes (with the exception of: ssl.endpoint.identification.algorithm, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols). tls ClientTls TLS configuration for connecting MirrorMaker to the cluster. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-KafkaMirrorMakerProducerSpec-reference |
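As a minimal sketch of how these producer properties fit together in a KafkaMirrorMaker resource: the resource name, bootstrap address, and config values below are illustrative assumptions, the consumer section and other required fields are omitted, and the apiVersion should be checked against the CRDs installed with your Streams for Apache Kafka release:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker
metadata:
  name: my-mirror-maker  # assumed name
spec:
  # ...replicas, consumer, include and the other required fields are omitted from this sketch...
  producer:
    bootstrapServers: target-cluster-kafka-bootstrap:9092  # assumed host:port pair
    abortOnSendFailure: false  # ignore send failures instead of exiting
    config:
      acks: all          # ordinary producer options are forwarded
      batch.size: 16384  # ordinary producer options are forwarded
      # keys starting with ssl., sasl., security., interceptor.classes or bootstrap.servers are disregarded

Options that match the restricted prefixes are disregarded and a warning is logged by the Cluster Operator, as described above.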
3.3.4. Converting a remote Xen virtual machine | 3.3.4. Converting a remote Xen virtual machine Xen virtual machines can be converted remotely using SSH. Ensure that the host running the virtual machine is accessible via SSH. To convert the virtual machine, run: Where vmhost.example.com is the host running the virtual machine, pool is the local storage pool to hold the image, bridge_name is the name of a local network bridge to connect the converted virtual machine's network to, and guest_name is the name of the Xen virtual machine. You may also use the --network parameter to connect to a locally managed network if your virtual machine only has a single network interface. If your virtual machine has multiple network interfaces, edit /etc/virt-v2v.conf to specify the network mapping for all interfaces. If your virtual machine uses a Xen paravirtualized kernel (it would be called something like kernel-xen or kernel-xenU ) virt-v2v will attempt to install a new kernel during the conversion process. You can avoid this requirement by installing a regular kernel, which will not reference a hypervisor in its name, alongside the Xen kernel prior to conversion. You should not make this newly installed kernel your default kernel, because Xen will not boot it. virt-v2v will make it the default during conversion. Note When converting from Xen, virt-v2v requires that the image of the source virtual machine exists in a storage pool. If the image is not currently in a storage pool, you must create one. Contact Red Hat Support for assistance creating an appropriate storage pool. | [
"virt-v2v -ic qemu+ssh://[email protected]/system -op pool --bridge bridge_name guest_name"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/v2v_guide/subsect-convert-a-remote-xen-virtual-machine |
Scalability and performance | Scalability and performance OpenShift Container Platform 4.9 Scaling your OpenShift Container Platform cluster and tuning performance in production environments Red Hat OpenShift Documentation Team | [
"kubeletConfig: podsPerCore: 10",
"kubeletConfig: maxPods: 250",
"oc get kubeletconfig",
"NAME AGE set-max-pods 15m",
"oc get mc | grep kubelet",
"99-worker-generated-kubelet-1 b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 26m",
"oc describe machineconfigpool <name>",
"oc describe machineconfigpool worker",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: 2019-02-08T14:52:39Z generation: 1 labels: custom-kubelet: set-max-pods 1",
"oc label machineconfigpool worker custom-kubelet=set-max-pods",
"oc get machineconfig",
"oc describe node <node_name>",
"oc describe node ci-ln-5grqprb-f76d1-ncnqq-worker-a-mdv94",
"Allocatable: attachable-volumes-aws-ebs: 25 cpu: 3500m hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 15341844Ki pods: 250",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-max-pods 1 kubeletConfig: maxPods: 500 2",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-max-pods kubeletConfig: maxPods: <pod_count> kubeAPIBurst: <burst_rate> kubeAPIQPS: <QPS>",
"oc label machineconfigpool worker custom-kubelet=large-pods",
"oc create -f change-maxPods-cr.yaml",
"oc get kubeletconfig",
"NAME AGE set-max-pods 15m",
"oc describe node <node_name>",
"Allocatable: attachable-volumes-gce-pd: 127 cpu: 3500m ephemeral-storage: 123201474766 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 14225400Ki pods: 500 1",
"oc get kubeletconfigs set-max-pods -o yaml",
"spec: kubeletConfig: maxPods: 500 machineConfigPoolSelector: matchLabels: custom-kubelet: set-max-pods status: conditions: - lastTransitionTime: \"2021-06-30T17:04:07Z\" message: Success status: \"True\" type: Success",
"oc edit machineconfigpool worker",
"spec: maxUnavailable: <node_count>",
"sudo podman run --volume /var/lib/etcd:/var/lib/etcd:Z quay.io/openshift-scale/etcd-perf",
"sudo docker run --volume /var/lib/etcd:/var/lib/etcd:Z quay.io/openshift-scale/etcd-perf",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 98-var-lib-etcd spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Make File System on /dev/sdb DefaultDependencies=no BindsTo=dev-sdb.device After=dev-sdb.device var.mount [email protected] [Service] Type=oneshot RemainAfterExit=yes ExecStart=/usr/lib/systemd/systemd-makefs xfs /dev/sdb TimeoutSec=0 [Install] WantedBy=var-lib-containers.mount enabled: true name: [email protected] - contents: | [Unit] Description=Mount /dev/sdb to /var/lib/etcd Before=local-fs.target [email protected] [email protected] var.mount [Mount] What=/dev/sdb Where=/var/lib/etcd Type=xfs Options=defaults,prjquota [Install] WantedBy=local-fs.target enabled: true name: var-lib-etcd.mount - contents: | [Unit] Description=Sync etcd data if new mount is empty DefaultDependencies=no After=var-lib-etcd.mount var.mount Before=crio.service [Service] Type=oneshot RemainAfterExit=yes ExecCondition=/usr/bin/test ! -d /var/lib/etcd/member ExecStart=/usr/sbin/setenforce 0 ExecStart=/bin/rsync -ar /sysroot/ostree/deploy/rhcos/var/lib/etcd/ /var/lib/etcd/ ExecStart=/usr/sbin/setenforce 1 TimeoutSec=0 [Install] WantedBy=multi-user.target graphical.target enabled: true name: sync-var-lib-etcd-to-etcd.service - contents: | [Unit] Description=Restore recursive SELinux security contexts DefaultDependencies=no After=var-lib-etcd.mount Before=crio.service [Service] Type=oneshot RemainAfterExit=yes ExecStart=/sbin/restorecon -R /var/lib/etcd/ TimeoutSec=0 [Install] WantedBy=multi-user.target graphical.target enabled: true name: restorecon-var-lib-etcd.service",
"oc login -u USD{ADMIN} -p USD{ADMINPASSWORD} USD{API} ... output omitted",
"oc create -f etcd-mc.yml machineconfig.machineconfiguration.openshift.io/98-var-lib-etcd created",
"oc login -u USD{ADMIN} -p USD{ADMINPASSWORD} USD{API} [... output omitted ...]",
"oc create -f etcd-mc.yml machineconfig.machineconfiguration.openshift.io/98-var-lib-etcd created",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 98-var-lib-etcd spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Mount /dev/sdb to /var/lib/etcd Before=local-fs.target [email protected] [email protected] var.mount [Mount] What=/dev/sdb Where=/var/lib/etcd Type=xfs Options=defaults,prjquota [Install] WantedBy=local-fs.target enabled: true name: var-lib-etcd.mount",
"oc replace -f etcd-mc.yml",
"I0907 08:43:12.171919 1 defragcontroller.go:198] etcd member \"ip- 10-0-191-150.example.redhat.com\" backend store fragmented: 39.33 %, dbSize: 349138944",
"oc -n openshift-etcd get pods -l k8s-app=etcd -o wide",
"etcd-ip-10-0-159-225.example.redhat.com 3/3 Running 0 175m 10.0.159.225 ip-10-0-159-225.example.redhat.com <none> <none> etcd-ip-10-0-191-37.example.redhat.com 3/3 Running 0 173m 10.0.191.37 ip-10-0-191-37.example.redhat.com <none> <none> etcd-ip-10-0-199-170.example.redhat.com 3/3 Running 0 176m 10.0.199.170 ip-10-0-199-170.example.redhat.com <none> <none>",
"oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com etcdctl endpoint status --cluster -w table",
"Defaulting container name to etcdctl. Use 'oc describe pod/etcd-ip-10-0-159-225.example.redhat.com -n openshift-etcd' to see all of the containers in this pod. +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | https://10.0.191.37:2379 | 251cd44483d811c3 | 3.4.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.159.225:2379 | 264c7c58ecbdabee | 3.4.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.199.170:2379 | 9ac311f93915cc79 | 3.4.9 | 104 MB | true | false | 7 | 91624 | 91624 | | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+",
"oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com",
"sh-4.4# unset ETCDCTL_ENDPOINTS",
"sh-4.4# etcdctl --command-timeout=30s --endpoints=https://localhost:2379 defrag",
"Finished defragmenting etcd member[https://localhost:2379]",
"sh-4.4# etcdctl endpoint status -w table --cluster",
"+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | https://10.0.191.37:2379 | 251cd44483d811c3 | 3.4.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.159.225:2379 | 264c7c58ecbdabee | 3.4.9 | 41 MB | false | false | 7 | 91624 | 91624 | | 1 | https://10.0.199.170:2379 | 9ac311f93915cc79 | 3.4.9 | 104 MB | true | false | 7 | 91624 | 91624 | | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+",
"sh-4.4# etcdctl alarm list",
"memberID:12345678912345678912 alarm:NOSPACE",
"sh-4.4# etcdctl alarm disarm",
"oc edit configmap cluster-monitoring-config -n openshift-monitoring",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: |+ alertmanagerMain: nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusK8s: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusOperator: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute grafana: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute k8sPrometheusAdapter: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute kubeStateMetrics: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute telemeterClient: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute openshiftStateMetrics: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute thanosQuerier: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute",
"watch 'oc get pod -n openshift-monitoring -o wide'",
"oc delete pod -n openshift-monitoring <pod>",
"oc get configs.imageregistry.operator.openshift.io/cluster -o yaml",
"apiVersion: imageregistry.operator.openshift.io/v1 kind: Config metadata: creationTimestamp: 2019-02-05T13:52:05Z finalizers: - imageregistry.operator.openshift.io/finalizer generation: 1 name: cluster resourceVersion: \"56174\" selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster uid: 36fd3724-294d-11e9-a524-12ffeee2931b spec: httpSecret: d9a012ccd117b1e6616ceccb2c3bb66a5fed1b5e481623 logging: 2 managementState: Managed proxy: {} replicas: 1 requests: read: {} write: {} storage: s3: bucket: image-registry-us-east-1-c92e88cad85b48ec8b312344dff03c82-392c region: us-east-1 status:",
"oc edit configs.imageregistry.operator.openshift.io/cluster",
"spec: affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: namespaces: - openshift-image-registry topologyKey: kubernetes.io/hostname weight: 100 logLevel: Normal managementState: Managed nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"oc get pods -o wide -n openshift-image-registry",
"oc describe node <node_name>",
"oc get ingresscontroller default -n openshift-ingress-operator -o yaml",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: 2019-04-18T12:35:39Z finalizers: - ingresscontroller.operator.openshift.io/finalizer-ingresscontroller generation: 1 name: default namespace: openshift-ingress-operator resourceVersion: \"11341\" selfLink: /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default uid: 79509e05-61d6-11e9-bc55-02ce4781844a spec: {} status: availableReplicas: 2 conditions: - lastTransitionTime: 2019-04-18T12:36:15Z status: \"True\" type: Available domain: apps.<cluster>.example.com endpointPublishingStrategy: type: LoadBalancerService selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default",
"oc edit ingresscontroller default -n openshift-ingress-operator",
"spec: nodePlacement: nodeSelector: 1 matchLabels: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"oc get pod -n openshift-ingress -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES router-default-86798b4b5d-bdlvd 1/1 Running 0 28s 10.130.2.4 ip-10-0-217-226.ec2.internal <none> <none> router-default-955d875f4-255g8 0/1 Terminating 0 19h 10.129.2.4 ip-10-0-148-172.ec2.internal <none> <none>",
"oc get node <node_name> 1",
"NAME STATUS ROLES AGE VERSION ip-10-0-217-226.ec2.internal Ready infra,worker 17h v1.22.1",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 50-enable-rfs spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:text/plain;charset=US-ASCII,%23%20turn%20on%20Receive%20Flow%20Steering%20%28RFS%29%20for%20all%20network%20interfaces%0ASUBSYSTEM%3D%3D%22net%22%2C%20ACTION%3D%3D%22add%22%2C%20RUN%7Bprogram%7D%2B%3D%22/bin/bash%20-c%20%27for%20x%20in%20/sys/%24DEVPATH/queues/rx-%2A%3B%20do%20echo%208192%20%3E%20%24x/rps_flow_cnt%3B%20%20done%27%22%0A filesystem: root mode: 0644 path: /etc/udev/rules.d/70-persistent-net.rules - contents: source: data:text/plain;charset=US-ASCII,%23%20define%20sock%20flow%20enbtried%20for%20%20Receive%20Flow%20Steering%20%28RFS%29%0Anet.core.rps_sock_flow_entries%3D8192%0A filesystem: root mode: 0644 path: /etc/sysctl.d/95-enable-rps.conf",
"oc create -f enable-rfs.yaml",
"oc get mc",
"oc delete mc 50-enable-rfs",
"cat 05-master-kernelarg-hpav.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 05-master-kernelarg-hpav spec: config: ignition: version: 3.1.0 kernelArguments: - rd.dasd=800-805",
"cat 05-worker-kernelarg-hpav.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 05-worker-kernelarg-hpav spec: config: ignition: version: 3.1.0 kernelArguments: - rd.dasd=800-805",
"oc create -f 05-master-kernelarg-hpav.yaml",
"oc create -f 05-worker-kernelarg-hpav.yaml",
"oc delete -f 05-master-kernelarg-hpav.yaml",
"oc delete -f 05-worker-kernelarg-hpav.yaml",
"<interface type=\"direct\"> <source network=\"net01\"/> <model type=\"virtio\"/> <driver ... queues=\"2\"/> </interface>",
"<domain> <iothreads>3</iothreads> 1 <devices> <disk type=\"block\" device=\"disk\"> 2 <driver ... iothread=\"2\"/> </disk> </devices> </domain>",
"<disk type=\"block\" device=\"disk\"> <driver name=\"qemu\" type=\"raw\" cache=\"none\" io=\"native\" iothread=\"1\"/> </disk>",
"<memballoon model=\"none\"/>",
"sysctl kernel.sched_migration_cost_ns=60000",
"kernel.sched_migration_cost_ns=60000",
"cgroup_controllers = [ \"cpu\", \"devices\", \"memory\", \"blkio\", \"cpuacct\" ]",
"systemctl restart libvirtd",
"echo 0 > /sys/module/kvm/parameters/halt_poll_ns",
"echo 80000 > /sys/module/kvm/parameters/halt_poll_ns",
"oc edit machineset <machineset> -n openshift-machine-api",
"oc scale --replicas=0 machineset <machineset> -n openshift-machine-api",
"oc edit machineset <machineset> -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 0",
"oc scale --replicas=2 machineset <machineset> -n openshift-machine-api",
"oc edit machineset <machineset> -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 2",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example 1 namespace: openshift-machine-api spec: selector: matchLabels: machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone> 4 unhealthyConditions: - type: \"Ready\" timeout: \"300s\" 5 status: \"False\" - type: \"Ready\" timeout: \"300s\" 6 status: \"Unknown\" maxUnhealthy: \"40%\" 7 nodeStartupTimeout: \"10m\" 8",
"oc apply -f healthcheck.yml",
"oc get Tuned/default -o yaml -n openshift-cluster-node-tuning-operator",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: default namespace: openshift-cluster-node-tuning-operator spec: recommend: - profile: \"openshift-control-plane\" priority: 30 match: - label: \"node-role.kubernetes.io/master\" - label: \"node-role.kubernetes.io/infra\" - profile: \"openshift-node\" priority: 40",
"oc exec USDtuned_pod -n openshift-cluster-node-tuning-operator -- find /usr/lib/tuned/openshift{,-control-plane,-node} -name tuned.conf -exec grep -H ^ {} \\;",
"oc get profile -n openshift-cluster-node-tuning-operator",
"NAME TUNED APPLIED DEGRADED AGE master-0 openshift-control-plane True False 6h33m master-1 openshift-control-plane True False 6h33m master-2 openshift-control-plane True False 6h33m worker-a openshift-node True False 6h28m worker-b openshift-node True False 6h28m",
"profile: - name: tuned_profile_1 data: | # TuneD profile specification [main] summary=Description of tuned_profile_1 profile [sysctl] net.ipv4.ip_forward=1 # ... other sysctl's or other TuneD daemon plugins supported by the containerized TuneD - name: tuned_profile_n data: | # TuneD profile specification [main] summary=Description of tuned_profile_n profile # tuned_profile_n profile settings",
"recommend: <recommend-item-1> <recommend-item-n>",
"- machineConfigLabels: 1 <mcLabels> 2 match: 3 <match> 4 priority: <priority> 5 profile: <tuned_profile_name> 6 operand: 7 debug: <bool> 8",
"- label: <label_name> 1 value: <label_value> 2 type: <label_type> 3 <match> 4",
"- match: - label: tuned.openshift.io/elasticsearch match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra type: pod priority: 10 profile: openshift-control-plane-es - match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra priority: 20 profile: openshift-control-plane - priority: 30 profile: openshift-node",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: openshift-node-custom namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift node profile with an additional kernel parameter include=openshift-node [bootloader] cmdline_openshift_node_custom=+skew_tick=1 name: openshift-node-custom recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: \"worker-custom\" priority: 20 profile: openshift-node-custom",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: ingress namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=A custom OpenShift ingress profile include=openshift-control-plane [sysctl] net.ipv4.ip_local_port_range=\"1024 65535\" net.ipv4.tcp_tw_reuse=1 name: openshift-ingress recommend: - match: - label: tuned.openshift.io/ingress-node-label priority: 10 profile: openshift-ingress",
"oc exec USDtuned_pod -n openshift-cluster-node-tuning-operator -- find /usr/lib/tuned/ -name tuned.conf -printf '%h\\n' | sed 's|^.*/||'",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: openshift-node-hpc-compute namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift node profile for HPC compute workloads include=openshift-node,hpc-compute name: openshift-node-hpc-compute recommend: - match: - label: tuned.openshift.io/openshift-node-hpc-compute priority: 20 profile: openshift-node-hpc-compute",
"podman pull quay.io/openshift/origin-tests:4.9",
"podman run -v USD{LOCAL_KUBECONFIG}:/root/.kube/config:z -i quay.io/openshift/origin-tests:4.9 /bin/bash -c 'export KUBECONFIG=/root/.kube/config && openshift-tests run-test \"[sig-scalability][Feature:Performance] Load cluster should populate the cluster [Slow][Serial] [Suite:openshift]\"'",
"podman run -v USD{LOCAL_KUBECONFIG}:/root/.kube/config:z -v USD{LOCAL_CONFIG_FILE_PATH}:/root/configs/:z -i quay.io/openshift/origin-tests:4.9 /bin/bash -c 'KUBECONFIG=/root/.kube/config VIPERCONFIG=/root/configs/test.yaml openshift-tests run-test \"[sig-scalability][Feature:Performance] Load cluster should populate the cluster [Slow][Serial] [Suite:openshift]\"'",
"provider: local 1 ClusterLoader: cleanup: true projects: - num: 1 basename: clusterloader-cakephp-mysql tuning: default ifexists: reuse templates: - num: 1 file: cakephp-mysql.json - num: 1 basename: clusterloader-dancer-mysql tuning: default ifexists: reuse templates: - num: 1 file: dancer-mysql.json - num: 1 basename: clusterloader-django-postgresql tuning: default ifexists: reuse templates: - num: 1 file: django-postgresql.json - num: 1 basename: clusterloader-nodejs-mongodb tuning: default ifexists: reuse templates: - num: 1 file: quickstarts/nodejs-mongodb.json - num: 1 basename: clusterloader-rails-postgresql tuning: default templates: - num: 1 file: rails-postgresql.json tuningsets: 2 - name: default pods: stepping: 3 stepsize: 5 pause: 0 s rate_limit: 4 delay: 0 ms",
"{ \"name\": \"IDENTIFIER\", \"description\": \"Number to append to the name of resources\", \"value\": \"1\" }",
"oc label node perf-node.example.com cpumanager=true",
"oc edit machineconfigpool worker",
"metadata: creationTimestamp: 2020-xx-xxx generation: 3 labels: custom-kubelet: cpumanager-enabled",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2",
"oc create -f cpumanager-kubeletconfig.yaml",
"oc get machineconfig 99-worker-XXXXXX-XXXXX-XXXX-XXXXX-kubelet -o json | grep ownerReference -A7",
"\"ownerReferences\": [ { \"apiVersion\": \"machineconfiguration.openshift.io/v1\", \"kind\": \"KubeletConfig\", \"name\": \"cpumanager-enabled\", \"uid\": \"7ed5616d-6b72-11e9-aae1-021e1ce18878\" } ]",
"oc debug node/perf-node.example.com sh-4.2# cat /host/etc/kubernetes/kubelet.conf | grep cpuManager",
"cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2",
"cat cpumanager-pod.yaml",
"apiVersion: v1 kind: Pod metadata: generateName: cpumanager- spec: containers: - name: cpumanager image: gcr.io/google_containers/pause-amd64:3.0 resources: requests: cpu: 1 memory: \"1G\" limits: cpu: 1 memory: \"1G\" nodeSelector: cpumanager: \"true\"",
"oc create -f cpumanager-pod.yaml",
"oc describe pod cpumanager",
"Name: cpumanager-6cqz7 Namespace: default Priority: 0 PriorityClassName: <none> Node: perf-node.example.com/xxx.xx.xx.xxx Limits: cpu: 1 memory: 1G Requests: cpu: 1 memory: 1G QoS Class: Guaranteed Node-Selectors: cpumanager=true",
"├─init.scope │ └─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 17 └─kubepods.slice ├─kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice │ ├─crio-b5437308f1a574c542bdf08563b865c0345c8f8c0b0a655612c.scope │ └─32706 /pause",
"cd /sys/fs/cgroup/cpuset/kubepods.slice/kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice/crio-b5437308f1ad1a7db0574c542bdf08563b865c0345c86e9585f8c0b0a655612c.scope for i in `ls cpuset.cpus tasks` ; do echo -n \"USDi \"; cat USDi ; done",
"cpuset.cpus 1 tasks 32706",
"grep ^Cpus_allowed_list /proc/32706/status",
"Cpus_allowed_list: 1",
"cat /sys/fs/cgroup/cpuset/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc494a073_6b77_11e9_98c0_06bba5c387ea.slice/crio-c56982f57b75a2420947f0afc6cafe7534c5734efc34157525fa9abbf99e3849.scope/cpuset.cpus 0 oc describe node perf-node.example.com",
"Capacity: attachable-volumes-aws-ebs: 39 cpu: 2 ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8162900Ki pods: 250 Allocatable: attachable-volumes-aws-ebs: 39 cpu: 1500m ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7548500Ki pods: 250 ------- ---- ------------ ---------- --------------- ------------- --- default cpumanager-6cqz7 1 (66%) 1 (66%) 1G (12%) 1G (12%) 29m Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 1440m (96%) 1 (66%)",
"NAME READY STATUS RESTARTS AGE cpumanager-6cqz7 1/1 Running 0 33m cpumanager-7qc2t 0/1 Pending 0 11s",
"oc edit KubeletConfig cpumanager-enabled",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s topologyManagerPolicy: single-numa-node 2",
"spec: containers: - name: nginx image: nginx",
"spec: containers: - name: nginx image: nginx resources: limits: memory: \"200Mi\" requests: memory: \"100Mi\"",
"spec: containers: - name: nginx image: nginx resources: limits: memory: \"200Mi\" cpu: \"2\" example.com/device: \"1\" requests: memory: \"200Mi\" cpu: \"2\" example.com/device: \"1\"",
"apiVersion: v1 kind: ConfigMap data: config.yaml: | prometheusK8s: retention: {{PROMETHEUS_RETENTION_PERIOD}} 1 nodeSelector: node-role.kubernetes.io/infra: \"\" volumeClaimTemplate: spec: storageClassName: {{STORAGE_CLASS}} 2 resources: requests: storage: {{PROMETHEUS_STORAGE_SIZE}} 3 alertmanagerMain: nodeSelector: node-role.kubernetes.io/infra: \"\" volumeClaimTemplate: spec: storageClassName: {{STORAGE_CLASS}} 4 resources: requests: storage: {{ALERTMANAGER_STORAGE_SIZE}} 5 metadata: name: cluster-monitoring-config namespace: openshift-monitoring",
"oc create -f cluster-monitoring-config.yaml",
"required pods per cluster / pods per node = total number of nodes needed",
"2200 / 500 = 4.4",
"2200 / 20 = 110",
"required pods per cluster / total number of nodes = expected pods per node",
"--- apiVersion: template.openshift.io/v1 kind: Template metadata: name: deployment-config-template creationTimestamp: annotations: description: This template will create a deploymentConfig with 1 replica, 4 env vars and a service. tags: '' objects: - apiVersion: apps.openshift.io/v1 kind: DeploymentConfig metadata: name: deploymentconfigUSD{IDENTIFIER} spec: template: metadata: labels: name: replicationcontrollerUSD{IDENTIFIER} spec: enableServiceLinks: false containers: - name: pauseUSD{IDENTIFIER} image: \"USD{IMAGE}\" ports: - containerPort: 8080 protocol: TCP env: - name: ENVVAR1_USD{IDENTIFIER} value: \"USD{ENV_VALUE}\" - name: ENVVAR2_USD{IDENTIFIER} value: \"USD{ENV_VALUE}\" - name: ENVVAR3_USD{IDENTIFIER} value: \"USD{ENV_VALUE}\" - name: ENVVAR4_USD{IDENTIFIER} value: \"USD{ENV_VALUE}\" resources: {} imagePullPolicy: IfNotPresent capabilities: {} securityContext: capabilities: {} privileged: false restartPolicy: Always serviceAccount: '' replicas: 1 selector: name: replicationcontrollerUSD{IDENTIFIER} triggers: - type: ConfigChange strategy: type: Rolling - apiVersion: v1 kind: Service metadata: name: serviceUSD{IDENTIFIER} spec: selector: name: replicationcontrollerUSD{IDENTIFIER} ports: - name: serviceportUSD{IDENTIFIER} protocol: TCP port: 80 targetPort: 8080 clusterIP: '' type: ClusterIP sessionAffinity: None status: loadBalancer: {} parameters: - name: IDENTIFIER description: Number to append to the name of resources value: '1' required: true - name: IMAGE description: Image to use for deploymentConfig value: gcr.io/google-containers/pause-amd64:3.0 required: false - name: ENV_VALUE description: Value to use for environment variables generate: expression from: \"[A-Za-z0-9]{255}\" required: false labels: template: deployment-config-template",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16",
"apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: <bare_metal_host_name> spec: online: true bmc: address: <bmc_address> credentialsName: <secret_credentials_name> 1 disableCertificateVerification: True 2 bootMACAddress: <host_boot_mac_address>",
"oc annotate machineset <machineset> -n openshift-machine-api 'metal3.io/autoscale-to-hosts=<any_value>'",
"oc annotate machineset <machineset> -n openshift-machine-api 'baremetalhost.metal3.io/detached'",
"oc annotate machineset <machineset> -n openshift-machine-api 'baremetalhost.metal3.io/detached-'",
"apiVersion: v1 kind: Pod metadata: generateName: hugepages-volume- spec: containers: - securityContext: privileged: true image: rhel7:latest command: - sleep - inf name: example volumeMounts: - mountPath: /dev/hugepages name: hugepage resources: limits: hugepages-2Mi: 100Mi 1 memory: \"1Gi\" cpu: \"1\" volumes: - name: hugepage emptyDir: medium: HugePages",
"apiVersion: v1 kind: Pod metadata: generateName: hugepages-volume- labels: app: hugepages-example spec: containers: - securityContext: capabilities: add: [ \"IPC_LOCK\" ] image: rhel7:latest command: - sleep - inf name: example volumeMounts: - mountPath: /dev/hugepages name: hugepage - mountPath: /etc/podinfo name: podinfo resources: limits: hugepages-1Gi: 2Gi memory: \"1Gi\" cpu: \"1\" requests: hugepages-1Gi: 2Gi env: - name: REQUESTS_HUGEPAGES_1GI <.> valueFrom: resourceFieldRef: containerName: example resource: requests.hugepages-1Gi volumes: - name: hugepage emptyDir: medium: HugePages - name: podinfo downwardAPI: items: - path: \"hugepages_1G_request\" <.> resourceFieldRef: containerName: example resource: requests.hugepages-1Gi divisor: 1Gi",
"oc create -f hugepages-volume-pod.yaml",
"oc exec -it USD(oc get pods -l app=hugepages-example -o jsonpath='{.items[0].metadata.name}') -- env | grep REQUESTS_HUGEPAGES_1GI",
"REQUESTS_HUGEPAGES_1GI=2147483648",
"oc exec -it USD(oc get pods -l app=hugepages-example -o jsonpath='{.items[0].metadata.name}') -- cat /etc/podinfo/hugepages_1G_request",
"2",
"oc label node <node_using_hugepages> node-role.kubernetes.io/worker-hp=",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: hugepages 1 namespace: openshift-cluster-node-tuning-operator spec: profile: 2 - data: | [main] summary=Boot time configuration for hugepages include=openshift-node [bootloader] cmdline_openshift_node_hugepages=hugepagesz=2M hugepages=50 3 name: openshift-node-hugepages recommend: - machineConfigLabels: 4 machineconfiguration.openshift.io/role: \"worker-hp\" priority: 30 profile: openshift-node-hugepages",
"oc create -f hugepages-tuned-boottime.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-hp labels: worker-hp: \"\" spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,worker-hp]} nodeSelector: matchLabels: node-role.kubernetes.io/worker-hp: \"\"",
"oc create -f hugepages-mcp.yaml",
"oc get node <node_using_hugepages> -o jsonpath=\"{.status.allocatable.hugepages-2Mi}\" 100Mi",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: thp-workers-profile namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom tuned profile for OpenShift to turn off THP on worker nodes include=openshift-node [vm] transparent_hugepages=never name: openshift-thp-never-worker recommend: - match: - label: node-role.kubernetes.io/worker priority: 25 profile: openshift-thp-never-worker",
"oc create -f thp-disable-tuned.yaml",
"oc get profile -n openshift-cluster-node-tuning-operator",
"cat /sys/kernel/mm/transparent_hugepage/enabled",
"always madvise [never]",
"apiVersion: v1 kind: Namespace metadata: name: openshift-performance-addon-operator annotations: workload.openshift.io/allowed: management",
"oc create -f pao-namespace.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-performance-addon-operator namespace: openshift-performance-addon-operator",
"oc create -f pao-operatorgroup.yaml",
"oc get packagemanifest performance-addon-operator -n openshift-marketplace -o jsonpath='{.status.defaultChannel}'",
"4.9",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-performance-addon-operator-subscription namespace: openshift-performance-addon-operator spec: channel: \"<channel>\" 1 name: performance-addon-operator source: redhat-operators 2 sourceNamespace: openshift-marketplace",
"oc create -f pao-sub.yaml",
"oc project openshift-performance-addon-operator",
"oc get csv -n openshift-performance-addon-operator",
"oc patch operatorgroup -n openshift-performance-addon-operator openshift-performance-addon-operator --type json -p '[{ \"op\": \"remove\", \"path\": \"/spec\" }]'",
"oc describe -n openshift-performance-addon-operator og openshift-performance-addon-operator",
"oc get csv",
"VERSION REPLACES PHASE 4.9.0 performance-addon-operator.v4.9.0 Installing 4.8.0 Replacing",
"oc get csv",
"NAME DISPLAY VERSION REPLACES PHASE performance-addon-operator.v4.9.0 Performance Addon Operator 4.9.0 performance-addon-operator.v4.8.0 Succeeded",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-rt labels: machineconfiguration.openshift.io/role: worker-rt spec: machineConfigSelector: matchExpressions: - { key: machineconfiguration.openshift.io/role, operator: In, values: [worker, worker-rt], } paused: false nodeSelector: matchLabels: node-role.kubernetes.io/worker-rt: \"\"",
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: example-performanceprofile spec: realTimeKernel: enabled: true nodeSelector: node-role.kubernetes.io/worker-rt: \"\" machineConfigPoolSelector: machineconfiguration.openshift.io/role: worker-rt",
"oc describe mcp/worker-rt",
"Name: worker-rt Namespace: Labels: machineconfiguration.openshift.io/role=worker-rt",
"oc get node -o wide",
"NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME rt-worker-0.example.com Ready worker,worker-rt 5d17h v1.22.1 128.66.135.107 <none> Red Hat Enterprise Linux CoreOS 46.82.202008252340-0 (Ootpa) 4.18.0-211.rt5.23.el8.x86_64 cri-o://1.22.1-90.rhaos4.9.git4a0ac05.el8-rc.1 [...]",
"apiVersion: v1 kind: Pod metadata: name: qos-demo namespace: qos-example spec: containers: - name: qos-demo-ctr image: <image-pull-spec> resources: limits: memory: \"200Mi\" cpu: \"1\" requests: memory: \"200Mi\" cpu: \"1\"",
"oc apply -f qos-pod.yaml --namespace=qos-example",
"oc get pod qos-demo --namespace=qos-example --output=yaml",
"spec: containers: status: qosClass: Guaranteed",
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile status: runtimeClass: performance-manual",
"apiVersion: v1 kind: Pod metadata: annotations: cpu-load-balancing.crio.io: \"disable\" spec: runtimeClassName: performance-<profile_name>",
"apiVersion: v1 kind: Pod metadata: name: example spec: # nodeSelector: node-role.kubernetes.io/worker-rt: \"\"",
"apiVersion: performance.openshift.io/v2 kind: Pod metadata: annotations: cpu-quota.crio.io: \"disable\" spec: runtimeClassName: performance-<profile_name>",
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: manual spec: globallyDisableIrqLoadBalancing: true",
"apiVersion: performance.openshift.io/v2 kind: Pod metadata: annotations: irq-load-balancing.crio.io: \"disable\" spec: runtimeClassName: performance-<profile_name>",
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: dynamic-irq-profile spec: cpu: isolated: 2-5 reserved: 0-1",
"apiVersion: v1 kind: Pod metadata: name: dynamic-irq-pod annotations: irq-load-balancing.crio.io: \"disable\" cpu-quota.crio.io: \"disable\" spec: containers: - name: dynamic-irq-pod image: \"quay.io/openshift-kni/cnf-tests:4.9\" command: [\"sleep\", \"10h\"] resources: requests: cpu: 2 memory: \"200M\" limits: cpu: 2 memory: \"200M\" nodeSelector: node-role.kubernetes.io/worker-cnf: \"\" runtimeClassName: performance-dynamic-irq-profile",
"oc get pod -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES dynamic-irq-pod 1/1 Running 0 5h33m <ip-address> <node-name> <none> <none>",
"oc exec -it dynamic-irq-pod -- /bin/bash -c \"grep Cpus_allowed_list /proc/self/status | awk '{print USD2}'\"",
"Cpus_allowed_list: 2-3",
"oc debug node/<node-name>",
"Starting pod/<node-name>-debug To use host binaries, run `chroot /host` Pod IP: <ip-address> If you don't see a command prompt, try pressing enter. sh-4.4#",
"sh-4.4# chroot /host",
"sh-4.4#",
"cat /proc/irq/default_smp_affinity",
"33",
"find /proc/irq/ -name smp_affinity_list -exec sh -c 'i=\"USD1\"; mask=USD(cat USDi); file=USD(echo USDi); echo USDfile: USDmask' _ {} \\;",
"/proc/irq/0/smp_affinity_list: 0-5 /proc/irq/1/smp_affinity_list: 5 /proc/irq/2/smp_affinity_list: 0-5 /proc/irq/3/smp_affinity_list: 0-5 /proc/irq/4/smp_affinity_list: 0 /proc/irq/5/smp_affinity_list: 0-5 /proc/irq/6/smp_affinity_list: 0-5 /proc/irq/7/smp_affinity_list: 0-5 /proc/irq/8/smp_affinity_list: 4 /proc/irq/9/smp_affinity_list: 4 /proc/irq/10/smp_affinity_list: 0-5 /proc/irq/11/smp_affinity_list: 0 /proc/irq/12/smp_affinity_list: 1 /proc/irq/13/smp_affinity_list: 0-5 /proc/irq/14/smp_affinity_list: 1 /proc/irq/15/smp_affinity_list: 0 /proc/irq/24/smp_affinity_list: 1 /proc/irq/25/smp_affinity_list: 1 /proc/irq/26/smp_affinity_list: 1 /proc/irq/27/smp_affinity_list: 5 /proc/irq/28/smp_affinity_list: 1 /proc/irq/29/smp_affinity_list: 0 /proc/irq/30/smp_affinity_list: 0-5",
"cat /proc/irq/<irq-num>/effective_affinity",
"lscpu --all --extended",
"CPU NODE SOCKET CORE L1d:L1i:L2:L3 ONLINE MAXMHZ MINMHZ 0 0 0 0 0:0:0:0 yes 4800.0000 400.0000 1 0 0 1 1:1:1:0 yes 4800.0000 400.0000 2 0 0 2 2:2:2:0 yes 4800.0000 400.0000 3 0 0 3 3:3:3:0 yes 4800.0000 400.0000 4 0 0 0 0:0:0:0 yes 4800.0000 400.0000 5 0 0 1 1:1:1:0 yes 4800.0000 400.0000 6 0 0 2 2:2:2:0 yes 4800.0000 400.0000 7 0 0 3 3:3:3:0 yes 4800.0000 400.0000",
"cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list",
"0-4",
"cpu: isolated: 0,4 reserved: 1-3,5-7",
"\\ufeffapiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: example-performanceprofile spec: additionalKernelArgs: - nmi_watchdog=0 - audit=0 - mce=off - processor.max_cstate=1 - idle=poll - intel_idle.max_cstate=0 - nosmt cpu: isolated: 2-3 reserved: 0-1 hugepages: defaultHugepagesSize: 1G pages: - count: 2 node: 0 size: 1G nodeSelector: node-role.kubernetes.io/performance: '' realTimeKernel: enabled: true",
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: cpu: isolated: \"5-15\" 1 reserved: \"0-4\" 2 hugepages: defaultHugepagesSize: \"1G\" pages: - size: \"1G\" count: 16 node: 0 realTimeKernel: enabled: true 3 numa: 4 topologyPolicy: \"best-effort\" nodeSelector: node-role.kubernetes.io/worker-cnf: \"\" 5",
"hugepages: defaultHugepagesSize: \"1G\" pages: - size: \"1G\" count: 4 node: 0 1",
"oc debug node/ip-10-0-141-105.ec2.internal",
"grep -i huge /proc/meminfo",
"AnonHugePages: ###### ## ShmemHugePages: 0 kB HugePages_Total: 2 HugePages_Free: 2 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: #### ## Hugetlb: #### ##",
"oc describe node worker-0.ocp4poc.example.com | grep -i huge",
"hugepages-1g=true hugepages-###: ### hugepages-###: ###",
"spec: hugepages: defaultHugepagesSize: 1G pages: - count: 1024 node: 0 size: 2M - count: 4 node: 1 size: 1G",
"\\ufeffapiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: infra-cpus spec: cpu: reserved: \"0-4,9\" 1 isolated: \"5-8\" 2 nodeSelector: 3 node-role.kubernetes.io/worker: \"\"",
"oc edit -f <your_profile_name>.yaml",
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: manual spec: cpu: isolated: 3-51,54-103 reserved: 0-2,52-54 net: userLevelNetworking: true nodeSelector: node-role.kubernetes.io/worker-cnf: \"\"",
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: manual spec: cpu: isolated: 3-51,54-103 reserved: 0-2,52-54 net: userLevelNetworking: true devices: - interfaceName: \"eth0\" - interfaceName: \"eth1\" - vendorID: \"0x1af4\" - deviceID: \"0x1000\" nodeSelector: node-role.kubernetes.io/worker-cnf: \"\"",
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: manual spec: cpu: isolated: 3-51,54-103 reserved: 0-2,52-54 net: userLevelNetworking: true devices: - interfaceName: \"eth*\" nodeSelector: node-role.kubernetes.io/worker-cnf: \"\"",
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: manual spec: cpu: isolated: 3-51,54-103 reserved: 0-2,52-54 net: userLevelNetworking: true devices: - interfaceName: \"!eno1\" nodeSelector: node-role.kubernetes.io/worker-cnf: \"\"",
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: manual spec: cpu: isolated: 3-51,54-103 reserved: 0-2,52-54 net: userLevelNetworking: true devices: - interfaceName: \"eth0\" - vendorID: \"0x1af4\" - deviceID: \"0x1000\" nodeSelector: node-role.kubernetes.io/worker-cnf: \"\"",
"oc apply -f <your_profile_name>.yaml",
"apiVersion: performance.openshift.io/v2 metadata: name: performance spec: kind: PerformanceProfile spec: cpu: reserved: 0-1 #total = 2 isolated: 2-8 net: userLevelNetworking: true",
"ethtool -l <device>",
"ethtool -l ens4",
"Channel parameters for ens4: Pre-set maximums: RX: 0 TX: 0 Other: 0 Combined: 4 Current hardware settings: RX: 0 TX: 0 Other: 0 Combined: 4",
"ethtool -l ens4",
"Channel parameters for ens4: Pre-set maximums: RX: 0 TX: 0 Other: 0 Combined: 4 Current hardware settings: RX: 0 TX: 0 Other: 0 Combined: 2 1",
"apiVersion: performance.openshift.io/v2 metadata: name: performance spec: kind: PerformanceProfile spec: cpu: reserved: 0-1 #total = 2 isolated: 2-8 net: userLevelNetworking: true devices: - vendorID = 0x1af4",
"ethtool -l <device>",
"ethtool -l ens4",
"Channel parameters for ens4: Pre-set maximums: RX: 0 TX: 0 Other: 0 Combined: 4 Current hardware settings: RX: 0 TX: 0 Other: 0 Combined: 2 1",
"udevadm info -p /sys/class/net/ens4 E: ID_MODEL_ID=0x1000 E: ID_VENDOR_ID=0x1af4 E: INTERFACE=ens4",
"udevadm info -p /sys/class/net/eth0 E: ID_MODEL_ID=0x1002 E: ID_VENDOR_ID=0x1001 E: INTERFACE=eth0",
"apiVersion: performance.openshift.io/v2 metadata: name: performance spec: kind: PerformanceProfile spec: cpu: reserved: 0-1 #total = 2 isolated: 2-8 net: userLevelNetworking: true devices: - interfaceName = eth0 - vendorID = 0x1af4",
"ethtool -l ens4",
"Channel parameters for ens4: Pre-set maximums: RX: 0 TX: 0 Other: 0 Combined: 4 Current hardware settings: RX: 0 TX: 0 Other: 0 Combined: 2 1",
"INFO tuned.plugins.base: instance net_test (net): assigning devices ens1, ens2, ens3",
"WARNING tuned.plugins.base: instance net_test: no matching devices available",
"Status: Conditions: Last Heartbeat Time: 2020-06-02T10:01:24Z Last Transition Time: 2020-06-02T10:01:24Z Status: True Type: Available Last Heartbeat Time: 2020-06-02T10:01:24Z Last Transition Time: 2020-06-02T10:01:24Z Status: True Type: Upgradeable Last Heartbeat Time: 2020-06-02T10:01:24Z Last Transition Time: 2020-06-02T10:01:24Z Status: False Type: Progressing Last Heartbeat Time: 2020-06-02T10:01:24Z Last Transition Time: 2020-06-02T10:01:24Z Status: False Type: Degraded",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-2ee57a93fa6c9181b546ca46e1571d2d True False False 3 3 3 0 2d21h worker rendered-worker-d6b2bdc07d9f5a59a6b68950acf25e5f True False False 2 2 2 0 2d21h worker-cnf rendered-worker-cnf-6c838641b8a08fff08dbd8b02fb63f7c False True True 2 1 1 1 2d20h",
"oc describe mcp worker-cnf",
"Message: Node node-worker-cnf is reporting: \"prepping update: machineconfig.machineconfiguration.openshift.io \\\"rendered-worker-cnf-40b9996919c08e335f3ff230ce1d170\\\" not found\" Reason: 1 nodes are reporting degraded status on sync",
"oc describe performanceprofiles performance",
"Message: Machine config pool worker-cnf Degraded Reason: 1 nodes are reporting degraded status on sync. Machine config pool worker-cnf Degraded Message: Node yquinn-q8s5v-w-b-z5lqn.c.openshift-gce-devel.internal is reporting: \"prepping update: machineconfig.machineconfiguration.openshift.io \\\"rendered-worker-cnf-40b9996919c08e335f3ff230ce1d170\\\" not found\". Reason: MCPDegraded Status: True Type: Degraded",
"--image=registry.redhat.io/openshift4/performance-addon-operator-must-gather-rhel8:v4.9.",
"oc adm must-gather --image-stream=openshift/must-gather \\ 1 --image=registry.redhat.io/openshift4/performance-addon-operator-must-gather-rhel8:v4.9 2",
"tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ 1",
"podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e LATENCY_TEST_RUN=true -e DISCOVERY_MODE=true registry.redhat.io/openshift4/cnf-tests-rhel8:v4.9 /usr/bin/test-run.sh -ginkgo.focus=\"\\[performance\\]\\ Latency\\ Test\"",
"podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e LATENCY_TEST_RUN=true -e LATENCY_TEST_RUNTIME=600 -e MAXIMUM_LATENCY=20 -e PERF_TEST_PROFILE=<performance_profile> registry.redhat.io/openshift4/cnf-tests-rhel8:v4.9 /usr/bin/test-run.sh -ginkgo.focus=\"[performance]\\ Latency\\ Test\"",
"podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e LATENCY_TEST_RUN=true -e DISCOVERY_MODE=true -e ROLE_WORKER_CNF=worker-cnf -e LATENCY_TEST_RUNTIME=600 -e MAXIMUM_LATENCY=20 registry.redhat.io/openshift4/cnf-tests-rhel8:v4.9 /usr/bin/test-run.sh -ginkgo.v -ginkgo.focus=\"hwlatdetect\"",
"running /usr/bin/validationsuite -ginkgo.v -ginkgo.focus=hwlatdetect I0210 17:08:38.607699 7 request.go:668] Waited for 1.047200253s due to client-side throttling, not priority and fairness, request: GET:https://api.ocp.demo.lab:6443/apis/apps.openshift.io/v1?timeout=32s Running Suite: CNF Features e2e validation ========================================== Random Seed: 1644512917 Will run 0 of 48 specs SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS Ran 0 of 48 Specs in 0.001 seconds SUCCESS! -- 0 Passed | 0 Failed | 0 Pending | 48 Skipped PASS Discovery mode enabled, skipping setup running /usr/bin/cnftests -ginkgo.v -ginkgo.focus=hwlatdetect I0210 17:08:41.179269 40 request.go:668] Waited for 1.046001096s due to client-side throttling, not priority and fairness, request: GET:https://api.ocp.demo.lab:6443/apis/storage.k8s.io/v1beta1?timeout=32s Running Suite: CNF Features e2e integration tests ================================================= Random Seed: 1644512920 Will run 1 of 151 specs SSSSSSS ------------------------------ [performance] Latency Test with the hwlatdetect image should succeed /remote-source/app/vendor/github.com/openshift-kni/performance-addon-operators/functests/4_latency/latency.go:221 STEP: Waiting two minutes to download the latencyTest image STEP: Waiting another two minutes to give enough time for the cluster to move the pod to Succeeded phase Feb 10 17:10:56.045: [INFO]: found mcd machine-config-daemon-dzpw7 for node ocp-worker-0.demo.lab Feb 10 17:10:56.259: [INFO]: found mcd machine-config-daemon-dzpw7 for node ocp-worker-0.demo.lab Feb 10 17:11:56.825: [ERROR]: timed out waiting for the condition • Failure [193.903 seconds] [performance] Latency Test /remote-source/app/vendor/github.com/openshift-kni/performance-addon-operators/functests/4_latency/latency.go:60 with the hwlatdetect image /remote-source/app/vendor/github.com/openshift-kni/performance-addon-operators/functests/4_latency/latency.go:213 should succeed [It] /remote-source/app/vendor/github.com/openshift-kni/performance-addon-operators/functests/4_latency/latency.go:221 Log file created at: 2022/02/10 17:08:45 Running on machine: hwlatdetect-cd8b6 Binary: Built with gc go1.16.6 for linux/amd64 Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg I0210 17:08:45.716288 1 node.go:37] Environment information: /proc/cmdline: BOOT_IMAGE=(hd0,gpt3)/ostree/rhcos-56fabc639a679b757ebae30e5f01b2ebd38e9fde9ecae91c41be41d3e89b37f8/vmlinuz-4.18.0-305.34.2.rt7.107.el8_4.x86_64 random.trust_cpu=on console=tty0 console=ttyS0,115200n8 ignition.platform.id=qemu ostree=/ostree/boot.0/rhcos/56fabc639a679b757ebae30e5f01b2ebd38e9fde9ecae91c41be41d3e89b37f8/0 root=UUID=56731f4f-f558-46a3-85d3-d1b579683385 rw rootflags=prjquota skew_tick=1 nohz=on rcu_nocbs=3-5 tuned.non_isolcpus=ffffffc7 intel_pstate=disable nosoftlockup tsc=nowatchdog intel_iommu=on iommu=pt isolcpus=managed_irq,3-5 systemd.cpu_affinity=0,1,2,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31 + + I0210 17:08:45.716782 1 node.go:44] Environment information: kernel version 4.18.0-305.34.2.rt7.107.el8_4.x86_64 I0210 17:08:45.716861 1 main.go:50] running the hwlatdetect command with arguments [/usr/bin/hwlatdetect --threshold 1 --hardlimit 1 --duration 10 --window 10000000us --width 950000us] F0210 17:08:56.815204 1 main.go:53] failed to run hwlatdetect command; out: hwlatdetect: test duration 10 seconds detector: tracer parameters: Latency threshold: 1us 1 Sample window: 10000000us Sample width: 950000us Non-sampling 
period: 9050000us Output File: None Starting test test finished Max Latency: 24us 2 Samples recorded: 1 Samples exceeding threshold: 1 ts: 1644512927.163556381, inner:20, outer:24 ; err: exit status 1 goroutine 1 [running]: k8s.io/klog.stacks(0xc000010001, 0xc00012e000, 0x25b, 0x2710) /remote-source/app/vendor/k8s.io/klog/klog.go:875 +0xb9 k8s.io/klog.(*loggingT).output(0x5bed00, 0xc000000003, 0xc0000121c0, 0x53ea81, 0x7, 0x35, 0x0) /remote-source/app/vendor/k8s.io/klog/klog.go:829 +0x1b0 k8s.io/klog.(*loggingT).printf(0x5bed00, 0x3, 0x5082da, 0x33, 0xc000113f58, 0x2, 0x2) /remote-source/app/vendor/k8s.io/klog/klog.go:707 +0x153 k8s.io/klog.Fatalf(...) /remote-source/app/vendor/k8s.io/klog/klog.go:1276 main.main() /remote-source/app/cnf-tests/pod-utils/hwlatdetect-runner/main.go:53 +0x897 goroutine 6 [chan receive]: k8s.io/klog.(*loggingT).flushDaemon(0x5bed00) /remote-source/app/vendor/k8s.io/klog/klog.go:1010 +0x8b created by k8s.io/klog.init.0 /remote-source/app/vendor/k8s.io/klog/klog.go:411 +0xd8 goroutine 7 [chan receive]: k8s.io/klog/v2.(*loggingT).flushDaemon(0x5bede0) /remote-source/app/vendor/k8s.io/klog/v2/klog.go:1169 +0x8b created by k8s.io/klog/v2.init.0 /remote-source/app/vendor/k8s.io/klog/v2/klog.go:420 +0xdf Unexpected error: <*errors.errorString | 0xc000418ed0>: { s: \"timed out waiting for the condition\", } timed out waiting for the condition occurred /remote-source/app/vendor/github.com/openshift-kni/performance-addon-operators/functests/4_latency/latency.go:433 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS JUnit report was created: /junit.xml/cnftests-junit.xml Summarizing 1 Failure: [Fail] [performance] Latency Test with the hwlatdetect image [It] should succeed /remote-source/app/vendor/github.com/openshift-kni/performance-addon-operators/functests/4_latency/latency.go:433 Ran 1 of 151 Specs in 222.254 seconds FAIL! -- 0 Passed | 1 Failed | 0 Pending | 150 Skipped --- FAIL: TestTest (222.45s) FAIL",
"hwlatdetect: test duration 3600 seconds detector: tracer parameters: Latency threshold: 10us Sample window: 1000000us Sample width: 950000us Non-sampling period: 50000us Output File: None Starting test test finished Max Latency: Below threshold Samples recorded: 0",
"hwlatdetect: test duration 3600 seconds detector: tracer parameters:Latency threshold: 10usSample window: 1000000us Sample width: 950000usNon-sampling period: 50000usOutput File: None Starting tests:1610542421.275784439, inner:78, outer:81 ts: 1610542444.330561619, inner:27, outer:28 ts: 1610542445.332549975, inner:39, outer:38 ts: 1610542541.568546097, inner:47, outer:32 ts: 1610542590.681548531, inner:13, outer:17 ts: 1610543033.818801482, inner:29, outer:30 ts: 1610543080.938801990, inner:90, outer:76 ts: 1610543129.065549639, inner:28, outer:39 ts: 1610543474.859552115, inner:28, outer:35 ts: 1610543523.973856571, inner:52, outer:49 ts: 1610543572.089799738, inner:27, outer:30 ts: 1610543573.091550771, inner:34, outer:28 ts: 1610543574.093555202, inner:116, outer:63",
"podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e LATENCY_TEST_RUN=true -e DISCOVERY_MODE=true -e ROLE_WORKER_CNF=worker-cnf -e LATENCY_TEST_CPUS=10 -e LATENCY_TEST_RUNTIME=600 -e MAXIMUM_LATENCY=20 registry.redhat.io/openshift4/cnf-tests-rhel8:v4.9 /usr/bin/test-run.sh -ginkgo.v -ginkgo.focus=\"cyclictest\"",
"Discovery mode enabled, skipping setup running /usr/bin//cnftests -ginkgo.v -ginkgo.focus=cyclictest I0811 15:02:36.350033 20 request.go:668] Waited for 1.049965918s due to client-side throttling, not priority and fairness, request: GET:https://api.cnfdc8.t5g.lab.eng.bos.redhat.com:6443/apis/machineconfiguration.openshift.io/v1?timeout=32s Running Suite: CNF Features e2e integration tests ================================================= Random Seed: 1628694153 Will run 1 of 138 specs SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [performance] Latency Test with the cyclictest image should succeed /go/src/github.com/openshift-kni/cnf-features-deploy/vendor/github.com/openshift-kni/performance-addon-operators/functests/4_latency/latency.go:200 STEP: Waiting two minutes to download the latencyTest image STEP: Waiting another two minutes to give enough time for the cluster to move the pod to Succeeded phase Aug 11 15:03:06.826: [INFO]: found mcd machine-config-daemon-wf4w8 for node cnfdc8.clus2.t5g.lab.eng.bos.redhat.com • Failure [22.527 seconds] [performance] Latency Test /go/src/github.com/openshift-kni/cnf-features-deploy/vendor/github.com/openshift-kni/performance-addon-operators/functests/4_latency/latency.go:84 with the cyclictest image /go/src/github.com/openshift-kni/cnf-features-deploy/vendor/github.com/openshift-kni/performance-addon-operators/functests/4_latency/latency.go:188 should succeed [It] /go/src/github.com/openshift-kni/cnf-features-deploy/vendor/github.com/openshift-kni/performance-addon-operators/functests/4_latency/latency.go:200 The current latency 27 is bigger than the expected one 20 Expected <bool>: false to be true /go/src/github.com/openshift-kni/cnf-features-deploy/vendor/github.com/openshift-kni/performance-addon-operators/functests/4_latency/latency.go:219 Log file created at: 2021/08/11 15:02:51 Running on machine: cyclictest-knk7d Binary: Built with gc go1.16.6 for linux/amd64 Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg I0811 15:02:51.092254 1 node.go:37] Environment information: /proc/cmdline: BOOT_IMAGE=(hd0,gpt3)/ostree/rhcos-612d89f4519a53ad0b1a132f4add78372661bfb3994f5fe115654971aa58a543/vmlinuz-4.18.0-305.10.2.rt7.83.el8_4.x86_64 ip=dhcp random.trust_cpu=on console=tty0 console=ttyS0,115200n8 ostree=/ostree/boot.1/rhcos/612d89f4519a53ad0b1a132f4add78372661bfb3994f5fe115654971aa58a543/0 ignition.platform.id=openstack root=UUID=5a4ddf16-9372-44d9-ac4e-3ee329e16ab3 rw rootflags=prjquota skew_tick=1 nohz=on rcu_nocbs=1-3 tuned.non_isolcpus=000000ff,ffffffff,ffffffff,fffffff1 intel_pstate=disable nosoftlockup tsc=nowatchdog intel_iommu=on iommu=pt isolcpus=managed_irq,1-3 systemd.cpu_affinity=0,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103 default_hugepagesz=1G hugepagesz=2M hugepages=128 nmi_watchdog=0 audit=0 mce=off processor.max_cstate=1 idle=poll intel_idle.max_cstate=0 I0811 15:02:51.092427 1 node.go:44] Environment information: kernel version 4.18.0-305.10.2.rt7.83.el8_4.x86_64 I0811 15:02:51.092450 1 main.go:48] running the cyclictest command with arguments [-D 600 -95 1 -t 10 -a 2,4,6,8,10,54,56,58,60,62 -h 30 -i 1000 --quiet] I0811 15:03:06.147253 1 main.go:54] succeeded 
to run the cyclictest command: # /dev/cpu_dma_latency set to 0us Histogram 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000001 000000 005561 027778 037704 011987 000000 120755 238981 081847 300186 000002 587440 581106 564207 554323 577416 590635 474442 357940 513895 296033 000003 011751 011441 006449 006761 008409 007904 002893 002066 003349 003089 000004 000527 001079 000914 000712 001451 001120 000779 000283 000350 000251 More histogram entries Min Latencies: 00002 00001 00001 00001 00001 00002 00001 00001 00001 00001 Avg Latencies: 00002 00002 00002 00001 00002 00002 00001 00001 00001 00001 Max Latencies: 00018 00465 00361 00395 00208 00301 02052 00289 00327 00114 Histogram Overflows: 00000 00220 00159 00128 00202 00017 00069 00059 00045 00120 Histogram Overflow at cycle number: Thread 0: Thread 1: 01142 01439 05305 ... # 00190 others Thread 2: 20895 21351 30624 ... # 00129 others Thread 3: 01143 17921 18334 ... # 00098 others Thread 4: 30499 30622 31566 ... # 00172 others Thread 5: 145221 170910 171888 Thread 6: 01684 26291 30623 ...# 00039 others Thread 7: 28983 92112 167011 ... 00029 others Thread 8: 45766 56169 56171 ...# 00015 others Thread 9: 02974 08094 13214 ... # 00090 others",
"running cmd: cyclictest -q -D 10m -p 1 -t 16 -a 2,4,6,8,10,12,14,16,54,56,58,60,62,64,66,68 -h 30 -i 1000 -m Histogram 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000001 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000002 579506 535967 418614 573648 532870 529897 489306 558076 582350 585188 583793 223781 532480 569130 472250 576043 More histogram entries Total: 000600000 000600000 000600000 000599999 000599999 000599999 000599998 000599998 000599998 000599997 000599997 000599996 000599996 000599995 000599995 000599995 Min Latencies: 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 Avg Latencies: 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 Max Latencies: 00005 00005 00004 00005 00004 00004 00005 00005 00006 00005 00004 00005 00004 00004 00005 00004 Histogram Overflows: 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 Histogram Overflow at cycle number: Thread 0: Thread 1: Thread 2: Thread 3: Thread 4: Thread 5: Thread 6: Thread 7: Thread 8: Thread 9: Thread 10: Thread 11: Thread 12: Thread 13: Thread 14: Thread 15:",
"running cmd: cyclictest -q -D 10m -p 1 -t 16 -a 2,4,6,8,10,12,14,16,54,56,58,60,62,64,66,68 -h 30 -i 1000 -m Histogram 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000001 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000002 564632 579686 354911 563036 492543 521983 515884 378266 592621 463547 482764 591976 590409 588145 589556 353518 More histogram entries Total: 000599999 000599999 000599999 000599997 000599997 000599998 000599998 000599997 000599997 000599996 000599995 000599996 000599995 000599995 000599995 000599993 Min Latencies: 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 Avg Latencies: 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 Max Latencies: 00493 00387 00271 00619 00541 00513 00009 00389 00252 00215 00539 00498 00363 00204 00068 00520 Histogram Overflows: 00001 00001 00001 00002 00002 00001 00000 00001 00001 00001 00002 00001 00001 00001 00001 00002 Histogram Overflow at cycle number: Thread 0: 155922 Thread 1: 110064 Thread 2: 110064 Thread 3: 110063 155921 Thread 4: 110063 155921 Thread 5: 155920 Thread 6: Thread 7: 110062 Thread 8: 110062 Thread 9: 155919 Thread 10: 110061 155919 Thread 11: 155918 Thread 12: 155918 Thread 13: 110060 Thread 14: 110060 Thread 15: 110059 155917",
"podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e LATENCY_TEST_RUN=true -e DISCOVERY_MODE=true -e ROLE_WORKER_CNF=worker-cnf -e LATENCY_TEST_CPUS=7 -e LATENCY_TEST_RUNTIME=600 -e MAXIMUM_LATENCY=20 registry.redhat.io/openshift4/cnf-tests-rhel8:v4.9 /usr/bin/test-run.sh -ginkgo.v -ginkgo.focus=\"oslat\"",
"running /usr/bin//validationsuite -ginkgo.v -ginkgo.focus=oslat I0829 12:36:55.386776 8 request.go:668] Waited for 1.000303471s due to client-side throttling, not priority and fairness, request: GET:https://api.cnfdc8.t5g.lab.eng.bos.redhat.com:6443/apis/authentication.k8s.io/v1?timeout=32s Running Suite: CNF Features e2e validation ========================================== Discovery mode enabled, skipping setup running /usr/bin//cnftests -ginkgo.v -ginkgo.focus=oslat I0829 12:37:01.219077 20 request.go:668] Waited for 1.050010755s due to client-side throttling, not priority and fairness, request: GET:https://api.cnfdc8.t5g.lab.eng.bos.redhat.com:6443/apis/snapshot.storage.k8s.io/v1beta1?timeout=32s Running Suite: CNF Features e2e integration tests ================================================= Random Seed: 1630240617 Will run 1 of 142 specs SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [performance] Latency Test with the oslat image should succeed /go/src/github.com/openshift-kni/cnf-features-deploy/vendor/github.com/openshift-kni/performance-addon-operators/functests/4_latency/latency.go:134 STEP: Waiting two minutes to download the latencyTest image STEP: Waiting another two minutes to give enough time for the cluster to move the pod to Succeeded phase Aug 29 12:37:59.324: [INFO]: found mcd machine-config-daemon-wf4w8 for node cnfdc8.clus2.t5g.lab.eng.bos.redhat.com • Failure [49.246 seconds] [performance] Latency Test /go/src/github.com/openshift-kni/cnf-features-deploy/vendor/github.com/openshift-kni/performance-addon-operators/functests/4_latency/latency.go:59 with the oslat image /go/src/github.com/openshift-kni/cnf-features-deploy/vendor/github.com/openshift-kni/performance-addon-operators/functests/4_latency/latency.go:112 should succeed [It] /go/src/github.com/openshift-kni/cnf-features-deploy/vendor/github.com/openshift-kni/performance-addon-operators/functests/4_latency/latency.go:134 The current latency 27 is bigger than the expected one 20 1 Expected <bool>: false to be true /go/src/github.com/openshift-kni/cnf-features-deploy/vendor/github.com/openshift-kni/performance-addon-operators/functests/4_latency/latency.go:168 Log file created at: 2021/08/29 13:25:21 Running on machine: oslat-57c2g Binary: Built with gc go1.16.6 for linux/amd64 Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg I0829 13:25:21.569182 1 node.go:37] Environment information: /proc/cmdline: BOOT_IMAGE=(hd0,gpt3)/ostree/rhcos-612d89f4519a53ad0b1a132f4add78372661bfb3994f5fe115654971aa58a543/vmlinuz-4.18.0-305.10.2.rt7.83.el8_4.x86_64 ip=dhcp random.trust_cpu=on console=tty0 console=ttyS0,115200n8 ostree=/ostree/boot.0/rhcos/612d89f4519a53ad0b1a132f4add78372661bfb3994f5fe115654971aa58a543/0 ignition.platform.id=openstack root=UUID=5a4ddf16-9372-44d9-ac4e-3ee329e16ab3 rw rootflags=prjquota skew_tick=1 nohz=on rcu_nocbs=1-3 tuned.non_isolcpus=000000ff,ffffffff,ffffffff,fffffff1 intel_pstate=disable nosoftlockup tsc=nowatchdog intel_iommu=on iommu=pt isolcpus=managed_irq,1-3 systemd.cpu_affinity=0,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103 default_hugepagesz=1G hugepagesz=2M hugepages=128 nmi_watchdog=0 audit=0 mce=off processor.max_cstate=1 idle=poll intel_idle.max_cstate=0 I0829 13:25:21.569345 1 
node.go:44] Environment information: kernel version 4.18.0-305.10.2.rt7.83.el8_4.x86_64 I0829 13:25:21.569367 1 main.go:53] Running the oslat command with arguments [--duration 600 --rtprio 1 --cpu-list 4,6,52,54,56,58 --cpu-main-thread 2] I0829 13:35:22.632263 1 main.go:59] Succeeded to run the oslat command: oslat V 2.00 Total runtime: 600 seconds Thread priority: SCHED_FIFO:1 CPU list: 4,6,52,54,56,58 CPU for main thread: 2 Workload: no Workload mem: 0 (KiB) Preheat cores: 6 Pre-heat for 1 seconds Test starts Test completed. Core: 4 6 52 54 56 58 CPU Freq: 2096 2096 2096 2096 2096 2096 (Mhz) 001 (us): 19390720316 19141129810 20265099129 20280959461 19391991159 19119877333 002 (us): 5304 5249 5777 5947 6829 4971 003 (us): 28 14 434 47 208 21 004 (us): 1388 853 123568 152817 5576 0 005 (us): 207850 223544 103827 91812 227236 231563 006 (us): 60770 122038 277581 323120 122633 122357 007 (us): 280023 223992 63016 25896 214194 218395 008 (us): 40604 25152 24368 4264 24440 25115 009 (us): 6858 3065 5815 810 3286 2116 010 (us): 1947 936 1452 151 474 361 Minimum: 1 1 1 1 1 1 (us) Average: 1.000 1.000 1.000 1.000 1.000 1.000 (us) Maximum: 37 38 49 28 28 19 (us) Max-Min: 36 37 48 27 27 18 (us) Duration: 599.667 599.667 599.667 599.667 599.667 599.667 (sec)",
"podman run -v USD(pwd)/:/kubeconfig:Z -v USD(pwd)/reportdest:<report_folder_path> -e KUBECONFIG=/kubeconfig/kubeconfig -e DISCOVERY_MODE=true registry.redhat.io/openshift4/cnf-tests-rhel8:v4.9 /usr/bin/test-run.sh --report <report_folder_path> -ginkgo.focus=\"\\[performance\\]\\ Latency\\ Test\"",
"podman run -v USD(pwd)/:/kubeconfig:Z -v USD(pwd)/junitdest:<junit_folder_path> -e KUBECONFIG=/kubeconfig/kubeconfig -e DISCOVERY_MODE=true registry.redhat.io/openshift4/cnf-tests-rhel8:v4.9 /usr/bin/test-run.sh --junit <junit_folder_path> -ginkgo.focus=\"\\[performance\\]\\ Latency\\ Test\"",
"podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e DISCOVERY_MODE=true -e ROLE_WORKER_CNF=master registry.redhat.io/openshift4/cnf-tests-rhel8:v4.9 /usr/bin/test-run.sh -ginkgo.focus=\"\\[performance\\]\\ Latency\\ Test\"",
"podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig registry.redhat.io/openshift4/cnf-tests-rhel8:v4.9 /usr/bin/mirror -registry <disconnected_registry> | oc image mirror -f -",
"podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e DISCOVERY_MODE=true -e IMAGE_REGISTRY=\"<disconnected_registry>\" -e CNF_TESTS_IMAGE=\"cnf-tests-rhel8:v4.9\" /usr/bin/test-run.sh -ginkgo.focus=\"\\[performance\\]\\ Latency\\ Test\"",
"podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e IMAGE_REGISTRY=\"<custom_image_registry>\" -e CNF_TESTS_IMAGE=\"<custom_cnf-tests_image>\" registry.redhat.io/openshift4/cnf-tests-rhel8:v4.9 /usr/bin/test-run.sh",
"oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{\"spec\":{\"defaultRoute\":true}}' --type=merge",
"REGISTRY=USD(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')",
"oc create ns cnftests",
"oc policy add-role-to-user system:image-puller system:serviceaccount:cnf-features-testing:default --namespace=cnftests",
"oc policy add-role-to-user system:image-puller system:serviceaccount:performance-addon-operators-testing:default --namespace=cnftests",
"SECRET=USD(oc -n cnftests get secret | grep builder-docker | awk {'print USD1'}",
"TOKEN=USD(oc -n cnftests get secret USDSECRET -o jsonpath=\"{.data['\\.dockercfg']}\" | base64 --decode | jq '.[\"image-registry.openshift-image-registry.svc:5000\"].auth')",
"echo \"{\\\"auths\\\": { \\\"USDREGISTRY\\\": { \\\"auth\\\": USDTOKEN } }}\" > dockerauth.json",
"podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig registry.redhat.io/openshift4/cnf-tests-rhel8:4.9 /usr/bin/mirror -registry USDREGISTRY/cnftests | oc image mirror --insecure=true -a=USD(pwd)/dockerauth.json -f -",
"podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e DISCOVERY_MODE=true -e IMAGE_REGISTRY=image-registry.openshift-image-registry.svc:5000/cnftests cnf-tests-local:latest /usr/bin/test-run.sh -ginkgo.focus=\"\\[performance\\]\\ Latency\\ Test\"",
"[ { \"registry\": \"public.registry.io:5000\", \"image\": \"imageforcnftests:4.9\" } ]",
"podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig registry.redhat.io/openshift4/cnf-tests-rhel8:v4.9 /usr/bin/mirror --registry \"my.local.registry:5000/\" --images \"/kubeconfig/images.json\" | oc image mirror -f -",
"podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig registry.redhat.io/openshift4/cnf-tests-rhel8:v4.9 get nodes",
"oc adm must-gather --image=<PAO_image> --dest-dir=<dir>",
"oc adm must-gather --image=registry.redhat.io/openshift4/performance-addon-operator-must-gather-rhel8:v4.9 --dest-dir=must-gather",
"tar cvaf must-gather.tar.gz must-gather/",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-acd1358917e9f98cbdb599aea622d78b True False False 3 3 3 0 22h worker-cnf rendered-worker-cnf-1d871ac76e1951d32b2fe92369879826 False True False 2 1 1 0 22h",
"podman login registry.redhat.io",
"Username: myrhusername Password: ************",
"podman run --entrypoint performance-profile-creator registry.redhat.io/openshift4/performance-addon-rhel8-operator:v4.9 -h",
"A tool that automates creation of Performance Profiles Usage: performance-profile-creator [flags] Flags: --disable-ht Disable Hyperthreading -h, --help help for performance-profile-creator --info string Show cluster information; requires --must-gather-dir-path, ignore the other arguments. [Valid values: log, json] (default \"log\") --mcp-name string MCP name corresponding to the target machines (required) --must-gather-dir-path string Must gather directory path (default \"must-gather\") --power-consumption-mode string The power consumption mode. [Valid values: default, low-latency, ultra-low-latency] (default \"default\") --profile-name string Name of the performance profile to be created (default \"performance\") --reserved-cpu-count int Number of reserved CPUs (required) --rt-kernel Enable Real Time Kernel (required) --split-reserved-cpus-across-numa Split the Reserved CPUs across NUMA nodes --topology-manager-policy string Kubelet Topology Manager Policy of the performance profile to be created. [Valid values: single-numa-node, best-effort, restricted] (default \"restricted\") --user-level-networking Run with User level Networking(DPDK) enabled",
"podman run --entrypoint performance-profile-creator -v /must-gather:/must-gather:z registry.redhat.io/openshift4/performance-addon-rhel8-operator:v4.9 --info log --must-gather-dir-path /must-gather",
"podman run --entrypoint performance-profile-creator -v /must-gather:/must-gather:z registry.redhat.io/openshift4/performance-addon-rhel8-operator:v4.9 --mcp-name=worker-cnf --reserved-cpu-count=20 --rt-kernel=true --split-reserved-cpus-across-numa=false --topology-manager-policy=single-numa-node --must-gather-dir-path /must-gather --power-consumption-mode=ultra-low-latency > my-performance-profile.yaml",
"cat my-performance-profile.yaml",
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: additionalKernelArgs: - nmi_watchdog=0 - audit=0 - mce=off - processor.max_cstate=1 - intel_idle.max_cstate=0 - idle=poll cpu: isolated: 1,3,5,7,9,11,13,15,17,19-39,41,43,45,47,49,51,53,55,57,59-79 reserved: 0,2,4,6,8,10,12,14,16,18,40,42,44,46,48,50,52,54,56,58 nodeSelector: node-role.kubernetes.io/worker-cnf: \"\" numa: topologyPolicy: single-numa-node realTimeKernel: enabled: true",
"oc apply -f my-performance-profile.yaml",
"podman run --entrypoint performance-profile-creator -v /must-gather:/must-gather:z registry.redhat.io/openshift4/performance-addon-rhel8-operator:v4.9 --mcp-name=worker-cnf --reserved-cpu-count=20 --rt-kernel=true --split-reserved-cpus-across-numa=true --must-gather-dir-path /must-gather > my-performance-profile.yaml",
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: cpu: isolated: 10-39,50-79 reserved: 0-9,40-49 nodeSelector: node-role.kubernetes.io/worker-cnf: \"\" numa: topologyPolicy: restricted realTimeKernel: enabled: true",
"vi run-perf-profile-creator.sh",
"#!/bin/bash readonly CONTAINER_RUNTIME=USD{CONTAINER_RUNTIME:-podman} readonly CURRENT_SCRIPT=USD(basename \"USD0\") readonly CMD=\"USD{CONTAINER_RUNTIME} run --entrypoint performance-profile-creator\" readonly IMG_EXISTS_CMD=\"USD{CONTAINER_RUNTIME} image exists\" readonly IMG_PULL_CMD=\"USD{CONTAINER_RUNTIME} image pull\" readonly MUST_GATHER_VOL=\"/must-gather\" PAO_IMG=\"registry.redhat.io/openshift4/performance-addon-rhel8-operator:v4.9\" MG_TARBALL=\"\" DATA_DIR=\"\" usage() { print \"Wrapper usage:\" print \" USD{CURRENT_SCRIPT} [-h] [-p image][-t path] -- [performance-profile-creator flags]\" print \"\" print \"Options:\" print \" -h help for USD{CURRENT_SCRIPT}\" print \" -p Performance Addon Operator image\" print \" -t path to a must-gather tarball\" USD{IMG_EXISTS_CMD} \"USD{PAO_IMG}\" && USD{CMD} \"USD{PAO_IMG}\" -h } function cleanup { [ -d \"USD{DATA_DIR}\" ] && rm -rf \"USD{DATA_DIR}\" } trap cleanup EXIT exit_error() { print \"error: USD*\" usage exit 1 } print() { echo \"USD*\" >&2 } check_requirements() { USD{IMG_EXISTS_CMD} \"USD{PAO_IMG}\" || USD{IMG_PULL_CMD} \"USD{PAO_IMG}\" || exit_error \"Performance Addon Operator image not found\" [ -n \"USD{MG_TARBALL}\" ] || exit_error \"Must-gather tarball file path is mandatory\" [ -f \"USD{MG_TARBALL}\" ] || exit_error \"Must-gather tarball file not found\" DATA_DIR=USD(mktemp -d -t \"USD{CURRENT_SCRIPT}XXXX\") || exit_error \"Cannot create the data directory\" tar -zxf \"USD{MG_TARBALL}\" --directory \"USD{DATA_DIR}\" || exit_error \"Cannot decompress the must-gather tarball\" chmod a+rx \"USD{DATA_DIR}\" return 0 } main() { while getopts ':hp:t:' OPT; do case \"USD{OPT}\" in h) usage exit 0 ;; p) PAO_IMG=\"USD{OPTARG}\" ;; t) MG_TARBALL=\"USD{OPTARG}\" ;; ?) exit_error \"invalid argument: USD{OPTARG}\" ;; esac done shift USD((OPTIND - 1)) check_requirements || exit 1 USD{CMD} -v \"USD{DATA_DIR}:USD{MUST_GATHER_VOL}:z\" \"USD{PAO_IMG}\" \"USD@\" --must-gather-dir-path \"USD{MUST_GATHER_VOL}\" echo \"\" 1>&2 } main \"USD@\"",
"chmod a+x run-perf-profile-creator.sh",
"./run-perf-profile-creator.sh -h",
"Wrapper usage: run-perf-profile-creator.sh [-h] [-p image][-t path] -- [performance-profile-creator flags] Options: -h help for run-perf-profile-creator.sh -p Performance Addon Operator image 1 -t path to a must-gather tarball 2 A tool that automates creation of Performance Profiles Usage: performance-profile-creator [flags] Flags: --disable-ht Disable Hyperthreading -h, --help help for performance-profile-creator --info string Show cluster information; requires --must-gather-dir-path, ignore the other arguments. [Valid values: log, json] (default \"log\") --mcp-name string MCP name corresponding to the target machines (required) --must-gather-dir-path string Must gather directory path (default \"must-gather\") --power-consumption-mode string The power consumption mode. [Valid values: default, low-latency, ultra-low-latency] (default \"default\") --profile-name string Name of the performance profile to be created (default \"performance\") --reserved-cpu-count int Number of reserved CPUs (required) --rt-kernel Enable Real Time Kernel (required) --split-reserved-cpus-across-numa Split the Reserved CPUs across NUMA nodes --topology-manager-policy string Kubelet Topology Manager Policy of the performance profile to be created. [Valid values: single-numa-node, best-effort, restricted] (default \"restricted\") --user-level-networking Run with User level Networking(DPDK) enabled",
"./run-perf-profile-creator.sh -t /must-gather/must-gather.tar.gz -- --info=log",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-acd1358917e9f98cbdb599aea622d78b True False False 3 3 3 0 22h worker-cnf rendered-worker-cnf-1d871ac76e1951d32b2fe92369879826 False True False 2 1 1 0 22h",
"./run-perf-profile-creator.sh -t /must-gather/must-gather.tar.gz -- --mcp-name=worker-cnf --reserved-cpu-count=2 --rt-kernel=true > my-performance-profile.yaml",
"cat my-performance-profile.yaml",
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: cpu: isolated: 1-39,41-79 reserved: 0,40 nodeSelector: node-role.kubernetes.io/worker-cnf: \"\" numa: topologyPolicy: restricted realTimeKernel: enabled: false",
"oc apply -f my-performance-profile.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 02-master-workload-partitioning spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,W2NyaW8ucnVudGltZS53b3JrbG9hZHMubWFuYWdlbWVudF0KYWN0aXZhdGlvbl9hbm5vdGF0aW9uID0gInRhcmdldC53b3JrbG9hZC5vcGVuc2hpZnQuaW8vbWFuYWdlbWVudCIKYW5ub3RhdGlvbl9wcmVmaXggPSAicmVzb3VyY2VzLndvcmtsb2FkLm9wZW5zaGlmdC5pbyIKW2NyaW8ucnVudGltZS53b3JrbG9hZHMubWFuYWdlbWVudC5yZXNvdXJjZXNdCmNwdXNoYXJlcyA9IDAKQ1BVcyA9ICIwLTEsIDUyLTUzIgo= mode: 420 overwrite: true path: /etc/crio/crio.conf.d/01-workload-partitioning user: name: root - contents: source: data:text/plain;charset=utf-8;base64,ewogICJtYW5hZ2VtZW50IjogewogICAgImNwdXNldCI6ICIwLTEsNTItNTMiCiAgfQp9Cg== mode: 420 overwrite: true path: /etc/kubernetes/openshift-workload-pinning user: name: root",
"[crio.runtime.workloads.management] activation_annotation = \"target.workload.openshift.io/management\" annotation_prefix = \"resources.workload.openshift.io\" [crio.runtime.workloads.management.resources] cpushares = 0 cpuset = \"0-1, 52-53\" 1",
"{ \"management\": { \"cpuset\": \"0-1,52-53\" 1 } }",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: container-mount-namespace-and-kubelet-conf-master spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKCmRlYnVnKCkgewogIGVjaG8gJEAgPiYyCn0KCnVzYWdlKCkgewogIGVjaG8gVXNhZ2U6ICQoYmFzZW5hbWUgJDApIFVOSVQgW2VudmZpbGUgW3Zhcm5hbWVdXQogIGVjaG8KICBlY2hvIEV4dHJhY3QgdGhlIGNvbnRlbnRzIG9mIHRoZSBmaXJzdCBFeGVjU3RhcnQgc3RhbnphIGZyb20gdGhlIGdpdmVuIHN5c3RlbWQgdW5pdCBhbmQgcmV0dXJuIGl0IHRvIHN0ZG91dAogIGVjaG8KICBlY2hvICJJZiAnZW52ZmlsZScgaXMgcHJvdmlkZWQsIHB1dCBpdCBpbiB0aGVyZSBpbnN0ZWFkLCBhcyBhbiBlbnZpcm9ubWVudCB2YXJpYWJsZSBuYW1lZCAndmFybmFtZSciCiAgZWNobyAiRGVmYXVsdCAndmFybmFtZScgaXMgRVhFQ1NUQVJUIGlmIG5vdCBzcGVjaWZpZWQiCiAgZXhpdCAxCn0KClVOSVQ9JDEKRU5WRklMRT0kMgpWQVJOQU1FPSQzCmlmIFtbIC16ICRVTklUIHx8ICRVTklUID09ICItLWhlbHAiIHx8ICRVTklUID09ICItaCIgXV07IHRoZW4KICB1c2FnZQpmaQpkZWJ1ZyAiRXh0cmFjdGluZyBFeGVjU3RhcnQgZnJvbSAkVU5JVCIKRklMRT0kKHN5c3RlbWN0bCBjYXQgJFVOSVQgfCBoZWFkIC1uIDEpCkZJTEU9JHtGSUxFI1wjIH0KaWYgW1sgISAtZiAkRklMRSBdXTsgdGhlbgogIGRlYnVnICJGYWlsZWQgdG8gZmluZCByb290IGZpbGUgZm9yIHVuaXQgJFVOSVQgKCRGSUxFKSIKICBleGl0CmZpCmRlYnVnICJTZXJ2aWNlIGRlZmluaXRpb24gaXMgaW4gJEZJTEUiCkVYRUNTVEFSVD0kKHNlZCAtbiAtZSAnL15FeGVjU3RhcnQ9LipcXCQvLC9bXlxcXSQvIHsgcy9eRXhlY1N0YXJ0PS8vOyBwIH0nIC1lICcvXkV4ZWNTdGFydD0uKlteXFxdJC8geyBzL15FeGVjU3RhcnQ9Ly87IHAgfScgJEZJTEUpCgppZiBbWyAkRU5WRklMRSBdXTsgdGhlbgogIFZBUk5BTUU9JHtWQVJOQU1FOi1FWEVDU1RBUlR9CiAgZWNobyAiJHtWQVJOQU1FfT0ke0VYRUNTVEFSVH0iID4gJEVOVkZJTEUKZWxzZQogIGVjaG8gJEVYRUNTVEFSVApmaQo= mode: 493 path: /usr/local/bin/extractExecStart - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKbnNlbnRlciAtLW1vdW50PS9ydW4vY29udGFpbmVyLW1vdW50LW5hbWVzcGFjZS9tbnQgIiRAIgo= mode: 493 path: /usr/local/bin/nsenterCmns systemd: units: - contents: | [Unit] Description=Manages a mount namespace that both kubelet and crio can use to share their container-specific mounts [Service] Type=oneshot RemainAfterExit=yes RuntimeDirectory=container-mount-namespace Environment=RUNTIME_DIRECTORY=%t/container-mount-namespace Environment=BIND_POINT=%t/container-mount-namespace/mnt ExecStartPre=bash -c \"findmnt USD{RUNTIME_DIRECTORY} || mount --make-unbindable --bind USD{RUNTIME_DIRECTORY} USD{RUNTIME_DIRECTORY}\" ExecStartPre=touch USD{BIND_POINT} ExecStart=unshare --mount=USD{BIND_POINT} --propagation slave mount --make-rshared / ExecStop=umount -R USD{RUNTIME_DIRECTORY} enabled: true name: container-mount-namespace.service - dropins: - contents: | [Unit] Wants=container-mount-namespace.service After=container-mount-namespace.service [Service] ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART EnvironmentFile=-/%t/%N-execstart.env ExecStart= ExecStart=bash -c \"nsenter --mount=%t/container-mount-namespace/mnt USD{ORIG_EXECSTART}\" name: 90-container-mount-namespace.conf name: crio.service - dropins: - contents: | [Unit] Wants=container-mount-namespace.service After=container-mount-namespace.service [Service] ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART EnvironmentFile=-/%t/%N-execstart.env ExecStart= ExecStart=bash -c \"nsenter --mount=%t/container-mount-namespace/mnt USD{ORIG_EXECSTART} --housekeeping-interval=30s\" name: 90-container-mount-namespace.conf - contents: | [Service] Environment=\"OPENSHIFT_MAX_HOUSEKEEPING_INTERVAL_DURATION=60s\" Environment=\"OPENSHIFT_EVICTION_MONITORING_PERIOD_DURATION=30s\" name: 
30-kubelet-interval-tuning.conf name: kubelet.service",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: load-sctp-module spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:, verification: {} filesystem: root mode: 420 path: /etc/modprobe.d/sctp-blacklist.conf - contents: source: data:text/plain;charset=utf-8,sctp filesystem: root mode: 420 path: /etc/modules-load.d/sctp-load.conf",
"apiVersion: v1 kind: Namespace metadata: annotations: workload.openshift.io/allowed: management name: openshift-local-storage --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-local-storage namespace: openshift-local-storage spec: targetNamespaces: - openshift-local-storage",
"apiVersion: v1 kind: Namespace metadata: annotations: workload.openshift.io/allowed: management name: openshift-logging --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging namespace: openshift-logging spec: targetNamespaces: - openshift-logging",
"apiVersion: v1 kind: Namespace metadata: annotations: workload.openshift.io/allowed: management labels: openshift.io/cluster-monitoring: \"true\" name: openshift-performance-addon-operator spec: {} --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: performance-addon-operator namespace: openshift-performance-addon-operator",
"apiVersion: v1 kind: Namespace metadata: annotations: workload.openshift.io/allowed: management labels: openshift.io/cluster-monitoring: \"true\" name: openshift-ptp --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: ptp-operators namespace: openshift-ptp spec: targetNamespaces: - openshift-ptp",
"apiVersion: v1 kind: Namespace metadata: annotations: workload.openshift.io/allowed: management name: openshift-sriov-network-operator --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: sriov-network-operators namespace: openshift-sriov-network-operator spec: targetNamespaces: - openshift-sriov-network-operator",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging spec: channel: \"stable\" 1 name: cluster-logging source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Manual 2 --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: local-storage-operator namespace: openshift-local-storage spec: channel: \"stable\" 3 installPlanApproval: Automatic name: local-storage-operator source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Manual --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: performance-addon-operator namespace: openshift-performance-addon-operator spec: channel: \"4.10\" 4 name: performance-addon-operator source: performance-addon-operator sourceNamespace: openshift-marketplace installPlanApproval: Manual --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: ptp-operator-subscription namespace: openshift-ptp spec: channel: \"stable\" 5 name: ptp-operator source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Manual --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sriov-network-operator-subscription namespace: openshift-sriov-network-operator spec: channel: \"stable\" 6 name: sriov-network-operator source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Manual",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging 1 metadata: name: instance namespace: openshift-logging spec: collection: logs: fluentd: {} type: fluentd curation: type: \"curator\" curator: schedule: \"30 3 * * *\" managementState: Managed --- apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder 2 metadata: name: instance namespace: openshift-logging spec: inputs: - infrastructure: {} outputs: - name: kafka-open type: kafka url: tcp://10.46.55.190:9092/test 3 pipelines: - inputRefs: - audit name: audit-logs outputRefs: - kafka-open - inputRefs: - infrastructure name: infrastructure-logs outputRefs: - kafka-open",
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: openshift-node-performance-profile 1 spec: additionalKernelArgs: - \"idle=poll\" - \"rcupdate.rcu_normal_after_boot=0\" cpu: isolated: 2-51,54-103 2 reserved: 0-1,52-53 3 hugepages: defaultHugepagesSize: 1G pages: - count: 32 4 size: 1G 5 node: 0 6 machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/master: \"\" net: userLevelNetworking: true 7 nodeSelector: node-role.kubernetes.io/master: '' numa: topologyPolicy: \"restricted\" realTimeKernel: enabled: true 8",
"apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: du-ptp-slave namespace: openshift-ptp spec: profile: - interface: ens5f0 1 name: slave phc2sysOpts: -a -r -n 24 ptp4lConf: | [global] # # Default Data Set # twoStepFlag 1 slaveOnly 0 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 248 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison ieee1588 G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval 4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval 0 kernel_leap 1 check_fup_sync 0 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type OC network_transport UDPv4 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 ptp4lOpts: -2 -s --summary_interval -4 recommend: - match: - nodeLabel: node-role.kubernetes.io/master priority: 4 profile: slave",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: disable-chronyd spec: config: systemd: units: - contents: | [Unit] Description=NTP client/server Documentation=man:chronyd(8) man:chrony.conf(5) After=ntpdate.service sntp.service ntpd.service Conflicts=ntpd.service systemd-timesyncd.service ConditionCapability=CAP_SYS_TIME [Service] Type=forking PIDFile=/run/chrony/chronyd.pid EnvironmentFile=-/etc/sysconfig/chronyd ExecStart=/usr/sbin/chronyd USDOPTIONS ExecStartPost=/usr/libexec/chrony-helper update-daemon PrivateTmp=yes ProtectHome=yes ProtectSystem=full [Install] WantedBy=multi-user.target enabled: false name: chronyd.service ignition: version: 2.2.0",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator spec: configDaemonNodeSelector: node-role.kubernetes.io/master: \"\" disableDrain: true enableInjector: true enableOperatorWebhook: true --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: sriov-nw-du-mh namespace: openshift-sriov-network-operator spec: networkNamespace: openshift-sriov-network-operator resourceName: du_mh vlan: 150 1 --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriov-nnp-du-mh namespace: openshift-sriov-network-operator spec: deviceType: vfio-pci 2 isRdma: false nicSelector: pfNames: - ens7f0 3 nodeSelector: node-role.kubernetes.io/master: \"\" numVfs: 8 4 priority: 10 resourceName: du_mh --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: sriov-nw-du-fh namespace: openshift-sriov-network-operator spec: networkNamespace: openshift-sriov-network-operator resourceName: du_fh vlan: 140 5 --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriov-nnp-du-fh namespace: openshift-sriov-network-operator spec: deviceType: netdevice 6 isRdma: true nicSelector: pfNames: - ens5f0 7 nodeSelector: node-role.kubernetes.io/master: \"\" numVfs: 8 8 priority: 10 resourceName: du_fh",
"apiVersion: operator.openshift.io/v1 kind: Console metadata: annotations: include.release.openshift.io/ibm-cloud-managed: \"false\" include.release.openshift.io/self-managed-high-availability: \"false\" include.release.openshift.io/single-node-developer: \"false\" release.openshift.io/create-only: \"true\" name: cluster spec: logLevel: Normal managementState: Removed operatorLogLevel: Normal",
"oc apply -f <file_name>.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 02-master-workload-partitioning spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,W2NyaW8ucnVudGltZS53b3JrbG9hZHMubWFuYWdlbWVudF0KYWN0aXZhdGlvbl9hbm5vdGF0aW9uID0gInRhcmdldC53b3JrbG9hZC5vcGVuc2hpZnQuaW8vbWFuYWdlbWVudCIKYW5ub3RhdGlvbl9wcmVmaXggPSAicmVzb3VyY2VzLndvcmtsb2FkLm9wZW5zaGlmdC5pbyIKW2NyaW8ucnVudGltZS53b3JrbG9hZHMubWFuYWdlbWVudC5yZXNvdXJjZXNdCmNwdXNoYXJlcyA9IDAKQ1BVcyA9ICIwLTEsIDUyLTUzIgo= mode: 420 overwrite: true path: /etc/crio/crio.conf.d/01-workload-partitioning user: name: root - contents: source: data:text/plain;charset=utf-8;base64,ewogICJtYW5hZ2VtZW50IjogewogICAgImNwdXNldCI6ICIwLTEsNTItNTMiCiAgfQp9Cg== mode: 420 overwrite: true path: /etc/kubernetes/openshift-workload-pinning user: name: root",
"[crio.runtime.workloads.management] activation_annotation = \"target.workload.openshift.io/management\" annotation_prefix = \"resources.workload.openshift.io\" [crio.runtime.workloads.management.resources] cpushares = 0 cpuset = \"0-1, 52-53\" 1",
"{ \"management\": { \"cpuset\": \"0-1,52-53\" 1 } }",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"echo -n '<user_name>:<password>' | base64 -w0 1 BGVtbYk3ZHAtqXs=",
"cat ./pull-secret.text | jq . > <path>/<pull_secret_file_in_json> 1",
"{ \"auths\": { \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }",
"\"auths\": { \"<mirror_registry>\": { 1 \"auth\": \"<credentials>\", 2 \"email\": \"[email protected]\" } },",
"{ \"auths\": { \"registry.example.com\": { \"auth\": \"BGVtbYk3ZHAtqXs=\", \"email\": \"[email protected]\" }, \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }",
"x509: certificate relies on legacy Common Name field, use SANs or temporarily enable Common Name matching with GODEBUG=x509ignoreCN=0",
"OCP_RELEASE=<release_version>",
"LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>'",
"LOCAL_REPOSITORY='<local_repository_name>'",
"PRODUCT_REPO='openshift-release-dev'",
"LOCAL_SECRET_JSON='<path_to_pull_secret>'",
"RELEASE_NAME=\"ocp-release\"",
"ARCHITECTURE=<server_architecture>",
"REMOVABLE_MEDIA_PATH=<path> 1",
"oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --dry-run",
"oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE}",
"oc image mirror -a USD{LOCAL_SECRET_JSON} --from-dir=USD{REMOVABLE_MEDIA_PATH}/mirror \"file://openshift/release:USD{OCP_RELEASE}*\" USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} 1",
"oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}",
"oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-install \"USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}\"",
"oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-install \"USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}\"",
"openshift-install",
"export ISO_IMAGE_NAME=<iso_image_name> 1",
"export ROOTFS_IMAGE_NAME=<rootfs_image_name> 1",
"export OCP_VERSION=<ocp_version> 1",
"sudo wget https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/pre-release/USD{OCP_VERSION}/USD{ISO_IMAGE_NAME} -O /var/www/html/USD{ISO_IMAGE_NAME}",
"sudo wget https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/pre-release/USD{OCP_VERSION}/USD{ROOTFS_IMAGE_NAME} -O /var/www/html/USD{ROOTFS_IMAGE_NAME}",
"wget http://USD(hostname)/USD{ISO_IMAGE_NAME}",
"Saving to: rhcos-4.9.0-fc.1-x86_64-live.x86_64.iso rhcos-4.9.0-fc.1-x86_64- 11%[====> ] 10.01M 4.71MB/s",
"oc patch hiveconfig hive --type merge -p '{\"spec\":{\"targetNamespace\":\"hive\",\"logLevel\":\"debug\",\"featureGates\":{\"custom\":{\"enabled\":[\"AlphaAgentInstallStrategy\"]},\"featureSet\":\"Custom\"}}}'",
"oc patch provisioning provisioning-configuration --type merge -p '{\"spec\":{\"watchAllNamespaces\": true }}'",
"apiVersion: agent-install.openshift.io/v1beta1 kind: AgentServiceConfig metadata: name: agent spec: databaseStorage: accessModes: - ReadWriteOnce resources: requests: storage: <db_volume_size> 1 filesystemStorage: accessModes: - ReadWriteOnce resources: requests: storage: <fs_volume_size> 2 osImages: 3 - openshiftVersion: \"<ocp_version>\" 4 version: \"<ocp_release_version>\" 5 url: \"<iso_url>\" 6 rootFSUrl: \"<root_fs_url>\" 7 cpuArchitecture: \"x86_64\"",
"oc create -f agent_service_config.yaml",
"agentserviceconfig.agent-install.openshift.io/agent created",
"console-openshift-console.apps.hub-cluster.internal.domain.com api.hub-cluster.internal.domain.com",
"console-openshift-console.apps.sno-managed-cluster-1.internal.domain.com api.sno-managed-cluster-1.internal.domain.com",
"adm release mirror -a <pull_secret.json> --from=quay.io/openshift-release-dev/ocp-release:{{ mirror_version_spoke_release }} --to={{ provisioner_cluster_registry }}/ocp4 --to-release-image={{ provisioner_cluster_registry }}/ocp4:{{ mirror_version_spoke_release }}",
"apiVersion: hive.openshift.io/v1 kind: ClusterImageSet metadata: name: openshift-4.9.0-rc.0 1 spec: releaseImage: quay.io/openshift-release-dev/ocp-release:4.9.0-x86_64 2",
"apiVersion: v1 kind: Namespace metadata: name: <cluster_name> 1 labels: name: <cluster_name> 2",
"apiVersion: v1 data: password: <bmc_password> 1 username: <bmc_username> 2 kind: Secret metadata: name: <cluster_name>-bmc-secret namespace: <cluster_name> type: Opaque",
"apiVersion: v1 data: .dockerconfigjson: <pull_secret> 1 kind: Secret metadata: name: assisted-deployment-pull-secret namespace: <cluster_name> type: kubernetes.io/dockerconfigjson",
"apiVersion: extensions.hive.openshift.io/v1beta1 kind: AgentClusterInstall metadata: # Only include the annotation if using OVN, otherwise omit the annotation annotations: agent-install.openshift.io/install-config-overrides: '{\"networking\":{\"networkType\":\"OVNKubernetes\"}}' name: <cluster_name> namespace: <cluster_name> spec: clusterDeploymentRef: name: <cluster_name> imageSetRef: name: <cluster_image_set> 1 networking: clusterNetwork: - cidr: <cluster_network_cidr> 2 hostPrefix: 23 machineNetwork: - cidr: <machine_network_cidr> 3 serviceNetwork: - <service_network_cidr> 4 provisionRequirements: controlPlaneAgents: 1 workerAgents: 0 sshPublicKey: <public_key> 5",
"apiVersion: hive.openshift.io/v1 kind: ClusterDeployment metadata: name: <cluster_name> namespace: <cluster_name> spec: baseDomain: <base_domain> 1 clusterInstallRef: group: extensions.hive.openshift.io kind: AgentClusterInstall name: <cluster_name> version: v1beta1 clusterName: <cluster_name> platform: agentBareMetal: agentSelector: matchLabels: cluster-name: <cluster_name> pullSecretRef: name: assisted-deployment-pull-secret",
"apiVersion: agent.open-cluster-management.io/v1 kind: KlusterletAddonConfig metadata: name: <cluster_name> namespace: <cluster_name> spec: clusterName: <cluster_name> clusterNamespace: <cluster_name> clusterLabels: cloud: auto-detect vendor: auto-detect applicationManager: enabled: true certPolicyController: enabled: false iamPolicyController: enabled: false policyController: enabled: true searchCollector: enabled: false 1",
"apiVersion: cluster.open-cluster-management.io/v1 kind: ManagedCluster metadata: name: <cluster_name> spec: hubAcceptsClient: true",
"apiVersion: agent-install.openshift.io/v1beta1 kind: InfraEnv metadata: name: <cluster_name> namespace: <cluster_name> spec: clusterRef: name: <cluster_name> namespace: <cluster_name> sshAuthorizedKey: <public_key> 1 agentLabels: 2 location: \"<label-name>\" pullSecretRef: name: assisted-deployment-pull-secret",
"apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: <cluster_name> namespace: <cluster_name> annotations: inspect.metal3.io: disabled labels: infraenvs.agent-install.openshift.io: \"<cluster_name>\" spec: bootMode: \"UEFI\" bmc: address: <bmc_address> 1 disableCertificateVerification: true credentialsName: <cluster_name>-bmc-secret bootMACAddress: <mac_address> 2 automatedCleaningMode: disabled online: true",
"apiVersion: agent-install.openshift.io/v1beta1 kind: NMStateConfig metadata: name: <cluster_name> namespace: <cluster_name> labels: sno-cluster-<cluster-name>: <cluster_name> spec: config: interfaces: - name: eth0 type: ethernet state: up ipv4: enabled: true address: - ip: <ip_address> 1 prefix-length: <public_network_prefix> 2 dhcp: false dns-resolver: config: server: - <dns_resolver> 3 routes: config: - destination: 0.0.0.0/0 next-hop-address: <gateway> 4 next-hop-interface: eth0 table-id: 254 interfaces: - name: \"eth0\" 5 macAddress: <mac_address> 6",
"apiVersion: agent-install.openshift.io/v1beta1 kind: InfraEnv metadata: name: <cluster_name> namespace: <cluster_name> spec: clusterRef: name: <cluster_name> namespace: <cluster_name> sshAuthorizedKey: <public_key> agentLabels: 1 location: \"<label-name>\" pullSecretRef: name: assisted-deployment-pull-secret nmStateConfigLabelSelector: matchLabels: sno-cluster-<cluster-name>: <cluster_name> # Match this label",
"oc get managedcluster",
"oc get agent -n <cluster_name>",
"oc describe agent -n <cluster_name>",
"oc get agentclusterinstall -n <cluster_name>",
"oc describe agentclusterinstall -n <cluster_name>",
"oc get managedclusteraddon -n <cluster_name>",
"oc get secret -n <cluster_name> <cluster_name>-admin-kubeconfig -o jsonpath={.data.kubeconfig} | base64 -d > <directory>/<cluster_name>-kubeconfig",
"apiVersion: v1 kind: ConfigMap metadata: name: assisted-installer-mirror-config namespace: assisted-installer labels: app: assisted-service data: ca-bundle.crt: <certificate> 1 registries.conf: | 2 unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] [[registry]] location = <mirror_registry_url> 3 insecure = false mirror-by-digest-only = true",
"apiVersion: agent-install.openshift.io/v1beta1 kind: AgentServiceConfig metadata: name: agent namespace: assisted-installer spec: databaseStorage: volumeName: <db_pv_name> accessModes: - ReadWriteOnce resources: requests: storage: <db_storage_size> filesystemStorage: volumeName: <fs_pv_name> accessModes: - ReadWriteOnce resources: requests: storage: <fs_storage_size> mirrorRegistryRef: name: 'assisted-installer-mirror-config' osImages: - openshiftVersion: <ocp_version> rootfs: <rootfs_url> 1 url: <iso_url> 2",
"Allow NTP client access from local network. #allow 192.168.0.0/16 local stratum 10 bindcmdaddress :: allow 2620:52:0:1310::/64",
"apiVersion: extensions.hive.openshift.io/v1beta1 kind: AgentClusterInstall metadata: # Only include the annotation if using OVN, otherwise omit the annotation annotations: agent-install.openshift.io/install-config-overrides: '{\"networking\":{\"networkType\":\"OVNKubernetes\"}}' name: <cluster_name> namespace: <cluster_name> spec: clusterDeploymentRef: name: <cluster_name> imageSetRef: name: <cluster_image_set> networking: clusterNetwork: - cidr: \"fd01::/48\" hostPrefix: 64 machineNetwork: - cidr: <machine_network_cidr> serviceNetwork: - \"fd02::/112\" provisionRequirements: controlPlaneAgents: 1 workerAgents: 0 sshPublicKey: <public_key>",
"oc get managedcluster",
"NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE SNO-cluster true True True 2d19h",
"oc get clusterdeployment -n <cluster_name>",
"NAME PLATFORM REGION CLUSTERTYPE INSTALLED INFRAID VERSION POWERSTATE AGE Sno0026 agent-baremetal false Initialized 2d14h",
"oc describe agentclusterinstall -n <cluster_name> <cluster_name>",
"oc delete managedcluster <cluster_name>",
"oc delete namespace <cluster_name>",
"apiVersion: v1 kind: Namespace metadata: name: openshift-sriov-network-operator labels: openshift.io/run-level: \"1\"",
"apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: common-sriov-sub-ns-policy namespace: common-sub annotations: policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration policy.open-cluster-management.io/standards: NIST SP 800-53 spec: remediationAction: enforce disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: common-sriov-sub-ns-policy-config spec: remediationAction: enforce severity: low namespaceselector: exclude: - kube-* include: - '*' object-templates: - complianceType: musthave objectDefinition: apiVersion: v1 kind: Namespace metadata: labels: openshift.io/run-level: \"1\" name: openshift-sriov-network-operator",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriov-nnp namespace: openshift-sriov-network-operator spec: # The USD tells the policy generator to overlay/remove the spec.item in the generated policy. deviceType: USDdeviceType isRdma: false nicSelector: pfNames: [USDpfNames] nodeSelector: node-role.kubernetes.io/worker: \"\" numVfs: USDnumVfs priority: USDpriority resourceName: USDresourceName",
"apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: site-du-sno-1-sriov-nnp-mh-policy namespace: sites-sub annotations: policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration policy.open-cluster-management.io/standards: NIST SP 800-53 spec: remediationAction: enforce disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: site-du-sno-1-sriov-nnp-mh-policy-config spec: remediationAction: enforce severity: low namespaceselector: exclude: - kube-* include: - '*' object-templates: - complianceType: musthave objectDefinition: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriov-nnp-du-mh namespace: openshift-sriov-network-operator spec: deviceType: vfio-pci isRdma: false nicSelector: pfNames: - ens7f0 nodeSelector: node-role.kubernetes.io/worker: \"\" numVfs: 8 resourceName: du_mh",
"apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"group-du-sno\" namespace: \"group-du-sno\" spec: bindingRules: group-du-sno: \"\" mcp: \"master\" sourceFiles: - fileName: ConsoleOperatorDisable.yaml policyName: \"console-policy\" - fileName: ClusterLogging.yaml policyName: \"cluster-log-policy\" spec: curation: curator: schedule: \"30 3 * * *\" collection: logs: type: \"fluentd\" fluentd: {}",
"apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: group-du-ptp-config-policy namespace: groups-sub annotations: policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration policy.open-cluster-management.io/standards: NIST SP 800-53 spec: remediationAction: enforce disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: group-du-ptp-config-policy-config spec: remediationAction: enforce severity: low namespaceselector: exclude: - kube-* include: - '*' object-templates: - complianceType: musthave objectDefinition: apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: slave namespace: openshift-ptp spec: recommend: - match: - nodeLabel: node-role.kubernetes.io/worker-du priority: 4 profile: slave profile: - interface: ens5f0 name: slave phc2sysOpts: -a -r -n 24 ptp4lConf: | [global] # # Default Data Set # twoStepFlag 1 slaveOnly 0 priority1 128 priority2 128 domainNumber 24 ..",
"apiVersion: policyGenerator/v1 kind: PolicyGenerator metadata: name: acm-policy namespace: acm-policy-generator The arguments should be given and defined as below with same order --policyGenTempPath= --sourcePath= --outPath= --stdout --customResources argsOneLiner: ./ranPolicyGenTempExamples ./sourcePolicies ./out true false",
"cd cnf-features-deploy/ztp/ztp-policy-generator/",
"XDG_CONFIG_HOME=./ kustomize build --enable-alpha-plugins",
"out ├── common │ ├── common-log-sub-ns-policy.yaml │ ├── common-log-sub-oper-policy.yaml │ ├── common-log-sub-policy.yaml │ ├── common-pao-sub-catalog-policy.yaml │ ├── common-pao-sub-ns-policy.yaml │ ├── common-pao-sub-oper-policy.yaml │ ├── common-pao-sub-policy.yaml │ ├── common-policies-placementbinding.yaml │ ├── common-policies-placementrule.yaml │ ├── common-ptp-sub-ns-policy.yaml │ ├── common-ptp-sub-oper-policy.yaml │ ├── common-ptp-sub-policy.yaml │ ├── common-sriov-sub-ns-policy.yaml │ ├── common-sriov-sub-oper-policy.yaml │ └── common-sriov-sub-policy.yaml ├── groups │ ├── group-du │ │ ├── group-du-mc-chronyd-policy.yaml │ │ ├── group-du-mc-mount-ns-policy.yaml │ │ ├── group-du-mcp-du-policy.yaml │ │ ├── group-du-mc-sctp-policy.yaml │ │ ├── group-du-policies-placementbinding.yaml │ │ ├── group-du-policies-placementrule.yaml │ │ ├── group-du-ptp-config-policy.yaml │ │ └── group-du-sriov-operconfig-policy.yaml │ └── group-sno-du │ ├── group-du-sno-policies-placementbinding.yaml │ ├── group-du-sno-policies-placementrule.yaml │ ├── group-sno-du-console-policy.yaml │ ├── group-sno-du-log-forwarder-policy.yaml │ └── group-sno-du-log-policy.yaml └── sites └── site-du-sno-1 ├── site-du-sno-1-policies-placementbinding.yaml ├── site-du-sno-1-policies-placementrule.yaml ├── site-du-sno-1-sriov-nn-fh-policy.yaml ├── site-du-sno-1-sriov-nnp-mh-policy.yaml ├── site-du-sno-1-sriov-nw-fh-policy.yaml ├── site-du-sno-1-sriov-nw-mh-policy.yaml └── site-du-sno-1-.yaml",
"FROM <registry fqdn>/ztp-site-generator:latest 1 COPY myInstallManifest.yaml /usr/src/hook/ztp/source-crs/extra-manifest/ COPY mySourceCR.yaml /usr/src/hook/ztp/source-crs/",
"USD> podman build Containerfile.example",
"oc get secret openshift-gitops-cluster -n openshift-gitops -o jsonpath='{.data.admin\\.password}' | base64 -d",
"mkdir ztp podman run --rm -v `pwd`/ztp:/mnt/ztp:Z registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.9.0-1 /bin/bash -c \"cp -ar /usr/src/hook/ztp/* /mnt/ztp/\"",
"apiVersion: v1 kind: Namespace metadata: name: clusters-sub --- apiVersion: argoproj.io/v1alpha1 kind: Application metadata: name: clusters namespace: openshift-gitops spec: destination: server: https://kubernetes.default.svc namespace: clusters-sub project: default source: path: ztp/gitops-subscriptions/argocd/resource-hook-example/siteconfig 1 repoURL: https://github.com/openshift-kni/cnf-features-deploy 2 targetRevision: master 3 syncPolicy: automated: prune: true selfHeal: true syncOptions: - CreateNamespace=true",
"apiVersion: v1 kind: Namespace metadata: name: policies-sub --- apiVersion: argoproj.io/v1alpha1 kind: Application metadata: name: policies namespace: openshift-gitops spec: destination: server: https://kubernetes.default.svc namespace: policies-sub project: default source: directory: recurse: true path: ztp/gitops-subscriptions/argocd/resource-hook-example/policygentemplates 1 repoURL: https://github.com/openshift-kni/cnf-features-deploy 2 targetRevision: master 3 syncPolicy: automated: prune: true selfHeal: true syncOptions: - CreateNamespace=true",
"oc apply -k ./deployment",
"apiVersion: v1 kind: Secret metadata: name: test-sno-bmh-secret namespace: test-sno data: password: dGVtcA== username: cm9vdA== type: Opaque",
"apiVersion: v1 kind: Secret metadata: name: assisted-deployment-pull-secret namespace: test-sno type: kubernetes.io/dockerconfigjson data: .dockerconfigjson: <Your pull secret base64 encoded>",
"apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: \"test-sno\" namespace: \"test-sno\" spec: baseDomain: \"clus2.t5g.lab.eng.bos.redhat.com\" pullSecretRef: name: \"assisted-deployment-pull-secret\" clusterImageSetNameRef: \"openshift-4.9\" sshPublicKey: \"ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDB3dwhI5X0ZxGBb9VK7wclcPHLc8n7WAyKjTNInFjYNP9J+Zoc/ii+l3YbGUTuqilDwZN5rVIwBux2nUyVXDfaM5kPd9kACmxWtfEWTyVRootbrNWwRfKuC2h6cOd1IlcRBM1q6IzJ4d7+JVoltAxsabqLoCbK3svxaZoKAaK7jdGG030yvJzZaNM4PiTy39VQXXkCiMDmicxEBwZx1UsA8yWQsiOQ5brod9KQRXWAAST779gbvtgXR2L+MnVNROEHf1nEjZJwjwaHxoDQYHYKERxKRHlWFtmy5dNT6BbvOpJ2e5osDFPMEd41d2mUJTfxXiC1nvyjk9Irf8YJYnqJgBIxi0IxEllUKH7mTdKykHiPrDH5D2pRlp+Donl4n+sw6qoDc/3571O93+RQ6kUSAgAsvWiXrEfB/7kGgAa/BD5FeipkFrbSEpKPVu+gue1AQeJcz9BuLqdyPUQj2VUySkSg0FuGbG7fxkKeF1h3Sga7nuDOzRxck4I/8Z7FxMF/e8DmaBpgHAUIfxXnRqAImY9TyAZUEMT5ZPSvBRZNNmLbfex1n3NLcov/GEpQOqEYcjG5y57gJ60/av4oqjcVmgtaSOOAS0kZ3y9YDhjsaOcpmRYYijJn8URAH7NrW8EZsvAoF6GUt6xHq5T258c6xSYUm5L0iKvBqrOW9EjbLw== [email protected]\" clusters: - clusterName: \"test-sno\" clusterType: \"sno\" clusterProfile: \"du\" clusterLabels: group-du-sno: \"\" common: true sites : \"test-sno\" clusterNetwork: - cidr: 1001:db9::/48 hostPrefix: 64 machineNetwork: - cidr: 2620:52:0:10e7::/64 serviceNetwork: - 1001:db7::/112 additionalNTPSources: - 2620:52:0:1310::1f6 nodes: - hostName: \"test-sno.clus2.t5g.lab.eng.bos.redhat.com\" bmcAddress: \"idrac-virtualmedia+https://[2620:52::10e7:f602:70ff:fee4:f4e2]/redfish/v1/Systems/System.Embedded.1\" bmcCredentialsName: name: \"test-sno-bmh-secret\" bmcDisableCertificateVerification: true 1 bootMACAddress: \"0C:42:A1:8A:74:EC\" bootMode: \"UEFI\" rootDeviceHints: hctl: '0:1:0' cpuset: \"0-1,52-53\" nodeNetwork: interfaces: - name: eno1 macAddress: \"0C:42:A1:8A:74:EC\" config: interfaces: - name: eno1 type: ethernet state: up macAddress: \"0C:42:A1:8A:74:EC\" ipv4: enabled: false ipv6: enabled: true address: - ip: 2620:52::10e7:e42:a1ff:fe8a:900 prefix-length: 64 dns-resolver: config: search: - clus2.t5g.lab.eng.bos.redhat.com server: - 2620:52:0:1310::1f6 routes: config: - destination: ::/0 next-hop-interface: eno1 next-hop-address: 2620:52:0:10e7::fc table-id: 254",
"export CLUSTER=<cluster_name>",
"oc get agentclusterinstall -n USDCLUSTER USDCLUSTER -o jsonpath='{.status.conditions[?(@.type==\"Completed\")]}' | jq",
"curl -sk USD(oc get agentclusterinstall -n USDCLUSTER USDCLUSTER -o jsonpath='{.status.debugInfo.eventsURL}') | jq '.[-2,-1]'",
"oc get policy -A",
"oc delete -k cnf-features-deploy/ztp/gitops-subscriptions/argocd/deployment",
"oc get AgentClusterInstall -n <cluster_name>",
"oc get siteconfig -A",
"oc get siteconfig -n clusters-sub",
"oc describe -n openshift-gitops application clusters",
"oc describe job -n clusters-sub siteconfig-post",
"oc get pod -n clusters-sub",
"oc logs -n clusters-sub siteconfig-post-xxxxx",
"export NS=<namespace>",
"oc get policy -n USDNS",
"oc get policygentemplate -A",
"oc get policygentemplate -n USDNS",
"oc describe -n openshift-gitops application clusters",
"oc get policy -n <cluster_name>",
"oc get placementrule -n USDNS",
"get placementrule -n USDNS <placmentRuleName> -o yaml",
"get ManagedCluster USDCLUSTER -o jsonpath='{.metadata.labels}' | jq",
"apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: group-test1-policies-placementrules namespace: group-test1-policies spec: clusterSelector: matchExpressions: - key: group-test1 operator: In values: - \"\" status: decisions: - clusterName: <cluster_name> clusterNamespace: <cluster_name>",
"get policy -n USDCLUSTER"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html-single/scalability_and_performance/index |
Chapter 17. Red Hat Software Collections | Chapter 17. Red Hat Software Collections Red Hat Software Collections is a Red Hat content set that provides a set of dynamic programming languages, database servers, and related packages that you can install and use on all supported releases of Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7 on AMD64 and Intel 64 architectures. Dynamic languages, database servers, and other tools distributed with Red Hat Software Collections do not replace the default system tools provided with Red Hat Enterprise Linux, nor are they used in preference to these tools. Red Hat Software Collections uses an alternative packaging mechanism based on the scl utility to provide a parallel set of packages. This set enables use of alternative package versions on Red Hat Enterprise Linux. By using the scl utility, users can choose at any time which package version they want to run. Important Red Hat Software Collections has a shorter life cycle and support term than Red Hat Enterprise Linux. For more information, see the Red Hat Software Collections Product Life Cycle . Red Hat Developer Toolset is now a part of Red Hat Software Collections, included as a separate Software Collection. Red Hat Developer Toolset is designed for developers working on the Red Hat Enterprise Linux platform. It provides the current versions of the GNU Compiler Collection, GNU Debugger, Eclipse development platform, and other development, debugging, and performance monitoring tools. See the Red Hat Software Collections documentation for the components included in the set, system requirements, known problems, usage, and specifics of individual Software Collections. See the Red Hat Developer Toolset documentation for more information about the components included in this Software Collection, installation, usage, known problems, and more. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.1_release_notes/chap-red_hat_enterprise_linux-7.1_release_notes-red_hat_software_collections |
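As a brief illustration of the scl mechanism described in Chapter 17, the commands below show how a Software Collection is typically listed and then enabled, either for a single command or for an interactive shell. This is only a sketch: the collection name rh-python36 is an assumed example and must be replaced with a collection that is actually installed on your system.
scl --list                                  # list the Software Collections installed on the system
scl enable rh-python36 'python --version'   # run one command with the collection's environment (assumed collection name)
scl enable rh-python36 bash                 # start a subshell in which the collection's packages take precedence over the system defaults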
Chapter 44. Managing hosts using Ansible playbooks | Chapter 44. Managing hosts using Ansible playbooks Ansible is an automation tool used to configure systems, deploy software, and perform rolling updates. Ansible includes support for Identity Management (IdM), and you can use Ansible modules to automate host management. The following concepts and operations are performed when managing hosts and host entries using Ansible playbooks: Ensuring the presence of IdM host entries that are only defined by their FQDNs Ensuring the presence of IdM host entries with IP addresses Ensuring the presence of multiple IdM host entries with random passwords Ensuring the presence of an IdM host entry with multiple IP addresses Ensuring the absence of IdM host entries 44.1. Ensuring the presence of an IdM host entry with FQDN using Ansible playbooks Follow this procedure to ensure the presence of host entries in Identity Management (IdM) using Ansible playbooks. The host entries are only defined by their fully-qualified domain names (FQDNs). Specifying the FQDN name of the host is enough if at least one of the following conditions applies: The IdM server is not configured to manage DNS. The host does not have a static IP address or the IP address is not known at the time the host is configured. Adding a host defined only by an FQDN essentially creates a placeholder entry in the IdM DNS service. For example, laptops may be preconfigured as IdM clients, but they do not have IP addresses at the time they are configured. When the DNS service dynamically updates its records, the host's current IP address is detected and its DNS record is updated. Note Without Ansible, host entries are created in IdM using the ipa host-add command. The result of adding a host to IdM is the state of the host being present in IdM. Because of the Ansible reliance on idempotence, to add a host to IdM using Ansible, you must create a playbook in which you define the state of the host as present: state: present . Prerequisites You know the IdM administrator password. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Create an Ansible playbook file with the FQDN of the host whose presence in IdM you want to ensure. To simplify this step, you can copy and modify the example in the /usr/share/doc/ansible-freeipa/playbooks/host/add-host.yml file: Run the playbook: Note The procedure results in a host entry in the IdM LDAP server being created but not in enrolling the host into the IdM Kerberos realm. For that, you must deploy the host as an IdM client. For details, see Installing an Identity Management client using an Ansible playbook . Verification Log in to your IdM server as admin: Enter the ipa host-show command and specify the name of the host: The output confirms that host01.idm.example.com exists in IdM. 44.2. 
Ensuring the presence of an IdM host entry with DNS information using Ansible playbooks Follow this procedure to ensure the presence of host entries in Identity Management (IdM) using Ansible playbooks. The host entries are defined by their fully-qualified domain names (FQDNs) and their IP addresses. Note Without Ansible, host entries are created in IdM using the ipa host-add command. The result of adding a host to IdM is the state of the host being present in IdM. Because of the Ansible reliance on idempotence, to add a host to IdM using Ansible, you must create a playbook in which you define the state of the host as present: state: present . Prerequisites You know the IdM administrator password. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Create an Ansible playbook file with the fully-qualified domain name (FQDN) of the host whose presence in IdM you want to ensure. In addition, if the IdM server is configured to manage DNS and you know the IP address of the host, specify a value for the ip_address parameter. The IP address is necessary for the host to exist in the DNS resource records. To simplify this step, you can copy and modify the example in the /usr/share/doc/ansible-freeipa/playbooks/host/host-present.yml file. You can also include other, additional information: Run the playbook: Note The procedure results in a host entry in the IdM LDAP server being created but not in enrolling the host into the IdM Kerberos realm. For that, you must deploy the host as an IdM client. For details, see Installing an Identity Management client using an Ansible playbook . Verification Log in to your IdM server as admin: Enter the ipa host-show command and specify the name of the host: The output confirms host01.idm.example.com exists in IdM. 44.3. Ensuring the presence of multiple IdM host entries with random passwords using Ansible playbooks The ipahost module allows the system administrator to ensure the presence or absence of multiple host entries in IdM using just one Ansible task. Follow this procedure to ensure the presence of multiple host entries that are only defined by their fully-qualified domain names (FQDNs). Running the Ansible playbook generates random passwords for the hosts. Note Without Ansible, host entries are created in IdM using the ipa host-add command. The result of adding a host to IdM is the state of the host being present in IdM. Because of the Ansible reliance on idempotence, to add a host to IdM using Ansible, you must create a playbook in which you define the state of the host as present: state: present . Prerequisites You know the IdM administrator password. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. 
The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Create an Ansible playbook file with the fully-qualified domain name (FQDN) of the hosts whose presence in IdM you want to ensure. To make the Ansible playbook generate a random password for each host even when the host already exists in IdM and update_password is limited to on_create , add the random: true and force: true options. To simplify this step, you can copy and modify the example from the /usr/share/doc/ansible-freeipa/README-host.md Markdown file: Run the playbook: Note To deploy the hosts as IdM clients using random, one-time passwords (OTPs), see Authorization options for IdM client enrollment using an Ansible playbook or Installing a client by using a one-time password: Interactive installation . Verification Log in to your IdM server as admin: Enter the ipa host-show command and specify the name of one of the hosts: The output confirms host01.idm.example.com exists in IdM with a random password. 44.4. Ensuring the presence of an IdM host entry with multiple IP addresses using Ansible playbooks Follow this procedure to ensure the presence of a host entry in Identity Management (IdM) using Ansible playbooks. The host entry is defined by its fully-qualified domain name (FQDN) and its multiple IP addresses. Note In contrast to the ipa host utility, the Ansible ipahost module can ensure the presence or absence of several IPv4 and IPv6 addresses for a host. The ipa host-mod command cannot handle IP addresses. Prerequisites You know the IdM administrator password. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Create an Ansible playbook file. Specify, as the name of the ipahost variable, the fully-qualified domain name (FQDN) of the host whose presence in IdM you want to ensure. Specify each of the multiple IPv4 and IPv6 ip_address values on a separate line by using the ip_address syntax. To simplify this step, you can copy and modify the example in the /usr/share/doc/ansible-freeipa/playbooks/host/host-member-ipaddresses-present.yml file. You can also include additional information: Run the playbook: Note The procedure creates a host entry in the IdM LDAP server but does not enroll the host into the IdM Kerberos realm. For that, you must deploy the host as an IdM client. For details, see Installing an Identity Management client using an Ansible playbook . 
Verification Log in to your IdM server as admin: Enter the ipa host-show command and specify the name of the host: The output confirms that host01.idm.example.com exists in IdM. To verify that the multiple IP addresses of the host exist in the IdM DNS records, enter the ipa dnsrecord-show command and specify the following information: The name of the IdM domain The name of the host The output confirms that all the IPv4 and IPv6 addresses specified in the playbook are correctly associated with the host01.idm.example.com host entry. 44.5. Ensuring the absence of an IdM host entry using Ansible playbooks Follow this procedure to ensure the absence of host entries in Identity Management (IdM) using Ansible playbooks. Prerequisites IdM administrator credentials Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Create an Ansible playbook file with the fully-qualified domain name (FQDN) of the host whose absence from IdM you want to ensure. If your IdM domain has integrated DNS, use the updatedns: true option to remove the associated records of any kind for the host from the DNS. To simplify this step, you can copy and modify the example in the /usr/share/doc/ansible-freeipa/playbooks/host/delete-host.yml file: Run the playbook: Note The procedure results in: The host not being present in the IdM Kerberos realm. The host entry not being present in the IdM LDAP server. To remove the specific IdM configuration of system services, such as System Security Services Daemon (SSSD), from the client host itself, you must run the ipa-client-install --uninstall command on the client. For details, see Uninstalling an IdM client . Verification Log into ipaserver as admin: Display information about host01.idm.example.com : The output confirms that the host does not exist in IdM. 44.6. Additional resources See the /usr/share/doc/ansible-freeipa/README-host.md Markdown file. See the additional playbooks in the /usr/share/doc/ansible-freeipa/playbooks/host directory. | [
"[ipaserver] server.idm.example.com",
"--- - name: Host present hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Host host01.idm.example.com present ipahost: ipaadmin_password: \"{{ ipaadmin_password }}\" name: host01.idm.example.com state: present force: true",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-host-is-present.yml",
"ssh [email protected] Password:",
"ipa host-show host01.idm.example.com Host name: host01.idm.example.com Principal name: host/[email protected] Principal alias: host/[email protected] Password: False Keytab: False Managed by: host01.idm.example.com",
"[ipaserver] server.idm.example.com",
"--- - name: Host present hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure host01.idm.example.com is present ipahost: ipaadmin_password: \"{{ ipaadmin_password }}\" name: host01.idm.example.com description: Example host ip_address: 192.168.0.123 locality: Lab ns_host_location: Lab ns_os_version: CentOS 7 ns_hardware_platform: Lenovo T61 mac_address: - \"08:00:27:E3:B1:2D\" - \"52:54:00:BD:97:1E\" state: present",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-host-is-present.yml",
"ssh [email protected] Password:",
"ipa host-show host01.idm.example.com Host name: host01.idm.example.com Description: Example host Locality: Lab Location: Lab Platform: Lenovo T61 Operating system: CentOS 7 Principal name: host/[email protected] Principal alias: host/[email protected] MAC address: 08:00:27:E3:B1:2D, 52:54:00:BD:97:1E Password: False Keytab: False Managed by: host01.idm.example.com",
"[ipaserver] server.idm.example.com",
"--- - name: Ensure hosts with random password hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Hosts host01.idm.example.com and host02.idm.example.com present with random passwords ipahost: ipaadmin_password: \"{{ ipaadmin_password }}\" hosts: - name: host01.idm.example.com random: true force: true - name: host02.idm.example.com random: true force: true register: ipahost",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-hosts-are-present.yml [...] TASK [Hosts host01.idm.example.com and host02.idm.example.com present with random passwords] changed: [r8server.idm.example.com] => {\"changed\": true, \"host\": {\"host01.idm.example.com\": {\"randompassword\": \"0HoIRvjUdH0Ycbf6uYdWTxH\"}, \"host02.idm.example.com\": {\"randompassword\": \"5VdLgrf3wvojmACdHC3uA3s\"}}}",
"ssh [email protected] Password:",
"ipa host-show host01.idm.example.com Host name: host01.idm.example.com Password: True Keytab: False Managed by: host01.idm.example.com",
"[ipaserver] server.idm.example.com",
"--- - name: Host member IP addresses present hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure host101.example.com IP addresses present ipahost: ipaadmin_password: \"{{ ipaadmin_password }}\" name: host01.idm.example.com ip_address: - 192.168.0.123 - fe80::20c:29ff:fe02:a1b3 - 192.168.0.124 - fe80::20c:29ff:fe02:a1b4 force: true",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-host-with-multiple-IP-addreses-is-present.yml",
"ssh [email protected] Password:",
"ipa host-show host01.idm.example.com Principal name: host/[email protected] Principal alias: host/[email protected] Password: False Keytab: False Managed by: host01.idm.example.com",
"ipa dnsrecord-show idm.example.com host01 [...] Record name: host01 A record: 192.168.0.123, 192.168.0.124 AAAA record: fe80::20c:29ff:fe02:a1b3, fe80::20c:29ff:fe02:a1b4",
"[ipaserver] server.idm.example.com",
"--- - name: Host absent hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Host host01.idm.example.com absent ipahost: ipaadmin_password: \"{{ ipaadmin_password }}\" name: host01.idm.example.com updatedns: true state: absent",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-host-absent.yml",
"ssh [email protected] Password: [admin@server /]USD",
"ipa host-show host01.idm.example.com ipa: ERROR: host01.idm.example.com: host not found"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_idm_users_groups_hosts_and_access_control_rules/managing-hosts-using-Ansible-playbooks_managing-users-groups-hosts |
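The procedures in Chapter 44 above repeatedly assume a ~/MyPlaybooks/ directory containing an inventory file and a secret.yml Ansible vault that stores ipaadmin_password, unlocked with a password_file. The following is a minimal sketch of preparing those pieces; the directory layout and file names simply mirror the chapter's examples and are not mandated by the ansible-freeipa modules.
mkdir -p ~/MyPlaybooks/ && cd ~/MyPlaybooks/
echo '<vault_password>' > password_file      # the password that --vault-password-file reads; protect this file
chmod 600 password_file
# in the editor that opens, add a line such as: ipaadmin_password: <idm_admin_password>
ansible-vault create --vault-password-file=password_file secret.yml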
Chapter 36. FHIR | Chapter 36. FHIR Both producer and consumer are supported The FHIR component integrates with the HAPI-FHIR library which is an open-source implementation of the FHIR (Fast Healthcare Interoperability Resources) specification in Java. 36.1. Dependencies When using fhir with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-fhir-starter</artifactId> </dependency> 36.2. URI Format The FHIR Component uses the following URI format: Endpoint prefix can be one of: capabilities create delete history load-page meta operation patch read search transaction update validate 36.3. Configuring Options Camel components are configured on two separate levels: component level endpoint level 36.3.1. Configuring Component Options At the component level, you set general and shared configurations that are, then, inherited by the endpoints. It is the highest configuration level. For example, a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre-configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. You can configure components using: the Component DSL . in a configuration file (application.properties, *.yaml files, etc). directly in the Java code. 36.3.2. Configuring Endpoint Options You usually spend more time setting up endpoints because they have many options. These options help you customize what you want the endpoint to do. The options are also categorized into whether the endpoint is used as a consumer (from), as a producer (to), or both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL and DataFormat DSL as a type safe way of configuring endpoints and data formats in Java. A good practice when configuring options is to use Property Placeholders . Property placeholders provide a few benefits: They help prevent using hardcoded urls, port numbers, sensitive information, and other settings. They allow externalizing the configuration from the code. They help the code to become more flexible and reusable. The following two sections list all the options, firstly for the component followed by the endpoint. 36.4. Component Options The FHIR component supports 27 options, which are listed below. Name Description Default Type encoding (common) Encoding to use for all request. Enum values: JSON XML String fhirVersion (common) The FHIR Version to use. Enum values: DSTU2 DSTU2_HL7ORG DSTU2_1 DSTU3 R4 R5 R4 String log (common) Will log every requests and responses. false boolean prettyPrint (common) Pretty print all request. false boolean serverUrl (common) The FHIR server base URL. String bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean client (advanced) To use the custom client. IGenericClient clientFactory (advanced) To use the custom client factory. IRestfulClientFactory compress (advanced) Compresses outgoing (POST/PUT) contents to the GZIP format. false boolean configuration (advanced) To use the shared configuration. FhirConfiguration connectionTimeout (advanced) How long to try and establish the initial TCP connection (in ms). 10000 Integer deferModelScanning (advanced) When this option is set, model classes will not be scanned for children until the child list for the given type is actually accessed. false boolean fhirContext (advanced) FhirContext is an expensive object to create. To avoid creating multiple instances, it can be set directly. FhirContext forceConformanceCheck (advanced) Force conformance check. false boolean sessionCookie (advanced) HTTP session cookie to add to every request. String socketTimeout (advanced) How long to block for individual read/write operations (in ms). 10000 Integer summary (advanced) Request that the server modify the response using the _summary param. Enum values: COUNT TEXT DATA TRUE FALSE String validationMode (advanced) When should Camel validate the FHIR Server's conformance statement. Enum values: NEVER ONCE ONCE String proxyHost (proxy) The proxy host. String proxyPassword (proxy) The proxy password. String proxyPort (proxy) The proxy port. Integer proxyUser (proxy) The proxy username. String accessToken (security) OAuth access token. String password (security) Username to use for basic authentication. String username (security) Username to use for basic authentication. String 36.5. Endpoint Options The FHIR endpoint is configured using URI syntax: with the following path and query parameters: 36.5.1. Path Parameters (2 parameters) Name Description Default Type apiName (common) Required What kind of operation to perform. Enum values: CAPABILITIES CREATE DELETE HISTORY LOAD_PAGE META OPERATION PATCH READ SEARCH TRANSACTION UPDATE VALIDATE FhirApiName methodName (common) Required What sub operation to use for the selected operation. String 36.5.2. Query Parameters (44 parameters) Name Description Default Type encoding (common) Encoding to use for all request. Enum values: JSON XML String fhirVersion (common) The FHIR Version to use. Enum values: DSTU2 DSTU2_HL7ORG DSTU2_1 DSTU3 R4 R5 R4 String inBody (common) Sets the name of a parameter to be passed in the exchange In Body. String log (common) Will log every requests and responses. false boolean prettyPrint (common) Pretty print all request. false boolean serverUrl (common) The FHIR server base URL. 
String bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean sendEmptyMessageWhenIdle (consumer) If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern pollStrategy (consumer (advanced)) A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel. PollingConsumerPollStrategy lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean client (advanced) To use the custom client. IGenericClient clientFactory (advanced) To use the custom client factory. IRestfulClientFactory compress (advanced) Compresses outgoing (POST/PUT) contents to the GZIP format. false boolean connectionTimeout (advanced) How long to try and establish the initial TCP connection (in ms). 10000 Integer deferModelScanning (advanced) When this option is set, model classes will not be scanned for children until the child list for the given type is actually accessed. false boolean fhirContext (advanced) FhirContext is an expensive object to create. To avoid creating multiple instances, it can be set directly. FhirContext forceConformanceCheck (advanced) Force conformance check. false boolean sessionCookie (advanced) HTTP session cookie to add to every request. String socketTimeout (advanced) How long to block for individual read/write operations (in ms). 10000 Integer summary (advanced) Request that the server modify the response using the _summary param. Enum values: COUNT TEXT DATA TRUE FALSE String validationMode (advanced) When should Camel validate the FHIR Server's conformance statement. Enum values: NEVER ONCE ONCE String proxyHost (proxy) The proxy host. String proxyPassword (proxy) The proxy password. String proxyPort (proxy) The proxy port. Integer proxyUser (proxy) The proxy username. String backoffErrorThreshold (scheduler) The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in. 
int backoffIdleThreshold (scheduler) The number of subsequent idle polls that should happen before the backoffMultipler should kick-in. int backoffMultiplier (scheduler) To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured. int delay (scheduler) Milliseconds before the poll. 500 long greedy (scheduler) If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the run polled 1 or more messages. false boolean initialDelay (scheduler) Milliseconds before the first poll starts. 1000 long repeatCount (scheduler) Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever. 0 long runLoggingLevel (scheduler) The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that. Enum values: TRACE DEBUG INFO WARN ERROR OFF TRACE LoggingLevel scheduledExecutorService (scheduler) Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool. ScheduledExecutorService scheduler (scheduler) To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler. none Object schedulerProperties (scheduler) To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler. Map startScheduler (scheduler) Whether the scheduler should be auto started. true boolean timeUnit (scheduler) Time unit for initialDelay and delay options. Enum values: NANOSECONDS MICROSECONDS MILLISECONDS SECONDS MINUTES HOURS DAYS MILLISECONDS TimeUnit useFixedDelay (scheduler) Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details. true boolean accessToken (security) OAuth access token. String password (security) Username to use for basic authentication. String username (security) Username to use for basic authentication. String 36.6. API Parameters (13 APIs) The @FHIR endpoint is an API based component and has additional parameters based on which API name and API method is used. 
The API name and API method are located in the endpoint URI as the apiName/methodName path parameters: fhir:apiName/methodName There are 13 API names as listed in the table below: API Name Type Description capabilities Both API to fetch the capability statement for the server create Both API for the create operation, which creates a new resource instance on the server delete Both API for the delete operation, which performs a logical delete on a server resource history Both API for the history method load-page Both API that loads the previous or next bundle of resources from a paged set, using the link specified in the link type tag within the Atom bundle meta Both API for the meta operations, which can be used to get, add and remove tags and other Meta elements from a resource or across the server operation Both API for extended FHIR operations patch Both API for the patch operation, which performs a logical patch on a server resource read Both API method for read operations search Both API to search for resources matching a given set of criteria transaction Both API for sending a transaction (collection of resources) to the server to be executed as a single unit update Both API for the update operation, which updates a resource instance on the server validate Both API for validating resources Each API is documented in the following sections. 36.6.1. API: capabilities Both producer and consumer are supported The capabilities API is defined in the syntax as follows: fhir:capabilities/methodName?[parameters] The method is listed in the table below, followed by detailed syntax for each method. (API methods can have a shorthand alias name which can be used in the syntax instead of the name) Method Description ofType Retrieve the conformance statement using the given model type 36.6.1.1. Method ofType Signatures: org.hl7.fhir.instance.model.api.IBaseConformance ofType(Class<org.hl7.fhir.instance.model.api.IBaseConformance> type, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); The fhir/ofType API method has the parameters listed in the table below: Parameter Description Type extraParameters See ExtraParameters for a full list of parameters that can be passed, may be NULL Map type The model type Class In addition to the parameters above, the fhir API can also use any of the Query Parameters. Any of the parameters can be provided in either the endpoint URI, or dynamically in a message header. The message header name must be of the format CamelFhir.parameter. The inBody parameter overrides message header, i.e. the endpoint parameter inBody=myParameterNameHere would override a CamelFhir.myParameterNameHere header. 36.6.2. API: create Both producer and consumer are supported The create API is defined in the syntax as follows: fhir:create/methodName?[parameters] The 1 method(s) is listed in the table below, followed by detailed syntax for each method. (API methods can have a shorthand alias name which can be used in the syntax instead of the name) Method Description resource Creates an IBaseResource on the server 36.6.2.1.
Method resource Signatures: ca.uhn.fhir.rest.api.MethodOutcome resource(String resourceAsString, String url, ca.uhn.fhir.rest.api.PreferReturnEnum preferReturn, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); ca.uhn.fhir.rest.api.MethodOutcome resource(org.hl7.fhir.instance.model.api.IBaseResource resource, String url, ca.uhn.fhir.rest.api.PreferReturnEnum preferReturn, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); The fhir/resource API method has the parameters listed in the table below: Parameter Description Type extraParameters See ExtraParameters for a full list of parameters that can be passed, may be NULL Map preferReturn Add a Prefer header to the request, which requests that the server include or suppress the resource body as a part of the result. If a resource is returned by the server it will be parsed and accessible to the client via MethodOutcome#getResource(), may be null PreferReturnEnum resource The resource to create IBaseResource resourceAsString The resource to create String url The search URL to use. The format of this URL should be of the form ResourceType?Parameters, for example: Patient?name=Smith&identifier=13.2.4.11.4%7C847366, may be null String In addition to the parameters above, the fhir API can also use any of the Query Parameters. Any of the parameters can be provided in either the endpoint URI, or dynamically in a message header. The message header name must be of the format CamelFhir.parameter. The inBody parameter overrides message header, i.e. the endpoint parameter inBody=myParameterNameHere would override a CamelFhir.myParameterNameHere header. 36.6.3. API: delete Both producer and consumer are supported The delete API is defined in the syntax as follows: fhir:delete/methodName?[parameters] The 3 method(s) is listed in the table below, followed by detailed syntax for each method. (API methods can have a shorthand alias name which can be used in the syntax instead of the name) Method Description resource Deletes the given resource resourceById Deletes the resource by resource type (e.g. Patient) and ID resourceConditionalByUrl Specifies that the delete should be performed as a conditional delete against a given search URL 36.6.3.1. Method resource Signatures: org.hl7.fhir.instance.model.api.IBaseOperationOutcome resource(org.hl7.fhir.instance.model.api.IBaseResource resource, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); The fhir/resource API method has the parameters listed in the table below: Parameter Description Type extraParameters See ExtraParameters for a full list of parameters that can be passed, may be NULL Map resource The IBaseResource to delete IBaseResource 36.6.3.2. Method resourceById Signatures: org.hl7.fhir.instance.model.api.IBaseOperationOutcome resourceById(String type, String stringId, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); org.hl7.fhir.instance.model.api.IBaseOperationOutcome resourceById(org.hl7.fhir.instance.model.api.IIdType id, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); The fhir/resourceById API method has the parameters listed in the table below: Parameter Description Type extraParameters See ExtraParameters for a full list of parameters that can be passed, may be NULL Map id The IIdType referencing the resource IIdType stringId The resource ID String type The resource type, e.g. Patient String 36.6.3.3.
Method resourceConditionalByUrl Signatures: org.hl7.fhir.instance.model.api.IBaseOperationOutcome resourceConditionalByUrl(String url, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); The fhir/resourceConditionalByUrl API method has the parameters listed in the table below: Parameter Description Type extraParameters See ExtraParameters for a full list of parameters that can be passed, may be NULL Map url The search URL to use. The format of this URL should be of the form ResourceTypeParameters, for example: Patientname=Smith&identifier=13.2.4.11.4%7C847366 String In addition to the parameters above, the fhir API can also use any of the Query Parameters . Any of the parameters can be provided in either the endpoint URI, or dynamically in a message header. The message header name must be of the format CamelFhir.parameter . The inBody parameter overrides message header, i.e. the endpoint parameter inBody=myParameterNameHere would override a CamelFhir.myParameterNameHere header. 36.6.4. API: history Both producer and consumer are supported The history API is defined in the syntax as follows: The 3 method(s) is listed in the table below, followed by detailed syntax for each method. (API methods can have a shorthand alias name which can be used in the syntax instead of the name) Method Description onInstance Perform the operation across all versions of a specific resource (by ID and type) on the server onServer Perform the operation across all versions of all resources of all types on the server onType Perform the operation across all versions of all resources of the given type on the server 36.6.4.1. Method onInstance Signatures: org.hl7.fhir.instance.model.api.IBaseBundle onInstance(org.hl7.fhir.instance.model.api.IIdType id, Class<org.hl7.fhir.instance.model.api.IBaseBundle> returnType, Integer count, java.util.Date cutoff, org.hl7.fhir.instance.model.api.IPrimitiveType<java.util.Date> iCutoff, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); The fhir/onInstance API method has the parameters listed in the table below: Parameter Description Type count Request that the server return only up to theCount number of resources, may be NULL Integer cutoff Request that the server return only resource versions that were created at or after the given time (inclusive), may be NULL Date extraParameters See ExtraParameters for a full list of parameters that can be passed, may be NULL Map iCutoff Request that the server return only resource versions that were created at or after the given time (inclusive), may be NULL IPrimitiveType id The IIdType which must be populated with both a resource type and a resource ID at IIdType returnType Request that the method return a Bundle resource (such as ca.uhn.fhir.model.dstu2.resource.Bundle). Use this method if you are accessing a DSTU2 server. Class 36.6.4.2. 
Method onServer Signatures: org.hl7.fhir.instance.model.api.IBaseBundle onServer(Class<org.hl7.fhir.instance.model.api.IBaseBundle> returnType, Integer count, java.util.Date cutoff, org.hl7.fhir.instance.model.api.IPrimitiveType<java.util.Date> iCutoff, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); The fhir/onServer API method has the parameters listed in the table below: Parameter Description Type count Request that the server return only up to theCount number of resources, may be NULL Integer cutoff Request that the server return only resource versions that were created at or after the given time (inclusive), may be NULL Date extraParameters See ExtraParameters for a full list of parameters that can be passed, may be NULL Map iCutoff Request that the server return only resource versions that were created at or after the given time (inclusive), may be NULL IPrimitiveType returnType Request that the method return a Bundle resource (such as ca.uhn.fhir.model.dstu2.resource.Bundle). Use this method if you are accessing a DSTU2 server. Class 36.6.4.3. Method onType Signatures: org.hl7.fhir.instance.model.api.IBaseBundle onType(Class<org.hl7.fhir.instance.model.api.IBaseResource> resourceType, Class<org.hl7.fhir.instance.model.api.IBaseBundle> returnType, Integer count, java.util.Date cutoff, org.hl7.fhir.instance.model.api.IPrimitiveType<java.util.Date> iCutoff, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); The fhir/onType API method has the parameters listed in the table below: Parameter Description Type count Request that the server return only up to theCount number of resources, may be NULL Integer cutoff Request that the server return only resource versions that were created at or after the given time (inclusive), may be NULL Date extraParameters See ExtraParameters for a full list of parameters that can be passed, may be NULL Map iCutoff Request that the server return only resource versions that were created at or after the given time (inclusive), may be NULL IPrimitiveType resourceType The resource type to search for Class returnType Request that the method return a Bundle resource (such as ca.uhn.fhir.model.dstu2.resource.Bundle). Use this method if you are accessing a DSTU2 server. Class In addition to the parameters above, the fhir API can also use any of the Query Parameters . Any of the parameters can be provided in either the endpoint URI, or dynamically in a message header. The message header name must be of the format CamelFhir.parameter . The inBody parameter overrides message header, i.e. the endpoint parameter inBody=myParameterNameHere would override a CamelFhir.myParameterNameHere header. 36.6.5. API: load-page Both producer and consumer are supported The load-page API is defined in the syntax as follows: The 3 method(s) is listed in the table below, followed by detailed syntax for each method. (API methods can have a shorthand alias name which can be used in the syntax instead of the name) Method Description byUrl Load a page of results using the given URL and bundle type and return a DSTU1 Atom bundle Load the page of results using the link with relation in the bundle Load the page of results using the link with relation prev in the bundle 36.6.5.1. 
Method byUrl Signatures: org.hl7.fhir.instance.model.api.IBaseBundle byUrl(String url, Class<org.hl7.fhir.instance.model.api.IBaseBundle> returnType, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); The fhir/byUrl API method has the parameters listed in the table below: Parameter Description Type extraParameters See ExtraParameters for a full list of parameters that can be passed, may be NULL Map returnType The return type Class url The search url String 36.6.5.2. Method next Signatures: org.hl7.fhir.instance.model.api.IBaseBundle next(org.hl7.fhir.instance.model.api.IBaseBundle bundle, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); The fhir/next API method has the parameters listed in the table below: Parameter Description Type bundle The IBaseBundle IBaseBundle extraParameters See ExtraParameters for a full list of parameters that can be passed, may be NULL Map 36.6.5.3. Method previous Signatures: org.hl7.fhir.instance.model.api.IBaseBundle previous(org.hl7.fhir.instance.model.api.IBaseBundle bundle, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); The fhir/previous API method has the parameters listed in the table below: Parameter Description Type bundle The IBaseBundle IBaseBundle extraParameters See ExtraParameters for a full list of parameters that can be passed, may be NULL Map In addition to the parameters above, the fhir API can also use any of the Query Parameters. Any of the parameters can be provided in either the endpoint URI, or dynamically in a message header. The message header name must be of the format CamelFhir.parameter. The inBody parameter overrides message header, i.e. the endpoint parameter inBody=myParameterNameHere would override a CamelFhir.myParameterNameHere header. 36.6.6. API: meta Both producer and consumer are supported The meta API is defined in the syntax as follows: fhir:meta/methodName?[parameters] The 5 method(s) is listed in the table below, followed by detailed syntax for each method. (API methods can have a shorthand alias name which can be used in the syntax instead of the name) Method Description add Add the elements in the given metadata to the already existing set (do not remove any) delete Delete the elements in the given metadata from the given id getFromResource Fetch the current metadata from a specific resource getFromServer Fetch the current metadata from the whole server getFromType Fetch the current metadata from a specific type 36.6.6.1. Method add Signatures: org.hl7.fhir.instance.model.api.IBaseMetaType add(org.hl7.fhir.instance.model.api.IBaseMetaType meta, org.hl7.fhir.instance.model.api.IIdType id, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); The fhir/add API method has the parameters listed in the table below: Parameter Description Type extraParameters See ExtraParameters for a full list of parameters that can be passed, may be NULL Map id The id IIdType meta The IBaseMetaType class IBaseMetaType 36.6.6.2.
Method delete Signatures: org.hl7.fhir.instance.model.api.IBaseMetaType delete(org.hl7.fhir.instance.model.api.IBaseMetaType meta, org.hl7.fhir.instance.model.api.IIdType id, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); The fhir/delete API method has the parameters listed in the table below: Parameter Description Type extraParameters See ExtraParameters for a full list of parameters that can be passed, may be NULL Map id The id IIdType meta The IBaseMetaType class IBaseMetaType 36.6.6.3. Method getFromResource Signatures: org.hl7.fhir.instance.model.api.IBaseMetaType getFromResource(Class<org.hl7.fhir.instance.model.api.IBaseMetaType> metaType, org.hl7.fhir.instance.model.api.IIdType id, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); The fhir/getFromResource API method has the parameters listed in the table below: Parameter Description Type extraParameters See ExtraParameters for a full list of parameters that can be passed, may be NULL Map id The id IIdType metaType The IBaseMetaType class Class 36.6.6.4. Method getFromServer Signatures: org.hl7.fhir.instance.model.api.IBaseMetaType getFromServer(Class<org.hl7.fhir.instance.model.api.IBaseMetaType> metaType, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); The fhir/getFromServer API method has the parameters listed in the table below: Parameter Description Type extraParameters See ExtraParameters for a full list of parameters that can be passed, may be NULL Map metaType The type of the meta datatype for the given FHIR model version (should be MetaDt.class or MetaType.class) Class 36.6.6.5. Method getFromType Signatures: org.hl7.fhir.instance.model.api.IBaseMetaType getFromType(Class<org.hl7.fhir.instance.model.api.IBaseMetaType> metaType, String resourceType, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); The fhir/getFromType API method has the parameters listed in the table below: Parameter Description Type extraParameters See ExtraParameters for a full list of parameters that can be passed, may be NULL Map metaType The IBaseMetaType class Class resourceType The resource type e.g Patient String In addition to the parameters above, the fhir API can also use any of the Query Parameters . Any of the parameters can be provided in either the endpoint URI, or dynamically in a message header. The message header name must be of the format CamelFhir.parameter . The inBody parameter overrides message header, i.e. the endpoint parameter inBody=myParameterNameHere would override a CamelFhir.myParameterNameHere header. 36.6.7. API: operation Both producer and consumer are supported The operation API is defined in the syntax as follows: The 5 method(s) is listed in the table below, followed by detailed syntax for each method. (API methods can have a shorthand alias name which can be used in the syntax instead of the name) Method Description onInstance Perform the operation across all versions of a specific resource (by ID and type) on the server onInstanceVersion This operation operates on a specific version of a resource onServer Perform the operation across all versions of all resources of all types on the server onType Perform the operation across all versions of all resources of the given type on the server processMessage This operation is called USDprocess-message as defined by the FHIR specification 36.6.7.1. 
Method onInstance Signatures: org.hl7.fhir.instance.model.api.IBaseResource onInstance(org.hl7.fhir.instance.model.api.IIdType id, String name, org.hl7.fhir.instance.model.api.IBaseParameters parameters, Class<org.hl7.fhir.instance.model.api.IBaseParameters> outputParameterType, boolean useHttpGet, Class<org.hl7.fhir.instance.model.api.IBaseResource> returnType, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); The fhir/onInstance API method has the parameters listed in the table below: Parameter Description Type extraParameters See ExtraParameters for a full list of parameters that can be passed, may be NULL Map id Resource (version will be stripped) IIdType name Operation name String outputParameterType The type to use for the output parameters (this should be set to Parameters.class drawn from the version of the FHIR structures you are using), may be NULL Class parameters The parameters to use as input. May also be null if the operation does not require any input parameters. IBaseParameters returnType If this operation returns a single resource body as its return type instead of a Parameters resource, use this method to specify that resource type. This is useful for certain operations (e.g. Patient/NNN/USDeverything) which return a bundle instead of a Parameters resource, may be NULL Class useHttpGet Use HTTP GET verb Boolean 36.6.7.2. Method onInstanceVersion Signatures: org.hl7.fhir.instance.model.api.IBaseResource onInstanceVersion(org.hl7.fhir.instance.model.api.IIdType id, String name, org.hl7.fhir.instance.model.api.IBaseParameters parameters, Class<org.hl7.fhir.instance.model.api.IBaseParameters> outputParameterType, boolean useHttpGet, Class<org.hl7.fhir.instance.model.api.IBaseResource> returnType, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); The fhir/onInstanceVersion API method has the parameters listed in the table below: Parameter Description Type extraParameters See ExtraParameters for a full list of parameters that can be passed, may be NULL Map id Resource version IIdType name Operation name String outputParameterType The type to use for the output parameters (this should be set to Parameters.class drawn from the version of the FHIR structures you are using), may be NULL Class parameters The parameters to use as input. May also be null if the operation does not require any input parameters. IBaseParameters returnType If this operation returns a single resource body as its return type instead of a Parameters resource, use this method to specify that resource type. This is useful for certain operations (e.g. Patient/NNN/USDeverything) which return a bundle instead of a Parameters resource, may be NULL Class useHttpGet Use HTTP GET verb Boolean 36.6.7.3. 
Method onServer Signatures: org.hl7.fhir.instance.model.api.IBaseResource onServer(String name, org.hl7.fhir.instance.model.api.IBaseParameters parameters, Class<org.hl7.fhir.instance.model.api.IBaseParameters> outputParameterType, boolean useHttpGet, Class<org.hl7.fhir.instance.model.api.IBaseResource> returnType, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); The fhir/onServer API method has the parameters listed in the table below: Parameter Description Type extraParameters See ExtraParameters for a full list of parameters that can be passed, may be NULL Map name Operation name String outputParameterType The type to use for the output parameters (this should be set to Parameters.class drawn from the version of the FHIR structures you are using), may be NULL Class parameters The parameters to use as input. May also be null if the operation does not require any input parameters. IBaseParameters returnType If this operation returns a single resource body as its return type instead of a Parameters resource, use this method to specify that resource type. This is useful for certain operations (e.g. Patient/NNN/USDeverything) which return a bundle instead of a Parameters resource, may be NULL Class useHttpGet Use HTTP GET verb Boolean 36.6.7.4. Method onType Signatures: org.hl7.fhir.instance.model.api.IBaseResource onType(Class<org.hl7.fhir.instance.model.api.IBaseResource> resourceType, String name, org.hl7.fhir.instance.model.api.IBaseParameters parameters, Class<org.hl7.fhir.instance.model.api.IBaseParameters> outputParameterType, boolean useHttpGet, Class<org.hl7.fhir.instance.model.api.IBaseResource> returnType, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); The fhir/onType API method has the parameters listed in the table below: Parameter Description Type extraParameters See ExtraParameters for a full list of parameters that can be passed, may be NULL Map name Operation name String outputParameterType The type to use for the output parameters (this should be set to Parameters.class drawn from the version of the FHIR structures you are using), may be NULL Class parameters The parameters to use as input. May also be null if the operation does not require any input parameters. IBaseParameters resourceType The resource type to operate on Class returnType If this operation returns a single resource body as its return type instead of a Parameters resource, use this method to specify that resource type. This is useful for certain operations (e.g. Patient/NNN/USDeverything) which return a bundle instead of a Parameters resource, may be NULL Class useHttpGet Use HTTP GET verb Boolean 36.6.7.5. Method processMessage Signatures: org.hl7.fhir.instance.model.api.IBaseBundle processMessage(String respondToUri, org.hl7.fhir.instance.model.api.IBaseBundle msgBundle, boolean asynchronous, Class<org.hl7.fhir.instance.model.api.IBaseBundle> responseClass, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); The fhir/processMessage API method has the parameters listed in the table below: Parameter Description Type asynchronous Whether to process the message asynchronously or synchronously, defaults to synchronous. 
Boolean extraParameters See ExtraParameters for a full list of parameters that can be passed, may be NULL Map msgBundle Set the Message Bundle to POST to the messaging server IBaseBundle respondToUri An optional query parameter indicating that responses from the receiving server should be sent to this URI, may be NULL String responseClass The response class Class In addition to the parameters above, the fhir API can also use any of the Query Parameters . Any of the parameters can be provided in either the endpoint URI, or dynamically in a message header. The message header name must be of the format CamelFhir.parameter . The inBody parameter overrides message header, i.e. the endpoint parameter inBody=myParameterNameHere would override a CamelFhir.myParameterNameHere header. 36.6.8. API: patch Both producer and consumer are supported The patch API is defined in the syntax as follows: The 2 method(s) is listed in the table below, followed by detailed syntax for each method. (API methods can have a shorthand alias name which can be used in the syntax instead of the name) Method Description patchById Applies the patch to the given resource ID patchByUrl Specifies that the update should be performed as a conditional create against a given search URL 36.6.8.1. Method patchById Signatures: ca.uhn.fhir.rest.api.MethodOutcome patchById(String patchBody, String stringId, ca.uhn.fhir.rest.api.PreferReturnEnum preferReturn, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); ca.uhn.fhir.rest.api.MethodOutcome patchById(String patchBody, org.hl7.fhir.instance.model.api.IIdType id, ca.uhn.fhir.rest.api.PreferReturnEnum preferReturn, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); The fhir/patchById API method has the parameters listed in the table below: Parameter Description Type extraParameters See ExtraParameters for a full list of parameters that can be passed, may be NULL Map id The resource ID to patch IIdType patchBody The body of the patch document serialized in either XML or JSON which conforms to String preferReturn Add a Prefer header to the request, which requests that the server include or suppress the resource body as a part of the result. If a resource is returned by the server it will be parsed an accessible to the client via MethodOutcome#getResource() PreferReturnEnum stringId The resource ID to patch String 36.6.8.2. Method patchByUrl Signatures: ca.uhn.fhir.rest.api.MethodOutcome patchByUrl(String patchBody, String url, ca.uhn.fhir.rest.api.PreferReturnEnum preferReturn, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); The fhir/patchByUrl API method has the parameters listed in the table below: Parameter Description Type extraParameters See ExtraParameters for a full list of parameters that can be passed, may be NULL Map patchBody The body of the patch document serialized in either XML or JSON which conforms to String preferReturn Add a Prefer header to the request, which requests that the server include or suppress the resource body as a part of the result. If a resource is returned by the server it will be parsed an accessible to the client via MethodOutcome#getResource() PreferReturnEnum url The search URL to use. The format of this URL should be of the form ResourceTypeParameters, for example: Patientname=Smith&identifier=13.2.4.11.4%7C847366 String In addition to the parameters above, the fhir API can also use any of the Query Parameters . 
Any of the parameters can be provided in either the endpoint URI, or dynamically in a message header. The message header name must be of the format CamelFhir.parameter . The inBody parameter overrides message header, i.e. the endpoint parameter inBody=myParameterNameHere would override a CamelFhir.myParameterNameHere header. 36.6.9. API: read Both producer and consumer are supported The read API is defined in the syntax as follows: The 2 method(s) is listed in the table below, followed by detailed syntax for each method. (API methods can have a shorthand alias name which can be used in the syntax instead of the name) Method Description resourceById Reads a IBaseResource on the server by id resourceByUrl Reads a IBaseResource on the server by url 36.6.9.1. Method resourceById Signatures: org.hl7.fhir.instance.model.api.IBaseResource resourceById(Class<org.hl7.fhir.instance.model.api.IBaseResource> resource, Long longId, String ifVersionMatches, Boolean returnNull, org.hl7.fhir.instance.model.api.IBaseResource returnResource, Boolean throwError, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); org.hl7.fhir.instance.model.api.IBaseResource resourceById(Class<org.hl7.fhir.instance.model.api.IBaseResource> resource, String stringId, String version, String ifVersionMatches, Boolean returnNull, org.hl7.fhir.instance.model.api.IBaseResource returnResource, Boolean throwError, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); org.hl7.fhir.instance.model.api.IBaseResource resourceById(Class<org.hl7.fhir.instance.model.api.IBaseResource> resource, org.hl7.fhir.instance.model.api.IIdType id, String ifVersionMatches, Boolean returnNull, org.hl7.fhir.instance.model.api.IBaseResource returnResource, Boolean throwError, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); org.hl7.fhir.instance.model.api.IBaseResource resourceById(String resourceClass, Long longId, String ifVersionMatches, Boolean returnNull, org.hl7.fhir.instance.model.api.IBaseResource returnResource, Boolean throwError, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); org.hl7.fhir.instance.model.api.IBaseResource resourceById(String resourceClass, String stringId, String ifVersionMatches, String version, Boolean returnNull, org.hl7.fhir.instance.model.api.IBaseResource returnResource, Boolean throwError, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); org.hl7.fhir.instance.model.api.IBaseResource resourceById(String resourceClass, org.hl7.fhir.instance.model.api.IIdType id, String ifVersionMatches, Boolean returnNull, org.hl7.fhir.instance.model.api.IBaseResource returnResource, Boolean throwError, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); The fhir/resourceById API method has the parameters listed in the table below: Parameter Description Type extraParameters See ExtraParameters for a full list of parameters that can be passed, may be NULL Map id The IIdType referencing the resource IIdType ifVersionMatches A version to match against the newest version on the server String longId The resource ID Long resource The resource to read (e.g. Patient) Class resourceClass The resource to read (e.g. 
Patient) String returnNull Return null if version matches Boolean returnResource Return the resource if version matches IBaseResource stringId The resource ID String throwError Throw error if the version matches Boolean version The resource version String 36.6.9.2. Method resourceByUrl Signatures: org.hl7.fhir.instance.model.api.IBaseResource resourceByUrl(Class<org.hl7.fhir.instance.model.api.IBaseResource> resource, String url, String ifVersionMatches, Boolean returnNull, org.hl7.fhir.instance.model.api.IBaseResource returnResource, Boolean throwError, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); org.hl7.fhir.instance.model.api.IBaseResource resourceByUrl(Class<org.hl7.fhir.instance.model.api.IBaseResource> resource, org.hl7.fhir.instance.model.api.IIdType iUrl, String ifVersionMatches, Boolean returnNull, org.hl7.fhir.instance.model.api.IBaseResource returnResource, Boolean throwError, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); org.hl7.fhir.instance.model.api.IBaseResource resourceByUrl(String resourceClass, String url, String ifVersionMatches, Boolean returnNull, org.hl7.fhir.instance.model.api.IBaseResource returnResource, Boolean throwError, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); org.hl7.fhir.instance.model.api.IBaseResource resourceByUrl(String resourceClass, org.hl7.fhir.instance.model.api.IIdType iUrl, String ifVersionMatches, Boolean returnNull, org.hl7.fhir.instance.model.api.IBaseResource returnResource, Boolean throwError, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); The fhir/resourceByUrl API method has the parameters listed in the table below: Parameter Description Type extraParameters See ExtraParameters for a full list of parameters that can be passed, may be NULL Map iUrl The IIdType referencing the resource by absolute url IIdType ifVersionMatches A version to match against the newest version on the server String resource The resource to read (e.g. Patient) Class resourceClass The resource to read (e.g. Patient.class) String returnNull Return null if version matches Boolean returnResource Return the resource if version matches IBaseResource throwError Throw error if the version matches Boolean url Referencing the resource by absolute url String In addition to the parameters above, the fhir API can also use any of the Query Parameters . Any of the parameters can be provided in either the endpoint URI, or dynamically in a message header. The message header name must be of the format CamelFhir.parameter . The inBody parameter overrides message header, i.e. the endpoint parameter inBody=myParameterNameHere would override a CamelFhir.myParameterNameHere header. 36.6.10. API: search Both producer and consumer are supported The search API is defined in the syntax as follows: The 1 method(s) is listed in the table below, followed by detailed syntax for each method. (API methods can have a shorthand alias name which can be used in the syntax instead of the name) Method Description searchByUrl Perform a search directly by URL 36.6.10.1. 
Method searchByUrl Signatures: org.hl7.fhir.instance.model.api.IBaseBundle searchByUrl(String url, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); The fhir/searchByUrl API method has the parameters listed in the table below: Parameter Description Type extraParameters See ExtraParameters for a full list of parameters that can be passed, may be NULL Map url The URL to search for. Note that this URL may be complete (e.g. ) in which case the client's base URL will be ignored. Or it can be relative (e.g. Patientname=foo) in which case the client's base URL will be used. String In addition to the parameters above, the fhir API can also use any of the Query Parameters . Any of the parameters can be provided in either the endpoint URI, or dynamically in a message header. The message header name must be of the format CamelFhir.parameter . The inBody parameter overrides message header, i.e. the endpoint parameter inBody=myParameterNameHere would override a CamelFhir.myParameterNameHere header. 36.6.11. API: transaction Both producer and consumer are supported The transaction API is defined in the syntax as follows: The 2 method(s) is listed in the table below, followed by detailed syntax for each method. (API methods can have a shorthand alias name which can be used in the syntax instead of the name) Method Description withBundle Use the given raw text (should be a Bundle resource) as the transaction input withResources Use a list of resources as the transaction input 36.6.11.1. Method withBundle Signatures: String withBundle(String stringBundle, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); org.hl7.fhir.instance.model.api.IBaseBundle withBundle(org.hl7.fhir.instance.model.api.IBaseBundle bundle, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); The fhir/withBundle API method has the parameters listed in the table below: Parameter Description Type bundle Bundle to use in the transaction IBaseBundle extraParameters See ExtraParameters for a full list of parameters that can be passed, may be NULL Map stringBundle Bundle to use in the transaction String 36.6.11.2. Method withResources Signatures: java.util.List<org.hl7.fhir.instance.model.api.IBaseResource> withResources(java.util.List<org.hl7.fhir.instance.model.api.IBaseResource> resources, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); The fhir/withResources API method has the parameters listed in the table below: Parameter Description Type extraParameters See ExtraParameters for a full list of parameters that can be passed, may be NULL Map resources Resources to use in the transaction List In addition to the parameters above, the fhir API can also use any of the Query Parameters . Any of the parameters can be provided in either the endpoint URI, or dynamically in a message header. The message header name must be of the format CamelFhir.parameter . The inBody parameter overrides message header, i.e. the endpoint parameter inBody=myParameterNameHere would override a CamelFhir.myParameterNameHere header. 36.6.12. API: update Both producer and consumer are supported The update API is defined in the syntax as follows: The 2 method(s) is listed in the table below, followed by detailed syntax for each method. 
(API methods can have a shorthand alias name which can be used in the syntax instead of the name) Method Description resource Updates a IBaseResource on the server by id resourceBySearchUrl Updates a IBaseResource on the server by search url 36.6.12.1. Method resource Signatures: ca.uhn.fhir.rest.api.MethodOutcome resource(String resourceAsString, String stringId, ca.uhn.fhir.rest.api.PreferReturnEnum preferReturn, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); ca.uhn.fhir.rest.api.MethodOutcome resource(String resourceAsString, org.hl7.fhir.instance.model.api.IIdType id, ca.uhn.fhir.rest.api.PreferReturnEnum preferReturn, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); ca.uhn.fhir.rest.api.MethodOutcome resource(org.hl7.fhir.instance.model.api.IBaseResource resource, String stringId, ca.uhn.fhir.rest.api.PreferReturnEnum preferReturn, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); ca.uhn.fhir.rest.api.MethodOutcome resource(org.hl7.fhir.instance.model.api.IBaseResource resource, org.hl7.fhir.instance.model.api.IIdType id, ca.uhn.fhir.rest.api.PreferReturnEnum preferReturn, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); The fhir/resource API method has the parameters listed in the table below: Parameter Description Type extraParameters See ExtraParameters for a full list of parameters that can be passed, may be NULL Map id The IIdType referencing the resource IIdType preferReturn Whether the server include or suppress the resource body as a part of the result PreferReturnEnum resource The resource to update (e.g. Patient) IBaseResource resourceAsString The resource body to update String stringId The ID referencing the resource String 36.6.12.2. Method resourceBySearchUrl Signatures: ca.uhn.fhir.rest.api.MethodOutcome resourceBySearchUrl(String resourceAsString, String url, ca.uhn.fhir.rest.api.PreferReturnEnum preferReturn, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); ca.uhn.fhir.rest.api.MethodOutcome resourceBySearchUrl(org.hl7.fhir.instance.model.api.IBaseResource resource, String url, ca.uhn.fhir.rest.api.PreferReturnEnum preferReturn, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); The fhir/resourceBySearchUrl API method has the parameters listed in the table below: Parameter Description Type extraParameters See ExtraParameters for a full list of parameters that can be passed, may be NULL Map preferReturn Whether the server include or suppress the resource body as a part of the result PreferReturnEnum resource The resource to update (e.g. Patient) IBaseResource resourceAsString The resource body to update String url Specifies that the update should be performed as a conditional create against a given search URL String In addition to the parameters above, the fhir API can also use any of the Query Parameters . Any of the parameters can be provided in either the endpoint URI, or dynamically in a message header. The message header name must be of the format CamelFhir.parameter . The inBody parameter overrides message header, i.e. the endpoint parameter inBody=myParameterNameHere would override a CamelFhir.myParameterNameHere header. 36.6.13. 
API: validate Both producer and consumer are supported The validate API is defined in the syntax as follows: The 1 method(s) is listed in the table below, followed by detailed syntax for each method. (API methods can have a shorthand alias name which can be used in the syntax instead of the name) Method Description resource Validates the resource 36.6.13.1. Method resource Signatures: ca.uhn.fhir.rest.api.MethodOutcome resource(String resourceAsString, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); ca.uhn.fhir.rest.api.MethodOutcome resource(org.hl7.fhir.instance.model.api.IBaseResource resource, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); The fhir/resource API method has the parameters listed in the table below: Parameter Description Type extraParameters See ExtraParameters for a full list of parameters that can be passed, may be NULL Map resource The IBaseResource to validate IBaseResource resourceAsString Raw resource to validate String In addition to the parameters above, the fhir API can also use any of the Query Parameters . Any of the parameters can be provided in either the endpoint URI, or dynamically in a message header. The message header name must be of the format CamelFhir.parameter . The inBody parameter overrides message header, i.e. the endpoint parameter inBody=myParameterNameHere would override a CamelFhir.myParameterNameHere header. 36.7. Spring Boot Auto-Configuration The component supports 56 options, which are listed below. Name Description Default Type camel.component.fhir.access-token OAuth access token. String camel.component.fhir.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.fhir.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.fhir.client To use the custom client. The option is a ca.uhn.fhir.rest.client.api.IGenericClient type. IGenericClient camel.component.fhir.client-factory To use the custom client factory. The option is a ca.uhn.fhir.rest.client.api.IRestfulClientFactory type. IRestfulClientFactory camel.component.fhir.compress Compresses outgoing (POST/PUT) contents to the GZIP format. false Boolean camel.component.fhir.configuration To use the shared configuration. The option is a org.apache.camel.component.fhir.FhirConfiguration type. FhirConfiguration camel.component.fhir.connection-timeout How long to try and establish the initial TCP connection (in ms). 10000 Integer camel.component.fhir.defer-model-scanning When this option is set, model classes will not be scanned for children until the child list for the given type is actually accessed. false Boolean camel.component.fhir.enabled Whether to enable auto configuration of the fhir component. This is enabled by default. 
Boolean camel.component.fhir.encoding Encoding to use for all requests. String camel.component.fhir.fhir-context FhirContext is an expensive object to create. To avoid creating multiple instances, it can be set directly. The option is a ca.uhn.fhir.context.FhirContext type. FhirContext camel.component.fhir.fhir-version The FHIR Version to use. R4 String camel.component.fhir.force-conformance-check Force conformance check. false Boolean camel.component.fhir.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.fhir.log Will log every request and response. false Boolean camel.component.fhir.password Password to use for basic authentication. String camel.component.fhir.pretty-print Pretty print all requests. false Boolean camel.component.fhir.proxy-host The proxy host. String camel.component.fhir.proxy-password The proxy password. String camel.component.fhir.proxy-port The proxy port. Integer camel.component.fhir.proxy-user The proxy username. String camel.component.fhir.server-url The FHIR server base URL. String camel.component.fhir.session-cookie HTTP session cookie to add to every request. String camel.component.fhir.socket-timeout How long to block for individual read/write operations (in ms). 10000 Integer camel.component.fhir.summary Request that the server modify the response using the _summary param. String camel.component.fhir.username Username to use for basic authentication. String camel.component.fhir.validation-mode When should Camel validate the FHIR Server's conformance statement. ONCE String camel.dataformat.fhirjson.content-type-header Whether the data format should set the Content-Type header with the type from the data format. For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSON. true Boolean camel.dataformat.fhirjson.dont-encode-elements If provided, specifies the elements which should NOT be encoded. Valid values for this field would include: Patient - Don't encode patient and all its children Patient.name - Don't encode the patient's name Patient.name.family - Don't encode the patient's family name *.text - Don't encode the text element on any resource (only the very first position may contain a wildcard) DSTU2 note: Note that values including meta, such as Patient.meta will work for DSTU2 parsers, but values with subelements on meta such as Patient.meta.lastUpdated will only work in DSTU3 mode. Set camel.dataformat.fhirjson.dont-strip-versions-from-references-at-paths If supplied value(s), any resource references at the specified paths will have their resource versions encoded instead of being automatically stripped during the encoding process. This setting has no effect on the parsing process. This method provides a finer-grained level of control than setStripVersionsFromReferences(String) and any paths specified by this method will be encoded even if setStripVersionsFromReferences(String) has been set to true (which is the default).
List camel.dataformat.fhirjson.enabled Whether to enable auto configuration of the fhirJson data format. This is enabled by default. Boolean camel.dataformat.fhirjson.encode-elements If provided, specifies the elements which should be encoded, to the exclusion of all others. Valid values for this field would include: Patient - Encode patient and all its children Patient.name - Encode only the patient's name Patient.name.family - Encode only the patient's family name .text - Encode the text element on any resource (only the very first position may contain a wildcard) .(mandatory) - This is a special case which causes any mandatory fields (min 0) to be encoded. Set camel.dataformat.fhirjson.encode-elements-applies-to-child-resources-only If set to true (default is false), the values supplied to setEncodeElements(Set) will not be applied to the root resource (typically a Bundle), but will be applied to any sub-resources contained within it (i.e. search result resources in that bundle). false Boolean camel.dataformat.fhirjson.fhir-version The version of FHIR to use. Possible values are: DSTU2,DSTU2_HL7ORG,DSTU2_1,DSTU3,R4. DSTU3 String camel.dataformat.fhirjson.omit-resource-id If set to true (default is false) the ID of any resources being encoded will not be included in the output. Note that this does not apply to contained resources, only to root resources. In other words, if this is set to true, contained resources will still have local IDs but the outer/containing ID will not have an ID. false Boolean camel.dataformat.fhirjson.override-resource-id-with-bundle-entry-full-url If set to true (which is the default), the Bundle.entry.fullUrl will override the Bundle.entry.resource's resource id if the fullUrl is defined. This behavior happens when parsing the source data into a Bundle object. Set this to false if this is not the desired behavior (e.g. the client code wishes to perform additional validation checks between the fullUrl and the resource id). false Boolean camel.dataformat.fhirjson.pretty-print Sets the pretty print flag, meaning that the parser will encode resources with human-readable spacing and newlines between elements instead of condensing output as much as possible. false Boolean camel.dataformat.fhirjson.server-base-url Sets the server's base URL used by this parser. If a value is set, resource references will be turned into relative references if they are provided as absolute URLs but have a base matching the given base. String camel.dataformat.fhirjson.strip-versions-from-references If set to true (which is the default), resource references containing a version will have the version removed when the resource is encoded. This is generally good behaviour because in most situations, references from one resource to another should be to the resource by ID, not by ID and version. In some cases though, it may be desirable to preserve the version in resource links. In that case, this value should be set to false. This method provides the ability to globally disable reference encoding. If finer-grained control is needed, use setDontStripVersionsFromReferencesAtPaths(List). false Boolean camel.dataformat.fhirjson.summary-mode If set to true (default is false) only elements marked by the FHIR specification as being summary elements will be included. false Boolean camel.dataformat.fhirjson.suppress-narratives If set to true (default is false), narratives will not be included in the encoded values. 
false Boolean camel.dataformat.fhirxml.content-type-header Whether the data format should set the Content-Type header with the type from the data format. For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSON. true Boolean camel.dataformat.fhirxml.dont-encode-elements If provided, specifies the elements which should NOT be encoded. Valid values for this field would include: Patient - Don't encode patient and all its children Patient.name - Don't encode the patient's name Patient.name.family - Don't encode the patient's family name .text - Don't encode the text element on any resource (only the very first position may contain a wildcard) DSTU2 note: Note that values including meta, such as Patient.meta will work for DSTU2 parsers, but values with subelements on meta such as Patient.meta.lastUpdated will only work in DSTU3 mode. Set camel.dataformat.fhirxml.dont-strip-versions-from-references-at-paths If supplied value(s), any resource references at the specified paths will have their resource versions encoded instead of being automatically stripped during the encoding process. This setting has no effect on the parsing process. This method provides a finer-grained level of control than setStripVersionsFromReferences(String) and any paths specified by this method will be encoded even if setStripVersionsFromReferences(String) has been set to true (which is the default). List camel.dataformat.fhirxml.enabled Whether to enable auto configuration of the fhirXml data format. This is enabled by default. Boolean camel.dataformat.fhirxml.encode-elements If provided, specifies the elements which should be encoded, to the exclusion of all others. Valid values for this field would include: Patient - Encode patient and all its children Patient.name - Encode only the patient's name Patient.name.family - Encode only the patient's family name .text - Encode the text element on any resource (only the very first position may contain a wildcard) .(mandatory) - This is a special case which causes any mandatory fields (min 0) to be encoded. Set camel.dataformat.fhirxml.encode-elements-applies-to-child-resources-only If set to true (default is false), the values supplied to setEncodeElements(Set) will not be applied to the root resource (typically a Bundle), but will be applied to any sub-resources contained within it (i.e. search result resources in that bundle). false Boolean camel.dataformat.fhirxml.fhir-version The version of FHIR to use. Possible values are: DSTU2,DSTU2_HL7ORG,DSTU2_1,DSTU3,R4. DSTU3 String camel.dataformat.fhirxml.omit-resource-id If set to true (default is false) the ID of any resources being encoded will not be included in the output. Note that this does not apply to contained resources, only to root resources. In other words, if this is set to true, contained resources will still have local IDs but the outer/containing ID will not have an ID. false Boolean camel.dataformat.fhirxml.override-resource-id-with-bundle-entry-full-url If set to true (which is the default), the Bundle.entry.fullUrl will override the Bundle.entry.resource's resource id if the fullUrl is defined. This behavior happens when parsing the source data into a Bundle object. Set this to false if this is not the desired behavior (e.g. the client code wishes to perform additional validation checks between the fullUrl and the resource id). 
false Boolean camel.dataformat.fhirxml.pretty-print Sets the pretty print flag, meaning that the parser will encode resources with human-readable spacing and newlines between elements instead of condensing output as much as possible. false Boolean camel.dataformat.fhirxml.server-base-url Sets the server's base URL used by this parser. If a value is set, resource references will be turned into relative references if they are provided as absolute URLs but have a base matching the given base. String camel.dataformat.fhirxml.strip-versions-from-references If set to true (which is the default), resource references containing a version will have the version removed when the resource is encoded. This is generally good behaviour because in most situations, references from one resource to another should be to the resource by ID, not by ID and version. In some cases though, it may be desirable to preserve the version in resource links. In that case, this value should be set to false. This method provides the ability to globally disable reference encoding. If finer-grained control is needed, use setDontStripVersionsFromReferencesAtPaths(List). false Boolean camel.dataformat.fhirxml.summary-mode If set to true (default is false) only elements marked by the FHIR specification as being summary elements will be included. false Boolean camel.dataformat.fhirxml.suppress-narratives If set to true (default is false), narratives will not be included in the encoded values. false Boolean | [
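To illustrate how the auto-configuration options above fit together, the following application.properties sketch configures the shared FHIR component and the FHIR JSON data format. The server URL, credentials, and FHIR version shown are placeholder values chosen for illustration, not defaults taken from this reference; adjust them to your own environment.

# Placeholder values -- point these at your own FHIR server
camel.component.fhir.server-url=http://localhost:8080/fhir
camel.component.fhir.fhir-version=R4
camel.component.fhir.username=camel-user
camel.component.fhir.password=camel-secret
camel.component.fhir.log=true
camel.component.fhir.connection-timeout=10000

# Optional: easier-to-read JSON output while debugging
camel.dataformat.fhirjson.fhir-version=R4
camel.dataformat.fhirjson.pretty-print=true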
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-fhir-starter</artifactId> </dependency>",
"fhir://endpoint-prefix/endpoint?[options]",
"fhir:apiName/methodName",
"fhir:apiName/methodName",
"fhir:capabilities/methodName?[parameters]",
"fhir:create/methodName?[parameters]",
"fhir:delete/methodName?[parameters]",
"fhir:history/methodName?[parameters]",
"fhir:load-page/methodName?[parameters]",
"fhir:meta/methodName?[parameters]",
"fhir:operation/methodName?[parameters]",
"fhir:patch/methodName?[parameters]",
"fhir:read/methodName?[parameters]",
"fhir:search/methodName?[parameters]",
"fhir:transaction/methodName?[parameters]",
"fhir:update/methodName?[parameters]",
"fhir:validate/methodName?[parameters]"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-fhir-component-starter |
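As a minimal sketch of how the fhirXml options above might be set in a camel-fhir-starter application, the following hypothetical application.properties fragment uses only property names documented in the reference; the chosen values, and the assumption that Set-typed options accept a comma-separated list, are illustrative and not taken from the source.
# Hypothetical Spring Boot configuration for the fhirXml data format
camel.dataformat.fhirxml.enabled=true
camel.dataformat.fhirxml.fhir-version=R4
camel.dataformat.fhirxml.pretty-print=true
camel.dataformat.fhirxml.summary-mode=false
# Assumed binding format: comma-separated element paths to exclude from encoding
camel.dataformat.fhirxml.dont-encode-elements=Patient.meta
A Camel route can then apply the data format when marshalling message bodies, for example with marshal().fhirXml() in the Java DSL.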
Chapter 2. Using the Red Hat company single sign-on feature | Chapter 2. Using the Red Hat company single sign-on feature You can use your company single sign-on to log in to your Red Hat account. Note If your corporate Red Hat account is not set up to use company single sign-on, you can use your Red Hat account with your Red Hat login and password. 2.1. Logging in to your Red Hat account with company single sign-on The following procedures describe different ways to log in to your Red Hat account depending on how your company single sign-on integration is set up. Note If you previously used a social login to log in to your Red Hat account, you will see an error message when company single sign-on (SSO) is enabled for your organization. A message appears on your Red Hat account screen: Click the Log in with company single sign-on link to continue. If company single sign-on integration is not yet enabled, you can log in to your Red Hat account. Section 2.2, "Logging in when company SSO integration is not enabled" First-time login to your Red Hat account when company single sign-on is enabled. Section 2.3, "Linking your Red Hat account to your company SSO user" Log in to your Red Hat account when company single sign-on is enabled. Section 2.4, "Logging in with a company SSO user account" Log in to your Red Hat account when your user email is associated with both company single sign-on enabled accounts and other non-enabled accounts. Section 2.5, "Logging in when an email is used with company SSO and non-SSO accounts" Log in to your Red Hat account when your user email is associated with more than one company single sign-on account. Section 2.6, "Logging in when email is used on multiple SSO accounts" Change which SSO login account you are linked to. Section 2.7, "Unlinking and linking your Red Hat company SSO account" Because Red Hat provides multiple starting points to log in to your account, for consistency the following login procedures all begin at access.redhat.com. 2.2. Logging in when company SSO integration is not enabled Use your email or your Red Hat login to log in to your Red Hat account when it is not set up to use company single sign-on (SSO) integration. This is the default behavior. Prerequisites You have a registered Red Hat user account. Your Red Hat company account is not set up to use company SSO integration. Procedure Use your browser to navigate to access.redhat.com Enter your email or your Red Hat login. Enter your Red Hat password. Verification After a successful login, the avatar that is associated with your user account appears in the navigation bar in place of the login icon. Click the avatar for additional account information. 2.3. Linking your Red Hat account to your company SSO user Use your email or your Red Hat login to log in to your Red Hat account when it is enabled to use company single sign-on (SSO) integration. The first time you log in, you must link your Red Hat account to your company SSO account. Prerequisites You have a registered Red Hat user account. Your company account is set up to use company SSO integration. Your Red Hat user account is not yet linked to your company SSO user. Note This procedure is only required the first time that you authenticate, which is when Red Hat initially detects that your Red Hat company account has single sign-on (SSO) integration enabled. Procedure Use your browser to navigate to access.redhat.com Enter your Red Hat login or email registered to your Red Hat account. Your company single sign-on login page appears.
Enter your company username and password credentials. A message appears for the step, One-time account linking required . Enter your Red Hat account password. Click the Link account button. Verification After a successful login, the avatar that is associated with your user account appears in the navigation bar in place of the login icon. Click the avatar for additional account information. Note If the linking action fails, check that the Red Hat login and password are correct and are associated with the corporate account connected to your company SSO. 2.4. Logging in with a company SSO user account Use your email or your Red Hat login to log in to your Red Hat account when it is enabled to use company single sign-on (SSO) integration. Prerequisites You have a registered Red Hat user account. Your Red Hat company account is set up to use company SSO integration. Procedure Use your browser to navigate to access.redhat.com Enter your Red Hat login or email registered to your Red Hat account. The company SSO login page appears. Enter your company username and password credentials. This is the same information you use to log in to your company network, which also provides access to your Red Hat account. Verification After a successful login, the avatar that is associated with your user account appears in the navigation bar in place of the login icon. Click the avatar for additional account information. 2.5. Logging in when an email is used with company SSO and non-SSO accounts Use a single email to log in to Red Hat user accounts that include accounts that use company SSO integration and accounts that do not. Red Hat allows a single email to be associated with more than one account. However, each Red Hat login must be unique. When a single email is used with multiple user accounts, some user accounts might be associated with a company SSO integration and others might not. The Red Hat login determines which login access method is provided. Prerequisites You have an email registered with more than one Red Hat user account. One account (or more) has company SSO integration enabled. One account (or more) does not have SSO integration enabled. Procedure Use your browser to navigate to sso.redhat.com Enter the email registered to your Red Hat account. Note To choose whether company single sign-on or Red Hat account is your login method when the login page appears, select either of the following steps. To choose company single sign-on login method, click the company single sign-on . A company single sign-on page appears. Enter the username and password associated with your company single sign-on. To choose a Red Hat non-SSO login method, click the Red Hat account button. A Red Hat login page appears. Enter the password associated with your Red Hat user account. Verification After a successful login, the avatar that is associated with your user account appears in the navigation bar in place of the login icon. Click the avatar for additional account information. 2.6. Logging in when email is used on multiple SSO accounts You can use one email for multiple accounts. When you do so, you must use your login and not your email to log in to your account. Prerequisites You have more than one registered Red Hat user account associated with a single email, and these user accounts span different Red Hat company accounts. Your Red Hat company accounts are set up to use company SSO integration and those company accounts use different identity providers. 
Procedure Use your browser to navigate to access.redhat.com Enter your Red Hat email registered to your Red Hat accounts. An information panel appears. Enter the login registered to the account you wish to use. The company SSO login page appears for the selected login. Enter your company username and password credentials. Verification After a successful login, the avatar that is associated with your user account appears in the navigation bar in place of the login icon. Click the avatar for additional account information. 2.7. Unlinking and linking your Red Hat company SSO account If you link your Red Hat user account to an incorrect company SSO account, or you link the wrong Red Hat user account to the SSO account, you can unlink it and then link to the correct SSO account. For example: You linked your Red Hat user account to Company A but you want to change it to Company B. You linked Red Hat user account X to a company SSO but you want to change to Red Hat user account Y. Note A Red Hat user can only be linked to one user per external Identity Provider (IdP). Two external accounts from the same IdP cannot link to the same Red Hat user. Prerequisites You have a registered Red Hat user account. Your Red Hat company account is set up to use company SSO integration. You incorrectly linked your Red Hat user account and company SSO account. Procedure Use your browser to navigate to access.redhat.com Tip As a shortcut, navigate directly to Linked accounts . Click your user avatar in the upper right corner of the page. Click Account details . A page opens where you can edit your account information. If you log in through Red Hat Hybrid Cloud Console , click My profile under your user avatar to edit your account information. Click the Login & password link. On the Login & password page, click Manage connected accounts . The Linked accounts tab opens on the Account security page, where you can view the identity provider account currently connected to your Red Hat account. Click the Unlink button to unlink your Red Hat user account. A message is displayed when the identity provider link is successfully removed. Your account is no longer linked. Restart the linking process with the correct Red Hat user account and company SSO account. Section 2.3, "Linking your Red Hat account to your company SSO user" | [
"Log in with company single sign-on. Company single sign-on is required to access your account.",
"Email address associated with multiple logins To access your account, use your login instead."
]
| https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/using_company_single_sign-on_integration/proc-ciam-user-login-intro_company-single-sign-on |
Chapter 6. Operator [operators.coreos.com/v1] | Chapter 6. Operator [operators.coreos.com/v1] Description Operator represents a cluster operator. Type object 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object OperatorSpec defines the desired state of Operator status object OperatorStatus defines the observed state of an Operator and its components 6.1.1. .spec Description OperatorSpec defines the desired state of Operator Type object 6.1.2. .status Description OperatorStatus defines the observed state of an Operator and its components Type object Property Type Description components object Components describes resources that compose the operator. 6.1.3. .status.components Description Components describes resources that compose the operator. Type object Required labelSelector Property Type Description labelSelector object LabelSelector is a label query over a set of resources used to select the operator's components refs array Refs are a set of references to the operator's component resources, selected with LabelSelector. refs[] object RichReference is a reference to a resource, enriched with its status conditions. 6.1.4. .status.components.labelSelector Description LabelSelector is a label query over a set of resources used to select the operator's components Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 6.1.5. .status.components.labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 6.1.6. .status.components.labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 6.1.7. 
.status.components.refs Description Refs are a set of references to the operator's component resources, selected with LabelSelector. Type array 6.1.8. .status.components.refs[] Description RichReference is a reference to a resource, enriched with its status conditions. Type object Property Type Description apiVersion string API version of the referent. conditions array Conditions represents the latest state of the component. conditions[] object Condition represent the latest available observations of an component's state. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 6.1.9. .status.components.refs[].conditions Description Conditions represents the latest state of the component. Type array 6.1.10. .status.components.refs[].conditions[] Description Condition represent the latest available observations of an component's state. Type object Required status type Property Type Description lastTransitionTime string Last time the condition transitioned from one status to another. lastUpdateTime string Last time the condition was probed message string A human readable message indicating details about the transition. reason string The reason for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of condition. 6.2. API endpoints The following API endpoints are available: /apis/operators.coreos.com/v1/operators DELETE : delete collection of Operator GET : list objects of kind Operator POST : create an Operator /apis/operators.coreos.com/v1/operators/{name} DELETE : delete an Operator GET : read the specified Operator PATCH : partially update the specified Operator PUT : replace the specified Operator /apis/operators.coreos.com/v1/operators/{name}/status GET : read status of the specified Operator PATCH : partially update status of the specified Operator PUT : replace status of the specified Operator 6.2.1. /apis/operators.coreos.com/v1/operators Table 6.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Operator Table 6.2. 
Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 6.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Operator Table 6.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 6.5. HTTP responses HTTP code Reponse body 200 - OK OperatorList schema 401 - Unauthorized Empty HTTP method POST Description create an Operator Table 6.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.7. Body parameters Parameter Type Description body Operator schema Table 6.8. 
HTTP responses HTTP code Reponse body 200 - OK Operator schema 201 - Created Operator schema 202 - Accepted Operator schema 401 - Unauthorized Empty 6.2.2. /apis/operators.coreos.com/v1/operators/{name} Table 6.9. Global path parameters Parameter Type Description name string name of the Operator Table 6.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an Operator Table 6.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 6.12. Body parameters Parameter Type Description body DeleteOptions schema Table 6.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Operator Table 6.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 6.15. HTTP responses HTTP code Reponse body 200 - OK Operator schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Operator Table 6.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.17. Body parameters Parameter Type Description body Patch schema Table 6.18. HTTP responses HTTP code Reponse body 200 - OK Operator schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Operator Table 6.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.20. Body parameters Parameter Type Description body Operator schema Table 6.21. HTTP responses HTTP code Reponse body 200 - OK Operator schema 201 - Created Operator schema 401 - Unauthorized Empty 6.2.3. /apis/operators.coreos.com/v1/operators/{name}/status Table 6.22. Global path parameters Parameter Type Description name string name of the Operator Table 6.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified Operator Table 6.24. 
Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 6.25. HTTP responses HTTP code Reponse body 200 - OK Operator schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Operator Table 6.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.27. Body parameters Parameter Type Description body Patch schema Table 6.28. HTTP responses HTTP code Reponse body 200 - OK Operator schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Operator Table 6.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. 
This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.30. Body parameters Parameter Type Description body Operator schema Table 6.31. HTTP responses HTTP code Reponse body 200 - OK Operator schema 201 - Created Operator schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/operatorhub_apis/operator-operators-coreos-com-v1 |
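As a minimal sketch of how the endpoints above can be exercised from the command line, the following commands list the cluster-scoped Operator objects and read the component references from an Operator's status; the operator name my-operator.openshift-operators and the use of jq are assumptions for illustration only.
# List Operator objects through the documented collection endpoint
oc get --raw /apis/operators.coreos.com/v1/operators | jq '.items[].metadata.name'
# Read the component references of a single Operator (name is hypothetical)
oc get operators.operators.coreos.com my-operator.openshift-operators -o jsonpath='{range .status.components.refs[*]}{.kind}{"/"}{.name}{"\n"}{end}'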
Chapter 4. Known issues | Chapter 4. Known issues Resolved known issues for this release of Red Hat Trusted Artifact Signer (RHTAS): Version number reported incorrectly on OpenShift 4.13 A list of known issues found in this release of RHTAS: The ownerReferences are lost when restoring Trusted Artifact Signer to a different OpenShift cluster When restoring the RHTAS data to a new Red Hat OpenShift cluster, the ownerReferences for components are lost. This happens because the Securesign UUID changes when restoring on a new cluster, and the ownerReferences for each component get deleted because they are no longer valid. To work around this issue, run the provided script after the Securesign resource is restored. This script recreates the ownerReferences with the new Securesign UUID. Specifying a PVC name for the TUF repository fails the initialization process Specifying a persistent volume claim (PVC) name in The Update Framework (TUF) resource causes the RHTAS operator to fail the initialization of the TUF repository. For example: To work around this issue, do not specify a PVC name in the TUF resource. This allows the RHTAS operator to automatically create the PVC, name it tuf, and properly initialize the TUF repository. Rekor Search UI does not show records after upgrade After upgrading the RHTAS operator to the latest version (1.0.1), the existing Rekor data is not found when searching by email address. The backfill-redis CronJob, which ensures that the Rekor Search UI can query the transparency log, only runs once per day, at midnight. To work around this issue, you can trigger the backfill-redis job manually, instead of waiting until midnight. To trigger the backfill-redis job from the command-line interface, run the following command: Doing this adds the missing data back to the Rekor Search UI. The Trusted Artifact Signer operator does not apply configuration changes We found a potential issue with the RHTAS operator logic that can cause an unexpected state when redeploying. This inconsistent state can happen if you remove configurations from RHTAS resources and the operator then tries to redeploy those resources. To work around this potential issue, you can delete the specific resource, and then re-create that resource by using the instance's configuration, such as keys and persistent volumes. The RHTAS resources are: Securesign, Fulcio, The Update Framework (TUF), Rekor, Certificate Transparency (CT) log, and Trillian. For example, to delete the Securesign resource: $ oc delete Securesign securesign-sample For example, to re-create the Securesign resource from a configuration file: $ oc create -f ./securesign-sample.yaml Operator does not update the component status after doing a restore to a different OpenShift cluster When restoring the RHTAS signer data from a backup to a new OpenShift cluster, the component status links do not update as expected. Currently, you have to manually delete the securesign-sample-trillian-db-tls resource, and manually update the component status links. The RHTAS operator will automatically recreate an updated securesign-sample-trillian-db-tls resource after it has been removed.
After the backup procedure starts, and the secrets are restored, delete the securesign-sample-trillian-db-tls resource: Example Once all the pods start, update the status of the Securesign and TimestampAuthority resources: Example Trusted Artifact Signer requires cosign 2.2 or later Because of recent changes to how we generate The Update Framework (TUF) repository, and the use of different checksum algorithms, we require the use of cosign version 2.2 or later. With this release of RHTAS, you can download cosign version 2.4 for use with Trusted Artifact Signer. | [
"spec: tuf: pvc: name: tuf-pvc-example-name",
"create job --from=cronjob/backfill-redis backfill-redis -n trusted-artifact-signer",
"oc delete Securesign securesing-sample",
"oc create -f ./securesign-sample.yaml",
"oc delete secret securesign-sample-trillian-db-tls",
"oc edit --subresource=status Securesign securesign-sample oc edit --subresource=status TimestampAuthority securesign-sample"
]
| https://docs.redhat.com/en/documentation/red_hat_trusted_artifact_signer/1.1/html/release_notes/known-issues |
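As a minimal sketch of the TUF workaround described above, the failing fragment can be rewritten so that spec.tuf.pvc.name is omitted entirely; the operator then creates a PVC named tuf and initializes the repository. The fragment below shows only the affected section and assumes the rest of the Securesign resource is unchanged.
spec:
  tuf: {}   # no pvc.name specified; the operator creates and initializes the PVC itself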
Chapter 6. Migrating application workloads | Chapter 6. Migrating application workloads You can migrate application workloads from the internal mode storage classes to the external mode storage classes by using the Migration Toolkit for Containers, with the same cluster as both the source and the target. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_multiple_openshift_data_foundation_storage_clusters/proc_migrating-application-workloads_rhodf |
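Before planning such a migration, it can help to confirm which storage classes on the cluster belong to the internal-mode and external-mode deployments. A minimal sketch follows, assuming the default OpenShift Data Foundation storage class names, which may differ on your cluster.
# List the available storage classes
oc get storageclass
# Typical (assumed) names:
#   ocs-storagecluster-ceph-rbd            internal mode (migration source)
#   ocs-external-storagecluster-ceph-rbd   external mode (migration target)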
Development Guidelines and Recommended Practices Guide | Development Guidelines and Recommended Practices Guide Red Hat JBoss Data Virtualization 6.4 David Le Sage [email protected] | [
"{vdbname}.{version}.vdb",
"<cache-container name=\"ws-cache-container\" default-cache=\"X/system\" module=\"org.modeshape\"> <local-cache name=\"X/system\"> <eviction strategy=\"LRU\" max-entries=\"100\"/> <expiration lifespan=\"10000\" interval=\"1000\" max-idle=\"5000\"/> </local-cache> <local-cache name=\"X/default\"> <eviction strategy=\"LRU\" max-entries=\"100\"/> <expiration lifespan=\"10000\" interval=\"1000\" max-idle=\"5000\"/> </local-cache> </cache-container>",
"<repository name=\"X\"...> <workspaces cache-container=\"ws-cache-container\">"
]
| https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guidelines_and_recommended_practices_guide/index |
Chapter 3. Configuring the Collector | Chapter 3. Configuring the Collector 3.1. Configuring the Collector The Red Hat build of OpenTelemetry Operator uses a custom resource definition (CRD) file that defines the architecture and configuration settings to be used when creating and deploying the Red Hat build of OpenTelemetry resources. You can install the default configuration or modify the file. 3.1.1. OpenTelemetry Collector configuration options The OpenTelemetry Collector consists of five types of components that access telemetry data: Receivers Processors Exporters Connectors Extensions You can define multiple instances of components in a custom resource YAML file. When configured, these components must be enabled through pipelines defined in the spec.config.service section of the YAML file. As a best practice, only enable the components that you need. Example of the OpenTelemetry Collector custom resource file apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: cluster-collector namespace: tracing-system spec: mode: deployment observability: metrics: enableMetrics: true config: receivers: otlp: protocols: grpc: {} http: {} processors: {} exporters: otlp: endpoint: otel-collector-headless.tracing-system.svc:4317 tls: ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt" prometheus: endpoint: 0.0.0.0:8889 resource_to_telemetry_conversion: enabled: true # by default resource attributes are dropped service: 1 pipelines: traces: receivers: [otlp] processors: [] exporters: [otlp] metrics: receivers: [otlp] processors: [] exporters: [prometheus] 1 If a component is configured but not defined in the service section, the component is not enabled. Table 3.1. Parameters used by the Operator to define the OpenTelemetry Collector Parameter Description Values Default A receiver is how data gets into the Collector. By default, no receivers are configured. There must be at least one enabled receiver for a configuration to be considered valid. Receivers are enabled by being added to a pipeline. otlp , jaeger , prometheus , zipkin , kafka , opencensus None Processors run through the received data before it is exported. By default, no processors are enabled. batch , memory_limiter , resourcedetection , attributes , span , k8sattributes , filter , routing None An exporter sends data to one or more back ends or destinations. By default, no exporters are configured. There must be at least one enabled exporter for a configuration to be considered valid. Exporters are enabled by being added to a pipeline. Exporters might be used with their default settings, but many require configuration to specify at least the destination and security settings. otlp , otlphttp , debug , prometheus , kafka None Connectors join pairs of pipelines by consuming data as end-of-pipeline exporters and emitting data as start-of-pipeline receivers. Connectors can be used to summarize, replicate, or route consumed data. spanmetrics None Optional components for tasks that do not involve processing telemetry data. bearertokenauth , oauth2client , jaegerremotesampling , pprof , health_check , memory_ballast , zpages None Components are enabled by adding them to a pipeline under services.pipeline . You enable receivers for tracing by adding them under service.pipelines.traces . None You enable processors for tracing by adding them under service.pipelines.traces . None You enable exporters for tracing by adding them under service.pipelines.traces . 
None You enable receivers for metrics by adding them under service.pipelines.metrics . None You enable processors for metircs by adding them under service.pipelines.metrics . None You enable exporters for metrics by adding them under service.pipelines.metrics . None 3.1.2. Creating the required RBAC resources automatically Some Collector components require configuring the RBAC resources. Procedure Add the following permissions to the opentelemetry-operator-controller-manage service account so that the Red Hat build of OpenTelemetry Operator can create them automatically: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: generate-processors-rbac rules: - apiGroups: - rbac.authorization.k8s.io resources: - clusterrolebindings - clusterroles verbs: - create - delete - get - list - patch - update - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: generate-processors-rbac roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: generate-processors-rbac subjects: - kind: ServiceAccount name: opentelemetry-operator-controller-manager namespace: openshift-opentelemetry-operator 3.2. Receivers Receivers get data into the Collector. A receiver can be push or pull based. Generally, a receiver accepts data in a specified format, translates it into the internal format, and passes it to processors and exporters defined in the applicable pipelines. By default, no receivers are configured. One or more receivers must be configured. Receivers may support one or more data sources. Currently, the following General Availability and Technology Preview receivers are available for the Red Hat build of OpenTelemetry: OTLP Receiver Jaeger Receiver Host Metrics Receiver Kubernetes Objects Receiver Kubelet Stats Receiver Prometheus Receiver OTLP JSON File Receiver Zipkin Receiver Kafka Receiver Kubernetes Cluster Receiver OpenCensus Receiver Filelog Receiver Journald Receiver Kubernetes Events Receiver 3.2.1. OTLP Receiver The OTLP Receiver ingests traces, metrics, and logs by using the OpenTelemetry Protocol (OTLP). The OTLP Receiver ingests traces and metrics using the OpenTelemetry protocol (OTLP). OpenTelemetry Collector custom resource with an enabled OTLP Receiver # ... config: receivers: otlp: protocols: grpc: endpoint: 0.0.0.0:4317 1 tls: 2 ca_file: ca.pem cert_file: cert.pem key_file: key.pem client_ca_file: client.pem 3 reload_interval: 1h 4 http: endpoint: 0.0.0.0:4318 5 tls: {} 6 service: pipelines: traces: receivers: [otlp] metrics: receivers: [otlp] # ... 1 The OTLP gRPC endpoint. If omitted, the default 0.0.0.0:4317 is used. 2 The server-side TLS configuration. Defines paths to TLS certificates. If omitted, the TLS is disabled. 3 The path to the TLS certificate at which the server verifies a client certificate. This sets the value of ClientCAs and ClientAuth to RequireAndVerifyClientCert in the TLSConfig . For more information, see the Config of the Golang TLS package . 4 Specifies the time interval at which the certificate is reloaded. If the value is not set, the certificate is never reloaded. The reload_interval field accepts a string containing valid units of time such as ns , us (or ms ), ms , s , m , h . 5 The OTLP HTTP endpoint. The default value is 0.0.0.0:4318 . 6 The server-side TLS configuration. For more information, see the grpc protocol configuration section. 3.2.2. Jaeger Receiver The Jaeger Receiver ingests traces in the Jaeger formats. 
OpenTelemetry Collector custom resource with an enabled Jaeger Receiver # ... config: receivers: jaeger: protocols: grpc: endpoint: 0.0.0.0:14250 1 thrift_http: endpoint: 0.0.0.0:14268 2 thrift_compact: endpoint: 0.0.0.0:6831 3 thrift_binary: endpoint: 0.0.0.0:6832 4 tls: {} 5 service: pipelines: traces: receivers: [jaeger] # ... 1 The Jaeger gRPC endpoint. If omitted, the default 0.0.0.0:14250 is used. 2 The Jaeger Thrift HTTP endpoint. If omitted, the default 0.0.0.0:14268 is used. 3 The Jaeger Thrift Compact endpoint. If omitted, the default 0.0.0.0:6831 is used. 4 The Jaeger Thrift Binary endpoint. If omitted, the default 0.0.0.0:6832 is used. 5 The server-side TLS configuration. See the OTLP Receiver configuration section for more details. 3.2.3. Host Metrics Receiver The Host Metrics Receiver ingests metrics in the OTLP format. OpenTelemetry Collector custom resource with an enabled Host Metrics Receiver apiVersion: v1 kind: ServiceAccount metadata: name: otel-hostfs-daemonset namespace: <namespace> # ... --- apiVersion: security.openshift.io/v1 kind: SecurityContextConstraints allowHostDirVolumePlugin: true allowHostIPC: false allowHostNetwork: false allowHostPID: true allowHostPorts: false allowPrivilegeEscalation: true allowPrivilegedContainer: true allowedCapabilities: null defaultAddCapabilities: - SYS_ADMIN fsGroup: type: RunAsAny groups: [] metadata: name: otel-hostmetrics readOnlyRootFilesystem: true runAsUser: type: RunAsAny seLinuxContext: type: RunAsAny supplementalGroups: type: RunAsAny users: - system:serviceaccount:<namespace>:otel-hostfs-daemonset volumes: - configMap - emptyDir - hostPath - projected # ... --- apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: <namespace> spec: serviceAccount: otel-hostfs-daemonset mode: daemonset volumeMounts: - mountPath: /hostfs name: host readOnly: true volumes: - hostPath: path: / name: host config: receivers: hostmetrics: collection_interval: 10s 1 initial_delay: 1s 2 root_path: / 3 scrapers: 4 cpu: {} memory: {} disk: {} service: pipelines: metrics: receivers: [hostmetrics] # ... 1 Sets the time interval for host metrics collection. If omitted, the default value is 1m . 2 Sets the initial time delay for host metrics collection. If omitted, the default value is 1s . 3 Configures the root_path so that the Host Metrics Receiver knows where the root filesystem is. If running multiple instances of the Host Metrics Receiver, set the same root_path value for each instance. 4 Lists the enabled host metrics scrapers. Available scrapers are cpu , disk , load , filesystem , memory , network , paging , processes , and process . 3.2.4. Kubernetes Objects Receiver The Kubernetes Objects Receiver pulls or watches objects to be collected from the Kubernetes API server. This receiver watches primarily Kubernetes events, but it can collect any type of Kubernetes objects. This receiver gathers telemetry for the cluster as a whole, so only one instance of this receiver suffices for collecting all the data. Important The Kubernetes Objects Receiver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. 
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with an enabled Kubernetes Objects Receiver apiVersion: v1 kind: ServiceAccount metadata: name: otel-k8sobj namespace: <namespace> # ... --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-k8sobj namespace: <namespace> rules: - apiGroups: - "" resources: - events - pods verbs: - get - list - watch - apiGroups: - "events.k8s.io" resources: - events verbs: - watch - list # ... --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-k8sobj subjects: - kind: ServiceAccount name: otel-k8sobj namespace: <namespace> roleRef: kind: ClusterRole name: otel-k8sobj apiGroup: rbac.authorization.k8s.io # ... --- apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel-k8s-obj namespace: <namespace> spec: serviceAccount: otel-k8sobj mode: deployment config: receivers: k8sobjects: auth_type: serviceAccount objects: - name: pods 1 mode: pull 2 interval: 30s 3 label_selector: 4 field_selector: 5 namespaces: [<namespace>,...] 6 - name: events mode: watch exporters: debug: service: pipelines: logs: receivers: [k8sobjects] exporters: [debug] # ... 1 The Resource name that this receiver observes: for example, pods , deployments , or events . 2 The observation mode that this receiver uses: pull or watch . 3 Only applicable to the pull mode. The request interval for pulling an object. If omitted, the default value is 1h . 4 The label selector to define targets. 5 The field selector to filter targets. 6 The list of namespaces to collect events from. If omitted, the default value is all . 3.2.5. Kubelet Stats Receiver The Kubelet Stats Receiver extracts metrics related to nodes, pods, containers, and volumes from the kubelet's API server. These metrics are then channeled through the metrics-processing pipeline for additional analysis. OpenTelemetry Collector custom resource with an enabled Kubelet Stats Receiver # ... config: receivers: kubeletstats: collection_interval: 20s auth_type: "serviceAccount" endpoint: "https://USD{env:K8S_NODE_NAME}:10250" insecure_skip_verify: true service: pipelines: metrics: receivers: [kubeletstats] env: - name: K8S_NODE_NAME 1 valueFrom: fieldRef: fieldPath: spec.nodeName # ... 1 Sets the K8S_NODE_NAME to authenticate to the API. The Kubelet Stats Receiver requires additional permissions for the service account used for running the OpenTelemetry Collector. Permissions required by the service account apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: [''] resources: ['nodes/stats'] verbs: ['get', 'watch', 'list'] - apiGroups: [""] resources: ["nodes/proxy"] 1 verbs: ["get"] # ... 1 The permissions required when using the extra_metadata_labels or request_utilization or limit_utilization metrics. 3.2.6. Prometheus Receiver The Prometheus Receiver scrapes the metrics endpoints. Important The Prometheus Receiver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. 
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with an enabled Prometheus Receiver # ... config: receivers: prometheus: config: scrape_configs: 1 - job_name: 'my-app' 2 scrape_interval: 5s 3 static_configs: - targets: ['my-app.example.svc.cluster.local:8888'] 4 service: pipelines: metrics: receivers: [prometheus] # ... 1 Scrapes configurations using the Prometheus format. 2 The Prometheus job name. 3 The lnterval for scraping the metrics data. Accepts time units. The default value is 1m . 4 The targets at which the metrics are exposed. This example scrapes the metrics from a my-app application in the example project. 3.2.7. OTLP JSON File Receiver The OTLP JSON File Receiver extracts pipeline information from files containing data in the ProtoJSON format and conforming to the OpenTelemetry Protocol specification. The receiver watches a specified directory for changes such as created or modified files to process. Important The OTLP JSON File Receiver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the enabled OTLP JSON File Receiver # ... config: otlpjsonfile: include: - "/var/log/*.log" 1 exclude: - "/var/log/test.log" 2 # ... 1 The list of file path glob patterns to watch. 2 The list of file path glob patterns to ignore. 3.2.8. Zipkin Receiver The Zipkin Receiver ingests traces in the Zipkin v1 and v2 formats. OpenTelemetry Collector custom resource with the enabled Zipkin Receiver # ... config: receivers: zipkin: endpoint: 0.0.0.0:9411 1 tls: {} 2 service: pipelines: traces: receivers: [zipkin] # ... 1 The Zipkin HTTP endpoint. If omitted, the default 0.0.0.0:9411 is used. 2 The server-side TLS configuration. See the OTLP Receiver configuration section for more details. 3.2.9. Kafka Receiver The Kafka Receiver receives traces, metrics, and logs from Kafka in the OTLP format. Important The Kafka Receiver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the enabled Kafka Receiver # ... config: receivers: kafka: brokers: ["localhost:9092"] 1 protocol_version: 2.0.0 2 topic: otlp_spans 3 auth: plain_text: 4 username: example password: example tls: 5 ca_file: ca.pem cert_file: cert.pem key_file: key.pem insecure: false 6 server_name_override: kafka.example.corp 7 service: pipelines: traces: receivers: [kafka] # ... 1 The list of Kafka brokers. The default is localhost:9092 . 2 The Kafka protocol version. For example, 2.0.0 . 
This is a required field. 3 The name of the Kafka topic to read from. The default is otlp_spans . 4 The plain text authentication configuration. If omitted, plain text authentication is disabled. 5 The client-side TLS configuration. Defines paths to the TLS certificates. If omitted, TLS authentication is disabled. 6 Disables verifying the server's certificate chain and host name. The default is false . 7 ServerName indicates the name of the server requested by the client to support virtual hosting. 3.2.10. Kubernetes Cluster Receiver The Kubernetes Cluster Receiver gathers cluster metrics and entity events from the Kubernetes API server. It uses the Kubernetes API to receive information about updates. Authentication for this receiver is only supported through service accounts. Important The Kubernetes Cluster Receiver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the enabled Kubernetes Cluster Receiver # ... config: receivers: k8s_cluster: distribution: openshift collection_interval: 10s exporters: debug: {} service: pipelines: metrics: receivers: [k8s_cluster] exporters: [debug] logs/entity_events: receivers: [k8s_cluster] exporters: [debug] # ... This receiver requires a configured service account, RBAC rules for the cluster role, and the cluster role binding that binds the RBAC with the service account. ServiceAccount object apiVersion: v1 kind: ServiceAccount metadata: labels: app: otelcontribcol name: otelcontribcol # ... RBAC rules for the ClusterRole object apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otelcontribcol labels: app: otelcontribcol rules: - apiGroups: - quota.openshift.io resources: - clusterresourcequotas verbs: - get - list - watch - apiGroups: - "" resources: - events - namespaces - namespaces/status - nodes - nodes/spec - pods - pods/status - replicationcontrollers - replicationcontrollers/status - resourcequotas - services verbs: - get - list - watch - apiGroups: - apps resources: - daemonsets - deployments - replicasets - statefulsets verbs: - get - list - watch - apiGroups: - extensions resources: - daemonsets - deployments - replicasets verbs: - get - list - watch - apiGroups: - batch resources: - jobs - cronjobs verbs: - get - list - watch - apiGroups: - autoscaling resources: - horizontalpodautoscalers verbs: - get - list - watch # ... ClusterRoleBinding object apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otelcontribcol labels: app: otelcontribcol roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: otelcontribcol subjects: - kind: ServiceAccount name: otelcontribcol namespace: default # ... 3.2.11. OpenCensus Receiver The OpenCensus Receiver provides backwards compatibility with the OpenCensus project for easier migration of instrumented codebases. It receives metrics and traces in the OpenCensus format via gRPC or HTTP and Json. OpenTelemetry Collector custom resource with the enabled OpenCensus Receiver # ... 
config: receivers: opencensus: endpoint: 0.0.0.0:9411 1 tls: 2 cors_allowed_origins: 3 - https://*.<example>.com service: pipelines: traces: receivers: [opencensus] # ... 1 The OpenCensus endpoint. If omitted, the default is 0.0.0.0:55678 . 2 The server-side TLS configuration. See the OTLP Receiver configuration section for more details. 3 You can also use the HTTP JSON endpoint to optionally configure CORS, which is enabled by specifying a list of allowed CORS origins in this field. Wildcards with * are accepted under the cors_allowed_origins . To match any origin, enter only * . 3.2.12. Filelog Receiver The Filelog Receiver tails and parses logs from files. Important The Filelog Receiver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the enabled Filelog Receiver that tails a text file # ... config: receivers: filelog: include: [ /simple.log ] 1 operators: 2 - type: regex_parser regex: '^(?P<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) (?P<sev>[A-Z]*) (?P<msg>.*)USD' timestamp: parse_from: attributes.time layout: '%Y-%m-%d %H:%M:%S' severity: parse_from: attributes.sev # ... 1 A list of file glob patterns that match the file paths to be read. 2 An array of Operators. Each Operator performs a simple task such as parsing a timestamp or JSON. To process logs into a desired format, chain the Operators together. 3.2.13. Journald Receiver The Journald Receiver parses journald events from the systemd journal and sends them as logs. Important The Journald Receiver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the enabled Journald Receiver apiVersion: v1 kind: Namespace metadata: name: otel-journald labels: security.openshift.io/scc.podSecurityLabelSync: "false" pod-security.kubernetes.io/enforce: "privileged" pod-security.kubernetes.io/audit: "privileged" pod-security.kubernetes.io/warn: "privileged" # ... --- apiVersion: v1 kind: ServiceAccount metadata: name: privileged-sa namespace: otel-journald # ... --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-journald-binding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:openshift:scc:privileged subjects: - kind: ServiceAccount name: privileged-sa namespace: otel-journald # ... 
--- apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel-journald-logs namespace: otel-journald spec: mode: daemonset serviceAccount: privileged-sa securityContext: allowPrivilegeEscalation: false capabilities: drop: - CHOWN - DAC_OVERRIDE - FOWNER - FSETID - KILL - NET_BIND_SERVICE - SETGID - SETPCAP - SETUID readOnlyRootFilesystem: true seLinuxOptions: type: spc_t seccompProfile: type: RuntimeDefault config: receivers: journald: files: /var/log/journal/*/* priority: info 1 units: 2 - kubelet - crio - init.scope - dnsmasq all: true 3 retry_on_failure: enabled: true 4 initial_interval: 1s 5 max_interval: 30s 6 max_elapsed_time: 5m 7 processors: exporters: debug: {} service: pipelines: logs: receivers: [journald] exporters: [debug] volumeMounts: - name: journal-logs mountPath: /var/log/journal/ readOnly: true volumes: - name: journal-logs hostPath: path: /var/log/journal tolerations: - key: node-role.kubernetes.io/master operator: Exists effect: NoSchedule # ... 1 Filters output by message priorities or priority ranges. The default value is info . 2 Lists the units to read entries from. If empty, entries are read from all units. 3 Includes very long logs and logs with unprintable characters. The default value is false . 4 If set to true , the receiver pauses reading a file and attempts to resend the current batch of logs when encountering an error from downstream components. The default value is false . 5 The time interval to wait after the first failure before retrying. The default value is 1s . The units are ms , s , m , h . 6 The upper bound for the retry backoff interval. When this value is reached, the time interval between consecutive retry attempts remains constant at this value. The default value is 30s . The supported units are ms , s , m , h . 7 The maximum time interval, including retry attempts, for attempting to send a logs batch to a downstream consumer. When this value is reached, the data are discarded. If the set value is 0 , retrying never stops. The default value is 5m . The supported units are ms , s , m , h . 3.2.14. Kubernetes Events Receiver The Kubernetes Events Receiver collects events from the Kubernetes API server. The collected events are converted into logs. Important The Kubernetes Events Receiver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 
OpenShift Container Platform permissions required for the Kubernetes Events Receiver apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector labels: app: otel-collector rules: - apiGroups: - "" resources: - events - namespaces - namespaces/status - nodes - nodes/spec - pods - pods/status - replicationcontrollers - replicationcontrollers/status - resourcequotas - services verbs: - get - list - watch - apiGroups: - apps resources: - daemonsets - deployments - replicasets - statefulsets verbs: - get - list - watch - apiGroups: - extensions resources: - daemonsets - deployments - replicasets verbs: - get - list - watch - apiGroups: - batch resources: - jobs - cronjobs verbs: - get - list - watch - apiGroups: - autoscaling resources: - horizontalpodautoscalers verbs: - get - list - watch # ... OpenTelemetry Collector custom resource with the enabled Kubernetes Event Receiver # ... serviceAccount: otel-collector 1 config: receivers: k8s_events: namespaces: [project1, project2] 2 service: pipelines: logs: receivers: [k8s_events] # ... 1 The service account of the Collector that has the required ClusterRole otel-collector RBAC. 2 The list of namespaces to collect events from. The default value is empty, which means that all namespaces are collected. 3.2.15. Additional resources OpenTelemetry Protocol (OTLP) documentation 3.3. Processors Processors process the data between the time it is received and the time it is exported. Processors are optional. By default, no processors are enabled. Processors must be enabled for every data source. Not all processors support all data sources. Depending on the data source, multiple processors might be enabled. Note that the order of processors matters. Currently, the following General Availability and Technology Preview processors are available for the Red Hat build of OpenTelemetry: Batch Processor Memory Limiter Processor Resource Detection Processor Attributes Processor Resource Processor Span Processor Kubernetes Attributes Processor Filter Processor Routing Processor Cumulative-to-Delta Processor Group-by-Attributes Processor Transform Processor 3.3.1. Batch Processor The Batch Processor batches traces and metrics to reduce the number of outgoing connections needed to transfer the telemetry information. Example of the OpenTelemetry Collector custom resource when using the Batch Processor # ... config: processors: batch: timeout: 5s send_batch_max_size: 10000 service: pipelines: traces: processors: [batch] metrics: processors: [batch] # ... Table 3.2. Parameters used by the Batch Processor Parameter Description Default timeout Sends the batch after a specific time duration, irrespective of the batch size. 200ms send_batch_size Sends the batch of telemetry data after the specified number of spans or metrics. 8192 send_batch_max_size The maximum allowable size of the batch. Must be equal to or greater than the send_batch_size . 0 metadata_keys When activated, a batcher instance is created for each unique set of values found in the client.Metadata . [] metadata_cardinality_limit When the metadata_keys are populated, this configuration restricts the number of distinct metadata key-value combinations processed throughout the duration of the process. 1000 3.3.2. Memory Limiter Processor The Memory Limiter Processor periodically checks the Collector's memory usage and pauses data processing when the soft memory limit is reached. This processor supports traces, metrics, and logs.
The preceding component, which is typically a receiver, is expected to retry sending the same data and may apply backpressure to the incoming data. When memory usage exceeds the hard limit, the Memory Limiter Processor forces garbage collection to run. Example of the OpenTelemetry Collector custom resource when using the Memory Limiter Processor # ... config: processors: memory_limiter: check_interval: 1s limit_mib: 4000 spike_limit_mib: 800 service: pipelines: traces: processors: [memory_limiter] metrics: processors: [memory_limiter] # ... Table 3.3. Parameters used by the Memory Limiter Processor Parameter Description Default check_interval Time between memory usage measurements. The optimal value is 1s . For spiky traffic patterns, you can decrease the check_interval or increase the spike_limit_mib . 0s limit_mib The hard limit, which is the maximum amount of memory in MiB allocated on the heap. Typically, the total memory usage of the OpenTelemetry Collector is about 50 MiB greater than this value. 0 spike_limit_mib Spike limit, which is the maximum expected spike of memory usage in MiB. The optimal value is approximately 20% of limit_mib . To calculate the soft limit, subtract the spike_limit_mib from the limit_mib . 20% of limit_mib limit_percentage Same as the limit_mib but expressed as a percentage of the total available memory. The limit_mib setting takes precedence over this setting. 0 spike_limit_percentage Same as the spike_limit_mib but expressed as a percentage of the total available memory. Intended to be used with the limit_percentage setting. 0 3.3.3. Resource Detection Processor The Resource Detection Processor identifies host resource details in alignment with OpenTelemetry's resource semantic standards. Using the detected information, this processor can add or replace the resource values in telemetry data. This processor supports traces and metrics. You can use this processor with multiple detectors such as the Docker metadata detector or the OTEL_RESOURCE_ATTRIBUTES environment variable detector. Important The Resource Detection Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenShift Container Platform permissions required for the Resource Detection Processor kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: ["config.openshift.io"] resources: ["infrastructures", "infrastructures/status"] verbs: ["get", "watch", "list"] # ... OpenTelemetry Collector using the Resource Detection Processor # ... config: processors: resourcedetection: detectors: [openshift] override: true service: pipelines: traces: processors: [resourcedetection] metrics: processors: [resourcedetection] # ... OpenTelemetry Collector using the Resource Detection Processor with an environment variable detector # ... config: processors: resourcedetection/env: detectors: [env] 1 timeout: 2s override: false # ... 1 Specifies which detector to use. In this example, the environment detector is specified. 3.3.4. Attributes Processor The Attributes Processor can modify attributes of a span, log, or metric.
You can configure this processor to filter and match input data and include or exclude such data for specific actions. Important The Attributes Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . This processor operates on a list of actions, executing them in the order specified in the configuration. The following actions are supported: Insert Inserts a new attribute into the input data when the specified key does not already exist. Update Updates an attribute in the input data if the key already exists. Upsert Combines the insert and update actions: Inserts a new attribute if the key does not exist yet. Updates the attribute if the key already exists. Delete Removes an attribute from the input data. Hash Hashes an existing attribute value as SHA1. Extract Extracts values by using a regular expression rule from the input key to the target keys defined in the rule. If a target key already exists, it is overridden similarly to the Span Processor's to_attributes setting with the existing attribute as the source. Convert Converts an existing attribute to a specified type. OpenTelemetry Collector using the Attributes Processor # ... config: processors: attributes/example: actions: - key: db.table action: delete - key: redacted_span value: true action: upsert - key: copy_key from_attribute: key_original action: update - key: account_id value: 2245 action: insert - key: account_password action: delete - key: account_email action: hash - key: http.status_code action: convert converted_type: int # ... 3.3.5. Resource Processor The Resource Processor applies changes to the resource attributes. This processor supports traces, metrics, and logs. Important The Resource Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector using the Resource Detection Processor # ... config: processors: attributes: - key: cloud.availability_zone value: "zone-1" action: upsert - key: k8s.cluster.name from_attribute: k8s-cluster action: insert - key: redundant-attribute action: delete # ... Attributes represent the actions that are applied to the resource attributes, such as delete the attribute, insert the attribute, or upsert the attribute. 3.3.6. Span Processor The Span Processor modifies the span name based on its attributes or extracts the span attributes from the span name. This processor can also change the span status and include or exclude spans. This processor supports traces. Span renaming requires specifying attributes for the new name by using the from_attributes configuration. 
Important The Span Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector using the Span Processor for renaming a span # ... config: processors: span: name: from_attributes: [<key1>, <key2>, ...] 1 separator: <value> 2 # ... 1 Defines the keys to form the new span name. 2 An optional separator. You can use this processor to extract attributes from the span name. OpenTelemetry Collector using the Span Processor for extracting attributes from a span name # ... config: processors: span/to_attributes: name: to_attributes: rules: - ^\/api\/v1\/document\/(?P<documentId>.*)\/updateUSD 1 # ... 1 This rule defines how the extraction is to be executed. You can define more rules: for example, in this case, if the regular expression matches the name, a documentID attibute is created. In this example, if the input span name is /api/v1/document/12345678/update , this results in the /api/v1/document/{documentId}/update output span name, and a new "documentId"="12345678" attribute is added to the span. You can have the span status modified. OpenTelemetry Collector using the Span Processor for status change # ... config: processors: span/set_status: status: code: Error description: "<error_description>" # ... 3.3.7. Kubernetes Attributes Processor The Kubernetes Attributes Processor enables automatic configuration of spans, metrics, and log resource attributes by using the Kubernetes metadata. This processor supports traces, metrics, and logs. This processor automatically identifies the Kubernetes resources, extracts the metadata from them, and incorporates this extracted metadata as resource attributes into relevant spans, metrics, and logs. It utilizes the Kubernetes API to discover all pods operating within a cluster, maintaining records of their IP addresses, pod UIDs, and other relevant metadata. Minimum OpenShift Container Platform permissions required for the Kubernetes Attributes Processor kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: [''] resources: ['pods', 'namespaces'] verbs: ['get', 'watch', 'list'] # ... OpenTelemetry Collector using the Kubernetes Attributes Processor # ... config: processors: k8sattributes: filter: node_from_env_var: KUBE_NODE_NAME # ... 3.3.8. Filter Processor The Filter Processor leverages the OpenTelemetry Transformation Language to establish criteria for discarding telemetry data. If any of these conditions are satisfied, the telemetry data are discarded. You can combine the conditions by using the logical OR operator. This processor supports traces, metrics, and logs. Important The Filter Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. 
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with an enabled Filter Processor # ... config: processors: filter/ottl: error_mode: ignore 1 traces: span: - 'attributes["container.name"] == "app_container_1"' 2 - 'resource.attributes["host.name"] == "localhost"' 3 # ... 1 Defines the error mode. When set to ignore , ignores errors returned by conditions. When set to propagate , returns the error up the pipeline. An error causes the payload to be dropped from the Collector. 2 Filters the spans that have the container.name == app_container_1 attribute. 3 Filters the spans that have the host.name == localhost resource attribute. 3.3.9. Routing Processor The Routing Processor routes logs, metrics, or traces to specific exporters. This processor can read a header from an incoming gRPC or plain HTTP request or read a resource attribute, and then direct the trace information to relevant exporters according to the read value. Important The Routing Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with an enabled Routing Processor # ... config: processors: routing: from_attribute: X-Tenant 1 default_exporters: 2 - jaeger table: 3 - value: acme exporters: [jaeger/acme] exporters: jaeger: endpoint: localhost:14250 jaeger/acme: endpoint: localhost:24250 # ... 1 The HTTP header name for the lookup value when performing the route. 2 The default exporters that are used when the attribute value is not present in the table. 3 The table that defines which values are to be routed to which exporters. Optionally, you can create an attribute_source configuration, which defines where to look for the attribute that you specify in the from_attribute field. The supported values are context for searching the context including the HTTP headers, and resource for searching the resource attributes. 3.3.10. Cumulative-to-Delta Processor The Cumulative-to-Delta Processor converts monotonic, cumulative-sum, and histogram metrics to monotonic delta metrics. You can filter metrics by using the include: or exclude: fields and specifying the strict or regexp metric name matching. This processor does not convert non-monotonic sums and exponential histograms. Important The Cumulative-to-Delta Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Example of an OpenTelemetry Collector custom resource with an enabled Cumulative-to-Delta Processor # ...
config: processors: cumulativetodelta: include: 1 match_type: strict 2 metrics: 3 - <metric_1_name> - <metric_2_name> exclude: 4 match_type: regexp metrics: - "<regular_expression_for_metric_names>" # ... 1 Optional: Configures which metrics to include. When omitted, all metrics, except for those listed in the exclude field, are converted to delta metrics. 2 Defines a value provided in the metrics field as a strict exact match or regexp regular expression. 3 Lists the metric names, which are exact matches or matches for regular expressions, of the metrics to be converted to delta metrics. If a metric matches both the include and exclude filters, the exclude filter takes precedence. 4 Optional: Configures which metrics to exclude. When omitted, no metrics are excluded from conversion to delta metrics. 3.3.11. Group-by-Attributes Processor The Group-by-Attributes Processor groups all spans, log records, and metric datapoints that share the same attributes by reassigning them to a Resource that matches those attributes. Important The Group-by-Attributes Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . At minimum, configuring this processor involves specifying an array of attribute keys to be used to group spans, log records, or metric datapoints together, as in the following example: # ... config: processors: groupbyattrs: keys: 1 - <key1> 2 - <key2> # ... 1 Specifies attribute keys to group by. 2 If a processed span, log record, or metric datapoint contains at least one of the specified attribute keys, it is reassigned to a Resource that shares the same attribute values; and if no such Resource exists, a new one is created. If none of the specified attribute keys is present in the processed span, log record, or metric datapoint, then it remains associated with its current Resource. Multiple instances of the same Resource are consolidated. 3.3.12. Transform Processor The Transform Processor enables modification of telemetry data according to specified rules and in the OpenTelemetry Transformation Language (OTTL) . For each signal type, the processor processes a series of conditions and statements associated with a specific OTTL Context type and then executes them in sequence on incoming telemetry data as specified in the configuration. Each condition and statement can access and modify telemetry data by using various functions, allowing conditions to dictate if a function is to be executed. All statements are written in the OTTL. You can configure multiple context statements for different signals, traces, metrics, and logs. The value of the context type specifies which OTTL Context the processor must use when interpreting the associated statements. Important The Transform Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. 
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Configuration summary # ... config: processors: transform: error_mode: ignore 1 <trace|metric|log>_statements: 2 - context: <string> 3 conditions: 4 - <string> - <string> statements: 5 - <string> - <string> - <string> - context: <string> statements: - <string> - <string> - <string> # ... 1 Optional: See the following table "Values for the optional error_mode field". 2 Indicates a signal to be transformed. 3 See the following table "Values for the context field". 4 Optional: Conditions for performing a transformation. 5 Statements, written in the OTTL, that perform the transformation. Configuration example # ... config: transform: error_mode: ignore trace_statements: 1 - context: resource statements: - keep_keys(attributes, ["service.name", "service.namespace", "cloud.region", "process.command_line"]) 2 - replace_pattern(attributes["process.command_line"], "password\\=[^\\s]*(\\s?)", "password=***") 3 - limit(attributes, 100, []) - truncate_all(attributes, 4096) - context: span 4 statements: - set(status.code, 1) where attributes["http.path"] == "/health" - set(name, attributes["http.route"]) - replace_match(attributes["http.target"], "/user/*/list/*", "/user/{userId}/list/{listId}") - limit(attributes, 100, []) - truncate_all(attributes, 4096) # ... 1 Transforms a trace signal. 2 Keeps only the listed keys on the resource attributes. 3 Replaces password values in the process.command_line attribute with asterisks. 4 Performs transformations at the span level. Table 3.4. Values for the context field Signal Statement Valid Contexts trace_statements resource , scope , span , spanevent metric_statements resource , scope , metric , datapoint log_statements resource , scope , log Table 3.5. Values for the optional error_mode field Value Description ignore Ignores and logs errors returned by statements and then continues to the next statement. silent Ignores and does not log errors returned by statements and then continues to the next statement. propagate Returns errors up the pipeline and drops the payload. Implicit default. 3.3.13. Additional resources OpenTelemetry Protocol (OTLP) documentation 3.4. Exporters Exporters send data to one or more back ends or destinations. An exporter can be push or pull based. By default, no exporters are configured. One or more exporters must be configured. Exporters can support one or more data sources. Exporters might be used with their default settings, but many exporters require configuration to specify at least the destination and security settings. Currently, the following General Availability and Technology Preview exporters are available for the Red Hat build of OpenTelemetry: OTLP Exporter OTLP HTTP Exporter Debug Exporter Load Balancing Exporter Prometheus Exporter Prometheus Remote Write Exporter Kafka Exporter AWS CloudWatch Logs Exporter AWS EMF Exporter AWS X-Ray Exporter File Exporter 3.4.1. OTLP Exporter The OTLP gRPC Exporter exports traces and metrics by using the OpenTelemetry protocol (OTLP). OpenTelemetry Collector custom resource with the enabled OTLP Exporter # ...
config: exporters: otlp: endpoint: tempo-ingester:4317 1 tls: 2 ca_file: ca.pem cert_file: cert.pem key_file: key.pem insecure: false 3 insecure_skip_verify: false 4 reload_interval: 1h 5 server_name_override: <name> 6 headers: 7 X-Scope-OrgID: "dev" service: pipelines: traces: exporters: [otlp] metrics: exporters: [otlp] # ... 1 The OTLP gRPC endpoint. If the https:// scheme is used, then client transport security is enabled and overrides the insecure setting in the tls . 2 The client-side TLS configuration. Defines paths to TLS certificates. 3 Disables client transport security when set to true . The default value is false . 4 Skips verifying the certificate when set to true . The default value is false . 5 Specifies the time interval at which the certificate is reloaded. If the value is not set, the certificate is never reloaded. The reload_interval accepts a string containing valid units of time such as ns , us (or µs ), ms , s , m , h . 6 Overrides the virtual host name of authority such as the authority header field in requests. You can use this for testing. 7 Headers are sent for every request performed during an established connection. 3.4.2. OTLP HTTP Exporter The OTLP HTTP Exporter exports traces and metrics by using the OpenTelemetry protocol (OTLP). OpenTelemetry Collector custom resource with the enabled OTLP HTTP Exporter # ... config: exporters: otlphttp: endpoint: http://tempo-ingester:4318 1 tls: 2 headers: 3 X-Scope-OrgID: "dev" disable_keep_alives: false 4 service: pipelines: traces: exporters: [otlphttp] metrics: exporters: [otlphttp] # ... 1 The OTLP HTTP endpoint. If the https:// scheme is used, then client transport security is enabled and overrides the insecure setting in the tls . 2 The client-side TLS configuration. Defines paths to TLS certificates. 3 Headers are sent in every HTTP request. 4 If true , disables HTTP keep-alives. The connection to the server is then used for a single HTTP request only. 3.4.3. Debug Exporter The Debug Exporter prints traces and metrics to the standard output. OpenTelemetry Collector custom resource with the enabled Debug Exporter # ... config: exporters: debug: verbosity: detailed 1 sampling_initial: 5 2 sampling_thereafter: 200 3 use_internal_logger: true 4 service: pipelines: traces: exporters: [debug] metrics: exporters: [debug] # ... 1 Verbosity of the debug export: detailed , normal , or basic . When set to detailed , pipeline data are verbosely logged. Defaults to normal . 2 Initial number of messages logged per second. The default value is 2 messages per second. 3 Sampling rate after the initial number of messages, the value in sampling_initial , has been logged. Disabled by default with the default value of 1 . Sampling is enabled with values greater than 1 . For more information, see the page for the sampler function in the zapcore package on the Go Project's website. 4 When set to true , enables output from the Collector's internal logger for the exporter. 3.4.4. Load Balancing Exporter The Load Balancing Exporter consistently exports spans, metrics, and logs according to the routing_key configuration. Important The Load Balancing Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production.
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the enabled Load Balancing Exporter # ... config: exporters: loadbalancing: routing_key: "service" 1 protocol: otlp: 2 timeout: 1s resolver: 3 static: 4 hostnames: - backend-1:4317 - backend-2:4317 dns: 5 hostname: otelcol-headless.observability.svc.cluster.local k8s: 6 service: lb-svc.kube-public ports: - 15317 - 16317 # ... 1 The routing_key: service exports spans for the same service name to the same Collector instance to provide accurate aggregation. The routing_key: traceID exports spans based on their traceID . The implicit default is traceID based routing. 2 The OTLP is the only supported load-balancing protocol. All options of the OTLP exporter are supported. 3 You can configure only one resolver. 4 The static resolver distributes the load across the listed endpoints. 5 You can use the DNS resolver only with a Kubernetes headless service. 6 The Kubernetes resolver is recommended. 3.4.5. Prometheus Exporter The Prometheus Exporter exports metrics in the Prometheus or OpenMetrics formats. Important The Prometheus Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the enabled Prometheus Exporter # ... config: exporters: prometheus: endpoint: 0.0.0.0:8889 1 tls: 2 ca_file: ca.pem cert_file: cert.pem key_file: key.pem namespace: prefix 3 const_labels: 4 label1: value1 enable_open_metrics: true 5 resource_to_telemetry_conversion: 6 enabled: true metric_expiration: 180m 7 add_metric_suffixes: false 8 service: pipelines: metrics: exporters: [prometheus] # ... 1 The network endpoint where the metrics are exposed. The Red Hat build of OpenTelemetry Operator automatically exposes the port specified in the endpoint field to the <instance_name>-collector service. 2 The server-side TLS configuration. Defines paths to TLS certificates. 3 If set, exports metrics under the provided value. 4 Key-value pair labels that are applied for every exported metric. 5 If true , metrics are exported by using the OpenMetrics format. Exemplars are only exported in the OpenMetrics format and only for histogram and monotonic sum metrics such as counter . Disabled by default. 6 If enabled is true , all the resource attributes are converted to metric labels. Disabled by default. 7 Defines how long metrics are exposed without updates. The default is 5m . 8 Adds the metrics types and units suffixes. Must be disabled if the monitor tab in the Jaeger console is enabled. The default is true . Note When the spec.observability.metrics.enableMetrics field in the OpenTelemetryCollector custom resource (CR) is set to true , the OpenTelemetryCollector CR automatically creates a Prometheus ServiceMonitor or PodMonitor CR to enable Prometheus to scrape your metrics. 3.4.6. 
Prometheus Remote Write Exporter The Prometheus Remote Write Exporter exports metrics to compatible back ends. Important The Prometheus Remote Write Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the enabled Prometheus Remote Write Exporter # ... config: exporters: prometheusremotewrite: endpoint: "https://my-prometheus:7900/api/v1/push" 1 tls: 2 ca_file: ca.pem cert_file: cert.pem key_file: key.pem target_info: true 3 export_created_metric: true 4 max_batch_size_bytes: 3000000 5 service: pipelines: metrics: exporters: [prometheusremotewrite] # ... 1 Endpoint for sending the metrics. 2 Server-side TLS configuration. Defines paths to TLS certificates. 3 When set to true , creates a target_info metric for each resource metric. 4 When set to true , exports a _created metric for the Summary, Histogram, and Monotonic Sum metric points. 5 Maximum size of the batch of samples that is sent to the remote write endpoint. Exceeding this value results in batch splitting. The default value is 3000000 , which is approximately 2.861 megabytes. Warning This exporter drops non-cumulative monotonic, histogram, and summary OTLP metrics. You must enable the --web.enable-remote-write-receiver feature flag on the remote Prometheus instance. Without it, pushing the metrics to the instance using this exporter fails. 3.4.7. Kafka Exporter The Kafka Exporter exports logs, metrics, and traces to Kafka. This exporter uses a synchronous producer that blocks and does not batch messages. You must use it with batch and queued retry processors for higher throughput and resiliency. Important The Kafka Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the enabled Kafka Exporter # ... config: exporters: kafka: brokers: ["localhost:9092"] 1 protocol_version: 2.0.0 2 topic: otlp_spans 3 auth: plain_text: 4 username: example password: example tls: 5 ca_file: ca.pem cert_file: cert.pem key_file: key.pem insecure: false 6 server_name_override: kafka.example.corp 7 service: pipelines: traces: exporters: [kafka] # ... 1 The list of Kafka brokers. The default is localhost:9092 . 2 The Kafka protocol version. For example, 2.0.0 . This is a required field. 3 The name of the Kafka topic to read from. The following are the defaults: otlp_spans for traces, otlp_metrics for metrics, otlp_logs for logs. 4 The plain text authentication configuration. If omitted, plain text authentication is disabled. 5 The client-side TLS configuration. Defines paths to the TLS certificates. 
If omitted, TLS authentication is disabled. 6 Disables verifying the server's certificate chain and host name. The default is false . 7 ServerName indicates the name of the server requested by the client to support virtual hosting. 3.4.8. AWS CloudWatch Logs Exporter The AWS CloudWatch Logs Exporter sends logs data to the Amazon CloudWatch Logs service and signs requests by using the AWS SDK for Go and the default credential provider chain. Important The AWS CloudWatch Logs Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the enabled AWS CloudWatch Logs Exporter # ... config: exporters: awscloudwatchlogs: log_group_name: "<group_name_of_amazon_cloudwatch_logs>" 1 log_stream_name: "<log_stream_of_amazon_cloudwatch_logs>" 2 region: <aws_region_of_log_stream> 3 endpoint: <service_endpoint_of_amazon_cloudwatch_logs> 4 log_retention: <supported_value_in_days> 5 # ... 1 Required. If the log group does not exist yet, it is automatically created. 2 Required. If the log stream does not exist yet, it is automatically created. 3 Optional. If the AWS region is not already set in the default credential chain, you must specify it. 4 Optional. You can override the default Amazon CloudWatch Logs service endpoint to which the requests are forwarded. For the list of service endpoints by region, see Amazon CloudWatch Logs endpoints and quotas (AWS General Reference). 5 Optional. With this parameter, you can set the log retention policy for new Amazon CloudWatch log groups. If this parameter is omitted or set to 0 , the logs never expire by default. Supported values for retention in days are 1 , 3 , 5 , 7 , 14 , 30 , 60 , 90 , 120 , 150 , 180 , 365 , 400 , 545 , 731 , 1827 , 2192 , 2557 , 2922 , 3288 , or 3653 . Additional resources What is Amazon CloudWatch Logs? (Amazon CloudWatch Logs User Guide) Specifying Credentials (AWS SDK for Go Developer Guide) Amazon CloudWatch Logs endpoints and quotas (AWS General Reference) 3.4.9. AWS EMF Exporter The AWS EMF Exporter converts the following OpenTelemetry metrics datapoints to the AWS CloudWatch Embedded Metric Format (EMF): Int64DataPoints DoubleDataPoints SummaryDataPoints The EMF metrics are then sent directly to the Amazon CloudWatch Logs service by using the PutLogEvents API. One of the benefits of using this exporter is the possibility to view logs and metrics in the Amazon CloudWatch console at https://console.aws.amazon.com/cloudwatch/ . Important The AWS EMF Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 
OpenTelemetry Collector custom resource with the enabled AWS EMF Exporter # ... config: exporters: awsemf: log_group_name: "<group_name_of_amazon_cloudwatch_logs>" 1 log_stream_name: "<log_stream_of_amazon_cloudwatch_logs>" 2 resource_to_telemetry_conversion: 3 enabled: true region: <region> 4 endpoint: <endpoint> 5 log_retention: <supported_value_in_days> 6 namespace: <custom_namespace> 7 # ... 1 Customized log group name. 2 Customized log stream name. 3 Optional. Converts resource attributes to telemetry attributes such as metric labels. Disabled by default. 4 The AWS region of the log stream. If a region is not already set in the default credential provider chain, you must specify the region. 5 Optional. You can override the default Amazon CloudWatch Logs service endpoint to which the requests are forwarded. For the list of service endpoints by region, see Amazon CloudWatch Logs endpoints and quotas (AWS General Reference). 6 Optional. With this parameter, you can set the log retention policy for new Amazon CloudWatch log groups. If this parameter is omitted or set to 0 , the logs never expire by default. Supported values for retention in days are 1 , 3 , 5 , 7 , 14 , 30 , 60 , 90 , 120 , 150 , 180 , 365 , 400 , 545 , 731 , 1827 , 2192 , 2557 , 2922 , 3288 , or 3653 . 7 Optional. A custom namespace for the Amazon CloudWatch metrics. Log group name The log_group_name parameter allows you to customize the log group name and supports the default /metrics/default value or the following placeholders: /aws/metrics/{ClusterName} This placeholder is used to search for the ClusterName or aws.ecs.cluster.name resource attribute in the metrics data and replace it with the actual cluster name. {NodeName} This placeholder is used to search for the NodeName or k8s.node.name resource attribute. {TaskId} This placeholder is used to search for the TaskId or aws.ecs.task.id resource attribute. If no resource attribute is found in the resource attribute map, the placeholder is replaced by the undefined value. Log stream name The log_stream_name parameter allows you to customize the log stream name and supports the default otel-stream value or the following placeholders: {ClusterName} This placeholder is used to search for the ClusterName or aws.ecs.cluster.name resource attribute. {ContainerInstanceId} This placeholder is used to search for the ContainerInstanceId or aws.ecs.container.instance.id resource attribute. This resource attribute is valid only for the AWS ECS EC2 launch type. {NodeName} This placeholder is used to search for the NodeName or k8s.node.name resource attribute. {TaskDefinitionFamily} This placeholder is used to search for the TaskDefinitionFamily or aws.ecs.task.family resource attribute. {TaskId} This placeholder is used to search for the TaskId or aws.ecs.task.id resource attribute in the metrics data and replace it with the actual task ID. If no resource attribute is found in the resource attribute map, the placeholder is replaced by the undefined value. Additional resources Specification: Embedded metric format (Amazon CloudWatch User Guide) PutLogEvents (Amazon CloudWatch Logs API Reference) Amazon CloudWatch Logs endpoints and quotas (AWS General Reference) 3.4.10. AWS X-Ray Exporter The AWS X-Ray Exporter converts OpenTelemetry spans to AWS X-Ray Segment Documents and then sends them directly to the AWS X-Ray service. The AWS X-Ray Exporter uses the PutTraceSegments API and signs requests by using the AWS SDK for Go and the default credential provider chain. 
Important The AWS X-Ray Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the enabled AWS X-Ray Exporter # ... config: exporters: awsxray: region: "<region>" 1 endpoint: <endpoint> 2 resource_arn: "<aws_resource_arn>" 3 role_arn: "<iam_role>" 4 indexed_attributes: [ "<indexed_attr_0>", "<indexed_attr_1>" ] 5 aws_log_groups: ["<group1>", "<group2>"] 6 request_timeout_seconds: 120 7 # ... 1 The destination region for the X-Ray segments sent to the AWS X-Ray service. For example, eu-west-1 . 2 Optional. You can override the default Amazon CloudWatch Logs service endpoint to which the requests are forwarded. For the list of service endpoints by region, see Amazon CloudWatch Logs endpoints and quotas (AWS General Reference). 3 The Amazon Resource Name (ARN) of the AWS resource that is running the Collector. 4 The AWS Identity and Access Management (IAM) role for uploading the X-Ray segments to a different account. 5 The list of attribute names to be converted to X-Ray annotations. 6 The list of log group names for Amazon CloudWatch Logs. 7 Time duration in seconds before timing out a request. If omitted, the default value is 30 . Additional resources What is AWS X-Ray? (AWS X-Ray Developer Guide) AWS SDK for Go API Reference (AWS Documentation) Specifying Credentials (AWS SDK for Go Developer Guide) IAM roles (AWS Identity and Access Management User Guide) 3.4.11. File Exporter The File Exporter writes telemetry data to files in persistent storage and supports file operations such as rotation, compression, and writing to multiple files. With this exporter, you can also use a resource attribute to control file naming. The only required setting is path , which specifies the destination path for telemetry files in the persistent-volume file system. Important The File Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the enabled File Exporter # ... config: | exporters: file: path: /data/metrics.json 1 rotation: 2 max_megabytes: 10 3 max_days: 3 4 max_backups: 3 5 localtime: true 6 format: proto 7 compression: zstd 8 flush_interval: 5 9 # ... 1 The file-system path where the data is to be written. There is no default. 2 File rotation is an optional feature of this exporter. By default, telemetry data is exported to a single file. Add the rotation setting to enable file rotation. 3 The max_megabytes setting is the maximum size a file is allowed to reach until it is rotated. The default is 100 . 
4 The max_days setting is for how many days a file is to be retained, counting from the timestamp in the file name. There is no default. 5 The max_backups setting is for retaining several older files. The default is 100 . 6 The localtime setting specifies the local-time format for the timestamp, which is appended to the file name in front of any extension, when the file is rotated. The default is the Coordinated Universal Time (UTC). 7 The format for encoding the telemetry data before writing it to a file. The default format is json . The proto format is also supported. 8 File compression is optional and not set by default. This setting defines the compression algorithm for the data that is exported to a file. Currently, only the zstd compression algorithm is supported. There is no default. 9 The time interval between flushes. A value without a unit is set in nanoseconds. This setting is ignored when file rotation is enabled through the rotation settings. 3.4.12. Additional resources OpenTelemetry Protocol (OTLP) documentation 3.5. Connectors A connector connects two pipelines. It consumes data as an exporter at the end of one pipeline and emits data as a receiver at the start of another pipeline. It can consume and emit data of the same or different data type. It can generate and emit data to summarize the consumed data, or it can merely replicate or route data. Currently, the following General Availability and Technology Preview connectors are available for the Red Hat build of OpenTelemetry: Count Connector Routing Connector Forward Connector Spanmetrics Connector 3.5.1. Count Connector The Count Connector counts trace spans, trace span events, metrics, metric data points, and log records in exporter pipelines. Important The Count Connector is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The following are the default metric names: trace.span.count trace.span.event.count metric.count metric.datapoint.count log.record.count You can also expose custom metric names. OpenTelemetry Collector custom resource (CR) with an enabled Count Connector # ... config: receivers: otlp: protocols: grpc: endpoint: 0.0.0.0:4317 exporters: prometheus: endpoint: 0.0.0.0:8889 connectors: count: {} service: pipelines: 1 traces/in: receivers: [otlp] exporters: [count] 2 metrics/out: receivers: [count] 3 exporters: [prometheus] # ... 1 It is important to correctly configure the Count Connector as an exporter or receiver in the pipeline and to export the generated metrics to the correct exporter. 2 The Count Connector is configured to receive spans as an exporter. 3 The Count Connector is configured to emit generated metrics as a receiver. Tip If the Count Connector is not generating the expected metrics, you can check whether the OpenTelemetry Collector is receiving the expected spans, metrics, and logs, and whether the telemetry data flows through the Count Connector as expected. You can also use the Debug Exporter to inspect the incoming telemetry data.
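For example, the following minimal sketch, which assumes the pipeline layout from the preceding example, adds the Debug Exporter next to the Count Connector so that both the incoming spans and the generated metrics are printed in the Collector logs:

# ...
  config:
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: 0.0.0.0:4317
    exporters:
      prometheus:
        endpoint: 0.0.0.0:8889
      debug: {}
    connectors:
      count: {}
    service:
      pipelines:
        traces/in:
          receivers: [otlp]
          exporters: [count, debug] # the Debug Exporter prints the spans that reach the Count Connector
        metrics/out:
          receivers: [count]
          exporters: [prometheus, debug] # the Debug Exporter prints the metrics that the Count Connector generates
# ...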
The Count Connector can count telemetry data according to defined conditions and expose those data as metrics when configured by using such fields as spans , spanevents , metrics , datapoints , or logs . See the example. Example OpenTelemetry Collector CR for the Count Connector to count spans by conditions # ... config: connectors: count: spans: 1 <custom_metric_name>: 2 description: "<custom_metric_description>" conditions: - 'attributes["env"] == "dev"' - 'name == "devevent"' # ... 1 In this example, the exposed metric counts spans with the specified conditions. 2 You can specify a custom metric name such as cluster.prod.event.count . Tip Write conditions correctly and follow the required syntax for attribute matching or telemetry field conditions. Improperly defined conditions are the most likely sources of errors. The Count Connector can count telemetry data according to defined attributes when configured by using such fields as spans , spanevents , metrics , datapoints , or logs . See the example. The attribute keys are injected into the telemetry data. You must define a value for the default_value field for missing attributes. Example OpenTelemetry Collector CR for the Count Connector to count logs by attributes # ... config: connectors: count: logs: 1 <custom_metric_name>: 2 description: "<custom_metric_description>" attributes: - key: env default_value: unknown 3 # ... 1 Specifies attributes for logs. 2 You can specify a custom metric name such as my.log.count . 3 Defines a default value when the attribute is not set. 3.5.2. Routing Connector The Routing Connector routes logs, metrics, and traces to specified pipelines according to resource attributes and their routing conditions, which are written as OpenTelemetry Transformation Language (OTTL) statements. Important The Routing Connector is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with an enabled Routing Connector # ... config: connectors: routing: table: 1 - statement: route() where attributes["X-Tenant"] == "dev" 2 pipelines: [traces/dev] 3 - statement: route() where attributes["X-Tenant"] == "prod" pipelines: [traces/prod] default_pipelines: [traces/dev] 4 error_mode: ignore 5 match_once: false 6 service: pipelines: traces/in: receivers: [otlp] exporters: [routing] traces/dev: receivers: [routing] exporters: [otlp/dev] traces/prod: receivers: [routing] exporters: [otlp/prod] # ... 1 Connector routing table. 2 Routing conditions written as OTTL statements. 3 Destination pipelines for routing the matching telemetry data. 4 Destination pipelines for routing the telemetry data for which no routing condition is satisfied. 5 Error-handling mode: The propagate value is for logging an error and dropping the payload. The ignore value is for ignoring the condition and attempting to match with the next one. The silent value is the same as ignore but without logging the error. The default is propagate . 6 When set to true , the payload is routed only to the first pipeline whose routing condition is met. The default is false .
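The same pattern applies to the other signal types. The following sketch is illustrative rather than taken from this document: it assumes that incoming logs carry a k8s.namespace.name resource attribute, for example one set by the Kubernetes Attributes Processor, and that the otlp/prod and otlp/dev exporters are defined elsewhere in the configuration:

# ...
  config:
    connectors:
      routing:
        table:
          - statement: route() where attributes["k8s.namespace.name"] == "prod" # evaluated against resource attributes
            pipelines: [logs/prod]
        default_pipelines: [logs/dev] # logs that match no condition
        error_mode: ignore
    service:
      pipelines:
        logs/in:
          receivers: [otlp]
          exporters: [routing]
        logs/prod:
          receivers: [routing]
          exporters: [otlp/prod]
        logs/dev:
          receivers: [routing]
          exporters: [otlp/dev]
# ...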
3.5.3. Forward Connector The Forward Connector merges two pipelines of the same type. Important The Forward Connector is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with an enabled Forward Connector # ... config: receivers: otlp: protocols: grpc: jaeger: protocols: grpc: processors: batch: exporters: otlp: endpoint: tempo-simplest-distributor:4317 tls: insecure: true connectors: forward: {} service: pipelines: traces/regiona: receivers: [otlp] processors: [] exporters: [forward] traces/regionb: receivers: [jaeger] processors: [] exporters: [forward] traces: receivers: [forward] processors: [batch] exporters: [otlp] # ... 3.5.4. Spanmetrics Connector The Spanmetrics Connector aggregates Request, Error, and Duration (R.E.D) OpenTelemetry metrics from span data. OpenTelemetry Collector custom resource with an enabled Spanmetrics Connector # ... config: connectors: spanmetrics: metrics_flush_interval: 15s 1 service: pipelines: traces: exporters: [spanmetrics] metrics: receivers: [spanmetrics] # ... 1 Defines the flush interval of the generated metrics. Defaults to 15s . 3.5.5. Additional resources OpenTelemetry Protocol (OTLP) documentation 3.6. Extensions Extensions add capabilities to the Collector. For example, authentication can be added to the receivers and exporters automatically. Currently, the following General Availability and Technology Preview extensions are available for the Red Hat build of OpenTelemetry: BearerTokenAuth Extension OAuth2Client Extension File Storage Extension OIDC Auth Extension Jaeger Remote Sampling Extension Performance Profiler Extension Health Check Extension zPages Extension 3.6.1. BearerTokenAuth Extension The BearerTokenAuth Extension is an authenticator for receivers and exporters that are based on the HTTP and the gRPC protocol. You can use the OpenTelemetry Collector custom resource to configure client authentication and server authentication for the BearerTokenAuth Extension on the receiver and exporter side. This extension supports traces, metrics, and logs. OpenTelemetry Collector custom resource with client and server authentication configured for the BearerTokenAuth Extension # ... config: extensions: bearertokenauth: scheme: "Bearer" 1 token: "<token>" 2 filename: "<token_file>" 3 receivers: otlp: protocols: http: auth: authenticator: bearertokenauth 4 exporters: otlp: auth: authenticator: bearertokenauth 5 service: extensions: [bearertokenauth] pipelines: traces: receivers: [otlp] exporters: [otlp] # ... 1 You can configure the BearerTokenAuth Extension to send a custom scheme . The default is Bearer . 2 You can add the BearerTokenAuth Extension token as metadata to identify a message. 3 Path to a file that contains an authorization token that is transmitted with every message. 4 You can assign the authenticator configuration to an OTLP Receiver. 5 You can assign the authenticator configuration to an OTLP Exporter. 3.6.2. 
OAuth2Client Extension The OAuth2Client Extension is an authenticator for exporters that are based on the HTTP and the gRPC protocol. Client authentication for the OAuth2Client Extension is configured in a separate section in the OpenTelemetry Collector custom resource. This extension supports traces, metrics, and logs. Important The OAuth2Client Extension is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with client authentication configured for the OAuth2Client Extension # ... config: extensions: oauth2client: client_id: <client_id> 1 client_secret: <client_secret> 2 endpoint_params: 3 audience: <audience> token_url: https://example.com/oauth2/default/v1/token 4 scopes: ["api.metrics"] 5 # tls settings for the token client tls: 6 insecure: true 7 ca_file: /var/lib/mycert.pem 8 cert_file: <cert_file> 9 key_file: <key_file> 10 timeout: 2s 11 receivers: otlp: protocols: http: {} exporters: otlp: auth: authenticator: oauth2client 12 service: extensions: [oauth2client] pipelines: traces: receivers: [otlp] exporters: [otlp] # ... 1 Client identifier, which is provided by the identity provider. 2 Confidential key used to authenticate the client to the identity provider. 3 Further metadata, in the key-value pair format, which is transferred during authentication. For example, audience specifies the intended audience for the access token, indicating the recipient of the token. 4 The URL of the OAuth2 token endpoint, where the Collector requests access tokens. 5 The scopes define the specific permissions or access levels requested by the client. 6 The Transport Layer Security (TLS) settings for the token client, which is used to establish a secure connection when requesting tokens. 7 When set to true , configures the Collector to use an insecure or non-verified TLS connection to call the configured token endpoint. 8 The path to a Certificate Authority (CA) file that is used to verify the server's certificate during the TLS handshake. 9 The path to the client certificate file that the client must use to authenticate itself to the OAuth2 server if required. 10 The path to the client's private key file that is used with the client certificate if needed for authentication. 11 Sets a timeout for the token client's request. 12 You can assign the authenticator configuration to an OTLP exporter. 3.6.3. File Storage Extension The File Storage Extension supports traces, metrics, and logs. This extension can persist the state to the local file system. This extension persists the sending queue for the OpenTelemetry Protocol (OTLP) exporters that are based on the HTTP and the gRPC protocols. This extension requires the read and write access to a directory. This extension can use a default directory, but the default directory must already exist. Important The File Storage Extension is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. 
Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with a configured File Storage Extension that persists an OTLP sending queue # ... config: extensions: file_storage/all_settings: directory: /var/lib/otelcol/mydir 1 timeout: 1s 2 compaction: on_start: true 3 directory: /tmp/ 4 max_transaction_size: 65_536 5 fsync: false 6 exporters: otlp: sending_queue: storage: file_storage/all_settings 7 service: extensions: [file_storage/all_settings] 8 pipelines: traces: receivers: [otlp] exporters: [otlp] # ... 1 Specifies the directory in which the telemetry data is stored. 2 Specifies the timeout time interval for opening the stored files. 3 Starts compaction when the Collector starts. If omitted, the default is false . 4 Specifies the directory in which the compactor stores the telemetry data. 5 Defines the maximum size of the compaction transaction. To ignore the transaction size, set to zero. If omitted, the default is 65536 bytes. 6 When set, forces the database to perform an fsync call after each write operation. This helps to ensure database integrity if there is an interruption to the database process, but at the cost of performance. 7 Buffers the OTLP Exporter data on the local file system. 8 Starts the File Storage Extension by the Collector. 3.6.4. OIDC Auth Extension The OIDC Auth Extension authenticates incoming requests to receivers by using the OpenID Connect (OIDC) protocol. It validates the ID token in the authorization header against the issuer and updates the authentication context of the incoming request. Important The OIDC Auth Extension is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the configured OIDC Auth Extension # ... config: extensions: oidc: attribute: authorization 1 issuer_url: https://example.com/auth/realms/opentelemetry 2 issuer_ca_path: /var/run/tls/issuer.pem 3 audience: otel-collector 4 username_claim: email 5 receivers: otlp: protocols: grpc: auth: authenticator: oidc exporters: debug: {} service: extensions: [oidc] pipelines: traces: receivers: [otlp] exporters: [debug] # ... 1 The name of the header that contains the ID token. The default name is authorization . 2 The base URL of the OIDC provider. 3 Optional: The path to the issuer's CA certificate. 4 The audience for the token. 5 The name of the claim that contains the username. The default name is sub . 3.6.5. Jaeger Remote Sampling Extension The Jaeger Remote Sampling Extension enables serving sampling strategies after Jaeger's remote sampling API. You can configure this extension to proxy requests to a backing remote sampling server such as a Jaeger collector down the pipeline or to a static JSON file from the local file system. 
Important The Jaeger Remote Sampling Extension is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with a configured Jaeger Remote Sampling Extension # ... config: extensions: jaegerremotesampling: source: reload_interval: 30s 1 remote: endpoint: jaeger-collector:14250 2 file: /etc/otelcol/sampling_strategies.json 3 receivers: otlp: protocols: http: {} exporters: debug: {} service: extensions: [jaegerremotesampling] pipelines: traces: receivers: [otlp] exporters: [debug] # ... 1 The time interval at which the sampling configuration is updated. 2 The endpoint for reaching the Jaeger remote sampling strategy provider. 3 The path to a local file that contains a sampling strategy configuration in the JSON format. Example of a Jaeger Remote Sampling strategy file { "service_strategies": [ { "service": "foo", "type": "probabilistic", "param": 0.8, "operation_strategies": [ { "operation": "op1", "type": "probabilistic", "param": 0.2 }, { "operation": "op2", "type": "probabilistic", "param": 0.4 } ] }, { "service": "bar", "type": "ratelimiting", "param": 5 } ], "default_strategy": { "type": "probabilistic", "param": 0.5, "operation_strategies": [ { "operation": "/health", "type": "probabilistic", "param": 0.0 }, { "operation": "/metrics", "type": "probabilistic", "param": 0.0 } ] } } 3.6.6. Performance Profiler Extension The Performance Profiler Extension enables the Go net/http/pprof endpoint. Developers use this extension to collect performance profiles and investigate issues with the service. Important The Performance Profiler Extension is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the configured Performance Profiler Extension # ... config: extensions: pprof: endpoint: localhost:1777 1 block_profile_fraction: 0 2 mutex_profile_fraction: 0 3 save_to_file: test.pprof 4 receivers: otlp: protocols: http: {} exporters: debug: {} service: extensions: [pprof] pipelines: traces: receivers: [otlp] exporters: [debug] # ... 1 The endpoint at which this extension listens. Use localhost:<port> to make it available only locally, or ":<port>" to make it available on all network interfaces. The default value is localhost:1777 . 2 Sets a fraction of blocking events to be profiled. To disable profiling, set this to 0 or a negative integer. See the documentation for the runtime package. The default value is 0 . 3 Sets a fraction of mutex contention events to be profiled. To disable profiling, set this to 0 or a negative integer. See the documentation for the runtime package. The default value is 0 .
4 The name of the file in which the CPU profile is to be saved. Profiling starts when the Collector starts. Profiling is saved to the file when the Collector is terminated. 3.6.7. Health Check Extension The Health Check Extension provides an HTTP URL for checking the status of the OpenTelemetry Collector. You can use this extension as a liveness and readiness probe on OpenShift. Important The Health Check Extension is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the configured Health Check Extension # ... config: extensions: health_check: endpoint: "0.0.0.0:13133" 1 tls: 2 ca_file: "/path/to/ca.crt" cert_file: "/path/to/cert.crt" key_file: "/path/to/key.key" path: "/health/status" 3 check_collector_pipeline: 4 enabled: true 5 interval: "5m" 6 exporter_failure_threshold: 5 7 receivers: otlp: protocols: http: {} exporters: debug: {} service: extensions: [health_check] pipelines: traces: receivers: [otlp] exporters: [debug] # ... 1 The target IP address for publishing the health check status. The default is 0.0.0.0:13133 . 2 The TLS server-side configuration. Defines paths to TLS certificates. If omitted, the TLS is disabled. 3 The path for the health check server. The default is / . 4 Settings for the Collector pipeline health check. 5 Enables the Collector pipeline health check. The default is false . 6 The time interval for checking the number of failures. The default is 5m . 7 The threshold of multiple failures until which a container is still marked as healthy. The default is 5 . 3.6.8. zPages Extension The zPages Extension provides an HTTP endpoint that serves live data for debugging instrumented components in real time. You can use this extension for in-process diagnostics and insights into traces and metrics without relying on an external backend. With this extension, you can monitor and troubleshoot the behavior of the OpenTelemetry Collector and related components by watching the diagnostic information at the provided endpoint. Important The zPages Extension is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the configured zPages Extension # ... config: extensions: zpages: endpoint: "localhost:55679" 1 receivers: otlp: protocols: http: {} exporters: debug: {} service: extensions: [zpages] pipelines: traces: receivers: [otlp] exporters: [debug] # ... 1 Specifies the HTTP endpoint for serving the zPages extension. The default is localhost:55679 . 
Important Accessing the HTTP endpoint requires port-forwarding because the Red Hat build of OpenTelemetry Operator does not expose this route. You can enable port-forwarding by running the following oc command: $ oc port-forward pod/$(oc get pod -l app.kubernetes.io/name=instance-collector -o=jsonpath='{.items[0].metadata.name}') 55679 The Collector provides the following zPages for diagnostics: ServiceZ Shows an overview of the Collector services and links to the following zPages: PipelineZ , ExtensionZ , and FeatureZ . This page also displays information about the build version and runtime. An example of this page's URL is http://localhost:55679/debug/servicez . PipelineZ Shows detailed information about the active pipelines in the Collector. This page displays the pipeline type, whether data are modified, and the associated receivers, processors, and exporters for each pipeline. An example of this page's URL is http://localhost:55679/debug/pipelinez . ExtensionZ Shows the currently active extensions in the Collector. An example of this page's URL is http://localhost:55679/debug/extensionz . FeatureZ Shows the feature gates enabled in the Collector along with their status and description. An example of this page's URL is http://localhost:55679/debug/featurez . TraceZ Shows spans categorized by latency. Available time ranges include 0 μs, 10 μs, 100 μs, 1 ms, 10 ms, 100 ms, 1 s, 10 s, 1 m. This page also allows for quick inspection of error samples. An example of this page's URL is http://localhost:55679/debug/tracez . 3.6.9. Additional resources OpenTelemetry Protocol (OTLP) documentation 3.7. Target Allocator The Target Allocator is an optional component of the OpenTelemetry Operator that shards scrape targets across the deployed fleet of OpenTelemetry Collector instances. The Target Allocator integrates with the Prometheus PodMonitor and ServiceMonitor custom resources (CR). When the Target Allocator is enabled, the OpenTelemetry Operator adds the http_sd_config field to the enabled prometheus receiver that connects to the Target Allocator service. Important The Target Allocator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Example OpenTelemetryCollector CR with the enabled Target Allocator apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: mode: statefulset 1 targetAllocator: enabled: true 2 serviceAccount: 3 prometheusCR: enabled: true 4 scrapeInterval: 10s serviceMonitorSelector: 5 name: app1 podMonitorSelector: 6 name: app2 config: receivers: prometheus: 7 config: scrape_configs: [] processors: exporters: debug: {} service: pipelines: metrics: receivers: [prometheus] processors: [] exporters: [debug] # ... 1 When the Target Allocator is enabled, the deployment mode must be set to statefulset . 2 Enables the Target Allocator. Defaults to false . 3 The service account name of the Target Allocator deployment.
The service account needs to have RBAC to get the ServiceMonitor , PodMonitor custom resources, and other objects from the cluster to properly set labels on scraped metrics. The default service name is <collector_name>-targetallocator . 4 Enables integration with the Prometheus PodMonitor and ServiceMonitor custom resources. 5 Label selector for the Prometheus ServiceMonitor custom resources. When left empty, enables all service monitors. 6 Label selector for the Prometheus PodMonitor custom resources. When left empty, enables all pod monitors. 7 Prometheus receiver with the minimal, empty scrape_configs: [] configuration option. The Target Allocator deployment uses the Kubernetes API to get relevant objects from the cluster, so it requires a custom RBAC configuration. RBAC configuration for the Target Allocator service account apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-targetallocator rules: - apiGroups: [""] resources: - services - pods - namespaces verbs: ["get", "list", "watch"] - apiGroups: ["monitoring.coreos.com"] resources: - servicemonitors - podmonitors - scrapeconfigs - probes verbs: ["get", "list", "watch"] - apiGroups: ["discovery.k8s.io"] resources: - endpointslices verbs: ["get", "list", "watch"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-targetallocator roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: otel-targetallocator subjects: - kind: ServiceAccount name: otel-targetallocator 1 namespace: observability 2 # ... 1 The name of the Target Allocator service account. 2 The namespace of the Target Allocator service account.
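For reference, the following sketch shows a ServiceMonitor resource that the serviceMonitorSelector from the preceding Target Allocator example ( name: app1 ) would match. The application Service labels, the metrics port name, and the namespace are assumptions for illustration only:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: app1
  namespace: observability
  labels:
    name: app1 # matches the serviceMonitorSelector in the Target Allocator example
spec:
  selector:
    matchLabels:
      app: app1 # assumed label on the application Service that exposes metrics
  endpoints:
    - port: metrics # assumed name of the metrics port on the Service
      interval: 10s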
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: cluster-collector namespace: tracing-system spec: mode: deployment observability: metrics: enableMetrics: true config: receivers: otlp: protocols: grpc: {} http: {} processors: {} exporters: otlp: endpoint: otel-collector-headless.tracing-system.svc:4317 tls: ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\" prometheus: endpoint: 0.0.0.0:8889 resource_to_telemetry_conversion: enabled: true # by default resource attributes are dropped service: 1 pipelines: traces: receivers: [otlp] processors: [] exporters: [otlp] metrics: receivers: [otlp] processors: [] exporters: [prometheus]",
"receivers:",
"processors:",
"exporters:",
"connectors:",
"extensions:",
"service: pipelines:",
"service: pipelines: traces: receivers:",
"service: pipelines: traces: processors:",
"service: pipelines: traces: exporters:",
"service: pipelines: metrics: receivers:",
"service: pipelines: metrics: processors:",
"service: pipelines: metrics: exporters:",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: generate-processors-rbac rules: - apiGroups: - rbac.authorization.k8s.io resources: - clusterrolebindings - clusterroles verbs: - create - delete - get - list - patch - update - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: generate-processors-rbac roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: generate-processors-rbac subjects: - kind: ServiceAccount name: opentelemetry-operator-controller-manager namespace: openshift-opentelemetry-operator",
"config: receivers: otlp: protocols: grpc: endpoint: 0.0.0.0:4317 1 tls: 2 ca_file: ca.pem cert_file: cert.pem key_file: key.pem client_ca_file: client.pem 3 reload_interval: 1h 4 http: endpoint: 0.0.0.0:4318 5 tls: {} 6 service: pipelines: traces: receivers: [otlp] metrics: receivers: [otlp]",
"config: receivers: jaeger: protocols: grpc: endpoint: 0.0.0.0:14250 1 thrift_http: endpoint: 0.0.0.0:14268 2 thrift_compact: endpoint: 0.0.0.0:6831 3 thrift_binary: endpoint: 0.0.0.0:6832 4 tls: {} 5 service: pipelines: traces: receivers: [jaeger]",
"apiVersion: v1 kind: ServiceAccount metadata: name: otel-hostfs-daemonset namespace: <namespace> --- apiVersion: security.openshift.io/v1 kind: SecurityContextConstraints allowHostDirVolumePlugin: true allowHostIPC: false allowHostNetwork: false allowHostPID: true allowHostPorts: false allowPrivilegeEscalation: true allowPrivilegedContainer: true allowedCapabilities: null defaultAddCapabilities: - SYS_ADMIN fsGroup: type: RunAsAny groups: [] metadata: name: otel-hostmetrics readOnlyRootFilesystem: true runAsUser: type: RunAsAny seLinuxContext: type: RunAsAny supplementalGroups: type: RunAsAny users: - system:serviceaccount:<namespace>:otel-hostfs-daemonset volumes: - configMap - emptyDir - hostPath - projected --- apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: <namespace> spec: serviceAccount: otel-hostfs-daemonset mode: daemonset volumeMounts: - mountPath: /hostfs name: host readOnly: true volumes: - hostPath: path: / name: host config: receivers: hostmetrics: collection_interval: 10s 1 initial_delay: 1s 2 root_path: / 3 scrapers: 4 cpu: {} memory: {} disk: {} service: pipelines: metrics: receivers: [hostmetrics]",
"apiVersion: v1 kind: ServiceAccount metadata: name: otel-k8sobj namespace: <namespace> --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-k8sobj namespace: <namespace> rules: - apiGroups: - \"\" resources: - events - pods verbs: - get - list - watch - apiGroups: - \"events.k8s.io\" resources: - events verbs: - watch - list --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-k8sobj subjects: - kind: ServiceAccount name: otel-k8sobj namespace: <namespace> roleRef: kind: ClusterRole name: otel-k8sobj apiGroup: rbac.authorization.k8s.io --- apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel-k8s-obj namespace: <namespace> spec: serviceAccount: otel-k8sobj mode: deployment config: receivers: k8sobjects: auth_type: serviceAccount objects: - name: pods 1 mode: pull 2 interval: 30s 3 label_selector: 4 field_selector: 5 namespaces: [<namespace>,...] 6 - name: events mode: watch exporters: debug: service: pipelines: logs: receivers: [k8sobjects] exporters: [debug]",
"config: receivers: kubeletstats: collection_interval: 20s auth_type: \"serviceAccount\" endpoint: \"https://USD{env:K8S_NODE_NAME}:10250\" insecure_skip_verify: true service: pipelines: metrics: receivers: [kubeletstats] env: - name: K8S_NODE_NAME 1 valueFrom: fieldRef: fieldPath: spec.nodeName",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: [''] resources: ['nodes/stats'] verbs: ['get', 'watch', 'list'] - apiGroups: [\"\"] resources: [\"nodes/proxy\"] 1 verbs: [\"get\"]",
"config: receivers: prometheus: config: scrape_configs: 1 - job_name: 'my-app' 2 scrape_interval: 5s 3 static_configs: - targets: ['my-app.example.svc.cluster.local:8888'] 4 service: pipelines: metrics: receivers: [prometheus]",
"config: otlpjsonfile: include: - \"/var/log/*.log\" 1 exclude: - \"/var/log/test.log\" 2",
"config: receivers: zipkin: endpoint: 0.0.0.0:9411 1 tls: {} 2 service: pipelines: traces: receivers: [zipkin]",
"config: receivers: kafka: brokers: [\"localhost:9092\"] 1 protocol_version: 2.0.0 2 topic: otlp_spans 3 auth: plain_text: 4 username: example password: example tls: 5 ca_file: ca.pem cert_file: cert.pem key_file: key.pem insecure: false 6 server_name_override: kafka.example.corp 7 service: pipelines: traces: receivers: [kafka]",
"config: receivers: k8s_cluster: distribution: openshift collection_interval: 10s exporters: debug: {} service: pipelines: metrics: receivers: [k8s_cluster] exporters: [debug] logs/entity_events: receivers: [k8s_cluster] exporters: [debug]",
"apiVersion: v1 kind: ServiceAccount metadata: labels: app: otelcontribcol name: otelcontribcol",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otelcontribcol labels: app: otelcontribcol rules: - apiGroups: - quota.openshift.io resources: - clusterresourcequotas verbs: - get - list - watch - apiGroups: - \"\" resources: - events - namespaces - namespaces/status - nodes - nodes/spec - pods - pods/status - replicationcontrollers - replicationcontrollers/status - resourcequotas - services verbs: - get - list - watch - apiGroups: - apps resources: - daemonsets - deployments - replicasets - statefulsets verbs: - get - list - watch - apiGroups: - extensions resources: - daemonsets - deployments - replicasets verbs: - get - list - watch - apiGroups: - batch resources: - jobs - cronjobs verbs: - get - list - watch - apiGroups: - autoscaling resources: - horizontalpodautoscalers verbs: - get - list - watch",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otelcontribcol labels: app: otelcontribcol roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: otelcontribcol subjects: - kind: ServiceAccount name: otelcontribcol namespace: default",
"config: receivers: opencensus: endpoint: 0.0.0.0:9411 1 tls: 2 cors_allowed_origins: 3 - https://*.<example>.com service: pipelines: traces: receivers: [opencensus]",
"config: receivers: filelog: include: [ /simple.log ] 1 operators: 2 - type: regex_parser regex: '^(?P<time>\\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}:\\d{2}) (?P<sev>[A-Z]*) (?P<msg>.*)USD' timestamp: parse_from: attributes.time layout: '%Y-%m-%d %H:%M:%S' severity: parse_from: attributes.sev",
"apiVersion: v1 kind: Namespace metadata: name: otel-journald labels: security.openshift.io/scc.podSecurityLabelSync: \"false\" pod-security.kubernetes.io/enforce: \"privileged\" pod-security.kubernetes.io/audit: \"privileged\" pod-security.kubernetes.io/warn: \"privileged\" --- apiVersion: v1 kind: ServiceAccount metadata: name: privileged-sa namespace: otel-journald --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-journald-binding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:openshift:scc:privileged subjects: - kind: ServiceAccount name: privileged-sa namespace: otel-journald --- apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel-journald-logs namespace: otel-journald spec: mode: daemonset serviceAccount: privileged-sa securityContext: allowPrivilegeEscalation: false capabilities: drop: - CHOWN - DAC_OVERRIDE - FOWNER - FSETID - KILL - NET_BIND_SERVICE - SETGID - SETPCAP - SETUID readOnlyRootFilesystem: true seLinuxOptions: type: spc_t seccompProfile: type: RuntimeDefault config: receivers: journald: files: /var/log/journal/*/* priority: info 1 units: 2 - kubelet - crio - init.scope - dnsmasq all: true 3 retry_on_failure: enabled: true 4 initial_interval: 1s 5 max_interval: 30s 6 max_elapsed_time: 5m 7 processors: exporters: debug: {} service: pipelines: logs: receivers: [journald] exporters: [debug] volumeMounts: - name: journal-logs mountPath: /var/log/journal/ readOnly: true volumes: - name: journal-logs hostPath: path: /var/log/journal tolerations: - key: node-role.kubernetes.io/master operator: Exists effect: NoSchedule",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector labels: app: otel-collector rules: - apiGroups: - \"\" resources: - events - namespaces - namespaces/status - nodes - nodes/spec - pods - pods/status - replicationcontrollers - replicationcontrollers/status - resourcequotas - services verbs: - get - list - watch - apiGroups: - apps resources: - daemonsets - deployments - replicasets - statefulsets verbs: - get - list - watch - apiGroups: - extensions resources: - daemonsets - deployments - replicasets verbs: - get - list - watch - apiGroups: - batch resources: - jobs - cronjobs verbs: - get - list - watch - apiGroups: - autoscaling resources: - horizontalpodautoscalers verbs: - get - list - watch",
"serviceAccount: otel-collector 1 config: receivers: k8s_events: namespaces: [project1, project2] 2 service: pipelines: logs: receivers: [k8s_events]",
"config: processors: batch: timeout: 5s send_batch_max_size: 10000 service: pipelines: traces: processors: [batch] metrics: processors: [batch]",
"config: processors: memory_limiter: check_interval: 1s limit_mib: 4000 spike_limit_mib: 800 service: pipelines: traces: processors: [batch] metrics: processors: [batch]",
"kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: [\"config.openshift.io\"] resources: [\"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"]",
"config: processors: resourcedetection: detectors: [openshift] override: true service: pipelines: traces: processors: [resourcedetection] metrics: processors: [resourcedetection]",
"config: processors: resourcedetection/env: detectors: [env] 1 timeout: 2s override: false",
"config: processors: attributes/example: actions: - key: db.table action: delete - key: redacted_span value: true action: upsert - key: copy_key from_attribute: key_original action: update - key: account_id value: 2245 action: insert - key: account_password action: delete - key: account_email action: hash - key: http.status_code action: convert converted_type: int",
"config: processors: attributes: - key: cloud.availability_zone value: \"zone-1\" action: upsert - key: k8s.cluster.name from_attribute: k8s-cluster action: insert - key: redundant-attribute action: delete",
"config: processors: span: name: from_attributes: [<key1>, <key2>, ...] 1 separator: <value> 2",
"config: processors: span/to_attributes: name: to_attributes: rules: - ^\\/api\\/v1\\/document\\/(?P<documentId>.*)\\/updateUSD 1",
"config: processors: span/set_status: status: code: Error description: \"<error_description>\"",
"kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: [''] resources: ['pods', 'namespaces'] verbs: ['get', 'watch', 'list']",
"config: processors: k8sattributes: filter: node_from_env_var: KUBE_NODE_NAME",
"config: processors: filter/ottl: error_mode: ignore 1 traces: span: - 'attributes[\"container.name\"] == \"app_container_1\"' 2 - 'resource.attributes[\"host.name\"] == \"localhost\"' 3",
"config: processors: routing: from_attribute: X-Tenant 1 default_exporters: 2 - jaeger table: 3 - value: acme exporters: [jaeger/acme] exporters: jaeger: endpoint: localhost:14250 jaeger/acme: endpoint: localhost:24250",
"config: processors: cumulativetodelta: include: 1 match_type: strict 2 metrics: 3 - <metric_1_name> - <metric_2_name> exclude: 4 match_type: regexp metrics: - \"<regular_expression_for_metric_names>\"",
"config: processors: groupbyattrs: keys: 1 - <key1> 2 - <key2>",
"config: processors: transform: error_mode: ignore 1 <trace|metric|log>_statements: 2 - context: <string> 3 conditions: 4 - <string> - <string> statements: 5 - <string> - <string> - <string> - context: <string> statements: - <string> - <string> - <string>",
"config: transform: error_mode: ignore trace_statements: 1 - context: resource statements: - keep_keys(attributes, [\"service.name\", \"service.namespace\", \"cloud.region\", \"process.command_line\"]) 2 - replace_pattern(attributes[\"process.command_line\"], \"password\\\\=[^\\\\s]*(\\\\s?)\", \"password=***\") 3 - limit(attributes, 100, []) - truncate_all(attributes, 4096) - context: span 4 statements: - set(status.code, 1) where attributes[\"http.path\"] == \"/health\" - set(name, attributes[\"http.route\"]) - replace_match(attributes[\"http.target\"], \"/user/*/list/*\", \"/user/{userId}/list/{listId}\") - limit(attributes, 100, []) - truncate_all(attributes, 4096)",
"config: exporters: otlp: endpoint: tempo-ingester:4317 1 tls: 2 ca_file: ca.pem cert_file: cert.pem key_file: key.pem insecure: false 3 insecure_skip_verify: false # 4 reload_interval: 1h 5 server_name_override: <name> 6 headers: 7 X-Scope-OrgID: \"dev\" service: pipelines: traces: exporters: [otlp] metrics: exporters: [otlp]",
"config: exporters: otlphttp: endpoint: http://tempo-ingester:4318 1 tls: 2 headers: 3 X-Scope-OrgID: \"dev\" disable_keep_alives: false 4 service: pipelines: traces: exporters: [otlphttp] metrics: exporters: [otlphttp]",
"config: exporters: debug: verbosity: detailed 1 sampling_initial: 5 2 sampling_thereafter: 200 3 use_internal_logger: true 4 service: pipelines: traces: exporters: [debug] metrics: exporters: [debug]",
"config: exporters: loadbalancing: routing_key: \"service\" 1 protocol: otlp: 2 timeout: 1s resolver: 3 static: 4 hostnames: - backend-1:4317 - backend-2:4317 dns: 5 hostname: otelcol-headless.observability.svc.cluster.local k8s: 6 service: lb-svc.kube-public ports: - 15317 - 16317",
"config: exporters: prometheus: endpoint: 0.0.0.0:8889 1 tls: 2 ca_file: ca.pem cert_file: cert.pem key_file: key.pem namespace: prefix 3 const_labels: 4 label1: value1 enable_open_metrics: true 5 resource_to_telemetry_conversion: 6 enabled: true metric_expiration: 180m 7 add_metric_suffixes: false 8 service: pipelines: metrics: exporters: [prometheus]",
"config: exporters: prometheusremotewrite: endpoint: \"https://my-prometheus:7900/api/v1/push\" 1 tls: 2 ca_file: ca.pem cert_file: cert.pem key_file: key.pem target_info: true 3 export_created_metric: true 4 max_batch_size_bytes: 3000000 5 service: pipelines: metrics: exporters: [prometheusremotewrite]",
"config: exporters: kafka: brokers: [\"localhost:9092\"] 1 protocol_version: 2.0.0 2 topic: otlp_spans 3 auth: plain_text: 4 username: example password: example tls: 5 ca_file: ca.pem cert_file: cert.pem key_file: key.pem insecure: false 6 server_name_override: kafka.example.corp 7 service: pipelines: traces: exporters: [kafka]",
"config: exporters: awscloudwatchlogs: log_group_name: \"<group_name_of_amazon_cloudwatch_logs>\" 1 log_stream_name: \"<log_stream_of_amazon_cloudwatch_logs>\" 2 region: <aws_region_of_log_stream> 3 endpoint: <service_endpoint_of_amazon_cloudwatch_logs> 4 log_retention: <supported_value_in_days> 5",
"config: exporters: awsemf: log_group_name: \"<group_name_of_amazon_cloudwatch_logs>\" 1 log_stream_name: \"<log_stream_of_amazon_cloudwatch_logs>\" 2 resource_to_telemetry_conversion: 3 enabled: true region: <region> 4 endpoint: <endpoint> 5 log_retention: <supported_value_in_days> 6 namespace: <custom_namespace> 7",
"config: exporters: awsxray: region: \"<region>\" 1 endpoint: <endpoint> 2 resource_arn: \"<aws_resource_arn>\" 3 role_arn: \"<iam_role>\" 4 indexed_attributes: [ \"<indexed_attr_0>\", \"<indexed_attr_1>\" ] 5 aws_log_groups: [\"<group1>\", \"<group2>\"] 6 request_timeout_seconds: 120 7",
"config: | exporters: file: path: /data/metrics.json 1 rotation: 2 max_megabytes: 10 3 max_days: 3 4 max_backups: 3 5 localtime: true 6 format: proto 7 compression: zstd 8 flush_interval: 5 9",
"config: receivers: otlp: protocols: grpc: endpoint: 0.0.0.0:4317 exporters: prometheus: endpoint: 0.0.0.0:8889 connectors: count: {} service: pipelines: 1 traces/in: receivers: [otlp] exporters: [count] 2 metrics/out: receivers: [count] 3 exporters: [prometheus]",
"config: connectors: count: spans: 1 <custom_metric_name>: 2 description: \"<custom_metric_description>\" conditions: - 'attributes[\"env\"] == \"dev\"' - 'name == \"devevent\"'",
"config: connectors: count: logs: 1 <custom_metric_name>: 2 description: \"<custom_metric_description>\" attributes: - key: env default_value: unknown 3",
"config: connectors: routing: table: 1 - statement: route() where attributes[\"X-Tenant\"] == \"dev\" 2 pipelines: [traces/dev] 3 - statement: route() where attributes[\"X-Tenant\"] == \"prod\" pipelines: [traces/prod] default_pipelines: [traces/dev] 4 error_mode: ignore 5 match_once: false 6 service: pipelines: traces/in: receivers: [otlp] exporters: [routing] traces/dev: receivers: [routing] exporters: [otlp/dev] traces/prod: receivers: [routing] exporters: [otlp/prod]",
"config: receivers: otlp: protocols: grpc: jaeger: protocols: grpc: processors: batch: exporters: otlp: endpoint: tempo-simplest-distributor:4317 tls: insecure: true connectors: forward: {} service: pipelines: traces/regiona: receivers: [otlp] processors: [] exporters: [forward] traces/regionb: receivers: [jaeger] processors: [] exporters: [forward] traces: receivers: [forward] processors: [batch] exporters: [otlp]",
"config: connectors: spanmetrics: metrics_flush_interval: 15s 1 service: pipelines: traces: exporters: [spanmetrics] metrics: receivers: [spanmetrics]",
"config: extensions: bearertokenauth: scheme: \"Bearer\" 1 token: \"<token>\" 2 filename: \"<token_file>\" 3 receivers: otlp: protocols: http: auth: authenticator: bearertokenauth 4 exporters: otlp: auth: authenticator: bearertokenauth 5 service: extensions: [bearertokenauth] pipelines: traces: receivers: [otlp] exporters: [otlp]",
"config: extensions: oauth2client: client_id: <client_id> 1 client_secret: <client_secret> 2 endpoint_params: 3 audience: <audience> token_url: https://example.com/oauth2/default/v1/token 4 scopes: [\"api.metrics\"] 5 # tls settings for the token client tls: 6 insecure: true 7 ca_file: /var/lib/mycert.pem 8 cert_file: <cert_file> 9 key_file: <key_file> 10 timeout: 2s 11 receivers: otlp: protocols: http: {} exporters: otlp: auth: authenticator: oauth2client 12 service: extensions: [oauth2client] pipelines: traces: receivers: [otlp] exporters: [otlp]",
"config: extensions: file_storage/all_settings: directory: /var/lib/otelcol/mydir 1 timeout: 1s 2 compaction: on_start: true 3 directory: /tmp/ 4 max_transaction_size: 65_536 5 fsync: false 6 exporters: otlp: sending_queue: storage: file_storage/all_settings 7 service: extensions: [file_storage/all_settings] 8 pipelines: traces: receivers: [otlp] exporters: [otlp]",
"config: extensions: oidc: attribute: authorization 1 issuer_url: https://example.com/auth/realms/opentelemetry 2 issuer_ca_path: /var/run/tls/issuer.pem 3 audience: otel-collector 4 username_claim: email 5 receivers: otlp: protocols: grpc: auth: authenticator: oidc exporters: debug: {} service: extensions: [oidc] pipelines: traces: receivers: [otlp] exporters: [debug]",
"config: extensions: jaegerremotesampling: source: reload_interval: 30s 1 remote: endpoint: jaeger-collector:14250 2 file: /etc/otelcol/sampling_strategies.json 3 receivers: otlp: protocols: http: {} exporters: debug: {} service: extensions: [jaegerremotesampling] pipelines: traces: receivers: [otlp] exporters: [debug]",
"{ \"service_strategies\": [ { \"service\": \"foo\", \"type\": \"probabilistic\", \"param\": 0.8, \"operation_strategies\": [ { \"operation\": \"op1\", \"type\": \"probabilistic\", \"param\": 0.2 }, { \"operation\": \"op2\", \"type\": \"probabilistic\", \"param\": 0.4 } ] }, { \"service\": \"bar\", \"type\": \"ratelimiting\", \"param\": 5 } ], \"default_strategy\": { \"type\": \"probabilistic\", \"param\": 0.5, \"operation_strategies\": [ { \"operation\": \"/health\", \"type\": \"probabilistic\", \"param\": 0.0 }, { \"operation\": \"/metrics\", \"type\": \"probabilistic\", \"param\": 0.0 } ] } }",
"config: extensions: pprof: endpoint: localhost:1777 1 block_profile_fraction: 0 2 mutex_profile_fraction: 0 3 save_to_file: test.pprof 4 receivers: otlp: protocols: http: {} exporters: debug: {} service: extensions: [pprof] pipelines: traces: receivers: [otlp] exporters: [debug]",
"config: extensions: health_check: endpoint: \"0.0.0.0:13133\" 1 tls: 2 ca_file: \"/path/to/ca.crt\" cert_file: \"/path/to/cert.crt\" key_file: \"/path/to/key.key\" path: \"/health/status\" 3 check_collector_pipeline: 4 enabled: true 5 interval: \"5m\" 6 exporter_failure_threshold: 5 7 receivers: otlp: protocols: http: {} exporters: debug: {} service: extensions: [health_check] pipelines: traces: receivers: [otlp] exporters: [debug]",
"config: extensions: zpages: endpoint: \"localhost:55679\" 1 receivers: otlp: protocols: http: {} exporters: debug: {} service: extensions: [zpages] pipelines: traces: receivers: [otlp] exporters: [debug]",
"oc port-forward pod/USD(oc get pod -l app.kubernetes.io/name=instance-collector -o=jsonpath='{.items[0].metadata.name}') 55679",
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: mode: statefulset 1 targetAllocator: enabled: true 2 serviceAccount: 3 prometheusCR: enabled: true 4 scrapeInterval: 10s serviceMonitorSelector: 5 name: app1 podMonitorSelector: 6 name: app2 config: receivers: prometheus: 7 config: scrape_configs: [] processors: exporters: debug: {} service: pipelines: metrics: receivers: [prometheus] processors: [] exporters: [debug]",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-targetallocator rules: - apiGroups: [\"\"] resources: - services - pods - namespaces verbs: [\"get\", \"list\", \"watch\"] - apiGroups: [\"monitoring.coreos.com\"] resources: - servicemonitors - podmonitors - scrapeconfigs - probes verbs: [\"get\", \"list\", \"watch\"] - apiGroups: [\"discovery.k8s.io\"] resources: - endpointslices verbs: [\"get\", \"list\", \"watch\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-targetallocator roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: otel-targetallocator subjects: - kind: ServiceAccount name: otel-targetallocator 1 namespace: observability 2"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/red_hat_build_of_opentelemetry/configuring-the-collector |
Chapter 17. StatefulSet [apps/v1] | Chapter 17. StatefulSet [apps/v1] Description StatefulSet represents a set of pods with consistent identities. Identities are defined as: - Network: A single stable DNS and hostname. - Storage: As many VolumeClaims as requested. The StatefulSet guarantees that a given network identity will always map to the same storage identity. Type object 17.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object A StatefulSetSpec is the specification of a StatefulSet. status object StatefulSetStatus represents the current state of a StatefulSet. 17.1.1. .spec Description A StatefulSetSpec is the specification of a StatefulSet. Type object Required selector template serviceName Property Type Description minReadySeconds integer Minimum number of seconds for which a newly created pod should be ready without any of its container crashing for it to be considered available. Defaults to 0 (pod will be considered available as soon as it is ready) ordinals object StatefulSetOrdinals describes the policy used for replica ordinal assignment in this StatefulSet. persistentVolumeClaimRetentionPolicy object StatefulSetPersistentVolumeClaimRetentionPolicy describes the policy used for PVCs created from the StatefulSet VolumeClaimTemplates. podManagementPolicy string podManagementPolicy controls how pods are created during initial scale up, when replacing pods on nodes, or when scaling down. The default policy is OrderedReady , where pods are created in increasing order (pod-0, then pod-1, etc) and the controller will wait until each pod is ready before continuing. When scaling down, the pods are removed in the opposite order. The alternative policy is Parallel which will create pods in parallel to match the desired scale without waiting, and on scale down will delete all pods at once. Possible enum values: - "OrderedReady" will create pods in strictly increasing order on scale up and strictly decreasing order on scale down, progressing only when the pod is ready or terminated. At most one pod will be changed at any time. - "Parallel" will create and delete pods as soon as the stateful set replica count is changed, and will not wait for pods to be ready or complete termination. replicas integer replicas is the desired number of replicas of the given Template. These are replicas in the sense that they are instantiations of the same Template, but individual replicas also have a consistent identity. If unspecified, defaults to 1. revisionHistoryLimit integer revisionHistoryLimit is the maximum number of revisions that will be maintained in the StatefulSet's revision history. The revision history consists of all revisions not represented by a currently applied StatefulSetSpec version. The default value is 10. 
selector LabelSelector selector is a label query over pods that should match the replica count. It must match the pod template's labels. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors serviceName string serviceName is the name of the service that governs this StatefulSet. This service must exist before the StatefulSet, and is responsible for the network identity of the set. Pods get DNS/hostnames that follow the pattern: pod-specific-string.serviceName.default.svc.cluster.local where "pod-specific-string" is managed by the StatefulSet controller. template PodTemplateSpec template is the object that describes the pod that will be created if insufficient replicas are detected. Each pod stamped out by the StatefulSet will fulfill this Template, but have a unique identity from the rest of the StatefulSet. Each pod will be named with the format <statefulsetname>-<podindex>. For example, a pod in a StatefulSet named "web" with index number "3" would be named "web-3". The only allowed template.spec.restartPolicy value is "Always". updateStrategy object StatefulSetUpdateStrategy indicates the strategy that the StatefulSet controller will use to perform updates. It includes any additional parameters necessary to perform the update for the indicated strategy. volumeClaimTemplates array (PersistentVolumeClaim) volumeClaimTemplates is a list of claims that pods are allowed to reference. The StatefulSet controller is responsible for mapping network identities to claims in a way that maintains the identity of a pod. Every claim in this list must have at least one matching (by name) volumeMount in one container in the template. A claim in this list takes precedence over any volumes in the template, with the same name. 17.1.2. .spec.ordinals Description StatefulSetOrdinals describes the policy used for replica ordinal assignment in this StatefulSet. Type object Property Type Description start integer start is the number representing the first replica's index. It may be used to number replicas from an alternate index (eg: 1-indexed) over the default 0-indexed names, or to orchestrate progressive movement of replicas from one StatefulSet to another. If set, replica indices will be in the range: [.spec.ordinals.start, .spec.ordinals.start + .spec.replicas). If unset, defaults to 0. Replica indices will be in the range: [0, .spec.replicas). 17.1.3. .spec.persistentVolumeClaimRetentionPolicy Description StatefulSetPersistentVolumeClaimRetentionPolicy describes the policy used for PVCs created from the StatefulSet VolumeClaimTemplates. Type object Property Type Description whenDeleted string WhenDeleted specifies what happens to PVCs created from StatefulSet VolumeClaimTemplates when the StatefulSet is deleted. The default policy of Retain causes PVCs to not be affected by StatefulSet deletion. The Delete policy causes those PVCs to be deleted. whenScaled string WhenScaled specifies what happens to PVCs created from StatefulSet VolumeClaimTemplates when the StatefulSet is scaled down. The default policy of Retain causes PVCs to not be affected by a scaledown. The Delete policy causes the associated PVCs for any excess pods above the replica count to be deleted. 17.1.4. .spec.updateStrategy Description StatefulSetUpdateStrategy indicates the strategy that the StatefulSet controller will use to perform updates. It includes any additional parameters necessary to perform the update for the indicated strategy. 
Type object Property Type Description rollingUpdate object RollingUpdateStatefulSetStrategy is used to communicate parameter for RollingUpdateStatefulSetStrategyType. type string Type indicates the type of the StatefulSetUpdateStrategy. Default is RollingUpdate. Possible enum values: - "OnDelete" triggers the legacy behavior. Version tracking and ordered rolling restarts are disabled. Pods are recreated from the StatefulSetSpec when they are manually deleted. When a scale operation is performed with this strategy,specification version indicated by the StatefulSet's currentRevision. - "RollingUpdate" indicates that update will be applied to all Pods in the StatefulSet with respect to the StatefulSet ordering constraints. When a scale operation is performed with this strategy, new Pods will be created from the specification version indicated by the StatefulSet's updateRevision. 17.1.5. .spec.updateStrategy.rollingUpdate Description RollingUpdateStatefulSetStrategy is used to communicate parameter for RollingUpdateStatefulSetStrategyType. Type object Property Type Description maxUnavailable IntOrString The maximum number of pods that can be unavailable during the update. Value can be an absolute number (ex: 5) or a percentage of desired pods (ex: 10%). Absolute number is calculated from percentage by rounding up. This can not be 0. Defaults to 1. This field is alpha-level and is only honored by servers that enable the MaxUnavailableStatefulSet feature. The field applies to all pods in the range 0 to Replicas-1. That means if there is any unavailable pod in the range 0 to Replicas-1, it will be counted towards MaxUnavailable. partition integer Partition indicates the ordinal at which the StatefulSet should be partitioned for updates. During a rolling update, all pods from ordinal Replicas-1 to Partition are updated. All pods from ordinal Partition-1 to 0 remain untouched. This is helpful in being able to do a canary based deployment. The default value is 0. 17.1.6. .status Description StatefulSetStatus represents the current state of a StatefulSet. Type object Required replicas Property Type Description availableReplicas integer Total number of available pods (ready for at least minReadySeconds) targeted by this statefulset. collisionCount integer collisionCount is the count of hash collisions for the StatefulSet. The StatefulSet controller uses this field as a collision avoidance mechanism when it needs to create the name for the newest ControllerRevision. conditions array Represents the latest available observations of a statefulset's current state. conditions[] object StatefulSetCondition describes the state of a statefulset at a certain point. currentReplicas integer currentReplicas is the number of Pods created by the StatefulSet controller from the StatefulSet version indicated by currentRevision. currentRevision string currentRevision, if not empty, indicates the version of the StatefulSet used to generate Pods in the sequence [0,currentReplicas). observedGeneration integer observedGeneration is the most recent generation observed for this StatefulSet. It corresponds to the StatefulSet's generation, which is updated on mutation by the API Server. readyReplicas integer readyReplicas is the number of pods created for this StatefulSet with a Ready Condition. replicas integer replicas is the number of Pods created by the StatefulSet controller. 
updateRevision string updateRevision, if not empty, indicates the version of the StatefulSet used to generate Pods in the sequence [replicas-updatedReplicas,replicas) updatedReplicas integer updatedReplicas is the number of Pods created by the StatefulSet controller from the StatefulSet version indicated by updateRevision. 17.1.7. .status.conditions Description Represents the latest available observations of a statefulset's current state. Type array 17.1.8. .status.conditions[] Description StatefulSetCondition describes the state of a statefulset at a certain point. Type object Required type status Property Type Description lastTransitionTime Time Last time the condition transitioned from one status to another. message string A human readable message indicating details about the transition. reason string The reason for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of statefulset condition. 17.2. API endpoints The following API endpoints are available: /apis/apps/v1/statefulsets GET : list or watch objects of kind StatefulSet /apis/apps/v1/watch/statefulsets GET : watch individual changes to a list of StatefulSet. deprecated: use the 'watch' parameter with a list operation instead. /apis/apps/v1/namespaces/{namespace}/statefulsets DELETE : delete collection of StatefulSet GET : list or watch objects of kind StatefulSet POST : create a StatefulSet /apis/apps/v1/watch/namespaces/{namespace}/statefulsets GET : watch individual changes to a list of StatefulSet. deprecated: use the 'watch' parameter with a list operation instead. /apis/apps/v1/namespaces/{namespace}/statefulsets/{name} DELETE : delete a StatefulSet GET : read the specified StatefulSet PATCH : partially update the specified StatefulSet PUT : replace the specified StatefulSet /apis/apps/v1/watch/namespaces/{namespace}/statefulsets/{name} GET : watch changes to an object of kind StatefulSet. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/apps/v1/namespaces/{namespace}/statefulsets/{name}/status GET : read status of the specified StatefulSet PATCH : partially update status of the specified StatefulSet PUT : replace status of the specified StatefulSet 17.2.1. /apis/apps/v1/statefulsets Table 17.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. 
The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list or watch objects of kind StatefulSet Table 17.2. HTTP responses HTTP code Reponse body 200 - OK StatefulSetList schema 401 - Unauthorized Empty 17.2.2. /apis/apps/v1/watch/statefulsets Table 17.3. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. 
Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of StatefulSet. deprecated: use the 'watch' parameter with a list operation instead. Table 17.4. 
HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 17.2.3. /apis/apps/v1/namespaces/{namespace}/statefulsets Table 17.5. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 17.6. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of StatefulSet Table 17.7. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 17.8. 
Body parameters Parameter Type Description body DeleteOptions schema Table 17.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind StatefulSet Table 17.10. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. 
resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 17.11. HTTP responses HTTP code Reponse body 200 - OK StatefulSetList schema 401 - Unauthorized Empty HTTP method POST Description create a StatefulSet Table 17.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 17.13. Body parameters Parameter Type Description body StatefulSet schema Table 17.14. HTTP responses HTTP code Reponse body 200 - OK StatefulSet schema 201 - Created StatefulSet schema 202 - Accepted StatefulSet schema 401 - Unauthorized Empty 17.2.4. /apis/apps/v1/watch/namespaces/{namespace}/statefulsets Table 17.15. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 17.16. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of StatefulSet. deprecated: use the 'watch' parameter with a list operation instead. Table 17.17. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 17.2.5. /apis/apps/v1/namespaces/{namespace}/statefulsets/{name} Table 17.18. Global path parameters Parameter Type Description name string name of the StatefulSet namespace string object name and auth scope, such as for teams and projects Table 17.19. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. 
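For orientation, the per-object operations described below (DELETE, GET, PATCH, and PUT against this path) correspond to ordinary client calls. The following shell sketch is illustrative only; the demo namespace and the web StatefulSet are placeholder names assumed for the example, not values defined by this reference:

# Read the object (GET /apis/apps/v1/namespaces/demo/statefulsets/web)
oc -n demo get statefulset web -o yaml

# Partially update it (PATCH with a merge patch body)
oc -n demo patch statefulset web --type merge -p '{"spec":{"replicas":5}}'

# Delete it (DELETE)
oc -n demo delete statefulset web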
HTTP method DELETE Description delete a StatefulSet Table 17.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 17.21. Body parameters Parameter Type Description body DeleteOptions schema Table 17.22. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified StatefulSet Table 17.23. HTTP responses HTTP code Reponse body 200 - OK StatefulSet schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified StatefulSet Table 17.24. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 17.25. Body parameters Parameter Type Description body Patch schema Table 17.26. HTTP responses HTTP code Reponse body 200 - OK StatefulSet schema 201 - Created StatefulSet schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified StatefulSet Table 17.27. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 17.28. Body parameters Parameter Type Description body StatefulSet schema Table 17.29. HTTP responses HTTP code Reponse body 200 - OK StatefulSet schema 201 - Created StatefulSet schema 401 - Unauthorized Empty 17.2.6. /apis/apps/v1/watch/namespaces/{namespace}/statefulsets/{name} Table 17.30. Global path parameters Parameter Type Description name string name of the StatefulSet namespace string object name and auth scope, such as for teams and projects Table 17.31. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. 
Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind StatefulSet. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 17.32. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 17.2.7. /apis/apps/v1/namespaces/{namespace}/statefulsets/{name}/status Table 17.33. Global path parameters Parameter Type Description name string name of the StatefulSet namespace string object name and auth scope, such as for teams and projects Table 17.34. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified StatefulSet Table 17.35. HTTP responses HTTP code Reponse body 200 - OK StatefulSet schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified StatefulSet Table 17.36. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 17.37. Body parameters Parameter Type Description body Patch schema Table 17.38. HTTP responses HTTP code Reponse body 200 - OK StatefulSet schema 201 - Created StatefulSet schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified StatefulSet Table 17.39. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 17.40. Body parameters Parameter Type Description body StatefulSet schema Table 17.41. HTTP responses HTTP code Reponse body 200 - OK StatefulSet schema 201 - Created StatefulSet schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/workloads_apis/statefulset-apps-v1 |
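To tie the specification and status fields above together, the following is a minimal sketch of a StatefulSet manifest applied with the OpenShift CLI, followed by reads against the object and status endpoints documented in this chapter. All concrete names here (the demo namespace, the web StatefulSet, the nginx governing Service and image, and the www claim) are illustrative assumptions rather than values taken from this reference, and the image reference in particular should be replaced with one available in your environment.

oc apply -n demo -f - <<'EOF'
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: nginx                 # governing headless Service; it must exist before the StatefulSet
  replicas: 3
  podManagementPolicy: OrderedReady  # default: create pod-0, then pod-1, and so on
  selector:
    matchLabels:
      app: nginx                     # must match the pod template labels
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: registry.access.redhat.com/ubi8/nginx-120   # example image only
        volumeMounts:
        - name: www                  # matches a volumeClaimTemplate by name
          mountPath: /data
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 0                   # pods with ordinal >= partition are updated on a rolling update
  persistentVolumeClaimRetentionPolicy:
    whenDeleted: Retain              # keep PVCs when the StatefulSet is deleted
    whenScaled: Retain               # keep PVCs for removed replicas on scale-down
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
EOF

# Read the object and its status subresource through the endpoints listed in this chapter
oc get --raw /apis/apps/v1/namespaces/demo/statefulsets/web
oc get --raw /apis/apps/v1/namespaces/demo/statefulsets/web/status

Pods created from this manifest are named web-0, web-1, and web-2, each bound to its own www claim, which is the stable network plus storage identity described at the start of the chapter.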
Red Hat Quay Operator features (Red Hat Quay 3.10): Advanced Red Hat Quay Operator features. Red Hat OpenShift Documentation Team. https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/red_hat_quay_operator_features/index
E.2.27. /proc/sysrq-trigger Using the echo command to write to this file, a remote root user can execute most System Request Key commands remotely as if at the local terminal. To echo values to this file, the /proc/sys/kernel/sysrq file must be set to a value other than 0. For more information about the System Request Key, see Section E.3.9.3, "/proc/sys/kernel/". Although it is possible to write to this file, it cannot be read, even by the root user. https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-proc-sysrq-trigger
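As a brief illustration, assuming a root shell on the local system, the facility can be enabled and exercised as follows; the m command used here only writes current memory information to the kernel log and is one of the safer System Request Key functions:

# Enable System Request Key functions (any value other than 0; 1 enables all of them)
echo 1 > /proc/sys/kernel/sysrq

# Trigger the 'm' command, which dumps current memory information to the kernel log
echo m > /proc/sysrq-trigger

# Review the result
dmesg | tail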
Red Hat build of OpenTelemetry (OpenShift Container Platform 4.11). Red Hat OpenShift Documentation Team. https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/red_hat_build_of_opentelemetry/index
8.138. microcode_ctl 8.138.1. RHEA-2014:1466 - microcode_ctl enhancement update Updated microcode_ctl packages that add one enhancement are now available for Red Hat Enterprise Linux 6. The microcode_ctl packages provide microcode updates for Intel and AMD processors. Enhancement BZ#1036240, BZ#1113394 The Intel CPU microcode file has been updated to version 20140624. This is the most recent version of the microcode available from Intel. Users of microcode_ctl are advised to upgrade to these updated packages, which add this enhancement. Note that the system must be rebooted for this update to take effect. https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/microcode_ctl
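A typical way to pick up this erratum on a Red Hat Enterprise Linux 6 system is sketched below; this is a generic illustration rather than a prescribed procedure, and the exact package version offered depends on the repositories attached to the system:

# Update the package and confirm the installed version
yum update microcode_ctl
rpm -q microcode_ctl

# Reboot so the updated microcode is applied
reboot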
Appendix A. Administration settings | Appendix A. Administration settings This section contains information about settings that you can edit in the Satellite web UI by navigating to Administer > Settings . A.1. General settings Setting Default Value Description Administrator email address The default administrator email address Satellite URL URL where your Satellite instance is reachable. See also Provisioning > Unattended URL . Entries per page 20 Number of records shown per page in Satellite Fix DB cache No Satellite maintains a cache of permissions and roles. When set to Yes , Satellite recreates this cache on the restart. DB pending seed No Should the foreman-rake db:seed be executed on the run of the installer modules? Capsule request timeout 60 Open and read timeout for HTTP requests from Satellite to Capsule (in seconds). Login page footer text Text to be shown in the login-page footer. HTTP(S) proxy Set a proxy for outgoing HTTP(S) connections from the Satellite product. System-wide proxies must be configured at the operating system level. HTTP(S) proxy except hosts [] Set hostnames to which requests are not to be proxied. Requests to the local host are excluded by default. Show Experimental Labs No Whether or not to show a menu to access experimental lab features (requires reload of page). Display FQDN for hosts Yes If set to Yes , Satellite displays names of hosts as fully qualified domain names (FQDNs). Out of sync interval 30 Hosts report periodically, and if the time between reports exceeds this duration in minutes, hosts are considered out of sync. You can override this on your hosts by adding the outofsync_interval parameter, per host, at Hosts > All hosts > USDhost > Edit > Parameters > Add Parameter . Satellite UUID Satellite instance ID. Uniquely identifies a Satellite instance. Default language The UI for new users uses this language. Default timezone The timezone to use for new users. Instance title The instance title is shown on the top navigation bar (requires a page reload). Saved audits interval Duration in days to preserve audit data. Leave empty to disable the audits cleanup. New host details UI Yes Satellite loads the new UI for host details. A.2. Satellite task settings Setting Default Value Description Sync task timeout 120 Number of seconds to wait for a synchronous task to finish before an exception is raised. Enable dynflow console Yes Enable the dynflow console ( /foreman_tasks/dynflow ) for debugging. Require auth for dynflow console Yes The user must be authenticated as having administrative rights before accessing the dynflow console. Capsule action retry count 4 Number of attempts permitted to start a task on the Capsule before failing. Capsule action retry interval 15 Time in seconds between retries. Allow Capsule batch tasks Yes Enable batch triggering of tasks on the Capsule. Capsule tasks batch size 100 Number of tasks included in one request to the Capsule if foreman_tasks_proxy_batch_trigger is enabled. Tasks troubleshooting URL URL pointing to the task troubleshooting documentation. It should contain a %{label} placeholder that is replaced with a normalized task label (restricted to only alphanumeric characters)). A %{version} placeholder is also available. Polling intervals multiplier 1 Polling multiplier used to multiply the default polling intervals. You can use this to prevent polling too frequently for long running tasks. A.3. 
Template sync settings Setting Default Value Description Associate New Associate templates with OS, organization and location. Branch Default branch in Git repo. Commit message Templates export made by a Satellite user Custom commit message for exported templates. Dirname / The directory within the Git repo containing the templates. Filter Import or export of names matching this regex. Case-insensitive. Snippets are not filtered. Force import No If set to Yes , locked templates are overwritten during an import. Lock templates Keep, do not lock new How to handle lock for imported templates. Metadata export mode Refresh Default metadata export mode. Possible options: refresh re-renders metadata. keep keeps existing metadata. remove exports the template without metadata. Negate No Negate the filter for import or export. Prefix A string added as a prefix to imported templates. Repo Target path from where to import or export templates. Different protocols can be used, for example: /tmp/dir git://example.com https://example.com ssh://example.com When exporting to /tmp , note that production deployments may be configured to use private tmp . Verbosity No Choose verbosity for Rake task importing templates. A.4. Discovery settings Setting Default Value Description Discovery location Indicates the default location to place discovered hosts in. Discovery organization Indicates the default organization to which discovered hosts are added. Interface fact discovery_bootif Fact name to use for primary interface detection. Create bond interfaces No Automatically create a bond interface if another interface is detected on the same VLAN using LLDP. Clean all facts No Clean all reported facts (except discovery facts) during provisioning. Hostname facts discovery_bootif List of facts to use for the hostname (comma separated, first wins). Auto provisioning No Use the provisioning rules to automatically provision newly discovered hosts. Reboot Yes Automatically reboot or kexec discovered hosts during provisioning. Hostname prefix mac The default prefix to use for the hostname. Must start with a letter. Fact columns Extra facter columns to show in host lists (comma separated). Highlighted facts Regex to organize facts for highlights section - e.g. ^(abc|cde)USD . Storage facts Regex to organize facts for the storage section. Software facts Regex to organize facts for the software section. Hardware facts Regex to organize facts for the hardware section. Network facts Regex to organize facts for the network section. IPMI facts Regex to organize facts for the Intelligent Platform Management Interface (IPMI) section. Lock PXE No Automatically generate a Preboot Execution Environment (PXE) configuration to pin a newly discovered host to discovery. Locked PXELinux template name pxelinux_discovery PXELinux template to be used when pinning a host to discovery. Locked PXEGrub template name pxegrub_discovery PXEGrub template to be used when pinning a host to discovery. Locked PXEGrub2 template name pxegrub2_discovery PXEGrub2 template to be used when pinning a host to discovery. Force DNS Yes Force the creation of DNS entries when provisioning a discovered host. Error on existing NIC No Do not permit to discover an existing host matching the MAC of a provisioning Network Interface Card (NIC) (errors out early). Type of name generator Fact + prefix Discovery hostname naming pattern. Prefer IPv6 No Prefer IPv6 to IPv4 when calling discovered nodes. A.5. 
Boot disk settings Setting Default Value Description iPXE directory /usr/share/ipxe Path to directory containing iPXE images. ISOLINUX directory /usr/share/syslinux Path to directory containing ISOLINUX images. SYSLINUX directory /usr/share/syslinux Path to directory containing SYSLINUX images. Grub2 directory /var/lib/tftpboot/grub2 Path to directory containing grubx64.efi and shimx64.efi . Host image template Boot disk iPXE - host iPXE template to use for host-specific boot disks. Generic image template Boot disk iPXE - generic host iPXE template to use for generic host boot disks. Generic Grub2 EFI image template Boot disk Grub2 EFI - generic host Grub2 template to use for generic Extensible Firmware Interface (EFI) host boot disks. ISO generation command genisoimage Command to generate ISO image, use genisoimage or mkisofs . Installation media caching Yes Installation media files are cached for full host images. Allowed bootdisk types [generic, host, full_host, subnet] List of permitted bootdisk types. Leave blank to disable it. A.6. Red Hat Cloud settings Setting Default Value Description Automatic inventory upload Yes Enable automatic upload of your host inventory to the Red Hat cloud. Synchronize recommendations Automatically No Enable automatic synchronization of Insights recommendations from the Red Hat cloud. Obfuscate host names No Obfuscate hostnames sent to the Red Hat cloud. Obfuscate host ipv4 addresses No Obfuscate IPv4 addresses sent to the Red Hat cloud. ID of the RHC daemon ***** RHC daemon id. A.7. Content settings Setting Default Value Description Default HTTP Proxy Default HTTP Proxy for syncing content. CDN SSL version SSL version used to communicate with the CDN. Default synced OS provisioning template Kickstart default Default provisioning template for operating systems created from synced content. Default synced OS finish template Kickstart default finish Default finish template for new operating systems created from synced content. Default synced OS user-data Kickstart default user data Default user data for new operating systems created from synced content. Default synced OS PXELinux template Kickstart default PXELinux Default PXELinux template for new operating systems created from synced content. Default synced OS PXEGrub template Kickstart default PXEGrub Default PXEGrub template for new operating systems created from synced content. Default synced OS PXEGrub2 template Kickstart default PXEGrub2 Default PXEGrub2 template for new operating systems created from synced content. Default synced OS iPXE template Kickstart default iPXE Default iPXE template for new operating systems created from synced content. Default synced OS partition table Kickstart default Default partitioning table for new operating systems created from synced content. Default synced OS kexec template Discovery Red Hat kexec Default kexec template for new operating systems created from synced content. Default synced OS Atomic template Atomic Kickstart default Default provisioning template for new atomic operating systems created from synced content. Manifest refresh timeout 1200 Timeout when refreshing a manifest (in seconds). Subscription connection enabled Yes Can communicate with the Red Hat Portal for subscriptions. Installable errata from Content View No Calculate errata host status based only on errata in a host's content view and lifecycle environment. 
Restrict Composite Content View promotion No If this is enabled, a composite content view cannot be published or promoted, unless the content view versions that it includes exist in the target environment. Check services before actions Yes Check the status of backend services such as pulp and candlepin before performing actions? Batch size to sync repositories in 100 How many repositories should be synced concurrently on a Capsule. A smaller number may lead to longer sync times. A larger number will increase dynflow load. Sync Capsules after Content View promotion Yes Whether or not to auto sync Capsules after a content view promotion. Default Custom Repository download policy immediate Default download policy for custom repositories. Either immediate or on_demand . Default Red Hat Repository download policy on_demand Default download policy for enabled Red Hat repositories. Either immediate or on_demand . Default Capsule download policy on_demand Default download policy for Capsule syncs. Either inherit , immediate , or on_demand . Pulp export destination filepath /var/lib/pulp/katello-export On-disk location for exported repositories. Pulp 3 export destination filepath /var/lib/pulp/exports On-disk location for Pulp 3 exported repositories. Pulp client key /etc/pki/katello/private/pulp-client.key Path for SSL key used for Pulp server authentication. Pulp client cert /etc/pki/katello/certs/pulp-client.crt Path for SSL certificate used for Pulp server authentication. Sync Connection Timeout 300 Total timeout in seconds for connections when syncing. Delete Host upon unregister No When unregistering a host using subscription-manager, also delete the host record. Managed resources linked to the host such as virtual machines and DNS records might also be deleted. Subscription manager name registration fact When registering a host using subscription-manager, force use the specified fact for the host name (in the form of fact.fact ). Subscription manager name registration fact strict matching No If this is enabled, and register_hostname_fact is set and provided, registration looks for a new host by name only using that fact, and skips all hostname matching. Default Location subscribed hosts Default Location Default location where new subscribed hosts are stored after registration. Expire soon days 120 The number of days remaining in a subscription before you are reminded about renewing it. Content View Dependency Solving Default No The default dependency solving value for new content views. Host Duplicate DMI UUIDs [] If hosts fail to register because of duplicate Desktop Management Interface (DMI) UUIDs, add their comma-separated values here. Subsequent registrations generate a unique DMI UUID for the affected hosts. Host Profile Assume Yes Enable new host registrations to assume registered profiles with matching hostname as long as the registering DMI UUID is not used by another host. Host Profile Can Change In Build No Enable host registrations to bypass Host Profile Assume as long as the host is in build mode. Host Can Re-Register Only In Build No Enable hosts to re-register only when they are in build mode. Host Tasks Workers Pool Size 5 Number of workers in the pool to handle the execution of host-related tasks. When set to 0, the default queue is used. Restart of the dynflowd/foreman-tasks service is required. Applicability Batch Size 50 Number of host applicability calculations to process per task. 
Autosearch Yes For pages that support it, automatically perform the search while typing in search input. Autosearch delay 500 If Autosearch is enabled, delay in milliseconds before executing searches while typing. Pulp bulk load size 2000 The number of items fetched from a single paged Pulp API call. Upload profiles without Dynflow Yes Enable Katello to update host installed packages, enabled repositories, and module inventory directly instead of wrapped in Dynflow tasks (try turning off if Puma processes are using too much memory). Orphaned Content Protection Time 1440 Time in minutes to consider orphan content as orphaned. Prefer registered through Capsule for remote execution No Prefer using a proxy to which a host is registered when using remote execution. Allow deleting repositories in published content views Yes Enable removal of repositories that the user has previously published in one or more content view versions. A.8. Authentication settings Setting Default Value Description OAuth active Yes Satellite will use OAuth for API authorization. OAuth consumer key ***** OAuth consumer key. OAuth consumer secret ***** OAuth consumer secret. OAuth map users No Satellite maps users by username in the request-header. If this is disabled, OAuth requests have administrator rights. Failed login attempts limit 30 Satellite blocks user logins from an incoming IP address for 5 minutes after the specified number of failed login attempts. Set to 0 to disable brute force protection. Restrict registered Capsules Yes Only known Capsules can access features that use Capsule authentication. Require SSL for capsules Yes Client SSL certificates are used to identify Capsules ( :require_ssl should also be enabled). Trusted hosts [] List of hostnames, IPv4, IPv6 addresses or subnets to be trusted in addition to Capsules for access to fact/report importers and ENC output. SSL certificate /etc/foreman/client_cert.pem SSL Certificate path that Satellite uses to communicate with its proxies. SSL CA file /etc/foreman/proxy_ca.pem SSL CA file path that Satellite uses to communicate with its proxies. SSL private key /etc/foreman/client_key.pem SSL Private Key path that Satellite uses to communicate with its proxies. SSL client DN env HTTP_SSL_CLIENT_S_DN Environment variable containing the subject DN from a client SSL certificate. SSL client verify env HTTP_SSL_CLIENT_VERIFY Environment variable containing the verification status of a client SSL certificate. SSL client cert env HTTP_SSL_CLIENT_CERT Environment variable containing a client's SSL certificate. Server CA file SSL CA file path used in templates to verify the connection to Satellite. Websockets SSL key etc/pki/katello/private/katello-apache.key Private key file path that Satellite uses to encrypt websockets. Websockets SSL certificate /etc/pki/katello/certs/katello-apache.crt Certificate path that Satellite uses to encrypt websockets. Websockets encryption Yes VNC/SPICE websocket proxy console access encryption ( websockets_ssl_key/cert setting required). Login delegation logout URL Redirect your users to this URL on logout. Enable Authorize login delegation also. Authorize login delegation auth source user autocreate External Name of the external authentication source where unknown externally authenticated users (see Authorize login delegation ) are created. Empty means no autocreation. Authorize login delegation No Authorize login delegation with REMOTE_USER HTTP header. 
Authorize login delegation API No Authorize login delegation with REMOTE_USER HTTP header for API calls too. Idle timeout 60 Log out idle users after the specified number of minutes. BCrypt password cost 9 Cost value of bcrypt password hash function for internal auth-sources (4 - 30). A higher value is safer but verification is slower, particularly for stateless API calls and UI logins. A password change is needed to affect existing passwords. BMC credentials access Yes Permits access to BMC interface passwords through ENC YAML output and in templates. OIDC JWKs URL OpenID Connect JSON Web Key Set (JWKS) URL. Typically https://keycloak.example.com/auth/realms/<realm name>/protocol/openid-connect/certs when using Keycloak as an OpenID provider. OIDC Audience [] Name of the OpenID Connect Audience that is being used for authentication. In the case of Keycloak this is the Client ID. OIDC Issuer The issuer claim identifies the principal that issued the JSON Web tokens (JWT), which exists at a /.well-known/openid-configuration in case of most of the OpenID providers. OIDC Algorithm The algorithm used to encode the JWT in the OpenID provider. A.9. Email settings Setting Default Value Description Email reply address Email reply address for emails that Satellite is sending. Email subject prefix Prefix to add to all outgoing email. Send welcome email No Send a welcome email including username and URL to new users. Delivery method Sendmail Method used to deliver email. SMTP enable StartTLS auto Yes SMTP automatically enables StartTLS. SMTP OpenSSL verify mode Default verification mode When using TLS, you can set how OpenSSL checks the certificate. SMTP address SMTP address to connect to. SMTP port 25 SMTP port to connect to. SMTP HELO/EHLO domain HELO/EHLO domain. SMTP username Username to use to authenticate, if required. SMTP password ***** Password to use to authenticate, if required. SMTP authentication none Specify authentication type, if required. Sendmail arguments -i Specify additional options to sendmail. Only used when the delivery method is set to sendmail. Sendmail location /usr/sbin/sendmail The location of the sendmail executable. Only used when the delivery method is set to sendmail. A.10. Notifications settings Setting Default Value Description RSS enable Yes Pull RSS notifications. RSS URL https://www.redhat.com/en/rss/blog/channel/red-hat-satellite URL from which to fetch RSS notifications. A.11. Provisioning settings Setting Default Value Description Host owner Default owner on provisioned hosts, if empty Satellite uses the current user. Root password ***** Default encrypted root password on provisioned hosts. Unattended URL URL that hosts retrieve templates from during the build. When it starts with https, unattended, or userdata, controllers cannot be accessed using HTTP. Safemode rendering Yes Enables safe mode rendering of provisioning templates. The default and recommended option Yes denies access to variables and any object that is not listed in Satellite. When set to No , any object may be accessed by a user with permission to use templating features, either by editing templates, parameters or smart variables. This permits users full remote code execution on Satellite Server, effectively disabling all authorization. This is not a safe option, especially in larger companies. Access unattended without build No Enable access to unattended URLs without build mode being used. 
Query local nameservers No Satellite queries the locally configured resolver instead of the SOA/NS authorities. Installation token lifetime 360 Time in minutes that installation tokens should be valid for. Set to 0 to disable the token. SSH timeout 120 Time in seconds before SSH provisioning times out. Libvirt default console address 0.0.0.0 The IP address that should be used for the console listen address when provisioning new virtual machines using libvirt. Update IP from built request No Satellite updates the host IP with the IP that made the build request. Use short name for VMs No Satellite uses the short hostname instead of the FQDN for creating new virtual machines. DNS timeout [5, 10, 15, 20] List of timeouts (in seconds) for DNS lookup attempts such as the dns_lookup macro and DNS record conflict validation. Clean up failed deployment Yes Satellite deletes the virtual machine if the provisioning script ends with a non-zero exit code. Type of name generator Random-based Specifies the method used to generate a hostname when creating a new host. The default Random-based option generates a unique random hostname which you can but do not have to use. This is useful for users who create many hosts and do not know how to name them. The MAC-based option is for bare-metal hosts only. If you delete a host and create it later on, it receives the same hostname based on the MAC address. This can be useful for users who recycle servers and want them to always get the same hostname. The Off option disables the name generator function and leaves the hostname field blank. Default PXE global template entry Default PXE menu item in a global template - local , discovery or custom, use blank for template default. Default PXE local template entry Default PXE menu item in local template - local , local_chain_hd0 , or custom, use blank for template default. iPXE intermediate script iPXE intermediate script Intermediate iPXE script for unattended installations. Destroy associated VM on host delete No Destroy associated VM on host delete. When enabled, VMs linked to hosts are deleted on Compute Resource. When disabled, VMs are unlinked when the host is deleted, meaning they remain on Compute Resource and can be re-associated or imported back to Satellite again. This does not automatically power off the VM Maximum structured facts 100 Maximum number of keys in structured subtree, statistics stored in satellite::dropped_subtree_facts . Default Global registration template Global Registration Global Registration template. Default 'Host initial configuration' template Linux host_init_config default Default 'Host initial configuration' template, automatically assigned when a new operating system is created. Global default PXEGrub2 template PXEGrub2 global default Global default PXEGrub2 template. This template is deployed to all configured TFTP servers. It is not affected by upgrades. Global default PXELinux template PXELinux global default Global default PXELinux template. This template is deployed to all configured TFTP servers. It is not affected by upgrades. Global default PXEGrub template PXEGrub global default Global default PXEGrub template. This template is deployed to all configured TFTP servers. It is not affected by upgrades. Global default iPXE template iPXE global default Global default iPXE template. This template is deployed to all configured TFTP servers. It is not affected by upgrades. 
Local boot PXEGrub2 template PXEGrub2 default local boot Template that is selected as PXEGrub2 default for local boot. Local boot PXELinux template PXELinux default local boot Template that is selected as PXELinux default for local boot. Local boot PXEGrub template PXEGrub default local boot Template that is selected as PXEGrub default for local boot. Local boot iPXE template iPXE default local boot Template that is selected as iPXE default for local boot. Manage PuppetCA Yes Satellite automates certificate signing upon provision of a new host. Use UUID for certificates No Satellite uses random UUIDs for certificate signing instead of hostnames. Show unsupported provisioning templates No Show unsupported provisioning templates. When enabled, all the available templates are shown. When disabled, only Red Hat supported templates are shown. A.12. Facts settings Setting Default Value Description Create new host when facts are uploaded Yes Satellite creates the host when new facts are received. Location fact satellite_location Hosts created after a Puppet run are placed in the location specified by this fact. Organization fact satellite_organization Hosts created after a Puppet run are placed in the organization specified by this fact. The content of this fact should be the full label of the organization. Default location Default Location Hosts created after a Puppet run that did not send a location fact are placed in this location. Default organization Default Organization Hosts created after a Puppet run that did not send an organization fact are placed in this organization. Update hostgroup from facts Yes Satellite updates a host's hostgroup from its facts. Ignore facts for operating system No Stop updating operating system from facts. Ignore facts for domain No Stop updating domain values from facts. Update subnets from facts None Satellite updates a host's subnet from its facts. Ignore interfaces facts for provisioning No Stop updating IP and MAC address values from facts (affects all interfaces). Ignore interfaces with matching identifier [ lo , en*v* , usb* , vnet* , macvtap* , ;vdsmdummy; , veth* , tap* , qbr* , qvb* , qvo* , qr-* , qg-* , vlinuxbr* , vovsbr* , br-int ] Skip creating or updating host network interfaces objects with identifiers matching these values from incoming facts. You can use a * wildcard to match identifiers with indexes, e.g. macvtap* . The ignored interface raw facts are still stored in the database, see the Exclude pattern setting for more details. Exclude pattern for facts stored in Satellite [ lo , en*v* , usb* , vnet* , macvtap* , ;vdsmdummy; , veth* , tap* , qbr* , qvb* , qvo* , qr-* , qg-* , vlinuxbr* , vovsbr* , br-int , load_averages::* , memory::swap::available* , memory::swap::capacity , memory::swap::used* , memory::system::available* , memory::system::capacity , memory::system::used* , memoryfree , memoryfree_mb , swapfree , swapfree_mb , uptime_hours , uptime_days ] Exclude pattern for all types of imported facts (Puppet, Ansible, rhsm). Those facts are not stored in the satellite database. You can use a * wildcard to match names with indexes, e.g. ignore* filters out ignore, ignore123 as well as a::ignore or even a::ignore123::b. A.13. Configuration management settings Setting Default Value Description Create new host when report is uploaded Yes Satellite creates the host when a report is received. 
Matchers inheritance Yes Satellite matchers are inherited by children when evaluating smart class parameters for hostgroups, organizations, and locations. Default parameters lookup path [ fqdn , hostgroup , os , domain ] Satellite evaluates host smart class parameters in this order by default. Interpolate ERB in parameters Yes Satellite parses ERB in parameters value in the ENC output. Always show configuration status No All hosts show a configuration status even when a Puppet Capsule is not assigned. A.14. Remote execution settings Setting Default Value Description Fallback to Any Capsule No Search the host for any proxy with Remote Execution. This is useful when the host has no subnet or the subnet does not have an execution proxy. Enable Global Capsule Yes Search for Remote Execution proxy outside of the proxies assigned to the host. The search is limited to the host's organization and location. SSH User root Default user to use for SSH. You can override per host by setting the remote_execution_ssh_user parameter. Effective User root Default user to use for executing the script. If the user differs from the SSH user, su or sudo is used to switch the user. Effective User Method sudo The command used to switch to the effective user. One of [ sudo , dzdo , su ] Effective user password ***** Effective user password. See Effective User . Sync Job Templates Yes Whether to sync templates from disk when running db:seed . SSH Port 22 Port to use for SSH communication. Default port 22. You can override per host by setting the remote_execution_ssh_port parameter. Connect by IP No Whether the IP addresses on host interfaces are preferred over the FQDN. It is useful when the DNS is not resolving the FQDNs properly. You can override this per host by setting the remote_execution_connect_by_ip parameter. For dual-stacked hosts, consider the remote_execution_connect_by_ip_prefer_ipv6 setting. Prefer IPv6 over IPv4 No When connecting using an IP address, are IPv6 addresses preferred? If no IPv6 address is set, it falls back to IPv4 automatically. You can override this per host by setting the remote_execution_connect_by_ip_prefer_ipv6 parameter. By default and for compatibility, IPv4 is preferred over IPv6. Default SSH password ***** Default password to use for SSH. You can override per host by setting the remote_execution_ssh_password parameter. Default SSH key passphrase ***** Default key passphrase to use for SSH. You can override per host by setting the remote_execution_ssh_key_passphrase parameter. Workers pool size 5 Number of workers in the pool to handle the execution of the remote execution jobs. Restart of the dynflowd/satellite-tasks service is required. Cleanup working directories Yes Whether working directories are removed after task completion. You can override this per host by setting the remote_execution_cleanup_working_dirs parameter. Cockpit URL Where to find the Cockpit instance for the Web Console button. By default, no button is shown. Form Job Template Run Command - SSH Default Choose a job template that is pre-selected in job invocation form. Job Invocation Report Template Jobs - Invocation report template Select a report template used for generating a report for a particular remote execution job. Time to pickup 86400 Time in seconds within which the host has to pick up a job. If the job is not picked up within this limit, the job will be cancelled. Applies only to pull-mqtt based jobs. Defaults to one day. A.15. 
Ansible settings Setting Default Value Description Private Key Path Use this to supply a path to an SSH Private Key that Ansible uses instead of a password. Override with the ansible_ssh_private_key_file host parameter. Connection type ssh Use this connection type by default when running Ansible playbooks. You can override this on hosts by adding the ansible_connection parameter. WinRM cert Validation validate Enable or disable WinRM server certificate validation when running Ansible playbooks. You can override this on hosts by adding the ansible_winrm_server_cert_validation parameter. Default verbosity level Disabled Satellite adds this level of verbosity for additional debugging output when running Ansible playbooks. Post-provision timeout 360 Timeout (in seconds) to set when Satellite triggers an Ansible roles task playbook after a host is fully provisioned. Set this to the maximum time you expect a host to take until it is ready after a reboot. Ansible report timeout 30 Timeout (in minutes) when hosts should have reported. Ansible out of sync disabled No Disable host configuration status turning to out of sync for Ansible after a report does not arrive within the configured interval. Default Ansible inventory report template Ansible - Ansible Inventory Satellite uses this template to schedule the report with Ansible inventory. Ansible roles to ignore [] The roles to exclude when importing roles from Capsule. The expected input is comma separated values and you can use * wildcard metacharacters. For example: foo* , *b* , *bar . Capsule tasks batch size for Ansible Number of tasks which should be sent to the Capsule in one request if satellite_tasks_proxy_batch_trigger is enabled. If set, it overrides satellite_tasks_proxy_batch_size setting for Ansible jobs. | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/administering_red_hat_satellite/administration_settings_admin |
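The settings in this appendix are normally changed in the Satellite web UI under Administer > Settings, but they can also be read and updated over the Satellite REST API. The sketch below is a minimal, hypothetical Java 11+ example of updating a single setting over HTTPS; the hostname, credentials, setting ID, and the /api/settings/:id endpoint with its {"setting": {"value": ...}} payload are assumptions based on standard Foreman API conventions rather than anything stated in this appendix, so verify them against the /apidoc documentation on your own Satellite before relying on them.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class UpdateSatelliteSetting {
    public static void main(String[] args) throws Exception {
        // Placeholders: replace with your Satellite URL, credentials, and the
        // numeric ID of the setting (the IDs can be listed via GET /api/settings).
        String satelliteUrl = "https://satellite.example.com";
        String settingId = "123";
        String newValue = "60";
        String basicAuth = Base64.getEncoder()
                .encodeToString("admin:changeme".getBytes(StandardCharsets.UTF_8));

        // Assumed Foreman-style payload: {"setting": {"value": "<new value>"}}
        String body = "{\"setting\": {\"value\": \"" + newValue + "\"}}";

        HttpRequest request = HttpRequest.newBuilder(
                URI.create(satelliteUrl + "/api/settings/" + settingId))
                .header("Authorization", "Basic " + basicAuth)
                .header("Content-Type", "application/json")
                .header("Accept", "application/json")
                .PUT(HttpRequest.BodyPublishers.ofString(body))
                .build();

        // The JVM truststore must trust the Satellite server CA, or the HTTPS
        // connection fails before the request is ever sent.
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}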
Chapter 9. Sending Binary Data with SOAP MTOM | Chapter 9. Sending Binary Data with SOAP MTOM Abstract SOAP Message Transmission Optimization Mechanism (MTOM) replaces SOAP with attachments as a mechanism for sending binary data as part of an XML message. Using MTOM with Apache CXF requires adding the correct schema types to a service's contract and enabling the MTOM optimizations. 9.1. Overview of MTOM SOAP Message Transmission Optimization Mechanism (MTOM) specifies an optimized method for sending binary data as part of a SOAP message. Unlike SOAP with Attachments, MTOM requires the use of XML-binary Optimized Packaging (XOP) packages for transmitting binary data. Using MTOM to send binary data does not require you to fully define the MIME Multipart/Related message as part of the SOAP binding. It does, however, require that you do the following: Annotate the data that you are going to send as an attachment. You can annotate either your WSDL or the Java class that implements your data. Enable the runtime's MTOM support. This can be done either programmatically or through configuration. Develop a DataHandler for the data being passed as an attachment. Note Developing DataHandler s is beyond the scope of this book. 9.2. Annotating Data Types to use MTOM Overview In WSDL, when defining a data type for passing along a block of binary data, such as an image file or a sound file, you define the element for the data to be of type xsd:base64Binary . By default, any element of type xsd:base64Binary results in the generation of a byte[] which can be serialized using MTOM. However, the default behavior of the code generators does not take full advantage of the serialization. In order to fully take advantage of MTOM you must add annotations to either your service's WSDL document or the JAXB class that implements the binary data structure. Adding the annotations to the WSDL document forces the code generators to generate streaming data handlers for the binary data. Annotating the JAXB class involves specifying the proper content types and might also involve changing the type specification of the field containing the binary data. WSDL first Example 9.1, "Message for MTOM" shows a WSDL document for a Web service that uses a message which contains one string field, one integer field, and a binary field. The binary field is intended to carry a large image file, so it is not appropriate to send it as part of a normal SOAP message. Example 9.1. Message for MTOM If you want to use MTOM to send the binary part of the message as an optimized attachment you must add the xmime:expectedContentTypes attribute to the element containing the binary data. This attribute is defined in the http://www.w3.org/2005/05/xmlmime namespace and specifies the MIME types that the element is expected to contain. You can specify a comma separated list of MIME types. The setting of this attribute changes how the code generators create the JAXB class for the data. For most MIME types, the code generator creates a DataHandler. Some MIME types, such as those for images, have defined mappings. Note The MIME types are maintained by the Internet Assigned Numbers Authority(IANA) and are described in detail in Multipurpose Internet Mail Extensions (MIME) Part One: Format of Internet Message Bodies and Multipurpose Internet Mail Extensions (MIME) Part Two: Media Types . For most uses you specify application/octet-stream . 
Example 9.2, "Binary Data for MTOM" shows how you can modify xRayType from Example 9.1, "Message for MTOM" for using MTOM. Example 9.2. Binary Data for MTOM The generated JAXB class generated for xRayType no longer contains a byte[] . Instead the code generator sees the xmime:expectedContentTypes attribute and generates a DataHandler for the imageData field. Note You do not need to change the binding element to use MTOM. The runtime makes the appropriate changes when the data is sent. Java first If you are doing Java first development you can make your JAXB class MTOM ready by doing the following: Make sure the field holding the binary data is a DataHandler. Add the @XmlMimeType() annotation to the field containing the data you want to stream as an MTOM attachment. Example 9.3, "JAXB Class for MTOM" shows a JAXB class annotated for using MTOM. Example 9.3. JAXB Class for MTOM 9.3. Enabling MTOM By default the Apache CXF runtime does not enable MTOM support. It sends all binary data as either part of the normal SOAP message or as an unoptimized attachment. You can activate MTOM support either programmatically or through the use of configuration. 9.3.1. Using JAX-WS APIs Overview Both service providers and consumers must have the MTOM optimizations enabled. The JAX-WS APIs offer different mechanisms for each type of endpoint. Service provider If you published your service provider using the JAX-WS APIs you enable the runtime's MTOM support as follows: Access the Endpoint object for your published service. The easiest way to access the Endpoint object is when you publish the endpoint. For more information see Chapter 31, Publishing a Service . Get the SOAP binding from the Endpoint using its getBinding() method, as shown in Example 9.4, "Getting the SOAP Binding from an Endpoint" . Example 9.4. Getting the SOAP Binding from an Endpoint You must cast the returned binding object to a SOAPBinding object to access the MTOM property. Set the binding's MTOM enabled property to true using the binding's setMTOMEnabled() method, as shown in Example 9.5, "Setting a Service Provider's MTOM Enabled Property" . Example 9.5. Setting a Service Provider's MTOM Enabled Property Consumer To MTOM enable a JAX-WS consumer you must do the following: Cast the consumer's proxy to a BindingProvider object. For information on getting a consumer proxy see Chapter 25, Developing a Consumer Without a WSDL Contract or Chapter 28, Developing a Consumer From a WSDL Contract . Get the SOAP binding from the BindingProvider using its getBinding() method, as shown in Example 9.6, "Getting a SOAP Binding from a BindingProvider " . Example 9.6. Getting a SOAP Binding from a BindingProvider Set the bindings MTOM enabled property to true using the binding's setMTOMEnabled() method, as shown in Example 9.7, "Setting a Consumer's MTOM Enabled Property" . Example 9.7. Setting a Consumer's MTOM Enabled Property 9.3.2. Using configuration Overview If you publish your service using XML, such as when deploying to a container, you can enable your endpoint's MTOM support in the endpoint's configuration file. For more information on configuring endpoint's see Part IV, "Configuring Web Service Endpoints" . Procedure The MTOM property is set inside the jaxws:endpoint element for your endpoint. To enable MTOM do the following: Add a jaxws:property child element to the endpoint's jaxws:endpoint element. Add a entry child element to the jaxws:property element. Set the entry element's key attribute to mtom-enabled . 
Set the entry element's value attribute to true . Example Example 9.8, "Configuration for Enabling MTOM" shows an endpoint that is MTOM enabled. Example 9.8. Configuration for Enabling MTOM | [
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <definitions name=\"XrayStorage\" targetNamespace=\"http://mediStor.org/x-rays\" xmlns=\"http://schemas.xmlsoap.org/wsdl/\" xmlns:tns=\"http://mediStor.org/x-rays\" xmlns:soap12=\"http://schemas.xmlsoap.org/wsdl/soap12/\" xmlns:xsd1=\"http://mediStor.org/types/\" xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\"> <types> <schema targetNamespace=\"http://mediStor.org/types/\" xmlns=\"http://www.w3.org/2001/XMLSchema\"> <complexType name=\"xRayType\"> <sequence> <element name=\"patientName\" type=\"xsd:string\" /> <element name=\"patientNumber\" type=\"xsd:int\" /> <element name=\"imageData\" type=\"xsd:base64Binary\" /> </sequence> </complexType> <element name=\"xRay\" type=\"xsd1:xRayType\" /> </schema> </types> <message name=\"storRequest\"> <part name=\"record\" element=\"xsd1:xRay\"/> </message> <message name=\"storResponse\"> <part name=\"success\" type=\"xsd:boolean\"/> </message> <portType name=\"xRayStorage\"> <operation name=\"store\"> <input message=\"tns:storRequest\" name=\"storRequest\"/> <output message=\"tns:storResponse\" name=\"storResponse\"/> </operation> </portType> <binding name=\"xRayStorageSOAPBinding\" type=\"tns:xRayStorage\"> <soap12:binding style=\"document\" transport=\"http://schemas.xmlsoap.org/soap/http\"/> <operation name=\"store\"> <soap12:operation soapAction=\"\" style=\"document\"/> <input name=\"storRequest\"> <soap12:body use=\"literal\"/> </input> <output name=\"storResponse\"> <soap12:body use=\"literal\"/> </output> </operation> </binding> </definitions>",
"<types> <schema targetNamespace=\"http://mediStor.org/types/\" xmlns=\"http://www.w3.org/2001/XMLSchema\" xmlns:xmime=\"http://www.w3.org/2005/05/xmlmime\"> <complexType name=\"xRayType\"> <sequence> <element name=\"patientName\" type=\"xsd:string\" /> <element name=\"patientNumber\" type=\"xsd:int\" /> <element name=\"imageData\" type=\"xsd:base64Binary\" xmime:expectedContentTypes=\"application/octet-stream\" /> </sequence> </complexType> <element name=\"xRay\" type=\"xsd1:xRayType\" /> </schema> </types>",
"@XmlType public class XRayType { protected String patientName; protected int patientNumber; @XmlMimeType(\"application/octet-stream\") protected DataHandler imageData; }",
"// Endpoint ep is declared previously SOAPBinding binding = (SOAPBinding)ep.getBinding();",
"binding.setMTOMEnabled(true);",
"// BindingProvider bp declared previously SOAPBinding binding = (SOAPBinding)bp.getBinding();",
"binding.setMTOMEnabled(true);",
"<beans xmlns=\"http://www.springframework.org/schema/beans\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:jaxws=\"http://cxf.apache.org/jaxws\" xsi:schemaLocation=\"http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd http://cxf.apache.org/jaxws http://cxf.apache.org/schema/jaxws.xsd\"> <jaxws:endpoint id=\"xRayStorage\" implementor=\"demo.spring.xRayStorImpl\" address=\"http://localhost/xRayStorage\"> <jaxws:properties> <entry key=\"mtom-enabled\" value=\"true\"/> </jaxws:properties> </jaxws:endpoint> </beans>"
]
| https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/fusecxfmtom |
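As a consolidated illustration of the consumer-side steps in this chapter, the following is a minimal sketch of a JAX-WS client that enables MTOM programmatically (as in Example 9.6 and Example 9.7) and sends the binary image as a streamed DataHandler. The XRayStorageService and XRayStorage classes, the getXRayStoragePort() accessor, and the setter names on XRayType are assumed to be the artifacts generated from the WSDL in Example 9.1; adjust those names and the store() signature to match your generated code.
import javax.activation.DataHandler;
import javax.activation.FileDataSource;
import javax.xml.ws.BindingProvider;
import javax.xml.ws.soap.SOAPBinding;

public class XRayStorageClient {
    public static void main(String[] args) {
        // Assumed wsdl2java-generated service class and service endpoint interface.
        XRayStorageService service = new XRayStorageService();
        XRayStorage port = service.getXRayStoragePort();

        // Enable MTOM on the consumer side, as shown in Example 9.6 and Example 9.7.
        SOAPBinding binding = (SOAPBinding) ((BindingProvider) port).getBinding();
        binding.setMTOMEnabled(true);

        // Because imageData is a DataHandler (Example 9.3), the image bytes are
        // sent as an XOP/MTOM attachment instead of inline base64 text.
        XRayType record = new XRayType();
        record.setPatientName("Jane Doe");
        record.setPatientNumber(42);
        record.setImageData(new DataHandler(new FileDataSource("/tmp/scan.jpg")));

        boolean stored = port.store(record);
        System.out.println("Stored: " + stored);
    }
}
On the provider side the equivalent switch is the single setMTOMEnabled(true) call on the published endpoint's SOAPBinding, or the mtom-enabled property shown in Example 9.8 when the endpoint is configured in XML.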
Chapter 11. Managing TLS certificates | Chapter 11. Managing TLS certificates AMQ Streams supports encrypted communication between the Kafka and AMQ Streams components using the TLS protocol. Communication between Kafka brokers (interbroker communication), between ZooKeeper nodes (internodal communication), and between those components and the AMQ Streams operators is always encrypted. Communication between Kafka clients and Kafka brokers is encrypted according to how the cluster is configured. For the Kafka and AMQ Streams components, TLS certificates are also used for authentication. The Cluster Operator automatically sets up and renews TLS certificates to enable encryption and authentication within your cluster. It also sets up other TLS certificates if you want to enable encryption or TLS authentication between Kafka brokers and clients. Certificates provided by users are not renewed. You can provide your own server certificates, called Kafka listener certificates , for TLS listeners or external listeners which have TLS encryption enabled. For more information, see Section 11.7, "Kafka listener certificates" . Figure 11.1. Example architecture of the communication secured by TLS 11.1. Certificate Authorities To support encryption, each AMQ Streams component needs its own private keys and public key certificates. All component certificates are signed by an internal Certificate Authority (CA) called the cluster CA . Similarly, each Kafka client application connecting to AMQ Streams using TLS client authentication needs to provide private keys and certificates. A second internal CA, named the clients CA , is used to sign certificates for the Kafka clients. 11.1.1. CA certificates Both the cluster CA and clients CA have a self-signed public key certificate. Kafka brokers are configured to trust certificates signed by either the cluster CA or clients CA. Components that clients do not need to connect to, such as ZooKeeper, only trust certificates signed by the cluster CA. Unless TLS encryption for external listeners is disabled, client applications must trust certificates signed by the cluster CA. This is also true for client applications that perform mutual TLS authentication . By default, AMQ Streams automatically generates and renews CA certificates issued by the cluster CA or clients CA. You can configure the management of these CA certificates in the Kafka.spec.clusterCa and Kafka.spec.clientsCa objects. Certificates provided by users are not renewed. You can provide your own CA certificates for the cluster CA or clients CA. For more information, see Section 11.1.2, "Installing your own CA certificates" . If you provide your own certificates, you must manually renew them when needed. 11.1.2. Installing your own CA certificates This procedure describes how to install your own CA certificates and keys instead of using the CA certificates and private keys generated by the Cluster Operator. You can use this procedure to install your own cluster or client CA certificates. The procedure describes renewal of CA certificates in PEM format. You can also use certificates in PKCS #12 format. Prerequisites The Cluster Operator is running. A Kafka cluster is not yet deployed. Your own X.509 certificates and keys in PEM format for the cluster CA or clients CA. If you want to use a cluster or clients CA which is not a Root CA, you have to include the whole chain in the certificate file. 
The chain should be in the following order: The cluster or clients CA One or more intermediate CAs The root CA All CAs in the chain should be configured as a CA in the X509v3 Basic Constraints. Procedure Put your CA certificate in the corresponding Secret . Delete the existing secret: oc delete secret CA-CERTIFICATE-SECRET CA-CERTIFICATE-SECRET is the name of the Secret , which is CLUSTER-NAME -cluster-ca-cert for the cluster CA certificate and CLUSTER-NAME -clients-ca-cert for the clients CA certificate. Ignore any "Not Exists" errors. Create and label the new secret oc create secret generic CA-CERTIFICATE-SECRET --from-file=ca.crt= CA-CERTIFICATE-FILENAME Put your CA key in the corresponding Secret . Delete the existing secret: oc delete secret CA-KEY-SECRET CA-KEY-SECRET is the name of CA key, which is CLUSTER-NAME -cluster-ca for the cluster CA key and CLUSTER-NAME -clients-ca for the clients CA key. Create the new secret: oc create secret generic CA-KEY-SECRET --from-file=ca.key= CA-KEY-SECRET-FILENAME Label the secrets with the labels strimzi.io/kind=Kafka and strimzi.io/cluster= CLUSTER-NAME : oc label secret CA-CERTIFICATE-SECRET strimzi.io/kind=Kafka strimzi.io/cluster= CLUSTER-NAME oc label secret CA-KEY-SECRET strimzi.io/kind=Kafka strimzi.io/cluster= CLUSTER-NAME Create the Kafka resource for your cluster, configuring either the Kafka.spec.clusterCa or the Kafka.spec.clientsCa object to not use generated CAs: Example fragment Kafka resource configuring the cluster CA to use certificates you supply for yourself kind: Kafka version: kafka.strimzi.io/v1beta2 spec: # ... clusterCa: generateCertificateAuthority: false Additional resources To renew CA certificates you have previously installed, see Section 11.3.5, "Renewing your own CA certificates" . Section 11.7.1, "Providing your own Kafka listener certificates" . 11.2. Secrets AMQ Streams uses Secrets to store private keys and certificates for Kafka cluster components and clients. Secrets are used for establishing TLS encrypted connections between Kafka brokers, and between brokers and clients. They are also used for mutual TLS authentication. A Cluster Secret contains a cluster CA certificate to sign Kafka broker certificates, and is used by a connecting client to establish a TLS encrypted connection with the Kafka cluster to validate broker identity. A Client Secret contains a client CA certificate for a user to sign its own client certificate to allow mutual authentication against the Kafka cluster. The broker validates the client identity through the client CA certificate itself. A User Secret contains a private key and certificate, which are generated and signed by the client CA certificate when a new user is created. The key and certificate are used for authentication and authorization when accessing the cluster. Secrets provide private keys and certificates in PEM and PKCS #12 formats. Using private keys and certificates in PEM format means that users have to get them from the Secrets, and generate a corresponding truststore (or keystore) to use in their Java applications. PKCS #12 storage provides a truststore (or keystore) that can be used directly. All keys are 2048 bits in size. 11.2.1. PKCS #12 storage PKCS #12 defines an archive file format ( .p12 ) for storing cryptography objects into a single file with password protection. You can use PKCS #12 to manage certificates and keys in one place. Each Secret contains fields specific to PKCS #12. The .p12 field contains the certificates and keys. 
The .password field is the password that protects the archive. 11.2.2. Cluster CA Secrets The following tables describe the Cluster Secrets that are managed by the Cluster Operator in a Kafka cluster. Only the <cluster> -cluster-ca-cert Secret needs to be used by clients. All other Secrets described only need to be accessed by the AMQ Streams components. You can enforce this using OpenShift role-based access controls, if necessary. Table 11.1. Fields in the <cluster>-cluster-ca Secret Field Description ca.key The current private key for the cluster CA. Table 11.2. Fields in the <cluster>-cluster-ca-cert Secret Field Description ca.p12 PKCS #12 archive file for storing certificates and keys. ca.password Password for protecting the PKCS #12 archive file. ca.crt The current certificate for the cluster CA. Note The CA certificates in <cluster> -cluster-ca-cert must be trusted by Kafka client applications so that they validate the Kafka broker certificates when connecting to Kafka brokers over TLS. Table 11.3. Fields in the <cluster>-kafka-brokers Secret Field Description <cluster> -kafka- <num> .p12 PKCS #12 archive file for storing certificates and keys. <cluster> -kafka- <num> .password Password for protecting the PKCS #12 archive file. <cluster> -kafka- <num> .crt Certificate for Kafka broker pod <num> . Signed by a current or former cluster CA private key in <cluster> -cluster-ca . <cluster> -kafka- <num> .key Private key for Kafka broker pod <num> . Table 11.4. Fields in the <cluster>-zookeeper-nodes Secret Field Description <cluster> -zookeeper- <num> .p12 PKCS #12 archive file for storing certificates and keys. <cluster> -zookeeper- <num> .password Password for protecting the PKCS #12 archive file. <cluster> -zookeeper- <num> .crt Certificate for ZooKeeper node <num> . Signed by a current or former cluster CA private key in <cluster> -cluster-ca . <cluster> -zookeeper- <num> .key Private key for ZooKeeper pod <num> . Table 11.5. Fields in the <cluster>-entity-operator-certs Secret Field Description entity-operator_.p12 PKCS #12 archive file for storing certificates and keys. entity-operator_.password Password for protecting the PKCS #12 archive file. entity-operator_.crt Certificate for TLS communication between the Entity Operator and Kafka or ZooKeeper. Signed by a current or former cluster CA private key in <cluster> -cluster-ca . entity-operator.key Private key for TLS communication between the Entity Operator and Kafka or ZooKeeper. 11.2.3. Client CA Secrets Table 11.6. Clients CA Secrets managed by the Cluster Operator in <cluster> Secret name Field within Secret Description <cluster> -clients-ca ca.key The current private key for the clients CA. <cluster> -clients-ca-cert ca.p12 PKCS #12 archive file for storing certificates and keys. ca.password Password for protecting the PKCS #12 archive file. ca.crt The current certificate for the clients CA. The certificates in <cluster> -clients-ca-cert are those which the Kafka brokers trust. Note <cluster> -clients-ca is used to sign certificates of client applications. It needs to be accessible to the AMQ Streams components and for administrative access if you are intending to issue application certificates without using the User Operator. You can enforce this using OpenShift role-based access controls if necessary. 11.2.4. 
Adding labels and annotations to Secrets By configuring the clusterCaCert template property in the Kafka custom resource, you can add custom labels and annotations to the Cluster CA Secrets created by the Cluster Operator. Labels and annotations are useful for identifying objects and adding contextual information. You configure template properties in AMQ Streams custom resources. Example template customization to add labels and annotations to Secrets apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... template: clusterCaCert: metadata: labels: label1: value1 label2: value2 annotations: annotation1: value1 annotation2: value2 # ... For more information on configuring template properties, see Section 2.6, "Customizing OpenShift resources" . 11.2.5. Disabling ownerReference in the CA Secrets By default, the Cluster and Client CA Secrets are created with an ownerReference property that is set to the Kafka custom resource. This means that, when the Kafka custom resource is deleted, the CA secrets are also deleted (garbage collected) by OpenShift. If you want to reuse the CA for a new cluster, you can disable the ownerReference by setting the generateSecretOwnerReference property for the Cluster and Client CA Secrets to false in the Kafka configuration. When the ownerReference is disabled, CA Secrets are not deleted by OpenShift when the corresponding Kafka custom resource is deleted. Example Kafka configuration with disabled ownerReference for Cluster and Client CAs apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka # ... spec: # ... clusterCa: generateSecretOwnerReference: false clientsCa: generateSecretOwnerReference: false # ... Additional resources CertificateAuthority schema reference 11.2.6. User Secrets Table 11.7. Secrets managed by the User Operator Secret name Field within Secret Description <user> user.p12 PKCS #12 archive file for storing certificates and keys. user.password Password for protecting the PKCS #12 archive file. user.crt Certificate for the user, signed by the clients CA user.key Private key for the user 11.3. Certificate renewal and validity periods Cluster CA and clients CA certificates are only valid for a limited time period, known as the validity period. This is usually defined as a number of days since the certificate was generated. For CA certificates automatically created by the Cluster Operator, you can configure the validity period of: Cluster CA certificates in Kafka.spec.clusterCa.validityDays Client CA certificates in Kafka.spec.clientsCa.validityDays The default validity period for both certificates is 365 days. Manually-installed CA certificates should have their own validity periods defined. When a CA certificate expires, components and clients that still trust that certificate will not accept TLS connections from peers whose certificates were signed by the CA private key. The components and clients need to trust the new CA certificate instead. To allow the renewal of CA certificates without a loss of service, the Cluster Operator will initiate certificate renewal before the old CA certificates expire. You can configure the renewal period of the certificates created by the Cluster Operator: Cluster CA certificates in Kafka.spec.clusterCa.renewalDays Client CA certificates in Kafka.spec.clientsCa.renewalDays The default renewal period for both certificates is 30 days. The renewal period is measured backwards, from the expiry date of the current certificate. 
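Because the cluster CA certificate and the user credentials are both published as PKCS #12 archives, a Java client can point its SSL truststore and keystore directly at the ca.p12 and user.p12 files described in the Secret tables above. The following is a minimal sketch of a Kafka producer configured for mutual TLS; the bootstrap address, file paths, topic name, and literal password placeholders are assumptions for illustration, and the passwords are the ca.password and user.password values from the respective Secrets. When a CA certificate is renewed, the file behind ssl.truststore.location must be refreshed (for example, by re-mounting the Secret) or the client stops trusting the brokers.
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class MutualTlsProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder bootstrap address of a TLS-enabled listener.
        props.put("bootstrap.servers", "my-cluster-kafka-bootstrap:9093");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        // Trust the cluster CA: ca.p12 and ca.password come from the
        // <cluster>-cluster-ca-cert Secret.
        props.put("security.protocol", "SSL");
        props.put("ssl.truststore.type", "PKCS12");
        props.put("ssl.truststore.location", "/opt/certs/ca.p12");
        props.put("ssl.truststore.password", "<value of ca.password>");

        // Authenticate with mutual TLS: user.p12 and user.password come from
        // the user Secret created by the User Operator.
        props.put("ssl.keystore.type", "PKCS12");
        props.put("ssl.keystore.location", "/opt/certs/user.p12");
        props.put("ssl.keystore.password", "<value of user.password>");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", "key", "value"));
            producer.flush();
        }
    }
}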
Validity period against renewal period To make a change to the validity and renewal periods after creating the Kafka cluster, you configure and apply the Kafka custom resource, and manually renew the CA certificates . If you do not manually renew the certificates, the new periods will be used the next time the certificate is renewed automatically. Example Kafka configuration for certificate validity and renewal periods apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka # ... spec: # ... clusterCa: renewalDays: 30 validityDays: 365 generateCertificateAuthority: true clientsCa: renewalDays: 30 validityDays: 365 generateCertificateAuthority: true # ... The behavior of the Cluster Operator during the renewal period depends on the settings for the certificate generation properties, Kafka.spec.clusterCa.generateCertificateAuthority and Kafka.spec.clientsCa.generateCertificateAuthority . true If the properties are set to true , a CA certificate is generated automatically by the Cluster Operator, and renewed automatically within the renewal period. false If the properties are set to false , a CA certificate is not generated by the Cluster Operator. Use this option if you are installing your own certificates . 11.3.1. Renewal process with automatically generated CA certificates The Cluster Operator performs the following process to renew CA certificates: Generate a new CA certificate, but retain the existing key. The new certificate replaces the old one with the name ca.crt within the corresponding Secret . Generate new client certificates (for ZooKeeper nodes, Kafka brokers, and the Entity Operator). This is not strictly necessary because the signing key has not changed, but it keeps the validity period of the client certificate in sync with the CA certificate. Restart ZooKeeper nodes so that they will trust the new CA certificate and use the new client certificates. Restart Kafka brokers so that they will trust the new CA certificate and use the new client certificates. Restart the Topic and User Operators so that they will trust the new CA certificate and use the new client certificates. 11.3.2. Client certificate renewal The Cluster Operator is not aware of the client applications using the Kafka cluster. When connecting to the cluster, and to ensure they operate correctly, client applications must: Trust the cluster CA certificate published in the <cluster> -cluster-ca-cert Secret. Use the credentials published in their <user-name> Secret to connect to the cluster. The User Secret provides credentials in PEM and PKCS #12 format, or it can provide a password when using SCRAM-SHA authentication. The User Operator creates the user credentials when a user is created. You must ensure clients continue to work after certificate renewal. The renewal process depends on how the clients are configured. If you are provisioning client certificates and keys manually, you must generate new client certificates and ensure the new certificates are used by clients within the renewal period. Failure to do this by the end of the renewal period could result in client applications being unable to connect to the cluster. Note For workloads running inside the same OpenShift cluster and namespace, Secrets can be mounted as a volume so the client Pods construct their keystores and truststores from the current state of the Secrets. For more details on this procedure, see Configuring internal clients to trust the cluster CA . 11.3.3.
Manually renewing the CA certificates generated by the Cluster Operator Cluster and clients CA certificates generated by the Cluster Operator auto-renew at the start of their respective certificate renewal periods. However, you can use the strimzi.io/force-renew annotation to manually renew one or both of these certificates before the certificate renewal period starts. You might do this for security reasons, or if you have changed the renewal or validity periods for the certificates . A renewed certificate uses the same private key as the old certificate. Note If you are using your own CA certificates, the force-renew annotation cannot be used. Instead, follow the procedure for renewing your own CA certificates . Prerequisites The Cluster Operator is running. A Kafka cluster in which CA certificates and private keys are installed. Procedure Apply the strimzi.io/force-renew annotation to the Secret that contains the CA certificate that you want to renew. Table 11.8. Annotation for the Secret that forces renewal of certificates Certificate Secret Annotate command Cluster CA KAFKA-CLUSTER-NAME -cluster-ca-cert oc annotate secret KAFKA-CLUSTER-NAME -cluster-ca-cert strimzi.io/force-renew=true Clients CA KAFKA-CLUSTER-NAME -clients-ca-cert oc annotate secret KAFKA-CLUSTER-NAME -clients-ca-cert strimzi.io/force-renew=true At the reconciliation the Cluster Operator will generate a new CA certificate for the Secret that you annotated. If maintenance time windows are configured, the Cluster Operator will generate the new CA certificate at the first reconciliation within the maintenance time window. Client applications must reload the cluster and clients CA certificates that were renewed by the Cluster Operator. Check the period the CA certificate is valid: For example, using an openssl command: oc get secret CA-CERTIFICATE-SECRET -o 'jsonpath={.data. CA-CERTIFICATE }' | base64 -d | openssl x509 -subject -issuer -startdate -enddate -noout CA-CERTIFICATE-SECRET is the name of the Secret , which is KAFKA-CLUSTER-NAME -cluster-ca-cert for the cluster CA certificate and KAFKA-CLUSTER-NAME -clients-ca-cert for the clients CA certificate. CA-CERTIFICATE is the name of the CA certificate, such as jsonpath={.data.ca\.crt} . The command returns a notBefore and notAfter date, which is the validity period for the CA certificate. For example, for a cluster CA certificate: subject=O = io.strimzi, CN = cluster-ca v0 issuer=O = io.strimzi, CN = cluster-ca v0 notBefore=Jun 30 09:43:54 2020 GMT notAfter=Jun 30 09:43:54 2021 GMT Delete old certificates from the Secret. When components are using the new certificates, older certificates might still be active. Delete the old certificates to remove any potential security risk. Additional resources Section 11.2, "Secrets" Section 2.1.5, "Maintenance time windows for rolling updates" Section 13.2.49, " CertificateAuthority schema reference" 11.3.4. Replacing private keys used by the CA certificates generated by the Cluster Operator You can replace the private keys used by the cluster CA and clients CA certificates generated by the Cluster Operator. When a private key is replaced, the Cluster Operator generates a new CA certificate for the new private key. Note If you are using your own CA certificates, the force-replace annotation cannot be used. Instead, follow the procedure for renewing your own CA certificates . Prerequisites The Cluster Operator is running. A Kafka cluster in which CA certificates and private keys are installed. 
Procedure Apply the strimzi.io/force-replace annotation to the Secret that contains the private key that you want to renew. Table 11.9. Commands for replacing private keys Private key for Secret Annotate command Cluster CA CLUSTER-NAME -cluster-ca oc annotate secret CLUSTER-NAME -cluster-ca strimzi.io/force-replace=true Clients CA CLUSTER-NAME -clients-ca oc annotate secret CLUSTER-NAME -clients-ca strimzi.io/force-replace=true At the next reconciliation the Cluster Operator will: Generate a new private key for the Secret that you annotated Generate a new CA certificate If maintenance time windows are configured, the Cluster Operator will generate the new private key and CA certificate at the first reconciliation within the maintenance time window. Client applications must reload the cluster and clients CA certificates that were renewed by the Cluster Operator. Additional resources Section 11.2, "Secrets" Section 2.1.5, "Maintenance time windows for rolling updates" 11.3.5. Renewing your own CA certificates This procedure describes how to renew CA certificates and keys you installed yourself, instead of using the certificates generated by the Cluster Operator. If you are using your own certificates, the Cluster Operator will not renew them automatically. Therefore, it is important that you follow this procedure during the renewal period of the certificate in order to replace CA certificates that will soon expire. The procedure describes the renewal of CA certificates in PEM format. You can also use certificates in PKCS #12 format. Prerequisites The Cluster Operator is running. Your own CA certificates and private keys are installed . You have new cluster and clients X.509 certificates and keys in PEM format. These could be generated using an openssl command, such as: openssl req -x509 -new -days NUMBER-OF-DAYS-VALID --nodes -out ca.crt -keyout ca.key Procedure Check the details of the current CA certificates in the Secret : oc describe secret CA-CERTIFICATE-SECRET CA-CERTIFICATE-SECRET is the name of the Secret , which is KAFKA-CLUSTER-NAME -cluster-ca-cert for the cluster CA certificate and KAFKA-CLUSTER-NAME -clients-ca-cert for the clients CA certificate. Create a directory to contain the existing CA certificates in the secret. mkdir new-ca-cert-secret cd new-ca-cert-secret Fetch the secret for each CA certificate you wish to renew: oc get secret CA-CERTIFICATE-SECRET -o 'jsonpath={.data. CA-CERTIFICATE }' | base64 -d > CA-CERTIFICATE Replace CA-CERTIFICATE with the name of each CA certificate. Rename the old ca.crt file as ca- DATE .crt , where DATE is the certificate expiry date in the format YEAR-MONTH-DAY T HOUR-MINUTE-SECOND Z , as in the example ca-2018-09-27T17-32-00Z.crt . mv ca.crt ca-$(date -u -d"$(openssl x509 -enddate -noout -in ca.crt | sed 's/.*=//')" +'%Y-%m-%dT%H-%M-%SZ').crt Copy your new CA certificate into the directory, naming it ca.crt : cp PATH-TO-NEW-CERTIFICATE ca.crt Put your CA certificate in the corresponding Secret . Delete the existing secret: oc delete secret CA-CERTIFICATE-SECRET CA-CERTIFICATE-SECRET is the name of the Secret , as returned in the first step. Ignore any "Not Exists" errors. Recreate the secret: oc create secret generic CA-CERTIFICATE-SECRET --from-file=. Delete the directory you created: cd .. rm -r new-ca-cert-secret Put your CA key in the corresponding Secret .
Delete the existing secret: oc delete secret CA-KEY-SECRET CA-KEY-SECRET is the name of CA key, which is KAFKA-CLUSTER-NAME -cluster-ca for the cluster CA key and KAFKA-CLUSTER-NAME -clients-ca for the clients CA key. Recreate the secret with the new CA key: oc create secret generic CA-KEY-SECRET --from-file=ca.key= CA-KEY-SECRET-FILENAME Label the secrets with the labels strimzi.io/kind=Kafka and strimzi.io/cluster= KAFKA-CLUSTER-NAME : oc label secret CA-CERTIFICATE-SECRET strimzi.io/kind=Kafka strimzi.io/cluster= KAFKA-CLUSTER-NAME oc label secret CA-KEY-SECRET strimzi.io/kind=Kafka strimzi.io/cluster= KAFKA-CLUSTER-NAME 11.4. TLS connections 11.4.1. ZooKeeper communication Communication between the ZooKeeper nodes on all ports, as well as between clients and ZooKeeper, is encrypted using TLS. Communication between Kafka brokers and ZooKeeper nodes is also encrypted. 11.4.2. Kafka inter-broker communication Communication between Kafka brokers is always encrypted using TLS. Unless the ControlPlaneListener feature gate is enabled, all inter-broker communication goes through an internal listener on port 9091. If you enable the feature gate, traffic from the control plane goes through an internal control plane listener on port 9090. Traffic from the data plane continues to use the existing internal listener on port 9091. These internal listeners are not available to Kafka clients. 11.4.3. Topic and User Operators All Operators use encryption for communication with both Kafka and ZooKeeper. In Topic and User Operators, a TLS sidecar is used when communicating with ZooKeeper. 11.4.4. Cruise Control Cruise Control uses encryption for communication with both Kafka and ZooKeeper. A TLS sidecar is used when communicating with ZooKeeper. 11.4.5. Kafka Client connections Encrypted or unencrypted communication between Kafka brokers and clients is configured using the tls property for spec.kafka.listeners . 11.5. Configuring internal clients to trust the cluster CA This procedure describes how to configure a Kafka client that resides inside the OpenShift cluster - connecting to a TLS listener - to trust the cluster CA certificate. The easiest way to achieve this for an internal client is to use a volume mount to access the Secrets containing the necessary certificates and keys. Follow the steps to configure trust certificates that are signed by the cluster CA for Java-based Kafka Producer, Consumer, and Streams APIs. Choose the steps to follow according to the certificate format of the cluster CA: PKCS #12 ( .p12 ) or PEM ( .crt ). The steps describe how to mount the Cluster Secret that verifies the identity of the Kafka cluster to the client pod. Prerequisites The Cluster Operator must be running. There needs to be a Kafka resource within the OpenShift cluster. You need a Kafka client application inside the OpenShift cluster that will connect using TLS, and needs to trust the cluster CA certificate. The client application must be running in the same namespace as the Kafka resource. Using PKCS #12 format (.p12) Mount the cluster Secret as a volume when defining the client pod. 
For example: kind: Pod apiVersion: v1 metadata: name: client-pod spec: containers: - name: client-name image: client-name volumeMounts: - name: secret-volume mountPath: /data/p12 env: - name: SECRET_PASSWORD valueFrom: secretKeyRef: name: my-secret key: my-password volumes: - name: secret-volume secret: secretName: my-cluster-cluster-ca-cert Here we're mounting: The PKCS #12 file into an exact path, which can be configured The password into an environment variable, where it can be used for Java configuration Configure the Kafka client with the following properties: A security protocol option: security.protocol: SSL when using TLS for encryption (with or without TLS authentication). security.protocol: SASL_SSL when using SCRAM-SHA authentication over TLS. ssl.truststore.location with the truststore location where the certificates were imported. ssl.truststore.password with the password for accessing the truststore. ssl.truststore.type=PKCS12 to identify the truststore type. Using PEM format (.crt) Mount the cluster Secret as a volume when defining the client pod. For example: kind: Pod apiVersion: v1 metadata: name: client-pod spec: containers: - name: client-name image: client-name volumeMounts: - name: secret-volume mountPath: /data/crt volumes: - name: secret-volume secret: secretName: my-cluster-cluster-ca-cert Use the certificate with clients that use certificates in X.509 format. 11.6. Configuring external clients to trust the cluster CA This procedure describes how to configure a Kafka client that resides outside the OpenShift cluster - connecting to an external listener - to trust the cluster CA certificate. Follow this procedure when setting up the client and during the renewal period, when the old clients CA certificate is replaced. Follow the steps to configure trust certificates that are signed by the cluster CA for Java-based Kafka Producer, Consumer, and Streams APIs. Choose the steps to follow according to the certificate format of the cluster CA: PKCS #12 ( .p12 ) or PEM ( .crt ). The steps describe how to obtain the certificate from the Cluster Secret that verifies the identity of the Kafka cluster. Important The <cluster-name> -cluster-ca-cert Secret will contain more than one CA certificate during the CA certificate renewal period. Clients must add all of them to their truststores. Prerequisites The Cluster Operator must be running. There needs to be a Kafka resource within the OpenShift cluster. You need a Kafka client application outside the OpenShift cluster that will connect using TLS, and needs to trust the cluster CA certificate. Using PKCS #12 format (.p12) Extract the cluster CA certificate and password from the generated <cluster-name> -cluster-ca-cert Secret. oc get secret <cluster-name> -cluster-ca-cert -o jsonpath='{.data.ca\.p12}' | base64 -d > ca.p12 oc get secret <cluster-name> -cluster-ca-cert -o jsonpath='{.data.ca\.password}' | base64 -d > ca.password Configure the Kafka client with the following properties: A security protocol option: security.protocol: SSL when using TLS for encryption (with or without TLS authentication). security.protocol: SASL_SSL when using SCRAM-SHA authentication over TLS. ssl.truststore.location with the truststore location where the certificates were imported. ssl.truststore.password with the password for accessing the truststore. This property can be omitted if it is not needed by the truststore. ssl.truststore.type=PKCS12 to identify the truststore type. 
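Before distributing the extracted files to client applications, you can confirm that the PKCS #12 archive opens with the extracted password. This is a minimal sketch, assuming the ca.p12 and ca.password files created in the previous step and the keytool utility from a JDK on the client machine:
keytool -list -storetype PKCS12 -keystore ca.p12 -storepass "$(cat ca.password)"
If the archive and password match, the command lists the CA entry; the same two files are then referenced by the ssl.truststore.location and ssl.truststore.password client properties described above.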
Using PEM format (.crt) Extract the cluster CA certificate from the generated <cluster-name> -cluster-ca-cert Secret. oc get secret <cluster-name> -cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt Use the certificate with clients that use certificates in X.509 format. 11.7. Kafka listener certificates You can provide your own server certificates and private keys for the following types of listeners: Internal TLS listeners for communication within the OpenShift cluster External listeners ( route , loadbalancer , ingress , and nodeport types), which have TLS encryption enabled, for communication between Kafka clients and Kafka brokers These user-provided certificates are called Kafka listener certificates . Providing Kafka listener certificates for external listeners allows you to leverage existing security infrastructure, such as your organization's private CA or a public CA. Kafka clients will connect to Kafka brokers using Kafka listener certificates rather than certificates signed by the cluster CA or clients CA. You must manually renew Kafka listener certificates when needed. 11.7.1. Providing your own Kafka listener certificates This procedure shows how to configure a listener to use your own private key and server certificate, called a Kafka listener certificate . Your client applications should use the CA public key as a trusted certificate in order to verify the identity of the Kafka broker. Prerequisites An OpenShift cluster. The Cluster Operator is running. For each listener, a compatible server certificate signed by an external CA. Provide an X.509 certificate in PEM format. Specify the correct Subject Alternative Names (SANs) for each listener. For more information, see Section 11.7.2, "Alternative subjects in server certificates for Kafka listeners" . You can provide a certificate that includes the whole CA chain in the certificate file. Procedure Create a Secret containing your private key and server certificate: oc create secret generic my-secret --from-file= my-listener-key.key --from-file= my-listener-certificate.crt Edit the Kafka resource for your cluster. Configure the listener to use your Secret , certificate file, and private key file in the configuration.brokerCertChainAndKey property. Example configuration for a loadbalancer external listener with TLS encryption enabled # ... listeners: - name: plain port: 9092 type: internal tls: false - name: external port: 9094 type: loadbalancer tls: true authentication: type: tls configuration: brokerCertChainAndKey: secretName: my-secret certificate: my-listener-certificate.crt key: my-listener-key.key # ... Example configuration for a TLS listener # ... listeners: - name: plain port: 9092 type: internal tls: false - name: tls port: 9093 type: internal tls: true authentication: type: tls configuration: brokerCertChainAndKey: secretName: my-secret certificate: my-listener-certificate.crt key: my-listener-key.key # ... Apply the new configuration to create or update the resource: oc apply -f kafka.yaml The Cluster Operator starts a rolling update of the Kafka cluster, which updates the configuration of the listeners. Note A rolling update is also started if you update a Kafka listener certificate in a Secret that is already used by a TLS or external listener. Additional resources Alternative subjects in server certificates for Kafka listeners GenericKafkaListener schema reference Kafka listener certificates 11.7.2. 
Alternative subjects in server certificates for Kafka listeners In order to use TLS hostname verification with your own Kafka listener certificates , you must use the correct Subject Alternative Names (SANs) for each listener. The certificate SANs must specify hostnames for: All of the Kafka brokers in your cluster The Kafka cluster bootstrap service You can use wildcard certificates if they are supported by your CA. 11.7.2.1. TLS listener SAN examples Use the following examples to help you specify hostnames of the SANs in your certificates for TLS listeners. Wildcards example //Kafka brokers *. <cluster-name> -kafka-brokers *. <cluster-name> -kafka-brokers. <namespace> .svc // Bootstrap service <cluster-name> -kafka-bootstrap <cluster-name> -kafka-bootstrap. <namespace> .svc Non-wildcards example // Kafka brokers <cluster-name> -kafka-0. <cluster-name> -kafka-brokers <cluster-name> -kafka-0. <cluster-name> -kafka-brokers. <namespace> .svc <cluster-name> -kafka-1. <cluster-name> -kafka-brokers <cluster-name> -kafka-1. <cluster-name> -kafka-brokers. <namespace> .svc # ... // Bootstrap service <cluster-name> -kafka-bootstrap <cluster-name> -kafka-bootstrap. <namespace> .svc 11.7.2.2. External listener SAN examples For external listeners which have TLS encryption enabled, the hostnames you need to specify in certificates depends on the external listener type . Table 11.10. SANs for each type of external listener External listener type In the SANs, specify... Route Addresses of all Kafka broker Routes and the address of the bootstrap Route . You can use a matching wildcard name. loadbalancer Addresses of all Kafka broker loadbalancers and the bootstrap loadbalancer address. You can use a matching wildcard name. NodePort Addresses of all OpenShift worker nodes that the Kafka broker pods might be scheduled to. You can use a matching wildcard name. Additional resources Section 11.7.1, "Providing your own Kafka listener certificates" | [
"delete secret CA-CERTIFICATE-SECRET",
"create secret generic CA-CERTIFICATE-SECRET --from-file=ca.crt= CA-CERTIFICATE-FILENAME",
"delete secret CA-KEY-SECRET",
"create secret generic CA-KEY-SECRET --from-file=ca.key= CA-KEY-SECRET-FILENAME",
"label secret CA-CERTIFICATE-SECRET strimzi.io/kind=Kafka strimzi.io/cluster= CLUSTER-NAME label secret CA-KEY-SECRET strimzi.io/kind=Kafka strimzi.io/cluster= CLUSTER-NAME",
"kind: Kafka version: kafka.strimzi.io/v1beta2 spec: # clusterCa: generateCertificateAuthority: false",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # template: clusterCaCert: metadata: labels: label1: value1 label2: value2 annotations: annotation1: value1 annotation2: value2 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: clusterCa: generateSecretOwnerReference: false clientsCa: generateSecretOwnerReference: false",
"Not Before Not After | | |<--------------- validityDays --------------->| <--- renewalDays --->|",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: clusterCa: renewalDays: 30 validityDays: 365 generateCertificateAuthority: true clientsCa: renewalDays: 30 validityDays: 365 generateCertificateAuthority: true",
"get secret CA-CERTIFICATE-SECRET -o 'jsonpath={.data. CA-CERTIFICATE }' | base64 -d | openssl x509 -subject -issuer -startdate -enddate -noout",
"subject=O = io.strimzi, CN = cluster-ca v0 issuer=O = io.strimzi, CN = cluster-ca v0 notBefore=Jun 30 09:43:54 2020 GMT notAfter=Jun 30 09:43:54 2021 GMT",
"openssl req -x509 -new -days NUMBER-OF-DAYS-VALID --nodes -out ca.crt -keyout ca.key",
"describe secret CA-CERTIFICATE-SECRET",
"mkdir new-ca-cert-secret cd new-ca-cert-secret",
"get secret CA-CERTIFICATE-SECRET -o 'jsonpath={.data. CA-CERTIFICATE }' | base64 -d > CA-CERTIFICATE",
"mv ca.crt ca-USD(date -u -dUSD(openssl x509 -enddate -noout -in ca.crt | sed 's/.*=//') +'%Y-%m-%dT%H-%M-%SZ').crt",
"cp PATH-TO-NEW-CERTIFICATE ca.crt",
"delete secret CA-CERTIFICATE-SECRET",
"create secret generic CA-CERTIFICATE-SECRET --from-file=.",
"cd .. rm -r new-ca-cert-secret",
"delete secret CA-KEY-SECRET",
"create secret generic CA-KEY-SECRET --from-file=ca.key= CA-KEY-SECRET-FILENAME",
"label secret CA-CERTIFICATE-SECRET strimzi.io/kind=Kafka strimzi.io/cluster= KAFKA-CLUSTER-NAME label secret CA-KEY-SECRET strimzi.io/kind=Kafka strimzi.io/cluster= KAFKA-CLUSTER-NAME",
"kind: Pod apiVersion: v1 metadata: name: client-pod spec: containers: - name: client-name image: client-name volumeMounts: - name: secret-volume mountPath: /data/p12 env: - name: SECRET_PASSWORD valueFrom: secretKeyRef: name: my-secret key: my-password volumes: - name: secret-volume secret: secretName: my-cluster-cluster-ca-cert",
"kind: Pod apiVersion: v1 metadata: name: client-pod spec: containers: - name: client-name image: client-name volumeMounts: - name: secret-volume mountPath: /data/crt volumes: - name: secret-volume secret: secretName: my-cluster-cluster-ca-cert",
"get secret <cluster-name> -cluster-ca-cert -o jsonpath='{.data.ca\\.p12}' | base64 -d > ca.p12",
"get secret <cluster-name> -cluster-ca-cert -o jsonpath='{.data.ca\\.password}' | base64 -d > ca.password",
"get secret <cluster-name> -cluster-ca-cert -o jsonpath='{.data.ca\\.crt}' | base64 -d > ca.crt",
"create secret generic my-secret --from-file= my-listener-key.key --from-file= my-listener-certificate.crt",
"listeners: - name: plain port: 9092 type: internal tls: false - name: external port: 9094 type: loadbalancer tls: true authentication: type: tls configuration: brokerCertChainAndKey: secretName: my-secret certificate: my-listener-certificate.crt key: my-listener-key.key",
"listeners: - name: plain port: 9092 type: internal tls: false - name: tls port: 9093 type: internal tls: true authentication: type: tls configuration: brokerCertChainAndKey: secretName: my-secret certificate: my-listener-certificate.crt key: my-listener-key.key",
"apply -f kafka.yaml",
"//Kafka brokers *. <cluster-name> -kafka-brokers *. <cluster-name> -kafka-brokers. <namespace> .svc // Bootstrap service <cluster-name> -kafka-bootstrap <cluster-name> -kafka-bootstrap. <namespace> .svc",
"// Kafka brokers <cluster-name> -kafka-0. <cluster-name> -kafka-brokers <cluster-name> -kafka-0. <cluster-name> -kafka-brokers. <namespace> .svc <cluster-name> -kafka-1. <cluster-name> -kafka-brokers <cluster-name> -kafka-1. <cluster-name> -kafka-brokers. <namespace> .svc // Bootstrap service <cluster-name> -kafka-bootstrap <cluster-name> -kafka-bootstrap. <namespace> .svc"
]
| https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_amq_streams_on_openshift/security-str |
Installation Guide | Installation Guide Red Hat Enterprise Linux 7 Installing Red Hat Enterprise Linux 7 on all architectures Jana Heves Red Hat Customer Content Services [email protected] Vladimir Slavik Red Hat Customer Content Services [email protected] Abstract This manual explains how to boot the Red Hat Enterprise Linux 7 installation program ( Anaconda ) and how to install Red Hat Enterprise Linux 7 on AMD64 and Intel 64 systems, 64-bit ARM systems, 64-bit IBM Power Systems servers, and IBM Z servers. It also covers advanced installation methods such as Kickstart installations, PXE installations, and installations over VNC. Finally, it describes common post-installation tasks and explains how to troubleshoot installation problems. Information on installing Red Hat Enterprise Linux Atomic Host can be found in the Red Hat Enterprise Linux Atomic Host Installation and Configuration Guide . | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/installation_guide/index |
Chapter 10. Compute (nova) Parameters | Chapter 10. Compute (nova) Parameters You can modify the nova service with compute parameters. Parameter Description ApacheCertificateKeySize Override the private key size used when creating the certificate for this service. ApacheTimeout The timeout in seconds for Apache, which defines the duration Apache waits for I/O operations. The default value is 90 . AuthCloudName Entry in clouds.yaml to use for authentication. CertificateKeySize Specifies the private key size used when creating the certificate. The default value is 2048 . ContainerNovaLibvirtPidsLimit Tune nova_libvirt container PID limit (set to 0 for unlimited) (defaults to 65536). The default value is 65536 . ContainerNovaLibvirtUlimit Ulimit for OpenStack Compute (nova) Libvirt Container. The default value is ['nofile=131072', 'nproc=126960'] . CustomProviderInventories Array of hashes describing the custom providers for the compute role. Format: name/uuid - Resource providers to target can be identified by either UUID or name. In addition, the value $COMPUTE_NODE can be used in the UUID field to identify all nodes managed by the service. Exactly one of uuid or name is mandatory. If neither uuid nor name is provided, the special uuid $COMPUTE_NODE gets set in the template. inventories - (Optional) Hash of custom provider inventories. total is a mandatory property. Any other optional properties not populated will be given a default value by placement. If overriding a pre-existing provider, values will not be preserved from the existing inventory. traits - (Optional) Array of additional traits. Example: ComputeParameters: CustomProviderInventories: - uuid: $COMPUTE_NODE inventories: CUSTOM_EXAMPLE_RESOURCE_CLASS: total: 100 reserved: 0 min_unit: 1 max_unit: 10 step_size: 1 allocation_ratio: 1.0 CUSTOM_ANOTHER_EXAMPLE_RESOURCE_CLASS: total: 100 traits: - CUSTOM_P_STATE_ENABLED - CUSTOM_C_STATE_ENABLED. DockerNovaComputeUlimit Ulimit for OpenStack Compute (nova) Compute Container. The default value is ['nofile=131072', 'memlock=67108864'] . DockerNovaMigrationSshdPort Port that dockerized nova migration target sshd service binds to. The default value is 2022 . EnableCache Enable caching with memcached. The default value is true . EnableConfigPurge Remove configuration that is not generated by the director. Used to avoid configuration remnants after upgrades. The default value is false . EnableInstanceHA Whether to enable an Instance HA configuration or not. This setup requires the Compute role to have the PacemakerRemote service added to it. The default value is false . EnableSQLAlchemyCollectd Set to true to enable the SQLAlchemy-collectd server plugin. The default value is false . EnforceSecureRbac Setting this option to True will configure each OpenStack service to enforce Secure RBAC by setting [oslo_policy] enforce_new_defaults and [oslo_policy] enforce_scope to True. This introduces a consistent set of RBAC personas across OpenStack services that include support for system and project scope, as well as keystone's default roles, admin, member, and reader. Do not enable this functionality until all services in your deployment actually support secure RBAC. The default value is false . GlanceBackendID The default backend's identifier. The default value is default_backend . GlanceMultistoreConfig Dictionary of settings when configuring additional glance backends. The hash key is the backend ID, and the value is a dictionary of parameter values unique to that backend.
Multiple rbd and cinder backends are allowed, but file and swift backends are limited to one each. Example: # Default glance store is rbd. GlanceBackend: rbd GlanceStoreDescription: Default rbd store # GlanceMultistoreConfig specifies a second rbd backend, plus a cinder # backend. GlanceMultistoreConfig: rbd2_store: GlanceBackend: rbd GlanceStoreDescription: Second rbd store CephClusterName: ceph2 # Override CephClientUserName if this cluster uses a different # client name. CephClientUserName: client2 cinder1_store: GlanceBackend: cinder GlanceCinderVolumeType: volume-type-1 GlanceStoreDescription: First cinder store cinder2_store: GlanceBackend: cinder GlanceCinderVolumeType: volume-type-2 GlanceStoreDescription: Seconde cinder store . InstanceNameTemplate Template string to be used to generate instance names. The default value is instance-%08x . InternalTLSVncProxyCAFile Specifies the CA cert to use for VNC TLS. The default value is /etc/ipa/ca.crt . KernelArgs Kernel Args to apply to the host. LibvirtCACert This specifies the CA certificate to use for TLS in libvirt. This file will be symlinked to the default CA path in libvirt, which is /etc/pki/CA/cacert.pem. Note that due to limitations GNU TLS, which is the TLS backend for libvirt, the file must be less than 65K (so we can't use the system's CA bundle). This parameter should be used if the default (which comes from the InternalTLSCAFile parameter) is not desired. The current default reflects TripleO's default CA, which is FreeIPA. It will only be used if internal TLS is enabled. LibvirtCertificateKeySize Override the private key size used when creating the certificate for this service. LibvirtEnabledPerfEvents This is a performance event list which could be used as monitor. For example: cmt,mbml,mbmt . Make sure you are using Red Hat Enterprise Linux 7.4 as the base and libvirt version is 1.3.3 or above. Also ensure you have enabled the notifications and are using hardware with a CPU that supports the cmt flag. LibvirtLogFilters Defines a filter in libvirt daemon to select a different logging level for a given category log outputs, as specified in https://libvirt.org/logging.html . The default value is 1:libvirt 1:qemu 1:conf 1:security 3:event 3:json 3:file 3:object 1:util . LibvirtTLSPassword The password for the libvirt service when TLS is enabled. LibvirtTLSPriority Override the compile time default TLS priority string. The default value is NORMAL:-VERS-SSL3.0:-VERS-TLS-ALL:+VERS-TLS1.2 . LibvirtVirtlogdLogFilters Defines a filter in virtlogd to select a different logging level for a given category log outputs, as specified in https://libvirt.org/logging.html . The default value is 1:logging 4:object 4:json 4:event 1:util . LibvirtVirtnodedevdLogFilters Defines a filter in virtnodedevd to select a different logging level for a given category log outputs, as specified in https://libvirt.org/logging.html . The default value is 1:qemu 1:libvirt 4:object 4:json 4:event 1:util . LibvirtVirtproxydLogFilters Defines a filter in virtproxyd to select a different logging level for a given category log outputs, as specified in https://libvirt.org/logging.html . The default value is 1:qemu 1:libvirt 4:object 4:json 4:event 1:util . LibvirtVirtqemudLogFilters Defines a filter in virtqemud to select a different logging level for a given category log outputs, as specified in https://libvirt.org/logging.html . The default value is 1:qemu 1:libvirt 4:object 4:json 4:event 1:util . 
LibvirtVirtsecretdLogFilters Defines a filter in virtsecretd to select a different logging level for a given category log outputs, as specified in https://libvirt.org/logging.html . The default value is 1:qemu 1:libvirt 4:object 4:json 4:event 1:util . LibvirtVirtstoragedLogFilters Defines a filter in virtstoraged to select a different logging level for a given category log outputs, as specified in https://libvirt.org/logging.html . The default value is 1:qemu 1:libvirt 4:object 4:json 4:event 1:util . LibvirtVncCACert This specifies the CA certificate to use for VNC TLS. This file will be symlinked to the default CA path, which is /etc/pki/CA/certs/vnc.crt. This parameter should be used if the default (which comes from the InternalTLSVncProxyCAFile parameter) is not desired. The current default reflects TripleO's default CA, which is FreeIPA. It will only be used if internal TLS is enabled. LibvirtVNCClientCertificateKeySize Override the private key size used when creating the certificate for this service. MemcachedTLS Set to True to enable TLS on Memcached service. Because not all services support Memcached TLS, during the migration period, Memcached will listen on 2 ports - on the port set with MemcachedPort parameter (above) and on 11211, without TLS. The default value is false . MemcacheUseAdvancedPool Use the advanced (eventlet safe) memcached client pool. The default value is true . MigrationSshKey SSH key for migration. Expects a dictionary with keys public_key and private_key . Values should be identical to SSH public/private key files. The default value is {'public_key': '', 'private_key': ''} . MigrationSshPort Target port for migration over ssh. The default value is 2022 . MultipathdEnable Whether to enable the multipath daemon. The default value is false . MysqlIPv6 Enable IPv6 in MySQL. The default value is false . NeutronMetadataProxySharedSecret Shared secret to prevent spoofing. NeutronPhysnetNUMANodesMapping Map of phynet name as key and NUMA nodes as value. For example: NeutronPhysnetNUMANodesMapping: {'foo': [0, 1], 'bar': [1]} where foo and bar are physnet names and corresponding values are list of associated numa_nodes . NeutronTunnelNUMANodes Used to configure NUMA affinity for all tunneled networks. NotificationDriver Driver or drivers to handle sending notifications. The default value is noop . NovaAdditionalCell Whether this is an cell additional to the default cell. The default value is false . NovaAllowResizeToSameHost Allow destination machine to match source for resize. The default value is false . NovaApiMaxLimit Max number of objects returned per API query. The default value is 1000 . NovaAutoDisabling Max number of consecutive build failures before the nova-compute will disable itself. The default value is 10 . NovaComputeCpuDedicatedSet A comma-separated list or range of physical host CPU numbers to which processes for pinned instance CPUs can be scheduled. For example, NovaComputeCpuDedicatedSet: [4-12,^8,15] reserves cores from 4-12 and 15, excluding 8. If setting this option, do not set the deprecated NovaVcpuPinSet parameter. 
NovaComputeCpuSharedSet If the deprecated NovaVcpuPinSet option is not set, then NovaComputeCpuSharedSet is set to a comma-separated list or range of physical host CPU numbers used to provide vCPU inventory, determine the host CPUs that unpinned instances can be scheduled to, and determine the host CPUs that instance emulator threads should be offloaded to for instances configured with the share emulator thread policy, hw:emulator_threads_policy=share . If the deprecated NovaVcpuPinSet is set, then NovaComputeCpuSharedSet is set to a list or range of host CPU cores used to determine the host CPUs that instance emulator threads should be offloaded to for instances configured with the share emulator thread policy (hw:emulator_threads_policy=share). In this case, NovaVcpuPinSet is used to provide vCPU inventory and to determine the host CPUs that both pinned and unpinned instances can be scheduled to. For example, NovaComputeCpuSharedSet: [4-12,^8,15] reserves cores from 4-12 and 15, excluding 8. NovaComputeDisableIrqBalance Whether to disable irqbalance on compute nodes or not. Especially for the Realtime Compute role, you might want to keep it disabled. The default value is false . NovaComputeEnableKsm Whether to enable KSM on compute nodes or not. Especially in NFV use cases, you might want to keep it disabled. The default value is false . NovaComputeForceRawImages Set to "True" to convert non-raw cached base images to raw format. Set to "False" if you have a system with slow I/O or low available space, trading the higher CPU requirements of compression for that of minimized input bandwidth. Notes: - The Compute service removes any compression from the base image during conversion, to avoid CPU bottlenecks. Converted images cannot have backing files, which might be a security issue. - The raw image format uses more space than other image formats, for example, qcow2. Raw base images are always used with libvirt_images_type=lvm. The default value is true . NovaComputeImageCacheManagerInterval Specifies the number of seconds to wait between runs of the image cache manager, which impacts base image caching on Compute nodes. This period is used in the auto removal of unused cached images configured with remove_unused_base_images and remove_unused_original_minimum_age_seconds. Set to "0" to run at the default interval of 60 seconds (not recommended). The default value is 2400 . NovaComputeImageCachePrecacheConcurrency Maximum number of Compute nodes to trigger image precaching in parallel. When an image precache request is made, Compute nodes are contacted to initiate the download. Higher numbers will cause more computes to work in parallel and may result in reduced time to complete the operation, but may also DDoS the image service. Lower numbers will result in more sequential operation, lower image service load, but likely longer runtime to completion. The default value is 1 . NovaComputeImageCacheRemoveUnusedBaseImages Set to "True" to automatically remove unused base images from the cache at intervals configured by using image_cache_manager_interval. Images are defined as unused if they have not been accessed during the time specified by using remove_unused_original_minimum_age_seconds. The default value is true . NovaComputeImageCacheRemoveUnusedResizedMinimumAge Specifies the minimum age that an unused resized base image must be to be removed from the cache, in seconds. Unused unresized base images younger than this will not be removed. The default value is 3600 .
NovaComputeLibvirtPreAllocateImages Specifies the preallocation mode for libvirt instance disks. Set to one of the following valid values: - none - No storage is provisioned at instance start. - space - Storage is fully allocated at instance start using fallocate, which can help with both space guarantees and I/O performance. Even when not using CoW instance disks, the copy each instance gets is sparse and so the instance might fail unexpectedly at run time with ENOSPC. By running fallocate(1) on the instance disk images, the Compute service immediately and efficiently allocates the space for them in the file system, if supported. Run time performance should also be improved because the file system does not have to dynamically allocate blocks at run time, which reduces CPU overhead and file fragmentation. The default value is none . NovaComputeLibvirtType Libvirt domain type. Defaults to kvm . The default value is kvm . NovaComputeOptEnvVars List of optional environment variables. NovaComputeOptVolumes List of optional volumes. NovaComputeStartupDelay Delays the startup of the nova-compute service after the compute node is booted. This is to give Ceph a chance to get back to a healthy state before booting instances after an overcloud reboot. The default value is 0 . NovaComputeUseCowImages Set to "True" to use CoW (Copy on Write) images in qcow2 format for libvirt instance disks. With CoW, depending on the backing store and host caching, there might be better concurrency achieved by having each instance operate on its own copy. Set to "False" to use the raw format. Raw format uses more space for common parts of the disk image. The default value is true . NovaCPUAllocationRatio Virtual CPU to physical CPU allocation ratio. The default value is 0.0 . NovaCronArchiveDeleteAllCells Archive deleted instances from all cells. The default value is true . NovaCronArchiveDeleteRowsAge Cron to archive deleted instances - Age. This will define the retention policy when archiving the deleted instances entries in days. 0 means archive data older than today in shadow tables. The default value is 90 . NovaCronArchiveDeleteRowsDestination Cron to move deleted instances to another table - Log destination. The default value is /var/log/nova/nova-rowsflush.log . NovaCronArchiveDeleteRowsHour Cron to move deleted instances to another table - Hour. The default value is 0 . NovaCronArchiveDeleteRowsMaxDelay Cron to move deleted instances to another table - Max Delay. The default value is 3600 . NovaCronArchiveDeleteRowsMaxRows Cron to move deleted instances to another table - Max Rows. The default value is 1000 . NovaCronArchiveDeleteRowsMinute Cron to move deleted instances to another table - Minute. The default value is 1 . NovaCronArchiveDeleteRowsMonth Cron to move deleted instances to another table - Month. The default value is * . NovaCronArchiveDeleteRowsMonthday Cron to move deleted instances to another table - Month Day. The default value is * . NovaCronArchiveDeleteRowsPurge Purge shadow tables immediately after scheduled archiving. The default value is false . NovaCronArchiveDeleteRowsUntilComplete Cron to move deleted instances to another table - Until complete. The default value is true . NovaCronArchiveDeleteRowsUser Cron to move deleted instances to another table - User. The default value is nova . NovaCronArchiveDeleteRowsWeekday Cron to move deleted instances to another table - Week Day. The default value is * .
NovaCronPurgeShadowTablesAge Cron to purge shadow tables - Age. This defines the retention policy when purging the shadow tables in days. 0 means purge data older than today in shadow tables. The default value is 14 . NovaCronPurgeShadowTablesAllCells Cron to purge shadow tables - All cells. The default value is true . NovaCronPurgeShadowTablesDestination Cron to purge shadow tables - Log destination. The default value is /var/log/nova/nova-rowspurge.log . NovaCronPurgeShadowTablesHour Cron to purge shadow tables - Hour. The default value is 5 . NovaCronPurgeShadowTablesMaxDelay Cron to purge shadow tables - Max Delay. The default value is 3600 . NovaCronPurgeShadowTablesMinute Cron to purge shadow tables - Minute. The default value is 0 . NovaCronPurgeShadowTablesMonth Cron to purge shadow tables - Month. The default value is * . NovaCronPurgeShadowTablesMonthday Cron to purge shadow tables - Month Day. The default value is * . NovaCronPurgeShadowTablesUser Cron to purge shadow tables - User. The default value is nova . NovaCronPurgeShadowTablesVerbose Cron to purge shadow tables - Verbose. The default value is false . NovaCronPurgeShadowTablesWeekday Cron to purge shadow tables - Week Day. The default value is * . NovaCrossAZAttach Whether instances can attach cinder volumes from a different availability zone. The default value is true . NovaDefaultFloatingPool Default pool for floating IP addresses. The default value is public . NovaDisableComputeServiceCheckForFfu Facilitate a Fast-Forward upgrade where new control services are being started before compute nodes have been able to update their service record. The default value is false . NovaDisableImageDownloadToRbd Refuse to boot an instance if it would require downloading from glance and uploading to ceph instead of a COW clone. The default value is false . NovaDiskAllocationRatio Virtual disk to physical disk allocation ratio. The default value is 0.0 . NovaEnableDBArchive Whether to create a cron job for archiving soft deleted rows in the OpenStack Compute (nova) database. The default value is true . NovaEnableDBPurge Whether to create a cron job for purging soft deleted rows in the OpenStack Compute (nova) database. The default value is true . NovaEnableVirtlogdContainerWrapper Generate a virtlogd wrapper script so that virtlogd launches in a separate container and won't get restarted, for example, on minor updates. The default value is true . NovaEnableVTPM Whether to enable support for emulated Trusted Platform Module (TPM) devices. The default value is false . NovaGlanceEnableRbdDownload Enable download of OpenStack Image Storage (glance) images directly via RBD. The default value is false . NovaGlanceRbdCopyPollInterval The interval in seconds with which to poll OpenStack Image Storage (glance) after asking for it to copy an image to the local rbd store. The default value is 15 . NovaGlanceRbdCopyTimeout The overall maximum time we will wait for OpenStack Image Storage (glance) to complete an image copy to our local rbd store. The default value is 600 . NovaGlanceRbdDownloadMultistoreID The hash key, which is the backend ID, of the GlanceMultistoreConfig to be used for the role where NovaGlanceEnableRbdDownload is enabled and defaults should be overridden. If CephClientUserName or GlanceRbdPoolName are not set in the GlanceMultistoreConfig, the global values of those parameters will be used. NovaHWMachineType Specifies the default machine type for each host architecture.
Red Hat recommends setting the default to the lowest RHEL minor release in your environment, for backwards compatibility during live migration. The default value is x86_64=pc-q35-rhel9.0.0 . NovaImageCacheTTL Time in seconds that nova compute should continue caching an image once it is no longer used by any instances on the host. The default value is 86400 . NovaImageTypeExcludeList List of image formats that should not be advertised as supported by the compute service. NovaLibvirtCPUMode The libvirt CPU mode to configure. Defaults to host-model if virt_type is set to kvm, otherwise defaults to none . The default value is host-model . NovaLibvirtCPUModelExtraFlags This allows specifying granular CPU feature flags when specifying CPU models. Only has effect if cpu_mode is not set to none . NovaLibvirtCPUModels The named libvirt CPU model (see names listed in /usr/share/libvirt/cpu_map.xml). Only has effect if cpu_mode="custom" and virt_type="kvm|qemu". NovaLibvirtFileBackedMemory Available capacity in MiB for file-backed memory. When configured, the NovaRAMAllocationRatio and NovaReservedHostMemory parameters must be set to 0. The default value is 0 . NovaLibvirtMaxQueues Add parameter to configure the libvirt max_queues. The maximum number of virtio queue pairs that can be enabled when creating a multiqueue guest. The number of virtio queues allocated will be the lesser of the CPUs requested by the guest and the max value defined. Default 0 corresponds to not set. The default value is 0 . NovaLibvirtMemStatsPeriodSeconds A number of seconds to memory usage statistics period, zero or negative value mean to disable memory usage statistics. The default value is 10 . NovaLibvirtNumPciePorts Set num_pcie_ports to specify the number of PCIe ports an instance will get. Libvirt allows a custom number of PCIe ports (pcie-root-port controllers) a target instance will get. Some will be used by default, rest will be available for hotplug use. The default value is 16 . NovaLibvirtOptVolumes List of optional volumes to be mounted. NovaLibvirtRxQueueSize Virtio-net RX queue size. Valid values are 256, 512, 1024. The default value is 512 . NovaLibvirtTxQueueSize Virtio-net TX queue size. Valid values are 256, 512, 1024. The default value is 512 . NovaLibvirtVolumeUseMultipath Whether to enable or not the multipath connection of the volumes. The default value is false . NovaLiveMigrationPermitAutoConverge Defaults to "True" to slow down the instance CPU until the memory copy process is faster than the instance's memory writes when the migration performance is slow and might not complete. Auto converge will only be used if this flag is set to True and post copy is not permitted or post copy is unavailable due to the version of libvirt and QEMU. The default value is true . NovaLiveMigrationPermitPostCopy If "True" activates the instance on the destination node before migration is complete, and to set an upper bound on the memory that needs to be transferred. Post copy gets enabled per default if the compute roles is not a realtime role or disabled by this parameter. The default value is true . NovaLiveMigrationWaitForVIFPlug Whether to wait for network-vif-plugged events before starting guest transfer. The default value is true . NovaLocalMetadataPerCell Indicates that the nova-metadata API service has been deployed per-cell, so that we can have better performance and data isolation in a multi-cell deployment. 
Users should consider the use of this configuration depending on how OpenStack Networking (neutron) is set up. If networks span cells, you might need to run the nova-metadata API service globally. If your networks are segmented along cell boundaries, then you can run the nova-metadata API service per cell. When running the nova-metadata API service per cell, you should also configure each OpenStack Networking (neutron) metadata-agent to point to the corresponding nova-metadata API service. The default value is false . NovaMaxDiskDevicesToAttach Maximum number of disk devices allowed to attach to a single server. Note that the number of disks supported by a server depends on the bus used. For example, the ide disk bus is limited to 4 attached devices. The configured maximum is enforced during server create, rebuild, evacuate, unshelve, live migrate, and attach volume. Operators changing this parameter on a compute service that is hosting servers should be aware that it could cause rebuilds to fail, if the maximum is decreased to lower than the number of devices already attached to servers. Operators should also be aware that during a cold migration, the configured maximum is only enforced in-place and the destination is not checked before the move. -1 means unlimited. The default value is -1 . NovaMkisofsCmd Name or path of the tool used for ISO image creation. The default value is mkisofs . NovaNfsEnabled Whether to enable or not the NFS backend for OpenStack Compute (nova). The default value is false . NovaNfsOptions NFS mount options for nova storage (when NovaNfsEnabled is true). The default value is context=system_u:object_r:nfs_t:s0 . NovaNfsShare NFS share to mount for nova storage (when NovaNfsEnabled is true). NovaNfsVersion NFS version used for nova storage (when NovaNfsEnabled is true). Since NFSv3 does not support full locking, an NFSv4 version needs to be used. The default value is 4.2 . NovaOVSBridge Name of the integration bridge used by Open vSwitch. The default value is br-int . NovaOVSDBConnection OVS DB connection string to be used by OpenStack Compute (nova). NovaPassword The password for the OpenStack Compute (nova) service and database account. NovaPCIPassthrough YAML list of PCI passthrough whitelist parameters. NovaPMEMMappings PMEM namespace mappings as a backend for the vPMEM feature. This parameter sets Nova's pmem_namespaces configuration options. PMEM namespaces need to be created manually or in conjunction with the NovaPMEMNamespaces parameter. Requires format: $LABEL:$NSNAME[|$NSNAME][,$LABEL:$NSNAME[|$NSNAME]]. NovaPMEMNamespaces Creates PMEM namespaces on the host server using the ndctl tool through Ansible. Requires format: $SIZE:$NSNAME[,$SIZE:$NSNAME... ]. $SIZE supports the suffixes "k" or "K" for KiB, "m" or "M" for MiB, "g" or "G" for GiB and "t" or "T" for TiB. NOTE: This requires properly configured NVDIMM regions and enough space for the requested namespaces. NovaRAMAllocationRatio Virtual RAM to physical RAM allocation ratio. The default value is 1.0 . NovaReservedHostMemory Reserved RAM for host processes. The default value is 4096 . NovaReservedHugePages A list of valid key=value pairs which reflect NUMA node ID, page size (default unit is KiB) and number of pages to be reserved. Example - NovaReservedHugePages: ["node:0,size:2048,count:64","node:1,size:1GB,count:1"] will reserve on NUMA node 0 64 pages of 2MiB and on NUMA node 1 1 page of 1GiB. NovaResumeGuestsShutdownTimeout Number of seconds we're willing to wait for a guest to shut down.
If this is 0, then there is no time out (use with caution, as guests might not respond to a shutdown request). The default value is 300 seconds (5 minutes). NovaResumeGuestsStateOnHostBoot Whether to start running instances on compute host reboot. The default value is false . NovaSchedulerAvailableFilters List of available filters for OpenStack Compute (nova) to use to filter nodes. NovaSchedulerDefaultFilters (DEPRECATED) An array of filters used by OpenStack Compute (nova) to filter a node. These filters will be applied in the order they are listed, so place your most restrictive filters first to make the filtering process more efficient. NovaSchedulerDiscoverHostsInCellsInterval This value controls how often (in seconds) the scheduler should attempt to discover new hosts that have been added to cells. The default value of -1 disables the periodic task completely. It is recommended to set this parameter for deployments using OpenStack Bare Metal (ironic). The default value is -1 . NovaSchedulerEnabledFilters An array of filters that OpenStack Compute (nova) uses to filter a node. OpenStack Compute applies these filters in the order they are listed. Place your most restrictive filters first to make the filtering process more efficient. NovaSchedulerEnableIsolatedAggregateFiltering This setting allows the scheduler to restrict hosts in aggregates based on matching required traits in the aggregate metadata and the instance flavor/image. If an aggregate is configured with a property with key trait:$TRAIT_NAME and value required, the instance flavor extra_specs and/or image metadata must also contain trait:$TRAIT_NAME=required to be eligible to be scheduled to hosts in that aggregate. The default value is false . NovaSchedulerHostSubsetSize Size of the subset of best hosts selected by the scheduler. The default value is 1 . NovaSchedulerLimitTenantsToPlacementAggregate This value allows you to have tenant isolation with placement. It ensures hosts in a tenant-isolated host aggregate and availability zones will only be available to a specific set of tenants. The default value is false . NovaSchedulerMaxAttempts Maximum number of attempts the scheduler will make when deploying the instance. You should keep it greater than or equal to the number of bare metal nodes you expect to deploy at once to work around potential race conditions when scheduling. The default value is 3 . NovaSchedulerPlacementAggregateRequiredForTenants This setting, when NovaSchedulerLimitTenantsToPlacementAggregate is true, controls whether or not a tenant with no aggregate affinity will be allowed to schedule to any available node. If aggregates are used to limit some tenants but not all, then this should be False. If all tenants should be confined via aggregate, then this should be True. The default value is false . NovaSchedulerQueryImageType This setting causes the scheduler to ask placement only for compute hosts that support the disk_format of the image used in the request. The default value is true . NovaSchedulerQueryPlacementForAvailabilityZone This setting allows the scheduler to look up a host aggregate with a metadata key of availability zone set to the value provided by the incoming request, and to limit the results requested from placement to that aggregate. The default value is false . NovaSchedulerQueryPlacementForRoutedNetworkAggregates This setting allows the scheduler to verify if the requested networks or port are related to an OpenStack Networking (neutron) routed network.
This requires that the related aggregates to be reported in placement, so only hosts within the asked aggregates would be accepted. The default value is false . NovaSchedulerShuffleBestSameWeighedHosts Enable spreading the instances between hosts with the same best weight. The default value is false . NovaSchedulerWorkers Number of workers for OpenStack Compute (nova) Scheduler services. The default value is 0 . NovaStatedirOwnershipSkip List of paths relative to nova_statedir to ignore when recursively setting the ownership and selinux context. The default value is ['triliovault-mounts'] . NovaSyncPowerStateInterval Interval to sync power states between the database and the hypervisor. Set to -1 to disable. Setting this to 0 will run at the default rate(60) defined in oslo.service. The default value is 600 . NovaVcpuPinSet (Deprecated) A list or range of physical CPU cores to reserve for virtual machine processes. For example, NovaVcpuPinSet: [4-12,^8] reserves cores from 4-12 excluding 8. This parameter has been deprecated. Use NovaComputeCpuDedicatedSet and NovaComputeCpuSharedSet instead. NovaVGPUTypesDeviceAddressesMapping Map of vgpu type(s) the instances can get as key and list of corresponding device addresses as value. For example, NovaVGPUTypesDeviceAddressesMapping: { nvidia-35 : [ 0000:84:00.0 , 0000:85:00.0 ], nvidia-36 : [ 0000:86:00.0 ]} where nvidia-35 and nvidia-36 are vgpu types and corresponding values are list of associated device addresses. NovaVNCCertificateKeySize Override the private key size used when creating the certificate for this service. NovaVNCProxySSLCiphers OpenSSL cipher preference string that specifies what ciphers to allow for TLS connections from clients. See the man page for the OpenSSL ciphers command for details of the cipher preference string format and allowed values. NovaVNCProxySSLMinimumVersion Minimum allowed SSL/TLS protocol version. Valid values are default , tlsv1_1 , tlsv1_2 , and tlsv1_3 . A value of default will use the underlying system OpenSSL defaults. The default value is default . NovaWorkers Number of workers for the Compute's Conductor service. Note that more workers creates a larger number of processes on systems, which results in excess memory consumption. 0 sets to the OpenStack internal default, which is equal to the number of CPU cores on the node. The default value is 0 . OvsDpdkSocketMemory Sets the amount of hugepage memory to assign per NUMA node. It is recommended to use the socket closest to the PCIe slot used for the desired DPDK NIC. The format should be in "<socket 0 mem>, <socket 1 mem>, <socket n mem>", where the value is specified in MB. For example: "1024,0". PlacementAPIInterface Endpoint interface to be used for the placement API. The default value is internal . PlacementPassword The password for the Placement service and database account. QemuCACert This specifies the CA certificate to use for qemu. This file will be symlinked to the default CA path, which is /etc/pki/qemu/ca-cert.pem. This parameter should be used if the default (which comes from the InternalTLSCAFile parameter) is not desired. The current default reflects TripleO's default CA, which is FreeIPA. It will only be used if internal TLS is enabled. QemuClientCertificateKeySize Override the private key size used when creating the certificate for this service. QemuDefaultTLSVerify Whether to enable or disable TLS client certificate verification. 
Enabling this option will reject any client that does not present a certificate signed by the CA in /etc/pki/qemu/ca-cert.pem. The default value is true . QemuMemoryBackingDir Directory used for the memoryBacking source if configured as file. NOTE: big files will be stored here. QemuServerCertificateKeySize Override the private key size used when creating the certificate for this service. RbdDiskCachemodes Disk cachemodes for the RBD backend. The default value is ['network=writeback'] . UpgradeLevelNovaCompute OpenStack Compute upgrade level. UseTLSTransportForNbd If set to true and if EnableInternalTLS is enabled, it will enable TLS transport for libvirt NBD and configure the relevant keys for libvirt. The default value is true . UseTLSTransportForVnc If set to true and if EnableInternalTLS is enabled, it will enable TLS transport for libvirt VNC and configure the relevant keys for libvirt. The default value is true . VerifyGlanceSignatures Whether to verify image signatures. The default value is False . VhostuserSocketGroup The vhost-user socket directory group name. When vhostuser mode is dpdkvhostuserclient (which is the default mode), the vhost socket is created by qemu. The default value is qemu . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/overcloud_parameters/ref_compute-nova-parameters_overcloud_parameters 
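The Compute (nova) parameters described in this reference are set in a heat environment file under parameter_defaults and passed to the openstack overcloud deploy command with the -e option. The following sketch is illustrative only: the parameter names come from this reference, but the file name, the filter selection, and the values are assumptions, not recommendations.

parameter_defaults:
  # Filters are applied in the order listed; place the most restrictive first.
  NovaSchedulerEnabledFilters:
    - AvailabilityZoneFilter
    - ComputeFilter
    - ComputeCapabilitiesFilter
    - ImagePropertiesFilter
  # Keep this greater than or equal to the number of bare metal nodes
  # you expect to deploy at once.
  NovaSchedulerMaxAttempts: 10
  # Periodic host discovery, recommended for Bare Metal (ironic) deployments.
  NovaSchedulerDiscoverHostsInCellsInterval: 120
  # Restart instances that were running before the compute host rebooted.
  NovaResumeGuestsStateOnHostBoot: true

You would then include the file in the deployment, for example with openstack overcloud deploy --templates -e nova-scheduler-overrides.yaml, where nova-scheduler-overrides.yaml is a hypothetical file name.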
Chapter 2. Requirements for scaling storage | Chapter 2. Requirements for scaling storage Before you proceed to scale the storage nodes, refer to the following sections to understand the node requirements for your specific Red Hat OpenShift Data Foundation instance: Platform requirements Resource requirements Storage device requirements Dynamic storage devices Local storage devices Capacity planning Important Always ensure that you have plenty of storage capacity. If storage ever fills completely, it is not possible to add capacity or delete or migrate content away from the storage to free up space. Completely full storage is very difficult to recover. Capacity alerts are issued when cluster storage capacity reaches 75% (near-full) and 85% (full) of total capacity. Always address capacity warnings promptly, and review your storage regularly to ensure that you do not run out of storage space. If you do run out of storage space completely, contact Red Hat Customer Support . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/scaling_storage/requirements-for-scaling-storage-nodes 
Chapter 1. What's new with OpenShift Dedicated | Chapter 1. What's new with OpenShift Dedicated With its foundation in Kubernetes, OpenShift Dedicated is a complete OpenShift Container Platform cluster provided as a cloud service, configured for high availability, and dedicated to a single customer. OpenShift Dedicated is professionally managed by Red Hat and hosted on Google Cloud Platform (GCP) or Amazon Web Services (AWS). Each OpenShift Dedicated cluster includes a fully managed control plane (Control and Infrastructure nodes), application nodes, installation and management by Red Hat Site Reliability Engineers (SRE), premium Red Hat Support, and cluster services such as logging, metrics, monitoring, notifications portal, and a cluster portal. OpenShift Dedicated clusters are available on the Hybrid Cloud Console . With the Red Hat OpenShift Cluster Manager application, you can deploy OpenShift Dedicated clusters to either on-premises or cloud environments. 1.1. New changes and updates 1.1.1. Q1 2025 New version of OpenShift Dedicated available. OpenShift Dedicated on Google Cloud Platform (GCP) and OpenShift Dedicated on Amazon Web Services (AWS) versions 4.18 are now available. For more information about upgrading to this latest version, see Red Hat OpenShift Dedicated cluster upgrades . Support for assigning newly created machine pools to specific availability zones within a Multi-AZ cluster. OpenShift Dedicated on Google Cloud Platform (GCP) users can now assign machine pools to specific availability zones using the OpenShift Cluster Manager CLI ( ocm ). For more information, see Deploying a machine pool in a single availability zone within a Multi-AZ cluster . Support for specifying OpenShift Dedicated versions when creating or updating a Workload Identity Federation (WIF) configuration. OpenShift Dedicated on Google Cloud Platform (GCP) users can now specify minor versions when creating or updating a WIF configuration. For more information, see Creating a Workload Identity Federation cluster using the OCM CLI . Cluster node limit update. OpenShift Dedicated clusters versions 4.14.14 and greater can now scale to 249 worker nodes. This is an increase from the limit of 180 nodes. For more information, see limits and scalability . Initiate live migration from OpenShift SDN to OVN-Kubernetes. As part of the OpenShift Dedicated move to OVN-Kubernetes as the only supported network plugin starting with OpenShift Dedicated version 4.17, users can now initiate live migration from the OpenShift SDN network plugin to the OVN-Kubernetes network plugin. If your cluster uses the OpenShift SDN network plugin, you cannot upgrade to future major versions of OpenShift Dedicated without migrating to OVN-Kubernetes. For more information about migrating to OVN-Kubernetes, see Migrating from OpenShift SDN network plugin to OVN-Kubernetes network plugin . Red Hat SRE log-based alerting endpoints have been updated. OpenShift Dedicated customers who are using a firewall to control egress traffic can now remove all references to *.osdsecuritylogs.splunkcloud.com:9997 from your firewall allowlist. OpenShift Dedicated clusters still require the http-inputs-osdsecuritylogs.splunkcloud.com:443 log-based alerting endpoint to be accessible from the cluster. 1.1.2. Q4 2024 Workload Identity Federation (WIF) authentication type is now available. OpenShift Dedicated on Google Cloud Platform (GCP) customers can now use WIF as an authentication type when creating a cluster. 
WIF is a GCP Identity and Access Management (IAM) feature that provides third parties a secure method to access resources on a customer's cloud account. WIF is Google Cloud's preferred method for credential authentication. For more information, see Creating a cluster on GCP with Workload Identity Federation authentication . Private Service Connect (PSC) networking feature is now available. You can now create a private OpenShift Dedicated cluster on Google Cloud Platform (GCP) using Google Cloud's security-enhanced networking feature Private Service Connect (PSC). PSC is a capability of Google Cloud networking that enables private communication between services across different GCP projects or organizations. Implementing PSC as part of your network connectivity allows you to deploy OpenShift Dedicated clusters in a private and secured environment within GCP without using any public-facing cloud resources. For more information, see Private Service Connect overview . Support for GCP A3 instances with NVIDIA H100 80GB GPUs. OpenShift Dedicated on Google Cloud Platform (GCP) now supports A3 instance types with NVIDIA H100 80GB GPUs. The GCP A3 instance type is available in all three zones of a GCP region, which is a prerequisite for multi-AZ deployment. For more information, see Google Cloud compute types . 1.1.3. Q3 2024 Support for GCP A2 instance types with A100 80GB GPUs. OpenShift Dedicated on Google Cloud Platform (GCP) now supports A2 instance types with A100 80GB GPUs. These instance types meet the specific requirements listed by IBM Watsonx.ai. For more information, see Google Cloud compute types . Expanded support for GCP standard instance types. OpenShift Dedicated on Google Cloud Platform (GCP) now supports standard instance types for control plane and infrastructure nodes. For more information, see Control plane and infrastructure node sizing and scaling . OpenShift Dedicated regions added. OpenShift Dedicated on Google Cloud Platform (GCP) is now available in the following additional regions: Melbourne ( australia-southeast2 ) Milan ( europe-west8 ) Turin ( europe-west12 ) Madrid ( europe-southwest1 ) Santiago ( southamerica-west1 ) Doha ( me-central1 ) Dammam ( me-central2 ) For more information about region availabilities, see Regions and availability zones . 1.1.4. Q2 2024 Cluster delete protection. OpenShift Dedicated on Google Cloud Platform (GCP) users can now enable the cluster delete protection option, which helps to prevent users from accidentally deleting a cluster. CSI Operator update. OpenShift Dedicated is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for Google Compute Platform (GCP) Filestore Storage. For more information, see Google Compute Platform Filestore CSI Driver Operator . Support for new GCP instances. OpenShift Dedicated now supports more worker node types and sizes on Google Cloud Platform. For more information, see Google Cloud compute types . 1.1.5. Q1 2024 OpenShift Dedicated regions added. OpenShift Dedicated on Google Cloud Platform (GCP) is now available in the Delhi, India ( asia-south2 ) region. For more information on region availabilities, see Regions and availability zones . Policy constraint update. OpenShift Dedicated on Google Cloud Platform (GCP) users are now allowed to deploy clusters with the constraints/iam.allowedPolicyMemberDomains constraint in place. 
This feature allows users to restrict the set of identities that are allowed to be used in Identity and Access Management policies, further enhancing overall security for their resources. 1.1.6. Q4 2023 Policy constraint update. OpenShift Dedicated on Google Cloud Platform (GCP) users can now enable UEFISecureBoot during cluster installation, as required by the GCP ShieldVM policy. This new feature adds further protection from boot or kernel-level malware or rootkits. Cluster install update. OpenShift Dedicated clusters can now be installed on Google Cloud Platform (GCP) shared VPCs. OpenShift Dedicated on Google Cloud Marketplace availability. When creating an OpenShift Dedicated (OSD) cluster on Google Cloud through the Hybrid Cloud Console, customers can now select Google Cloud Marketplace as their preferred billing model. This billing model allows Red Hat customers to take advantage of their Google Committed Use Discounts (CUD) towards OpenShift Dedicated purchased through the Google Cloud Marketplace. 1.2. Known issues OpenShift Container Platform 4.14 introduced an updated HAProxy image from 2.2 to 2.6. This update created a change in behavior enforcing strict RFC 7230 compliance, rejecting requests with multiple Transfer-Encoding headers. This may cause exposed pods in OpenShift Dedicated 4.14 clusters sending multiple Transfer-Encoding headers to respond with a 502 Bad Gateway or 400 Bad Request error . To avoid this issue, ensure that your applications are not sending multiple Transfer-Encoding headers. For more information, see Red Hat Knowledgebase article . ( OCPBUGS-43095 ) | null | https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/whats_new/osd-whats-new |
Appendix A. Using the Maven OSGi Tooling | Appendix A. Using the Maven OSGi Tooling Abstract Manually creating a bundle, or a collection of bundles, for a large project can be cumbersome. The Maven bundle plug-in makes the job easier by automating the process and providing a number of shortcuts for specifying the contents of the bundle manifest. A.1. The Maven Bundle Plug-In The Red Hat Fuse OSGi tooling uses the Maven bundle plug-in from Apache Felix. The bundle plug-in is based on the bnd tool from Peter Kriens. It automates the construction of OSGi bundle manifests by introspecting the contents of the classes being packaged in the bundle. Using the knowledge of the classes contained in the bundle, the plug-in can calculate the proper values to populate the Import-Package and the Export-Package properties in the bundle manifest. The plug-in also has default values that are used for other required properties in the bundle manifest. To use the bundle plug-in, do the following: Add the bundle plug-in to your project's POM file, as described in Section A.2, "Setting up a Red Hat Fuse OSGi project". Configure the plug-in to correctly populate your bundle's manifest, as described in Section A.3, "Configuring the Bundle Plug-In". A.2. Setting up a Red Hat Fuse OSGi project Overview A Maven project for building an OSGi bundle can be a simple single level project. It does not require any sub-projects. However, it does require that you do the following: Add the bundle plug-in to your POM. Instruct Maven to package the results as an OSGi bundle. Note There are several Maven archetypes you can use to set up your project with the appropriate settings. Directory structure A project that constructs an OSGi bundle can be a single level project. It only requires that you have a top-level POM file and a src folder. As in all Maven projects, you place all Java source code in the src/java folder, and you place any non-Java resources in the src/resources folder. Non-Java resources include Spring configuration files, JBI endpoint configuration files, and WSDL contracts. Note Red Hat Fuse OSGi projects that use Apache CXF, Apache Camel, or another Spring configured bean also include a beans.xml file located in the src/resources/META-INF/spring folder. Adding a bundle plug-in Before you can use the bundle plug-in, you must add a dependency on Apache Felix. After you add the dependency, you can add the bundle plug-in to the plug-in portion of the POM. Example A.1, "Adding an OSGi bundle plug-in to a POM" shows the POM entries required to add the bundle plug-in to your project. Example A.1. Adding an OSGi bundle plug-in to a POM The entries in Example A.1, "Adding an OSGi bundle plug-in to a POM" do the following: Adds the dependency on Apache Felix Adds the bundle plug-in to your project Configures the plug-in to use the project's artifact ID as the bundle's symbolic name Configures the plug-in to include all Java packages imported by the bundled classes; also imports the org.apache.camel.osgi package Configures the plug-in to bundle the listed classes, but not to include them in the list of exported packages Note Edit the configuration to meet the requirements of your project. For more information on configuring the bundle plug-in, see Section A.3, "Configuring the Bundle Plug-In" . Activating a bundle plug-in To have Maven use the bundle plug-in, instruct it to package the results of the project as a bundle. Do this by setting the POM file's packaging element to bundle . 
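For example, a minimal pom.xml skeleton with the packaging element set to bundle might look like the following sketch. The group ID, artifact ID, and version shown here are hypothetical placeholders; only the packaging element is the point of the example.

<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <!-- Hypothetical coordinates; replace them with your own. -->
  <groupId>com.example.demo</groupId>
  <artifactId>demo-bundle</artifactId>
  <version>1.0.0</version>
  <!-- Instructs Maven to package the project as an OSGi bundle. -->
  <packaging>bundle</packaging>
  <!-- The Apache Felix dependency and bundle plug-in entries shown in
       Example A.1 also go in this POM. -->
</project>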
Useful Maven archetypes There are several Maven archetypes available to generate a project that is preconfigured to use the bundle plug-in: the section called "Spring OSGi archetype" the section called "Apache CXF code-first archetype" the section called "Apache CXF wsdl-first archetype" the section called "Apache Camel archetype" Spring OSGi archetype The Spring OSGi archetype creates a generic project for building an OSGi project using Spring DM, as shown: You invoke the archetype using the following command: Apache CXF code-first archetype The Apache CXF code-first archetype creates a project for building a service from Java, as shown: You invoke the archetype using the following command: Apache CXF wsdl-first archetype The Apache CXF wsdl-first archetype creates a project for creating a service from WSDL, as shown: You invoke the archetype using the following command: Apache Camel archetype The Apache Camel archetype creates a project for building a route that is deployed into Red Hat Fuse, as shown: You invoke the archetype using the following command: A.3. Configuring the Bundle Plug-In Overview A bundle plug-in requires very little information to function. All of the required properties use default settings to generate a valid OSGi bundle. While you can create a valid bundle using just the default values, you will probably want to modify some of the values. You can specify most of the properties inside the plug-in's instructions element. Configuration properties Some of the commonly used configuration properties are: Bundle-SymbolicName Bundle-Name Bundle-Version Export-Package Private-Package Import-Package Setting a bundle's symbolic name By default, the bundle plug-in sets the value for the Bundle-SymbolicName property to groupId + "." + artifactId , with the following exceptions: If groupId has only one section (no dots), the first package name with classes is returned. For example, if the group Id is commons-logging:commons-logging , the bundle's symbolic name is org.apache.commons.logging . If artifactId is equal to the last section of groupId , then groupId is used. For example, if the POM specifies the group ID and artifact ID as org.apache.maven:maven , the bundle's symbolic name is org.apache.maven . If artifactId starts with the last section of groupId , that portion is removed. For example, if the POM specifies the group ID and artifact ID as org.apache.maven:maven-core , the bundle's symbolic name is org.apache.maven.core . To specify your own value for the bundle's symbolic name, add a Bundle-SymbolicName child in the plug-in's instructions element, as shown in Example A.2, "Setting a bundle's symbolic name" . Example A.2. Setting a bundle's symbolic name Setting a bundle's name By default, a bundle's name is set to USD{project.name} . To specify your own value for the bundle's name, add a Bundle-Name child to the plug-in's instructions element, as shown in Example A.3, "Setting a bundle's name" . Example A.3. Setting a bundle's name Setting a bundle's version By default, a bundle's version is set to USD{project.version} . Any dashes ( - ) are replaced with dots ( . ) and the number is padded up to four digits. For example, 4.2-SNAPSHOT becomes 4.2.0.SNAPSHOT . To specify your own value for the bundle's version, add a Bundle-Version child to the plug-in's instructions element, as shown in Example A.4, "Setting a bundle's version" . Example A.4. 
Setting a bundle's version Specifying exported packages By default, the OSGi manifest's Export-Package list is populated by all of the packages in your local Java source code (under src/main/java ), except for the default package, . , and any packages containing .impl or .internal . Important If you use a Private-Package element in your plug-in configuration and you do not specify a list of packages to export, the default behavior includes only the packages listed in the Private-Package element in the bundle. No packages are exported. The default behavior can result in very large packages and in exporting packages that should be kept private. To change the list of exported packages, you can add an Export-Package child to the plug-in's instructions element. The Export-Package element specifies a list of packages that are to be included in the bundle and that are to be exported. The package names can be specified using the * wildcard symbol. For example, the entry com.fuse.demo.* includes all packages on the project's classpath that start with com.fuse.demo . You can specify packages to be excluded by prefixing the entry with ! . For example, the entry !com.fuse.demo.private excludes the package com.fuse.demo.private . When excluding packages, the order of entries in the list is important. The list is processed in order from the beginning and any subsequent contradicting entries are ignored. For example, to include all packages starting with com.fuse.demo except the package com.fuse.demo.private , list the packages using: However, if you list the packages using com.fuse.demo.*,!com.fuse.demo.private , then com.fuse.demo.private is included in the bundle because it matches the first pattern. Specifying private packages If you want to specify a list of packages to include in a bundle without exporting them, you can add a Private-Package instruction to the bundle plug-in configuration. By default, if you do not specify a Private-Package instruction, all packages in your local Java source are included in the bundle. Important If a package matches an entry in both the Private-Package element and the Export-Package element, the Export-Package element takes precedence. The package is added to the bundle and exported. The Private-Package element works similarly to the Export-Package element in that you specify a list of packages to be included in the bundle. The bundle plug-in uses the list to find all classes on the project's classpath that are to be included in the bundle. These packages are packaged in the bundle, but not exported (unless they are also selected by the Export-Package instruction). Example A.5, "Including a private package in a bundle" shows the configuration for including a private package in a bundle. Example A.5. Including a private package in a bundle Specifying imported packages By default, the bundle plug-in populates the OSGi manifest's Import-Package property with a list of all the packages referred to by the contents of the bundle. While the default behavior is typically sufficient for most projects, you might find instances where you want to import packages that are not automatically added to the list. The default behavior can also result in unwanted packages being imported. To specify a list of packages to be imported by the bundle, add an Import-Package child to the plug-in's instructions element. The syntax for the package list is the same as for the Export-Package element and the Private-Package element. 
Important When you use the Import-Package element, the plug-in does not automatically scan the bundle's contents to determine if there are any required imports. To ensure that the contents of the bundle are scanned, you must place an * as the last entry in the package list. Example A.6, "Specifying the packages imported by a bundle" shows the configuration for specifying the packages imported by a bundle. Example A.6. Specifying the packages imported by a bundle More information For more information on configuring a bundle plug-in, see: Apache Felix documentation Peter Kriens' aQute Software Consultancy web site | [ 
"<dependencies> <dependency> <groupId>org.apache.felix</groupId> <artifactId>org.osgi.core</artifactId> <version>1.0.0</version> </dependency> </dependencies> <build> <plugins> <plugin> <groupId>org.apache.felix</groupId> <artifactId>maven-bundle-plugin</artifactId> <configuration> <instructions> <Bundle-SymbolicName>USD{pom.artifactId}</Bundle-SymbolicName> <Import-Package>*,org.apache.camel.osgi</Import-Package> <Private-Package>org.apache.servicemix.examples.camel</Private-Package> </instructions> </configuration> </plugin> </plugins> </build>",
"org.springframework.osgi/spring-bundle-osgi-archetype/1.1.2",
"mvn archetype:generate -DarchetypeGroupId=org.springframework.osgi -DarchetypeArtifactId=spring-osgi-bundle-archetype -DarchetypeVersion=1.1.2 -DgroupId= groupId -DartifactId= artifactId -Dversion= version",
"org.apache.servicemix.tooling/servicemix-osgi-cxf-code-first-archetype/2010.02.0-fuse-02-00",
"mvn archetype:generate -DarchetypeGroupId=org.apache.servicemix.tooling -DarchetypeArtifactId=servicemix-osgi-cxf-code-first-archetype -DarchetypeVersion=2010.02.0-fuse-02-00 -DgroupId= groupId -DartifactId= artifactId -Dversion= version",
"org.apache.servicemix.tooling/servicemix-osgi-cxf-wsdl-first-archetype/2010.02.0-fuse-02-00",
"mvn archetype:generate -DarchetypeGroupId=org.apache.servicemix.tooling -DarchetypeArtifactId=servicemix-osgi-cxf-wsdl-first-archetype -DarchetypeVersion=2010.02.0-fuse-02-00 -DgroupId= groupId -DartifactId= artifactId -Dversion= version",
"org.apache.servicemix.tooling/servicemix-osgi-camel-archetype/2010.02.0-fuse-02-00",
"mvn archetype:generate -DarchetypeGroupId=org.apache.servicemix.tooling -DarchetypeArtifactId=servicemix-osgi-camel-archetype -DarchetypeVersion=2010.02.0-fuse-02-00 -DgroupId= groupId -DartifactId= artifactId -Dversion= version",
"<plugin> <groupId>org.apache.felix</groupId> <artifactId>maven-bundle-plugin</artifactId> <configuration> <instructions> <Bundle-SymbolicName>USD{project.artifactId}</Bundle-SymbolicName> </instructions> </configuration> </plugin>",
"<plugin> <groupId>org.apache.felix</groupId> <artifactId>maven-bundle-plugin</artifactId> <configuration> <instructions> <Bundle-Name>JoeFred</Bundle-Name> </instructions> </configuration> </plugin>",
"<plugin> <groupId>org.apache.felix</groupId> <artifactId>maven-bundle-plugin</artifactId> <configuration> <instructions> <Bundle-Version>1.0.3.1</Bundle-Version> </instructions> </configuration> </plugin>",
"!com.fuse.demo.private,com.fuse.demo.*",
"<plugin> <groupId>org.apache.felix</groupId> <artifactId>maven-bundle-plugin</artifactId> <configuration> <instructions> <Private-Package>org.apache.cxf.wsdlFirst.impl</Private-Package> </instructions> </configuration> </plugin>",
"<plugin> <groupId>org.apache.felix</groupId> <artifactId>maven-bundle-plugin</artifactId> <configuration> <instructions> <Import-Package>javax.jws, javax.wsdl, org.apache.cxf.bus, org.apache.cxf.bus.spring, org.apache.cxf.bus.resource, org.apache.cxf.configuration.spring, org.apache.cxf.resource, org.springframework.beans.factory.config, * </Import-Package> </instructions> </configuration> </plugin>"
]
| https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/ESBMavenOSGiAppx |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/red_hat_openshift_data_foundation_architecture/making-open-source-more-inclusive |
probe::nfsd.write | probe::nfsd.write Name probe::nfsd.write - NFS server writing data to a file for a client Synopsis nfsd.write Values
offset - the offset of the file
fh - the file handle (the first part is the length of the file handle)
vlen - the number of blocks
file - the argument file; indicates whether the file has been opened
client_ip - the IP address of the client
count - the number of bytes
size - the number of bytes
vec - struct kvec, includes the buf address in kernel address space and the length of each buffer | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-nfsd-write 
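As a usage illustration only, and not part of the tapset reference itself, the following SystemTap script attaches to this probe and totals the write activity reported by the NFS server. It relies only on the count variable documented above; the script name and output format are arbitrary.

#!/usr/bin/stap
# Minimal sketch: accumulate nfsd.write activity until the script is stopped.
global total_bytes, total_ops

probe nfsd.write {
  total_bytes += count   # count is the byte count documented above
  total_ops++
}

probe end {
  printf("nfsd write requests: %d, bytes requested: %d\n", total_ops, total_bytes)
}

Run it on the NFS server with stap, for example stap nfsd-write-total.stp (a hypothetical file name), and stop it with Ctrl+C to print the totals.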
Chapter 14. Sample DPDK SR-IOV YAML files | Chapter 14. Sample DPDK SR-IOV YAML files This section provides sample yaml files as a reference to add single root I/O virtualization (SR-IOV) and Data Plane Development Kit (DPDK) interfaces on the same compute node. Note These templates are from a fully-configured environment, and include parameters unrelated to NFV, that might not apply to your deployment. For a list of component support levels, see the Red Hat Knowledgebase solution Component Support Graduation . 14.1. roles_data.yaml Run the openstack overcloud roles generate command to generate the roles_data.yaml file. Include role names in the command according to the roles that you want to deploy in your environment, such as Controller , ComputeSriov , ComputeOvsDpdkRT , ComputeOvsDpdkSriov , or other roles. Example For example, to generate a roles_data.yaml file that contains the roles Controller and ComputeHCIOvsDpdkSriov , run the following command: USD openstack overcloud roles generate -o roles_data.yaml \ Controller ComputeHCIOvsDpdkSriov ############################################################################### # File generated by TripleO ############################################################################### ############################################################################### # Role: Controller # ############################################################################### - name: Controller description: | Controller role that has all the controller services loaded and handles Database, Messaging and Network functions. CountDefault: 1 tags: - primary - controller networks: External: subnet: external_subnet InternalApi: subnet: internal_api_subnet Storage: subnet: storage_subnet StorageMgmt: subnet: storage_mgmt_subnet Tenant: subnet: tenant_subnet # For systems with both IPv4 and IPv6, you may specify a gateway network for # each, such as ['ControlPlane', 'External'] default_route_networks: ['External'] HostnameFormatDefault: '%stackname%-controller-%index%' # Deprecated & backward-compatible values (FIXME: Make parameters consistent) # Set uses_deprecated_params to True if any deprecated params are used. 
uses_deprecated_params: True deprecated_param_extraconfig: 'controllerExtraConfig' deprecated_param_flavor: 'OvercloudControlFlavor' deprecated_param_image: 'controllerImage' deprecated_nic_config_name: 'controller.yaml' update_serial: 1 ServicesDefault: - OS::TripleO::Services::Aide - OS::TripleO::Services::AodhApi - OS::TripleO::Services::AodhEvaluator - OS::TripleO::Services::AodhListener - OS::TripleO::Services::AodhNotifier - OS::TripleO::Services::AuditD - OS::TripleO::Services::BarbicanApi - OS::TripleO::Services::BarbicanBackendSimpleCrypto - OS::TripleO::Services::BarbicanBackendDogtag - OS::TripleO::Services::BarbicanBackendKmip - OS::TripleO::Services::BarbicanBackendPkcs11Crypto - OS::TripleO::Services::BootParams - OS::TripleO::Services::CACerts - OS::TripleO::Services::CeilometerAgentCentral - OS::TripleO::Services::CeilometerAgentNotification - OS::TripleO::Services::CephExternal - OS::TripleO::Services::CephGrafana - OS::TripleO::Services::CephMds - OS::TripleO::Services::CephMgr - OS::TripleO::Services::CephMon - OS::TripleO::Services::CephRbdMirror - OS::TripleO::Services::CephRgw - OS::TripleO::Services::CertmongerUser - OS::TripleO::Services::CinderApi - OS::TripleO::Services::CinderBackendDellPs - OS::TripleO::Services::CinderBackendDellSc - OS::TripleO::Services::CinderBackendDellEMCPowermax - OS::TripleO::Services::CinderBackendDellEMCPowerStore - OS::TripleO::Services::CinderBackendDellEMCSc - OS::TripleO::Services::CinderBackendDellEMCUnity - OS::TripleO::Services::CinderBackendDellEMCVMAXISCSI - OS::TripleO::Services::CinderBackendDellEMCVNX - OS::TripleO::Services::CinderBackendDellEMCVxFlexOS - OS::TripleO::Services::CinderBackendDellEMCXtremio - OS::TripleO::Services::CinderBackendDellEMCXTREMIOISCSI - OS::TripleO::Services::CinderBackendNetApp - OS::TripleO::Services::CinderBackendPure - OS::TripleO::Services::CinderBackendScaleIO - OS::TripleO::Services::CinderBackendVRTSHyperScale - OS::TripleO::Services::CinderBackendNVMeOF - OS::TripleO::Services::CinderBackup - OS::TripleO::Services::CinderHPELeftHandISCSI - OS::TripleO::Services::CinderScheduler - OS::TripleO::Services::CinderVolume - OS::TripleO::Services::Clustercheck - OS::TripleO::Services::Collectd - OS::TripleO::Services::ContainerImagePrepare - OS::TripleO::Services::DesignateApi - OS::TripleO::Services::DesignateCentral - OS::TripleO::Services::DesignateProducer - OS::TripleO::Services::DesignateWorker - OS::TripleO::Services::DesignateMDNS - OS::TripleO::Services::DesignateSink - OS::TripleO::Services::Docker - OS::TripleO::Services::Ec2Api - OS::TripleO::Services::Etcd - OS::TripleO::Services::ExternalSwiftProxy - OS::TripleO::Services::GlanceApi - OS::TripleO::Services::GnocchiApi - OS::TripleO::Services::GnocchiMetricd - OS::TripleO::Services::GnocchiStatsd - OS::TripleO::Services::HAproxy - OS::TripleO::Services::HeatApi - OS::TripleO::Services::HeatApiCloudwatch - OS::TripleO::Services::HeatApiCfn - OS::TripleO::Services::HeatEngine - OS::TripleO::Services::Horizon - OS::TripleO::Services::IpaClient - OS::TripleO::Services::Ipsec - OS::TripleO::Services::IronicApi - OS::TripleO::Services::IronicConductor - OS::TripleO::Services::IronicInspector - OS::TripleO::Services::IronicPxe - OS::TripleO::Services::IronicNeutronAgent - OS::TripleO::Services::Iscsid - OS::TripleO::Services::Keepalived - OS::TripleO::Services::Kernel - OS::TripleO::Services::Keystone - OS::TripleO::Services::LoginDefs - OS::TripleO::Services::ManilaApi - OS::TripleO::Services::ManilaBackendCephFs - 
OS::TripleO::Services::ManilaBackendIsilon - OS::TripleO::Services::ManilaBackendNetapp - OS::TripleO::Services::ManilaBackendUnity - OS::TripleO::Services::ManilaBackendVNX - OS::TripleO::Services::ManilaBackendVMAX - OS::TripleO::Services::ManilaScheduler - OS::TripleO::Services::ManilaShare - OS::TripleO::Services::Memcached - OS::TripleO::Services::MetricsQdr - OS::TripleO::Services::MistralApi - OS::TripleO::Services::MistralEngine - OS::TripleO::Services::MistralExecutor - OS::TripleO::Services::MistralEventEngine - OS::TripleO::Services::Multipathd - OS::TripleO::Services::MySQL - OS::TripleO::Services::MySQLClient - OS::TripleO::Services::NeutronApi - OS::TripleO::Services::NeutronBgpVpnApi - OS::TripleO::Services::NeutronSfcApi - OS::TripleO::Services::NeutronCorePlugin - OS::TripleO::Services::NeutronDhcpAgent - OS::TripleO::Services::NeutronL2gwAgent - OS::TripleO::Services::NeutronL2gwApi - OS::TripleO::Services::NeutronL3Agent - OS::TripleO::Services::NeutronLinuxbridgeAgent - OS::TripleO::Services::NeutronMetadataAgent - OS::TripleO::Services::NeutronML2FujitsuCfab - OS::TripleO::Services::NeutronML2FujitsuFossw - OS::TripleO::Services::NeutronOvsAgent - OS::TripleO::Services::NeutronVppAgent - OS::TripleO::Services::NeutronAgentsIBConfig - OS::TripleO::Services::NovaApi - OS::TripleO::Services::NovaConductor - OS::TripleO::Services::NovaIronic - OS::TripleO::Services::NovaMetadata - OS::TripleO::Services::NovaScheduler - OS::TripleO::Services::NovaVncProxy - OS::TripleO::Services::ContainersLogrotateCrond - OS::TripleO::Services::OctaviaApi - OS::TripleO::Services::OctaviaDeploymentConfig - OS::TripleO::Services::OctaviaHealthManager - OS::TripleO::Services::OctaviaHousekeeping - OS::TripleO::Services::OctaviaWorker - OS::TripleO::Services::OpenStackClients - OS::TripleO::Services::OVNDBs - OS::TripleO::Services::OVNController - OS::TripleO::Services::Pacemaker - OS::TripleO::Services::PankoApi - OS::TripleO::Services::PlacementApi - OS::TripleO::Services::OsloMessagingRpc - OS::TripleO::Services::OsloMessagingNotify - OS::TripleO::Services::Podman - OS::TripleO::Services::Rear - OS::TripleO::Services::Redis - OS::TripleO::Services::Rhsm - OS::TripleO::Services::Rsyslog - OS::TripleO::Services::RsyslogSidecar - OS::TripleO::Services::SaharaApi - OS::TripleO::Services::SaharaEngine - OS::TripleO::Services::Securetty - OS::TripleO::Services::Snmp - OS::TripleO::Services::Sshd - OS::TripleO::Services::SwiftProxy - OS::TripleO::Services::SwiftDispersion - OS::TripleO::Services::SwiftRingBuilder - OS::TripleO::Services::SwiftStorage - OS::TripleO::Services::Timesync - OS::TripleO::Services::Timezone - OS::TripleO::Services::TripleoFirewall - OS::TripleO::Services::TripleoPackages - OS::TripleO::Services::Tuned - OS::TripleO::Services::Vpp - OS::TripleO::Services::Zaqar ############################################################################### # Role: ComputeHCIOvsDpdkSriov # ############################################################################### - name: ComputeHCIOvsDpdkSriov description: | ComputeOvsDpdkSriov Node role hosting Ceph OSD too networks: InternalApi: subnet: internal_api_subnet Tenant: subnet: tenant_subnet Storage: subnet: storage_subnet StorageMgmt: subnet: storage_mgmt_subnet # CephOSD present so serial has to be 1 update_serial: 1 RoleParametersDefault: TunedProfileName: "cpu-partitioning" VhostuserSocketGroup: "hugetlbfs" NovaLibvirtRxQueueSize: 1024 NovaLibvirtTxQueueSize: 1024 ServicesDefault: - OS::TripleO::Services::Aide - 
OS::TripleO::Services::AuditD - OS::TripleO::Services::BootParams - OS::TripleO::Services::CACerts - OS::TripleO::Services::CephClient - OS::TripleO::Services::CephExternal - OS::TripleO::Services::CephOSD - OS::TripleO::Services::CertmongerUser - OS::TripleO::Services::Collectd - OS::TripleO::Services::ComputeCeilometerAgent - OS::TripleO::Services::ComputeNeutronCorePlugin - OS::TripleO::Services::ComputeNeutronL3Agent - OS::TripleO::Services::ComputeNeutronMetadataAgent - OS::TripleO::Services::ComputeNeutronOvsDpdk - OS::TripleO::Services::Docker - OS::TripleO::Services::IpaClient - OS::TripleO::Services::Ipsec - OS::TripleO::Services::Iscsid - OS::TripleO::Services::Kernel - OS::TripleO::Services::LoginDefs - OS::TripleO::Services::MetricsQdr - OS::TripleO::Services::Multipathd - OS::TripleO::Services::MySQLClient - OS::TripleO::Services::NeutronBgpVpnBagpipe - OS::TripleO::Services::NeutronSriovAgent - OS::TripleO::Services::NeutronSriovHostConfig - OS::TripleO::Services::NovaAZConfig - OS::TripleO::Services::NovaCompute - OS::TripleO::Services::NovaLibvirt - OS::TripleO::Services::NovaLibvirtGuests - OS::TripleO::Services::NovaMigrationTarget - OS::TripleO::Services::OvsDpdkNetcontrold - OS::TripleO::Services::ContainersLogrotateCrond - OS::TripleO::Services::Podman - OS::TripleO::Services::Rear - OS::TripleO::Services::Rhsm - OS::TripleO::Services::Rsyslog - OS::TripleO::Services::RsyslogSidecar - OS::TripleO::Services::Securetty - OS::TripleO::Services::Snmp - OS::TripleO::Services::Sshd - OS::TripleO::Services::Timesync - OS::TripleO::Services::Timezone - OS::TripleO::Services::TripleoFirewall - OS::TripleO::Services::TripleoPackages - OS::TripleO::Services::OVNController - OS::TripleO::Services::OVNMetadataAgent - OS::TripleO::Services::Ptp 14.2. network-environment-overrides.yaml resource_registry: # Specify the relative/absolute path to the config files you want to use for override the default. OS::TripleO::ComputeOvsDpdkSriov::Net::SoftwareConfig: nic-configs/computeovsdpdksriov.yaml OS::TripleO::Controller::Net::SoftwareConfig: nic-configs/controller.yaml # Customize all these values to match the local environment parameter_defaults: # The tunnel type for the project network (vxlan or gre). Set to '' to disable tunneling. NeutronTunnelTypes: 'vxlan' # The project network type for Neutron (vlan or vxlan). NeutronNetworkType: 'vxlan,vlan' # The OVS logical->physical bridge mappings to use. NeutronBridgeMappings: 'access:br-access,dpdk-mgmt:br-link0' # The Neutron ML2 and OpenVSwitch vlan mapping range to support. NeutronNetworkVLANRanges: 'access:423:423,dpdk-mgmt:134:137,sriov-1:138:139,sriov-2:138:139' # Define the DNS servers (maximum 2) for the overcloud nodes DnsServers: ["10.46.0.31","10.46.0.32"] # Nova flavor to use. OvercloudControllerFlavor: controller OvercloudComputeOvsDpdkSriovFlavor: computeovsdpdksriov # Number of nodes to deploy. ControllerCount: 3 ComputeOvsDpdkSriovCount: 2 # NTP server configuration. NtpServer: ['clock.redhat.com'] # MTU global configuration NeutronGlobalPhysnetMtu: 9000 # Configure the classname of the firewall driver to use for implementing security groups. 
NeutronOVSFirewallDriver: openvswitch SshServerOptions: UseDns: 'no' # Enable log level DEBUG for supported components Debug: True ControllerHostnameFormat: 'controller-%index%' ControllerSchedulerHints: 'capabilities:node': 'controller-%index%' ComputeOvsDpdkSriovHostnameFormat: 'computeovsdpdksriov-%index%' ComputeOvsDpdkSriovSchedulerHints: 'capabilities:node': 'computeovsdpdksriov-%index%' # From Rocky live migration with NumaTopologyFilter disabled by default # https://bugs.launchpad.net/nova/+bug/1289064 NovaEnableNUMALiveMigration: true ########################## # OVS DPDK configuration # ########################## # In the future, most parameters will be derived by mistral plan. # Currently mistral derive parameters is blocked: # https://bugzilla.redhat.com/show_bug.cgi?id=1777841 # https://bugzilla.redhat.com/show_bug.cgi?id=1777844 ComputeOvsDpdkSriovParameters: KernelArgs: "default_hugepagesz=1GB hugepagesz=1G hugepages=64 iommu=pt intel_iommu=on isolcpus=2-19,22-39" TunedProfileName: "cpu-partitioning" IsolCpusList: "2-19,22-39" NovaComputeCpuDedicatedSet: ['2-10,12-17,19,22-30,32-37,39'] NovaReservedHostMemory: 4096 OvsDpdkSocketMemory: "1024,3072" OvsDpdkMemoryChannels: "4" OvsPmdCoreList: "11,18,31,38" NovaComputeCpuSharedSet: [0,20,1,21] # When using NIC partitioning on SR-IOV enabled setups, 'derive_pci_passthrough_whitelist.py' # script will be executed which will override NovaPCIPassthrough. # No option to disable as of now - https://bugzilla.redhat.com/show_bug.cgi?id=1774403 NovaPCIPassthrough: - address: "0000:19:0e.3" trusted: "true" physical_network: "sriov1" - address: "0000:19:0e.0" trusted: "true" physical_network: "sriov-2" # NUMA aware vswitch NeutronPhysnetNUMANodesMapping: {dpdk-mgmt: [0]} NeutronTunnelNUMANodes: [0] NeutronPhysicalDevMappings: - sriov1:enp6s0f2 - sriov2:enp6s0f3 ############################ # Scheduler configuration # ############################ NovaSchedulerDefaultFilters: - "AvailabilityZoneFilter" - "ComputeFilter" - "ComputeCapabilitiesFilter" - "ImagePropertiesFilter" - "ServerGroupAntiAffinityFilter" - "ServerGroupAffinityFilter" - "PciPassthroughFilter" - "NUMATopologyFilter" - "AggregateInstanceExtraSpecsFilter" 14.3. controller.yaml heat_template_version: rocky description: > Software Config to drive os-net-config to configure VLANs for the controller role. parameters: ControlPlaneIp: default: '' description: IP address/subnet on the ctlplane network type: string ExternalIpSubnet: default: '' description: IP address/subnet on the external network type: string ExternalInterfaceRoutes: default: [] description: > Routes for the external network traffic. JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}] Unless the default is changed, the parameter is automatically resolved from the subnet host_routes attribute. type: json InternalApiIpSubnet: default: '' description: IP address/subnet on the internal_api network type: string InternalApiInterfaceRoutes: default: [] description: > Routes for the internal_api network traffic. JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}] Unless the default is changed, the parameter is automatically resolved from the subnet host_routes attribute. type: json StorageIpSubnet: default: '' description: IP address/subnet on the storage network type: string StorageInterfaceRoutes: default: [] description: > Routes for the storage network traffic. JSON route e.g. 
[{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}] Unless the default is changed, the parameter is automatically resolved from the subnet host_routes attribute. type: json StorageMgmtIpSubnet: default: '' description: IP address/subnet on the storage_mgmt network type: string StorageMgmtInterfaceRoutes: default: [] description: > Routes for the storage_mgmt network traffic. JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}] Unless the default is changed, the parameter is automatically resolved from the subnet host_routes attribute. type: json TenantIpSubnet: default: '' description: IP address/subnet on the tenant network type: string TenantInterfaceRoutes: default: [] description: > Routes for the tenant network traffic. JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}] Unless the default is changed, the parameter is automatically resolved from the subnet host_routes attribute. type: json ManagementIpSubnet: # Only populated when including environments/network-management.yaml default: '' description: IP address/subnet on the management network type: string ManagementInterfaceRoutes: default: [] description: > Routes for the management network traffic. JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}] Unless the default is changed, the parameter is automatically resolved from the subnet host_routes attribute. type: json BondInterfaceOvsOptions: default: bond_mode=active-backup description: >- The ovs_options string for the bond interface. Set things like lacp=active and/or bond_mode=balance-slb using this option. type: string ExternalNetworkVlanID: default: 10 description: Vlan ID for the external network traffic. type: number InternalApiNetworkVlanID: default: 20 description: Vlan ID for the internal_api network traffic. type: number StorageNetworkVlanID: default: 30 description: Vlan ID for the storage network traffic. type: number StorageMgmtNetworkVlanID: default: 40 description: Vlan ID for the storage_mgmt network traffic. type: number TenantNetworkVlanID: default: 50 description: Vlan ID for the tenant network traffic. type: number ManagementNetworkVlanID: default: 60 description: Vlan ID for the management network traffic. type: number ExternalInterfaceDefaultRoute: default: 10.0.0.1 description: default route for the external network type: string ControlPlaneSubnetCidr: default: '' description: > The subnet CIDR of the control plane network. (The parameter is automatically resolved from the ctlplane subnet's cidr attribute.) type: string ControlPlaneDefaultRoute: default: '' description: >- The default route of the control plane network. (The parameter is automatically resolved from the ctlplane subnet's gateway_ip attribute.) type: string DnsServers: # Override this via parameter_defaults default: [] description: > DNS servers to use for the Overcloud (2 max for some implementations). If not set the nameservers configured in the ctlplane subnet's dns_nameservers attribute will be used. type: comma_delimited_list EC2MetadataIp: default: '' description: >- The IP address of the EC2 metadata server. (The parameter is automatically resolved from the ctlplane subnet's host_routes attribute.) type: string ControlPlaneStaticRoutes: default: [] description: > Routes for the ctlplane network traffic. JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}] Unless the default is changed, the parameter is automatically resolved from the subnet host_routes attribute. 
type: json ControlPlaneMtu: default: 1500 description: >- The maximum transmission unit (MTU) size(in bytes) that is guaranteed to pass through the data path of the segments in the network. (The parameter is automatically resolved from the ctlplane network's mtu attribute.) type: number StorageMtu: default: 1500 description: >- The maximum transmission unit (MTU) size(in bytes) that is guaranteed to pass through the data path of the segments in the Storage network. type: number StorageMgmtMtu: default: 1500 description: >- The maximum transmission unit (MTU) size(in bytes) that is guaranteed to pass through the data path of the segments in the StorageMgmt network. type: number InternalApiMtu: default: 1500 description: >- The maximum transmission unit (MTU) size(in bytes) that is guaranteed to pass through the data path of the segments in the InternalApi network. type: number TenantMtu: default: 1500 description: >- The maximum transmission unit (MTU) size(in bytes) that is guaranteed to pass through the data path of the segments in the Tenant network. type: number ExternalMtu: default: 1500 description: >- The maximum transmission unit (MTU) size(in bytes) that is guaranteed to pass through the data path of the segments in the External network. type: number resources: OsNetConfigImpl: type: OS::Heat::SoftwareConfig properties: group: script config: str_replace: template: get_file: /usr/share/openstack-tripleo-heat-templates/network/scripts/run-os-net-config.sh params: USDnetwork_config: network_config: - type: interface name: nic1 use_dhcp: false addresses: - ip_netmask: list_join: - / - - get_param: ControlPlaneIp - get_param: ControlPlaneSubnetCidr routes: - ip_netmask: 169.254.169.254/32 next_hop: get_param: EC2MetadataIp - type: ovs_bridge name: br-link0 use_dhcp: false mtu: 9000 members: - type: interface name: nic2 mtu: 9000 - type: vlan vlan_id: get_param: TenantNetworkVlanID mtu: 9000 addresses: - ip_netmask: get_param: TenantIpSubnet - type: vlan vlan_id: get_param: InternalApiNetworkVlanID addresses: - ip_netmask: get_param: InternalApiIpSubnet - type: vlan vlan_id: get_param: StorageNetworkVlanID addresses: - ip_netmask: get_param: StorageIpSubnet - type: vlan vlan_id: get_param: StorageMgmtNetworkVlanID addresses: - ip_netmask: get_param: StorageMgmtIpSubnet - type: ovs_bridge name: br-access use_dhcp: false mtu: 9000 members: - type: interface name: nic3 mtu: 9000 - type: vlan vlan_id: get_param: ExternalNetworkVlanID mtu: 9000 addresses: - ip_netmask: get_param: ExternalIpSubnet routes: - default: true next_hop: get_param: ExternalInterfaceDefaultRoute outputs: OS::stack_id: description: The OsNetConfigImpl resource. value: get_resource: OsNetConfigImpl 14.4. compute-ovs-dpdk.yaml heat_template_version: rocky description: > Software Config to drive os-net-config to configure VLANs for the compute role. parameters: ControlPlaneIp: default: '' description: IP address/subnet on the ctlplane network type: string ExternalIpSubnet: default: '' description: IP address/subnet on the external network type: string ExternalInterfaceRoutes: default: [] description: > Routes for the external network traffic. JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}] Unless the default is changed, the parameter is automatically resolved from the subnet host_routes attribute. 
type: json InternalApiIpSubnet: default: '' description: IP address/subnet on the internal_api network type: string InternalApiInterfaceRoutes: default: [] description: > Routes for the internal_api network traffic. JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}] Unless the default is changed, the parameter is automatically resolved from the subnet host_routes attribute. type: json StorageIpSubnet: default: '' description: IP address/subnet on the storage network type: string StorageInterfaceRoutes: default: [] description: > Routes for the storage network traffic. JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}] Unless the default is changed, the parameter is automatically resolved from the subnet host_routes attribute. type: json StorageMgmtIpSubnet: default: '' description: IP address/subnet on the storage_mgmt network type: string StorageMgmtInterfaceRoutes: default: [] description: > Routes for the storage_mgmt network traffic. JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}] Unless the default is changed, the parameter is automatically resolved from the subnet host_routes attribute. type: json TenantIpSubnet: default: '' description: IP address/subnet on the tenant network type: string TenantInterfaceRoutes: default: [] description: > Routes for the tenant network traffic. JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}] Unless the default is changed, the parameter is automatically resolved from the subnet host_routes attribute. type: json ManagementIpSubnet: # Only populated when including environments/network-management.yaml default: '' description: IP address/subnet on the management network type: string ManagementInterfaceRoutes: default: [] description: > Routes for the management network traffic. JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}] Unless the default is changed, the parameter is automatically resolved from the subnet host_routes attribute. type: json BondInterfaceOvsOptions: default: 'bond_mode=active-backup' description: The ovs_options string for the bond interface. Set things like lacp=active and/or bond_mode=balance-slb using this option. type: string ExternalNetworkVlanID: default: 10 description: Vlan ID for the external network traffic. type: number InternalApiNetworkVlanID: default: 20 description: Vlan ID for the internal_api network traffic. type: number StorageNetworkVlanID: default: 30 description: Vlan ID for the storage network traffic. type: number StorageMgmtNetworkVlanID: default: 40 description: Vlan ID for the storage_mgmt network traffic. type: number TenantNetworkVlanID: default: 50 description: Vlan ID for the tenant network traffic. type: number ManagementNetworkVlanID: default: 60 description: Vlan ID for the management network traffic. type: number ExternalInterfaceDefaultRoute: default: '10.0.0.1' description: default route for the external network type: string ControlPlaneSubnetCidr: default: '' description: > The subnet CIDR of the control plane network. (The parameter is automatically resolved from the ctlplane subnet's cidr attribute.) type: string ControlPlaneDefaultRoute: default: '' description: The default route of the control plane network. (The parameter is automatically resolved from the ctlplane subnet's gateway_ip attribute.) type: string DnsServers: # Override this via parameter_defaults default: [] description: > DNS servers to use for the Overcloud (2 max for some implementations). 
If not set the nameservers configured in the ctlplane subnet's dns_nameservers attribute will be used. type: comma_delimited_list EC2MetadataIp: default: '' description: The IP address of the EC2 metadata server. (The parameter is automatically resolved from the ctlplane subnet's host_routes attribute.) type: string ControlPlaneStaticRoutes: default: [] description: > Routes for the ctlplane network traffic. JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}] Unless the default is changed, the parameter is automatically resolved from the subnet host_routes attribute. type: json ControlPlaneMtu: default: 1500 description: >- The maximum transmission unit (MTU) size(in bytes) that is guaranteed to pass through the data path of the segments in the network. (The parameter is automatically resolved from the ctlplane network's mtu attribute.) type: number StorageMtu: default: 1500 description: >- The maximum transmission unit (MTU) size(in bytes) that is guaranteed to pass through the data path of the segments in the Storage network. type: number InternalApiMtu: default: 1500 description: >- The maximum transmission unit (MTU) size(in bytes) that is guaranteed to pass through the data path of the segments in the InternalApi network. type: number TenantMtu: default: 1500 description: >- The maximum transmission unit (MTU) size(in bytes) that is guaranteed to pass through the data path of the segments in the Tenant network. type: number resources: OsNetConfigImpl: type: OS::Heat::SoftwareConfig properties: group: script config: str_replace: template: get_file: /usr/share/openstack-tripleo-heat-templates/network/scripts/run-os-net-config.sh params: USDnetwork_config: network_config: - type: interface name: nic1 use_dhcp: false defroute: false - type: interface name: nic2 use_dhcp: false addresses: - ip_netmask: list_join: - / - - get_param: ControlPlaneIp - get_param: ControlPlaneSubnetCidr routes: - ip_netmask: 169.254.169.254/32 next_hop: get_param: EC2MetadataIp - default: true next_hop: get_param: ControlPlaneDefaultRoute - type: linux_bond name: bond_api bonding_options: mode=active-backup use_dhcp: false dns_servers: get_param: DnsServers members: - type: interface name: nic3 primary: true - type: interface name: nic4 - type: vlan vlan_id: get_param: InternalApiNetworkVlanID device: bond_api addresses: - ip_netmask: get_param: InternalApiIpSubnet - type: vlan vlan_id: get_param: StorageNetworkVlanID device: bond_api addresses: - ip_netmask: get_param: StorageIpSubnet - type: ovs_user_bridge name: br-link0 use_dhcp: false ovs_extra: - str_replace: template: set port br-link0 tag=_VLAN_TAG_ params: _VLAN_TAG_: get_param: TenantNetworkVlanID addresses: - ip_netmask: get_param: TenantIpSubnet members: - type: ovs_dpdk_bond name: dpdkbond0 mtu: 9000 rx_queue: 2 members: - type: ovs_dpdk_port name: dpdk0 members: - type: interface name: nic7 - type: ovs_dpdk_port name: dpdk1 members: - type: interface name: nic8 - type: sriov_pf name: nic9 mtu: 9000 numvfs: 10 use_dhcp: false defroute: false nm_controlled: true hotplug: true promisc: false - type: sriov_pf name: nic10 mtu: 9000 numvfs: 10 use_dhcp: false defroute: false nm_controlled: true hotplug: true promisc: false outputs: OS::stack_id: description: The OsNetConfigImpl resource. value: get_resource: OsNetConfigImpl 14.5. 
overcloud_deploy.sh #!/bin/bash THT_PATH='/home/stack/ospd-16-vxlan-dpdk-sriov-ctlplane-dataplane-bonding-hybrid' openstack overcloud deploy \ --templates \ -e /usr/share/openstack-tripleo-heat-templates/environments/network-environment.yaml \ -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \ -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs.yaml \ -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs-dpdk.yaml \ -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-sriov.yaml \ -e /home/stack/containers-prepare-parameter.yaml \ -r USDTHT_PATH/roles_data.yaml \ -e USDTHT_PATH/network-environment-overrides.yaml \ -n USDTHT_PATH/network-data.yaml | [
"openstack overcloud roles generate -o roles_data.yaml Controller ComputeHCIOvsDpdkSriov",
"############################################################################### File generated by TripleO ############################################################################### ############################################################################### Role: Controller # ############################################################################### - name: Controller description: | Controller role that has all the controller services loaded and handles Database, Messaging and Network functions. CountDefault: 1 tags: - primary - controller networks: External: subnet: external_subnet InternalApi: subnet: internal_api_subnet Storage: subnet: storage_subnet StorageMgmt: subnet: storage_mgmt_subnet Tenant: subnet: tenant_subnet # For systems with both IPv4 and IPv6, you may specify a gateway network for # each, such as ['ControlPlane', 'External'] default_route_networks: ['External'] HostnameFormatDefault: '%stackname%-controller-%index%' # Deprecated & backward-compatible values (FIXME: Make parameters consistent) # Set uses_deprecated_params to True if any deprecated params are used. uses_deprecated_params: True deprecated_param_extraconfig: 'controllerExtraConfig' deprecated_param_flavor: 'OvercloudControlFlavor' deprecated_param_image: 'controllerImage' deprecated_nic_config_name: 'controller.yaml' update_serial: 1 ServicesDefault: - OS::TripleO::Services::Aide - OS::TripleO::Services::AodhApi - OS::TripleO::Services::AodhEvaluator - OS::TripleO::Services::AodhListener - OS::TripleO::Services::AodhNotifier - OS::TripleO::Services::AuditD - OS::TripleO::Services::BarbicanApi - OS::TripleO::Services::BarbicanBackendSimpleCrypto - OS::TripleO::Services::BarbicanBackendDogtag - OS::TripleO::Services::BarbicanBackendKmip - OS::TripleO::Services::BarbicanBackendPkcs11Crypto - OS::TripleO::Services::BootParams - OS::TripleO::Services::CACerts - OS::TripleO::Services::CeilometerAgentCentral - OS::TripleO::Services::CeilometerAgentNotification - OS::TripleO::Services::CephExternal - OS::TripleO::Services::CephGrafana - OS::TripleO::Services::CephMds - OS::TripleO::Services::CephMgr - OS::TripleO::Services::CephMon - OS::TripleO::Services::CephRbdMirror - OS::TripleO::Services::CephRgw - OS::TripleO::Services::CertmongerUser - OS::TripleO::Services::CinderApi - OS::TripleO::Services::CinderBackendDellPs - OS::TripleO::Services::CinderBackendDellSc - OS::TripleO::Services::CinderBackendDellEMCPowermax - OS::TripleO::Services::CinderBackendDellEMCPowerStore - OS::TripleO::Services::CinderBackendDellEMCSc - OS::TripleO::Services::CinderBackendDellEMCUnity - OS::TripleO::Services::CinderBackendDellEMCVMAXISCSI - OS::TripleO::Services::CinderBackendDellEMCVNX - OS::TripleO::Services::CinderBackendDellEMCVxFlexOS - OS::TripleO::Services::CinderBackendDellEMCXtremio - OS::TripleO::Services::CinderBackendDellEMCXTREMIOISCSI - OS::TripleO::Services::CinderBackendNetApp - OS::TripleO::Services::CinderBackendPure - OS::TripleO::Services::CinderBackendScaleIO - OS::TripleO::Services::CinderBackendVRTSHyperScale - OS::TripleO::Services::CinderBackendNVMeOF - OS::TripleO::Services::CinderBackup - OS::TripleO::Services::CinderHPELeftHandISCSI - OS::TripleO::Services::CinderScheduler - OS::TripleO::Services::CinderVolume - OS::TripleO::Services::Clustercheck - OS::TripleO::Services::Collectd - OS::TripleO::Services::ContainerImagePrepare - OS::TripleO::Services::DesignateApi - OS::TripleO::Services::DesignateCentral - OS::TripleO::Services::DesignateProducer - OS::TripleO::Services::DesignateWorker - 
OS::TripleO::Services::DesignateMDNS - OS::TripleO::Services::DesignateSink - OS::TripleO::Services::Docker - OS::TripleO::Services::Ec2Api - OS::TripleO::Services::Etcd - OS::TripleO::Services::ExternalSwiftProxy - OS::TripleO::Services::GlanceApi - OS::TripleO::Services::GnocchiApi - OS::TripleO::Services::GnocchiMetricd - OS::TripleO::Services::GnocchiStatsd - OS::TripleO::Services::HAproxy - OS::TripleO::Services::HeatApi - OS::TripleO::Services::HeatApiCloudwatch - OS::TripleO::Services::HeatApiCfn - OS::TripleO::Services::HeatEngine - OS::TripleO::Services::Horizon - OS::TripleO::Services::IpaClient - OS::TripleO::Services::Ipsec - OS::TripleO::Services::IronicApi - OS::TripleO::Services::IronicConductor - OS::TripleO::Services::IronicInspector - OS::TripleO::Services::IronicPxe - OS::TripleO::Services::IronicNeutronAgent - OS::TripleO::Services::Iscsid - OS::TripleO::Services::Keepalived - OS::TripleO::Services::Kernel - OS::TripleO::Services::Keystone - OS::TripleO::Services::LoginDefs - OS::TripleO::Services::ManilaApi - OS::TripleO::Services::ManilaBackendCephFs - OS::TripleO::Services::ManilaBackendIsilon - OS::TripleO::Services::ManilaBackendNetapp - OS::TripleO::Services::ManilaBackendUnity - OS::TripleO::Services::ManilaBackendVNX - OS::TripleO::Services::ManilaBackendVMAX - OS::TripleO::Services::ManilaScheduler - OS::TripleO::Services::ManilaShare - OS::TripleO::Services::Memcached - OS::TripleO::Services::MetricsQdr - OS::TripleO::Services::MistralApi - OS::TripleO::Services::MistralEngine - OS::TripleO::Services::MistralExecutor - OS::TripleO::Services::MistralEventEngine - OS::TripleO::Services::Multipathd - OS::TripleO::Services::MySQL - OS::TripleO::Services::MySQLClient - OS::TripleO::Services::NeutronApi - OS::TripleO::Services::NeutronBgpVpnApi - OS::TripleO::Services::NeutronSfcApi - OS::TripleO::Services::NeutronCorePlugin - OS::TripleO::Services::NeutronDhcpAgent - OS::TripleO::Services::NeutronL2gwAgent - OS::TripleO::Services::NeutronL2gwApi - OS::TripleO::Services::NeutronL3Agent - OS::TripleO::Services::NeutronLinuxbridgeAgent - OS::TripleO::Services::NeutronMetadataAgent - OS::TripleO::Services::NeutronML2FujitsuCfab - OS::TripleO::Services::NeutronML2FujitsuFossw - OS::TripleO::Services::NeutronOvsAgent - OS::TripleO::Services::NeutronVppAgent - OS::TripleO::Services::NeutronAgentsIBConfig - OS::TripleO::Services::NovaApi - OS::TripleO::Services::NovaConductor - OS::TripleO::Services::NovaIronic - OS::TripleO::Services::NovaMetadata - OS::TripleO::Services::NovaScheduler - OS::TripleO::Services::NovaVncProxy - OS::TripleO::Services::ContainersLogrotateCrond - OS::TripleO::Services::OctaviaApi - OS::TripleO::Services::OctaviaDeploymentConfig - OS::TripleO::Services::OctaviaHealthManager - OS::TripleO::Services::OctaviaHousekeeping - OS::TripleO::Services::OctaviaWorker - OS::TripleO::Services::OpenStackClients - OS::TripleO::Services::OVNDBs - OS::TripleO::Services::OVNController - OS::TripleO::Services::Pacemaker - OS::TripleO::Services::PankoApi - OS::TripleO::Services::PlacementApi - OS::TripleO::Services::OsloMessagingRpc - OS::TripleO::Services::OsloMessagingNotify - OS::TripleO::Services::Podman - OS::TripleO::Services::Rear - OS::TripleO::Services::Redis - OS::TripleO::Services::Rhsm - OS::TripleO::Services::Rsyslog - OS::TripleO::Services::RsyslogSidecar - OS::TripleO::Services::SaharaApi - OS::TripleO::Services::SaharaEngine - OS::TripleO::Services::Securetty - OS::TripleO::Services::Snmp - OS::TripleO::Services::Sshd - 
OS::TripleO::Services::SwiftProxy - OS::TripleO::Services::SwiftDispersion - OS::TripleO::Services::SwiftRingBuilder - OS::TripleO::Services::SwiftStorage - OS::TripleO::Services::Timesync - OS::TripleO::Services::Timezone - OS::TripleO::Services::TripleoFirewall - OS::TripleO::Services::TripleoPackages - OS::TripleO::Services::Tuned - OS::TripleO::Services::Vpp - OS::TripleO::Services::Zaqar ############################################################################### Role: ComputeHCIOvsDpdkSriov # ############################################################################### - name: ComputeHCIOvsDpdkSriov description: | ComputeOvsDpdkSriov Node role hosting Ceph OSD too networks: InternalApi: subnet: internal_api_subnet Tenant: subnet: tenant_subnet Storage: subnet: storage_subnet StorageMgmt: subnet: storage_mgmt_subnet # CephOSD present so serial has to be 1 update_serial: 1 RoleParametersDefault: TunedProfileName: \"cpu-partitioning\" VhostuserSocketGroup: \"hugetlbfs\" NovaLibvirtRxQueueSize: 1024 NovaLibvirtTxQueueSize: 1024 ServicesDefault: - OS::TripleO::Services::Aide - OS::TripleO::Services::AuditD - OS::TripleO::Services::BootParams - OS::TripleO::Services::CACerts - OS::TripleO::Services::CephClient - OS::TripleO::Services::CephExternal - OS::TripleO::Services::CephOSD - OS::TripleO::Services::CertmongerUser - OS::TripleO::Services::Collectd - OS::TripleO::Services::ComputeCeilometerAgent - OS::TripleO::Services::ComputeNeutronCorePlugin - OS::TripleO::Services::ComputeNeutronL3Agent - OS::TripleO::Services::ComputeNeutronMetadataAgent - OS::TripleO::Services::ComputeNeutronOvsDpdk - OS::TripleO::Services::Docker - OS::TripleO::Services::IpaClient - OS::TripleO::Services::Ipsec - OS::TripleO::Services::Iscsid - OS::TripleO::Services::Kernel - OS::TripleO::Services::LoginDefs - OS::TripleO::Services::MetricsQdr - OS::TripleO::Services::Multipathd - OS::TripleO::Services::MySQLClient - OS::TripleO::Services::NeutronBgpVpnBagpipe - OS::TripleO::Services::NeutronSriovAgent - OS::TripleO::Services::NeutronSriovHostConfig - OS::TripleO::Services::NovaAZConfig - OS::TripleO::Services::NovaCompute - OS::TripleO::Services::NovaLibvirt - OS::TripleO::Services::NovaLibvirtGuests - OS::TripleO::Services::NovaMigrationTarget - OS::TripleO::Services::OvsDpdkNetcontrold - OS::TripleO::Services::ContainersLogrotateCrond - OS::TripleO::Services::Podman - OS::TripleO::Services::Rear - OS::TripleO::Services::Rhsm - OS::TripleO::Services::Rsyslog - OS::TripleO::Services::RsyslogSidecar - OS::TripleO::Services::Securetty - OS::TripleO::Services::Snmp - OS::TripleO::Services::Sshd - OS::TripleO::Services::Timesync - OS::TripleO::Services::Timezone - OS::TripleO::Services::TripleoFirewall - OS::TripleO::Services::TripleoPackages - OS::TripleO::Services::OVNController - OS::TripleO::Services::OVNMetadataAgent - OS::TripleO::Services::Ptp",
"resource_registry: # Specify the relative/absolute path to the config files you want to use for override the default. OS::TripleO::ComputeOvsDpdkSriov::Net::SoftwareConfig: nic-configs/computeovsdpdksriov.yaml OS::TripleO::Controller::Net::SoftwareConfig: nic-configs/controller.yaml Customize all these values to match the local environment parameter_defaults: # The tunnel type for the project network (vxlan or gre). Set to '' to disable tunneling. NeutronTunnelTypes: 'vxlan' # The project network type for Neutron (vlan or vxlan). NeutronNetworkType: 'vxlan,vlan' # The OVS logical->physical bridge mappings to use. NeutronBridgeMappings: 'access:br-access,dpdk-mgmt:br-link0' # The Neutron ML2 and OpenVSwitch vlan mapping range to support. NeutronNetworkVLANRanges: 'access:423:423,dpdk-mgmt:134:137,sriov-1:138:139,sriov-2:138:139' # Define the DNS servers (maximum 2) for the overcloud nodes DnsServers: [\"10.46.0.31\",\"10.46.0.32\"] # Nova flavor to use. OvercloudControllerFlavor: controller OvercloudComputeOvsDpdkSriovFlavor: computeovsdpdksriov # Number of nodes to deploy. ControllerCount: 3 ComputeOvsDpdkSriovCount: 2 # NTP server configuration. NtpServer: ['clock.redhat.com'] # MTU global configuration NeutronGlobalPhysnetMtu: 9000 # Configure the classname of the firewall driver to use for implementing security groups. NeutronOVSFirewallDriver: openvswitch SshServerOptions: UseDns: 'no' # Enable log level DEBUG for supported components Debug: True ControllerHostnameFormat: 'controller-%index%' ControllerSchedulerHints: 'capabilities:node': 'controller-%index%' ComputeOvsDpdkSriovHostnameFormat: 'computeovsdpdksriov-%index%' ComputeOvsDpdkSriovSchedulerHints: 'capabilities:node': 'computeovsdpdksriov-%index%' # From Rocky live migration with NumaTopologyFilter disabled by default # https://bugs.launchpad.net/nova/+bug/1289064 NovaEnableNUMALiveMigration: true ########################## # OVS DPDK configuration # ########################## # In the future, most parameters will be derived by mistral plan. # Currently mistral derive parameters is blocked: # https://bugzilla.redhat.com/show_bug.cgi?id=1777841 # https://bugzilla.redhat.com/show_bug.cgi?id=1777844 ComputeOvsDpdkSriovParameters: KernelArgs: \"default_hugepagesz=1GB hugepagesz=1G hugepages=64 iommu=pt intel_iommu=on isolcpus=2-19,22-39\" TunedProfileName: \"cpu-partitioning\" IsolCpusList: \"2-19,22-39\" NovaComputeCpuDedicatedSet: ['2-10,12-17,19,22-30,32-37,39'] NovaReservedHostMemory: 4096 OvsDpdkSocketMemory: \"1024,3072\" OvsDpdkMemoryChannels: \"4\" OvsPmdCoreList: \"11,18,31,38\" NovaComputeCpuSharedSet: [0,20,1,21] # When using NIC partitioning on SR-IOV enabled setups, 'derive_pci_passthrough_whitelist.py' # script will be executed which will override NovaPCIPassthrough. 
# No option to disable as of now - https://bugzilla.redhat.com/show_bug.cgi?id=1774403 NovaPCIPassthrough: - address: \"0000:19:0e.3\" trusted: \"true\" physical_network: \"sriov1\" - address: \"0000:19:0e.0\" trusted: \"true\" physical_network: \"sriov-2\" # NUMA aware vswitch NeutronPhysnetNUMANodesMapping: {dpdk-mgmt: [0]} NeutronTunnelNUMANodes: [0] NeutronPhysicalDevMappings: - sriov1:enp6s0f2 - sriov2:enp6s0f3 ############################ # Scheduler configuration # ############################ NovaSchedulerDefaultFilters: - \"AvailabilityZoneFilter\" - \"ComputeFilter\" - \"ComputeCapabilitiesFilter\" - \"ImagePropertiesFilter\" - \"ServerGroupAntiAffinityFilter\" - \"ServerGroupAffinityFilter\" - \"PciPassthroughFilter\" - \"NUMATopologyFilter\" - \"AggregateInstanceExtraSpecsFilter\"",
"heat_template_version: rocky description: > Software Config to drive os-net-config to configure VLANs for the controller role. parameters: ControlPlaneIp: default: '' description: IP address/subnet on the ctlplane network type: string ExternalIpSubnet: default: '' description: IP address/subnet on the external network type: string ExternalInterfaceRoutes: default: [] description: > Routes for the external network traffic. JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}] Unless the default is changed, the parameter is automatically resolved from the subnet host_routes attribute. type: json InternalApiIpSubnet: default: '' description: IP address/subnet on the internal_api network type: string InternalApiInterfaceRoutes: default: [] description: > Routes for the internal_api network traffic. JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}] Unless the default is changed, the parameter is automatically resolved from the subnet host_routes attribute. type: json StorageIpSubnet: default: '' description: IP address/subnet on the storage network type: string StorageInterfaceRoutes: default: [] description: > Routes for the storage network traffic. JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}] Unless the default is changed, the parameter is automatically resolved from the subnet host_routes attribute. type: json StorageMgmtIpSubnet: default: '' description: IP address/subnet on the storage_mgmt network type: string StorageMgmtInterfaceRoutes: default: [] description: > Routes for the storage_mgmt network traffic. JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}] Unless the default is changed, the parameter is automatically resolved from the subnet host_routes attribute. type: json TenantIpSubnet: default: '' description: IP address/subnet on the tenant network type: string TenantInterfaceRoutes: default: [] description: > Routes for the tenant network traffic. JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}] Unless the default is changed, the parameter is automatically resolved from the subnet host_routes attribute. type: json ManagementIpSubnet: # Only populated when including environments/network-management.yaml default: '' description: IP address/subnet on the management network type: string ManagementInterfaceRoutes: default: [] description: > Routes for the management network traffic. JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}] Unless the default is changed, the parameter is automatically resolved from the subnet host_routes attribute. type: json BondInterfaceOvsOptions: default: bond_mode=active-backup description: >- The ovs_options string for the bond interface. Set things like lacp=active and/or bond_mode=balance-slb using this option. type: string ExternalNetworkVlanID: default: 10 description: Vlan ID for the external network traffic. type: number InternalApiNetworkVlanID: default: 20 description: Vlan ID for the internal_api network traffic. type: number StorageNetworkVlanID: default: 30 description: Vlan ID for the storage network traffic. type: number StorageMgmtNetworkVlanID: default: 40 description: Vlan ID for the storage_mgmt network traffic. type: number TenantNetworkVlanID: default: 50 description: Vlan ID for the tenant network traffic. type: number ManagementNetworkVlanID: default: 60 description: Vlan ID for the management network traffic. 
type: number ExternalInterfaceDefaultRoute: default: 10.0.0.1 description: default route for the external network type: string ControlPlaneSubnetCidr: default: '' description: > The subnet CIDR of the control plane network. (The parameter is automatically resolved from the ctlplane subnet's cidr attribute.) type: string ControlPlaneDefaultRoute: default: '' description: >- The default route of the control plane network. (The parameter is automatically resolved from the ctlplane subnet's gateway_ip attribute.) type: string DnsServers: # Override this via parameter_defaults default: [] description: > DNS servers to use for the Overcloud (2 max for some implementations). If not set the nameservers configured in the ctlplane subnet's dns_nameservers attribute will be used. type: comma_delimited_list EC2MetadataIp: default: '' description: >- The IP address of the EC2 metadata server. (The parameter is automatically resolved from the ctlplane subnet's host_routes attribute.) type: string ControlPlaneStaticRoutes: default: [] description: > Routes for the ctlplane network traffic. JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}] Unless the default is changed, the parameter is automatically resolved from the subnet host_routes attribute. type: json ControlPlaneMtu: default: 1500 description: >- The maximum transmission unit (MTU) size(in bytes) that is guaranteed to pass through the data path of the segments in the network. (The parameter is automatically resolved from the ctlplane network's mtu attribute.) type: number StorageMtu: default: 1500 description: >- The maximum transmission unit (MTU) size(in bytes) that is guaranteed to pass through the data path of the segments in the Storage network. type: number StorageMgmtMtu: default: 1500 description: >- The maximum transmission unit (MTU) size(in bytes) that is guaranteed to pass through the data path of the segments in the StorageMgmt network. type: number InternalApiMtu: default: 1500 description: >- The maximum transmission unit (MTU) size(in bytes) that is guaranteed to pass through the data path of the segments in the InternalApi network. type: number TenantMtu: default: 1500 description: >- The maximum transmission unit (MTU) size(in bytes) that is guaranteed to pass through the data path of the segments in the Tenant network. type: number ExternalMtu: default: 1500 description: >- The maximum transmission unit (MTU) size(in bytes) that is guaranteed to pass through the data path of the segments in the External network. 
type: number resources: OsNetConfigImpl: type: OS::Heat::SoftwareConfig properties: group: script config: str_replace: template: get_file: /usr/share/openstack-tripleo-heat-templates/network/scripts/run-os-net-config.sh params: USDnetwork_config: network_config: - type: interface name: nic1 use_dhcp: false addresses: - ip_netmask: list_join: - / - - get_param: ControlPlaneIp - get_param: ControlPlaneSubnetCidr routes: - ip_netmask: 169.254.169.254/32 next_hop: get_param: EC2MetadataIp - type: ovs_bridge name: br-link0 use_dhcp: false mtu: 9000 members: - type: interface name: nic2 mtu: 9000 - type: vlan vlan_id: get_param: TenantNetworkVlanID mtu: 9000 addresses: - ip_netmask: get_param: TenantIpSubnet - type: vlan vlan_id: get_param: InternalApiNetworkVlanID addresses: - ip_netmask: get_param: InternalApiIpSubnet - type: vlan vlan_id: get_param: StorageNetworkVlanID addresses: - ip_netmask: get_param: StorageIpSubnet - type: vlan vlan_id: get_param: StorageMgmtNetworkVlanID addresses: - ip_netmask: get_param: StorageMgmtIpSubnet - type: ovs_bridge name: br-access use_dhcp: false mtu: 9000 members: - type: interface name: nic3 mtu: 9000 - type: vlan vlan_id: get_param: ExternalNetworkVlanID mtu: 9000 addresses: - ip_netmask: get_param: ExternalIpSubnet routes: - default: true next_hop: get_param: ExternalInterfaceDefaultRoute outputs: OS::stack_id: description: The OsNetConfigImpl resource. value: get_resource: OsNetConfigImpl",
"heat_template_version: rocky description: > Software Config to drive os-net-config to configure VLANs for the compute role. parameters: ControlPlaneIp: default: '' description: IP address/subnet on the ctlplane network type: string ExternalIpSubnet: default: '' description: IP address/subnet on the external network type: string ExternalInterfaceRoutes: default: [] description: > Routes for the external network traffic. JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}] Unless the default is changed, the parameter is automatically resolved from the subnet host_routes attribute. type: json InternalApiIpSubnet: default: '' description: IP address/subnet on the internal_api network type: string InternalApiInterfaceRoutes: default: [] description: > Routes for the internal_api network traffic. JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}] Unless the default is changed, the parameter is automatically resolved from the subnet host_routes attribute. type: json StorageIpSubnet: default: '' description: IP address/subnet on the storage network type: string StorageInterfaceRoutes: default: [] description: > Routes for the storage network traffic. JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}] Unless the default is changed, the parameter is automatically resolved from the subnet host_routes attribute. type: json StorageMgmtIpSubnet: default: '' description: IP address/subnet on the storage_mgmt network type: string StorageMgmtInterfaceRoutes: default: [] description: > Routes for the storage_mgmt network traffic. JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}] Unless the default is changed, the parameter is automatically resolved from the subnet host_routes attribute. type: json TenantIpSubnet: default: '' description: IP address/subnet on the tenant network type: string TenantInterfaceRoutes: default: [] description: > Routes for the tenant network traffic. JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}] Unless the default is changed, the parameter is automatically resolved from the subnet host_routes attribute. type: json ManagementIpSubnet: # Only populated when including environments/network-management.yaml default: '' description: IP address/subnet on the management network type: string ManagementInterfaceRoutes: default: [] description: > Routes for the management network traffic. JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}] Unless the default is changed, the parameter is automatically resolved from the subnet host_routes attribute. type: json BondInterfaceOvsOptions: default: 'bond_mode=active-backup' description: The ovs_options string for the bond interface. Set things like lacp=active and/or bond_mode=balance-slb using this option. type: string ExternalNetworkVlanID: default: 10 description: Vlan ID for the external network traffic. type: number InternalApiNetworkVlanID: default: 20 description: Vlan ID for the internal_api network traffic. type: number StorageNetworkVlanID: default: 30 description: Vlan ID for the storage network traffic. type: number StorageMgmtNetworkVlanID: default: 40 description: Vlan ID for the storage_mgmt network traffic. type: number TenantNetworkVlanID: default: 50 description: Vlan ID for the tenant network traffic. type: number ManagementNetworkVlanID: default: 60 description: Vlan ID for the management network traffic. 
type: number ExternalInterfaceDefaultRoute: default: '10.0.0.1' description: default route for the external network type: string ControlPlaneSubnetCidr: default: '' description: > The subnet CIDR of the control plane network. (The parameter is automatically resolved from the ctlplane subnet's cidr attribute.) type: string ControlPlaneDefaultRoute: default: '' description: The default route of the control plane network. (The parameter is automatically resolved from the ctlplane subnet's gateway_ip attribute.) type: string DnsServers: # Override this via parameter_defaults default: [] description: > DNS servers to use for the Overcloud (2 max for some implementations). If not set the nameservers configured in the ctlplane subnet's dns_nameservers attribute will be used. type: comma_delimited_list EC2MetadataIp: default: '' description: The IP address of the EC2 metadata server. (The parameter is automatically resolved from the ctlplane subnet's host_routes attribute.) type: string ControlPlaneStaticRoutes: default: [] description: > Routes for the ctlplane network traffic. JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}] Unless the default is changed, the parameter is automatically resolved from the subnet host_routes attribute. type: json ControlPlaneMtu: default: 1500 description: >- The maximum transmission unit (MTU) size(in bytes) that is guaranteed to pass through the data path of the segments in the network. (The parameter is automatically resolved from the ctlplane network's mtu attribute.) type: number StorageMtu: default: 1500 description: >- The maximum transmission unit (MTU) size(in bytes) that is guaranteed to pass through the data path of the segments in the Storage network. type: number InternalApiMtu: default: 1500 description: >- The maximum transmission unit (MTU) size(in bytes) that is guaranteed to pass through the data path of the segments in the InternalApi network. type: number TenantMtu: default: 1500 description: >- The maximum transmission unit (MTU) size(in bytes) that is guaranteed to pass through the data path of the segments in the Tenant network. 
type: number resources: OsNetConfigImpl: type: OS::Heat::SoftwareConfig properties: group: script config: str_replace: template: get_file: /usr/share/openstack-tripleo-heat-templates/network/scripts/run-os-net-config.sh params: USDnetwork_config: network_config: - type: interface name: nic1 use_dhcp: false defroute: false - type: interface name: nic2 use_dhcp: false addresses: - ip_netmask: list_join: - / - - get_param: ControlPlaneIp - get_param: ControlPlaneSubnetCidr routes: - ip_netmask: 169.254.169.254/32 next_hop: get_param: EC2MetadataIp - default: true next_hop: get_param: ControlPlaneDefaultRoute - type: linux_bond name: bond_api bonding_options: mode=active-backup use_dhcp: false dns_servers: get_param: DnsServers members: - type: interface name: nic3 primary: true - type: interface name: nic4 - type: vlan vlan_id: get_param: InternalApiNetworkVlanID device: bond_api addresses: - ip_netmask: get_param: InternalApiIpSubnet - type: vlan vlan_id: get_param: StorageNetworkVlanID device: bond_api addresses: - ip_netmask: get_param: StorageIpSubnet - type: ovs_user_bridge name: br-link0 use_dhcp: false ovs_extra: - str_replace: template: set port br-link0 tag=_VLAN_TAG_ params: _VLAN_TAG_: get_param: TenantNetworkVlanID addresses: - ip_netmask: get_param: TenantIpSubnet members: - type: ovs_dpdk_bond name: dpdkbond0 mtu: 9000 rx_queue: 2 members: - type: ovs_dpdk_port name: dpdk0 members: - type: interface name: nic7 - type: ovs_dpdk_port name: dpdk1 members: - type: interface name: nic8 - type: sriov_pf name: nic9 mtu: 9000 numvfs: 10 use_dhcp: false defroute: false nm_controlled: true hotplug: true promisc: false - type: sriov_pf name: nic10 mtu: 9000 numvfs: 10 use_dhcp: false defroute: false nm_controlled: true hotplug: true promisc: false outputs: OS::stack_id: description: The OsNetConfigImpl resource. value: get_resource: OsNetConfigImpl",
"#!/bin/bash THT_PATH='/home/stack/ospd-16-vxlan-dpdk-sriov-ctlplane-dataplane-bonding-hybrid' openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/network-environment.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs-dpdk.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-sriov.yaml -e /home/stack/containers-prepare-parameter.yaml -r USDTHT_PATH/roles_data.yaml -e USDTHT_PATH/network-environment-overrides.yaml -n USDTHT_PATH/network-data.yaml"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/network_functions_virtualization_planning_and_configuration_guide/sample-ovsdpdk-sriov-files_rhosp-nfv |
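After a deployment based on the sample files above completes, a few read-only checks on a ComputeOvsDpdkSriov node can help confirm that the OVS-DPDK bond and the SR-IOV virtual functions came up as configured. This is only a verification sketch; the physical function device name is a placeholder, and the expected values depend on your hardware and the parameters you set.

# Show the other_config keys (PMD core mask, socket memory) that OVS is actually running with.
sudo ovs-vsctl get Open_vSwitch . other_config

# List the DPDK bond members attached to br-link0.
sudo ovs-vsctl list-ports br-link0

# Confirm the requested number of SR-IOV virtual functions exists on each physical function.
cat /sys/class/net/<pf_device>/device/sriov_numvfs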
Chapter 12. SelfSubjectRulesReview [authorization.k8s.io/v1] | Chapter 12. SelfSubjectRulesReview [authorization.k8s.io/v1] Description SelfSubjectRulesReview enumerates the set of actions the current user can perform within a namespace. The returned list of actions may be incomplete depending on the server's authorization mode, and any errors experienced during the evaluation. SelfSubjectRulesReview should be used by UIs to show/hide actions, or to quickly let an end user reason about their permissions. It should NOT Be used by external systems to drive authorization decisions as this raises confused deputy, cache lifetime/revocation, and correctness concerns. SubjectAccessReview, and LocalAccessReview are the correct way to defer authorization decisions to the API server. Type object Required spec 12.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object SelfSubjectRulesReviewSpec defines the specification for SelfSubjectRulesReview. status object SubjectRulesReviewStatus contains the result of a rules check. This check can be incomplete depending on the set of authorizers the server is configured with and any errors experienced during evaluation. Because authorization rules are additive, if a rule appears in a list it's safe to assume the subject has that permission, even if that list is incomplete. 12.1.1. .spec Description SelfSubjectRulesReviewSpec defines the specification for SelfSubjectRulesReview. Type object Property Type Description namespace string Namespace to evaluate rules for. Required. 12.1.2. .status Description SubjectRulesReviewStatus contains the result of a rules check. This check can be incomplete depending on the set of authorizers the server is configured with and any errors experienced during evaluation. Because authorization rules are additive, if a rule appears in a list it's safe to assume the subject has that permission, even if that list is incomplete. Type object Required resourceRules nonResourceRules incomplete Property Type Description evaluationError string EvaluationError can appear in combination with Rules. It indicates an error occurred during rule evaluation, such as an authorizer that doesn't support rule evaluation, and that ResourceRules and/or NonResourceRules may be incomplete. incomplete boolean Incomplete is true when the rules returned by this call are incomplete. This is most commonly encountered when an authorizer, such as an external authorizer, doesn't support rules evaluation. nonResourceRules array NonResourceRules is the list of actions the subject is allowed to perform on non-resources. The list ordering isn't significant, may contain duplicates, and possibly be incomplete. 
nonResourceRules[] object NonResourceRule holds information that describes a rule for the non-resource resourceRules array ResourceRules is the list of actions the subject is allowed to perform on resources. The list ordering isn't significant, may contain duplicates, and possibly be incomplete. resourceRules[] object ResourceRule is the list of actions the subject is allowed to perform on resources. The list ordering isn't significant, may contain duplicates, and possibly be incomplete. 12.1.3. .status.nonResourceRules Description NonResourceRules is the list of actions the subject is allowed to perform on non-resources. The list ordering isn't significant, may contain duplicates, and possibly be incomplete. Type array 12.1.4. .status.nonResourceRules[] Description NonResourceRule holds information that describes a rule for the non-resource Type object Required verbs Property Type Description nonResourceURLs array (string) NonResourceURLs is a set of partial urls that a user should have access to. s are allowed, but only as the full, final step in the path. " " means all. verbs array (string) Verb is a list of kubernetes non-resource API verbs, like: get, post, put, delete, patch, head, options. "*" means all. 12.1.5. .status.resourceRules Description ResourceRules is the list of actions the subject is allowed to perform on resources. The list ordering isn't significant, may contain duplicates, and possibly be incomplete. Type array 12.1.6. .status.resourceRules[] Description ResourceRule is the list of actions the subject is allowed to perform on resources. The list ordering isn't significant, may contain duplicates, and possibly be incomplete. Type object Required verbs Property Type Description apiGroups array (string) APIGroups is the name of the APIGroup that contains the resources. If multiple API groups are specified, any action requested against one of the enumerated resources in any API group will be allowed. "*" means all. resourceNames array (string) ResourceNames is an optional white list of names that the rule applies to. An empty set means that everything is allowed. "*" means all. resources array (string) Resources is a list of resources this rule applies to. " " means all in the specified apiGroups. " /foo" represents the subresource 'foo' for all resources in the specified apiGroups. verbs array (string) Verb is a list of kubernetes resource API verbs, like: get, list, watch, create, update, delete, proxy. "*" means all. 12.2. API endpoints The following API endpoints are available: /apis/authorization.k8s.io/v1/selfsubjectrulesreviews POST : create a SelfSubjectRulesReview 12.2.1. /apis/authorization.k8s.io/v1/selfsubjectrulesreviews Table 12.1. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. pretty string If 'true', then the output is pretty printed. HTTP method POST Description create a SelfSubjectRulesReview Table 12.2. Body parameters Parameter Type Description body SelfSubjectRulesReview schema Table 12.3. HTTP responses HTTP code Response body 200 - OK SelfSubjectRulesReview schema 201 - Created SelfSubjectRulesReview schema 202 - Accepted SelfSubjectRulesReview schema 401 - Unauthorized Empty
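Because this resource is not persisted, the usual way to exercise the endpoint is to POST a spec and read back the returned status. The following is a minimal sketch using the oc client; the namespace my-project is a placeholder, and oc auth can-i --list is shown only as a convenience wrapper around the same rules review API.

# POST a SelfSubjectRulesReview and print the returned object, including status.
oc create -f - -o yaml <<'EOF'
apiVersion: authorization.k8s.io/v1
kind: SelfSubjectRulesReview
spec:
  namespace: my-project
EOF

# Convenience command that queries the same API for the current user.
oc auth can-i --list --namespace my-project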
Chapter 87. user | Chapter 87. user This chapter describes the commands under the user command. 87.1. user create Create new user Usage: Table 87.1. Positional arguments Value Summary <name> New user name Table 87.2. Command arguments Value Summary -h, --help Show this help message and exit --domain <domain> Default domain (name or id) --project <project> Default project (name or id) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --password <password> Set user password --password-prompt Prompt interactively for password --email <email-address> Set user email address --description <description> User description --ignore-lockout-failure-attempts Opt into ignoring the number of times a user has authenticated and locking out the user as a result --no-ignore-lockout-failure-attempts Opt out of ignoring the number of times a user has authenticated and locking out the user as a result --ignore-password-expiry Opt into allowing user to continue using passwords that may be expired --no-ignore-password-expiry Opt out of allowing user to continue using passwords that may be expired --ignore-change-password-upon-first-use Control if a user should be forced to change their password immediately after they log into keystone for the first time. Opt into ignoring the user to change their password during first time login in keystone --no-ignore-change-password-upon-first-use Control if a user should be forced to change their password immediately after they log into keystone for the first time. Opt out of ignoring the user to change their password during first time login in keystone --enable-lock-password Disables the ability for a user to change its password through self-service APIs --disable-lock-password Enables the ability for a user to change its password through self-service APIs --enable-multi-factor-auth Enables the mfa (multi factor auth) --disable-multi-factor-auth Disables the mfa (multi factor auth) --multi-factor-auth-rule <rule> Set multi-factor auth rules. for example, to set a rule requiring the "password" and "totp" auth methods to be provided, use: "--multi-factor-auth-rule password,totp". May be provided multiple times to set different rule combinations. --enable Enable user (default) --disable Disable user --or-show Return existing user Table 87.3. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 87.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 87.5. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 87.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 87.2. user delete Delete user(s) Usage: Table 87.7. Positional arguments Value Summary <user> User(s) to delete (name or id) Table 87.8. Command arguments Value Summary -h, --help Show this help message and exit --domain <domain> Domain owning <user> (name or id) 87.3. 
user list List users Usage: Table 87.9. Command arguments Value Summary -h, --help Show this help message and exit --domain <domain> Filter users by <domain> (name or id) --group <group> Filter users by <group> membership (name or id) --project <project> Filter users by <project> (name or id) --long List additional fields in output Table 87.10. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 87.11. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 87.12. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 87.13. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 87.4. user password set Change current user password Usage: Table 87.14. Command arguments Value Summary -h, --help Show this help message and exit --password <new-password> New user password --original-password <original-password> Original user password 87.5. user set Set user properties Usage: Table 87.15. Positional arguments Value Summary <user> User to modify (name or id) Table 87.16. Command arguments Value Summary -h, --help Show this help message and exit --name <name> Set user name --domain <domain> Domain the user belongs to (name or id). this can be used in case collisions between user names exist. --project <project> Set default project (name or id) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --password <password> Set user password --password-prompt Prompt interactively for password --email <email-address> Set user email address --description <description> Set user description --ignore-lockout-failure-attempts Opt into ignoring the number of times a user has authenticated and locking out the user as a result --no-ignore-lockout-failure-attempts Opt out of ignoring the number of times a user has authenticated and locking out the user as a result --ignore-password-expiry Opt into allowing user to continue using passwords that may be expired --no-ignore-password-expiry Opt out of allowing user to continue using passwords that may be expired --ignore-change-password-upon-first-use Control if a user should be forced to change their password immediately after they log into keystone for the first time. Opt into ignoring the user to change their password during first time login in keystone --no-ignore-change-password-upon-first-use Control if a user should be forced to change their password immediately after they log into keystone for the first time. 
Opt out of ignoring the user to change their password during first time login in keystone --enable-lock-password Disables the ability for a user to change its password through self-service APIs --disable-lock-password Enables the ability for a user to change its password through self-service APIs --enable-multi-factor-auth Enables the mfa (multi factor auth) --disable-multi-factor-auth Disables the mfa (multi factor auth) --multi-factor-auth-rule <rule> Set multi-factor auth rules. for example, to set a rule requiring the "password" and "totp" auth methods to be provided, use: "--multi-factor-auth-rule password,totp". May be provided multiple times to set different rule combinations. --enable Enable user (default) --disable Disable user 87.6. user show Display user details Usage: Table 87.17. Positional arguments Value Summary <user> User to display (name or id) Table 87.18. Command arguments Value Summary -h, --help Show this help message and exit --domain <domain> Domain owning <user> (name or id) Table 87.19. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 87.20. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 87.21. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 87.22. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack user create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--domain <domain>] [--project <project>] [--project-domain <project-domain>] [--password <password>] [--password-prompt] [--email <email-address>] [--description <description>] [--ignore-lockout-failure-attempts] [--no-ignore-lockout-failure-attempts] [--ignore-password-expiry] [--no-ignore-password-expiry] [--ignore-change-password-upon-first-use] [--no-ignore-change-password-upon-first-use] [--enable-lock-password] [--disable-lock-password] [--enable-multi-factor-auth] [--disable-multi-factor-auth] [--multi-factor-auth-rule <rule>] [--enable | --disable] [--or-show] <name>",
"openstack user delete [-h] [--domain <domain>] <user> [<user> ...]",
"openstack user list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--domain <domain>] [--group <group> | --project <project>] [--long]",
"openstack user password set [-h] [--password <new-password>] [--original-password <original-password>]",
"openstack user set [-h] [--name <name>] [--domain <domain>] [--project <project>] [--project-domain <project-domain>] [--password <password>] [--password-prompt] [--email <email-address>] [--description <description>] [--ignore-lockout-failure-attempts] [--no-ignore-lockout-failure-attempts] [--ignore-password-expiry] [--no-ignore-password-expiry] [--ignore-change-password-upon-first-use] [--no-ignore-change-password-upon-first-use] [--enable-lock-password] [--disable-lock-password] [--enable-multi-factor-auth] [--disable-multi-factor-auth] [--multi-factor-auth-rule <rule>] [--enable | --disable] <user>",
"openstack user show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--domain <domain>] <user>"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/command_line_interface_reference/user |
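As a usage sketch, the commands above can be combined as follows; the user name, project, domain, and email address are example values, and only options listed in this chapter are used.

# Create a user in the default domain, attached to an example project, prompting for the password.
openstack user create --domain default --project demo \
  --email alice@example.com --description "Example account" \
  --password-prompt --enable alice

# Disable the account later without deleting it.
openstack user set --disable alice

# Review the result.
openstack user show --domain default alice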
Creating a custom LLM using RHEL AI | Creating a custom LLM using RHEL AI Red Hat Enterprise Linux AI 1.2 Creating files for customizing LLMs and running the end-to-end workflow Red Hat RHEL AI Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.2/html/creating_a_custom_llm_using_rhel_ai/index |
3.6. Resource Actions | 3.6. Resource Actions RGManager expects the following return actions to be implemented in resource agents: start - start the resource stop - stop the resource status - check the status of the resource metadata - report the OCF RA XML metadata 3.6.1. Return Values OCF has a wide range of return codes for the monitor operation, but since RGManager calls status, it relies almost exclusively on SysV-style return codes. 0 - success stop after stop or stop when not running must return success start after start or start when running must return success nonzero - failure if the stop operation ever returns a nonzero value, the service enters the failed state and the service must be recovered manually. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/high_availability_add-on_overview/s1-rgmanager-resource |
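In practice these actions are usually implemented as a shell script that dispatches on its first argument and reports SysV-style exit codes. The following is a minimal illustrative sketch only, not a supported agent; the myapp service, its command-line options, and the metadata file path are placeholders.

#!/bin/bash
# Minimal sketch of a SysV-style resource agent as described above (placeholder service "myapp").

is_running() {
    pidof myapp >/dev/null 2>&1
}

case "$1" in
    start)
        # start when already running must still return success
        is_running || /usr/sbin/myapp --daemon || exit 1
        exit 0
        ;;
    stop)
        # a nonzero return here leaves the service in the failed state (manual recovery needed)
        if is_running; then
            /usr/sbin/myapp --shutdown || exit 1
        fi
        # stop when not running must return success
        exit 0
        ;;
    status)
        # RGManager relies on a simple zero/nonzero answer here
        is_running
        exit $?
        ;;
    metadata|meta-data)
        # report the OCF RA XML metadata (placeholder path)
        cat /usr/share/cluster/myapp-metadata.xml
        exit 0
        ;;
    *)
        exit 0
        ;;
esac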
Chapter 9. Managing Capacity With Instances | Chapter 9. Managing Capacity With Instances Scaling your automation mesh is available on OpenShift deployments of Red Hat Ansible Automation Platform and is possible through adding or removing nodes from your cluster dynamically, using the Instances resource of the automation controller UI, without running the installation script. Instances serve as nodes in your mesh topology. Automation mesh enables you to extend the footprint of your automation. The location where you launch a job can be different from the location where the ansible-playbook runs. To manage instances from the automation controller UI, you must have System Administrator or System Auditor permissions. In general, the more processor cores (CPU) and memory (RAM) a node has, the more jobs that can be scheduled to run on that node at once. For more information, see Automation controller capacity determination and job impact . 9.1. Prerequisites The automation mesh is dependent on hop and execution nodes running on Red Hat Enterprise Linux (RHEL). Your Red Hat Ansible Automation Platform subscription grants you ten Red Hat Enterprise Linux licenses that can be used for running components of Ansible Automation Platform. For additional information about Red Hat Enterprise Linux subscriptions, see Registering the system and managing subscriptions in the Red Hat Enterprise Linux documentation. The following steps prepare the RHEL instances for deployment of the automation mesh. You require a Red Hat Enterprise Linux operating system. Each node in the mesh requires a static IP address, or a resolvable DNS hostname that automation controller can access. Ensure that you have the minimum requirements for the RHEL virtual machine before proceeding. For more information, see the Red Hat Ansible Automation Platform system requirements . Deploy the RHEL instances within the remote networks where communication is required. For information about creating virtual machines, see Creating Virtual Machines in the Red Hat Enterprise Linux documentation. Remember to scale the capacity of your virtual machines sufficiently so that your proposed tasks can run on them. RHEL ISOs can be obtained from access.redhat.com. RHEL cloud images can be built using Image Builder from console.redhat.com. 9.2. Pulling the secret If you are using the default execution environment (provided with automation controller) to run on remote execution nodes, you must add a pull secret in the automation controller that contains the credential for pulling the execution environment image. To do this, create a pull secret on the automation controller namespace and configure the ee_pull_credentials_secret parameter in the Operator as follows: Procedure Create a secret: oc create secret generic ee-pull-secret \ --from-literal=username=<username> \ --from-literal=password=<password> \ --from-literal=url=registry.redhat.io oc edit automationcontrollers <instance name> Add ee_pull_credentials_secret ee-pull-secret to the specification: spec.ee_pull_credentials_secret=ee-pull-secret To manage instances from the automation controller UI, you must have System Administrator or System Auditor permissions. 9.3. Setting up Virtual Machines for use in an automation mesh Procedure SSH into each of the RHEL instances and perform the following steps. Depending on your network access and controls, SSH proxies or other access models might be required. 
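If you prefer not to edit the resource interactively, the pull secret from the preceding section can also be wired to the custom resource with a patch. This is a sketch only; example is a placeholder instance name, and the commands assume you are working in the namespace where the automation controller is deployed.

# Create the pull secret for the default execution environment image.
oc create secret generic ee-pull-secret \
  --from-literal=username=<username> \
  --from-literal=password=<password> \
  --from-literal=url=registry.redhat.io

# Equivalent to setting spec.ee_pull_credentials_secret through "oc edit".
oc patch automationcontrollers example --type merge \
  -p '{"spec": {"ee_pull_credentials_secret": "ee-pull-secret"}}'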
Use the following command: ssh [username]@[host_ip_address] For example, for an Ansible Automation Platform instance running on Amazon Web Services. ssh [email protected] Create or copy an SSH key that can be used to connect from the hop node to the execution node in later steps. This can be a temporary key used just for the automation mesh configuration, or a long-lived key. The SSH user and key are used in later steps. Enable your RHEL subscription with baseos and appstream repositories. Ansible Automation Platform RPM repositories are only available through subscription-manager, not the Red Hat Update Infrastructure (RHUI). If you attempt to use any other Linux footprint, including RHEL with RHUI, this causes errors. sudo subscription-manager register --auto-attach If Simple Content Access is enabled for your account, use: sudo subscription-manager register For more information about Simple Content Access, see Getting started with simple content access . Enable Ansible Automation Platform subscriptions and the proper Red Hat Ansible Automation Platform channel: # subscription-manager repos --enable ansible-automation-platform-2.4-for-rhel-8-x86_64-rpms for RHEL 8 # subscription-manager repos --enable ansible-automation-platform-2.4-for-rhel-9-x86_64-rpms for RHEL 9 Ensure the packages are up to date: sudo dnf upgrade -y Install the ansible-core packages: sudo dnf install -y ansible-core 9.4. Managing instances To expand job capacity, create a standalone execution node that can be added to run alongside a deployment of automation controller. These execution nodes are not part of the automation controller Kubernetes cluster. The control nodes run in the cluster connect and submit work to the execution nodes through Receptor. These execution nodes are registered in automation controller as type execution instances, meaning they are only used to run jobs, not dispatch work or handle web requests as control nodes do. Hop nodes can be added to sit between the control plane of automation controller and standalone execution nodes. These hop nodes are not part of the Kubernetes cluster and are registered in automation controller as an instance of type hop , meaning they only handle inbound and outbound traffic for otherwise unreachable nodes in different or more strict networks. The following procedure demonstrates how to set the node type for the hosts. Procedure From the navigation panel, select Administration Instances . On the Instances list page, click Add . The Create new Instance window opens. An instance requires the following attributes: Host Name : (required) Enter a fully qualified domain name (public DNS) or IP address for your instance. This field is equivalent to hostname for installer-based deployments. Note If the instance uses private DNS that cannot be resolved from the control cluster, DNS lookup routing fails, and the generated SSL certificates is invalid. Use the IP address instead. Optional: Description : Enter a description for the instance. Instance State : This field is auto-populated, indicating that it is being installed, and cannot be modified. Listener Port : This port is used for the receptor to listen on for incoming connections. You can set the port to one that is appropriate for your configuration. This field is equivalent to listener_port in the API. The default value is 27199, though you can set your own port value. Instance Type : Only execution and hop nodes can be created. Operator based deployments do not support Control or Hybrid nodes. 
Options: Enable Instance : Check this box to make it available for jobs to run on an execution node. Check the Managed by Policy box to enable policy to dictate how the instance is assigned. Check the Peers from control nodes box to enable control nodes to peer to this instance automatically. For nodes connected to automation controller, check the Peers from Control nodes box to create a direct communication link between that node and automation controller. For all other nodes: If you are not adding a hop node, make sure Peers from Control is checked. If you are adding a hop node, make sure Peers from Control is not checked. For execution nodes that communicate with hop nodes, do not check this box. To peer an execution node with a hop node, click the icon next to the Peers field. The Select Peers window is displayed. Peer the execution node to the hop node. Click Save . To view a graphical representation of your updated topology, see Topology viewer . Note Execute the following steps from any computer that has SSH access to the newly created instance. Click the download icon next to Install Bundle to download the tar file that includes this new instance and the files necessary to install the created node into the automation mesh. The install bundle contains TLS certificates and keys, a certificate authority, and a proper Receptor configuration file. receptor-ca.crt work-public-key.pem receptor.key install_receptor.yml inventory.yml group_vars/all.yml requirements.yml Extract the downloaded tar.gz Install Bundle from the location where you downloaded it. To ensure that these files are in the correct location on the remote machine, the install bundle includes the install_receptor.yml playbook. The playbook requires the Receptor collection. Run the following command to download the collection: ansible-galaxy collection install -r requirements.yml Before running the ansible-playbook command, edit the following fields in the inventory.yml file: all: hosts: remote-execution: ansible_host: 10.0.0.6 ansible_user: <username> # user provided ansible_ssh_private_key_file: ~/.ssh/<id_rsa> Ensure ansible_host is set to the IP address or DNS name of the node. Set ansible_user to the username running the installation. Set ansible_ssh_private_key_file to contain the filename of the private key used to connect to the instance. The content of the inventory.yml file serves as a template and contains variables for roles that are applied during the installation and configuration of a receptor node in a mesh topology. You can modify some of the other fields, or replace the file in its entirety for advanced scenarios. For more information, see Role Variables . For a node that uses a private DNS, add the following line to inventory.yml : ansible_ssh_common_args: <your ssh ProxyCommand setting> This instructs the install_receptor.yml playbook to use the proxy command to connect through the local DNS node to the private node. When the attributes are configured, click Save . The Details page of the created instance opens. Save the file to continue. The system that is going to run the install bundle to set up the remote node and run ansible-playbook requires the ansible.receptor collection to be installed: ansible-galaxy collection install ansible.receptor or ansible-galaxy install -r requirements.yml Installing the receptor collection dependency from the requirements.yml file consistently retrieves the receptor version specified there. Additionally, it retrieves any other collection dependencies that might be needed in the future. A minimal sketch of such a requirements.yml is shown below.
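The following is a minimal sketch of what a requirements.yml for the receptor collection can look like. Use the file shipped in your install bundle; the commented version pin here is a placeholder assumption, not a value taken from the product.
# Sketch of a requirements.yml that pulls the receptor collection.
# The version pin below is a placeholder; the bundle generated by
# automation controller may pin a specific, tested version.
collections:
  - name: ansible.receptor
    # version: ">=1.0.0"
Running ansible-galaxy collection install -r requirements.yml against a file like this installs ansible.receptor together with any other collections listed in it.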
Install the receptor collection on all nodes where your playbook will run, otherwise an error occurs. If receptor_listener_port is defined, the machine also requires an available open port on which to establish inbound TCP connections, for example, 27199. Run the following command to open port 27199 for receptor communication: sudo firewall-cmd --permanent --zone=public --add-port=27199/tcp Note Some servers might not listen on the receptor port (the default is 27199). Suppose you have a control plane with nodes A, B, C, and D. The RPM installer creates a strongly connected peering between the control plane nodes with a least-privilege approach and opens the TCP listener only on those nodes where it is required. All the receptor connections are bidirectional, so once the connection is created, the receptor can communicate in both directions. The following is an example peering setup for three of the controller nodes: Controller node A --> Controller node B Controller node A --> Controller node C Controller node B --> Controller node C You can force the listener by setting receptor_listener=True . However, a connection Controller B --> A is likely to be rejected because that connection already exists. This means that nothing connects to Controller A, because Controller A creates the connections to the other nodes, and the following command does not return anything on Controller A: [root@controller1 ~]# ss -ntlp | grep 27199 [root@controller1 ~]# Run the following playbook on the machine where you want to update your automation mesh: ansible-playbook -i inventory.yml install_receptor.yml After this playbook runs, your automation mesh is configured. To remove an instance from the mesh, see Removing instances . | [
"create secret generic ee-pull-secret --from-literal=username=<username> --from-literal=password=<password> --from-literal=url=registry.redhat.io edit automationcontrollers <instance name>",
"spec.ee_pull_credentials_secret=ee-pull-secret",
"ssh [username]@[host_ip_address]",
"ssh [email protected]",
"sudo subscription-manager register --auto-attach",
"sudo subscription-manager register",
"subscription-manager repos --enable ansible-automation-platform-2.4-for-rhel-8-x86_64-rpms for RHEL 8 subscription-manager repos --enable ansible-automation-platform-2.4-for-rhel-9-x86_64-rpms for RHEL 9",
"sudo dnf upgrade -y",
"sudo dnf install -y ansible-core",
"receptor-ca.crt work-public-key.pem receptor.key install_receptor.yml inventory.yml group_vars/all.yml requirements.yml",
"ansible-galaxy collection install -r requirements.yml",
"all: hosts: remote-execution: ansible_host: 10.0.0.6 ansible_user: <username> # user provided ansible_ssh_private_key_file: ~/.ssh/<id_rsa>",
"ansible_ssh_common_args: <your ssh ProxyCommand setting>",
"ansible-galaxy collection install ansible.receptor",
"ansible-galaxy install -r requirements.yml",
"sudo firewall-cmd --permanent --zone=public --add-port=27199/tcp",
"ansible-playbook -i inventory.yml install_receptor.yml"
]
| https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/automation_controller_administration_guide/assembly-controller-instances |
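As a supplement to the commands above, the following inventory sketch combines the inventory.yml template and the ansible_ssh_common_args setting described in this chapter for an execution node that is reachable only through a jump host. The proxy host, user name, and key path are hypothetical placeholders, not values from the product documentation.
all:
  hosts:
    remote-execution:
      ansible_host: 10.0.0.6                        # IP address or DNS name of the node
      ansible_user: <username>                      # user running the installation
      ansible_ssh_private_key_file: ~/.ssh/<id_rsa>
      # Assumed jump-host setting; replace with your own ProxyCommand.
      ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q <user>@<jump_host>"'
With an inventory like this, ansible-playbook -i inventory.yml install_receptor.yml connects to the node through the jump host before applying the receptor configuration.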
Chapter 1. Architecture overview | Chapter 1. Architecture overview OpenShift Container Platform is a cloud-based Kubernetes container platform. The foundation of OpenShift Container Platform is based on Kubernetes and therefore shares the same technology. To learn more about OpenShift Container Platform and Kubernetes, see product architecture . 1.1. Glossary of common terms for OpenShift Container Platform architecture This glossary defines common terms that are used in the architecture content. access policies A set of roles that dictate how users, applications, and entities within a cluster interact with one another. An access policy increases cluster security. admission plugins Admission plugins enforce security policies, resource limitations, or configuration requirements. authentication To control access to an OpenShift Container Platform cluster, a cluster administrator can configure user authentication to ensure only approved users access the cluster. To interact with an OpenShift Container Platform cluster, you must authenticate with the OpenShift Container Platform API. You can authenticate by providing an OAuth access token or an X.509 client certificate in your requests to the OpenShift Container Platform API. bootstrap A temporary machine that runs minimal Kubernetes and deploys the OpenShift Container Platform control plane. certificate signing requests (CSRs) A resource requests a denoted signer to sign a certificate. This request might get approved or denied. Cluster Version Operator (CVO) An Operator that checks with the OpenShift Container Platform Update Service to see the valid updates and update paths based on current component versions and information in the graph. compute nodes Nodes that are responsible for executing workloads for cluster users. Compute nodes are also known as worker nodes. configuration drift A situation where the configuration on a node does not match what the machine config specifies. containers Lightweight and executable images that consist of software and all of its dependencies. Because containers virtualize the operating system, you can run containers anywhere, such as data centers, public or private clouds, and local hosts. container orchestration engine Software that automates the deployment, management, scaling, and networking of containers. container workloads Applications that are packaged and deployed in containers. control groups (cgroups) Partitions sets of processes into groups to manage and limit the resources processes consume. control plane A container orchestration layer that exposes the API and interfaces to define, deploy, and manage the life cycle of containers. Control planes are also known as control plane machines. CRI-O A Kubernetes native container runtime implementation that integrates with the operating system to deliver an efficient Kubernetes experience. deployment A Kubernetes resource object that maintains the life cycle of an application. Dockerfile A text file that contains the user commands to perform on a terminal to assemble the image. hosted control planes A OpenShift Container Platform feature that enables hosting a control plane on the OpenShift Container Platform cluster from its data plane and workers. This model performs the following actions: Optimize infrastructure costs required for the control planes. Improve the cluster creation time. Enable hosting the control plane using the Kubernetes native high level primitives. For example, deployments and stateful sets. 
Allow a strong network segmentation between the control plane and workloads. hybrid cloud deployments Deployments that deliver a consistent platform across bare metal, virtual, private, and public cloud environments. This offers speed, agility, and portability. Ignition A utility that RHCOS uses to manipulate disks during initial configuration. It completes common disk tasks, including partitioning disks, formatting partitions, writing files, and configuring users. installer-provisioned infrastructure The installation program deploys and configures the infrastructure that the cluster runs on. kubelet A primary node agent that runs on each node in the cluster to ensure that containers are running in a pod. kubernetes manifest Specifications of a Kubernetes API object in a JSON or YAML format. A configuration file can include deployments, config maps, secrets, daemon sets. Machine Config Daemon (MCD) A daemon that regularly checks the nodes for configuration drift. Machine Config Operator (MCO) An Operator that applies the new configuration to your cluster machines. machine config pools (MCP) A group of machines, such as control plane components or user workloads, that are based on the resources that they handle. metadata Additional information about cluster deployment artifacts. microservices An approach to writing software. Applications can be separated into the smallest components, independent from each other by using microservices. mirror registry A registry that holds the mirror of OpenShift Container Platform images. monolithic applications Applications that are self-contained, built, and packaged as a single piece. namespaces A namespace isolates specific system resources that are visible to all processes. Inside a namespace, only processes that are members of that namespace can see those resources. networking Network information of OpenShift Container Platform cluster. node A worker machine in the OpenShift Container Platform cluster. A node is either a virtual machine (VM) or a physical machine. OpenShift CLI ( oc ) A command line tool to run OpenShift Container Platform commands on the terminal. OpenShift Dedicated A managed RHEL OpenShift Container Platform offering on Amazon Web Services (AWS) and Google Cloud Platform (GCP). OpenShift Dedicated focuses on building and scaling applications. OpenShift Update Service (OSUS) For clusters with internet access, Red Hat Enterprise Linux (RHEL) provides over-the-air updates by using an OpenShift update service as a hosted service located behind public APIs. OpenShift image registry A registry provided by OpenShift Container Platform to manage images. Operator The preferred method of packaging, deploying, and managing a Kubernetes application in an OpenShift Container Platform cluster. An Operator takes human operational knowledge and encodes it into software that is packaged and shared with customers. OperatorHub A platform that contains various OpenShift Container Platform Operators to install. Operator Lifecycle Manager (OLM) OLM helps you to install, update, and manage the lifecycle of Kubernetes native applications. OLM is an open source toolkit designed to manage Operators in an effective, automated, and scalable way. OSTree An upgrade system for Linux-based operating systems that performs atomic upgrades of complete file system trees. OSTree tracks meaningful changes to the file system tree using an addressable object store, and is designed to complement existing package management systems. 
over-the-air (OTA) updates The OpenShift Container Platform Update Service (OSUS) provides over-the-air updates to OpenShift Container Platform, including Red Hat Enterprise Linux CoreOS (RHCOS). pod One or more containers with shared resources, such as volume and IP addresses, running in your OpenShift Container Platform cluster. A pod is the smallest compute unit defined, deployed, and managed. private registry OpenShift Container Platform can use any server implementing the container image registry API as a source of the image which allows the developers to push and pull their private container images. public registry OpenShift Container Platform can use any server implementing the container image registry API as a source of the image which allows the developers to push and pull their public container images. RHEL OpenShift Container Platform Cluster Manager A managed service where you can install, modify, operate, and upgrade your OpenShift Container Platform clusters. RHEL Quay Container Registry A Quay.io container registry that serves most of the container images and Operators to OpenShift Container Platform clusters. replication controllers An asset that indicates how many pod replicas are required to run at a time. role-based access control (RBAC) A key security control to ensure that cluster users and workloads have only access to resources required to execute their roles. route Routes expose a service to allow for network access to pods from users and applications outside the OpenShift Container Platform instance. scaling The increasing or decreasing of resource capacity. service A service exposes a running application on a set of pods. Source-to-Image (S2I) image An image created based on the programming language of the application source code in OpenShift Container Platform to deploy applications. storage OpenShift Container Platform supports many types of storage, both for on-premise and cloud providers. You can manage container storage for persistent and non-persistent data in an OpenShift Container Platform cluster. Telemetry A component to collect information such as size, health, and status of OpenShift Container Platform. template A template describes a set of objects that can be parameterized and processed to produce a list of objects for creation by OpenShift Container Platform. user-provisioned infrastructure You can install OpenShift Container Platform on the infrastructure that you provide. You can use the installation program to generate the assets required to provision the cluster infrastructure, create the cluster infrastructure, and then deploy the cluster to the infrastructure that you provided. web console A user interface (UI) to manage OpenShift Container Platform. worker node Nodes that are responsible for executing workloads for cluster users. Worker nodes are also known as compute nodes. Additional resources For more information on networking, see OpenShift Container Platform networking . For more information on storage, see OpenShift Container Platform storage . For more information on authentication, see OpenShift Container Platform authentication . For more information on Operator Lifecycle Manager (OLM), see OLM . For more information on over-the-air (OTA) updates, see Introduction to OpenShift updates . 1.2. 
About installation and updates As a cluster administrator, you can use the OpenShift Container Platform installation program to install and deploy a cluster by using one of the following methods: Installer-provisioned infrastructure User-provisioned infrastructure 1.3. About the control plane The control plane manages the worker nodes and the pods in your cluster. You can configure nodes with the use of machine config pools (MCPs). MCPs are groups of machines, such as control plane components or user workloads, that are based on the resources that they handle. OpenShift Container Platform assigns different roles to hosts. These roles define the function of a machine in a cluster. The cluster contains definitions for the standard control plane and worker role types. You can use Operators to package, deploy, and manage services on the control plane. Operators are important components in OpenShift Container Platform because they provide the following services: Perform health checks Provide ways to watch applications Manage over-the-air updates Ensure applications stay in the specified state Additional resources Hosted control planes overview 1.4. About containerized applications for developers As a developer, you can use different tools, methods, and formats to develop your containerized application based on your unique requirements, for example: Use various build-tool, base-image, and registry options to build a simple container application. Use supporting components such as OperatorHub and templates to develop your application. Package and deploy your application as an Operator. You can also create a Kubernetes manifest and store it in a Git repository. Kubernetes works on basic units called pods. A pod is a single instance of a running process in your cluster. Pods can contain one or more containers. You can create a service by grouping a set of pods and their access policies. Services provide permanent internal IP addresses and host names for other applications to use as pods are created and destroyed. Kubernetes defines workloads based on the type of your application. 1.5. About Red Hat Enterprise Linux CoreOS (RHCOS) and Ignition As a cluster administrator, you can perform the following Red Hat Enterprise Linux CoreOS (RHCOS) tasks: Learn about the generation of single-purpose container operating system technology . Choose how to configure Red Hat Enterprise Linux CoreOS (RHCOS) Choose how to deploy Red Hat Enterprise Linux CoreOS (RHCOS): Installer-provisioned deployment User-provisioned deployment The OpenShift Container Platform installation program creates the Ignition configuration files that you need to deploy your cluster. Red Hat Enterprise Linux CoreOS (RHCOS) uses Ignition during the initial configuration to perform common disk tasks, such as partitioning, formatting, writing files, and configuring users. During the first boot, Ignition reads its configuration from the installation media or the location that you specify and applies the configuration to the machines. You can learn how Ignition works , the process for a Red Hat Enterprise Linux CoreOS (RHCOS) machine in an OpenShift Container Platform cluster, view Ignition configuration files, and change Ignition configuration after an installation. 1.6. About admission plugins You can use admission plugins to regulate how OpenShift Container Platform functions. 
After a resource request is authenticated and authorized, admission plugins intercept the resource request to the master API to validate resource requests and to ensure that scaling policies are adhered to. Admission plugins are used to enforce security policies, resource limitations, configuration requirements, and other settings. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/architecture/architecture-overview |
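To make the pod, service, and Kubernetes manifest concepts from the containerized-applications section above concrete, the following is a minimal manifest sketch. The names, labels, and image reference are hypothetical placeholders and are not taken from OpenShift Container Platform documentation.
# A single pod running one container; the image reference is a placeholder.
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
  labels:
    app: hello
spec:
  containers:
    - name: hello
      image: registry.example.com/hello:latest
      ports:
        - containerPort: 8080
---
# A service that groups pods with the app=hello label and gives them a
# stable internal IP address and host name.
apiVersion: v1
kind: Service
metadata:
  name: hello-service
spec:
  selector:
    app: hello
  ports:
    - port: 80
      targetPort: 8080
A manifest like this can be stored in a Git repository, as the section above suggests, and applied to the cluster with oc apply -f <manifest>.yaml.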
Chapter 6. Known issues affecting required infrastructure components | Chapter 6. Known issues affecting required infrastructure components There are no known issues affecting infrastructure components required by this release. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_node.js/22/html/release_notes_for_node.js_22/known-issues-affecting-required-infrastructure-components |
Data Grid REST API | Data Grid REST API Red Hat Data Grid 8.4 Configure and interact with the Data Grid REST API Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/data_grid_rest_api/index |
Chapter 5. PersistentVolume [v1] | Chapter 5. PersistentVolume [v1] Description PersistentVolume (PV) is a storage resource provisioned by an administrator. It is analogous to a node. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes Type object 5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object PersistentVolumeSpec is the specification of a persistent volume. status object PersistentVolumeStatus is the current status of a persistent volume. 5.1.1. .spec Description PersistentVolumeSpec is the specification of a persistent volume. Type object Property Type Description accessModes array (string) accessModes contains all ways the volume can be mounted. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes awsElasticBlockStore object Represents a Persistent Disk resource in AWS. An AWS EBS disk must exist before mounting to a container. The disk must also be in the same AWS zone as the kubelet. An AWS EBS disk can only be mounted as read/write once. AWS EBS volumes support ownership management and SELinux relabeling. azureDisk object AzureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. azureFile object AzureFile represents an Azure File Service mount on the host and bind mount to the pod. capacity object (Quantity) capacity is the description of the persistent volume's resources and capacity. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#capacity cephfs object Represents a Ceph Filesystem mount that lasts the lifetime of a pod Cephfs volumes do not support ownership management or SELinux relabeling. cinder object Represents a cinder volume resource in Openstack. A Cinder volume must exist before mounting to a container. The volume must also be in the same region as the kubelet. Cinder volumes support ownership management and SELinux relabeling. claimRef object ObjectReference contains enough information to let you inspect or modify the referred object. csi object Represents storage that is managed by an external CSI volume driver (Beta feature) fc object Represents a Fibre Channel volume. Fibre Channel volumes can only be mounted as read/write once. Fibre Channel volumes support ownership management and SELinux relabeling. flexVolume object FlexPersistentVolumeSource represents a generic persistent volume resource that is provisioned/attached using an exec based plugin. flocker object Represents a Flocker volume mounted by the Flocker agent. One and only one of datasetName and datasetUUID should be set. Flocker volumes do not support ownership management or SELinux relabeling. gcePersistentDisk object Represents a Persistent Disk resource in Google Compute Engine. A GCE PD must exist before mounting to a container. 
The disk must also be in the same GCE project and zone as the kubelet. A GCE PD can only be mounted as read/write once or read-only many times. GCE PDs support ownership management and SELinux relabeling. glusterfs object Represents a Glusterfs mount that lasts the lifetime of a pod. Glusterfs volumes do not support ownership management or SELinux relabeling. hostPath object Represents a host path mapped into a pod. Host path volumes do not support ownership management or SELinux relabeling. iscsi object ISCSIPersistentVolumeSource represents an ISCSI disk. ISCSI volumes can only be mounted as read/write once. ISCSI volumes support ownership management and SELinux relabeling. local object Local represents directly-attached storage with node affinity (Beta feature) mountOptions array (string) mountOptions is the list of mount options, e.g. ["ro", "soft"]. Not validated - mount will simply fail if one is invalid. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#mount-options nfs object Represents an NFS mount that lasts the lifetime of a pod. NFS volumes do not support ownership management or SELinux relabeling. nodeAffinity object VolumeNodeAffinity defines constraints that limit what nodes this volume can be accessed from. persistentVolumeReclaimPolicy string persistentVolumeReclaimPolicy defines what happens to a persistent volume when released from its claim. Valid options are Retain (default for manually created PersistentVolumes), Delete (default for dynamically provisioned PersistentVolumes), and Recycle (deprecated). Recycle must be supported by the volume plugin underlying this PersistentVolume. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#reclaiming Possible enum values: - "Delete" means the volume will be deleted from Kubernetes on release from its claim. The volume plugin must support Deletion. - "Recycle" means the volume will be recycled back into the pool of unbound persistent volumes on release from its claim. The volume plugin must support Recycling. - "Retain" means the volume will be left in its current phase (Released) for manual reclamation by the administrator. The default policy is Retain. photonPersistentDisk object Represents a Photon Controller persistent disk resource. portworxVolume object PortworxVolumeSource represents a Portworx volume resource. quobyte object Represents a Quobyte mount that lasts the lifetime of a pod. Quobyte volumes do not support ownership management or SELinux relabeling. rbd object Represents a Rados Block Device mount that lasts the lifetime of a pod. RBD volumes support ownership management and SELinux relabeling. scaleIO object ScaleIOPersistentVolumeSource represents a persistent ScaleIO volume storageClassName string storageClassName is the name of StorageClass to which this persistent volume belongs. Empty value means that this volume does not belong to any StorageClass. storageos object Represents a StorageOS persistent volume resource. volumeMode string volumeMode defines if a volume is intended to be used with a formatted filesystem or to remain in raw block state. Value of Filesystem is implied when not included in spec. Possible enum values: - "Block" means the volume will not be formatted with a filesystem and will remain a raw block device. - "Filesystem" means the volume will be or is formatted with a filesystem. vsphereVolume object Represents a vSphere volume resource. 5.1.2. .spec.awsElasticBlockStore Description Represents a Persistent Disk resource in AWS. 
An AWS EBS disk must exist before mounting to a container. The disk must also be in the same AWS zone as the kubelet. An AWS EBS disk can only be mounted as read/write once. AWS EBS volumes support ownership management and SELinux relabeling. Type object Required volumeID Property Type Description fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore partition integer partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). readOnly boolean readOnly value true will force the readOnly setting in VolumeMounts. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore volumeID string volumeID is unique ID of the persistent disk resource in AWS (Amazon EBS volume). More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore 5.1.3. .spec.azureDisk Description AzureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. Type object Required diskName diskURI Property Type Description cachingMode string cachingMode is the Host Caching mode: None, Read Only, Read Write. Possible enum values: - "None" - "ReadOnly" - "ReadWrite" diskName string diskName is the Name of the data disk in the blob storage diskURI string diskURI is the URI of data disk in the blob storage fsType string fsType is Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. kind string kind expected values are Shared: multiple blob disks per storage account Dedicated: single blob disk per storage account Managed: azure managed data disk (only in managed availability set). defaults to shared Possible enum values: - "Dedicated" - "Managed" - "Shared" readOnly boolean readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. 5.1.4. .spec.azureFile Description AzureFile represents an Azure File Service mount on the host and bind mount to the pod. Type object Required secretName shareName Property Type Description readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretName string secretName is the name of secret that contains Azure Storage Account Name and Key secretNamespace string secretNamespace is the namespace of the secret that contains Azure Storage Account Name and Key default is the same as the Pod shareName string shareName is the azure Share Name 5.1.5. .spec.cephfs Description Represents a Ceph Filesystem mount that lasts the lifetime of a pod Cephfs volumes do not support ownership management or SELinux relabeling. Type object Required monitors Property Type Description monitors array (string) monitors is Required: Monitors is a collection of Ceph monitors More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it path string path is Optional: Used as the mounted root, rather than the full Ceph tree, default is / readOnly boolean readOnly is Optional: Defaults to false (read/write). 
ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretFile string secretFile is Optional: SecretFile is the path to key ring for User, default is /etc/ceph/user.secret More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretRef object SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace user string user is Optional: User is the rados user name, default is admin More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it 5.1.6. .spec.cephfs.secretRef Description SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 5.1.7. .spec.cinder Description Represents a cinder volume resource in Openstack. A Cinder volume must exist before mounting to a container. The volume must also be in the same region as the kubelet. Cinder volumes support ownership management and SELinux relabeling. Type object Required volumeID Property Type Description fsType string fsType Filesystem type to mount. Must be a filesystem type supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://examples.k8s.io/mysql-cinder-pd/README.md readOnly boolean readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/mysql-cinder-pd/README.md secretRef object SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace volumeID string volumeID used to identify the volume in cinder. More info: https://examples.k8s.io/mysql-cinder-pd/README.md 5.1.8. .spec.cinder.secretRef Description SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 5.1.9. .spec.claimRef Description ObjectReference contains enough information to let you inspect or modify the referred object. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 5.1.10. .spec.csi Description Represents storage that is managed by an external CSI volume driver (Beta feature) Type object Required driver volumeHandle Property Type Description controllerExpandSecretRef object SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace controllerPublishSecretRef object SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace driver string driver is the name of the driver to use for this volume. Required. fsType string fsType to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". nodeExpandSecretRef object SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace nodePublishSecretRef object SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace nodeStageSecretRef object SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace readOnly boolean readOnly value to pass to ControllerPublishVolumeRequest. Defaults to false (read/write). volumeAttributes object (string) volumeAttributes of the volume to publish. volumeHandle string volumeHandle is the unique volume name returned by the CSI volume plugin's CreateVolume to refer to the volume on all subsequent calls. Required. 5.1.11. .spec.csi.controllerExpandSecretRef Description SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 5.1.12. .spec.csi.controllerPublishSecretRef Description SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 5.1.13. .spec.csi.nodeExpandSecretRef Description SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 5.1.14. .spec.csi.nodePublishSecretRef Description SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 5.1.15. .spec.csi.nodeStageSecretRef Description SecretReference represents a Secret Reference. 
It has enough information to retrieve secret in any namespace Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 5.1.16. .spec.fc Description Represents a Fibre Channel volume. Fibre Channel volumes can only be mounted as read/write once. Fibre Channel volumes support ownership management and SELinux relabeling. Type object Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. lun integer lun is Optional: FC target lun number readOnly boolean readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. targetWWNs array (string) targetWWNs is Optional: FC target worldwide names (WWNs) wwids array (string) wwids Optional: FC volume world wide identifiers (wwids) Either wwids or combination of targetWWNs and lun must be set, but not both simultaneously. 5.1.17. .spec.flexVolume Description FlexPersistentVolumeSource represents a generic persistent volume resource that is provisioned/attached using an exec based plugin. Type object Required driver Property Type Description driver string driver is the name of the driver to use for this volume. fsType string fsType is the Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". The default filesystem depends on FlexVolume script. options object (string) options is Optional: this field holds extra command options if any. readOnly boolean readOnly is Optional: defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace 5.1.18. .spec.flexVolume.secretRef Description SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 5.1.19. .spec.flocker Description Represents a Flocker volume mounted by the Flocker agent. One and only one of datasetName and datasetUUID should be set. Flocker volumes do not support ownership management or SELinux relabeling. Type object Property Type Description datasetName string datasetName is Name of the dataset stored as metadata name on the dataset for Flocker should be considered as deprecated datasetUUID string datasetUUID is the UUID of the dataset. This is unique identifier of a Flocker dataset 5.1.20. .spec.gcePersistentDisk Description Represents a Persistent Disk resource in Google Compute Engine. A GCE PD must exist before mounting to a container. The disk must also be in the same GCE project and zone as the kubelet. A GCE PD can only be mounted as read/write once or read-only many times. GCE PDs support ownership management and SELinux relabeling. Type object Required pdName Property Type Description fsType string fsType is filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. 
More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk partition integer partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk pdName string pdName is unique name of the PD resource in GCE. Used to identify the disk in GCE. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk 5.1.21. .spec.glusterfs Description Represents a Glusterfs mount that lasts the lifetime of a pod. Glusterfs volumes do not support ownership management or SELinux relabeling. Type object Required endpoints path Property Type Description endpoints string endpoints is the endpoint name that details Glusterfs topology. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod endpointsNamespace string endpointsNamespace is the namespace that contains Glusterfs endpoint. If this field is empty, the EndpointNamespace defaults to the same namespace as the bound PVC. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod path string path is the Glusterfs volume path. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod readOnly boolean readOnly here will force the Glusterfs volume to be mounted with read-only permissions. Defaults to false. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod 5.1.22. .spec.hostPath Description Represents a host path mapped into a pod. Host path volumes do not support ownership management or SELinux relabeling. Type object Required path Property Type Description path string path of the directory on the host. If the path is a symlink, it will follow the link to the real path. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath type string type for HostPath Volume Defaults to "" More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath Possible enum values: - "" For backwards compatible, leave it empty if unset - "BlockDevice" A block device must exist at the given path - "CharDevice" A character device must exist at the given path - "Directory" A directory must exist at the given path - "DirectoryOrCreate" If nothing exists at the given path, an empty directory will be created there as needed with file mode 0755, having the same group and ownership with Kubelet. - "File" A file must exist at the given path - "FileOrCreate" If nothing exists at the given path, an empty file will be created there as needed with file mode 0644, having the same group and ownership with Kubelet. - "Socket" A UNIX socket must exist at the given path 5.1.23. .spec.iscsi Description ISCSIPersistentVolumeSource represents an ISCSI disk. ISCSI volumes can only be mounted as read/write once. ISCSI volumes support ownership management and SELinux relabeling. 
Type object Required targetPortal iqn lun Property Type Description chapAuthDiscovery boolean chapAuthDiscovery defines whether support iSCSI Discovery CHAP authentication chapAuthSession boolean chapAuthSession defines whether support iSCSI Session CHAP authentication fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#iscsi initiatorName string initiatorName is the custom iSCSI Initiator Name. If initiatorName is specified with iscsiInterface simultaneously, new iSCSI interface <target portal>:<volume name> will be created for the connection. iqn string iqn is Target iSCSI Qualified Name. iscsiInterface string iscsiInterface is the interface Name that uses an iSCSI transport. Defaults to 'default' (tcp). lun integer lun is iSCSI Target Lun number. portals array (string) portals is the iSCSI Target Portal List. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. secretRef object SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace targetPortal string targetPortal is iSCSI Target Portal. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). 5.1.24. .spec.iscsi.secretRef Description SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 5.1.25. .spec.local Description Local represents directly-attached storage with node affinity (Beta feature) Type object Required path Property Type Description fsType string fsType is the filesystem type to mount. It applies only when the Path is a block device. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". The default value is to auto-select a filesystem if unspecified. path string path of the full path to the volume on the node. It can be either a directory or block device (disk, partition, ... ). 5.1.26. .spec.nfs Description Represents an NFS mount that lasts the lifetime of a pod. NFS volumes do not support ownership management or SELinux relabeling. Type object Required server path Property Type Description path string path that is exported by the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs readOnly boolean readOnly here will force the NFS export to be mounted with read-only permissions. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs server string server is the hostname or IP address of the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs 5.1.27. .spec.nodeAffinity Description VolumeNodeAffinity defines constraints that limit what nodes this volume can be accessed from. Type object Property Type Description required object A node selector represents the union of the results of one or more label queries over a set of nodes; that is, it represents the OR of the selectors represented by the node selector terms. 5.1.28. 
.spec.nodeAffinity.required Description A node selector represents the union of the results of one or more label queries over a set of nodes; that is, it represents the OR of the selectors represented by the node selector terms. Type object Required nodeSelectorTerms Property Type Description nodeSelectorTerms array Required. A list of node selector terms. The terms are ORed. nodeSelectorTerms[] object A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. 5.1.29. .spec.nodeAffinity.required.nodeSelectorTerms Description Required. A list of node selector terms. The terms are ORed. Type array 5.1.30. .spec.nodeAffinity.required.nodeSelectorTerms[] Description A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchFields array A list of node selector requirements by node's fields. matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 5.1.31. .spec.nodeAffinity.required.nodeSelectorTerms[].matchExpressions Description A list of node selector requirements by node's labels. Type array 5.1.32. .spec.nodeAffinity.required.nodeSelectorTerms[].matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. Possible enum values: - "DoesNotExist" - "Exists" - "Gt" - "In" - "Lt" - "NotIn" values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 5.1.33. .spec.nodeAffinity.required.nodeSelectorTerms[].matchFields Description A list of node selector requirements by node's fields. Type array 5.1.34. .spec.nodeAffinity.required.nodeSelectorTerms[].matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. Possible enum values: - "DoesNotExist" - "Exists" - "Gt" - "In" - "Lt" - "NotIn" values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 5.1.35. 
.spec.photonPersistentDisk Description Represents a Photon Controller persistent disk resource. Type object Required pdID Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. pdID string pdID is the ID that identifies Photon Controller persistent disk 5.1.36. .spec.portworxVolume Description PortworxVolumeSource represents a Portworx volume resource. Type object Required volumeID Property Type Description fsType string fSType represents the filesystem type to mount Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs". Implicitly inferred to be "ext4" if unspecified. readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. volumeID string volumeID uniquely identifies a Portworx volume 5.1.37. .spec.quobyte Description Represents a Quobyte mount that lasts the lifetime of a pod. Quobyte volumes do not support ownership management or SELinux relabeling. Type object Required registry volume Property Type Description group string group to map volume access to Default is no group readOnly boolean readOnly here will force the Quobyte volume to be mounted with read-only permissions. Defaults to false. registry string registry represents a single or multiple Quobyte Registry services specified as a string as host:port pair (multiple entries are separated with commas) which acts as the central registry for volumes tenant string tenant owning the given Quobyte volume in the Backend Used with dynamically provisioned Quobyte volumes, value is set by the plugin user string user to map volume access to Defaults to serivceaccount user volume string volume is a string that references an already created Quobyte volume by name. 5.1.38. .spec.rbd Description Represents a Rados Block Device mount that lasts the lifetime of a pod. RBD volumes support ownership management and SELinux relabeling. Type object Required monitors image Property Type Description fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#rbd image string image is the rados image name. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it keyring string keyring is the path to key ring for RBDUser. Default is /etc/ceph/keyring. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it monitors array (string) monitors is a collection of Ceph monitors. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it pool string pool is the rados pool name. Default is rbd. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it secretRef object SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace user string user is the rados user name. Default is admin. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it 5.1.39. .spec.rbd.secretRef Description SecretReference represents a Secret Reference. 
It has enough information to retrieve secret in any namespace Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 5.1.40. .spec.scaleIO Description ScaleIOPersistentVolumeSource represents a persistent ScaleIO volume Type object Required gateway system secretRef Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Default is "xfs" gateway string gateway is the host address of the ScaleIO API Gateway. protectionDomain string protectionDomain is the name of the ScaleIO Protection Domain for the configured storage. readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace sslEnabled boolean sslEnabled is the flag to enable/disable SSL communication with Gateway, default false storageMode string storageMode indicates whether the storage for a volume should be ThickProvisioned or ThinProvisioned. Default is ThinProvisioned. storagePool string storagePool is the ScaleIO Storage Pool associated with the protection domain. system string system is the name of the storage system as configured in ScaleIO. volumeName string volumeName is the name of a volume already created in the ScaleIO system that is associated with this volume source. 5.1.41. .spec.scaleIO.secretRef Description SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 5.1.42. .spec.storageos Description Represents a StorageOS persistent volume resource. Type object Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object ObjectReference contains enough information to let you inspect or modify the referred object. volumeName string volumeName is the human-readable name of the StorageOS volume. Volume names are only unique within a namespace. volumeNamespace string volumeNamespace specifies the scope of the volume within StorageOS. If no namespace is specified then the Pod's namespace will be used. This allows the Kubernetes name scoping to be mirrored within StorageOS for tighter integration. Set VolumeName to any name to override the default behaviour. Set to "default" if you are not using namespaces within StorageOS. Namespaces that do not pre-exist within StorageOS will be created. 5.1.43. .spec.storageos.secretRef Description ObjectReference contains enough information to let you inspect or modify the referred object. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. 
For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 5.1.44. .spec.vsphereVolume Description Represents a vSphere volume resource. Type object Required volumePath Property Type Description fsType string fsType is filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. storagePolicyID string storagePolicyID is the storage Policy Based Management (SPBM) profile ID associated with the StoragePolicyName. storagePolicyName string storagePolicyName is the storage Policy Based Management (SPBM) profile name. volumePath string volumePath is the path that identifies vSphere volume vmdk 5.1.45. .status Description PersistentVolumeStatus is the current status of a persistent volume. Type object Property Type Description message string message is a human-readable message indicating details about why the volume is in this state. phase string phase indicates if a volume is available, bound to a claim, or released by a claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#phase Possible enum values: - "Available" used for PersistentVolumes that are not yet bound Available volumes are held by the binder and matched to PersistentVolumeClaims - "Bound" used for PersistentVolumes that are bound - "Failed" used for PersistentVolumes that failed to be correctly recycled or deleted after being released from a claim - "Pending" used for PersistentVolumes that are not available - "Released" used for PersistentVolumes where the bound PersistentVolumeClaim was deleted released volumes must be recycled before becoming available again this phase is used by the persistent volume claim binder to signal to another process to reclaim the resource reason string reason is a brief CamelCase string that describes any failure and is meant for machine parsing and tidy display in the CLI. 5.2. API endpoints The following API endpoints are available: /api/v1/persistentvolumes DELETE : delete collection of PersistentVolume GET : list or watch objects of kind PersistentVolume POST : create a PersistentVolume /api/v1/watch/persistentvolumes GET : watch individual changes to a list of PersistentVolume. deprecated: use the 'watch' parameter with a list operation instead. 
/api/v1/persistentvolumes/{name} DELETE : delete a PersistentVolume GET : read the specified PersistentVolume PATCH : partially update the specified PersistentVolume PUT : replace the specified PersistentVolume /api/v1/watch/persistentvolumes/{name} GET : watch changes to an object of kind PersistentVolume. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /api/v1/persistentvolumes/{name}/status GET : read status of the specified PersistentVolume PATCH : partially update status of the specified PersistentVolume PUT : replace status of the specified PersistentVolume 5.2.1. /api/v1/persistentvolumes Table 5.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of PersistentVolume Table 5.2. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. 
Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. 
Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 5.3. Body parameters Parameter Type Description body DeleteOptions schema Table 5.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind PersistentVolume Table 5.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. 
This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 5.6. HTTP responses HTTP code Reponse body 200 - OK PersistentVolumeList schema 401 - Unauthorized Empty HTTP method POST Description create a PersistentVolume Table 5.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.8. Body parameters Parameter Type Description body PersistentVolume schema Table 5.9. HTTP responses HTTP code Reponse body 200 - OK PersistentVolume schema 201 - Created PersistentVolume schema 202 - Accepted PersistentVolume schema 401 - Unauthorized Empty 5.2.2. /api/v1/watch/persistentvolumes Table 5.10. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. 
Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of PersistentVolume. deprecated: use the 'watch' parameter with a list operation instead. Table 5.11. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 5.2.3. /api/v1/persistentvolumes/{name} Table 5.12. Global path parameters Parameter Type Description name string name of the PersistentVolume Table 5.13. 
Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a PersistentVolume Table 5.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 5.15. Body parameters Parameter Type Description body DeleteOptions schema Table 5.16. HTTP responses HTTP code Reponse body 200 - OK PersistentVolume schema 202 - Accepted PersistentVolume schema 401 - Unauthorized Empty HTTP method GET Description read the specified PersistentVolume Table 5.17. HTTP responses HTTP code Reponse body 200 - OK PersistentVolume schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified PersistentVolume Table 5.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 5.19. Body parameters Parameter Type Description body Patch schema Table 5.20. HTTP responses HTTP code Reponse body 200 - OK PersistentVolume schema 201 - Created PersistentVolume schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified PersistentVolume Table 5.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.22. Body parameters Parameter Type Description body PersistentVolume schema Table 5.23. HTTP responses HTTP code Reponse body 200 - OK PersistentVolume schema 201 - Created PersistentVolume schema 401 - Unauthorized Empty 5.2.4. /api/v1/watch/persistentvolumes/{name} Table 5.24. Global path parameters Parameter Type Description name string name of the PersistentVolume Table 5.25. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. 
Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind PersistentVolume. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 5.26. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 5.2.5. /api/v1/persistentvolumes/{name}/status Table 5.27. Global path parameters Parameter Type Description name string name of the PersistentVolume Table 5.28. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified PersistentVolume Table 5.29. HTTP responses HTTP code Reponse body 200 - OK PersistentVolume schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified PersistentVolume Table 5.30. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means the user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 5.31. Body parameters Parameter Type Description body Patch schema Table 5.32. HTTP responses HTTP code Response body 200 - OK PersistentVolume schema 201 - Created PersistentVolume schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified PersistentVolume Table 5.33. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be 128 characters or less, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.34. Body parameters Parameter Type Description body PersistentVolume schema Table 5.35. HTTP responses HTTP code Response body 200 - OK PersistentVolume schema 201 - Created PersistentVolume schema 401 - Unauthorized Empty
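As a concrete illustration of the create operation described above, the following minimal sketch builds a PersistentVolume that uses the .spec.rbd fields from this reference and submits it through the oc client, which wraps the POST /api/v1/persistentvolumes endpoint. The monitor addresses, pool, image, and Secret name are placeholders, not values taken from this document:

$ oc apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: rbd-pv-example              # hypothetical object name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  rbd:
    monitors:                       # placeholder Ceph monitor addresses
      - 192.168.0.10:6789
      - 192.168.0.11:6789
    pool: rbd                       # default pool, per .spec.rbd.pool
    image: example-image            # rados image name, per .spec.rbd.image
    user: admin                     # default rados user, per .spec.rbd.user
    secretRef:
      name: ceph-secret             # hypothetical Secret holding the keyring
    fsType: ext4
    readOnly: false
EOF

The same JSON or YAML body could be POSTed directly to /api/v1/persistentvolumes; on success the server returns the created PersistentVolume object, as listed in the HTTP responses tables above.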
Chapter 3. Requirements for Installing Red Hat Ceph Storage
Figure 3.1. Prerequisite Workflow
Before installing Red Hat Ceph Storage, review the following requirements and prepare each Monitor, OSD, Metadata Server, and client node accordingly.
Note To learn about Red Hat Ceph Storage releases and the corresponding Red Hat Ceph Storage package versions, see the What are the Red Hat Ceph Storage releases and corresponding Ceph package versions article on the Red Hat Customer Portal.
3.1. Prerequisites
Verify that the hardware meets the minimum requirements for Red Hat Ceph Storage 4.
3.2. Requirements checklist for installing Red Hat Ceph Storage
Task | Required | Section | Recommendation
Verifying the operating system version | Yes | Section 3.3, "Operating system requirements for Red Hat Ceph Storage" |
Registering Ceph nodes | Yes | Section 3.4, "Registering Red Hat Ceph Storage nodes to the CDN and attaching subscriptions" |
Enabling Ceph software repositories | Yes | Section 3.5, "Enabling the Red Hat Ceph Storage repositories" |
Using a RAID controller with OSD nodes | No | Section 2.6, "Considerations for using a RAID controller with OSD nodes" | Enabling write-back caches on a RAID controller might result in increased small I/O write throughput for OSD nodes.
Configuring the network | Yes | Section 3.6, "Verifying the network configuration for Red Hat Ceph Storage" | At minimum, a public network is required. However, a private network for cluster communication is recommended.
Configuring a firewall | No | Section 3.7, "Configuring a firewall for Red Hat Ceph Storage" | A firewall can increase the level of trust for a network.
Creating an Ansible user | Yes | Section 3.8, "Creating an Ansible user with sudo access" | Creating the Ansible user is required on all Ceph nodes.
Enabling password-less SSH | Yes | Section 3.9, "Enabling password-less SSH for Ansible" | Required for Ansible.
Note By default, ceph-ansible installs NTP/chronyd as a requirement. If NTP/chronyd is customized, refer to the Configuring the Network Time Protocol for Red Hat Ceph Storage section in Manually Installing Red Hat Ceph Storage to understand how NTP/chronyd must be configured to function properly with Ceph.
3.3. Operating system requirements for Red Hat Ceph Storage
Red Hat Enterprise Linux entitlements are included in the Red Hat Ceph Storage subscription. The initial release of Red Hat Ceph Storage 4 is supported on Red Hat Enterprise Linux 7.7 or Red Hat Enterprise Linux 8.1. The current version, Red Hat Ceph Storage 4.3, is supported on Red Hat Enterprise Linux 7.9, 8.2 EUS, 8.4 EUS, 8.5, 8.6, 8.7, and 8.8. Red Hat Ceph Storage 4 is supported on RPM-based deployments or container-based deployments.
Important Deploying Red Hat Ceph Storage 4 in containers running on Red Hat Enterprise Linux 7 deploys Red Hat Ceph Storage 4 running on a Red Hat Enterprise Linux 8 container image.
Use the same operating system version, architecture, and deployment type across all nodes. For example, do not use a mixture of nodes with both AMD64 and Intel 64 architectures, a mixture of nodes with both Red Hat Enterprise Linux 7 and Red Hat Enterprise Linux 8 operating systems, or a mixture of nodes with both RPM-based deployments and container-based deployments.
Important Red Hat does not support clusters with heterogeneous architectures, operating system versions, or deployment types.
SELinux
By default, SELinux is set to Enforcing mode and the ceph-selinux packages are installed.
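A quick, informal way to confirm these operating system requirements on each node uses standard commands; this is only a sanity check, not part of the official procedure:

# cat /etc/redhat-release    # verify a supported Red Hat Enterprise Linux release
# uname -m                   # verify a uniform architecture across nodes, for example x86_64
# getenforce                 # a default installation returns Enforcing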
For additional information on SELinux please see the Data Security and Hardening Guide , Red Hat Enterprise Linux 7 SELinux User's and Administrator's Guide , and Red Hat Enterprise Linux 8 Using SELinux Guide . Additional Resources The documentation set for Red Hat Enterprise Linux 8 is available at https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/ The documentation set for Red Hat Enterprise Linux 7 is available at https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/ . Return to requirements checklist 3.4. Registering Red Hat Ceph Storage nodes to the CDN and attaching subscriptions Register each Red Hat Ceph Storage node to the Content Delivery Network (CDN) and attach the appropriate subscription so that the node has access to software repositories. Each Red Hat Ceph Storage node must be able to access the full Red Hat Enterprise Linux 8 base content and the extras repository content. Perform the following steps on all bare-metal and container nodes in the storage cluster, unless otherwise noted. Note For bare-metal Red Hat Ceph Storage nodes that cannot access the Internet during the installation, provide the software content by using the Red Hat Satellite server. Alternatively, mount a local Red Hat Enterprise Linux 8 Server ISO image and point the Red Hat Ceph Storage nodes to the ISO image. For additional details, contact Red Hat Support . For more information on registering Ceph nodes with the Red Hat Satellite server, see the How to Register Ceph with Satellite 6 and How to Register Ceph with Satellite 5 articles on the Red Hat Customer Portal. Prerequisites A valid Red Hat subscription. Red Hat Ceph Storage nodes must be able to connect to the Internet. Root-level access to the Red Hat Ceph Storage nodes. Procedure For container deployments only, when the Red Hat Ceph Storage nodes do NOT have access to the Internet during deployment. You must follow these steps first on a node with Internet access: Start a local container registry: Red Hat Enterprise Linux 7 Red Hat Enterprise Linux 8 Verify registry.redhat.io is in the container registry search path. Open for editing the /etc/containers/registries.conf file: If registry.redhat.io is not included in the file, add it: Pull the Red Hat Ceph Storage 4 image, Prometheus image, and Dashboard image from the Red Hat Customer Portal: Red Hat Enterprise Linux 7 Red Hat Enterprise Linux 8 Note Red Hat Enterprise Linux 7 and 8 both use the same container image, based on Red Hat Enterprise Linux 8. Tag the image: The Prometheus image tag version is v4.6 for Red Hat Ceph Storage 4.2. Red Hat Enterprise Linux 7 Replace LOCAL_NODE_FQDN with your local host FQDN. Red Hat Enterprise Linux 8 Replace LOCAL_NODE_FQDN with your local host FQDN. Edit the /etc/containers/registries.conf file and add the node's FQDN with the port in the file, and save: Note This step must be done on all storage cluster nodes that access the local Docker registry. Push the image to the local Docker registry you started: Red Hat Enterprise Linux 7 Replace LOCAL_NODE_FQDN with your local host FQDN. Red Hat Enterprise Linux 8 Replace LOCAL_NODE_FQDN with your local host FQDN. For Red Hat Enterprise Linux 7, restart the docker service: Note See the Installing a Red Hat Ceph Storage cluster for an example of the all.yml file when the Red Hat Ceph Storage nodes do NOT have access to the Internet during deployment. 
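The push commands themselves are not reproduced in this extract. As a sketch of what the push step above might look like on Red Hat Enterprise Linux 8, with LOCAL_NODE_FQDN again standing in for your local host FQDN and the image names matching the tags created earlier:

# podman push LOCAL_NODE_FQDN:5000/rhceph/rhceph-4-rhel8:latest
# podman push LOCAL_NODE_FQDN:5000/openshift4/ose-prometheus-node-exporter:v4.6
# podman push LOCAL_NODE_FQDN:5000/rhceph/rhceph-4-dashboard-rhel8:latest
# podman push LOCAL_NODE_FQDN:5000/openshift4/ose-prometheus-alertmanager:v4.6
# podman push LOCAL_NODE_FQDN:5000/openshift4/ose-prometheus:v4.6

On Red Hat Enterprise Linux 7 the equivalent commands use docker instead of podman. If the local registry is served over plain HTTP, podman might also need the --tls-verify=false flag or an insecure registry entry in /etc/containers/registries.conf.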
For all deployments, bare-metal or in containers : Register the node, and when prompted, enter the appropriate Red Hat Customer Portal credentials: Pull the latest subscription data from the CDN: List all available subscriptions for Red Hat Ceph Storage: Copy the Pool ID from the list of available subscriptions for Red Hat Ceph Storage. Attach the subscription: Replace POOL_ID with the Pool ID identified in the step. Disable the default software repositories, and enable the server and the extras repositories on the respective version of Red Hat Enterprise Linux: Red Hat Enterprise Linux 7 Red Hat Enterprise Linux 8 Update the system to receive the latest packages. For Red Hat Enterprise Linux 7: For Red Hat Enterprise Linux 8: Additional Resources See the Using and Configuring Red Hat Subscription Manager guide for Red Hat Subscription Management. See the Enabling the Red Hat Ceph Storage repositories . Return to requirements checklist 3.5. Enabling the Red Hat Ceph Storage repositories Before you can install Red Hat Ceph Storage, you must choose an installation method. Red Hat Ceph Storage supports two installation methods: Content Delivery Network (CDN) For Ceph Storage clusters with Ceph nodes that can connect directly to the internet, use Red Hat Subscription Manager to enable the required Ceph repository. Local Repository For Ceph Storage clusters where security measures preclude nodes from accessing the internet, install Red Hat Ceph Storage 4 from a single software build delivered as an ISO image, which will allow you to install local repositories. Prerequisites Valid customer subscription. For CDN installations: Red Hat Ceph Storage nodes must be able to connect to the internet. Register the cluster nodes with CDN . If enabled, then disable the Extra Packages for Enterprise Linux (EPEL) software repository: Procedure For CDN installations: On the Ansible administration node , enable the Red Hat Ceph Storage 4 Tools repository and Ansible repository: Red Hat Enterprise Linux 7 Red Hat Enterprise Linux 8 By default, Red Hat Ceph Storage repositories are enabled by ceph-ansible on the respective nodes. To manually enable the repositories: Note Do not enable these repositories on containerized deployments as they are not needed. On the Ceph Monitor nodes , enable the Red Hat Ceph Storage 4 Monitor repository: Red Hat Enterprise Linux 7 Red Hat Enterprise Linux 8 On the Ceph OSD nodes , enable the Red Hat Ceph Storage 4 OSD repository: Red Hat Enterprise Linux 7 Red Hat Enterprise Linux 8 Enable the Red Hat Ceph Storage 4 Tools repository on the following node types: RBD mirroring , Ceph clients , Ceph Object Gateways , Metadata Servers , NFS , iSCSI gateways , and Dashboard servers . Red Hat Enterprise Linux 7 Red Hat Enterprise Linux 8 For ISO installations: Log in to the Red Hat Customer Portal. Click Downloads to visit the Software & Download center. In the Red Hat Ceph Storage area, click Download Software to download the latest version of the software. Additional Resources The Using and Configuring Red Hat Subscription Manager guide for Red Hat Subscription Management 1 Return to requirements checklist 3.6. Verifying the network configuration for Red Hat Ceph Storage All Red Hat Ceph Storage nodes require a public network. You must have a network interface card configured to a public network where Ceph clients can reach Ceph monitors and Ceph OSD nodes. 
You might have a network interface card for a cluster network so that Ceph can conduct heart-beating, peering, replication, and recovery on a network separate from the public network. Configure the network interface settings and ensure to make the changes persistent. Important Red Hat does not recommend using a single network interface card for both a public and private network. Prerequisites Network interface card connected to the network. Procedure Do the following steps on all Red Hat Ceph Storage nodes in the storage cluster, as the root user. Verify the following settings are in the /etc/sysconfig/network-scripts/ifcfg-* file corresponding the public-facing network interface card: The BOOTPROTO parameter is set to none for static IP addresses. The ONBOOT parameter must be set to yes . If it is set to no , the Ceph storage cluster might fail to peer on reboot. If you intend to use IPv6 addressing, you must set the IPv6 parameters such as IPV6INIT to yes , except the IPV6_FAILURE_FATAL parameter. Also, edit the Ceph configuration file, /etc/ceph/ceph.conf , to instruct Ceph to use IPv6, otherwise, Ceph uses IPv4. Additional Resources For details on configuring network interface scripts for Red Hat Enterprise Linux 8, see the Configuring ip networking with ifcfg files chapter in the Configuring and managing networking guide for Red Hat Enterprise Linux 8. For more information on network configuration see the Ceph network configuration section in the Configuration Guide for Red Hat Ceph Storage 4. Return to requirements checklist 3.7. Configuring a firewall for Red Hat Ceph Storage Red Hat Ceph Storage uses the firewalld service. The firewalld service contains the list of ports for each daemon. The Ceph Monitor daemons use ports 3300 and 6789 for communication within the Ceph storage cluster. On each Ceph OSD node, the OSD daemons use several ports in the range 6800-7300 : One for communicating with clients and monitors over the public network One for sending data to other OSDs over a cluster network, if available; otherwise, over the public network One for exchanging heartbeat packets over a cluster network, if available; otherwise, over the public network The Ceph Manager ( ceph-mgr ) daemons use ports in range 6800-7300 . Consider colocating the ceph-mgr daemons with Ceph Monitors on same nodes. The Ceph Metadata Server nodes ( ceph-mds ) use port range 6800-7300 . The Ceph Object Gateway nodes are configured by Ansible to use port 8080 by default. However, you can change the default port, for example to port 80 . To use the SSL/TLS service, open port 443 . The following steps are optional if firewalld is enabled. By default, ceph-ansible includes the below setting in group_vars/all.yml , which automatically opens the appropriate ports: Prerequisite Network hardware is connected. Having root or sudo access to all nodes in the storage cluster. Procedure On all nodes in the storage cluster, start the firewalld service. Enable it to run on boot, and ensure that it is running: On all monitor nodes, open port 3300 and 6789 on the public network: To limit access based on the source address: Replace IP_ADDRESS with the network address of the Monitor node. NETMASK_PREFIX with the netmask in CIDR notation. Example On all OSD nodes, open ports 6800-7300 on the public network: If you have a separate cluster network, repeat the commands with the appropriate zone. 
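The firewall-cmd invocations for these steps are not reproduced in this extract. A rough sketch of the Monitor and OSD steps above, using standard firewalld syntax with IP_ADDRESS and NETMASK_PREFIX as placeholders:

On the Monitor nodes:
# firewall-cmd --zone=public --add-port=3300/tcp --add-port=6789/tcp
# firewall-cmd --zone=public --add-port=3300/tcp --add-port=6789/tcp --permanent

To limit access based on the source address, add one rich rule per port, for example for port 6789:
# firewall-cmd --zone=public --add-rich-rule="rule family=ipv4 source address=IP_ADDRESS/NETMASK_PREFIX port protocol=tcp port=6789 accept" --permanent

On the OSD nodes:
# firewall-cmd --zone=public --add-port=6800-7300/tcp
# firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent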
On all Ceph Manager ( ceph-mgr ) nodes, open ports 6800-7300 on the public network: If you have a separate cluster network, repeat the commands with the appropriate zone. On all Ceph Metadata Server ( ceph-mds ) nodes, open ports 6800-7300 on the public network: If you have a separate cluster network, repeat the commands with the appropriate zone. On all Ceph Object Gateway nodes, open the relevant port or ports on the public network. To open the default Ansible configured port of 8080 : To limit access based on the source address: Replace IP_ADDRESS with the network address of the Monitor node. NETMASK_PREFIX with the netmask in CIDR notation. Example Optionally, if you installed Ceph Object Gateway using Ansible and changed the default port that Ansible configures the Ceph Object Gateway to use from 8080 , for example, to port 80 , then open this port: To limit access based on the source address, run the following commands: Replace IP_ADDRESS with the network address of the Monitor node. NETMASK_PREFIX with the netmask in CIDR notation. Example Optional. To use SSL/TLS, open port 443 : To limit access based on the source address, run the following commands: Replace IP_ADDRESS with the network address of the Monitor node. NETMASK_PREFIX with the netmask in CIDR notation. Example Additional Resources For more information about public and cluster network, see Verifying the Network Configuration for Red Hat Ceph Storage . For additional details on firewalld , see the Using and configuring firewalls chapter in the Securing networks guide for Red Hat Enterprise Linux 8. Return to requirements checklist 3.8. Creating an Ansible user with sudo access Ansible must be able to log into all the Red Hat Ceph Storage (RHCS) nodes as a user that has root privileges to install software and create configuration files without prompting for a password. You must create an Ansible user with password-less root access on all nodes in the storage cluster when deploying and configuring a Red Hat Ceph Storage cluster with Ansible. Prerequisite Having root or sudo access to all nodes in the storage cluster. Procedure Log into the node as the root user: Replace HOST_NAME with the host name of the Ceph node. Example Enter the root password when prompted. Create a new Ansible user: Replace USER_NAME with the new user name for the Ansible user. Example Important Do not use ceph as the user name. The ceph user name is reserved for the Ceph daemons. A uniform user name across the cluster can improve ease of use, but avoid using obvious user names, because intruders typically use them for brute-force attacks. Set a new password for this user: Replace USER_NAME with the new user name for the Ansible user. Example Enter the new password twice when prompted. Configure sudo access for the newly created user: Replace USER_NAME with the new user name for the Ansible user. Example Assign the correct file permissions to the new file: Replace USER_NAME with the new user name for the Ansible user. Example Additional Resources The Managing user accounts section in the Configuring basic system settings guide Red Hat Enterprise Linux 8 Return to requirements checklist 3.9. Enabling password-less SSH for Ansible Generate an SSH key pair on the Ansible administration node and distribute the public key to each node in the storage cluster so that Ansible can access the nodes without being prompted for a password. Note This procedure is not required if installing Red Hat Ceph Storage using the Cockpit web-based interface. 
This is because the Cockpit Ceph Installer generates its own SSH key. Instructions for copying the Cockpit SSH key to all nodes in the cluster are in the chapter Installing Red Hat Ceph Storage using the Cockpit web interface . Prerequisites Access to the Ansible administration node. Creating an Ansible user with sudo access . Procedure Generate the SSH key pair, accept the default file name, and leave the passphrase empty: Copy the public key to all nodes in the storage cluster: Replace USER_NAME with the new user name for the Ansible user. HOST_NAME with the host name of the Ceph node. Example Create the user's SSH config file: Open the config file for editing. Set values for the Hostname and User options for each node in the storage cluster: Replace HOST_NAME with the host name of the Ceph node. USER_NAME with the new user name for the Ansible user. Example Important By configuring the ~/.ssh/config file, you do not have to specify the -u USER_NAME option each time you execute the ansible-playbook command. Set the correct file permissions for the ~/.ssh/config file: Additional Resources The ssh_config(5) manual page. See the Using secure communications between two systems with OpenSSH chapter in the Securing networks guide for Red Hat Enterprise Linux 8. Return to requirements checklist | [
"docker run -d -p 5000:5000 --restart=always --name registry registry:2",
"podman run -d -p 5000:5000 --restart=always --name registry registry:2",
"[registries.search] registries = [ 'registry.access.redhat.com', 'registry.fedoraproject.org', 'registry.centos.org', 'docker.io']",
"[registries.search] registries = ['registry.redhat.io', 'registry.access.redhat.com', 'registry.fedoraproject.org', 'registry.centos.org', 'docker.io']",
"docker pull registry.redhat.io/rhceph/rhceph-4-rhel8:latest docker pull registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.6 docker pull registry.redhat.io/rhceph/rhceph-4-dashboard-rhel8:latest docker pull registry.redhat.io/openshift4/ose-prometheus:v4.6 docker pull registry.redhat.io/openshift4/ose-prometheus-alertmanager:v4.6",
"podman pull registry.redhat.io/rhceph/rhceph-4-rhel8:latest podman pull registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.6 podman pull registry.redhat.io/rhceph/rhceph-4-dashboard-rhel8:latest podman pull registry.redhat.io/openshift4/ose-prometheus:v4.6 podman pull registry.redhat.io/openshift4/ose-prometheus-alertmanager:v4.6",
"docker tag registry.redhat.io/rhceph/rhceph-4-rhel8:latest LOCAL_NODE_FQDN :5000/rhceph/rhceph-4-rhel8:latest # docker tag registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.6 LOCAL_NODE_FQDN :5000/openshift4/ose-prometheus-node-exporter:v4.6 # docker tag registry.redhat.io/rhceph/rhceph-4-dashboard-rhel8:latest LOCAL_NODE_FQDN :5000/rhceph/rhceph-4-dashboard-rhel8:latest # docker tag registry.redhat.io/openshift4/ose-prometheus-alertmanager:v4.6 LOCAL_NODE_FQDN :5000/openshift4/ose-prometheus-alertmanager:v4.6 # docker tag registry.redhat.io/openshift4/ose-prometheus:v4.6 LOCAL_NODE_FQDN :5000/openshift4/ose-prometheus:v4.6",
"podman tag registry.redhat.io/rhceph/rhceph-4-rhel8:latest LOCAL_NODE_FQDN :5000/rhceph/rhceph-4-rhel8:latest # podman tag registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.6 LOCAL_NODE_FQDN :5000/openshift4/ose-prometheus-node-exporter:v4.6 # podman tag registry.redhat.io/rhceph/rhceph-4-dashboard-rhel8:latest LOCAL_NODE_FQDN :5000/rhceph/rhceph-4-dashboard-rhel8:latest # podman tag registry.redhat.io/openshift4/ose-prometheus-alertmanager:v4.6 LOCAL_NODE_FQDN :5000/openshift4/ose-prometheus-alertmanager:v4.6 # podman tag registry.redhat.io/openshift4/ose-prometheus:v4.6 LOCAL_NODE_FQDN :5000/openshift4/ose-prometheus:v4.6",
"[registries.insecure] registries = [' LOCAL_NODE_FQDN :5000']",
"docker push --remove-signatures LOCAL_NODE_FQDN :5000/rhceph/rhceph-4-rhel8 # docker push --remove-signatures LOCAL_NODE_FQDN :5000/openshift4/ose-prometheus-node-exporter:v4.6 # docker push --remove-signatures LOCAL_NODE_FQDN :5000/rhceph/rhceph-4-dashboard-rhel8 # docker push --remove-signatures LOCAL_NODE_FQDN :5000/openshift4/ose-prometheus-alertmanager:v4.6 # docker push --remove-signatures LOCAL_NODE_FQDN :5000/openshift4/ose-prometheus:v4.6",
"podman push --remove-signatures LOCAL_NODE_FQDN :5000/rhceph/rhceph-4-rhel8 # podman push --remove-signatures LOCAL_NODE_FQDN :5000/openshift4/ose-prometheus-node-exporter:v4.6 # podman push --remove-signatures LOCAL_NODE_FQDN :5000/rhceph/rhceph-4-dashboard-rhel8 # podman push --remove-signatures LOCAL_NODE_FQDN :5000/openshift4/ose-prometheus-alertmanager:v4.6 # podman push --remove-signatures LOCAL_NODE_FQDN :5000/openshift4/ose-prometheus:v4.6",
"systemctl restart docker",
"subscription-manager register",
"subscription-manager refresh",
"subscription-manager list --available --all --matches=\"*Ceph*\"",
"subscription-manager attach --pool= POOL_ID",
"subscription-manager repos --disable=* subscription-manager repos --enable=rhel-7-server-rpms subscription-manager repos --enable=rhel-7-server-extras-rpms",
"subscription-manager repos --disable=* subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms subscription-manager repos --enable=rhel-8-for-x86_64-appstream-rpms",
"yum update",
"dnf update",
"yum install yum-utils vim -y yum-config-manager --disable epel",
"subscription-manager repos --enable=rhel-7-server-rhceph-4-tools-rpms --enable=rhel-7-server-ansible-2.9-rpms",
"subscription-manager repos --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms --enable=ansible-2.9-for-rhel-8-x86_64-rpms",
"subscription-manager repos --enable=rhel-7-server-rhceph-4-mon-rpms",
"subscription-manager repos --enable=rhceph-4-mon-for-rhel-8-x86_64-rpms",
"subscription-manager repos --enable=rhel-7-server-rhceph-4-osd-rpms",
"subscription-manager repos --enable=rhceph-4-osd-for-rhel-8-x86_64-rpms",
"subscription-manager repos --enable=rhel-7-server-rhceph-4-tools-rpms",
"subscription-manager repos --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms",
"configure_firewall: True",
"systemctl enable firewalld systemctl start firewalld systemctl status firewalld",
"firewall-cmd --zone=public --add-port=3300/tcp firewall-cmd --zone=public --add-port=3300/tcp --permanent firewall-cmd --zone=public --add-port=6789/tcp firewall-cmd --zone=public --add-port=6789/tcp --permanent firewall-cmd --permanent --add-service=ceph-mon firewall-cmd --add-service=ceph-mon",
"firewall-cmd --zone=public --add-rich-rule='rule family=ipv4 source address= IP_ADDRESS / NETMASK_PREFIX port protocol=tcp port=6789 accept' --permanent",
"firewall-cmd --zone=public --add-rich-rule='rule family=ipv4 source address=192.168.0.11/24 port protocol=tcp port=6789 accept' --permanent",
"firewall-cmd --zone=public --add-port=6800-7300/tcp firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent firewall-cmd --permanent --add-service=ceph firewall-cmd --add-service=ceph",
"firewall-cmd --zone=public --add-port=6800-7300/tcp firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent",
"firewall-cmd --zone=public --add-port=6800-7300/tcp firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent",
"firewall-cmd --zone=public --add-port=8080/tcp firewall-cmd --zone=public --add-port=8080/tcp --permanent",
"firewall-cmd --zone=public --add-rich-rule=\"rule family=\"ipv4\" source address=\" IP_ADDRESS / NETMASK_PREFIX \" port protocol=\"tcp\" port=\"8080\" accept\"",
"firewall-cmd --zone=public --add-rich-rule=\"rule family=\"ipv4\" source address=\" IP_ADDRESS / NETMASK_PREFIX \" port protocol=\"tcp\" port=\"8080\" accept\" --permanent",
"firewall-cmd --zone=public --add-rich-rule=\"rule family=\"ipv4\" source address=\"192.168.0.31/24\" port protocol=\"tcp\" port=\"8080\" accept\"",
"firewall-cmd --zone=public --add-rich-rule=\"rule family=\"ipv4\" source address=\"192.168.0.31/24\" port protocol=\"tcp\" port=\"8080\" accept\" --permanent",
"firewall-cmd --zone=public --add-port=80/tcp firewall-cmd --zone=public --add-port=80/tcp --permanent",
"firewall-cmd --zone=public --add-rich-rule=\"rule family=\"ipv4\" source address=\" IP_ADDRESS / NETMASK_PREFIX \" port protocol=\"tcp\" port=\"80\" accept\"",
"firewall-cmd --zone=public --add-rich-rule=\"rule family=\"ipv4\" source address=\" IP_ADDRESS / NETMASK_PREFIX \" port protocol=\"tcp\" port=\"80\" accept\" --permanent",
"firewall-cmd --zone=public --add-rich-rule=\"rule family=\"ipv4\" source address=\"192.168.0.31/24\" port protocol=\"tcp\" port=\"80\" accept\"",
"firewall-cmd --zone=public --add-rich-rule=\"rule family=\"ipv4\" source address=\"192.168.0.31/24\" port protocol=\"tcp\" port=\"80\" accept\" --permanent",
"firewall-cmd --zone=public --add-port=443/tcp firewall-cmd --zone=public --add-port=443/tcp --permanent",
"firewall-cmd --zone=public --add-rich-rule=\"rule family=\"ipv4\" source address=\" IP_ADDRESS / NETMASK_PREFIX \" port protocol=\"tcp\" port=\"443\" accept\"",
"firewall-cmd --zone=public --add-rich-rule=\"rule family=\"ipv4\" source address=\" IP_ADDRESS / NETMASK_PREFIX \" port protocol=\"tcp\" port=\"443\" accept\" --permanent",
"firewall-cmd --zone=public --add-rich-rule=\"rule family=\"ipv4\" source address=\"192.168.0.31/24\" port protocol=\"tcp\" port=\"443\" accept\" firewall-cmd --zone=public --add-rich-rule=\"rule family=\"ipv4\" source address=\"192.168.0.31/24\" port protocol=\"tcp\" port=\"443\" accept\" --permanent",
"ssh root@ HOST_NAME",
"ssh root@mon01",
"adduser USER_NAME",
"adduser admin",
"passwd USER_NAME",
"passwd admin",
"cat << EOF >/etc/sudoers.d/ USER_NAME USDUSER_NAME ALL = (root) NOPASSWD:ALL EOF",
"cat << EOF >/etc/sudoers.d/admin admin ALL = (root) NOPASSWD:ALL EOF",
"chmod 0440 /etc/sudoers.d/ USER_NAME",
"chmod 0440 /etc/sudoers.d/admin",
"[ansible@admin ~]USD ssh-keygen",
"ssh-copy-id USER_NAME @ HOST_NAME",
"[ansible@admin ~]USD ssh-copy-id ceph-admin@ceph-mon01",
"[ansible@admin ~]USD touch ~/.ssh/config",
"Host node1 Hostname HOST_NAME User USER_NAME Host node2 Hostname HOST_NAME User USER_NAME",
"Host node1 Hostname monitor User admin Host node2 Hostname osd User admin Host node3 Hostname gateway User admin",
"[admin@admin ~]USD chmod 600 ~/.ssh/config"
]
| https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/installation_guide/requirements-for-installing-rhcs |
Chapter 4. Configuration Hooks | Chapter 4. Configuration Hooks The configuration hooks provide a method to inject your own configuration functions into the Overcloud deployment process. This includes hooks for injecting custom configuration before and after the main Overcloud services configuration and a hook for modifying and including Puppet-based configuration. 4.1. First Boot: Customizing First Boot Configuration The director provides a mechanism to perform configuration on all nodes upon the initial creation of the Overcloud. The director achieves this through cloud-init , which you can call using the OS::TripleO::NodeUserData resource type. In this example, update the nameserver with a custom IP address on all nodes. First, create a basic Heat template ( /home/stack/templates/nameserver.yaml ) that runs a script to append each node's resolv.conf with a specific nameserver. You can use the OS::TripleO::MultipartMime resource type to send the configuration script. Create an environment file ( /home/stack/templates/firstboot.yaml ) that registers your Heat template as the OS::TripleO::NodeUserData resource type. To add the first boot configuration, add the environment file to the stack along with your other environment files when first creating the Overcloud. For example: The -e applies the environment file to the Overcloud stack. This adds the configuration to all nodes when they are first created and boot for the first time. Subsequent inclusions of these templates, such as updating the Overcloud stack, do not run these scripts. Important You can only register the OS::TripleO::NodeUserData to one Heat template. Subsequent usage overrides the Heat template to use. 4.2. Pre-Configuration: Customizing Specific Overcloud Roles Important Previous versions of this document used the OS::TripleO::Tasks::*PreConfig resources to provide pre-configuration hooks on a per-role basis. The director's Heat template collection requires dedicated use of these hooks, which means you should not use them for custom use. Instead, use the OS::TripleO::*ExtraConfigPre hooks outlined below. The Overcloud uses Puppet for the core configuration of OpenStack components. The director provides a set of hooks to provide custom configuration for specific node roles after the first boot completes and before the core configuration begins. These hooks include: OS::TripleO::ControllerExtraConfigPre Additional configuration applied to Controller nodes before the core Puppet configuration. OS::TripleO::ComputeExtraConfigPre Additional configuration applied to Compute nodes before the core Puppet configuration. OS::TripleO::CephStorageExtraConfigPre Additional configuration applied to Ceph Storage nodes before the core Puppet configuration. OS::TripleO::ObjectStorageExtraConfigPre Additional configuration applied to Object Storage nodes before the core Puppet configuration. OS::TripleO::BlockStorageExtraConfigPre Additional configuration applied to Block Storage nodes before the core Puppet configuration. OS::TripleO::[ROLE]ExtraConfigPre Additional configuration applied to custom nodes before the core Puppet configuration. Replace [ROLE] with the composable role name. In this example, you first create a basic Heat template ( /home/stack/templates/nameserver.yaml ) that runs a script to write to a node's resolv.conf with a variable nameserver. In this example, the resources section contains the following parameters: CustomExtraConfigPre This defines a software configuration.
In this example, we define a Bash script and Heat replaces _NAMESERVER_IP_ with the value stored in the nameserver_ip parameter. CustomExtraDeploymentPre This executes a software configuration, which is the software configuration from the CustomExtraConfigPre resource. Note the following: The config parameter makes a reference to the CustomExtraConfigPre resource so Heat knows what configuration to apply. The server parameter retrieves a map of the Overcloud nodes. This parameter is provided by the parent template and is mandatory in templates for this hook. The actions parameter defines when to apply the configuration. In this case, apply the configuration when the Overcloud is created. Possible actions include CREATE , UPDATE , DELETE , SUSPEND , and RESUME . input_values contains a parameter called deploy_identifier , which stores the DeployIdentifier from the parent template. This parameter provides a timestamp to the resource for each deployment update. This ensures the resource reapplies on subsequent overcloud updates. Create an environment file ( /home/stack/templates/pre_config.yaml ) that registers your Heat template to the role-based resource type. For example, to apply only to Controller nodes, use the ControllerExtraConfigPre hook: To apply the configuration, add the environment file to the stack along with your other environment files when creating or updating the Overcloud. For example: This applies the configuration to all Controller nodes before the core configuration begins on either the initial Overcloud creation or subsequent updates. Important You can only register each resource to only one Heat template per hook. Subsequent usage overrides the Heat template to use. 4.3. Pre-Configuration: Customizing All Overcloud Roles The Overcloud uses Puppet for the core configuration of OpenStack components. The director provides a hook to configure all node types after the first boot completes and before the core configuration begins: OS::TripleO::NodeExtraConfig Additional configuration applied to all nodes roles before the core Puppet configuration. In this example, create a basic Heat template ( /home/stack/templates/nameserver.yaml ) that runs a script to append each node's resolv.conf with a variable nameserver. In this example, the resources section contains the following parameters: CustomExtraConfigPre This defines a software configuration. In this example, we define a Bash script and Heat replaces _NAMESERVER_IP_ with the value stored in the nameserver_ip parameter. CustomExtraDeploymentPre This executes a software configuration, which is the software configuration from the CustomExtraConfigPre resource. Note the following: The config parameter makes a reference to the CustomExtraConfigPre resource so Heat knows what configuration to apply. The server parameter retrieves a map of the Overcloud nodes. This parameter is provided by the parent template and is mandatory in templates for this hook. The actions parameter defines when to apply the configuration. In this case, we only apply the configuration when the Overcloud is created. Possible actions include CREATE , UPDATE , DELETE , SUSPEND , and RESUME . The input_values parameter contains a sub-parameter called deploy_identifier , which stores the DeployIdentifier from the parent template. This parameter provides a timestamp to the resource for each deployment update. This ensures the resource reapplies on subsequent overcloud updates. 
Next, create an environment file ( /home/stack/templates/pre_config.yaml ) that registers your Heat template as the OS::TripleO::NodeExtraConfig resource type. To apply the configuration, add the environment file to the stack along with your other environment files when creating or updating the Overcloud. For example: This applies the configuration to all nodes before the core configuration begins on either the initial Overcloud creation or subsequent updates. Important You can register the OS::TripleO::NodeExtraConfig to only one Heat template. Subsequent usage overrides the Heat template to use. 4.4. Post-Configuration: Customizing All Overcloud Roles Important Previous versions of this document used the OS::TripleO::Tasks::*PostConfig resources to provide post-configuration hooks on a per-role basis. The director's Heat template collection requires dedicated use of these hooks, which means you should not use them for custom use. Instead, use the OS::TripleO::NodeExtraConfigPost hook outlined below. A situation might occur where you have completed the creation of your Overcloud but want to add additional configuration to all roles, either on initial creation or on a subsequent update of the Overcloud. In this case, you use the following post-configuration hook: OS::TripleO::NodeExtraConfigPost Additional configuration applied to all node roles after the core Puppet configuration. In this example, you first create a basic Heat template ( /home/stack/templates/nameserver.yaml ) that runs a script to append each node's resolv.conf with a variable nameserver. In this example, the resources section contains the following: CustomExtraConfig This defines a software configuration. In this example, we define a Bash script and Heat replaces _NAMESERVER_IP_ with the value stored in the nameserver_ip parameter. CustomExtraDeployments This executes a software configuration, which is the software configuration from the CustomExtraConfig resource. Note the following: The config parameter makes a reference to the CustomExtraConfig resource so Heat knows what configuration to apply. The servers parameter retrieves a map of the Overcloud nodes. This parameter is provided by the parent template and is mandatory in templates for this hook. The actions parameter defines when to apply the configuration. In this case, we apply the configuration when the Overcloud is created. Possible actions include CREATE , UPDATE , DELETE , SUSPEND , and RESUME . input_values contains a parameter called deploy_identifier , which stores the DeployIdentifier from the parent template. This parameter provides a timestamp to the resource for each deployment update. This ensures the resource reapplies on subsequent overcloud updates. Create an environment file ( /home/stack/templates/post_config.yaml ) that registers your Heat template as the OS::TripleO::NodeExtraConfigPost resource type. To apply the configuration, add the environment file to the stack along with your other environment files when creating or updating the Overcloud. For example: This applies the configuration to all nodes after the core configuration completes on either initial Overcloud creation or subsequent updates. Important You can register the OS::TripleO::NodeExtraConfigPost to only one Heat template. Subsequent usage overrides the Heat template to use. 4.5. Puppet: Customizing Hieradata for Roles The Heat template collection contains a set of parameters to pass extra configuration to certain node types.
These parameters save the configuration as hieradata for the node's Puppet configuration. These parameters are: ControllerExtraConfig Configuration to add to all Controller nodes. ComputeExtraConfig Configuration to add to all Compute nodes. BlockStorageExtraConfig Configuration to add to all Block Storage nodes. ObjectStorageExtraConfig Configuration to add to all Object Storage nodes. CephStorageExtraConfig Configuration to add to all Ceph Storage nodes. [ROLE]ExtraConfig Configuration to add to a composable role. Replace [ROLE] with the composable role name. ExtraConfig Configuration to add to all nodes. To add extra configuration to the post-deployment configuration process, create an environment file that contains these parameters in the parameter_defaults section. For example, to increase the reserved memory for Compute hosts to 1024 MB and set the VNC keymap to Japanese: Include this environment file when running openstack overcloud deploy . Important You can only define each parameter once. Subsequent usage overrides values. 4.6. Puppet: Customizing Hieradata for Individual Nodes You can set Puppet hieradata for individual nodes using the Heat template collection. To accomplish this, acquire the system UUID saved as part of the introspection data for a node: This outputs a system UUID. For example: Use this system UUID in an environment file that defines node-specific hieradata and registers the per_node.yaml template to a pre-configuration hook. For example: Include this environment file when running openstack overcloud deploy . The per_node.yaml template generates a set of hieradata files on nodes that correspond to each system UUID and contains the hieradata you defined. If a UUID is not defined, the resulting hieradata file is empty. In the example, the per_node.yaml template runs on all Compute nodes (as per the OS::TripleO::ComputeExtraConfigPre hook), but only the Compute node with system UUID F5055C6C-477F-47FB-AFE5-95C6928C407F receives hieradata. This provides a method of tailoring each node to specific requirements. For more information about NodeDataLookup, see Configuring Ceph Storage Cluster Setting in the Deploying an Overcloud with Containerized Red Hat Ceph guide. 4.7. Puppet: Applying Custom Manifests In certain circumstances, you might need to install and configure some additional components on your Overcloud nodes. You can achieve this with a custom Puppet manifest that applies to nodes after the main configuration completes. As a basic example, you might intend to install motd on each node. The process for accomplishing this is to first create a Heat template ( /home/stack/templates/custom_puppet_config.yaml ) that launches the Puppet configuration. This includes the /home/stack/templates/motd.pp within the template and passes it to nodes for configuration. The motd.pp file itself contains the Puppet classes to install and configure motd . Create an environment file ( /home/stack/templates/puppet_post_config.yaml ) that registers your Heat template as the OS::TripleO::NodeExtraConfigPost resource type. Include this environment file along with your other environment files when creating or updating the Overcloud stack: This applies the configuration from motd.pp to all nodes in the Overcloud. | [
"heat_template_version: 2014-10-16 description: > Extra hostname configuration resources: userdata: type: OS::Heat::MultipartMime properties: parts: - config: {get_resource: nameserver_config} nameserver_config: type: OS::Heat::SoftwareConfig properties: config: | #!/bin/bash echo \"nameserver 192.168.1.1\" >> /etc/resolv.conf outputs: OS::stack_id: value: {get_resource: userdata}",
"resource_registry: OS::TripleO::NodeUserData: /home/stack/templates/nameserver.yaml",
"openstack overcloud deploy --templates -e /home/stack/templates/firstboot.yaml",
"heat_template_version: 2014-10-16 description: > Extra hostname configuration parameters: server: type: json nameserver_ip: type: string DeployIdentifier: type: string resources: CustomExtraConfigPre: type: OS::Heat::SoftwareConfig properties: group: script config: str_replace: template: | #!/bin/sh echo \"nameserver _NAMESERVER_IP_\" > /etc/resolv.conf params: _NAMESERVER_IP_: {get_param: nameserver_ip} CustomExtraDeploymentPre: type: OS::Heat::SoftwareDeployment properties: server: {get_param: server} config: {get_resource: CustomExtraConfigPre} actions: ['CREATE','UPDATE'] input_values: deploy_identifier: {get_param: DeployIdentifier} outputs: deploy_stdout: description: Deployment reference, used to trigger pre-deploy on changes value: {get_attr: [CustomExtraDeploymentPre, deploy_stdout]}",
"resource_registry: OS::TripleO::ControllerExtraConfigPre: /home/stack/templates/nameserver.yaml parameter_defaults: nameserver_ip: 192.168.1.1",
"openstack overcloud deploy --templates -e /home/stack/templates/pre_config.yaml",
"heat_template_version: 2014-10-16 description: > Extra hostname configuration parameters: server: type: string nameserver_ip: type: string DeployIdentifier: type: string resources: CustomExtraConfigPre: type: OS::Heat::SoftwareConfig properties: group: script config: str_replace: template: | #!/bin/sh echo \"nameserver _NAMESERVER_IP_\" >> /etc/resolv.conf params: _NAMESERVER_IP_: {get_param: nameserver_ip} CustomExtraDeploymentPre: type: OS::Heat::SoftwareDeployment properties: server: {get_param: server} config: {get_resource: CustomExtraConfigPre} actions: ['CREATE','UPDATE'] input_values: deploy_identifier: {get_param: DeployIdentifier} outputs: deploy_stdout: description: Deployment reference, used to trigger pre-deploy on changes value: {get_attr: [CustomExtraDeploymentPre, deploy_stdout]}",
"resource_registry: OS::TripleO::NodeExtraConfig: /home/stack/templates/nameserver.yaml parameter_defaults: nameserver_ip: 192.168.1.1",
"openstack overcloud deploy --templates -e /home/stack/templates/pre_config.yaml",
"heat_template_version: 2014-10-16 description: > Extra hostname configuration parameters: servers: type: json nameserver_ip: type: string DeployIdentifier: type: string EndpointMap: default: {} type: json resources: CustomExtraConfig: type: OS::Heat::SoftwareConfig properties: group: script config: str_replace: template: | #!/bin/sh echo \"nameserver _NAMESERVER_IP_\" >> /etc/resolv.conf params: _NAMESERVER_IP_: {get_param: nameserver_ip} CustomExtraDeployments: type: OS::Heat::SoftwareDeploymentGroup properties: servers: {get_param: servers} config: {get_resource: CustomExtraConfig} actions: ['CREATE','UPDATE'] input_values: deploy_identifier: {get_param: DeployIdentifier}",
"resource_registry: OS::TripleO::NodeExtraConfigPost: /home/stack/templates/nameserver.yaml parameter_defaults: nameserver_ip: 192.168.1.1",
"openstack overcloud deploy --templates -e /home/stack/templates/post_config.yaml",
"parameter_defaults: ComputeExtraConfig: nova::compute::reserved_host_memory: 1024 nova::compute::vnc_keymap: ja",
"openstack baremetal introspection data save 9dcc87ae-4c6d-4ede-81a5-9b20d7dc4a14 | jq .extra.system.product.uuid",
"\"F5055C6C-477F-47FB-AFE5-95C6928C407F\"",
"resource_registry: OS::TripleO::ComputeExtraConfigPre: /usr/share/openstack-tripleo-heat-templates/puppet/extraconfig/pre_deploy/per_node.yaml parameter_defaults: NodeDataLookup: '{\"F5055C6C-477F-47FB-AFE5-95C6928C407F\": {\"nova::compute::vcpu_pin_set\": [ \"2\", \"3\" ]}}'",
"heat_template_version: 2014-10-16 description: > Run Puppet extra configuration to set new MOTD parameters: servers: type: json resources: ExtraPuppetConfig: type: OS::Heat::SoftwareConfig properties: config: {get_file: motd.pp} group: puppet options: enable_hiera: True enable_facter: False ExtraPuppetDeployments: type: OS::Heat::SoftwareDeploymentGroup properties: config: {get_resource: ExtraPuppetConfig} servers: {get_param: servers}",
"resource_registry: OS::TripleO::NodeExtraConfigPost: /home/stack/templates/custom_puppet_config.yaml",
"openstack overcloud deploy --templates -e /home/stack/templates/puppet_post_config.yaml"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/advanced_overcloud_customization/chap-configuration_hooks |
Red Hat build of OpenTelemetry | Red Hat build of OpenTelemetry OpenShift Container Platform 4.15 Configuring and using the Red Hat build of OpenTelemetry in OpenShift Container Platform Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/red_hat_build_of_opentelemetry/index |
Chapter 1. Red Hat High Availability Add-On Configuration and Management Reference Overview | Chapter 1. Red Hat High Availability Add-On Configuration and Management Reference Overview This document provides descriptions of the options and features that the Red Hat High Availability Add-On using Pacemaker supports. For a step by step basic configuration example, see Red Hat High Availability Add-On Administration . You can configure a Red Hat High Availability Add-On cluster with the pcs configuration interface or with the pcsd GUI interface. 1.1. New and Changed Features This section lists features of the Red Hat High Availability Add-On that are new since the initial release of Red Hat Enterprise Linux 7. 1.1.1. New and Changed Features for Red Hat Enterprise Linux 7.1 Red Hat Enterprise Linux 7.1 includes the following documentation and feature updates and changes. The pcs resource cleanup command can now reset the resource status and failcount for all resources, as documented in Section 6.11, "Cluster Resources Cleanup" . You can specify a lifetime parameter for the pcs resource move command, as documented in Section 8.1, "Manually Moving Resources Around the Cluster" . As of Red Hat Enterprise Linux 7.1, you can use the pcs acl command to set permissions for local users to allow read-only or read-write access to the cluster configuration by using access control lists (ACLs). For information on ACLs, see Section 4.5, "Setting User Permissions" . Section 7.2.3, "Ordered Resource Sets" and Section 7.3, "Colocation of Resources" have been extensively updated and clarified. Section 6.1, "Resource Creation" documents the disabled parameter of the pcs resource create command, to indicate that the resource being created is not started automatically. Section 10.1, "Configuring Quorum Options" documents the new cluster quorum unblock feature, which prevents the cluster from waiting for all nodes when establishing quorum. Section 6.1, "Resource Creation" documents the before and after parameters of the pcs resource create command, which can be used to configure resource group ordering. As of the Red Hat Enterprise Linux 7.1 release, you can backup the cluster configuration in a tarball and restore the cluster configuration files on all nodes from backup with the backup and restore options of the pcs config command. For information on this feature, see Section 3.8, "Backing Up and Restoring a Cluster Configuration" . Small clarifications have been made throughout this document. 1.1.2. New and Changed Features for Red Hat Enterprise Linux 7.2 Red Hat Enterprise Linux 7.2 includes the following documentation and feature updates and changes. You can now use the pcs resource relocate run command to move a resource to its preferred node, as determined by current cluster status, constraints, location of resources and other settings. For information on this command, see Section 8.1.2, "Moving a Resource to its Preferred Node" . Section 13.2, "Event Notification with Monitoring Resources" has been modified and expanded to better document how to configure the ClusterMon resource to execute an external program to determine what to do with cluster notifications. When configuring fencing for redundant power supplies, you now are only required to define each device once and to specify that both devices are required to fence the node. For information on configuring fencing for redundant power supplies, see Section 5.10, "Configuring Fencing for Redundant Power Supplies" . 
This document now provides a procedure for adding a node to an existing cluster in Section 4.4.3, "Adding Cluster Nodes" . The new resource-discovery location constraint option allows you to indicate whether Pacemaker should perform resource discovery on a node for a specified resource, as documented in Table 7.1, "Simple Location Constraint Options" . Small clarifications and corrections have been made throughout this document. 1.1.3. New and Changed Features for Red Hat Enterprise Linux 7.3 Red Hat Enterprise Linux 7.3 includes the following documentation and feature updates and changes. Section 9.4, "The pacemaker_remote Service" , has been wholly rewritten for this version of the document. You can configure Pacemaker alerts by means of alert agents, which are external programs that the cluster calls in the same manner as the cluster calls resource agents to handle resource configuration and operation. Pacemaker alert agents are described in Section 13.1, "Pacemaker Alert Agents (Red Hat Enterprise Linux 7.3 and later)" . New quorum administration commands are supported with this release which allow you to display the quorum status and to change the expected_votes parameter. These commands are described in Section 10.2, "Quorum Administration Commands (Red Hat Enterprise Linux 7.3 and Later)" . You can now modify general quorum options for your cluster with the pcs quorum update command, as described in Section 10.3, "Modifying Quorum Options (Red Hat Enterprise Linux 7.3 and later)" . You can configure a separate quorum device which acts as a third-party arbitration device for the cluster. The primary use of this feature is to allow a cluster to sustain more node failures than standard quorum rules allow. This feature is provided for technical preview only. For information on quorum devices, see Section 10.5, "Quorum Devices" . Red Hat Enterprise Linux release 7.3 provides the ability to configure high availability clusters that span multiple sites through the use of a Booth cluster ticket manager. This feature is provided for technical preview only. For information on the Booth cluster ticket manager, see Chapter 14, Configuring Multi-Site Clusters with Pacemaker . When configuring a KVM guest node running a the pacemaker_remote service, you can include guest nodes in groups, which allows you to group a storage device, file system, and VM. For information on configuring KVM guest nodes, see Section 9.4.5, "Configuration Overview: KVM Guest Node" . Additionally, small clarifications and corrections have been made throughout this document. 1.1.4. New and Changed Features for Red Hat Enterprise Linux 7.4 Red Hat Enterprise Linux 7.4 includes the following documentation and feature updates and changes. Red Hat Enterprise Linux release 7.4 provides full support for the ability to configure high availability clusters that span multiple sites through the use of a Booth cluster ticket manager. For information on the Booth cluster ticket manager, see Chapter 14, Configuring Multi-Site Clusters with Pacemaker . Red Hat Enterprise Linux 7.4 provides full support for the ability to configure a separate quorum device which acts as a third-party arbitration device for the cluster. The primary use of this feature is to allow a cluster to sustain more node failures than standard quorum rules allow. For information on quorum devices, see Section 10.5, "Quorum Devices" . You can now specify nodes in fencing topology by a regular expression applied on a node name and by a node attribute and its value. 
For information on configuring fencing levels, see Section 5.9, "Configuring Fencing Levels" . Red Hat Enterprise Linux 7.4 supports the NodeUtilization resource agent, which can detect the system parameters of available CPU, host memory availability, and hypervisor memory availability and add these parameters into the CIB. For information on this resource agent, see Section 9.6.5, "The NodeUtilization Resource Agent (Red Hat Enterprise Linux 7.4 and later)" . For Red Hat Enterprise Linux 7.4, the cluster node add-guest and the cluster node remove-guest commands replace the cluster remote-node add and cluster remote-node remove commands. The pcs cluster node add-guest command sets up the authkey for guest nodes and the pcs cluster node add-remote command sets up the authkey for remote nodes. For updated guest and remote node configuration procedures, see Section 9.3, "Configuring a Virtual Domain as a Resource" . Red Hat Enterprise Linux 7.4 supports the systemd resource-agents-deps target. This allows you to configure the appropriate startup order for a cluster that includes resources with dependencies that are not themselves managed by the cluster, as described in Section 9.7, "Configuring Startup Order for Resource Dependencies not Managed by Pacemaker (Red Hat Enterprise Linux 7.4 and later)" . The format for the command to create a resource as a master/slave clone has changed for this release. For information on creating a master/slave clone, see Section 9.2, "Multistate Resources: Resources That Have Multiple Modes" . 1.1.5. New and Changed Features for Red Hat Enterprise Linux 7.5 Red Hat Enterprise Linux 7.5 includes the following documentation and feature updates and changes. As of Red Hat Enterprise Linux 7.5, you can use the pcs_snmp_agent daemon to query a Pacemaker cluster for data by means of SNMP. For information on querying a cluster with SNMP, see Section 9.8, "Querying a Pacemaker Cluster with SNMP (Red Hat Enterprise Linux 7.5 and later)" . 1.1.6. New and Changed Features for Red Hat Enterprise Linux 7.8 Red Hat Enterprise Linux 7.8 includes the following documentation and feature updates and changes. As of Red Hat Enterprise Linux 7.8, you can configure Pacemaker so that when a node shuts down cleanly, the resources attached to the node will be locked to the node and unable to start elsewhere until they start again when the node that has shut down rejoins the cluster. This allows you to power down nodes during maintenance windows when service outages are acceptable without causing that node's resources to fail over to other nodes in the cluster. For information on configuring resources to remain stopped on clean node shutdown, see Section 9.9, " Configuring Resources to Remain Stopped on Clean Node Shutdown (Red Hat Enterprise Linux 7.8 and later) " . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/ch-overview-haar |
Chapter 3. Release Information | Chapter 3. Release Information These release notes highlight technology preview items, recommended practices, known issues, and deprecated functionality to be taken into consideration when deploying this release of Red Hat OpenStack Platform. Notes for updates released during the support lifecycle of this Red Hat OpenStack Platform release will appear in the advisory text associated with each update. 3.1. Red Hat OpenStack Platform 16.0 GA These release notes highlight technology preview items, recommended practices, known issues, and deprecated functionality to be taken into consideration when deploying this release of Red Hat OpenStack Platform. 3.1.1. Bug Fix These bugs were fixed in this release of Red Hat OpenStack Platform: BZ# 1716335 In Red Hat OpenStack Platform 16.0, live migrations with OVN enabled now succeed, as the flag, live_migration_wait_for_vif_plug, is enabled by default. Previously, live migrations failed, because the system was waiting for OpenStack Networking (neutron) to send vif_plugged notifications. BZ# 1758302 Previously, the regular expression for the oslo.util library was not updated, and it failed to recognize the output format from a newer version of the emulator, qemu (version 4.1.0). This fix in Red Hat OpenStack 16.0 updates the regular expression, and the oslo.util.imageutils library now functions properly. BZ# 1769868 Previously, the mesh network infrastructure was configured incorrectly for the message router, QDR, and this caused the AMQP-1.0 message bus on the Service Telemetry Framework (STF) client not to function. This fix corrects the configuration for the qdrouterd daemon on all overcloud nodes, and the STF client now works properly. BZ# 1775246 The NUMATopologyFilter is now disabled when rebuilding instances. Previously, the filter would always execute and the rebuild would only succeed if a host had enough additional capacity for a second instance using the new image and existing flavor. This was incorrect and unnecessary behavior. 3.1.2. Enhancements This release of Red Hat OpenStack Platform features the following enhancements: BZ# 1222414 With this enhancement, support for live migration of instances with a NUMA topology has been added. Previously, this action was disabled by default. It could be enabled using the '[workarounds] enable_numa_live_migration' config option, but this defaulted to False because live migrating such instances resulted in them being moved to the destination host without updating any of the underlying NUMA guest-to-host mappings or the resource usage. With the new NUMA-aware live migration feature, if the instance cannot fit on the destination, the live migration will be attempted on an alternate destination if the request is set up to have alternates. If the instance can fit on the destination, the NUMA guest-to-host mappings will be re-calculated to reflect its new host, and its resource usage updated. BZ# 1328124 Red Hat OpenStack Platform 16.0 director, now supports multi-compute cell deployments. With this enhancement, your cloud is better positioned for scaling out, because each individual cell has its own database and message queue on a cell controller and reduces the load on the central control plane. For more information, see "Scaling deployments with Compute cells" in the "Instances and Images" guide. BZ# 1360970 With this enhancement, support for live migration of instances with attached SR-IOV-based neutron interfaces has been added. 
Neutron SR-IOV interfaces can be grouped into two categories: direct mode and indirect mode. Direct mode SR-IOV interfaces are directly attached to the guest and exposed to the guest OS. Indirect mode SR-IOV interfaces have a software interface, for example, a macvtap, between the guest and the SR-IOV device. This feature enables transparent live migration for instances with indirect mode SR-IOV devices. As there is no generic way to copy hardware state during a live migration, direct mode migration is not transparent to the guest. For direct mode interfaces, mimic the workflow already in place for suspend and resume. For example, with SR-IOV devices, detach the direct mode interfaces before migration and re-attach them after the migration. As a result, instances with direct mode SR-IOV ports lose network connectivity during a migration unless a bond with a live migratable interface is created within the guest. Previously, it was not possible to live migrate instances with SR-IOV-based network interfaces. This was problematic as live migration is frequently used for host maintenance and similar actions. Previously, the instance had to be cold migrated which involves downtime for the guest. This enhancement results in the live migration of instances with SR-IOV-based network interfaces. BZ# 1463838 In Red Hat OpenStack Platform 16.0, it is now possible to specify QoS minimum bandwidth rules when creating network interfaces. This enhancement ensures that the instance is guaranteed a specified value of a network's available bandwidth. Currently, the only supported operations are resize and cold migrate. BZ# 1545700 The Red Hat OpenStack Platform Block Storage service (cinder) now automatically changes the encryption keys when cloning volumes. Note, that this feature currently does not support using Red Hat Ceph Storage as a cinder back end. BZ# 1545855 In Red Hat OpenStack Platform 16.0, you are now able to push, list, delete, and show (show metadata) images on the local registry. To push images from remote repository to the main repository: To list the contents of the repository: To delete images: To show metadata for an image: BZ# 1593057 With this enhancement, overcloud node deletion requires user confirmation before the action will be performed to reduce the likelihood that the action is performed unintentionally. The openstack overcloud node delete <node> command requires a Y/n confirmation before the action executes. You can bypass this by adding --yes to the command line. BZ# 1601926 Starting with this update, OSP deployments have full encryption between all the OVN services. All OVN clients (ovn-controller, neutron-server and ovn-metadata-agent) now connect to the OVSDB server using Mutual TLS encryption. BZ# 1625244 The Placement service has been extracted from the Compute (nova) service. It is now deployed and managed by the director, and runs as an additional container on the undercloud and on overcloud controller nodes. BZ# 1628541 In the Red Hat OpenStack Platform 16.0 dashboard (horizon), there is now a new form for changing a user's password. This form automatically appears when a user tries to sign on with an expired password. BZ# 1649264 The Red Hat OpenStack Platform Orchestration service (heat) now includes a new resource type, OS::Glance::WebImage, used for creating an Image service (glance) image from a URL using the Glance v2 API. This new resource type replaces an earlier one, OS::Glance::Image. 
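As an illustration of the OS::Glance::WebImage resource type described in BZ# 1649264, the following is a minimal Orchestration template sketch; the image name, URL, and property names are assumptions based on the older OS::Glance::Image resource, so verify them against the resource reference for your release:

heat_template_version: rocky
resources:
  web_image:
    type: OS::Glance::WebImage
    properties:
      name: cirros-web
      location: http://example.com/images/cirros.qcow2
      container_format: bare
      disk_format: qcow2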
BZ# 1653834 This enhancement adds the boolean parameter NovaComputeEnableKsm . The parameter enables the ksm and ksmtuned service on compute nodes. You can set NovaComputeEnableKsm for each Compute role. Default: False . BZ# 1666973 In Red Hat OpenStack Platform 16.0, you can now add custom Red Hat Ceph Storage configuration settings to any section of ceph.conf. Previously, custom settings were allowed only in the [global] section of ceph.conf. BZ# 1689816 In Red Hat OpenStack Platform 16.0, a new Orchestration service (heat) deployment parameter is available that enables administrators to turn on the nova metadata service on cell controllers: This new parameter automatically directs traffic from the OVN metadata agent on the cell computes to the nova metadata API service hosted on the cell controllers. Depending on the RHOSP topology, the ability to run the metadata service on cell controllers can reduce the traffic on the central control plane. BZ# 1691025 You can now use the Octavia API to create a VIP access control list (ACL) to limit incoming traffic to a listener to a set of allowed source IP addresses (CIDRs). Any other incoming traffic is rejected. For more information, see "Secure a load balancer with an access control list" in the "Networking Guide." BZ# 1693372 With this enhancement, you can schedule dedicated (pinned) and shared (unpinned) instances on the same Compute node using the following parameters: NovaComputeCpuDedicatedSet - A comma-separated list or range of physical host CPU numbers to which processes for pinned instance CPUs can be scheduled. Replaces the NovaVcpuPinSet parameter, which is now deprecated. NovaComputeCpuSharedSet - A comma-separated list or range of physical host CPU numbers used to provide vCPU inventory, determine the host CPUs that unpinned instances can be scheduled to, and determine the host CPUS that instance emulator threads should be offloaded to for instances configured with the share emulator thread policy, hw:emulator_threads_policy=share . Note: This option previously existed but its purpose has been extended with this feature. It is no longer necessary to use host aggregates to ensure these instance types run on separate hosts. Also, the [DEFAULT] reserved_host_cpus config option is no longer necessary and can be unset. To upgrade: For hosts that were previously used for pinned instances, the value of NovaVcpuPinSet should be migrated to NovaComputeCpuDedicatedSet . For hosts that were previously used for unpinned instances, the value of NovaVcpuPinSet should be migrated to NovaComputeCpuSharedSet . If there is no value set for NovaVcpuPinSet , then all host cores should be assigned to either NovaComputeCpuDedicatedSet or NovaComputeCpuSharedSet , depending on the type of instance running there. Once the upgrade is complete, it is possible to start setting both options on the same host. However, to do this, the host should be drained of instances as nova will not start when cores for an unpinned instance are not listed in NovaComputeCpuSharedSet and vice versa. BZ# 1696663 This update allows you to configure NUMA affinity for most neutron networks. This helps you ensure that instances are placed on the same host NUMA node as the NIC providing external connectivity to the vSwitch. You can configure NUMA affinity on networks that use: --'provider:network_type' of 'flat' or 'vlan' and a 'provider:physical_network' (L2 networks) or --'provider:network_type' of 'vxlan' , 'gre' or 'geneve' (L3 networks). 
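To illustrate the NovaComputeCpuDedicatedSet and NovaComputeCpuSharedSet parameters described in BZ# 1693372 above, a minimal environment file sketch follows; the CPU ranges are arbitrary examples and must be adjusted to match the host CPU topology:

parameter_defaults:
  NovaComputeCpuDedicatedSet: "4-23"
  NovaComputeCpuSharedSet: "0-3"

With this configuration, pinned instance vCPUs are scheduled onto cores 4-23, while unpinned instances and offloaded emulator threads use cores 0-3.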
BZ# 1700396 In Red Hat OpenStack Platform 16.0, you can now use director to specify an availability zone for the Block Storage service (cinder) back end type. BZ# 1767481 Previously, when Novajoin lost its connection to the IPA server, it would immediately attempt to reconnect. Consequently, timing issues could arise and prevent the connection from being re-established. With this update, you can use retry_delay to set the number of seconds to wait before retrying the IPA server connection. As a result, this is expected to help mitigate the timing issues. BZ# 1775575 You can now configure PCI NUMA affinity on an instance-level basis. This is required to configure NUMA affinity for instances with SR-IOV-based network interfaces. Previously, NUMA affinity was only configurable at a host-level basis for PCI passthrough devices. BZ# 1784806 In Red Hat Openstack Platform 16.0, a deployment enhancement eases configuring OVS-DPDK by automatically deriving the Orchestration service (heat) parameters required for the compute node on which OVS-DPDK is deployed. The Workflow service (mistral) has been enhanced to read heat templates and introspection data to automatically derive the necessary values for the heat parameters, NovaComputeCpuDedicatedSet and NovaComputeCpuSharedSet. 3.1.3. Technology Preview The items listed in this section are provided as Technology Previews. For further information on the scope of Technology Preview status, and the associated support implications, refer to https://access.redhat.com/support/offerings/techpreview/ . BZ# 1228474 After a Red Hat OpenStack Platform 16.0 director deployment, the Identity service (keystone) now has a new default role, reader, which the other OpenStack services have not yet implemented. The reader role in keystone should not be used in a production environment, because the role is in technology preview and incorrectly grants privileges that users assigned to the role should not have, such as the ability to create volumes. BZ# 1288155 Defining multiple route tables and assigning routes to particular tables is a technology preview in Red Hat OpenStack Platform 16.0. Policy-based routing uses route tables where, on a host with multiple links, you can send traffic through a particular interface depending on the source address. You can also define route rules on a per-interface basis, as shown in this example: BZ# 1375207 Previously, when using Red Hat Ceph Storage as a back end for both the Block Storage service (cinder) volumes and backups, any attempt to perform a full backup- after the first full backup- instead resulted in an incremental backup without any warning. In Red Hat OpenStack Platform 16.0, a technology preview has fixed this issue. BZ# 1459187 In Red Hat OpenStack Platform 16.0, a technology preview has been added to the Bare Metal Provisioning service (ironic) for deploying the overcloud on an IPv6 provisioning network. For more information, see "Configuring a custom IPv6 provisioning network," in the Bare Metal Provisioning guide. BZ# 1474394 In Red Hat OpenStack Platform 16.0, a technology preview has been added for the Bare Metal Provisioning service (ironic) deploying over an IPv6 provisioning network for BMaaS (Bare Metal as-a-Service) tenants. BZ# 1575079 In Red Hat OpenStack Platform 16.0, a technology preview has been added for the Shared File Systems service (manila) for IPv6 to work in the CephFS NFS driver. This feature requires Red Hat Ceph Storage 4.1. 
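For the instance-level PCI NUMA affinity noted in BZ# 1775575 above, the policy is typically requested through a flavor extra spec. A sketch, assuming a flavor named sriov.medium and the hw:pci_numa_affinity_policy extra spec (verify the exact spec name and allowed values against the Compute service documentation for your release):

openstack flavor set sriov.medium --property hw:pci_numa_affinity_policy=preferred

Instances launched with this flavor request a preferred, rather than strict, NUMA affinity between the guest and its PCI devices.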
BZ# 1593828 In Red Hat OpenStack Platform 16.0, a technology preview has been added for booting bare metal machines from virtual media using the Bare Metal Provisioning service (ironic). If the baseboard management controller (BMC) for a machine supports Redfish hardware management protocol and virtual media service, ironic can instruct the BMC to pull a bootable image and "insert" it into a virtual drive on a node. The node can then boot from that virtual drive into the operating system residing on the image. Ironic hardware types based on the Redfish API support deploy, rescue (with a limitation), and boot (user) images over virtual media. The major advantage of virtual media boot is that the insecure and unreliable TFTP image transfer phase of the PXE boot protocol suite is replaced by secure HTTP transport. BZ# 1600967 In Red Hat OpenStack Platform 16.0, a Workflow service (mistral) task is in technology preview that allows you to implement password rotation by doing the following: Execute the rotate-password workflow to generate new passwords and store them in the plan environment. Redeploy your overcloud. You can also obtain your passwords after you have changed them. To implement password rotation, follow these steps: Note The workflow task modifies the default passwords. The task does not modify passwords that are specified in a user-provided environment file. Execute the new workflow task to regenerate the passwords: This command generates new passwords for all passwords except for BarbicanSimpleCryptoKek and KeystoneFernet* and KeystoneCredential*. There are special procedures to rotate these passwords. It is also possible to specify specific passwords to be rotated. The following command rotates only the specified passwords. Redeploy your overcloud: To retrieve the passwords, including the newly generated ones, follow these steps: Run the following command: You should see output from the command, similar to the following: In the earlier example output, the value of State is RUNNING. State should eventually read SUCCESS. Re-check the value of State: When the value of State is SUCCESS, you can retrieve passwords: You should see output similar to the following: BZ# 1621701 In Red Hat OpenStack Platform 16.0, a technology preview is added to the OpenStack Bare Metal service (ironic) to configure ML2 networking-ansible functionality with Arista Extensible Operating System (Arista EOS) switches. For more information, see "Enabling networking-ansible ML2 functionality," in the Bare Metal Provisioning guide. BZ# 1622233 In Red Hat OpenStack Platform 16.0, a technology preview has been added to modify switch ports to put them into trunking mode and assign more than one VLAN to them. BZ# 1623152 In Red Hat OpenStack Platform 16.0, a technology preview has been added to the Orchestration service (heat) for rsyslog changes: Rsyslog is configured to collect and forward container logs to be functionally equivalent to the fluentd installation. Administrators can configure rsyslog log forwarding in the same way as fluentd. BZ# 1628061 In Red Hat OpenStack Platform 16.0, you can use director to include in-flight validations in the service template. This feature is a technology preview in RHOSP 16.0. Additions can be inserted at the end of the step to be checked, or at the beginning of the step. 
In this example, a validation is performed to ensure that the rabbitmq service is running after its deployment: Heat enables you to include existing validations from the openstack-tripleo-validations roles: You can find the definition of the rabbitmq-limits role here: https://opendev.org/openstack/tripleo-validations/src/branch/stable/train/roles/rabbitmq_limits/tasks/main.yml Here is an example of using the existing service health check: BZ# 1699449 Red Hat OpenStack Platform director now offers a technology preview for fence_redfish, a fencing agent for the Redfish API. BZ# 1700083 In Red Hat OpenStack Platform 16.0, a technology preview has been added for the Bare Metal Provisioning service (ironic) to work with Intel Speed Select processors. BZ# 1703956 In Red Hat OpenStack Platform 16.0, the Load-balancing service (octavia) now has a technology preview for UDP protocol. BZ# 1706896 In Red Hat OpenStack Platform 16.0, a technology preview has been added to the Image service (glance) that pre-caches images so that operators can warm the cache before they boot an instance. BZ# 1710089 Director has added the overcloud undercloud minion install command that you can use to configure an additional host to augment the Undercloud services. BZ# 1710092 Director now provides the ability to deploy an additional node that you can use to add additional heat-engine resources for deployment related actions. BZ# 1710093 Red Hat OpenStack Platform director now enables you to deploy an additional node that can be used to add additional Bare Metal Provisioning conductor service resources for system provisioning during deployments. BZ# 1710634 In Red Hat OpenStack Platform 16.0, a technology preview has been added to the Orchestration service (heat). A new parameter, NovaSchedulerQueryImageType, has been added that controls the Compute service (nova) placement and scheduler components query placement for image type (scheduler/query_placement_for_image_type_support). When set to true (the default), NovaSchedulerQueryImageType excludes compute nodes that do not support the disk format of the image used in a boot request. For example, the libvirt driver uses Red Hat Ceph Storage as an ephemeral back end, and does not support qcow2 images (without an expensive conversion step). In this case, enabling NovaSchedulerQueryImageType ensures that the scheduler does not send requests to boot a qcow2 image to compute nodes that use Red Hat Ceph Storage. BZ# 1749483 You can now forward the traffic from a TCP, UDP, or other protocol port of a floating IP address to a TCP, UDP, or other protocol port associated to one of the fixed IP addresses of a neutron port. Forwarded traffic is managed by an extension to the neutron API and by an OpenStack Networking plug-in. A floating IP address can have more than one forwarding definition configured. However, you cannot forward traffic for IP addresses that have a pre-existing association to an OpenStack Networking port. Traffic can only be forwarded for floating IP addresses that are managed by centralized routers on the network (legacy, HA, and DVR+HA). To forward traffic for a port of a floating IP address, use the following OpenStack Networking plug-in command: --internal-ip-address <internal-ip-address> The fixed, IPv4, internal IP address of the neutron port that will receive the forwarded traffic. --port <port> The name or ID of the neutron port that will receive the forwarded traffic. 
--internal-protocol-port <port-number> The protocol port number of the fixed IP address of the neutron port that will receive the forwarded traffic. --external-protocol-port <port-number> The protocol port number of the port of the floating IP address that will forward its traffic. --protocol <protocol> The protocol that the port of the floating IP address uses (for example, TCP, UDP). <floating-ip> The floating IP (IP address or ID) of the port that will forward its traffic. Here is an example: 3.1.4. Release Notes This section outlines important details about the release, including recommended practices and notable changes to Red Hat OpenStack Platform. You must take this information into account to ensure the best possible outcomes for your deployment. BZ# 1481814 Previously, when an encrypted Block Storage service (cinder) volume image was deleted, its corresponding key was not deleted. In Red Hat OpenStack Platform 16.0, this issue has been resolved. When the Image service deletes a cinder volume image, it also deletes the key for the image. BZ# 1783044 With the general availability of Red Hat Ceph Storage version 4, you can now install ceph-ansible from the rhceph-4-tools-for-rhel-8-x86_64-rpms repository. 3.1.5. Known Issues These known issues exist in Red Hat OpenStack Platform at this time: BZ# 1574431 There is a known issue for the Block Storage service (cinder) where quota commands do not work as expected. The cinder CLI allows users to successfully create quota entries without checking for a valid project ID. Quota entries that the CLI creates without valid project IDs are dummy records that contain invalid data. Until this issue is fixed, CLI users should make sure to specify a valid project ID when creating quota entries, and monitor cinder for dummy records. BZ# 1647005 The nova-compute ironic driver tries to update the bare metal (BM) node while the node is being cleaned. The cleaning takes approximately five minutes, but nova-compute attempts to update the node for only approximately two minutes. After the timeout, nova-compute stops and puts the nova instance into the ERROR state. As a workaround, set the following configuration option for the nova-compute service: As a result, nova-compute continues to attempt to update the BM node for longer and eventually succeeds. BZ# 1734301 Currently, the OVN load balancer does not open new connections when fetching data from members. The load balancer modifies the destination address and destination port and sends request packets to the members. As a result, it is not possible to define an IPv6 member while using an IPv4 load balancer address, and vice versa. There is currently no workaround for this issue. BZ# 1769880 There is a known issue where migrations from ML2/OVS to OVN fail. The failure is caused by the new protective mechanism in Red Hat OpenStack Platform director to prevent upgrades while changing mechanism drivers. For the workaround, see "Preparing for the migration" in the "Networking with Open Virtual Network" guide. BZ# 1779221 Red Hat OpenStack Platform deployments that use the Linux bridge ML2 driver and agent are unprotected against Address Resolution Protocol (ARP) spoofing. The version of Ethernet bridge frame table administration (ebtables) that is part of Red Hat Enterprise Linux 8 is incompatible with the Linux bridge ML2 driver. The Linux bridge ML2 driver and agent were deprecated in Red Hat OpenStack Platform 11 and should not be used.
Red Hat recommends that you instead use the ML2 Open Virtual Network (OVN) driver and services, which Red Hat OpenStack Platform director deploys by default. BZ# 1789822 Replacement of an overcloud Controller might cause swift rings to become inconsistent across nodes. This can result in decreased availability of the Object Storage service. If this happens, log in to the previously existing Controller node using SSH, deploy the updated rings, and restart the Object Storage containers: BZ# 1790467 There is a known issue in Red Hat OpenStack Platform 16.0 where metadata information required for configuring OpenStack instances is not available, and instances might be started without connectivity. An ordering issue causes the haproxy wrappers not to be updated for the ovn_metadata_agent. A possible workaround is for the cloud operator to run the following Ansible command to restart the ovn_metadata_agent on select nodes after the update, to ensure that the ovn_metadata_agent is using an updated version of the haproxy wrapper script: In the earlier Ansible command, nodes may be a single node (for example, compute-0 ), all Compute nodes (for example, compute* ), or "all" . Because the ovn_metadata_agent is most commonly found on Compute nodes, the following Ansible command restarts the agent for all Compute nodes in the cloud: After you restart the ovn_metadata_agent services, they use the updated haproxy wrapper script, which enables them to provide metadata to VMs when they are started. Affected VMs that are already running should behave normally when they are restarted after the workaround has been applied. BZ# 1793166 There is a known issue in Red Hat OpenStack Platform 16.0 where KVM guests do not start on IBM POWER8 systems unless the simultaneous multithreading (SMT) control is disabled. SMT is not disabled automatically. The workaround is to execute sudo ppc64_cpu --smt=off on any IBM POWER8 compute nodes after deploying the overcloud and after any subsequent reboots. BZ# 1793440 In Red Hat OpenStack Platform 16.0, there is a known issue where the openstack network agent list command intermittently indicates that the OVN agents are down, when the agents are actually alive and the cloud is operational. The affected agents are: OVN Controller agent, OVN Metadata agent, and OVN Controller Gateway agent. There is currently no workaround for this issue. You should ignore the output of the openstack network agent list command. BZ# 1794328 There is a known issue where Red Hat OpenStack Platform 16.0 overcloud installations fail when the Load-balancing service (octavia) is configured with a composable role. Currently, there is no identified workaround for this issue. For more information, see the BZ# itself: https://bugzilla.redhat.com/show_bug.cgi?id=1794328 . BZ# 1795165 There is a known issue for OpenStack Networking (neutron) where all instances created inherit the dns_domain value associated with the network, and not the dns_domain value configured for an internal DNS. The cause of this issue is that the network dns_domain attribute overrides the neutron dns_domain config option. To avoid this issue, do not set the dns_domain attribute for the network if you want to use the internal DNS feature.
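For BZ# 1795165 above, a quick way to check whether an existing network is overriding the internal DNS configuration is to inspect its dns_domain attribute. The following sketch is not part of the official workaround: the network name internal_net is hypothetical, and clearing the attribute by passing an empty string is an assumption that may vary between neutron versions.
# Show whether the network carries its own dns_domain value
openstack network show -c dns_domain -f value internal_net
# If a value is set and the internal DNS dns_domain should apply instead,
# clear the attribute on the network (assumed to accept an empty string)
openstack network set --dns-domain "" internal_net
# Re-check; an empty result means the neutron dns_domain config option applies
openstack network show -c dns_domain -f value internal_net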
BZ# 1795688 To allow the neutron_api service to access Placement services deployed on the Controller node, as required when you use the Novacontrol role, add the following hieradata configuration to your Controller environment file: For more information about using Puppet to customize hieradata for roles, see https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.0/html-single/advanced_overcloud_customization/index#sect-Customizing_Puppet_Configuration_Data . Note This configuration is required when deploying an overcloud with a custom role where Placement is not running on the same nodes as neutron_api. BZ# 1795956 There is a known issue for the Red Hat OpenStack Platform Load-balancing service: the containers octavia_api and octavia_driver_agent fail to start when rebooting a node. The cause of this issue is that the directory /var/run/octavia does not exist when the node is rebooted. To fix this issue, add the following line to the file /etc/tmpfiles.d/var-run-octavia.conf: BZ# 1796215 In Red Hat OpenStack Platform 16.0, there is a known issue where ansible-playbook can sometimes fail during configuration of the overcloud nodes. The cause of the failure is that the tripleo-admin user is not authorized for SSH. Furthermore, the openstack overcloud deploy command argument --stack-only no longer runs the enable ssh admin workflow to authorize the tripleo-admin user. The workaround is to use the openstack overcloud admin authorize command to run the enable ssh admin workflow on its own when using --stack-only and the manual config-download commands. For more information, see "Separating the provisioning and configuration processes" in the Director Installation and Usage guide. BZ# 1797047 The manila access-list feature requires Red Hat Ceph Storage 4.1 or later. Red Hat Ceph Storage 4.0 has a packaging issue. As a result, customers cannot use manila access-list. Share creation works, but without manila access-list, the share is unusable. Consequently, customers cannot use the Shared File System service with CephFS via NFS. For more information, see https://bugzilla.redhat.com/show_bug.cgi?id=1797075 . BZ# 1797892 There is a known issue in Red Hat OpenStack Platform 16.0 where nodes that experience a hard (ungraceful) shutdown put containers that were previously running into a "Created" state in podman when the node is turned back on. The reason for this issue is that the metadata agent fails to spawn a new container because the container already exists in the "Created" state. The haproxy side-car container wrapper script expects containers to be in only the "Exited" state, and does not clean up containers in the "Created" state. A possible workaround is for the cloud operator to run the following Ansible ad-hoc command to clean up all haproxy containers in the "Created" state. You must run this Ansible ad-hoc command from the undercloud, targeting a particular node, a group of nodes, or the whole cluster: In the earlier Ansible ad-hoc command, nodes can be a single host from the inventory, a group of hosts, or "all". Here is an example of running the command on compute-0 : After running the Ansible ad-hoc command, the metadata agent should then spawn a new container for the given network. 3.1.6. Removed Functionality BZ# 1518222 In Red Hat OpenStack Platform 16.0, a part of the Telemetry service, the ceilometer client (which was deprecated in an earlier RHOSP release) is no longer supported and has been removed.
Note that ceilometer continues to be a part of RHOSP as an agent-only service (no client and no API). BZ# 1631508 In Red Hat OpenStack Platform 16.0, the controller-v6.yaml file is removed. The routes that were defined in controller-v6.yaml are now defined in controller.yaml. (The controller.yaml file is a NIC configuration file that is rendered from values set in roles_data.yaml.) Previous versions of Red Hat OpenStack Platform director included two routes: one for IPv6 on the External network (default) and one for IPv4 on the Control Plane. To use both default routes, make sure that the controller definition in roles_data.yaml contains both networks in default_route_networks (for example, default_route_networks: ['External', 'ControlPlane'] ). BZ# 1712981 The Data Processing service (sahara) is deprecated in Red Hat OpenStack Platform (RHOSP) 15 and removed in RHOSP 16.0. Red Hat continues to offer support for the Data Processing service in RHOSP versions 13 and 15. BZ# 1754560 In Red Hat OpenStack Platform 16.0, the Elastic Compute Cloud (EC2) API is no longer supported. The EC2 API support is now deprecated in director and will be removed in a future RHOSP release. BZ# 1764894 In Red Hat OpenStack Platform 16.0, the following environment file has been removed: /usr/share/openstack-tripleo-heat-templates/environments/deployed-server-bootstrap-environment-rhel.yaml . This environment file was previously used with pre-provisioned nodes. It was deprecated in an earlier RHOSP release and has now been removed. BZ# 1795271 In Red Hat OpenStack Platform 16.0, ephemeral disk encryption is deprecated. Bug fixes and support will be provided through the end of the 16.0 life cycle, but no new feature enhancements will be made. 3.2. Red Hat OpenStack Platform 16.0.1 Maintenance Release These release notes highlight technology preview items, recommended practices, known issues, and deprecated functionality to be taken into consideration when deploying this release of Red Hat OpenStack Platform. 3.2.1. Enhancements This release of Red Hat OpenStack Platform features the following enhancements: BZ# 1784222 With this update, the pcs service now restricts listening to the InternalApi network by default. BZ# 1790752 Previously, when Red Hat Ceph Storage was used as a back end for both Block Storage service (cinder) volumes and backups, any attempt to perform a full backup after the first full backup instead resulted in an incremental backup without any warning. In Red Hat OpenStack Platform 16.0.1, the fix for this issue is fully supported. 3.2.2. Known Issues These known issues exist in Red Hat OpenStack Platform at this time: BZ# 1769880 There is a known issue where migrations from ML2/OVS to OVN fail. The failure is caused by the new protective mechanism in Red Hat OpenStack Platform director to prevent upgrades while changing mechanism drivers. For the workaround, see "Preparing for the migration" in the Networking with Open Virtual Network guide: https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.0/html/networking_with_open_virtual_network/migrating-ml2ovs-to-ovn#preparing_for_the_migration BZ# 1790467 There is a known issue in Red Hat OpenStack Platform 16.0, where metadata information required for configuring OpenStack instances is not available, and instances might be started without connectivity. An ordering issue causes the haproxy wrappers not to be updated for the ovn_metadata_agent service.
Workaround: Run the following Ansible command to restart the ovn_metadata_agent service on select nodes after the update to ensure that the ovn_metadata_agent service uses an updated version of the haproxy wrapper script: ansible -b <nodes> -i /usr/bin/tripleo-ansible-inventory -m shell -a "status=`sudo systemctl is-active tripleo_ovn_metadata_agent ; if test \"USDstatus\" == \"active\"; then sudo systemctl restart tripleo_ovn_metadata_agent; echo restarted; fi"` In this command, nodes can be a single node (for example, compute-0 ), all Compute nodes (for example, compute* ) or "all" . After you restart the ovn_metadata_agent services, the services use the updated haproxy wrapper script and can provide metadata to VMs at startup. After you apply the workaround, affected VMs that are already running behave normally after a restart. BZ# 1793440 In Red Hat OpenStack 16.0, there is a known issue where the command openstack network agent list intermittently indicates that the OVN agents are down, when the agents are actually alive and the cloud is operational. The affected agents are: OVN Controller agent, OVN Metadata agent, and OVN Controller Gateway agent. There is currently no workaround for this issue. Ignore the output of the "openstack network agent list" command. BZ# 1795165 There is a known issue for OpenStack Networking (neutron) where all instances created inherit the dns_domain value associated with the network, and not the dns_domain value configured for an internal DNS. The cause of this issue is that the network dns_domain attribute overrides the neutron dns_domain config option. To avoid this issue, do not set the dns_domain attribute for the network if you want to use the internal DNS feature. BZ# 1795688 To allow the neutron_api service to access Placement services on Controller nodes, for example, when you use the Novacontrol role, add the following hieradata configuration to your Controller environment file: For more information about using Puppet to customize hieradata for roles, see https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.0/html-single/advanced_overcloud_customization/index#sect-Customizing_Puppet_Configuration_Data . Note: This configuration is required when deploying an overcloud with a custom role where Placement is not running on the same nodes as neutron_api. BZ# 1797892 There is a known issue in Red Hat OpenStack Platform 16.0, when nodes that experience a hard shutdown put containers that were previously running into a Created state in podman when the node reboots. As a workaround, you can run the following Ansible command to clean all haproxy containers in the Created state: ansible -b <nodes> -i /usr/bin/tripleo-ansible-inventory -m shell -a "podman ps -a --format {{'{{'}}.ID{{'}}'}} -f name=haproxy,status=created | xargs podman rm -f || :" Replace <nodes> with a single host from the inventory, a group of hosts, or all . After you run this command, the metadata-agent spawns a new container for the given network. BZ# 1802573 There is a known issue where Mistral containers do not restart during minor updates and the update prepare times out after 10 hours. The workaround is to restart the containers manually. BZ# 1804848 There is a known issue when all of the following conditions exist: (0) You are using the OpenStack Train release (or code from master (Ussuri development)) (1) cinder_encryption_key_id and cinder_encryption_key_deletion_policy are not included in the non_inheritable_image_properties setting in nova.conf. 
These properties are not included by default. (2) A user has created a volume of an encrypted volume-type in the Block Storage service (cinder). For example, Volume-1. (3) Using the Block Storage service, the user has uploaded the encrypted volume as an image to the Image service (glance). For example, Image-1. (4) Using the Compute service (nova), the user has attempted to boot a server from the image directly. Note: this is an unsupported action, the supported workflow is to use the image to boot-from-volume. (5) Although an unsupported action, if a user does (4), it currently results in a server in status ACTIVE but which is unusable because the operating system cannot be found. (6) Using the Compute service, the user requests the createImage action on the unusable server, resulting in the creation of Image-2. (7) Using the Image service, the user deletes Image-2 which has inherited the cinder_encryption_key_* properties from Image-1 and the encryption key is deleted. As a result, Image-1 is rendered non-decryptable so that it can no longer be used in the normal boot-from-volume workflow. The workaround for this issue is to add the cinder_encryption_key_id,cinder_encryption_key_deletion_policy properties to the non_inheritable_image_properties option in the [DEFAULT] section of nova.conf. Image-2 can be deleted and the encryption key used by Image-1 remains available. 3.3. Red Hat OpenStack Platform 16.0.2 Maintenance Release These release notes highlight technology preview items, recommended practices, known issues, and deprecated functionality to be taken into consideration when deploying this release of Red Hat OpenStack Platform. 3.3.1. Enhancements This release of Red Hat OpenStack Platform features the following enhancements: BZ# 1653834 This enhancement adds the Boolean parameter NovaComputeEnableKsm . The parameter enables the ksm and ksmtuned service on compute nodes. You can set NovaComputeEnableKsm for each Compute role. The default value is`False`. BZ# 1695898 Director operations involving the RADOS gateway no longer require interaction with puppet-ceph. Previously, tripleo-heat-templates had a dependency on puppet-ceph for the RADOS gateway component deployment. The move to tripleo-ansible eliminates this dependency. BZ# 1696717 This feature enables the Red Hat OpenStack Platform director to deploy the Shared File System (manila) with an external Ceph Storage cluster. In this type of deployment, Ganesha still runs on the Controller nodes that Pacemaker manages using an active-passive configuration. This feature is supported with Ceph Storage 4.1 or later. BZ# 1749483 In the second maintenance release of Red Hat OpenStack Platform 16.0, IP port forwarding for OVS/ML2 has moved from technical preview to being fully supported. For more information, see the floating ip port forwarding create command in the Command Line Interface Reference . BZ# 1777052 The Service Telemetry Framework (STF) release v1.0 is now available for general availability. STF provides the core components for a monitoring application framework for Red Hat OpenStack Platform (RHOSP). It is a data storage component deployed as an application on top of OpenShift 4.x and is managed by the Operator Lifecycle Manager. Data transport for metrics and events is provided using AMQ Interconnect. The release of STF v1.0 replaces and deprecates the Technology Preview version. 
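As a minimal sketch of how the NovaComputeEnableKsm parameter from BZ# 1653834 above might be applied, you can place it in a small environment file and include that file in your deployment command. The file path and the trimmed deploy command are illustrative only; keep all of the environment files that your deployment already uses.
# Write an environment file that enables KSM on Compute nodes (illustrative path)
cat > /home/stack/templates/enable-ksm.yaml <<'EOF'
parameter_defaults:
  NovaComputeEnableKsm: true
EOF
# Include the file alongside your existing deployment arguments
openstack overcloud deploy --templates \
  -e /home/stack/templates/enable-ksm.yaml \
  -e <other_environment_files_used_by_your_deployment>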
BZ# 1790753 This update makes it possible for the Block Storage service (cinder) to attach Ceph RADOS block device (RBD) volumes to multiple instances simultaneously. BZ# 1790754 With this update, you can now enable the Red Hat Ceph Storage Dashboard with the Red Hat OpenStack Platform director. The Red Hat Ceph Storage Dashboard is a built-in, web-based Ceph management and monitoring application that you can use to visualize and monitor various aspects of your cluster. The Ceph Dashboard requires Red Hat Ceph Storage 4.1 or later. BZ# 1798917 A new Red Hat OpenStack Platform Orchestration service (heat) parameter controls whether the Block Storage service (cinder) flattens RADOS block device (RBD) volumes created from snapshots. Flattening a volume removes its dependency on the snapshot. If you set the value of CinderRbdFlattenVolumeFromSnapshot to true, cinder flattens RBD volumes. The default value of CinderRbdFlattenVolumeFromSnapshot, and of the corresponding cinder RBD driver option, is false . 3.3.2. Technology Preview The items listed in this section are provided as Technology Previews. For further information on the scope of Technology Preview status, and the associated support implications, refer to https://access.redhat.com/support/offerings/techpreview/ . BZ# 1703956 In Red Hat OpenStack Platform 16.0, the Load-balancing service (octavia) now has a technology preview for the UDP protocol. 3.3.3. Release Notes This section outlines important details about the release, including recommended practices and notable changes to Red Hat OpenStack Platform. You must take this information into account to ensure the best possible outcomes for your deployment. BZ# 1823835 RHOSP 16.0 works only with RHEL 8.1. Ensure that all the hosts of your OSP deployment are pinned to RHEL 8.1 before running the update. See "Locking the environment to a Red Hat Enterprise Linux release" [1] in the guide "Keeping Red Hat OpenStack Platform Updated." [1] https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.0/html/keeping_red_hat_openstack_platform_updated/preparing-for-a-minor-update#locking-the-environment-to-a-red-hat-enterprise-linux-release 3.3.4. Known Issues These known issues exist in Red Hat OpenStack Platform at this time: BZ# 1795956 There is a known issue for the Red Hat OpenStack Platform Load-balancing service: the containers octavia_api and octavia_driver_agent fail to start when rebooting a node. The cause of this issue is that the directory /var/run/octavia does not exist when the node is rebooted. To fix this issue, add the following line to the file /etc/tmpfiles.d/var-run-octavia.conf: d /var/run/octavia 0755 root root - - BZ# 1824093 A Grafana Ceph 4.1 dependency causes Ceph Dashboard bugs. The Ceph Dashboard requires Ceph 4.1 and a Grafana container based on ceph4-rhel8. Presently, Red Hat supports ceph3-rhel7.3. This discrepancy causes the following dashboard bugs: When you navigate to Pools > Overall Performance , Grafana returns the following error: When you view a pool's performance details ( Pools > select a pool from the list > Performance Details ), the Grafana bar is displayed along with other graphs and values, but it should not be there. These bugs will be fixed after rebasing to a newer Grafana version. BZ# 1837558 Because of a core OVN bug, virtual machines with floating IP (FIP) addresses cannot route to other networks in an ML2/OVN deployment with distributed virtual routing (DVR) enabled. Core OVN sets a bad next hop when routing SNAT IPv4 traffic from a VM with a floating IP while DVR is enabled.
Instead of the gateway IP, OVN sets the destination IP. As a result, the router sends an ARP request for an unknown IP instead of routing it to the gateway. Before deploying a new overcloud with ML2/OVN, disable DVR by setting NeutronEnableDVR: false in an environment file. If you have ML2/OVN in an existing deployment, perform the following steps: Set the enable_distributed_floating_ip parameter in the [ovs] section of neutron.conf to False. You should also set NeutronEnableDVR: false in an environment file used in any re-deployments so that the re-deployment does not re-enable DVR. Update the floating IP that requires external SNAT to work through the Neutron API (for example, by changing its description). Note Disabling DVR causes traffic to be centralized. All L3 traffic goes through the controller/network nodes. This may affect scale, data plane performance, and throughput. | [
"sudo openstack tripleo container image push docker.io/library/centos",
"openstack tripleo container image list",
"sudo openstack tripleo container image delete",
"openstack tripleo container image show",
"parameter_defaults: NovaLocalMetadataPerCell: True",
"network_config: - type: route_table name: custom table_id: 200 - type: interface name: em1 use_dhcp: false addresses: - ip_netmask: 192.0.2.1/24 routes: - ip_netmask: 10.1.3.0/24 next_hop: 192.0.2.5 table: 200 # Use table ID or table name rules: - rule: \"iif em1 table 200\" comment: \"Route incoming traffic to em1 with table 200\" - rule: \"from 192.0.2.0/24 table 200\" comment: \"Route all traffic from 192.0.2.0/24 with table 200\" - rule: \"add blackhole from 172.19.40.0/24 table 200\" - rule: \"add unreachable iif em1 from 192.168.1.0/24\"",
"source ./stackrc openstack workflow execution create tripleo.plan_management.v1.rotate_passwords '{\"container\": \"overcloud\"}'",
"openstack workflow execution create tripleo.plan_management.v1.rotate_passwords '{\"container\": \"overcloud\", \"password_list\": [\"BarbicanPassword\", \"SaharaPassword\", \"ManilaPassword\"]}'",
"./overcloud-deploy.sh",
"openstack workflow execution create tripleo.plan_management.v1.get_passwords '{\"container\": \"overcloud\"}'",
"+--------------------+---------------------------------------------+ | Field | Value | +--------------------+---------------------------------------------+ | ID | edcf9103-e1a8-42f9-85c1-e505c055e0ed | | Workflow ID | 8aa2ac9b-22ee-4e7d-8240-877237ef0d0a | | Workflow name | tripleo.plan_management.v1.rotate_passwords | | Workflow namespace | | | Description | | | Task Execution ID | <none> | | Root Execution ID | <none> | | State | RUNNING | | State info | None | | Created at | 2020-01-22 15:47:57 | | Updated at | 2020-01-22 15:47:57 | +--------------------+---------------------------------------------+",
"openstack workflow execution show edcf9103-e1a8-42f9-85c1-e505c055e0ed",
"+--------------------+---------------------------------------------+ | Field | Value | +--------------------+---------------------------------------------+ | ID | edcf9103-e1a8-42f9-85c1-e505c055e0ed | | Workflow ID | 8aa2ac9b-22ee-4e7d-8240-877237ef0d0a | | Workflow name | tripleo.plan_management.v1.rotate_passwords | | Workflow namespace | | | Description | | | Task Execution ID | <none> | | Root Execution ID | <none> | | State | SUCCESS | | State info | None | | Created at | 2020-01-22 15:47:57 | | Updated at | 2020-01-22 15:48:39 | +--------------------+---------------------------------------------+",
"openstack workflow execution output show edcf9103-e1a8-42f9-85c1-e505c055e0ed",
"{ \"status\": \"SUCCESS\", \"message\": { \"AdminPassword\": \"FSn0sS1aAHp8YK2fU5niM3rxu\", \"AdminToken\": \"dTP0Wdy7DtblG80M54r4a2yoC\", \"AodhPassword\": \"fB5NQdRe37BaBVEWDHVuj4etk\", \"BarbicanPassword\": \"rn7yk7KPafKw2PWN71MvXpnBt\", \"BarbicanSimpleCryptoKek\": \"lrC3sGlV7-D7-V_PI4vbDfF1Ujm5OjnAVFcnihOpbCg=\", \"CeilometerMeteringSecret\": \"DQ69HdlJobhnGWoBC0jM3drPF\", \"CeilometerPassword\": \"qI6xOpofuiXZnG95iUe8Oxv5d\", \"CephAdminKey\": \"AQDGVPpdAAAAABAAZMP56/VY+zCVcDT81+TOjg==\", \"CephClientKey\": \"AQDGVPpdAAAAABAAanYtA0ggpcoCbS1nLeDN7w==\", \"CephClusterFSID\": \"141a5ede-21b4-11ea-8132-52540031f76b\", \"CephDashboardAdminPassword\": \"AQDGVPpdAAAAABAAKhsx630YKDhQrocS4o4KzA==\", \"CephGrafanaAdminPassword\": \"AQDGVPpdAAAAABAAKBojG+CO72B0TdBRR0paEg==\", \"CephManilaClientKey\": \"AQDGVPpdAAAAABAAA1TVHrTVCC8xQ4skG4+d5A==\" } }",
"deploy_steps_tasks: # rabbitmq container is supposed to be started during step 1 # so we want to ensure it's running during step 2 - name: validate rabbitmq state when: step|int == 2 tags: - opendev-validation - opendev-validation-rabbitmq wait_for_connection: host: {get_param: [ServiceNetMap, RabbitmqNetwork]} port: 5672 delay: 10",
"deploy_steps_tasks: - name: some validation when: step|int == 2 tags: - opendev-validation - opendev-validation-rabbitmq include_role: role: rabbitmq-limits # We can pass vars to included role, in this example # we override the default min_fd_limit value: vars: min_fd_limit: 32768",
"deploy_steps_tasks: # rabbitmq container is supposed to be started during step 1 # so we want to ensure it's running during step 2 - name: validate rabbitmq state when: step|int == 2 tags: - opendev-validation - opendev-validation-rabbitmq command: > podman exec rabbitmq /openstack/healthcheck",
"openstack floating ip port forwarding create --internal-ip-address <internal-ip-address> --port <port> --internal-protocol-port <port-number> --external-protocol-port <port-number> --protocol <protocol> <floating-ip>",
"openstack floating ip port forwarding create --internal-ip-address 192.168.1.2 --port f7a08fe4-e79e-4b67-bbb8-a5002455a493 --internal-protocol-port 18343 --external-protocol-port 8343 --protocol tcp 10.0.0.100",
"[ironic] api_max_retries = 180",
"(undercloud) [stack@undercloud-0 ~]USD source stackrc (undercloud) [stack@undercloud-0 ~]USD nova list | 3fab687e-99c2-4e66-805f-3106fb41d868 | controller-1 | ACTIVE | - | Running | ctlplane=192.168.24.17 | | a87276ea-8682-4f27-9426-6b272955b486 | controller-2 | ACTIVE | - | Running | ctlplane=192.168.24.38 | | a000b156-9adc-4d37-8169-c1af7800788b | controller-3 | ACTIVE | - | Running | ctlplane=192.168.24.35 | (undercloud) [stack@undercloud-0 ~]USD for ip in 192.168.24.17 192.168.24.38 192.168.24.35; do ssh USDip 'sudo podman restart swift_copy_rings ; sudo podman restart USD(sudo podman ps -a --format=\"{{.Names}}\" --filter=\"name=swift_*\")'; done",
"`ansible -b <nodes> -i /usr/bin/tripleo-ansible-inventory -m shell -a \"status=`sudo systemctl is-active tripleo_ovn_metadata_agent`; if test \\\"USDstatus\\\" == \\\"active\\\"; then sudo systemctl restart tripleo_ovn_metadata_agent; echo restarted; fi\"`",
"`ansible -b compute* -i /usr/bin/tripleo-ansible-inventory -m shell -a \"status=`sudo systemctl is-active tripleo_ovn_metadata_agent`; if test \\\"USDstatus\\\" == \\\"active\\\"; then sudo systemctl restart tripleo_ovn_metadata_agent; echo restarted; fi\"`",
"service_config_settings: placement: neutron::server::placement::password: <Nova password> neutron::server::placement::www_authenticate_uri: <Keystone Internal API URL> neutron::server::placement::project_domain_name: 'Default' neutron::server::placement::project_name: 'service' neutron::server::placement::user_domain_name: 'Default' neutron::server::placement::username: nova neutron::server::placement::auth_url: <Keystone Internal API URL> neutron::server::placement::auth_type: 'password' neutron::server::placement::region_name: <Keystone Region>",
"d /var/run/octavia 0755 root root - -",
"`ansible -b <nodes> -i /usr/bin/tripleo-ansible-inventory -m shell -a \"podman ps -a --format {{'{{'}}.ID{{'}}'}} -f name=haproxy,status=created | xargs podman rm -f || :\"`",
"`ansible -b compute-0 -i /usr/bin/tripleo-ansible-inventory -m shell -a \"podman ps -a --format {{'{{'}}.ID{{'}}'}} -f name=haproxy,status=created | xargs podman rm -f || :\"`",
"service_config_settings: placement: neutron::server::placement::password: <Nova password> neutron::server::placement::www_authenticate_uri: <Keystone Internal API URL> neutron::server::placement::project_domain_name: 'Default' neutron::server::placement::project_name: 'service' neutron::server::placement::user_domain_name: 'Default' neutron::server::placement::username: nova neutron::server::placement::auth_url: <Keystone Internal API URL> neutron::server::placement::auth_type: 'password' neutron::server::placement::region_name: <Keystone Region>",
"TypeError: l.c[t.type] is undefined true"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/release_notes/chap-release_notes |
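One additional sketch related to BZ# 1793166 in the chapter above: the SMT state on IBM POWER8 Compute nodes can be checked and corrected across the overcloud with the same tripleo-ansible-inventory pattern used by the other workarounds. The compute* host pattern and the assumption that ppc64_cpu prints the current SMT state when called without a value are mine, not part of the official workaround.
# Report the current SMT state on all Compute nodes (IBM POWER8 only)
ansible -b compute* -i /usr/bin/tripleo-ansible-inventory -m shell -a "ppc64_cpu --smt"
# Disable SMT where it is still enabled; repeat after any subsequent reboot
ansible -b compute* -i /usr/bin/tripleo-ansible-inventory -m shell -a "ppc64_cpu --smt=off"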
Chapter 22. Message Grouping | Chapter 22. Message Grouping A message group is a group of messages that share certain characteristics: All messages in a message group are grouped under a common group ID. This means that they can be identified with a common group property. All messages in a message group are serially processed and consumed by the same consumer, irrespective of the number of customers on the queue. This means that a specific message group with a unique group id is always processed by one consumer when the consumer opens it. If the consumer closes the message group, then the entire message group is directed to another consumer in the queue. Message groups are especially useful when there is a need for messages with a certain value of the property, such as group ID, to be processed serially by a single consumer. Important Message grouping will not work as expected if the queue has paging enabled. Be sure to disable paging before configuring a queue for message grouping. For information about configuring message grouping within a cluster of messaging servers, see Clustered Message Grouping in Part III, Configuring Multiple Messaging Systems . 22.1. Configuring Message Groups Using the Core API The property _AMQ_GROUP_ID is used to identify a message group using the Core API on the client side. To pick a random unique message group identifier, you can also set the auto-group property to true on the SessionFactory . 22.2. Configuring Message Groups Using Jakarta Messaging The property JMSXGroupID is used to identify a message group for Jakarta Messaging clients. If you wish to send a message group with different messages to one consumer, you can set the same JMSXGroupID for different messages. Message message = ... message.setStringProperty("JMSXGroupID", "Group-0"); producer.send(message); message = ... message.setStringProperty("JMSXGroupID", "Group-0"); producer.send(message); An alternative approach is to use the one of the following attributes of the connection-factory to be used by the client: auto-group or group-id . When auto-group is set to true , the connection-factory will begin to use a random unique message group identifier for all messages sent through it. You can use the management CLI to set the auto-group attribute. The group-id attribute will set the property JMSXGroupID to the specified value for all messages sent through the connection factory. To set a specific group-id on the connection factory, use the management CLI. | [
"Message message = message.setStringProperty(\"JMSXGroupID\", \"Group-0\"); producer.send(message); message = message.setStringProperty(\"JMSXGroupID\", \"Group-0\"); producer.send(message);",
"/subsystem=messaging-activemq/server=default/connection-factory=RemoteConnectionFactory:write-attribute(name=auto-group,value=true)",
"/subsystem=messaging-activemq/server=default/connection-factory=RemoteConnectionFactory:write-attribute(name=group-id,value=\"Group-0\")"
]
| https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/configuring_messaging/about_message_grouping |
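The two management CLI attributes described in the message grouping chapter above can also be set non-interactively from a shell. This is a sketch only: it assumes a standalone server, an EAP_HOME environment variable pointing at your installation, and that a reload is acceptable if the server reports one is required.
# Generate a random unique group ID for all messages sent through the factory
$EAP_HOME/bin/jboss-cli.sh --connect '/subsystem=messaging-activemq/server=default/connection-factory=RemoteConnectionFactory:write-attribute(name=auto-group,value=true)'
# Alternatively, pin every message sent through the factory to one group ID
$EAP_HOME/bin/jboss-cli.sh --connect '/subsystem=messaging-activemq/server=default/connection-factory=RemoteConnectionFactory:write-attribute(name=group-id,value="Group-0")'
# Reload the server so that the attribute change takes effect
$EAP_HOME/bin/jboss-cli.sh --connect ':reload'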
Chapter 26. Using snapshots on Stratis file systems | Chapter 26. Using snapshots on Stratis file systems You can use snapshots on Stratis file systems to capture file system state at arbitrary times and restore it in the future. 26.1. Characteristics of Stratis snapshots In Stratis, a snapshot is a regular Stratis file system created as a copy of another Stratis file system. The snapshot initially contains the same file content as the original file system, but can change as the snapshot is modified. Whatever changes you make to the snapshot will not be reflected in the original file system. The current snapshot implementation in Stratis is characterized by the following: A snapshot of a file system is another file system. A snapshot and its origin are not linked in lifetime. A snapshotted file system can live longer than the file system it was created from. A file system does not have to be mounted to create a snapshot from it. Each snapshot uses around half a gigabyte of actual backing storage, which is needed for the XFS log. 26.2. Creating a Stratis snapshot You can create a Stratis file system as a snapshot of an existing Stratis file system. Prerequisites Stratis is installed. See Installing Stratis . The stratisd service is running. You have created a Stratis file system. See Creating a Stratis file system . Procedure Create a Stratis snapshot: Additional resources stratis(8) man page on your system 26.3. Accessing the content of a Stratis snapshot You can mount a snapshot of a Stratis file system to make it accessible for read and write operations. Prerequisites Stratis is installed. See Installing Stratis . The stratisd service is running. You have created a Stratis snapshot. See Creating a Stratis file system . Procedure To access the snapshot, mount it as a regular file system from the /dev/stratis/ my-pool / directory: Additional resources Mounting a Stratis file system mount(8) man page on your system 26.4. Reverting a Stratis file system to a snapshot You can revert the content of a Stratis file system to the state captured in a Stratis snapshot. Prerequisites Stratis is installed. See Installing Stratis . The stratisd service is running. You have created a Stratis snapshot. See Creating a Stratis snapshot . Procedure Optional: Back up the current state of the file system to be able to access it later: Unmount and remove the original file system: Create a copy of the snapshot under the name of the original file system: Mount the snapshot, which is now accessible with the same name as the original file system: The content of the file system named my-fs is now identical to the snapshot my-fs-snapshot . Additional resources stratis(8) man page on your system 26.5. Removing a Stratis snapshot You can remove a Stratis snapshot from a pool. Data on the snapshot are lost. Prerequisites Stratis is installed. See Installing Stratis . The stratisd service is running. You have created a Stratis snapshot. See Creating a Stratis snapshot . Procedure Unmount the snapshot: Destroy the snapshot: Additional resources stratis(8) man page on your system | [
"stratis fs snapshot my-pool my-fs my-fs-snapshot",
"mount /dev/stratis/ my-pool / my-fs-snapshot mount-point",
"stratis filesystem snapshot my-pool my-fs my-fs-backup",
"umount /dev/stratis/ my-pool / my-fs stratis filesystem destroy my-pool my-fs",
"stratis filesystem snapshot my-pool my-fs-snapshot my-fs",
"mount /dev/stratis/ my-pool / my-fs mount-point",
"umount /dev/stratis/ my-pool / my-fs-snapshot",
"stratis filesystem destroy my-pool my-fs-snapshot"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_storage_devices/using-snapshots-on-stratis-file-systems |
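To complement the snapshot chapter above, the following sketch shows one way to confirm that a snapshot exists and to inspect it read-only before reverting or destroying anything. The pool and file system names mirror the examples above; the list, mkdir, and findmnt steps and the /mnt mount point are my additions, and the exact columns printed by stratis may differ between versions.
# List all file systems in the pool; the snapshot appears as a regular file system
stratis filesystem list my-pool
# Mount the snapshot read-only at a scratch location to inspect its content
mkdir -p /mnt/my-fs-snapshot
mount -o ro /dev/stratis/my-pool/my-fs-snapshot /mnt/my-fs-snapshot
# Confirm the mount before doing anything destructive, then unmount again
findmnt /mnt/my-fs-snapshot
umount /mnt/my-fs-snapshot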
Chapter 1. Viewing your reports on Red Hat Ansible Automation Platform | Chapter 1. Viewing your reports on Red Hat Ansible Automation Platform The reports feature on the Red Hat Ansible Automation Platform provides users with a visual overview of their automation efforts across different teams using Ansible. Each report is designed to help users monitor the status of their automation environment, be it the frequency of playbook runs or the status of hosts affected by various job templates. For example, you can use your reports to: View the number of hosts affected by a job template View the number changes made to hosts by a job template View the frequency of a job template run, and the rate of job templates that succeed or fail to run 1.1. Reviewing your reports To view reports about your Ansible automation environment, proceed with the following steps: Procedure Log in to console.redhat.com and navigate to the Ansible Automation Platform. Click Reports on the side navigation panel. Select a report from the results to view it. Each report presents data to monitor your Ansible automation environment. Use the filter toolbar on each report to adjust your graph view. Note We are constantly adding new reports to the system. If you have ideas for new reports that would be helpful for your team, please contact your account representative or log a feature enhancement for Insights for Ansible Automation Platform. | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/viewing_reports_about_your_ansible_automation_environment/assembly-insights-reports |
Chapter 13. Installing on vSphere | Chapter 13. Installing on vSphere The Assisted Installer integrates the OpenShift Container Platform cluster with the vSphere platform, which exposes the Machine API to vSphere and enables autoscaling. 13.1. Adding hosts on vSphere You can add hosts to the Assisted Installer cluster using the online vSphere client or the govc vSphere CLI tool. The following procedure demonstrates adding hosts with the govc CLI tool. To use the online vSphere Client, refer to the documentation for vSphere. To add hosts on vSphere with the vSphere govc CLI, generate the discovery image ISO from the Assisted Installer. The minimal discovery image ISO is the default setting. This image includes only what is required to boot a host with networking. The majority of the content is downloaded upon boot. The ISO image is about 100MB in size. After this is complete, you must create an image for the vSphere platform and create the vSphere virtual machines. Prerequisites You are using vSphere 7.0.2 or higher. You have the vSphere govc CLI tool installed and configured. You have set clusterSet disk.EnableUUID to TRUE in vSphere. You have created a cluster in the Assisted Installer web console, or You have created an Assisted Installer cluster profile and infrastructure environment with the API. You have exported your infrastructure environment ID in your shell as USDINFRA_ENV_ID . Procedure Configure the discovery image if you want it to boot with an ignition file. In Cluster details , select vSphere from the Integrate with external partner platforms dropdown list. The Include custom manifest checkbox is optional. In Host discovery , click the Add hosts button and select the provisioning type. Add an SSH public key so that you can connect to the vSphere VMs as the core user. Having a login to the cluster hosts can provide you with debugging information during the installation. If you do not have an existing SSH key pair on your local machine, follow the steps in Generating a key pair for cluster node SSH access . In the SSH public key field, click Browse to upload the id_rsa.pub file containing the SSH public key. Alternatively, drag and drop the file into the field from the file manager. To see the file in the file manager, select Show hidden files in the menu. Select the required discovery image ISO. Note Minimal image file: Provision with virtual media downloads a smaller image that will fetch the data needed to boot. In Networking , select Cluster-managed networking or User-managed networking : Optional: If the cluster hosts are behind a firewall that requires the use of a proxy, select Configure cluster-wide proxy settings . Enter the username, password, IP address and port for the HTTP and HTTPS URLs of the proxy server. Note The proxy username and password must be URL-encoded. Optional: If the cluster hosts are in a network with a re-encrypting man-in-the-middle (MITM) proxy or the cluster needs to trust certificates for other purposes such as container image registries, select Configure cluster-wide trusted certificates and add the additional certificates. Optional: Configure the discovery image if you want to boot it with an ignition file. For more information, see Additional Resources . Click Generate Discovery ISO . Copy the Discovery ISO URL . Download the discovery ISO: USD wget - O vsphere-discovery-image.iso <discovery_url> Replace <discovery_url> with the Discovery ISO URL from the preceding step. 
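Before running the govc commands in the following steps, the CLI needs to know how to reach vCenter. One common way, shown here as a sketch with placeholder values, is to export the standard govc environment variables and confirm connectivity; only set GOVC_INSECURE if your vCenter certificate is not trusted by the host.
# Point govc at your vCenter and authenticate (placeholder values)
export GOVC_URL="https://<vcenter_address>"
export GOVC_USERNAME="<vcenter_username>"
export GOVC_PASSWORD="<vcenter_password>"
# Skip certificate verification only for untrusted or self-signed certificates
export GOVC_INSECURE=1
# Verify connectivity before continuing with the VM operations below
govc about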
On the command line, power off and delete any preexisting virtual machines: USD for VM in USD(/usr/local/bin/govc ls /<datacenter>/vm/<folder_name>) do /usr/local/bin/govc vm.power -off USDVM /usr/local/bin/govc vm.destroy USDVM done Replace <datacenter> with the name of the data center. Replace <folder_name> with the name of the VM inventory folder. Remove preexisting ISO images from the data store, if there are any: USD govc datastore.rm -ds <iso_datastore> <image> Replace <iso_datastore> with the name of the data store. Replace image with the name of the ISO image. Upload the Assisted Installer discovery ISO: USD govc datastore.upload -ds <iso_datastore> vsphere-discovery-image.iso Replace <iso_datastore> with the name of the data store. Note All nodes in the cluster must boot from the discovery image. Boot three to five control plane nodes: USD govc vm.create -net.adapter <network_adapter_type> \ -disk.controller <disk_controller_type> \ -pool=<resource_pool> \ -c=16 \ -m=32768 \ -disk=120GB \ -disk-datastore=<datastore_file> \ -net.address="<nic_mac_address>" \ -iso-datastore=<iso_datastore> \ -iso="vsphere-discovery-image.iso" \ -folder="<inventory_folder>" \ <hostname>.<cluster_name>.example.com See vm.create for details. Note The foregoing example illustrates the minimum required resources for control plane nodes. Boot at least two worker nodes: USD govc vm.create -net.adapter <network_adapter_type> \ -disk.controller <disk_controller_type> \ -pool=<resource_pool> \ -c=4 \ -m=8192 \ -disk=120GB \ -disk-datastore=<datastore_file> \ -net.address="<nic_mac_address>" \ -iso-datastore=<iso_datastore> \ -iso="vsphere-discovery-image.iso" \ -folder="<inventory_folder>" \ <hostname>.<cluster_name>.example.com See vm.create for details. Note The foregoing example illustrates the minimum required resources for worker nodes. Ensure the VMs are running: USD govc ls /<datacenter>/vm/<folder_name> Replace <datacenter> with the name of the data center. Replace <folder_name> with the name of the VM inventory folder. After 2 minutes, shut down the VMs: USD for VM in USD(govc ls /<datacenter>/vm/<folder_name>) do govc vm.power -s=true USDVM done Replace <datacenter> with the name of the data center. Replace <folder_name> with the name of the VM inventory folder. Set the disk.EnableUUID setting to TRUE : USD for VM in USD(govc ls /<datacenter>/vm/<folder_name>) do govc vm.change -vm USDVM -e disk.EnableUUID=TRUE done Replace <datacenter> with the name of the data center. Replace <folder_name> with the name of the VM inventory folder. Note You must set disk.EnableUUID to TRUE on all of the nodes to enable autoscaling with vSphere. Restart the VMs: USD for VM in USD(govc ls /<datacenter>/vm/<folder_name>) do govc vm.power -on=true USDVM done Replace <datacenter> with the name of the data center. Replace <folder_name> with the name of the VM inventory folder. Return to the Assisted Installer user interface and wait until the Assisted Installer discovers the hosts and each of them have a Ready status. Select roles if needed. In Networking , clear the Allocate IPs via DHCP server checkbox. Set the API VIP address. Set the Ingress VIP address. Continue with the installation procedure. Additional resources Configuring the discovery image 13.2. 
vSphere postinstallation configuration using the CLI After installing an OpenShift Container Platform cluster using the Assisted Installer on vSphere with the platform integration feature enabled, you must update the following vSphere configuration settings manually: vCenter username vCenter password vCenter address vCenter cluster Data center Data store Folder Prerequisites The Assisted Installer has finished installing the cluster successfully. The cluster is connected to console.redhat.com . Procedure Generate a base64-encoded username and password for vCenter: USD echo -n "<vcenter_username>" | base64 -w0 Replace <vcenter_username> with your vCenter username. USD echo -n "<vcenter_password>" | base64 -w0 Replace <vcenter_password> with your vCenter password. Backup the vSphere credentials: USD oc get secret vsphere-creds -o yaml -n kube-system > creds_backup.yaml Edit the vSphere credentials: USD cp creds_backup.yaml vsphere-creds.yaml USD vi vsphere-creds.yaml apiVersion: v1 data: <vcenter_address>.username: <vcenter_username_encoded> <vcenter_address>.password: <vcenter_password_encoded> kind: Secret metadata: annotations: cloudcredential.openshift.io/mode: passthrough creationTimestamp: "2022-01-25T17:39:50Z" name: vsphere-creds namespace: kube-system resourceVersion: "2437" uid: 06971978-e3a5-4741-87f9-2ca3602f2658 type: Opaque Replace <vcenter_address> with the vCenter address. Replace <vcenter_username_encoded> with the base64-encoded version of your vSphere username. Replace <vcenter_password_encoded> with the base64-encoded version of your vSphere password. Replace the vSphere credentials: USD oc replace -f vsphere-creds.yaml Redeploy the kube-controller-manager pods: USD oc patch kubecontrollermanager cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"USD( date --rfc-3339=ns )"'"}}' --type=merge Backup the vSphere cloud provider configuration: USD oc get cm cloud-provider-config -o yaml -n openshift-config > cloud-provider-config_backup.yaml Edit the cloud provider configuration: USD cp cloud-provider-config_backup.yaml cloud-provider-config.yaml USD vi cloud-provider-config.yaml apiVersion: v1 data: config: | [Global] secret-name = "vsphere-creds" secret-namespace = "kube-system" insecure-flag = "1" [Workspace] server = "<vcenter_address>" datacenter = "<datacenter>" default-datastore = "<datastore>" folder = "/<datacenter>/vm/<folder>" [VirtualCenter "<vcenter_address>"] datacenters = "<datacenter>" kind: ConfigMap metadata: creationTimestamp: "2022-01-25T17:40:49Z" name: cloud-provider-config namespace: openshift-config resourceVersion: "2070" uid: 80bb8618-bf25-442b-b023-b31311918507 Replace <vcenter_address> with the vCenter address. Replace <datacenter> with the name of the data center. Replace <datastore> with the name of the data store. Replace <folder> with the folder containing the cluster VMs. Apply the cloud provider configuration: USD oc apply -f cloud-provider-config.yaml Taint the nodes with the uninitialized taint: Important Follow steps 9 through 12 if you are installing OpenShift Container Platform 4.13 or later. Identify the nodes to taint: USD oc get nodes Run the following command for each node: USD oc adm taint node <node_name> node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule Replace <node_name> with the name of the node. 
Example USD oc get nodes NAME STATUS ROLES AGE VERSION master-0 Ready control-plane,master 45h v1.26.3+379cd9f master-1 Ready control-plane,master 45h v1.26.3+379cd9f worker-0 Ready worker 45h v1.26.3+379cd9f worker-1 Ready worker 45h v1.26.3+379cd9f master-2 Ready control-plane,master 45h v1.26.3+379cd9f USD oc adm taint node master-0 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule USD oc adm taint node master-1 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule USD oc adm taint node master-2 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule USD oc adm taint node worker-0 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule USD oc adm taint node worker-1 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule Back up the infrastructures configuration: USD oc get infrastructures.config.openshift.io -o yaml > infrastructures.config.openshift.io.yaml.backup Edit the infrastructures configuration: USD cp infrastructures.config.openshift.io.yaml.backup infrastructures.config.openshift.io.yaml USD vi infrastructures.config.openshift.io.yaml apiVersion: v1 items: - apiVersion: config.openshift.io/v1 kind: Infrastructure metadata: creationTimestamp: "2022-05-07T10:19:55Z" generation: 1 name: cluster resourceVersion: "536" uid: e8a5742c-6d15-44e6-8a9e-064b26ab347d spec: cloudConfig: key: config name: cloud-provider-config platformSpec: type: VSphere vsphere: failureDomains: - name: assisted-generated-failure-domain region: assisted-generated-region server: <vcenter_address> topology: computeCluster: /<data_center>/host/<vcenter_cluster> datacenter: <data_center> datastore: /<data_center>/datastore/<datastore> folder: "/<data_center>/path/to/folder" networks: - "VM Network" resourcePool: /<data_center>/host/<vcenter_cluster>/Resources zone: assisted-generated-zone nodeNetworking: external: {} internal: {} vcenters: - datacenters: - <data_center> server: <vcenter_address> kind: List metadata: resourceVersion: "" Replace <vcenter_address> with your vCenter address. Replace <datacenter> with the name of your vCenter data center. Replace <datastore> with the name of your vCenter data store. Replace <folder> with the folder containing the cluster VMs. Replace <vcenter_cluster> with the vSphere vCenter cluster where OpenShift Container Platform is installed. Apply the infrastructures configuration: USD oc apply -f infrastructures.config.openshift.io.yaml --overwrite=true 13.3. vSphere postinstallation configuration using the web console After installing an OpenShift Container Platform cluster by using the Assisted Installer on vSphere with the platform integration feature enabled, you must update the following vSphere configuration settings manually: vCenter address vCenter cluster vCenter username vCenter password Data center Default data store Virtual machine folder Prerequisites The Assisted Installer has finished installing the cluster successfully. The cluster is connected to console.redhat.com . Procedure In the Administrator perspective, navigate to Home Overview . Under Status , click vSphere connection to open the vSphere connection configuration wizard. In the vCenter field, enter the network address of the vSphere vCenter server. This can be either a domain name or an IP address. It appears in the vSphere web client URL; for example https://[your_vCenter_address]/ui . In the vCenter cluster field, enter the name of the vSphere vCenter cluster where OpenShift Container Platform is installed. 
Important This step is mandatory if you installed OpenShift Container Platform 4.13 or later. In the Username field, enter your vSphere vCenter username. In the Password field, enter your vSphere vCenter password. Warning The system stores the username and password in the vsphere-creds secret in the kube-system namespace of the cluster. An incorrect vCenter username or password makes the cluster nodes unschedulable. In the Datacenter field, enter the name of the vSphere data center that contains the virtual machines used to host the cluster; for example, SDDC-Datacenter . In the Default data store field, enter the vSphere data store that stores the persistent data volumes; for example, /SDDC-Datacenter/datastore/datastorename . Warning Updating the vSphere data center or default data store after the configuration has been saved detaches any active vSphere PersistentVolumes . In the Virtual Machine Folder field, enter the data center folder that contains the virtual machine of the cluster; for example, /SDDC-Datacenter/vm/ci-ln-hjg4vg2-c61657-t2gzr . For the OpenShift Container Platform installation to succeed, all virtual machines comprising the cluster must be located in a single data center folder. Click Save Configuration . This updates the cloud-provider-config file in the openshift-config namespace, and starts the configuration process. Reopen the vSphere connection configuration wizard and expand the Monitored operators panel. Check that the status of the operators is either Progressing or Healthy . Verification The connection configuration process updates operator statuses and control plane nodes. It takes approximately an hour to complete. During the configuration process, the nodes will reboot. Previously bound PersistentVolumeClaims objects might become disconnected. Follow the steps below to monitor the configuration process. Check that the configuration process completed successfully: In the Administrator perspective, navigate to Home > Overview . Under Status click Operators . Wait for all operator statuses to change from Progressing to All succeeded . A Failed status indicates that the configuration failed. Under Status , click Control Plane . Wait for the response rate of all Control Pane components to return to 100%. A Failed control plane component indicates that the configuration failed. A failure indicates that at least one of the connection settings is incorrect. Change the settings in the vSphere connection configuration wizard and save the configuration again. Check that you are able to bind PersistentVolumeClaims objects by performing the following steps: Create a StorageClass object using the following YAML: kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: vsphere-sc provisioner: kubernetes.io/vsphere-volume parameters: datastore: YOURVCENTERDATASTORE diskformat: thin reclaimPolicy: Delete volumeBindingMode: Immediate Create a PersistentVolumeClaims object using the following YAML: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: test-pvc namespace: openshift-config annotations: volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/vsphere-volume finalizers: - kubernetes.io/pvc-protection spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi storageClassName: vsphere-sc volumeMode: Filesystem For instructions, see Dynamic provisioning in the OpenShift Container Platform documentation. 
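A short sketch of one way to apply and verify the two objects above; the file names are illustrative, and the expected Bound status assumes that the vSphere storage class provisions the volume successfully.
# Apply the StorageClass and PersistentVolumeClaim shown above (illustrative file names)
oc apply -f vsphere-sc.yaml
oc apply -f test-pvc.yaml
# The claim should reach the Bound status once the vSphere volume is provisioned
oc get pvc test-pvc -n openshift-config
# Remove the test objects when you are finished
oc delete pvc test-pvc -n openshift-config
oc delete storageclass vsphere-sc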
To troubleshoot a PersistentVolumeClaims object, navigate to Storage → PersistentVolumeClaims in the Administrator perspective of the OpenShift Container Platform web console. | [
"wget - O vsphere-discovery-image.iso <discovery_url>",
"for VM in USD(/usr/local/bin/govc ls /<datacenter>/vm/<folder_name>) do /usr/local/bin/govc vm.power -off USDVM /usr/local/bin/govc vm.destroy USDVM done",
"govc datastore.rm -ds <iso_datastore> <image>",
"govc datastore.upload -ds <iso_datastore> vsphere-discovery-image.iso",
"govc vm.create -net.adapter <network_adapter_type> -disk.controller <disk_controller_type> -pool=<resource_pool> -c=16 -m=32768 -disk=120GB -disk-datastore=<datastore_file> -net.address=\"<nic_mac_address>\" -iso-datastore=<iso_datastore> -iso=\"vsphere-discovery-image.iso\" -folder=\"<inventory_folder>\" <hostname>.<cluster_name>.example.com",
"govc vm.create -net.adapter <network_adapter_type> -disk.controller <disk_controller_type> -pool=<resource_pool> -c=4 -m=8192 -disk=120GB -disk-datastore=<datastore_file> -net.address=\"<nic_mac_address>\" -iso-datastore=<iso_datastore> -iso=\"vsphere-discovery-image.iso\" -folder=\"<inventory_folder>\" <hostname>.<cluster_name>.example.com",
"govc ls /<datacenter>/vm/<folder_name>",
"for VM in USD(govc ls /<datacenter>/vm/<folder_name>) do govc vm.power -s=true USDVM done",
"for VM in USD(govc ls /<datacenter>/vm/<folder_name>) do govc vm.change -vm USDVM -e disk.EnableUUID=TRUE done",
"for VM in USD(govc ls /<datacenter>/vm/<folder_name>) do govc vm.power -on=true USDVM done",
"echo -n \"<vcenter_username>\" | base64 -w0",
"echo -n \"<vcenter_password>\" | base64 -w0",
"oc get secret vsphere-creds -o yaml -n kube-system > creds_backup.yaml",
"cp creds_backup.yaml vsphere-creds.yaml",
"vi vsphere-creds.yaml",
"apiVersion: v1 data: <vcenter_address>.username: <vcenter_username_encoded> <vcenter_address>.password: <vcenter_password_encoded> kind: Secret metadata: annotations: cloudcredential.openshift.io/mode: passthrough creationTimestamp: \"2022-01-25T17:39:50Z\" name: vsphere-creds namespace: kube-system resourceVersion: \"2437\" uid: 06971978-e3a5-4741-87f9-2ca3602f2658 type: Opaque",
"oc replace -f vsphere-creds.yaml",
"oc patch kubecontrollermanager cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge",
"oc get cm cloud-provider-config -o yaml -n openshift-config > cloud-provider-config_backup.yaml",
"cp cloud-provider-config_backup.yaml cloud-provider-config.yaml",
"vi cloud-provider-config.yaml",
"apiVersion: v1 data: config: | [Global] secret-name = \"vsphere-creds\" secret-namespace = \"kube-system\" insecure-flag = \"1\" [Workspace] server = \"<vcenter_address>\" datacenter = \"<datacenter>\" default-datastore = \"<datastore>\" folder = \"/<datacenter>/vm/<folder>\" [VirtualCenter \"<vcenter_address>\"] datacenters = \"<datacenter>\" kind: ConfigMap metadata: creationTimestamp: \"2022-01-25T17:40:49Z\" name: cloud-provider-config namespace: openshift-config resourceVersion: \"2070\" uid: 80bb8618-bf25-442b-b023-b31311918507",
"oc apply -f cloud-provider-config.yaml",
"oc get nodes",
"oc adm taint node <node_name> node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule",
"oc get nodes NAME STATUS ROLES AGE VERSION master-0 Ready control-plane,master 45h v1.26.3+379cd9f master-1 Ready control-plane,master 45h v1.26.3+379cd9f worker-0 Ready worker 45h v1.26.3+379cd9f worker-1 Ready worker 45h v1.26.3+379cd9f master-2 Ready control-plane,master 45h v1.26.3+379cd9f oc adm taint node master-0 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule oc adm taint node master-1 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule oc adm taint node master-2 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule oc adm taint node worker-0 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule oc adm taint node worker-1 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule",
"oc get infrastructures.config.openshift.io -o yaml > infrastructures.config.openshift.io.yaml.backup",
"cp infrastructures.config.openshift.io.yaml.backup infrastructures.config.openshift.io.yaml",
"vi infrastructures.config.openshift.io.yaml",
"apiVersion: v1 items: - apiVersion: config.openshift.io/v1 kind: Infrastructure metadata: creationTimestamp: \"2022-05-07T10:19:55Z\" generation: 1 name: cluster resourceVersion: \"536\" uid: e8a5742c-6d15-44e6-8a9e-064b26ab347d spec: cloudConfig: key: config name: cloud-provider-config platformSpec: type: VSphere vsphere: failureDomains: - name: assisted-generated-failure-domain region: assisted-generated-region server: <vcenter_address> topology: computeCluster: /<data_center>/host/<vcenter_cluster> datacenter: <data_center> datastore: /<data_center>/datastore/<datastore> folder: \"/<data_center>/path/to/folder\" networks: - \"VM Network\" resourcePool: /<data_center>/host/<vcenter_cluster>/Resources zone: assisted-generated-zone nodeNetworking: external: {} internal: {} vcenters: - datacenters: - <data_center> server: <vcenter_address> kind: List metadata: resourceVersion: \"\"",
"oc apply -f infrastructures.config.openshift.io.yaml --overwrite=true",
"kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: vsphere-sc provisioner: kubernetes.io/vsphere-volume parameters: datastore: YOURVCENTERDATASTORE diskformat: thin reclaimPolicy: Delete volumeBindingMode: Immediate",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: test-pvc namespace: openshift-config annotations: volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/vsphere-volume finalizers: - kubernetes.io/pvc-protection spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi storageClassName: vsphere-sc volumeMode: Filesystem"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_openshift_container_platform_with_the_assisted_installer/installing-on-vsphere |
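The echo -n "<value>" | base64 -w0 commands in the list above produce the values that are stored under the <vcenter_address>.username and <vcenter_address>.password keys of the vsphere-creds secret. The following Java sketch shows the same encoding step; the credential values are placeholders, and the snippet only illustrates what the secret data fields contain rather than replacing the documented oc workflow.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Minimal sketch: produces the same base64 values as `echo -n "<value>" | base64 -w0`,
// which are placed under the <vcenter_address>.username and <vcenter_address>.password
// keys of the vsphere-creds secret. The credentials below are placeholders.
public class EncodeVsphereCreds {
    public static void main(String[] args) {
        String username = "administrator@vsphere.local"; // placeholder vCenter username
        String password = "changeme";                    // placeholder vCenter password

        // Base64.getEncoder() does not wrap output lines, matching the -w0 flag of base64
        String encodedUser = Base64.getEncoder()
                .encodeToString(username.getBytes(StandardCharsets.UTF_8));
        String encodedPass = Base64.getEncoder()
                .encodeToString(password.getBytes(StandardCharsets.UTF_8));

        System.out.println("<vcenter_address>.username: " + encodedUser);
        System.out.println("<vcenter_address>.password: " + encodedPass);
    }
}
```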
Chapter 2. Differences from upstream OpenJDK 11 | Chapter 2. Differences from upstream OpenJDK 11 Red Hat build of OpenJDK in Red Hat Enterprise Linux (RHEL) contains a number of structural changes from the upstream distribution of OpenJDK. The Microsoft Windows version of Red Hat build of OpenJDK attempts to follow RHEL updates as closely as possible. The following list details the most notable Red Hat build of OpenJDK 11 changes: FIPS support. Red Hat build of OpenJDK 11 automatically detects whether RHEL is in FIPS mode and automatically configures Red Hat build of OpenJDK 11 to operate in that mode. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. Cryptographic policy support. Red Hat build of OpenJDK 11 obtains the list of enabled cryptographic algorithms and key size constraints from RHEL. These configuration components are used by the Transport Layer Security (TLS) encryption protocol, the certificate path validation, and any signed JARs. You can set different security profiles to balance safety and compatibility. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. Red Hat build of OpenJDK on RHEL dynamically links against native libraries such as zlib for archive format support and libjpeg-turbo , libpng , and giflib for image support. RHEL also dynamically links against Harfbuzz and Freetype for font rendering and management. The src.zip file includes the source for all the JAR libraries shipped with Red Hat build of OpenJDK. Red Hat build of OpenJDK on RHEL uses system-wide timezone data files as a source for timezone information. Red Hat build of OpenJDK on RHEL uses system-wide CA certificates. Red Hat build of OpenJDK on Microsoft Windows includes the latest available timezone data from RHEL. Red Hat build of OpenJDK on Microsoft Windows uses the latest available CA certificate from RHEL. Additional resources For more information about detecting if a system is in FIPS mode, see the Improve system FIPS detection example on the Red Hat RHEL Planning Jira. For more information about cryptographic policies, see Using system-wide cryptographic policies . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.20/rn-openjdk-diff-from-upstream |
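Because Red Hat build of OpenJDK applies FIPS mode and the crypto-policies-derived restrictions at the JDK level rather than in application code, one quick way to inspect the effective configuration is to list the registered security providers. The sketch below is a minimal, illustrative check; the provider name it mentions (for example, SunPKCS11-NSS-FIPS) depends on the RHEL and OpenJDK versions, so treat it as an assumption rather than a definitive detection method.

```java
import java.security.Provider;
import java.security.Security;

// Minimal sketch: prints the security providers the running JDK has registered so you can
// see whether a FIPS-oriented provider (such as SunPKCS11-NSS-FIPS on RHEL) is active.
// Provider names vary by RHEL/OpenJDK version; this is illustrative, not authoritative.
public class ListSecurityProviders {
    public static void main(String[] args) {
        for (Provider provider : Security.getProviders()) {
            System.out.printf("%s %s - %s%n",
                    provider.getName(), provider.getVersionStr(), provider.getInfo());
        }
    }
}
```

If the host is in FIPS mode, a FIPS-related provider would typically appear near the top of the list; on Microsoft Windows builds, where automatic FIPS detection does not apply, the standard provider list is printed.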
Chapter 52. JMS | Chapter 52. JMS Both producer and consumer are supported This component allows messages to be sent to (or consumed from) a JMS Queue or Topic. It uses Spring's JMS support for declarative transactions, including Spring's JmsTemplate for sending and a MessageListenerContainer for consuming. 52.1. Dependencies When using jms with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-jms-starter</artifactId> </dependency> Note Using ActiveMQ If you are using Apache ActiveMQ , you should prefer the ActiveMQ component as it has been optimized for ActiveMQ. All of the options and samples on this page are also valid for the ActiveMQ component. Note Transacted and caching See section Transactions and Cache Levels below if you are using transactions with JMS as it can impact performance. Note Request/Reply over JMS Make sure to read the section Request/reply over JMS further below on this page for important notes about request/reply, as Camel offers a number of options to configure for performance, and clustered environments. 52.2. URI format Where destinationName is a JMS queue or topic name. By default, the destinationName is interpreted as a queue name. For example, to connect to the queue, FOO.BAR use: You can include the optional queue: prefix, if you prefer: To connect to a topic, you must include the topic: prefix. For example, to connect to the topic, Stocks.Prices , use: You append query options to the URI by using the following format, ?option=value&option=value&... 52.2.1. Using ActiveMQ The JMS component reuses Spring 2's JmsTemplate for sending messages. This is not ideal for use in a non-J2EE container and typically requires some caching in the JMS provider to avoid poor performance . If you intend to use Apache ActiveMQ as your message broker, the recommendation is that you do one of the following: Use the ActiveMQ component, which is already optimized to use ActiveMQ efficiently Use the PoolingConnectionFactory in ActiveMQ. 52.2.2. Transactions and Cache Levels If you are consuming messages and using transactions ( transacted=true ) then the default settings for cache level can impact performance. If you are using XA transactions then you cannot cache as it can cause the XA transaction to not work properly. If you are not using XA, then you should consider caching as it speeds up performance, such as setting cacheLevelName=CACHE_CONSUMER . The default setting for cacheLevelName is CACHE_AUTO . This default auto detects the mode and sets the cache level accordingly to: CACHE_CONSUMER if transacted=false CACHE_NONE if transacted=true So you can say the default setting is conservative. Consider using cacheLevelName=CACHE_CONSUMER if you are using non-XA transactions. 52.2.3. Durable Subscriptions with JMS 1.1 If you wish to use durable topic subscriptions, you need to specify both clientId and durableSubscriptionName . The value of the clientId must be unique and can only be used by a single JMS connection instance in your entire network. Note If you are using the Apache ActiveMQ Classic , you may prefer to use a feature called Virtual Topic. This should remove the necessity of having a unique clientId . You can consult the specific documentation for Artemis or for ActiveMQ Classic for details about how to leverage this feature. You can find more details about durable messaging for ActiveMQ Classic here . 52.2.3.1. 
Durable Subscriptions with JMS 2.0 If you wish to use durable topic subscriptions, you need to specify the durableSubscriptionName . 52.2.4. Message Header Mapping When using message headers, the JMS specification states that header names must be valid Java identifiers. So try to name your headers to be valid Java identifiers. One benefit of doing this is that you can then use your headers inside a JMS Selector (whose SQL92 syntax mandates Java identifier syntax for headers). A simple strategy for mapping header names is used by default. The strategy is to replace any dots and hyphens in the header name as shown below and to reverse the replacement when the header name is restored from a JMS message sent over the wire. What does this mean? No more losing method names to invoke on a bean component, no more losing the filename header for the File Component, and so on. The current header name strategy for accepting header names in Camel is as follows: Dots are replaced by `DOT` and the replacement is reversed when Camel consume the message Hyphen is replaced by `HYPHEN` and the replacement is reversed when Camel consumes the message You can configure many different properties on the JMS endpoint, which map to properties on the JMSConfiguration object. Note Mapping to Spring JMS Many of these properties map to properties on Spring JMS, which Camel uses for sending and receiving messages. So you can get more information about these properties by consulting the relevant Spring documentation. 52.3. Configuring Options Camel components are configured on two levels: Component level Endpoint level 52.3.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 52.3.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allows you to externalize the configuration from your code, giving you more flexible and reusable code. 52.4. Component Options The JMS component supports 98 options, which are listed below. Name Description Default Type clientId (common) Sets the JMS client ID to use. Note that this value, if specified, must be unique and can only be used by a single JMS connection instance. The clientId option is compulsory with JMS 1.1 durable topic subscriptions, because the client ID is used to control which client messages have to be stored for. With JMS 2.0 clients, clientId may be omitted, which creates a 'global' subscription. If using Apache ActiveMQ you may prefer to use Virtual Topics instead. 
String connectionFactory (common) The connection factory to be used. A connection factory must be configured either on the component or endpoint. ConnectionFactory disableReplyTo (common) Specifies whether Camel ignores the JMSReplyTo header in messages. If true, Camel does not send a reply back to the destination specified in the JMSReplyTo header. You can use this option if you want Camel to consume from a route and you do not want Camel to automatically send back a reply message because another component in your code handles the reply message. You can also use this option if you want to use Camel as a proxy between different message brokers and you want to route messages from one system to another. false boolean durableSubscriptionName (common) The durable subscriber name for specifying durable topic subscriptions. The clientId option must be configured for a JMS 1.1 durable subscription, and may be configured for JMS 2.0, to create a private durable subscription. String jmsMessageType (common) Allows you to force the use of a specific javax.jms.Message implementation for sending JMS messages. Possible values are: Bytes, Map, Object, Stream, Text. By default, Camel would determine which JMS message type to use from the In body type. This option allows you to specify it. Enum values: Bytes Map Object Stream Text JmsMessageType replyTo (common) Provides an explicit ReplyTo destination (overrides any incoming value of Message.getJMSReplyTo() in consumer). String testConnectionOnStartup (common) Specifies whether to test the connection on startup. This ensures that when Camel starts, all the JMS consumers have a valid connection to the JMS broker. If a connection cannot be granted then Camel throws an exception on startup. This ensures that Camel is not started with failed connections. The JMS producer is tested as well. false boolean acknowledgementModeName (consumer) The JMS acknowledgement name, which is one of: SESSION_TRANSACTED, CLIENT_ACKNOWLEDGE, AUTO_ACKNOWLEDGE, DUPS_OK_ACKNOWLEDGE. Enum values: SESSION_TRANSACTED CLIENT_ACKNOWLEDGE AUTO_ACKNOWLEDGE DUPS_OK_ACKNOWLEDGE AUTO_ACKNOWLEDGE String artemisConsumerPriority (consumer) Consumer priorities allow you to ensure that high priority consumers receive messages while they are active. Normally, active consumers connected to a queue receive messages from it in a round-robin fashion. When consumer priorities are in use, messages are delivered round-robin if multiple active consumers exist with the same high priority. Messages will only go to lower priority consumers when the high priority consumers do not have credit available to consume the message, or those high priority consumers have declined to accept the message (for instance because it does not meet the criteria of any selectors associated with the consumer). int asyncConsumer (consumer) Whether the JmsConsumer processes the Exchange asynchronously. If enabled then the JmsConsumer may pick up the message from the JMS queue, while the message is being processed asynchronously (by the Asynchronous Routing Engine). This means that messages may be processed not 100% strictly in order. If disabled (as default) then the Exchange is fully processed before the JmsConsumer picks up the message from the JMS queue. Note if transacted has been enabled, then asyncConsumer=true does not run asynchronously, as transaction must be executed synchronously (Camel 3.0 may support async transactions).
false boolean autoStartup (consumer) Specifies whether the consumer container should auto-startup. true boolean cacheLevel (consumer) Sets the cache level by ID for the underlying JMS resources. See cacheLevelName option for more details. int cacheLevelName (consumer) Sets the cache level by name for the underlying JMS resources. Possible values are: CACHE_AUTO, CACHE_CONNECTION, CACHE_CONSUMER, CACHE_NONE, and CACHE_SESSION. The default setting is CACHE_AUTO. See the Spring documentation and Transactions Cache Levels for more information. Enum values: CACHE_AUTO CACHE_CONNECTION CACHE_CONSUMER CACHE_NONE CACHE_SESSION CACHE_AUTO String concurrentConsumers (consumer) Specifies the default number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. When doing request/reply over JMS then the option replyToConcurrentConsumers is used to control number of concurrent consumers on the reply message listener. 1 int maxConcurrentConsumers (consumer) Specifies the maximum number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. When doing request/reply over JMS then the option replyToMaxConcurrentConsumers is used to control number of concurrent consumers on the reply message listener. int replyToDeliveryPersistent (consumer) Specifies whether to use persistent delivery by default for replies. true boolean selector (consumer) Sets the JMS selector to use. String subscriptionDurable (consumer) Set whether to make the subscription durable. The durable subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a durable subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well. false boolean subscriptionName (consumer) Set the name of a subscription to create. To be applied in case of a topic (pub-sub domain) with a shared or durable subscription. The subscription name needs to be unique within this client's JMS client id, if the client ID is configured. Default is the class name of the specified message listener. Note: Only 1 concurrent consumer (which is the default of this message listener container) is allowed for each subscription, except for a shared subscription (which requires JMS 2.0). String subscriptionShared (consumer) Set whether to make the subscription shared. The shared subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a shared subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Note that shared subscriptions may also be durable, so this flag can (and often will) be combined with subscriptionDurable as well. Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well. Requires a JMS 2.0 compatible message broker. false boolean acceptMessagesWhileStopping (consumer (advanced)) Specifies whether the consumer accept messages while it is stopping. 
You may consider enabling this option, if you start and stop JMS routes at runtime, while there are still messages enqueued on the queue. If this option is false, and you stop the JMS route, then messages may be rejected, and the JMS broker would have to attempt redeliveries, which yet again may be rejected, and eventually the message may be moved at a dead letter queue on the JMS broker. To avoid this its recommended to enable this option. false boolean allowReplyManagerQuickStop (consumer (advanced)) Whether the DefaultMessageListenerContainer used in the reply managers for request/reply messaging allow the DefaultMessageListenerContainer.runningAllowed flag to quick stop in case JmsConfiguration#isAcceptMessagesWhileStopping is enabled, and org.apache.camel.CamelContext is currently being stopped. This quick stop ability is enabled by default in the regular JMS consumers but to enable for reply managers you must enable this flag. false boolean consumerType (consumer (advanced)) The consumer type to use, which can be one of: Simple, Default, or Custom. The consumer type determines which Spring JMS listener to use. Default will use org.springframework.jms.listener.DefaultMessageListenerContainer, Simple will use org.springframework.jms.listener.SimpleMessageListenerContainer. When Custom is specified, the MessageListenerContainerFactory defined by the messageListenerContainerFactory option will determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use. Enum values: Simple Default Custom Default ConsumerType defaultTaskExecutorType (consumer (advanced)) Specifies what default TaskExecutor type to use in the DefaultMessageListenerContainer, for both consumer endpoints and the ReplyTo consumer of producer endpoints. Possible values: SimpleAsync (uses Spring's SimpleAsyncTaskExecutor) or ThreadPool (uses Spring's ThreadPoolTaskExecutor with optimal values - cached threadpool-like). If not set, it defaults to the behaviour, which uses a cached thread pool for consumer endpoints and SimpleAsync for reply consumers. The use of ThreadPool is recommended to reduce thread trash in elastic configurations with dynamically increasing and decreasing concurrent consumers. Enum values: ThreadPool SimpleAsync DefaultTaskExecutorType eagerLoadingOfProperties (consumer (advanced)) Enables eager loading of JMS properties and payload as soon as a message is loaded which generally is inefficient as the JMS properties may not be required but sometimes can catch early any issues with the underlying JMS provider and the use of JMS properties. See also the option eagerPoisonBody. false boolean eagerPoisonBody (consumer (advanced)) If eagerLoadingOfProperties is enabled and the JMS message payload (JMS body or JMS properties) is poison (cannot be read/mapped), then set this text as the message body instead so the message can be processed (the cause of the poison are already stored as exception on the Exchange). This can be turned off by setting eagerPoisonBody=false. See also the option eagerLoadingOfProperties. Poison JMS message due to USD\{exception.message} String exposeListenerSession (consumer (advanced)) Specifies whether the listener session should be exposed when consuming messages. false boolean replyToSameDestinationAllowed (consumer (advanced)) Whether a JMS consumer is allowed to send a reply message to the same destination that the consumer is using to consume from. This prevents an endless loop by consuming and sending back the same message to itself. 
false boolean taskExecutor (consumer (advanced)) Allows you to specify a custom task executor for consuming messages. TaskExecutor deliveryDelay (producer) Sets delivery delay to use for send calls for JMS. This option requires JMS 2.0 compliant broker. -1 long deliveryMode (producer) Specifies the delivery mode to be used. Possible values are those defined by javax.jms.DeliveryMode. NON_PERSISTENT = 1 and PERSISTENT = 2. Enum values: 1 2 Integer deliveryPersistent (producer) Specifies whether persistent delivery is used by default. true boolean explicitQosEnabled (producer) Set if the deliveryMode, priority or timeToLive qualities of service should be used when sending messages. This option is based on Spring's JmsTemplate. The deliveryMode, priority and timeToLive options are applied to the current endpoint. This contrasts with the preserveMessageQos option, which operates at message granularity, reading QoS properties exclusively from the Camel In message headers. false Boolean formatDateHeadersToIso8601 (producer) Sets whether JMS date properties should be formatted according to the ISO 8601 standard. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean preserveMessageQos (producer) Set to true, if you want to send message using the QoS settings specified on the message, instead of the QoS settings on the JMS endpoint. The following three headers are considered JMSPriority, JMSDeliveryMode, and JMSExpiration. You can provide all or only some of them. If not provided, Camel will fall back to use the values from the endpoint instead. So, when using this option, the headers override the values from the endpoint. The explicitQosEnabled option, by contrast, will only use options set on the endpoint, and not values from the message header. false boolean priority (producer) Values greater than 1 specify the message priority when sending (where 1 is the lowest priority and 9 is the highest). The explicitQosEnabled option must also be enabled in order for this option to have any effect. Enum values: 1 2 3 4 5 6 7 8 9 4 int replyToConcurrentConsumers (producer) Specifies the default number of concurrent consumers when doing request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. 1 int replyToMaxConcurrentConsumers (producer) Specifies the maximum number of concurrent consumers when using request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. int replyToOnTimeoutMaxConcurrentConsumers (producer) Specifies the maximum number of concurrent consumers for continue routing when timeout occurred when using request/reply over JMS. 1 int replyToOverride (producer) Provides an explicit ReplyTo destination in the JMS message, which overrides the setting of replyTo. It is useful if you want to forward the message to a remote Queue and receive the reply message from the ReplyTo destination. 
String replyToType (producer) Allows for explicitly specifying which kind of strategy to use for replyTo queues when doing request/reply over JMS. Possible values are: Temporary, Shared, or Exclusive. By default Camel will use temporary queues. However if replyTo has been configured, then Shared is used by default. This option allows you to use exclusive queues instead of shared ones. See Camel JMS documentation for more details, and especially the notes about the implications if running in a clustered environment, and the fact that Shared reply queues has lower performance than its alternatives Temporary and Exclusive. Enum values: Temporary Shared Exclusive ReplyToType requestTimeout (producer) The timeout for waiting for a reply when using the InOut Exchange Pattern (in milliseconds). The default is 20 seconds. You can include the header CamelJmsRequestTimeout to override this endpoint configured timeout value, and thus have per message individual timeout values. See also the requestTimeoutCheckerInterval option. 20000 long timeToLive (producer) When sending messages, specifies the time-to-live of the message (in milliseconds). -1 long allowAdditionalHeaders (producer (advanced)) This option is used to allow additional headers which may have values that are invalid according to JMS specification. For example some message systems such as WMQ do this with header names using prefix JMS_IBM_MQMD_ containing values with byte array or other invalid types. You can specify multiple header names separated by comma, and use as suffix for wildcard matching. String allowNullBody (producer (advanced)) Whether to allow sending messages with no body. If this option is false and the message body is null, then an JMSException is thrown. true boolean alwaysCopyMessage (producer (advanced)) If true, Camel will always make a JMS message copy of the message when it is passed to the producer for sending. Copying the message is needed in some situations, such as when a replyToDestinationSelectorName is set (incidentally, Camel will set the alwaysCopyMessage option to true, if a replyToDestinationSelectorName is set). false boolean correlationProperty (producer (advanced)) When using InOut exchange pattern use this JMS property instead of JMSCorrelationID JMS property to correlate messages. If set messages will be correlated solely on the value of this property JMSCorrelationID property will be ignored and not set by Camel. String disableTimeToLive (producer (advanced)) Use this option to force disabling time to live. For example when you do request/reply over JMS, then Camel will by default use the requestTimeout value as time to live on the message being sent. The problem is that the sender and receiver systems have to have their clocks synchronized, so they are in sync. This is not always so easy to archive. So you can use disableTimeToLive=true to not set a time to live value on the sent message. Then the message will not expire on the receiver system. See below in section About time to live for more details. false boolean forceSendOriginalMessage (producer (advanced)) When using mapJmsMessage=false Camel will create a new JMS message to send to a new JMS destination if you touch the headers (get or set) during the route. Set this option to true to force Camel to send the original JMS message that was received. false boolean includeSentJMSMessageID (producer (advanced)) Only applicable when sending to JMS destination using InOnly (eg fire and forget). 
Enabling this option will enrich the Camel Exchange with the actual JMSMessageID that was used by the JMS client when the message was sent to the JMS destination. false boolean replyToCacheLevelName (producer (advanced)) Sets the cache level by name for the reply consumer when doing request/reply over JMS. This option only applies when using fixed reply queues (not temporary). Camel will by default use: CACHE_CONSUMER for exclusive or shared w/ replyToSelectorName. And CACHE_SESSION for shared without replyToSelectorName. Some JMS brokers such as IBM WebSphere may require to set the replyToCacheLevelName=CACHE_NONE to work. Note: If using temporary queues then CACHE_NONE is not allowed, and you must use a higher value such as CACHE_CONSUMER or CACHE_SESSION. Enum values: CACHE_AUTO CACHE_CONNECTION CACHE_CONSUMER CACHE_NONE CACHE_SESSION String replyToDestinationSelectorName (producer (advanced)) Sets the JMS Selector using the fixed name to be used so you can filter out your own replies from the others when using a shared queue (that is, if you are not using a temporary reply queue). String streamMessageTypeEnabled (producer (advanced)) Sets whether StreamMessage type is enabled or not. Message payloads of streaming kind such as files, InputStream, etc will either by sent as BytesMessage or StreamMessage. This option controls which kind will be used. By default BytesMessage is used which enforces the entire message payload to be read into memory. By enabling this option the message payload is read into memory in chunks and each chunk is then written to the StreamMessage until no more data. false boolean allowAutoWiredConnectionFactory (advanced) Whether to auto-discover ConnectionFactory from the registry, if no connection factory has been configured. If only one instance of ConnectionFactory is found then it will be used. This is enabled by default. true boolean allowAutoWiredDestinationResolver (advanced) Whether to auto-discover DestinationResolver from the registry, if no destination resolver has been configured. If only one instance of DestinationResolver is found then it will be used. This is enabled by default. true boolean allowSerializedHeaders (advanced) Controls whether or not to include serialized headers. Applies only when transferExchange is true. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. false boolean artemisStreamingEnabled (advanced) Whether optimizing for Apache Artemis streaming mode. This can reduce memory overhead when using Artemis with JMS StreamMessage types. This option must only be enabled if Apache Artemis is being used. false boolean asyncStartListener (advanced) Whether to startup the JmsConsumer message listener asynchronously, when starting a route. For example if a JmsConsumer cannot get a connection to a remote JMS broker, then it may block while retrying and/or failover. This will cause Camel to block while starting routes. By setting this option to true, you will let routes startup, while the JmsConsumer connects to the JMS broker using a dedicated thread in asynchronous mode. If this option is used, then beware that if the connection could not be established, then an exception is logged at WARN level, and the consumer will not be able to receive messages; You can then restart the route to retry. false boolean asyncStopListener (advanced) Whether to stop the JmsConsumer message listener asynchronously, when stopping a route. 
false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean configuration (advanced) To use a shared JMS configuration. JmsConfiguration destinationResolver (advanced) A pluggable org.springframework.jms.support.destination.DestinationResolver that allows you to use your own resolver (for example, to lookup the real destination in a JNDI registry). DestinationResolver errorHandler (advanced) Specifies a org.springframework.util.ErrorHandler to be invoked in case of any uncaught exceptions thrown while processing a Message. By default these exceptions will be logged at the WARN level, if no errorHandler has been configured. You can configure logging level and whether stack traces should be logged using errorHandlerLoggingLevel and errorHandlerLogStackTrace options. This makes it much easier to configure, than having to code a custom errorHandler. ErrorHandler exceptionListener (advanced) Specifies the JMS Exception Listener that is to be notified of any underlying JMS exceptions. ExceptionListener idleConsumerLimit (advanced) Specify the limit for the number of consumers that are allowed to be idle at any given time. 1 int idleTaskExecutionLimit (advanced) Specifies the limit for idle executions of a receive task, not having received any message within its execution. If this limit is reached, the task will shut down and leave receiving to other executing tasks (in the case of dynamic scheduling; see the maxConcurrentConsumers setting). There is additional doc available from Spring. 1 int includeAllJMSXProperties (advanced) Whether to include all JMSXxxx properties when mapping from JMS to Camel Message. Setting this to true will include properties such as JMSXAppID, and JMSXUserID etc. Note: If you are using a custom headerFilterStrategy then this option does not apply. false boolean jmsKeyFormatStrategy (advanced) Pluggable strategy for encoding and decoding JMS keys so they can be compliant with the JMS specification. Camel provides two implementations out of the box: default and passthrough. The default strategy will safely marshal dots and hyphens (. and -). The passthrough strategy leaves the key as is. Can be used for JMS brokers which do not care whether JMS header keys contain illegal characters. You can provide your own implementation of the org.apache.camel.component.jms.JmsKeyFormatStrategy and refer to it using the # notation. Enum values: default passthrough JmsKeyFormatStrategy mapJmsMessage (advanced) Specifies whether Camel should auto map the received JMS message to a suited payload type, such as javax.jms.TextMessage to a String etc. true boolean maxMessagesPerTask (advanced) The number of messages per task. -1 is unlimited. If you use a range for concurrent consumers (eg min max), then this option can be used to set a value to eg 100 to control how fast the consumers will shrink when less work is required. -1 int messageConverter (advanced) To use a custom Spring org.springframework.jms.support.converter.MessageConverter so you can be in control how to map to/from a javax.jms.Message. 
MessageConverter messageCreatedStrategy (advanced) To use the given MessageCreatedStrategy which are invoked when Camel creates new instances of javax.jms.Message objects when Camel is sending a JMS message. MessageCreatedStrategy messageIdEnabled (advanced) When sending, specifies whether message IDs should be added. This is just an hint to the JMS broker. If the JMS provider accepts this hint, these messages must have the message ID set to null; if the provider ignores the hint, the message ID must be set to its normal unique value. true boolean messageListenerContainerFactory (advanced) Registry ID of the MessageListenerContainerFactory used to determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use to consume messages. Setting this will automatically set consumerType to Custom. MessageListenerContainerFactory messageTimestampEnabled (advanced) Specifies whether timestamps should be enabled by default on sending messages. This is just an hint to the JMS broker. If the JMS provider accepts this hint, these messages must have the timestamp set to zero; if the provider ignores the hint the timestamp must be set to its normal value. true boolean pubSubNoLocal (advanced) Specifies whether to inhibit the delivery of messages published by its own connection. false boolean queueBrowseStrategy (advanced) To use a custom QueueBrowseStrategy when browsing queues. QueueBrowseStrategy receiveTimeout (advanced) The timeout for receiving messages (in milliseconds). 1000 long recoveryInterval (advanced) Specifies the interval between recovery attempts, i.e. when a connection is being refreshed, in milliseconds. The default is 5000 ms, that is, 5 seconds. 5000 long requestTimeoutCheckerInterval (advanced) Configures how often Camel should check for timed out Exchanges when doing request/reply over JMS. By default Camel checks once per second. But if you must react faster when a timeout occurs, then you can lower this interval, to check more frequently. The timeout is determined by the option requestTimeout. 1000 long synchronous (advanced) Sets whether synchronous processing should be strictly used. false boolean transferException (advanced) If enabled and you are using Request Reply messaging (InOut) and an Exchange failed on the consumer side, then the caused Exception will be send back in response as a javax.jms.ObjectMessage. If the client is Camel, the returned Exception is rethrown. This allows you to use Camel JMS as a bridge in your routing - for example, using persistent queues to enable robust routing. Notice that if you also have transferExchange enabled, this option takes precedence. The caught exception is required to be serializable. The original Exception on the consumer side can be wrapped in an outer exception such as org.apache.camel.RuntimeCamelException when returned to the producer. Use this with caution as the data is using Java Object serialization and requires the received to be able to deserialize the data at Class level, which forces a strong coupling between the producers and consumer!. false boolean transferExchange (advanced) You can transfer the exchange over the wire instead of just the body and headers. The following fields are transferred: In body, Out body, Fault body, In headers, Out headers, Fault headers, exchange properties, exchange exception. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. 
You must enable this option on both the producer and consumer side, so Camel knows the payloads is an Exchange and not a regular payload. Use this with caution as the data is using Java Object serialization and requires the receiver to be able to deserialize the data at Class level, which forces a strong coupling between the producers and consumers having to use compatible Camel versions!. false boolean useMessageIDAsCorrelationID (advanced) Specifies whether JMSMessageID should always be used as JMSCorrelationID for InOut messages. false boolean waitForProvisionCorrelationToBeUpdatedCounter (advanced) Number of times to wait for provisional correlation id to be updated to the actual correlation id when doing request/reply over JMS and when the option useMessageIDAsCorrelationID is enabled. 50 int waitForProvisionCorrelationToBeUpdatedThreadSleepingTime (advanced) Interval in millis to sleep each time while waiting for provisional correlation id to be updated. 100 long headerFilterStrategy (filter) To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message. HeaderFilterStrategy errorHandlerLoggingLevel (logging) Allows to configure the default errorHandler logging level for logging uncaught exceptions. Enum values: TRACE DEBUG INFO WARN ERROR OFF WARN LoggingLevel errorHandlerLogStackTrace (logging) Allows to control whether stacktraces should be logged or not, by the default errorHandler. true boolean password (security) Password to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory. String username (security) Username to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory. String transacted (transaction) Specifies whether to use transacted mode. false boolean transactedInOut (transaction) Specifies whether InOut operations (request reply) default to using transacted mode If this flag is set to true, then Spring JmsTemplate will have sessionTransacted set to true, and the acknowledgeMode as transacted on the JmsTemplate used for InOut operations. Note from Spring JMS: that within a JTA transaction, the parameters passed to createQueue, createTopic methods are not taken into account. Depending on the Java EE transaction context, the container makes its own decisions on these values. Analogously, these parameters are not taken into account within a locally managed transaction either, since Spring JMS operates on an existing JMS Session in this case. Setting this flag to true will use a short local JMS transaction when running outside of a managed transaction, and a synchronized local JMS transaction in case of a managed transaction (other than an XA transaction) being present. This has the effect of a local JMS transaction being managed alongside the main transaction (which might be a native JDBC transaction), with the JMS transaction committing right after the main transaction. false boolean lazyCreateTransactionManager (transaction (advanced)) If true, Camel will create a JmsTransactionManager, if there is no transactionManager injected when option transacted=true. true boolean transactionManager (transaction (advanced)) The Spring transaction manager to use. PlatformTransactionManager transactionName (transaction (advanced)) The name of the transaction to use. String transactionTimeout (transaction (advanced)) The timeout value of the transaction (in seconds), if using transacted mode. -1 int 52.5. 
Endpoint Options The JMS endpoint is configured using URI syntax: with the following path and query parameters: 52.5.1. Path Parameters (2 parameters) Name Description Default Type destinationType (common) The kind of destination to use. Enum values: queue topic temp-queue temp-topic queue String destinationName (common) Required Name of the queue or topic to use as destination. String 52.5.2. Query Parameters (95 parameters) Name Description Default Type clientId (common) Sets the JMS client ID to use. Note that this value, if specified, must be unique and can only be used by a single JMS connection instance. It is typically only required for durable topic subscriptions with JMS 1.1. If using Apache ActiveMQ you may prefer to use Virtual Topics instead. String connectionFactory (common) The connection factory to be use. A connection factory must be configured either on the component or endpoint. ConnectionFactory disableReplyTo (common) Specifies whether Camel ignores the JMSReplyTo header in messages. If true, Camel does not send a reply back to the destination specified in the JMSReplyTo header. You can use this option if you want Camel to consume from a route and you do not want Camel to automatically send back a reply message because another component in your code handles the reply message. You can also use this option if you want to use Camel as a proxy between different message brokers and you want to route message from one system to another. false boolean durableSubscriptionName (common) The durable subscriber name for specifying durable topic subscriptions. The clientId option must be configured as well. String jmsMessageType (common) Allows you to force the use of a specific javax.jms.Message implementation for sending JMS messages. Possible values are: Bytes, Map, Object, Stream, Text. By default, Camel would determine which JMS message type to use from the In body type. This option allows you to specify it. Enum values: Bytes Map Object Stream Text JmsMessageType replyTo (common) Provides an explicit ReplyTo destination (overrides any incoming value of Message.getJMSReplyTo() in consumer). String testConnectionOnStartup (common) Specifies whether to test the connection on startup. This ensures that when Camel starts that all the JMS consumers have a valid connection to the JMS broker. If a connection cannot be granted then Camel throws an exception on startup. This ensures that Camel is not started with failed connections. The JMS producers is tested as well. false boolean acknowledgementModeName (consumer) The JMS acknowledgement name, which is one of: SESSION_TRANSACTED, CLIENT_ACKNOWLEDGE, AUTO_ACKNOWLEDGE, DUPS_OK_ACKNOWLEDGE. Enum values: SESSION_TRANSACTED CLIENT_ACKNOWLEDGE AUTO_ACKNOWLEDGE DUPS_OK_ACKNOWLEDGE AUTO_ACKNOWLEDGE String artemisConsumerPriority (consumer) Consumer priorities allow you to ensure that high priority consumers receive messages while they are active. Normally, active consumers connected to a queue receive messages from it in a round-robin fashion. When consumer priorities are in use, messages are delivered round-robin if multiple active consumers exist with the same high priority. Messages will only going to lower priority consumers when the high priority consumers do not have credit available to consume the message, or those high priority consumers have declined to accept the message (for instance because it does not meet the criteria of any selectors associated with the consumer). 
int asyncConsumer (consumer) Whether the JmsConsumer processes the Exchange asynchronously. If enabled then the JmsConsumer may pickup the message from the JMS queue, while the message is being processed asynchronously (by the Asynchronous Routing Engine). This means that messages may be processed not 100% strictly in order. If disabled (as default) then the Exchange is fully processed before the JmsConsumer picks up the message from the JMS queue. Note if transacted has been enabled, then asyncConsumer=true does not run asynchronously, as transaction must be executed synchronously (Camel 3.0 may support async transactions). false boolean autoStartup (consumer) Specifies whether the consumer container should auto-startup. true boolean cacheLevel (consumer) Sets the cache level by ID for the underlying JMS resources. See cacheLevelName option for more details. int cacheLevelName (consumer) Sets the cache level by name for the underlying JMS resources. Possible values are: CACHE_AUTO, CACHE_CONNECTION, CACHE_CONSUMER, CACHE_NONE, and CACHE_SESSION. The default setting is CACHE_AUTO. See the Spring documentation and Transactions Cache Levels for more information. Enum values: CACHE_AUTO CACHE_CONNECTION CACHE_CONSUMER CACHE_NONE CACHE_SESSION CACHE_AUTO String concurrentConsumers (consumer) Specifies the default number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. When doing request/reply over JMS then the option replyToConcurrentConsumers is used to control number of concurrent consumers on the reply message listener. 1 int maxConcurrentConsumers (consumer) Specifies the maximum number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. When doing request/reply over JMS then the option replyToMaxConcurrentConsumers is used to control number of concurrent consumers on the reply message listener. int replyToDeliveryPersistent (consumer) Specifies whether to use persistent delivery by default for replies. true boolean selector (consumer) Sets the JMS selector to use. String subscriptionDurable (consumer) Set whether to make the subscription durable. The durable subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a durable subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well. false boolean subscriptionName (consumer) Set the name of a subscription to create. To be applied in case of a topic (pub-sub domain) with a shared or durable subscription. The subscription name needs to be unique within this client's JMS client id. Default is the class name of the specified message listener. Note: Only 1 concurrent consumer (which is the default of this message listener container) is allowed for each subscription, except for a shared subscription (which requires JMS 2.0). String subscriptionShared (consumer) Set whether to make the subscription shared. The shared subscription name to be used can be specified through the subscriptionName property. Default is false. 
Set this to true to register a shared subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Note that shared subscriptions may also be durable, so this flag can (and often will) be combined with subscriptionDurable as well. Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well. Requires a JMS 2.0 compatible message broker. false boolean acceptMessagesWhileStopping (consumer (advanced)) Specifies whether the consumer accept messages while it is stopping. You may consider enabling this option, if you start and stop JMS routes at runtime, while there are still messages enqueued on the queue. If this option is false, and you stop the JMS route, then messages may be rejected, and the JMS broker would have to attempt redeliveries, which yet again may be rejected, and eventually the message may be moved at a dead letter queue on the JMS broker. To avoid this its recommended to enable this option. false boolean allowReplyManagerQuickStop (consumer (advanced)) Whether the DefaultMessageListenerContainer used in the reply managers for request/reply messaging allow the DefaultMessageListenerContainer.runningAllowed flag to quick stop in case JmsConfiguration#isAcceptMessagesWhileStopping is enabled, and org.apache.camel.CamelContext is currently being stopped. This quick stop ability is enabled by default in the regular JMS consumers but to enable for reply managers you must enable this flag. false boolean consumerType (consumer (advanced)) The consumer type to use, which can be one of: Simple, Default, or Custom. The consumer type determines which Spring JMS listener to use. Default will use org.springframework.jms.listener.DefaultMessageListenerContainer, Simple will use org.springframework.jms.listener.SimpleMessageListenerContainer. When Custom is specified, the MessageListenerContainerFactory defined by the messageListenerContainerFactory option will determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use. Enum values: Simple Default Custom Default ConsumerType defaultTaskExecutorType (consumer (advanced)) Specifies what default TaskExecutor type to use in the DefaultMessageListenerContainer, for both consumer endpoints and the ReplyTo consumer of producer endpoints. Possible values: SimpleAsync (uses Spring's SimpleAsyncTaskExecutor) or ThreadPool (uses Spring's ThreadPoolTaskExecutor with optimal values - cached threadpool-like). If not set, it defaults to the behaviour, which uses a cached thread pool for consumer endpoints and SimpleAsync for reply consumers. The use of ThreadPool is recommended to reduce thread trash in elastic configurations with dynamically increasing and decreasing concurrent consumers. Enum values: ThreadPool SimpleAsync DefaultTaskExecutorType eagerLoadingOfProperties (consumer (advanced)) Enables eager loading of JMS properties and payload as soon as a message is loaded which generally is inefficient as the JMS properties may not be required but sometimes can catch early any issues with the underlying JMS provider and the use of JMS properties. See also the option eagerPoisonBody. 
false boolean eagerPoisonBody (consumer (advanced)) If eagerLoadingOfProperties is enabled and the JMS message payload (JMS body or JMS properties) is poison (cannot be read/mapped), then set this text as the message body instead so the message can be processed (the cause of the poison are already stored as exception on the Exchange). This can be turned off by setting eagerPoisonBody=false. See also the option eagerLoadingOfProperties. Poison JMS message due to USD\{exception.message} String exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern exposeListenerSession (consumer (advanced)) Specifies whether the listener session should be exposed when consuming messages. false boolean replyToSameDestinationAllowed (consumer (advanced)) Whether a JMS consumer is allowed to send a reply message to the same destination that the consumer is using to consume from. This prevents an endless loop by consuming and sending back the same message to itself. false boolean taskExecutor (consumer (advanced)) Allows you to specify a custom task executor for consuming messages. TaskExecutor deliveryDelay (producer) Sets delivery delay to use for send calls for JMS. This option requires JMS 2.0 compliant broker. -1 long deliveryMode (producer) Specifies the delivery mode to be used. Possible values are those defined by javax.jms.DeliveryMode. NON_PERSISTENT = 1 and PERSISTENT = 2. Enum values: 1 2 Integer deliveryPersistent (producer) Specifies whether persistent delivery is used by default. true boolean explicitQosEnabled (producer) Set if the deliveryMode, priority or timeToLive qualities of service should be used when sending messages. This option is based on Spring's JmsTemplate. The deliveryMode, priority and timeToLive options are applied to the current endpoint. This contrasts with the preserveMessageQos option, which operates at message granularity, reading QoS properties exclusively from the Camel In message headers. false Boolean formatDateHeadersToIso8601 (producer) Sets whether JMS date properties should be formatted according to the ISO 8601 standard. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean preserveMessageQos (producer) Set to true, if you want to send message using the QoS settings specified on the message, instead of the QoS settings on the JMS endpoint. The following three headers are considered JMSPriority, JMSDeliveryMode, and JMSExpiration. You can provide all or only some of them. If not provided, Camel will fall back to use the values from the endpoint instead. 
So, when using this option, the headers override the values from the endpoint. The explicitQosEnabled option, by contrast, will only use options set on the endpoint, and not values from the message header. false boolean priority (producer) Values greater than 1 specify the message priority when sending (where 1 is the lowest priority and 9 is the highest). The explicitQosEnabled option must also be enabled in order for this option to have any effect. Enum values: 1 2 3 4 5 6 7 8 9 4 int replyToConcurrentConsumers (producer) Specifies the default number of concurrent consumers when doing request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. 1 int replyToMaxConcurrentConsumers (producer) Specifies the maximum number of concurrent consumers when using request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. int replyToOnTimeoutMaxConcurrentConsumers (producer) Specifies the maximum number of concurrent consumers for continue routing when timeout occurred when using request/reply over JMS. 1 int replyToOverride (producer) Provides an explicit ReplyTo destination in the JMS message, which overrides the setting of replyTo. It is useful if you want to forward the message to a remote Queue and receive the reply message from the ReplyTo destination. String replyToType (producer) Allows for explicitly specifying which kind of strategy to use for replyTo queues when doing request/reply over JMS. Possible values are: Temporary, Shared, or Exclusive. By default Camel will use temporary queues. However if replyTo has been configured, then Shared is used by default. This option allows you to use exclusive queues instead of shared ones. See Camel JMS documentation for more details, and especially the notes about the implications if running in a clustered environment, and the fact that Shared reply queues has lower performance than its alternatives Temporary and Exclusive. Enum values: Temporary Shared Exclusive ReplyToType requestTimeout (producer) The timeout for waiting for a reply when using the InOut Exchange Pattern (in milliseconds). The default is 20 seconds. You can include the header CamelJmsRequestTimeout to override this endpoint configured timeout value, and thus have per message individual timeout values. See also the requestTimeoutCheckerInterval option. 20000 long timeToLive (producer) When sending messages, specifies the time-to-live of the message (in milliseconds). -1 long allowAdditionalHeaders (producer (advanced)) This option is used to allow additional headers which may have values that are invalid according to JMS specification. For example some message systems such as WMQ do this with header names using prefix JMS_IBM_MQMD_ containing values with byte array or other invalid types. You can specify multiple header names separated by comma, and use as suffix for wildcard matching. String allowNullBody (producer (advanced)) Whether to allow sending messages with no body. If this option is false and the message body is null, then an JMSException is thrown. true boolean alwaysCopyMessage (producer (advanced)) If true, Camel will always make a JMS message copy of the message when it is passed to the producer for sending. Copying the message is needed in some situations, such as when a replyToDestinationSelectorName is set (incidentally, Camel will set the alwaysCopyMessage option to true, if a replyToDestinationSelectorName is set). 
false boolean correlationProperty (producer (advanced)) When using InOut exchange pattern use this JMS property instead of JMSCorrelationID JMS property to correlate messages. If set messages will be correlated solely on the value of this property JMSCorrelationID property will be ignored and not set by Camel. String disableTimeToLive (producer (advanced)) Use this option to force disabling time to live. For example when you do request/reply over JMS, then Camel will by default use the requestTimeout value as time to live on the message being sent. The problem is that the sender and receiver systems have to have their clocks synchronized, so they are in sync. This is not always so easy to archive. So you can use disableTimeToLive=true to not set a time to live value on the sent message. Then the message will not expire on the receiver system. See below in section About time to live for more details. false boolean forceSendOriginalMessage (producer (advanced)) When using mapJmsMessage=false Camel will create a new JMS message to send to a new JMS destination if you touch the headers (get or set) during the route. Set this option to true to force Camel to send the original JMS message that was received. false boolean includeSentJMSMessageID (producer (advanced)) Only applicable when sending to JMS destination using InOnly (eg fire and forget). Enabling this option will enrich the Camel Exchange with the actual JMSMessageID that was used by the JMS client when the message was sent to the JMS destination. false boolean replyToCacheLevelName (producer (advanced)) Sets the cache level by name for the reply consumer when doing request/reply over JMS. This option only applies when using fixed reply queues (not temporary). Camel will by default use: CACHE_CONSUMER for exclusive or shared w/ replyToSelectorName. And CACHE_SESSION for shared without replyToSelectorName. Some JMS brokers such as IBM WebSphere may require to set the replyToCacheLevelName=CACHE_NONE to work. Note: If using temporary queues then CACHE_NONE is not allowed, and you must use a higher value such as CACHE_CONSUMER or CACHE_SESSION. Enum values: CACHE_AUTO CACHE_CONNECTION CACHE_CONSUMER CACHE_NONE CACHE_SESSION String replyToDestinationSelectorName (producer (advanced)) Sets the JMS Selector using the fixed name to be used so you can filter out your own replies from the others when using a shared queue (that is, if you are not using a temporary reply queue). String streamMessageTypeEnabled (producer (advanced)) Sets whether StreamMessage type is enabled or not. Message payloads of streaming kind such as files, InputStream, etc will either by sent as BytesMessage or StreamMessage. This option controls which kind will be used. By default BytesMessage is used which enforces the entire message payload to be read into memory. By enabling this option the message payload is read into memory in chunks and each chunk is then written to the StreamMessage until no more data. false boolean allowSerializedHeaders (advanced) Controls whether or not to include serialized headers. Applies only when transferExchange is true. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. false boolean artemisStreamingEnabled (advanced) Whether optimizing for Apache Artemis streaming mode. This can reduce memory overhead when using Artemis with JMS StreamMessage types. This option must only be enabled if Apache Artemis is being used. 
false boolean asyncStartListener (advanced) Whether to startup the JmsConsumer message listener asynchronously, when starting a route. For example if a JmsConsumer cannot get a connection to a remote JMS broker, then it may block while retrying and/or failover. This will cause Camel to block while starting routes. By setting this option to true, you will let routes startup, while the JmsConsumer connects to the JMS broker using a dedicated thread in asynchronous mode. If this option is used, then beware that if the connection could not be established, then an exception is logged at WARN level, and the consumer will not be able to receive messages; You can then restart the route to retry. false boolean asyncStopListener (advanced) Whether to stop the JmsConsumer message listener asynchronously, when stopping a route. false boolean destinationResolver (advanced) A pluggable org.springframework.jms.support.destination.DestinationResolver that allows you to use your own resolver (for example, to lookup the real destination in a JNDI registry). DestinationResolver errorHandler (advanced) Specifies a org.springframework.util.ErrorHandler to be invoked in case of any uncaught exceptions thrown while processing a Message. By default these exceptions will be logged at the WARN level, if no errorHandler has been configured. You can configure logging level and whether stack traces should be logged using errorHandlerLoggingLevel and errorHandlerLogStackTrace options. This makes it much easier to configure, than having to code a custom errorHandler. ErrorHandler exceptionListener (advanced) Specifies the JMS Exception Listener that is to be notified of any underlying JMS exceptions. ExceptionListener headerFilterStrategy (advanced) To use a custom HeaderFilterStrategy to filter header to and from Camel message. HeaderFilterStrategy idleConsumerLimit (advanced) Specify the limit for the number of consumers that are allowed to be idle at any given time. 1 int idleTaskExecutionLimit (advanced) Specifies the limit for idle executions of a receive task, not having received any message within its execution. If this limit is reached, the task will shut down and leave receiving to other executing tasks (in the case of dynamic scheduling; see the maxConcurrentConsumers setting). There is additional doc available from Spring. 1 int includeAllJMSXProperties (advanced) Whether to include all JMSXxxx properties when mapping from JMS to Camel Message. Setting this to true will include properties such as JMSXAppID, and JMSXUserID etc. Note: If you are using a custom headerFilterStrategy then this option does not apply. false boolean jmsKeyFormatStrategy (advanced) Pluggable strategy for encoding and decoding JMS keys so they can be compliant with the JMS specification. Camel provides two implementations out of the box: default and passthrough. The default strategy will safely marshal dots and hyphens (. and -). The passthrough strategy leaves the key as is. Can be used for JMS brokers which do not care whether JMS header keys contain illegal characters. You can provide your own implementation of the org.apache.camel.component.jms.JmsKeyFormatStrategy and refer to it using the # notation. Enum values: default passthrough JmsKeyFormatStrategy mapJmsMessage (advanced) Specifies whether Camel should auto map the received JMS message to a suited payload type, such as javax.jms.TextMessage to a String etc. true boolean maxMessagesPerTask (advanced) The number of messages per task. -1 is unlimited. 
If you use a range for concurrent consumers (eg min max), then this option can be used to set a value to eg 100 to control how fast the consumers will shrink when less work is required. -1 int messageConverter (advanced) To use a custom Spring org.springframework.jms.support.converter.MessageConverter so you can be in control how to map to/from a javax.jms.Message. MessageConverter messageCreatedStrategy (advanced) To use the given MessageCreatedStrategy which are invoked when Camel creates new instances of javax.jms.Message objects when Camel is sending a JMS message. MessageCreatedStrategy messageIdEnabled (advanced) When sending, specifies whether message IDs should be added. This is just an hint to the JMS broker. If the JMS provider accepts this hint, these messages must have the message ID set to null; if the provider ignores the hint, the message ID must be set to its normal unique value. true boolean messageListenerContainerFactory (advanced) Registry ID of the MessageListenerContainerFactory used to determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use to consume messages. Setting this will automatically set consumerType to Custom. MessageListenerContainerFactory messageTimestampEnabled (advanced) Specifies whether timestamps should be enabled by default on sending messages. This is just an hint to the JMS broker. If the JMS provider accepts this hint, these messages must have the timestamp set to zero; if the provider ignores the hint the timestamp must be set to its normal value. true boolean pubSubNoLocal (advanced) Specifies whether to inhibit the delivery of messages published by its own connection. false boolean receiveTimeout (advanced) The timeout for receiving messages (in milliseconds). 1000 long recoveryInterval (advanced) Specifies the interval between recovery attempts, i.e. when a connection is being refreshed, in milliseconds. The default is 5000 ms, that is, 5 seconds. 5000 long requestTimeoutCheckerInterval (advanced) Configures how often Camel should check for timed out Exchanges when doing request/reply over JMS. By default Camel checks once per second. But if you must react faster when a timeout occurs, then you can lower this interval, to check more frequently. The timeout is determined by the option requestTimeout. 1000 long synchronous (advanced) Sets whether synchronous processing should be strictly used. false boolean transferException (advanced) If enabled and you are using Request Reply messaging (InOut) and an Exchange failed on the consumer side, then the caused Exception will be send back in response as a javax.jms.ObjectMessage. If the client is Camel, the returned Exception is rethrown. This allows you to use Camel JMS as a bridge in your routing - for example, using persistent queues to enable robust routing. Notice that if you also have transferExchange enabled, this option takes precedence. The caught exception is required to be serializable. The original Exception on the consumer side can be wrapped in an outer exception such as org.apache.camel.RuntimeCamelException when returned to the producer. Use this with caution as the data is using Java Object serialization and requires the received to be able to deserialize the data at Class level, which forces a strong coupling between the producers and consumer!. false boolean transferExchange (advanced) You can transfer the exchange over the wire instead of just the body and headers. 
The following fields are transferred: In body, Out body, Fault body, In headers, Out headers, Fault headers, exchange properties, exchange exception. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. You must enable this option on both the producer and consumer side, so Camel knows the payloads is an Exchange and not a regular payload. Use this with caution as the data is using Java Object serialization and requires the receiver to be able to deserialize the data at Class level, which forces a strong coupling between the producers and consumers having to use compatible Camel versions!. false boolean useMessageIDAsCorrelationID (advanced) Specifies whether JMSMessageID should always be used as JMSCorrelationID for InOut messages. false boolean waitForProvisionCorrelationToBeUpdatedCounter (advanced) Number of times to wait for provisional correlation id to be updated to the actual correlation id when doing request/reply over JMS and when the option useMessageIDAsCorrelationID is enabled. 50 int waitForProvisionCorrelationToBeUpdatedThreadSleepingTime (advanced) Interval in millis to sleep each time while waiting for provisional correlation id to be updated. 100 long errorHandlerLoggingLevel (logging) Allows to configure the default errorHandler logging level for logging uncaught exceptions. Enum values: TRACE DEBUG INFO WARN ERROR OFF WARN LoggingLevel errorHandlerLogStackTrace (logging) Allows to control whether stacktraces should be logged or not, by the default errorHandler. true boolean password (security) Password to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory. String username (security) Username to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory. String transacted (transaction) Specifies whether to use transacted mode. false boolean transactedInOut (transaction) Specifies whether InOut operations (request reply) default to using transacted mode If this flag is set to true, then Spring JmsTemplate will have sessionTransacted set to true, and the acknowledgeMode as transacted on the JmsTemplate used for InOut operations. Note from Spring JMS: that within a JTA transaction, the parameters passed to createQueue, createTopic methods are not taken into account. Depending on the Java EE transaction context, the container makes its own decisions on these values. Analogously, these parameters are not taken into account within a locally managed transaction either, since Spring JMS operates on an existing JMS Session in this case. Setting this flag to true will use a short local JMS transaction when running outside of a managed transaction, and a synchronized local JMS transaction in case of a managed transaction (other than an XA transaction) being present. This has the effect of a local JMS transaction being managed alongside the main transaction (which might be a native JDBC transaction), with the JMS transaction committing right after the main transaction. false boolean lazyCreateTransactionManager (transaction (advanced)) If true, Camel will create a JmsTransactionManager, if there is no transactionManager injected when option transacted=true. true boolean transactionManager (transaction (advanced)) The Spring transaction manager to use. PlatformTransactionManager transactionName (transaction (advanced)) The name of the transaction to use. 
String transactionTimeout (transaction (advanced)) The timeout value of the transaction (in seconds), if using transacted mode. -1 int 52.6. Samples JMS is used in many examples for other components as well. But we provide a few samples below to get started. 52.6.1. Receiving from JMS In the following sample we configure a route that receives JMS messages and routes the message to a POJO: from("jms:queue:foo"). to("bean:myBusinessLogic"); You can of course use any of the EIP patterns so the route can be context based. For example, here's how to filter an order topic for the big spenders: from("jms:topic:OrdersTopic"). filter().method("myBean", "isGoldCustomer"). to("jms:queue:BigSpendersQueue"); 52.6.2. Sending to JMS In the sample below we poll a file folder and send the file content to a JMS topic. As we want the content of the file as a TextMessage instead of a BytesMessage , we need to convert the body to a String : from("file://orders"). convertBodyTo(String.class). to("jms:topic:OrdersTopic"); 52.6.3. Using Annotations Camel also has annotations so you can use POJO Consuming and POJO Producing. 52.6.4. Spring DSL sample The preceding examples use the Java DSL. Camel also supports Spring XML DSL. Here is the big spender sample using Spring DSL: <route> <from uri="jms:topic:OrdersTopic"/> <filter> <method ref="myBean" method="isGoldCustomer"/> <to uri="jms:queue:BigSpendersQueue"/> </filter> </route> 52.6.5. Other samples JMS appears in many of the examples for other components and EIP patterns, as well in this Camel documentation. So feel free to browse the documentation. 52.6.6. Using JMS as a Dead Letter Queue storing Exchange Normally, when using JMS as the transport, it only transfers the body and headers as the payload. If you want to use JMS with a Dead Letter Channel , using a JMS queue as the Dead Letter Queue, then normally the caused Exception is not stored in the JMS message. You can, however, use the transferExchange option on the JMS dead letter queue to instruct Camel to store the entire Exchange in the queue as a javax.jms.ObjectMessage that holds a org.apache.camel.support.DefaultExchangeHolder . This allows you to consume from the Dead Letter Queue and retrieve the caused exception from the Exchange property with the key Exchange.EXCEPTION_CAUGHT . The demo below illustrates this: // setup error handler to use JMS as queue and store the entire Exchange errorHandler(deadLetterChannel("jms:queue:dead?transferExchange=true")); Then you can consume from the JMS queue and analyze the problem: from("jms:queue:dead").to("bean:myErrorAnalyzer"); // and in our bean String body = exchange.getIn().getBody(); Exception cause = exchange.getProperty(Exchange.EXCEPTION_CAUGHT, Exception.class); // the cause message is String problem = cause.getMessage(); 52.6.7. Using JMS as a Dead Letter Channel storing error only You can use JMS to store the cause error message or to store a custom body, which you can initialize yourself. The following example uses the Message Translator EIP to do a transformation on the failed exchange before it is moved to the JMS dead letter queue: // we sent it to a seda dead queue first errorHandler(deadLetterChannel("seda:dead")); // and on the seda dead queue we can do the custom transformation before its sent to the JMS queue from("seda:dead").transform(exceptionMessage()).to("jms:queue:dead"); Here we only store the original cause error message in the transform. You can, however, use any Expression to send whatever you like. 
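For instance, the transform step can delegate to a bean that builds a richer dead letter body. The following is only a sketch under assumptions: the failureSummary bean name, its describe method, and the FailureSummary class are illustrative and not part of the original example.
// same error handler as above: failed exchanges go to a seda dead queue first
errorHandler(deadLetterChannel("seda:dead"));

// a hypothetical bean, registered in the registry as "failureSummary", builds the body sent to the JMS dead letter queue
from("seda:dead")
    .transform(method("failureSummary", "describe"))
    .to("jms:queue:dead");

// the hypothetical bean summarizes the failure using the caught exception stored on the Exchange
public class FailureSummary {
    public String describe(Exchange exchange) {
        Exception cause = exchange.getProperty(Exchange.EXCEPTION_CAUGHT, Exception.class);
        return "Failed exchange " + exchange.getExchangeId() + ": "
                + (cause != null ? cause.getMessage() : "unknown cause");
    }
}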
For example, you can invoke a method on a Bean, as sketched above, or use a custom processor.
52.7. Message Mapping between JMS and Camel
Camel automatically maps messages between javax.jms.Message and org.apache.camel.Message . When sending a JMS message, Camel converts the message body to the following JMS message types: Body Type JMS Message Comment String javax.jms.TextMessage org.w3c.dom.Node javax.jms.TextMessage The DOM will be converted to String. Map javax.jms.MapMessage java.io.Serializable javax.jms.ObjectMessage byte[] javax.jms.BytesMessage java.io.File javax.jms.BytesMessage java.io.Reader javax.jms.BytesMessage java.io.InputStream javax.jms.BytesMessage java.nio.ByteBuffer javax.jms.BytesMessage When receiving a JMS message, Camel converts the JMS message to the following body type: JMS Message Body Type javax.jms.TextMessage String javax.jms.BytesMessage byte[] javax.jms.MapMessage Map<String, Object> javax.jms.ObjectMessage Object
52.7.1. Disabling auto-mapping of JMS messages
You can use the mapJmsMessage option to disable the auto-mapping above. If disabled, Camel does not try to map the received JMS message, but instead uses it directly as the payload. This allows you to avoid the overhead of mapping and lets Camel just pass through the JMS message. For instance, it even allows you to route javax.jms.ObjectMessage JMS messages with classes you do not have on the classpath.
52.7.2. Using a custom MessageConverter
You can use the messageConverter option to do the mapping yourself in a Spring org.springframework.jms.support.converter.MessageConverter class. For example, in the route below we use a custom message converter when sending a message to the JMS order queue:
from("file://inbox/order").to("jms:queue:order?messageConverter=#myMessageConverter");
You can also use a custom message converter when consuming from a JMS destination.
52.7.3. Controlling the mapping strategy selected
You can use the jmsMessageType option on the endpoint URL to force a specific message type for all messages. In the route below, we poll files from a folder and send them as javax.jms.TextMessage because we have forced the JMS producer endpoint to use text messages:
from("file://inbox/order").to("jms:queue:order?jmsMessageType=Text");
You can also specify the message type to use for each message by setting the header with the key CamelJmsMessageType . For example:
from("file://inbox/order").setHeader("CamelJmsMessageType", JmsMessageType.Text).to("jms:queue:order");
The possible values are defined in the enum class, org.apache.camel.jms.JmsMessageType .
52.8. Message format when sending
The exchange that is sent over the JMS wire must conform to the JMS Message spec . For the exchange.in.header the following rules apply for the header keys : Keys starting with JMS or JMSX are reserved. exchange.in.headers keys must be literals and all be valid Java identifiers (do not use dots in the key name). Camel replaces dots and hyphens when sending, and performs the reverse replacement when consuming JMS messages: . is replaced by `DOT` and the reverse replacement when Camel consumes the message. - is replaced by `HYPHEN` and the reverse replacement when Camel consumes the message. See also the option jmsKeyFormatStrategy , which allows use of your own custom strategy for formatting keys. For the exchange.in.header , the following rules apply for the header values : The values must be primitives or their counterpart objects (such as Integer , Long , Character ).
The types String , CharSequence , Date , BigDecimal and BigInteger are all converted to their toString() representation. All other types are dropped. Camel will log with category org.apache.camel.component.jms.JmsBinding at DEBUG level if it drops a given header value. For example:
52.9. Message format when receiving
Camel adds the following properties to the Exchange when it receives a message: Property Type Description org.apache.camel.jms.replyDestination javax.jms.Destination The reply destination. Camel adds the following JMS properties to the In message headers when it receives a JMS message: Header Type Description JMSCorrelationID String The JMS correlation ID. JMSDeliveryMode int The JMS delivery mode. JMSDestination javax.jms.Destination The JMS destination. JMSExpiration long The JMS expiration. JMSMessageID String The JMS unique message ID. JMSPriority int The JMS priority (with 0 as the lowest priority and 9 as the highest). JMSRedelivered boolean Whether the JMS message is redelivered. JMSReplyTo javax.jms.Destination The JMS reply-to destination. JMSTimestamp long The JMS timestamp. JMSType String The JMS type. JMSXGroupID String The JMS group ID. As all the above information is standard JMS, you can check the JMS documentation for further details.
52.10. About using Camel to send and receive messages and JMSReplyTo
The JMS component is complex and you have to pay close attention to how it works in some cases. So this is a short summary of some of the areas/pitfalls to look for. When Camel sends a message using its JMSProducer , it checks the following conditions: The message exchange pattern, Whether a JMSReplyTo was set in the endpoint or in the message headers, Whether any of the following options have been set on the JMS endpoint: disableReplyTo , preserveMessageQos , explicitQosEnabled . All this can be a tad complex to understand and configure to support your use case.
52.10.1. JmsProducer
The JmsProducer behaves as follows, depending on configuration: Exchange Pattern Other options Description InOut - Camel will expect a reply, set a temporary JMSReplyTo , and after sending the message, it will start to listen for the reply message on the temporary queue. InOut JMSReplyTo is set Camel will expect a reply and, after sending the message, it will start to listen for the reply message on the specified JMSReplyTo queue. InOnly - Camel will send the message and not expect a reply. InOnly JMSReplyTo is set By default, Camel discards the JMSReplyTo destination and clears the JMSReplyTo header before sending the message. Camel then sends the message and does not expect a reply. Camel logs this at WARN level (changed to DEBUG level from Camel 2.6 onwards). You can use preserveMessageQos=true to instruct Camel to keep the JMSReplyTo . In both cases the JmsProducer does not expect any reply and thus continues after sending the message.
52.10.2. JmsConsumer
The JmsConsumer behaves as follows, depending on configuration: Exchange Pattern Other options Description InOut - Camel will send the reply back to the JMSReplyTo queue. InOnly - Camel will not send a reply back, as the pattern is InOnly . - disableReplyTo=true This option suppresses replies. So pay attention to the message exchange pattern set on your exchanges. If you send a message to a JMS destination in the middle of your route you can specify the exchange pattern to use; see more at Request Reply.
This is useful if you want to send an InOnly message to a JMS topic:
from("activemq:queue:in") .to("bean:validateOrder") .to(ExchangePattern.InOnly, "activemq:topic:order") .to("bean:handleOrder");
52.11. Reuse endpoint and send to different destinations computed at runtime
If you need to send messages to a lot of different JMS destinations, it makes sense to reuse a JMS endpoint and specify the real destination in a message header. This allows Camel to reuse the same endpoint, but send to different destinations. This greatly reduces the number of endpoints created and economizes on memory and thread resources. You can specify the destination in the following headers: Header Type Description CamelJmsDestination javax.jms.Destination A destination object. CamelJmsDestinationName String The destination name. For example, the following route shows how you can compute a destination at run time and use it to override the destination appearing in the JMS URL:
from("file://inbox") .to("bean:computeDestination") .to("activemq:queue:dummy");
The queue name, dummy , is just a placeholder. It must be provided as part of the JMS endpoint URL, but it will be ignored in this example. In the computeDestination bean, specify the real destination by setting the CamelJmsDestinationName header as follows:
public void setJmsHeader(Exchange exchange) { String id = .... exchange.getIn().setHeader("CamelJmsDestinationName", "order:" + id); }
Then Camel will read this header and use it as the destination instead of the one configured on the endpoint. So, in this example Camel sends the message to activemq:queue:order:2 , assuming the id value was 2. If both the CamelJmsDestination and the CamelJmsDestinationName headers are set, CamelJmsDestination takes priority. Keep in mind that the JMS producer removes both CamelJmsDestination and CamelJmsDestinationName headers from the exchange and does not propagate them to the created JMS message, in order to avoid accidental loops in the routes (in scenarios where the message is forwarded to another JMS endpoint).
52.12. Configuring different JMS providers
You can configure your JMS provider in Spring XML as follows: Basically, you can configure as many JMS component instances as you wish and give them a unique name using the id attribute. The preceding example configures an activemq component. You could do the same to configure MQSeries, TibCo, BEA, Sonic and so on. Once you have a named JMS component, you can then refer to endpoints within that component using URIs. For example, for the component name activemq , you can then refer to destinations using the URI format activemq:[queue:|topic:]destinationName . You can use the same approach for all other JMS providers. This works by the SpringCamelContext lazily fetching components from the Spring context for the scheme name you use for endpoint URIs and having the Component resolve the endpoint URIs.
52.12.1. Using JNDI to find the ConnectionFactory
If you are using a J2EE container, you might need to look up the JMS ConnectionFactory in JNDI rather than use the usual <bean> mechanism in Spring. You can do this using Spring's factory bean or the new Spring XML namespace. For example:
<bean id="weblogic" class="org.apache.camel.component.jms.JmsComponent"> <property name="connectionFactory" ref="myConnectionFactory"/> </bean> <jee:jndi-lookup id="myConnectionFactory" jndi-name="jms/connectionFactory"/>
See The jee schema in the Spring reference documentation for more details about JNDI lookup.
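Whichever way the ConnectionFactory is obtained, the named component itself can also be registered in plain Java. The following is only a minimal sketch under assumptions: the broker URL, the DefaultCamelContext setup and the use of ActiveMQConnectionFactory are illustrative and not part of the original text.
// register a JMS component under the scheme name "activemq" so routes can use activemq:queue:... URIs
CamelContext context = new DefaultCamelContext();
// any javax.jms.ConnectionFactory works here; the broker URL is a placeholder
ConnectionFactory connectionFactory = new ActiveMQConnectionFactory("tcp://localhost:61616");
context.addComponent("activemq", JmsComponent.jmsComponentAutoAcknowledge(connectionFactory));
Routes started in that context can then refer to activemq:[queue:|topic:]destinationName exactly as with the Spring-configured component.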
52.13. Concurrent Consuming
A common requirement with JMS is to consume messages concurrently in multiple threads in order to make an application more responsive. You can set the concurrentConsumers option to specify the number of threads servicing the JMS endpoint, as follows:
from("jms:SomeQueue?concurrentConsumers=20"). bean(MyClass.class);
You can configure this option in one of the following ways: On the JmsComponent , On the endpoint URI or, By invoking setConcurrentConsumers() directly on the JmsEndpoint .
52.13.1. Concurrent Consuming with async consumer
Notice that each concurrent consumer will only pick up the next available message from the JMS broker when the current message has been fully processed. You can set the option asyncConsumer=true to let the consumer pick up the message from the JMS queue while the message is being processed asynchronously (by the Asynchronous Routing Engine). See more details in the table at the top of the page about the asyncConsumer option.
from("jms:SomeQueue?concurrentConsumers=20&asyncConsumer=true"). bean(MyClass.class);
52.14. Request/reply over JMS
Camel supports request/reply over JMS. In essence the MEP of the Exchange should be InOut when you send a message to a JMS queue. Camel offers a number of options to configure request/reply over JMS that influence performance and clustered environments. The table below summarizes the options. Option Performance Cluster Description Temporary Fast Yes A temporary queue is used as reply queue, and automatically created by Camel. To use this, do not specify a replyTo queue name. And you can optionally configure replyToType=Temporary to make it stand out that temporary queues are in use. Shared Slow Yes A shared persistent queue is used as reply queue. The queue must be created beforehand, although some brokers can create them on the fly, such as Apache ActiveMQ. To use this you must specify the replyTo queue name. And you can optionally configure replyToType=Shared to make it stand out that shared queues are in use. A shared queue can be used in a clustered environment with multiple nodes running this Camel application at the same time, all using the same shared reply queue. This is possible because JMS Message selectors are used to correlate expected reply messages; this impacts performance though. JMS Message selectors are slower, and therefore not as fast as Temporary or Exclusive queues. See further below how to tweak this for better performance. Exclusive Fast No (*Yes) An exclusive persistent queue is used as reply queue. The queue must be created beforehand, although some brokers can create them on the fly, such as Apache ActiveMQ. To use this you must specify the replyTo queue name. And you must configure replyToType=Exclusive to instruct Camel to use exclusive queues, as Shared is used by default if a replyTo queue name was configured. When using exclusive reply queues, JMS Message selectors are not in use, and therefore other applications must not use this queue as well. An exclusive queue cannot be used in a clustered environment with multiple nodes running this Camel application at the same time, because there is no control over whether the reply message comes back to the same node that sent the request message; that is why shared queues use JMS Message selectors to make sure of this. Though if you configure each Exclusive reply queue with a unique name per node, then you can run this in a clustered environment, as the reply message will then be sent back to the queue for the given node that awaits the reply message.
replyToConcurrentConsumers Fast Yes Allows processing reply messages concurrently using concurrent message listeners. You can specify a range using the replyToConcurrentConsumers and replyToMaxConcurrentConsumers options. Notice that using Shared reply queues may not work as well with concurrent listeners, so use this option with care. replyToMaxConcurrentConsumers Fast Yes Allows processing reply messages concurrently using concurrent message listeners. You can specify a range using the replyToConcurrentConsumers and replyToMaxConcurrentConsumers options. Notice that using Shared reply queues may not work as well with concurrent listeners, so use this option with care.
The JmsProducer detects the InOut exchange pattern and provides a JMSReplyTo header with the reply destination to be used. By default Camel uses a temporary queue, but you can use the replyTo option on the endpoint to specify a fixed reply queue (see more below about fixed reply queues). Camel will automatically set up a consumer that listens on the reply queue, so you do not need to do anything. This consumer is a Spring DefaultMessageListenerContainer which listens for replies. However it is fixed to 1 concurrent consumer. That means replies will be processed in sequence, as there is only one thread to process the replies. You can configure the listener to use concurrent threads using the replyToConcurrentConsumers and replyToMaxConcurrentConsumers options. This makes it easier to configure in Camel, as shown below:
from(xxx) .inOut().to("activemq:queue:foo?replyToConcurrentConsumers=5") .to(yyy) .to(zzz);
In this route we instruct Camel to route replies asynchronously using a thread pool with 5 threads.
52.14.1. Request/reply over JMS and using a shared fixed reply queue
You can use a fixed reply queue when doing request/reply over JMS as shown in the example below.
from(xxx) .inOut().to("activemq:queue:foo?replyTo=bar") .to(yyy)
In this example the fixed reply queue named "bar" is used. By default Camel assumes the queue is shared when using fixed reply queues, and therefore it uses a JMSSelector to only pick up the expected reply messages (for example, based on the JMSCorrelationID ). See the next section for exclusive fixed reply queues. This means it is not as fast as temporary queues. You can speed up how often Camel polls for reply messages using the receiveTimeout option. By default it is 1000 millis. So to make it faster you can set it to 250 millis to poll 4 times per second, as shown:
from(xxx) .inOut().to("activemq:queue:foo?replyTo=bar&receiveTimeout=250") .to(yyy)
Notice this causes Camel to send pull requests to the message broker more frequently, and thus requires more network traffic. It is generally recommended to use temporary queues if possible.
52.14.2. Request/reply over JMS and using an exclusive fixed reply queue
In the previous example, Camel assumed the fixed reply queue named "bar" was shared, and thus it uses a JMSSelector to only consume reply messages which it expects. However there is a drawback doing this, as the JMS selector is slower. Also the consumer on the reply queue is slower to update with new JMS selector ids. In fact it only updates when the receiveTimeout option times out, which by default is 1 second. So in theory the reply messages could take up to about 1 second to be detected. On the other hand, if the fixed reply queue is exclusive to the Camel reply consumer, then we can avoid using JMS selectors, and thus be more performant. In fact, as fast as using temporary queues.
There is the ReplyToType option which you can configure to Exclusive to tell Camel that the reply queue is exclusive, as shown in the example below:
from(xxx) .inOut().to("activemq:queue:foo?replyTo=bar&replyToType=Exclusive") .to(yyy)
Mind that the queue must be exclusive to each and every endpoint. So if you have two routes, then they each need a unique reply queue, as shown in the example:
from(xxx) .inOut().to("activemq:queue:foo?replyTo=bar&replyToType=Exclusive") .to(yyy)
from(aaa) .inOut().to("activemq:queue:order?replyTo=order.reply&replyToType=Exclusive") .to(bbb)
The same applies if you run in a clustered environment. Then each node in the cluster must use a unique reply queue name, as otherwise each node in the cluster may pick up messages that were intended as a reply on another node. For clustered environments it is recommended to use shared reply queues instead.
52.15. Synchronizing clocks between senders and receivers
When doing messaging between systems, it is desirable that the systems have synchronized clocks. For example, when sending a JMS message you can set a time to live value on the message. The receiver can then inspect this value and determine if the message is already expired, and thus drop the message instead of consuming and processing it. However this requires that both sender and receiver have synchronized clocks. If you are using ActiveMQ then you can use the timestamp plugin to synchronize clocks.
52.16. About time to live
Read the section above about synchronized clocks first. When you do request/reply (InOut) over JMS with Camel, then Camel uses a timeout on the sender side, which defaults to 20 seconds from the requestTimeout option. You can control this by setting a higher/lower value. However the time to live value is still set on the message being sent. So that requires the clocks to be synchronized between the systems. If they are not, then you may want to disable the time to live value being set. This is possible using the disableTimeToLive option from Camel 2.8 onwards. So if you set this option to disableTimeToLive=true , then Camel does not set any time to live value when sending JMS messages. But the request timeout is still active. So for example if you do request/reply over JMS and have disabled time to live, then Camel will still use a timeout of 20 seconds (the requestTimeout option). That option can of course also be configured. So the two options requestTimeout and disableTimeToLive give you fine-grained control when doing request/reply. You can provide a header in the message to override and use as the request timeout value instead of the endpoint configured value. For example:
from("direct:someWhere") .to("jms:queue:foo?replyTo=bar&requestTimeout=30s") .to("bean:processReply");
In the route above we have an endpoint configured requestTimeout of 30 seconds. So Camel will wait up to 30 seconds for that reply message to come back on the bar queue. If no reply message is received then an org.apache.camel.ExchangeTimedOutException is set on the Exchange and Camel continues routing the message, which would then fail due to the exception, and Camel's error handler reacts. If you want to use a per message timeout value, you can set the header with key org.apache.camel.component.jms.JmsConstants#JMS_REQUEST_TIMEOUT which has constant value "CamelJmsRequestTimeout" with a timeout value as long type.
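As a minimal sketch (the fixed value here is illustrative and not part of the original text), the header can be set to a constant per-message timeout directly in the route; messages that do not carry the header fall back to the endpoint configured requestTimeout:
from("direct:someWhere")
    // hypothetical fixed per-message timeout of 10 seconds, overriding the 30s endpoint value
    .setHeader(JmsConstants.JMS_REQUEST_TIMEOUT, constant(10000L))
    .to("jms:queue:foo?replyTo=bar&requestTimeout=30s")
    .to("bean:processReply");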
For example, we can use a bean to compute the timeout value per individual message, such as calling the "whatIsTheTimeout" method on the service bean as shown below:
from("direct:someWhere") .setHeader("CamelJmsRequestTimeout", method(ServiceBean.class, "whatIsTheTimeout")) .to("jms:queue:foo?replyTo=bar&requestTimeout=30s") .to("bean:processReply");
When you do fire and forget (InOnly) over JMS with Camel, then Camel by default does not set any time to live value on the message. You can configure a value by using the timeToLive option. For example, to indicate a time to live of 5 seconds, you set timeToLive=5000 . The option disableTimeToLive can be used to force disabling the time to live, also for InOnly messaging. The requestTimeout option is not used for InOnly messaging.
52.17. Enabling Transacted Consumption
A common requirement is to consume from a queue in a transaction and then process the message using the Camel route. To do this, just ensure that you set the following properties on the component/endpoint: transacted = true transactionManager = a Transaction Manager - typically the JmsTransactionManager See the Transactional Client EIP pattern for further details.
Transactions and Request Reply over JMS
When using Request Reply over JMS you cannot use a single transaction; JMS will not send any messages until a commit is performed, so the server side won't receive anything at all until the transaction commits. Therefore to use Request Reply you must commit a transaction after sending the request and then use a separate transaction for receiving the response. To address this issue the JMS component uses different properties to specify transaction use for oneway messaging and request reply messaging: The transacted property applies only to the InOnly message Exchange Pattern (MEP). You can leverage the DMLC transacted session API using the following properties on component/endpoint: transacted = true lazyCreateTransactionManager = false The benefit of doing so is that the cacheLevel setting will be honored when using local transactions without a configured TransactionManager. When a TransactionManager is configured, no caching happens at DMLC level and it is necessary to rely on a pooled connection factory. For more details about this kind of setup, see here and here .
52.18. Using JMSReplyTo for late replies
When using Camel as a JMS listener, it sets an Exchange property with the value of the ReplyTo javax.jms.Destination object, having the key ReplyTo . You can obtain this Destination as follows:
Destination replyDestination = exchange.getIn().getHeader(JmsConstants.JMS_REPLY_DESTINATION, Destination.class);
And then later use it to send a reply using regular JMS or Camel.
// we need to pass in the JMS component, and in this sample we use ActiveMQ
JmsEndpoint endpoint = JmsEndpoint.newInstance(replyDestination, activeMQComponent);
// now we have the endpoint we can use regular Camel API to send a message to it
template.sendBody(endpoint, "Here is the late reply.");
A different solution to sending a reply is to provide the replyDestination object in the same Exchange property when sending. Camel will then pick up this property and use it for the real destination. The endpoint URI must include a dummy destination, however.
For example:
// we pretend to send it to some non-existing dummy queue
template.send("activemq:queue:dummy", new Processor() {
    public void process(Exchange exchange) throws Exception {
        // and here we override the destination with the ReplyTo destination object so the message is sent there instead of to dummy
        exchange.getIn().setHeader(JmsConstants.JMS_DESTINATION, replyDestination);
        exchange.getIn().setBody("Here is the late reply.");
    }
});
52.19. Using a request timeout
In the sample below we send a Request Reply style message Exchange (we use the requestBody method = InOut ) to the slow queue for further processing in Camel and we wait for a return reply:
52.20. Sending an InOnly message and keeping the JMSReplyTo header
When sending to a JMS destination using camel-jms, the producer will use the MEP to detect if it is InOnly or InOut messaging. However there can be times where you want to send an InOnly message but keep the JMSReplyTo header. To do so you have to instruct Camel to keep it, otherwise the JMSReplyTo header will be dropped. For example, to send an InOnly message to the foo queue, but with a JMSReplyTo set to the bar queue, you can do as follows:
template.send("activemq:queue:foo?preserveMessageQos=true", new Processor() {
    public void process(Exchange exchange) throws Exception {
        exchange.getIn().setBody("World");
        exchange.getIn().setHeader("JMSReplyTo", "bar");
    }
});
Notice we use preserveMessageQos=true to instruct Camel to keep the JMSReplyTo header.
52.21. Setting JMS provider options on the destination
Some JMS providers, like IBM's WebSphere MQ, need options to be set on the JMS destination. For example, you may need to specify the targetClient option. Since targetClient is a WebSphere MQ option and not a Camel URI option, you need to set that on the JMS destination name like so:
// ...
.setHeader("CamelJmsDestinationName", constant("queue:///MY_QUEUE?targetClient=1"))
.to("wmq:queue:MY_QUEUE?useMessageIDAsCorrelationID=true");
Some versions of WMQ won't accept this option on the destination name and you will get an exception like:
A workaround is to use a custom DestinationResolver:
JmsComponent wmq = new JmsComponent(connectionFactory);
wmq.setDestinationResolver(new DestinationResolver() {
    public Destination resolveDestinationName(Session session, String destinationName, boolean pubSubDomain) throws JMSException {
        MQQueueSession wmqSession = (MQQueueSession) session;
        return wmqSession.createQueue("queue:///" + destinationName + "?targetClient=1");
    }
});
52.22. Spring Boot Auto-Configuration
The component supports 99 options, which are listed below. Name Description Default Type camel.component.jms.accept-messages-while-stopping Specifies whether the consumer accepts messages while it is stopping. You may consider enabling this option, if you start and stop JMS routes at runtime, while there are still messages enqueued on the queue. If this option is false, and you stop the JMS route, then messages may be rejected, and the JMS broker would have to attempt redeliveries, which yet again may be rejected, and eventually the message may be moved to a dead letter queue on the JMS broker. To avoid this it is recommended to enable this option. false Boolean camel.component.jms.acknowledgement-mode-name The JMS acknowledgement name, which is one of: SESSION_TRANSACTED, CLIENT_ACKNOWLEDGE, AUTO_ACKNOWLEDGE, DUPS_OK_ACKNOWLEDGE.
AUTO_ACKNOWLEDGE String camel.component.jms.allow-additional-headers This option is used to allow additional headers which may have values that are invalid according to JMS specification. For example some message systems such as WMQ do this with header names using prefix JMS_IBM_MQMD_ containing values with byte array or other invalid types. You can specify multiple header names separated by comma, and use as suffix for wildcard matching. String camel.component.jms.allow-auto-wired-connection-factory Whether to auto-discover ConnectionFactory from the registry, if no connection factory has been configured. If only one instance of ConnectionFactory is found then it will be used. This is enabled by default. true Boolean camel.component.jms.allow-auto-wired-destination-resolver Whether to auto-discover DestinationResolver from the registry, if no destination resolver has been configured. If only one instance of DestinationResolver is found then it will be used. This is enabled by default. true Boolean camel.component.jms.allow-null-body Whether to allow sending messages with no body. If this option is false and the message body is null, then an JMSException is thrown. true Boolean camel.component.jms.allow-reply-manager-quick-stop Whether the DefaultMessageListenerContainer used in the reply managers for request/reply messaging allow the DefaultMessageListenerContainer.runningAllowed flag to quick stop in case JmsConfiguration#isAcceptMessagesWhileStopping is enabled, and org.apache.camel.CamelContext is currently being stopped. This quick stop ability is enabled by default in the regular JMS consumers but to enable for reply managers you must enable this flag. false Boolean camel.component.jms.allow-serialized-headers Controls whether or not to include serialized headers. Applies only when transferExchange is true. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. false Boolean camel.component.jms.always-copy-message If true, Camel will always make a JMS message copy of the message when it is passed to the producer for sending. Copying the message is needed in some situations, such as when a replyToDestinationSelectorName is set (incidentally, Camel will set the alwaysCopyMessage option to true, if a replyToDestinationSelectorName is set). false Boolean camel.component.jms.artemis-consumer-priority Consumer priorities allow you to ensure that high priority consumers receive messages while they are active. Normally, active consumers connected to a queue receive messages from it in a round-robin fashion. When consumer priorities are in use, messages are delivered round-robin if multiple active consumers exist with the same high priority. Messages will only going to lower priority consumers when the high priority consumers do not have credit available to consume the message, or those high priority consumers have declined to accept the message (for instance because it does not meet the criteria of any selectors associated with the consumer). Integer camel.component.jms.artemis-streaming-enabled Whether optimizing for Apache Artemis streaming mode. This can reduce memory overhead when using Artemis with JMS StreamMessage types. This option must only be enabled if Apache Artemis is being used. false Boolean camel.component.jms.async-consumer Whether the JmsConsumer processes the Exchange asynchronously. 
If enabled then the JmsConsumer may pickup the message from the JMS queue, while the message is being processed asynchronously (by the Asynchronous Routing Engine). This means that messages may be processed not 100% strictly in order. If disabled (as default) then the Exchange is fully processed before the JmsConsumer picks up the message from the JMS queue. Note if transacted has been enabled, then asyncConsumer=true does not run asynchronously, as transaction must be executed synchronously (Camel 3.0 may support async transactions). false Boolean camel.component.jms.async-start-listener Whether to startup the JmsConsumer message listener asynchronously, when starting a route. For example if a JmsConsumer cannot get a connection to a remote JMS broker, then it may block while retrying and/or failover. This will cause Camel to block while starting routes. By setting this option to true, you will let routes startup, while the JmsConsumer connects to the JMS broker using a dedicated thread in asynchronous mode. If this option is used, then beware that if the connection could not be established, then an exception is logged at WARN level, and the consumer will not be able to receive messages; You can then restart the route to retry. false Boolean camel.component.jms.async-stop-listener Whether to stop the JmsConsumer message listener asynchronously, when stopping a route. false Boolean camel.component.jms.auto-startup Specifies whether the consumer container should auto-startup. true Boolean camel.component.jms.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.jms.cache-level Sets the cache level by ID for the underlying JMS resources. See cacheLevelName option for more details. Integer camel.component.jms.cache-level-name Sets the cache level by name for the underlying JMS resources. Possible values are: CACHE_AUTO, CACHE_CONNECTION, CACHE_CONSUMER, CACHE_NONE, and CACHE_SESSION. The default setting is CACHE_AUTO. See the Spring documentation and Transactions Cache Levels for more information. CACHE_AUTO String camel.component.jms.client-id Sets the JMS client ID to use. Note that this value, if specified, must be unique and can only be used by a single JMS connection instance. It is typically only required for durable topic subscriptions with JMS 1.1 String camel.component.jms.concurrent-consumers Specifies the default number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. When doing request/reply over JMS then the option replyToConcurrentConsumers is used to control number of concurrent consumers on the reply message listener. 1 Integer camel.component.jms.configuration To use a shared JMS configuration. The option is a org.apache.camel.component.jms.JmsConfiguration type. JmsConfiguration camel.component.jms.connection-factory The connection factory to be use. A connection factory must be configured either on the component or endpoint. The option is a javax.jms.ConnectionFactory type. ConnectionFactory camel.component.jms.consumer-type The consumer type to use, which can be one of: Simple, Default, or Custom. 
The consumer type determines which Spring JMS listener to use. Default will use org.springframework.jms.listener.DefaultMessageListenerContainer, Simple will use org.springframework.jms.listener.SimpleMessageListenerContainer. When Custom is specified, the MessageListenerContainerFactory defined by the messageListenerContainerFactory option will determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use. ConsumerType camel.component.jms.correlation-property When using the InOut exchange pattern, use this JMS property instead of the JMSCorrelationID JMS property to correlate messages. If set, messages will be correlated solely on the value of this property; the JMSCorrelationID property will be ignored and not set by Camel. String camel.component.jms.default-task-executor-type Specifies what default TaskExecutor type to use in the DefaultMessageListenerContainer, for both consumer endpoints and the ReplyTo consumer of producer endpoints. Possible values: SimpleAsync (uses Spring's SimpleAsyncTaskExecutor) or ThreadPool (uses Spring's ThreadPoolTaskExecutor with optimal values - cached threadpool-like). If not set, it defaults to the previous behaviour, which uses a cached thread pool for consumer endpoints and SimpleAsync for reply consumers. The use of ThreadPool is recommended to reduce thread trash in elastic configurations with dynamically increasing and decreasing concurrent consumers. DefaultTaskExecutorType camel.component.jms.delivery-delay Sets the delivery delay to use for send calls for JMS. This option requires a JMS 2.0 compliant broker. -1 Long camel.component.jms.delivery-mode Specifies the delivery mode to be used. Possible values are those defined by javax.jms.DeliveryMode. NON_PERSISTENT = 1 and PERSISTENT = 2. Integer camel.component.jms.delivery-persistent Specifies whether persistent delivery is used by default. true Boolean camel.component.jms.destination-resolver A pluggable org.springframework.jms.support.destination.DestinationResolver that allows you to use your own resolver (for example, to look up the real destination in a JNDI registry). The option is a org.springframework.jms.support.destination.DestinationResolver type. DestinationResolver camel.component.jms.disable-reply-to Specifies whether Camel ignores the JMSReplyTo header in messages. If true, Camel does not send a reply back to the destination specified in the JMSReplyTo header. You can use this option if you want Camel to consume from a route and you do not want Camel to automatically send back a reply message because another component in your code handles the reply message. You can also use this option if you want to use Camel as a proxy between different message brokers and you want to route messages from one system to another. false Boolean camel.component.jms.disable-time-to-live Use this option to force disabling time to live. For example when you do request/reply over JMS, then Camel will by default use the requestTimeout value as time to live on the message being sent. The problem is that the sender and receiver systems have to have their clocks synchronized, so they are in sync. This is not always so easy to achieve. So you can use disableTimeToLive=true to not set a time to live value on the sent message. Then the message will not expire on the receiver system. See below in section About time to live for more details. false Boolean camel.component.jms.durable-subscription-name The durable subscriber name for specifying durable topic subscriptions. The clientId option must be configured as well.
String camel.component.jms.eager-loading-of-properties Enables eager loading of JMS properties and payload as soon as a message is loaded, which generally is inefficient as the JMS properties may not be required, but sometimes can catch early any issues with the underlying JMS provider and the use of JMS properties. See also the option eagerPoisonBody. false Boolean camel.component.jms.eager-poison-body If eagerLoadingOfProperties is enabled and the JMS message payload (JMS body or JMS properties) is poison (cannot be read/mapped), then set this text as the message body instead so the message can be processed (the cause of the poison is already stored as an exception on the Exchange). This can be turned off by setting eagerPoisonBody=false. See also the option eagerLoadingOfProperties. Poison JMS message due to ${exception.message} String camel.component.jms.enabled Whether to enable auto configuration of the jms component. This is enabled by default. Boolean camel.component.jms.error-handler Specifies a org.springframework.util.ErrorHandler to be invoked in case of any uncaught exceptions thrown while processing a Message. By default these exceptions will be logged at the WARN level, if no errorHandler has been configured. You can configure the logging level and whether stack traces should be logged using the errorHandlerLoggingLevel and errorHandlerLogStackTrace options. This makes it much easier to configure than having to code a custom errorHandler. The option is a org.springframework.util.ErrorHandler type. ErrorHandler camel.component.jms.error-handler-log-stack-trace Allows you to control whether stack traces should be logged or not by the default errorHandler. true Boolean camel.component.jms.error-handler-logging-level Allows you to configure the default errorHandler logging level for logging uncaught exceptions. LoggingLevel camel.component.jms.exception-listener Specifies the JMS Exception Listener that is to be notified of any underlying JMS exceptions. The option is a javax.jms.ExceptionListener type. ExceptionListener camel.component.jms.explicit-qos-enabled Set if the deliveryMode, priority or timeToLive qualities of service should be used when sending messages. This option is based on Spring's JmsTemplate. The deliveryMode, priority and timeToLive options are applied to the current endpoint. This contrasts with the preserveMessageQos option, which operates at message granularity, reading QoS properties exclusively from the Camel In message headers. false Boolean camel.component.jms.expose-listener-session Specifies whether the listener session should be exposed when consuming messages. false Boolean camel.component.jms.force-send-original-message When using mapJmsMessage=false, Camel will create a new JMS message to send to a new JMS destination if you touch the headers (get or set) during the route. Set this option to true to force Camel to send the original JMS message that was received. false Boolean camel.component.jms.format-date-headers-to-iso8601 Sets whether JMS date properties should be formatted according to the ISO 8601 standard. false Boolean camel.component.jms.header-filter-strategy To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter headers to and from the Camel message. The option is a org.apache.camel.spi.HeaderFilterStrategy type. HeaderFilterStrategy camel.component.jms.idle-consumer-limit Specify the limit for the number of consumers that are allowed to be idle at any given time.
1 Integer camel.component.jms.idle-task-execution-limit Specifies the limit for idle executions of a receive task, not having received any message within its execution. If this limit is reached, the task will shut down and leave receiving to other executing tasks (in the case of dynamic scheduling; see the maxConcurrentConsumers setting). There is additional doc available from Spring. 1 Integer camel.component.jms.include-all-j-m-s-x-properties Whether to include all JMSXxxx properties when mapping from JMS to Camel Message. Setting this to true will include properties such as JMSXAppID, and JMSXUserID etc. Note: If you are using a custom headerFilterStrategy then this option does not apply. false Boolean camel.component.jms.include-sent-j-m-s-message-i-d Only applicable when sending to JMS destination using InOnly (eg fire and forget). Enabling this option will enrich the Camel Exchange with the actual JMSMessageID that was used by the JMS client when the message was sent to the JMS destination. false Boolean camel.component.jms.jms-key-format-strategy Pluggable strategy for encoding and decoding JMS keys so they can be compliant with the JMS specification. Camel provides two implementations out of the box: default and passthrough. The default strategy will safely marshal dots and hyphens (. and -). The passthrough strategy leaves the key as is. Can be used for JMS brokers which do not care whether JMS header keys contain illegal characters. You can provide your own implementation of the org.apache.camel.component.jms.JmsKeyFormatStrategy and refer to it using the # notation. JmsKeyFormatStrategy camel.component.jms.jms-message-type Allows you to force the use of a specific javax.jms.Message implementation for sending JMS messages. Possible values are: Bytes, Map, Object, Stream, Text. By default, Camel would determine which JMS message type to use from the In body type. This option allows you to specify it. JmsMessageType camel.component.jms.lazy-create-transaction-manager If true, Camel will create a JmsTransactionManager, if there is no transactionManager injected when option transacted=true. true Boolean camel.component.jms.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.jms.map-jms-message Specifies whether Camel should auto map the received JMS message to a suited payload type, such as javax.jms.TextMessage to a String etc. true Boolean camel.component.jms.max-concurrent-consumers Specifies the maximum number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. When doing request/reply over JMS then the option replyToMaxConcurrentConsumers is used to control number of concurrent consumers on the reply message listener. Integer camel.component.jms.max-messages-per-task The number of messages per task. -1 is unlimited. 
If you use a range for concurrent consumers (eg min max), then this option can be used to set a value to eg 100 to control how fast the consumers will shrink when less work is required. -1 Integer camel.component.jms.message-converter To use a custom Spring org.springframework.jms.support.converter.MessageConverter so you can be in control of how to map to/from a javax.jms.Message. The option is a org.springframework.jms.support.converter.MessageConverter type. MessageConverter camel.component.jms.message-created-strategy To use the given MessageCreatedStrategy which is invoked when Camel creates new instances of javax.jms.Message objects when Camel is sending a JMS message. The option is a org.apache.camel.component.jms.MessageCreatedStrategy type. MessageCreatedStrategy camel.component.jms.message-id-enabled When sending, specifies whether message IDs should be added. This is just a hint to the JMS broker. If the JMS provider accepts this hint, these messages must have the message ID set to null; if the provider ignores the hint, the message ID must be set to its normal unique value. true Boolean camel.component.jms.message-listener-container-factory Registry ID of the MessageListenerContainerFactory used to determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use to consume messages. Setting this will automatically set consumerType to Custom. The option is a org.apache.camel.component.jms.MessageListenerContainerFactory type. MessageListenerContainerFactory camel.component.jms.message-timestamp-enabled Specifies whether timestamps should be enabled by default on sending messages. This is just a hint to the JMS broker. If the JMS provider accepts this hint, these messages must have the timestamp set to zero; if the provider ignores the hint, the timestamp must be set to its normal value. true Boolean camel.component.jms.password Password to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory. String camel.component.jms.preserve-message-qos Set to true if you want to send messages using the QoS settings specified on the message, instead of the QoS settings on the JMS endpoint. The following three headers are considered: JMSPriority, JMSDeliveryMode, and JMSExpiration. You can provide all or only some of them. If not provided, Camel will fall back to use the values from the endpoint instead. So, when using this option, the headers override the values from the endpoint. The explicitQosEnabled option, by contrast, will only use options set on the endpoint, and not values from the message header. false Boolean camel.component.jms.priority Values greater than 1 specify the message priority when sending (where 1 is the lowest priority and 9 is the highest). The explicitQosEnabled option must also be enabled in order for this option to have any effect. 4 Integer camel.component.jms.pub-sub-no-local Specifies whether to inhibit the delivery of messages published by its own connection. false Boolean camel.component.jms.queue-browse-strategy To use a custom QueueBrowseStrategy when browsing queues. The option is a org.apache.camel.component.jms.QueueBrowseStrategy type. QueueBrowseStrategy camel.component.jms.receive-timeout The timeout for receiving messages (in milliseconds). The option is a long type. 1000 Long camel.component.jms.recovery-interval Specifies the interval between recovery attempts, i.e. when a connection is being refreshed, in milliseconds. The default is 5000 ms, that is, 5 seconds.
The option is a long type. 5000 Long camel.component.jms.reply-to Provides an explicit ReplyTo destination (overrides any incoming value of Message.getJMSReplyTo() in consumer). String camel.component.jms.reply-to-cache-level-name Sets the cache level by name for the reply consumer when doing request/reply over JMS. This option only applies when using fixed reply queues (not temporary). Camel will by default use: CACHE_CONSUMER for exclusive or shared w/ replyToSelectorName. And CACHE_SESSION for shared without replyToSelectorName. Some JMS brokers such as IBM WebSphere may require you to set replyToCacheLevelName=CACHE_NONE to work. Note: If using temporary queues then CACHE_NONE is not allowed, and you must use a higher value such as CACHE_CONSUMER or CACHE_SESSION. String camel.component.jms.reply-to-concurrent-consumers Specifies the default number of concurrent consumers when doing request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. 1 Integer camel.component.jms.reply-to-delivery-persistent Specifies whether to use persistent delivery by default for replies. true Boolean camel.component.jms.reply-to-destination-selector-name Sets the JMS Selector using the fixed name to be used so you can filter out your own replies from the others when using a shared queue (that is, if you are not using a temporary reply queue). String camel.component.jms.reply-to-max-concurrent-consumers Specifies the maximum number of concurrent consumers when using request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. Integer camel.component.jms.reply-to-on-timeout-max-concurrent-consumers Specifies the maximum number of concurrent consumers for continued routing when a timeout occurs when using request/reply over JMS. 1 Integer camel.component.jms.reply-to-override Provides an explicit ReplyTo destination in the JMS message, which overrides the setting of replyTo. It is useful if you want to forward the message to a remote Queue and receive the reply message from the ReplyTo destination. String camel.component.jms.reply-to-same-destination-allowed Whether a JMS consumer is allowed to send a reply message to the same destination that the consumer is using to consume from. This prevents an endless loop by consuming and sending back the same message to itself. false Boolean camel.component.jms.reply-to-type Allows for explicitly specifying which kind of strategy to use for replyTo queues when doing request/reply over JMS. Possible values are: Temporary, Shared, or Exclusive. By default Camel will use temporary queues. However if replyTo has been configured, then Shared is used by default. This option allows you to use exclusive queues instead of shared ones. See the Camel JMS documentation for more details, and especially the notes about the implications if running in a clustered environment, and the fact that Shared reply queues have lower performance than the alternatives Temporary and Exclusive. ReplyToType camel.component.jms.request-timeout The timeout for waiting for a reply when using the InOut Exchange Pattern (in milliseconds). The default is 20 seconds. You can include the header CamelJmsRequestTimeout to override this endpoint configured timeout value, and thus have per-message individual timeout values. See also the requestTimeoutCheckerInterval option. The option is a long type.
20000 Long camel.component.jms.request-timeout-checker-interval Configures how often Camel should check for timed out Exchanges when doing request/reply over JMS. By default Camel checks once per second. But if you must react faster when a timeout occurs, then you can lower this interval, to check more frequently. The timeout is determined by the option requestTimeout. The option is a long type. 1000 Long camel.component.jms.selector Sets the JMS selector to use. String camel.component.jms.stream-message-type-enabled Sets whether StreamMessage type is enabled or not. Message payloads of streaming kind such as files, InputStream, etc. will either be sent as BytesMessage or StreamMessage. This option controls which kind will be used. By default BytesMessage is used, which enforces the entire message payload to be read into memory. By enabling this option the message payload is read into memory in chunks and each chunk is then written to the StreamMessage until there is no more data. false Boolean camel.component.jms.subscription-durable Set whether to make the subscription durable. The durable subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a durable subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well. false Boolean camel.component.jms.subscription-name Set the name of a subscription to create. To be applied in case of a topic (pub-sub domain) with a shared or durable subscription. The subscription name needs to be unique within this client's JMS client id. Default is the class name of the specified message listener. Note: Only 1 concurrent consumer (which is the default of this message listener container) is allowed for each subscription, except for a shared subscription (which requires JMS 2.0). String camel.component.jms.subscription-shared Set whether to make the subscription shared. The shared subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a shared subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Note that shared subscriptions may also be durable, so this flag can (and often will) be combined with subscriptionDurable as well. Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well. Requires a JMS 2.0 compatible message broker. false Boolean camel.component.jms.synchronous Sets whether synchronous processing should be strictly used. false Boolean camel.component.jms.task-executor Allows you to specify a custom task executor for consuming messages. The option is a org.springframework.core.task.TaskExecutor type. TaskExecutor camel.component.jms.test-connection-on-startup Specifies whether to test the connection on startup. This ensures that, when Camel starts, all the JMS consumers have a valid connection to the JMS broker. If a connection cannot be granted then Camel throws an exception on startup. This ensures that Camel is not started with failed connections. The JMS producers are tested as well. false Boolean camel.component.jms.time-to-live When sending messages, specifies the time-to-live of the message (in milliseconds).
-1 Long camel.component.jms.transacted Specifies whether to use transacted mode. false Boolean camel.component.jms.transacted-in-out Specifies whether InOut operations (request reply) default to using transacted mode. If this flag is set to true, then Spring JmsTemplate will have sessionTransacted set to true, and the acknowledgeMode as transacted on the JmsTemplate used for InOut operations. Note from Spring JMS: within a JTA transaction, the parameters passed to createQueue, createTopic methods are not taken into account. Depending on the Java EE transaction context, the container makes its own decisions on these values. Analogously, these parameters are not taken into account within a locally managed transaction either, since Spring JMS operates on an existing JMS Session in this case. Setting this flag to true will use a short local JMS transaction when running outside of a managed transaction, and a synchronized local JMS transaction in case of a managed transaction (other than an XA transaction) being present. This has the effect of a local JMS transaction being managed alongside the main transaction (which might be a native JDBC transaction), with the JMS transaction committing right after the main transaction. false Boolean camel.component.jms.transaction-manager The Spring transaction manager to use. The option is a org.springframework.transaction.PlatformTransactionManager type. PlatformTransactionManager camel.component.jms.transaction-name The name of the transaction to use. String camel.component.jms.transaction-timeout The timeout value of the transaction (in seconds), if using transacted mode. -1 Integer camel.component.jms.transfer-exception If enabled and you are using Request Reply messaging (InOut) and an Exchange failed on the consumer side, then the caused Exception will be sent back in the response as a javax.jms.ObjectMessage. If the client is Camel, the returned Exception is rethrown. This allows you to use Camel JMS as a bridge in your routing - for example, using persistent queues to enable robust routing. Notice that if you also have transferExchange enabled, this option takes precedence. The caught exception is required to be serializable. The original Exception on the consumer side can be wrapped in an outer exception such as org.apache.camel.RuntimeCamelException when returned to the producer. Use this with caution as the data is using Java Object serialization and requires the receiver to be able to deserialize the data at Class level, which forces a strong coupling between the producers and consumers. false Boolean camel.component.jms.transfer-exchange You can transfer the exchange over the wire instead of just the body and headers. The following fields are transferred: In body, Out body, Fault body, In headers, Out headers, Fault headers, exchange properties, exchange exception. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. You must enable this option on both the producer and consumer side, so Camel knows the payload is an Exchange and not a regular payload. Use this with caution as the data is using Java Object serialization and requires the receiver to be able to deserialize the data at Class level, which forces a strong coupling between the producers and consumers having to use compatible Camel versions. false Boolean camel.component.jms.use-message-i-d-as-correlation-i-d Specifies whether JMSMessageID should always be used as JMSCorrelationID for InOut messages.
false Boolean camel.component.jms.username Username to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory. String camel.component.jms.wait-for-provision-correlation-to-be-updated-counter Number of times to wait for provisional correlation id to be updated to the actual correlation id when doing request/reply over JMS and when the option useMessageIDAsCorrelationID is enabled. 50 Integer camel.component.jms.wait-for-provision-correlation-to-be-updated-thread-sleeping-time Interval in millis to sleep each time while waiting for provisional correlation id to be updated. The option is a long type. 100 Long | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-jms-starter</artifactId> </dependency>",
"jms:[queue:|topic:]destinationName[?options]",
"jms:FOO.BAR",
"jms:queue:FOO.BAR",
"jms:topic:Stocks.Prices",
"jms:destinationType:destinationName",
"from(\"jms:queue:foo\"). to(\"bean:myBusinessLogic\");",
"from(\"jms:topic:OrdersTopic\"). filter().method(\"myBean\", \"isGoldCustomer\"). to(\"jms:queue:BigSpendersQueue\");",
"from(\"file://orders\"). convertBodyTo(String.class). to(\"jms:topic:OrdersTopic\");",
"<route> <from uri=\"jms:topic:OrdersTopic\"/> <filter> <method ref=\"myBean\" method=\"isGoldCustomer\"/> <to uri=\"jms:queue:BigSpendersQueue\"/> </filter> </route>",
"// setup error handler to use JMS as queue and store the entire Exchange errorHandler(deadLetterChannel(\"jms:queue:dead?transferExchange=true\"));",
"from(\"jms:queue:dead\").to(\"bean:myErrorAnalyzer\"); // and in our bean String body = exchange.getIn().getBody(); Exception cause = exchange.getProperty(Exchange.EXCEPTION_CAUGHT, Exception.class); // the cause message is String problem = cause.getMessage();",
"// we sent it to a seda dead queue first errorHandler(deadLetterChannel(\"seda:dead\")); // and on the seda dead queue we can do the custom transformation before its sent to the JMS queue from(\"seda:dead\").transform(exceptionMessage()).to(\"jms:queue:dead\");",
"from(\"file://inbox/order\").to(\"jms:queue:order?messageConverter=#myMessageConverter\");",
"from(\"file://inbox/order\").to(\"jms:queue:order?jmsMessageType=Text\");",
"from(\"file://inbox/order\").setHeader(\"CamelJmsMessageType\", JmsMessageType.Text).to(\"jms:queue:order\");",
"2008-07-09 06:43:04,046 [main ] DEBUG JmsBinding - Ignoring non primitive header: order of class: org.apache.camel.component.jms.issues.DummyOrder with value: DummyOrder{orderId=333, itemId=4444, quantity=2}",
"from(\"activemq:queue:in\") .to(\"bean:validateOrder\") .to(ExchangePattern.InOnly, \"activemq:topic:order\") .to(\"bean:handleOrder\");",
"from(\"file://inbox\") .to(\"bean:computeDestination\") .to(\"activemq:queue:dummy\");",
"public void setJmsHeader(Exchange exchange) { String id = . exchange.getIn().setHeader(\"CamelJmsDestinationName\", \"order:\" + id\"); }",
"<bean id=\"weblogic\" class=\"org.apache.camel.component.jms.JmsComponent\"> <property name=\"connectionFactory\" ref=\"myConnectionFactory\"/> </bean> <jee:jndi-lookup id=\"myConnectionFactory\" jndi-name=\"jms/connectionFactory\"/>",
"from(\"jms:SomeQueue?concurrentConsumers=20\"). bean(MyClass.class);",
"from(\"jms:SomeQueue?concurrentConsumers=20&asyncConsumer=true\"). bean(MyClass.class);",
"from(xxx) .inOut().to(\"activemq:queue:foo?replyToConcurrentConsumers=5\") .to(yyy) .to(zzz);",
"from(xxx) .inOut().to(\"activemq:queue:foo?replyTo=bar\") .to(yyy)",
"from(xxx) .inOut().to(\"activemq:queue:foo?replyTo=bar&receiveTimeout=250\") .to(yyy)",
"from(xxx) .inOut().to(\"activemq:queue:foo?replyTo=bar&replyToType=Exclusive\") .to(yyy)",
"from(xxx) .inOut().to(\"activemq:queue:foo?replyTo=bar&replyToType=Exclusive\") .to(yyy) from(aaa) .inOut().to(\"activemq:queue:order?replyTo=order.reply&replyToType=Exclusive\") .to(bbb)",
"from(\"direct:someWhere\") .to(\"jms:queue:foo?replyTo=bar&requestTimeout=30s\") .to(\"bean:processReply\");",
"from(\"direct:someWhere\") .setHeader(\"CamelJmsRequestTimeout\", method(ServiceBean.class, \"whatIsTheTimeout\")) .to(\"jms:queue:foo?replyTo=bar&requestTimeout=30s\") .to(\"bean:processReply\");",
"Destination replyDestination = exchange.getIn().getHeader(JmsConstants.JMS_REPLY_DESTINATION, Destination.class);",
"// we need to pass in the JMS component, and in this sample we use ActiveMQ JmsEndpoint endpoint = JmsEndpoint.newInstance(replyDestination, activeMQComponent); // now we have the endpoint we can use regular Camel API to send a message to it template.sendBody(endpoint, \"Here is the late reply.\");",
"// we pretend to send it to some non existing dummy queue template.send(\"activemq:queue:dummy, new Processor() { public void process(Exchange exchange) throws Exception { // and here we override the destination with the ReplyTo destination object so the message is sent to there instead of dummy exchange.getIn().setHeader(JmsConstants.JMS_DESTINATION, replyDestination); exchange.getIn().setBody(\"Here is the late reply.\"); } }",
"template.send(\"activemq:queue:foo?preserveMessageQos=true\", new Processor() { public void process(Exchange exchange) throws Exception { exchange.getIn().setBody(\"World\"); exchange.getIn().setHeader(\"JMSReplyTo\", \"bar\"); } });",
"// .setHeader(\"CamelJmsDestinationName\", constant(\"queue:///MY_QUEUE?targetClient=1\")) .to(\"wmq:queue:MY_QUEUE?useMessageIDAsCorrelationID=true\");",
"com.ibm.msg.client.jms.DetailedJMSException: JMSCC0005: The specified value 'MY_QUEUE?targetClient=1' is not allowed for 'XMSC_DESTINATION_NAME'",
"JmsComponent wmq = new JmsComponent(connectionFactory); wmq.setDestinationResolver(new DestinationResolver() { public Destination resolveDestinationName(Session session, String destinationName, boolean pubSubDomain) throws JMSException { MQQueueSession wmqSession = (MQQueueSession) session; return wmqSession.createQueue(\"queue:///\" + destinationName + \"?targetClient=1\"); } });"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-jms-component-starter |
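A short usage sketch may help tie the component options above to a running application. The route below is illustrative only and is not taken from the reference above: the queue names (orders, stock, orders.reply) and the bean names are assumptions, and it presumes a ConnectionFactory has already been configured or auto-discovered as described for camel.component.jms.allow-auto-wired-connection-factory. The endpoint URI options used (concurrentConsumers, transacted, replyTo, requestTimeout) mirror the component-level properties documented above, which could equally be set once for the whole component in application.properties (for example camel.component.jms.concurrent-consumers=5).

// Hypothetical example route; names and values are placeholders, not part of the reference above.
import org.apache.camel.builder.RouteBuilder;
import org.springframework.stereotype.Component;

@Component
public class JmsExampleRoutes extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        // Consume from a queue with several concurrent, transacted consumers
        // (see concurrentConsumers and transacted above).
        from("jms:queue:orders?concurrentConsumers=5&transacted=true")
            .to("bean:orderService");

        // Request/reply (InOut) over JMS using a fixed reply queue and a request timeout
        // (see replyTo, replyToType and requestTimeout above).
        from("direct:checkStock")
            .to("jms:queue:stock?replyTo=orders.reply&requestTimeout=20000")
            .to("bean:stockReplyHandler");
    }
}

The same effect can be achieved without URI options by setting the corresponding camel.component.jms.* properties shown above, in which case they apply to every JMS endpoint created by the component.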
Chapter 9. Performing advanced builds | Chapter 9. Performing advanced builds The following sections provide instructions for advanced build operations including setting build resources and maximum duration, assigning builds to nodes, chaining builds, build pruning, and build run policies. 9.1. Setting build resources By default, builds are completed by pods using unbound resources, such as memory and CPU. These resources can be limited. Procedure You can limit resource use in two ways: Limit resource use by specifying resource limits in the default container limits of a project. Limit resource use by specifying resource limits as part of the build configuration. In the following example, each of the resources , cpu , and memory parameters is optional: apiVersion: "v1" kind: "BuildConfig" metadata: name: "sample-build" spec: resources: limits: cpu: "100m" 1 memory: "256Mi" 2 1 cpu is in CPU units: 100m represents 0.1 CPU units (100 * 1e-3). 2 memory is in bytes: 256Mi represents 268435456 bytes (256 * 2 ^ 20). However, if a quota has been defined for your project, one of the following two items is required: A resources section set with an explicit requests : resources: requests: 1 cpu: "100m" memory: "256Mi" 1 The requests object contains the list of resources that correspond to the list of resources in the quota. A limit range defined in your project, where the defaults from the LimitRange object apply to pods created during the build process. Otherwise, build pod creation will fail, citing a failure to satisfy quota. 9.2. Setting maximum duration When defining a BuildConfig object, you can define its maximum duration by setting the completionDeadlineSeconds field. It is specified in seconds and is not set by default. When not set, there is no maximum duration enforced. The maximum duration is counted from the time when a build pod gets scheduled in the system, and defines how long it can be active, including the time needed to pull the builder image. After reaching the specified timeout, the build is terminated by OpenShift Container Platform. Procedure To set maximum duration, specify completionDeadlineSeconds in your BuildConfig . The following example shows the part of a BuildConfig that specifies the completionDeadlineSeconds field for 30 minutes: spec: completionDeadlineSeconds: 1800 Note This setting is not supported with the Pipeline Strategy option. 9.3. Assigning builds to specific nodes Builds can be targeted to run on specific nodes by specifying labels in the nodeSelector field of a build configuration. The nodeSelector value is a set of key-value pairs that are matched to Node labels when scheduling the build pod. The nodeSelector value can also be controlled by cluster-wide default and override values. Defaults will only be applied if the build configuration does not define any key-value pairs for the nodeSelector and also does not define an explicitly empty map value of nodeSelector:{} . Override values will replace values in the build configuration on a key-by-key basis. Note If the specified NodeSelector cannot be matched to a node with those labels, the build stays in the Pending state indefinitely. Procedure Assign builds to run on specific nodes by assigning labels in the nodeSelector field of the BuildConfig , for example: apiVersion: "v1" kind: "BuildConfig" metadata: name: "sample-build" spec: nodeSelector: 1 key1: value1 key2: value2 1 Builds associated with this build configuration will run only on nodes with the key1=value1 and key2=value2 labels. 9.4.
Chained builds For compiled languages such as Go, C, C++, and Java, including the dependencies necessary for compilation in the application image might increase the size of the image or introduce vulnerabilities that can be exploited. To avoid these problems, two builds can be chained together: one build that produces the compiled artifact, and a second build that places that artifact in a separate image that runs the artifact. In the following example, a source-to-image (S2I) build is combined with a docker build to compile an artifact that is then placed in a separate runtime image. Note Although this example chains an S2I build and a docker build, the first build can use any strategy that produces an image containing the desired artifacts, and the second build can use any strategy that can consume input content from an image. The first build takes the application source and produces an image containing a WAR file. The image is pushed to the artifact-image image stream. The path of the output artifact depends on the assemble script of the S2I builder used. In this case, it is output to /wildfly/standalone/deployments/ROOT.war . apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: artifact-build spec: output: to: kind: ImageStreamTag name: artifact-image:latest source: git: uri: https://github.com/openshift/openshift-jee-sample.git ref: "master" strategy: sourceStrategy: from: kind: ImageStreamTag name: wildfly:10.1 namespace: openshift The second build uses image source with a path to the WAR file inside the output image from the first build. An inline dockerfile copies that WAR file into a runtime image. apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: image-build spec: output: to: kind: ImageStreamTag name: image-build:latest source: dockerfile: |- FROM jee-runtime:latest COPY ROOT.war /deployments/ROOT.war images: - from: 1 kind: ImageStreamTag name: artifact-image:latest paths: 2 - sourcePath: /wildfly/standalone/deployments/ROOT.war destinationDir: "." strategy: dockerStrategy: from: 3 kind: ImageStreamTag name: jee-runtime:latest triggers: - imageChange: {} type: ImageChange 1 from specifies that the docker build should include the output of the image from the artifact-image image stream, which was the target of the previous build. 2 paths specifies which paths from the target image to include in the current docker build. 3 The runtime image is used as the source image for the docker build. The result of this setup is that the output image of the second build does not have to contain any of the build tools that are needed to create the WAR file. Also, because the second build contains an image change trigger, whenever the first build is run and produces a new image with the binary artifact, the second build is automatically triggered to produce a runtime image that contains that artifact. Therefore, both builds behave as a single build with two stages. 9.5. Pruning builds By default, builds that have completed their lifecycle are persisted indefinitely. You can limit the number of builds that are retained. Procedure Limit the number of builds that are retained by supplying a positive integer value for successfulBuildsHistoryLimit or failedBuildsHistoryLimit in your BuildConfig , for example: apiVersion: "v1" kind: "BuildConfig" metadata: name: "sample-build" spec: successfulBuildsHistoryLimit: 2 1 failedBuildsHistoryLimit: 2 2 1 successfulBuildsHistoryLimit will retain up to two builds with a status of completed .
2 failedBuildsHistoryLimit will retain up to two builds with a status of failed , canceled , or error . Trigger build pruning by one of the following actions: Updating a build configuration. Waiting for a build to complete its lifecycle. Builds are sorted by their creation timestamp with the oldest builds being pruned first. Note Administrators can manually prune builds using the 'oc adm' object pruning command. 9.6. Build run policy The build run policy describes the order in which the builds created from the build configuration should run. This can be done by changing the value of the runPolicy field in the spec section of the Build specification. It is also possible to change the runPolicy value for existing build configurations, by: Changing Parallel to Serial or SerialLatestOnly and triggering a new build from this configuration causes the new build to wait until all parallel builds complete as the serial build can only run alone. Changing Serial to SerialLatestOnly and triggering a new build causes cancellation of all existing builds in queue, except the currently running build and the most recently created build. The newest build runs . | [
"apiVersion: \"v1\" kind: \"BuildConfig\" metadata: name: \"sample-build\" spec: resources: limits: cpu: \"100m\" 1 memory: \"256Mi\" 2",
"resources: requests: 1 cpu: \"100m\" memory: \"256Mi\"",
"spec: completionDeadlineSeconds: 1800",
"apiVersion: \"v1\" kind: \"BuildConfig\" metadata: name: \"sample-build\" spec: nodeSelector: 1 key1: value1 key2: value2",
"apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: artifact-build spec: output: to: kind: ImageStreamTag name: artifact-image:latest source: git: uri: https://github.com/openshift/openshift-jee-sample.git ref: \"master\" strategy: sourceStrategy: from: kind: ImageStreamTag name: wildfly:10.1 namespace: openshift",
"apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: image-build spec: output: to: kind: ImageStreamTag name: image-build:latest source: dockerfile: |- FROM jee-runtime:latest COPY ROOT.war /deployments/ROOT.war images: - from: 1 kind: ImageStreamTag name: artifact-image:latest paths: 2 - sourcePath: /wildfly/standalone/deployments/ROOT.war destinationDir: \".\" strategy: dockerStrategy: from: 3 kind: ImageStreamTag name: jee-runtime:latest triggers: - imageChange: {} type: ImageChange",
"apiVersion: \"v1\" kind: \"BuildConfig\" metadata: name: \"sample-build\" spec: successfulBuildsHistoryLimit: 2 1 failedBuildsHistoryLimit: 2 2"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/builds_using_buildconfig/advanced-build-operations |
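The fields described in sections 9.1 through 9.6 can be combined in a single build configuration. The manifest below is a sketch for illustration only: the metadata name, output image stream tag, and the resource values are assumptions, while the field names and the Git repository, builder image, and node labels are reused from the examples above.

apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: sample-advanced-build        # assumed name
spec:
  runPolicy: Serial                  # builds from this configuration run one at a time (section 9.6)
  completionDeadlineSeconds: 1800    # terminate the build after 30 minutes (section 9.2)
  successfulBuildsHistoryLimit: 2    # prune all but the two most recent completed builds (section 9.5)
  failedBuildsHistoryLimit: 2
  nodeSelector:                      # schedule the build pod only on matching nodes (section 9.3)
    key1: value1                     # labels reused from the example in section 9.3
    key2: value2
  resources:
    limits:
      cpu: "100m"                    # limits as in section 9.1; add an explicit requests section if a quota is defined
      memory: "256Mi"
  source:
    git:
      uri: https://github.com/openshift/openshift-jee-sample.git   # example repository from section 9.4
  strategy:
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: wildfly:10.1
        namespace: openshift
  output:
    to:
      kind: ImageStreamTag
      name: sample-advanced-build:latest   # assumed output tag

Applied with oc apply -f, such a configuration behaves exactly like the individual examples above, only merged into one object.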
21.9. virt-inspector: Inspecting Guest Virtual Machines | 21.9. virt-inspector: Inspecting Guest Virtual Machines This section provides information about inspecting guest virtual machines. 21.9.1. Introduction virt-inspector is a tool for inspecting a disk image to find out what operating system it contains. 21.9.2. Installation To install virt-inspector and the documentation, enter the following command: The documentation, including example XML output and a Relax-NG schema for the output, will be installed in /usr/share/doc/libguestfs-devel-*/ where * is replaced by the version number of libguestfs . 21.9.3. Running virt-inspector You can run virt-inspector against any disk image or libvirt guest virtual machine as shown in the following example: Or as shown here: The result will be an XML report ( report.xml ). The main components of the XML file are a top-level <operatingsystems> element usually containing a single <operatingsystem> element, similar to the following: Processing these reports is best done using W3C standard XPath queries. Red Hat Enterprise Linux 7 comes with the xpath command-line program, which can be used for simple instances. However, for long-term and advanced usage, you should consider using an XPath library along with your favorite programming language. As an example, you can list out all file system devices using the following XPath query: Or list the names of all applications installed by entering: | [
"yum install libguestfs-tools",
"virt-inspector -a disk.img > report.xml",
"virt-inspector -d GuestName > report.xml",
"<operatingsystems> <operatingsystem> <!-- the type of operating system and Linux distribution --> <name>linux</name> <distro>rhel</distro> <!-- the name, version and architecture --> <product_name>Red Hat Enterprise Linux Server release 6.4 </product_name> <major_version>6</major_version> <minor_version>4</minor_version> <package_format>rpm</package_format> <package_management>yum</package_management> <root>/dev/VolGroup/lv_root</root> <!-- how the filesystems would be mounted when live --> <mountpoints> <mountpoint dev=\"/dev/VolGroup/lv_root\">/</mountpoint> <mountpoint dev=\"/dev/sda1\">/boot</mountpoint> <mountpoint dev=\"/dev/VolGroup/lv_swap\">swap</mountpoint> </mountpoints> < !-- filesystems--> <filesystem dev=\"/dev/VolGroup/lv_root\"> <label></label> <uuid>b24d9161-5613-4ab8-8649-f27a8a8068d3</uuid> <type>ext4</type> <content>linux-root</content> <spec>/dev/mapper/VolGroup-lv_root</spec> </filesystem> <filesystem dev=\"/dev/VolGroup/lv_swap\"> <type>swap</type> <spec>/dev/mapper/VolGroup-lv_swap</spec> </filesystem> <!-- packages installed --> <applications> <application> <name>firefox</name> <version>3.5.5</version> <release>1.fc12</release> </application> </applications> </operatingsystem> </operatingsystems>",
"virt-inspector GuestName | xpath //filesystem/@dev Found 3 nodes: -- NODE -- dev=\"/dev/sda1\" -- NODE -- dev=\"/dev/vg_f12x64/lv_root\" -- NODE -- dev=\"/dev/vg_f12x64/lv_swap\"",
"virt-inspector GuestName | xpath //application/name [...long list...]"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-guest_virtual_machine_disk_access_with_offline_tools-virt_inspector_inspecting_guest_virtual_machines |
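As a worked example of the XPath processing described above, the following commands show how individual fields can be pulled out of a saved report with the xpath utility. They follow the invocation style used in the examples above; the guest name RHEL7Guest is an assumption, and the element names come from the sample report shown in the section.

# Generate the report once for a libvirt guest (guest name is a placeholder).
virt-inspector -d RHEL7Guest > report.xml

# Query the saved report by piping it to xpath, as in the examples above.
cat report.xml | xpath '//operatingsystem/product_name'
cat report.xml | xpath '//operatingsystem/distro'
cat report.xml | xpath '//mountpoint/@dev'
cat report.xml | xpath '//application/name'

For anything more involved than these one-line queries, an XPath library in a scripting language is the better fit, as the section notes.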
Chapter 13. ImageContentSourcePolicy [operator.openshift.io/v1alpha1] | Chapter 13. ImageContentSourcePolicy [operator.openshift.io/v1alpha1] Description ImageContentSourcePolicy holds cluster-wide information about how to handle registry mirror rules. When multiple policies are defined, the outcome of the behavior is defined on each field. Compatibility level 4: No compatibility is provided, the API can change at any point for any reason. These capabilities should not be used by applications needing long term support. Type object Required spec 13.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration 13.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description repositoryDigestMirrors array repositoryDigestMirrors allows images referenced by image digests in pods to be pulled from alternative mirrored repository locations. The image pull specification provided to the pod will be compared to the source locations described in RepositoryDigestMirrors and the image may be pulled down from any of the mirrors in the list instead of the specified repository, allowing administrators to choose a potentially faster mirror. Only image pull specifications that have an image digest will have this behavior applied to them - tags will continue to be pulled from the specified repository in the pull spec. Each "source" repository is treated independently; configurations for different "source" repositories don't interact. When multiple policies are defined for the same "source" repository, the sets of defined mirrors will be merged together, preserving the relative order of the mirrors, if possible. For example, if policy A has mirrors a, b, c and policy B has mirrors c, d, e , the mirrors will be used in the order a, b, c, d, e . If the orders of mirror entries conflict (e.g. a, b vs. b, a ) the configuration is not rejected but the resulting order is unspecified. repositoryDigestMirrors[] object RepositoryDigestMirrors holds cluster-wide information about how to handle mirrors in the registries config. Note: the mirrors only work when pulling the images that are referenced by their digests. 13.1.2. .spec.repositoryDigestMirrors Description repositoryDigestMirrors allows images referenced by image digests in pods to be pulled from alternative mirrored repository locations. The image pull specification provided to the pod will be compared to the source locations described in RepositoryDigestMirrors and the image may be pulled down from any of the mirrors in the list instead of the specified repository, allowing administrators to choose a potentially faster mirror.
Only image pull specifications that have an image digest will have this behavior applied to them - tags will continue to be pulled from the specified repository in the pull spec. Each "source" repository is treated independently; configurations for different "source" repositories don't interact. When multiple policies are defined for the same "source" repository, the sets of defined mirrors will be merged together, preserving the relative order of the mirrors, if possible. For example, if policy A has mirrors a, b, c and policy B has mirrors c, d, e , the mirrors will be used in the order a, b, c, d, e . If the orders of mirror entries conflict (e.g. a, b vs. b, a ) the configuration is not rejected but the resulting order is unspecified. Type array 13.1.3. .spec.repositoryDigestMirrors[] Description RepositoryDigestMirrors holds cluster-wide information about how to handle mirrors in the registries config. Note: the mirrors only work when pulling the images that are referenced by their digests. Type object Required source Property Type Description mirrors array (string) mirrors is one or more repositories that may also contain the same images. The order of mirrors in this list is treated as the user's desired priority, while source is by default considered lower priority than all mirrors. Other cluster configuration, including (but not limited to) other repositoryDigestMirrors objects, may impact the exact order mirrors are contacted in, or some mirrors may be contacted in parallel, so this should be considered a preference rather than a guarantee of ordering. source string source is the repository that users refer to, e.g. in image pull specifications. 13.2. API endpoints The following API endpoints are available: /apis/operator.openshift.io/v1alpha1/imagecontentsourcepolicies DELETE : delete collection of ImageContentSourcePolicy GET : list objects of kind ImageContentSourcePolicy POST : create an ImageContentSourcePolicy /apis/operator.openshift.io/v1alpha1/imagecontentsourcepolicies/{name} DELETE : delete an ImageContentSourcePolicy GET : read the specified ImageContentSourcePolicy PATCH : partially update the specified ImageContentSourcePolicy PUT : replace the specified ImageContentSourcePolicy /apis/operator.openshift.io/v1alpha1/imagecontentsourcepolicies/{name}/status GET : read status of the specified ImageContentSourcePolicy PATCH : partially update status of the specified ImageContentSourcePolicy PUT : replace status of the specified ImageContentSourcePolicy 13.2.1. /apis/operator.openshift.io/v1alpha1/imagecontentsourcepolicies Table 13.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of ImageContentSourcePolicy Table 13.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server.
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. 
Table 13.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ImageContentSourcePolicy Table 13.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. 
resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 13.5. HTTP responses HTTP code Reponse body 200 - OK ImageContentSourcePolicyList schema 401 - Unauthorized Empty HTTP method POST Description create an ImageContentSourcePolicy Table 13.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.7. Body parameters Parameter Type Description body ImageContentSourcePolicy schema Table 13.8. HTTP responses HTTP code Reponse body 200 - OK ImageContentSourcePolicy schema 201 - Created ImageContentSourcePolicy schema 202 - Accepted ImageContentSourcePolicy schema 401 - Unauthorized Empty 13.2.2. /apis/operator.openshift.io/v1alpha1/imagecontentsourcepolicies/{name} Table 13.9. Global path parameters Parameter Type Description name string name of the ImageContentSourcePolicy Table 13.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an ImageContentSourcePolicy Table 13.11. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 13.12. Body parameters Parameter Type Description body DeleteOptions schema Table 13.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ImageContentSourcePolicy Table 13.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 13.15. HTTP responses HTTP code Reponse body 200 - OK ImageContentSourcePolicy schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ImageContentSourcePolicy Table 13.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.17. Body parameters Parameter Type Description body Patch schema Table 13.18. HTTP responses HTTP code Reponse body 200 - OK ImageContentSourcePolicy schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ImageContentSourcePolicy Table 13.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.20. Body parameters Parameter Type Description body ImageContentSourcePolicy schema Table 13.21. HTTP responses HTTP code Reponse body 200 - OK ImageContentSourcePolicy schema 201 - Created ImageContentSourcePolicy schema 401 - Unauthorized Empty 13.2.3. /apis/operator.openshift.io/v1alpha1/imagecontentsourcepolicies/{name}/status Table 13.22. Global path parameters Parameter Type Description name string name of the ImageContentSourcePolicy Table 13.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified ImageContentSourcePolicy Table 13.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 13.25. 
HTTP responses HTTP code Reponse body 200 - OK ImageContentSourcePolicy schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ImageContentSourcePolicy Table 13.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.27. Body parameters Parameter Type Description body Patch schema Table 13.28. HTTP responses HTTP code Reponse body 200 - OK ImageContentSourcePolicy schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ImageContentSourcePolicy Table 13.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.30. Body parameters Parameter Type Description body ImageContentSourcePolicy schema Table 13.31. HTTP responses HTTP code Response body 200 - OK ImageContentSourcePolicy schema 201 - Created ImageContentSourcePolicy schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/operator_apis/imagecontentsourcepolicy-operator-openshift-io-v1alpha1
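As a worked illustration of the list, read, and status endpoints documented above, the following shell sketch retrieves ImageContentSourcePolicy objects over the REST paths shown in this section. It assumes an active `oc` login with permission to read these cluster-scoped resources; the policy name `example-icsp` and the `limit=50` value are placeholders, not values taken from the reference.

```bash
# List ImageContentSourcePolicy objects in chunks, using the limit and
# continue query parameters described above.
TOKEN="$(oc whoami -t)"
API="$(oc whoami --show-server)"

# -k skips TLS verification; use a proper CA bundle outside of a test cluster.
curl -sk -H "Authorization: Bearer ${TOKEN}" \
  "${API}/apis/operator.openshift.io/v1alpha1/imagecontentsourcepolicies?limit=50"
# If the list metadata contains a continue token, pass it back for the next chunk:
#   .../imagecontentsourcepolicies?limit=50&continue=<token-from-previous-response>

# Read a single object and its status subresource by name (sections 13.2.2 and 13.2.3).
curl -sk -H "Authorization: Bearer ${TOKEN}" \
  "${API}/apis/operator.openshift.io/v1alpha1/imagecontentsourcepolicies/example-icsp"
curl -sk -H "Authorization: Bearer ${TOKEN}" \
  "${API}/apis/operator.openshift.io/v1alpha1/imagecontentsourcepolicies/example-icsp/status"
```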
Preface | Preface Learn how to use both the OpenShift command-line interface and web console to install Red Hat OpenShift AI Self-Managed on your OpenShift cluster. To uninstall the product, learn how to use the recommended command-line interface (CLI) method. Note Red Hat does not support installing more than one instance of OpenShift AI on your cluster. Red Hat does not support installing the Red Hat OpenShift AI Operator on the same cluster as the Red Hat OpenShift AI Add-on. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/installing_and_uninstalling_openshift_ai_self-managed/pr01 |
23.4. Stacking I/O Parameters | 23.4. Stacking I/O Parameters All layers of the Linux I/O stack have been engineered to propagate the various I/O parameters up the stack. When a layer consumes an attribute or aggregates many devices, the layer must expose appropriate I/O parameters so that upper-layer devices or tools will have an accurate view of the storage as it transformed. Some practical examples are: Only one layer in the I/O stack should adjust for a non-zero alignment_offset ; once a layer adjusts accordingly, it will export a device with an alignment_offset of zero. A striped Device Mapper (DM) device created with LVM must export a minimum_io_size and optimal_io_size relative to the stripe count (number of disks) and user-provided chunk size. In Red Hat Enterprise Linux 7, Device Mapper and Software Raid (MD) device drivers can be used to arbitrarily combine devices with different I/O parameters. The kernel's block layer will attempt to reasonably combine the I/O parameters of the individual devices. The kernel will not prevent combining heterogeneous devices; however, be aware of the risks associated with doing so. For instance, a 512-byte device and a 4K device may be combined into a single logical DM device, which would have a logical_block_size of 4K. File systems layered on such a hybrid device assume that 4K will be written atomically, but in reality it will span 8 logical block addresses when issued to the 512-byte device. Using a 4K logical_block_size for the higher-level DM device increases potential for a partial write to the 512-byte device if there is a system crash. If combining the I/O parameters of multiple devices results in a conflict, the block layer may issue a warning that the device is susceptible to partial writes and/or is misaligned. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/iolimitstacking |
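The exported I/O parameters described above can be inspected directly from sysfs, which is a quick way to confirm what a stacked device actually advertises. The sketch below is illustrative only; the device names `sdb` and `dm-2` are placeholders for a member disk and the Device Mapper device stacked on top of it.

```bash
# Print the I/O parameters each layer exports. Replace sdb and dm-2 with a
# member disk and the DM (or MD) device built on top of it.
for dev in sdb dm-2; do
    echo "== ${dev} =="
    echo "alignment_offset:   $(cat /sys/block/${dev}/alignment_offset)"
    echo "logical_block_size: $(cat /sys/block/${dev}/queue/logical_block_size)"
    echo "minimum_io_size:    $(cat /sys/block/${dev}/queue/minimum_io_size)"
    echo "optimal_io_size:    $(cat /sys/block/${dev}/queue/optimal_io_size)"
done

# lsblk can print the same topology for the whole stack in one table.
lsblk --topology
```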
Jenkins | Jenkins OpenShift Container Platform 4.12 Jenkins Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/jenkins/index |
Chapter 27. Managing servers | Chapter 27. Managing servers Note For step by step instructions on how to publish a Camel project to Red Hat Fuse, see Chapter 28, Publishing Fuse Integration Projects to a Server . 27.1. Adding a Server Overview For the tooling to manage a server, you need to add the server to the Servers list. Once added, the server appears in the Servers view, where you can connect to it and publish your Fuse Integration projects. Note If adding a Red Hat Fuse server, it is recommended that you edit its installDir /etc/users.properties file and add user information, in the form of user=password,role , to enable the tooling to establish an SSH connection to the server. Procedure There are three ways to add a new server to the Servers view: In the Servers view, click No servers are available. Click this link to create a new server... . Note This link appears in the Servers view only when no server has been defined. If you defined and added a server when you first created your project, the Servers view displays that server. In the Servers view, right-click to open the context menu and select New Server . On the menu bar, select File New Other Server Server . In the Define a New Server dialog, to add a new server: Expand the Red Hat JBoss Middleware node to expose the list of available server options: Click the server that you want to add. In the Server's host name field, accept the default ( localhost ). Note The address of localhost is 0.0.0.0 . In the Server name field, accept the default, or enter a different name for the runtime server. For Server runtime environment , accept the default or click Add to open the server's runtime definition page: Note If the server is not already installed on your machine, you can install it now by clicking Download and install runtime... and following the site's download instructions. Depending on the site, you might be required to provide valid credentials before you can continue the download process. Accept the default for the installation Name . In the Home Directory field, enter the path where the server runtime is installed, or click Browse to find and select it. to Execution Environment , select the runtime JRE from the drop-down menu. If the version you want does not appear in the list, click Environments and select the version from the list that appears. The JRE version you select must be installed on your machine. Note See Red Hat Fuse Supported Configurations for the required Java version. Leave the Alternate JRE option as is. Click to save the server's runtime definition and open its Configuration details page: Accept the default for SSH Port ( 8101 ). The runtime uses the SSH port to connect to the server's Karaf shell. If this default is incorrect for your setup, you can discover the correct port number by looking in the server's installDir /etc/org.apache.karaf.shell.cfg file. In the User Name field, enter the name used to log into the server. For Red Hat Fuse, this is a user name stored in the Red Hat Fuse installDir /etc/users.properties file. Note If the default user has been activated (uncommented) in the /etc/users.properties file, the tooling autofills the User Name and Password fields with the default user's name and password, as shown in [servCnfigDetails] . 
If a user has not been set up, you can either add one to that file by using the format user=password,role (for example, joe=secret,Administrator ), or you can set one using the karaf jaas command set: jaas:realms - to list the realms jaas:manage --index 1 - to edit the first (server) realm jaas:useradd <username> <password> - to add a user and associated password jaas:roleadd <username> Administrator - to specify the new user's role jaas:update - to update the realm with the new user information If a jaas realm has already been selected for the server, you can discover the user name by issuing the command JBossFuse:karaf@root> jaas:users . In the Password field, enter the password required for User Name to log into the server. Click Finish to save the server's configuration details. The server runtime appears in the Servers view. Expanding the server node exposes the server's JMX node: 27.2. Starting a Server Overview When you start a configured server, the tooling opens the server's remote management console in the Terminal view. This allows you to easily manage the container while testing your application. Procedure To start a server: In the Servers view, select the server you want to start. Click . The Console view opens and displays a message asking you to wait while the container is starting, for example: Note If you did not properly configure the user name and password for opening the remote console, a dialog opens asking you to enter the proper credentials. See Section 27.1, "Adding a Server" . After the container has started up, the Terminal view opens to display the container's management console. The running server appears in the Servers view: The running server also appears in the JMX Navigator view under Server Connections : Note If the server is running on the same machine as the tooling, the server also has an entry under Local Processes . 27.3. Connecting to a Running Server Overview After you start a configured server, it appears in the Servers view and in the JMX Navigator view under the Server Connections node. You may need to expand the Server Connections node to see the server. To publish and test your Fuse project application on the running server, you must first connect to it. You can connect to a running server either in the Servers view or in the JMX Navigator view. Note The Servers view and the JMX Navigator view are synchronized with regards to server connections. That is, connecting to a server in the Servers view also connects it in the JMX Navigator view, and vice versa. Connecting to a running server in the Servers view In the Servers view, expand the server runtime to expose its JMX[Disconnected] node. Double-click the JMX[Disconnected] node: Connecting to a running server in the JMX Navigator view In the JMX Navigator view, under the Server Connections node, select the server to which you want to connect. Double-click the selected server: Viewing bundles installed on the connected server In either the Servers view or the JMX Navigator view, expand the server runtime tree to expose the Bundles node, and select it. The tooling populates the Properties view with a list of bundles that are installed on the server: Using the Properties view's Search tool, you can search for bundles by their Symbolic Name or by their Identifier , if you know it. As you type the symbolic name or the identifier, the list updates, showing only the bundles that match the current search string. 
Note Alternatively, you can issue the osgi:list command in the Terminal view to see a generated list of bundles installed on the Red Hat Fuse server runtime. The tooling uses a different naming scheme for OSGi bundles displayed by the osgi:list command. In the <build> section of project's pom.xml file, you can find the bundle's symbolic name and its bundle name (OSGi) listed in the maven-bundle-plugin entry. For more details, see the section called "Verifying the project was published to the server" . 27.4. Disconnecting from a Server Overview When you are done testing your application, you can disconnect from the server without stopping it. Note The Servers view and the JMX Navigator view are synchronized with regards to server connections. That is, disconnecting from a server in the Servers view also disconnects it in the JMX Navigator view, and vice versa. Disconnecting from a server in the Servers view In the Servers view, expand the server runtime to expose its JMX[Connected] node. Right-click the JMX[Connected] node to open the context menu, and then select Disconnect . Disconnecting from a server in the JMX Navigator view In the JMX Navigator view, under Server Connections , select the server from which you want to disconnect. Right-click the selected server to open the context menu, and then select Disconnect . 27.5. Stopping a Server Overview You can shut down a server in the Servers view or in the server's remote console in the Terminal view. Using the Servers view To stop a server: In the Servers view, select the server you want to stop. Click . Using the remote console To stop a server: Open the Terminal view that is hosting the server's remote console. Press: CTRL + D 27.6. Deleting a Server Overview When you are finished with a configured server, or if you misconfigure a server, you can delete it and its configuration. First, delete the server from the Servers view or from the JMX Navigator view. , delete the server's configuration. Deleting a server In the Servers view, right-click the server you want to delete to open the context menu. Select Delete . Click OK . Deleting the server's configuration On Linux and Windows machines, select Window Preferences . Expand the Server folder, and then select Runtime Environments to open the Server Runtime Environments page. From the list, select the runtime environment of the server that you previously deleted from the Servers view, and then click Remove . Click OK . | null | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/tooling_user_guide/RiderManageServers |
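For reference, the jaas command set listed in the server configuration procedure looks like the following when run in the server's remote console in the Terminal view: list the realms, manage the first (server) realm, add the user, grant the Administrator role, commit the change with jaas:update, and verify with jaas:users. The user name joe and password secret are the same placeholder credentials used in the users.properties example; substitute your own values.

```
JBossFuse:karaf@root> jaas:realms
JBossFuse:karaf@root> jaas:manage --index 1
JBossFuse:karaf@root> jaas:useradd joe secret
JBossFuse:karaf@root> jaas:roleadd joe Administrator
JBossFuse:karaf@root> jaas:update
JBossFuse:karaf@root> jaas:users
```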
5.311. squid | 5.311. squid 5.311.1. RHBA-2012:1290 - squid bug fix update Updated squid packages that fix several bugs are now available for Red Hat Enterprise Linux 6. [Updated 20th September 2012] This advisory has been updated with an accurate description of the "http10" option for BZ#852863. This update does not change the packages in any way. Squid is a high-performance proxy caching server for web clients that supports FTP, Gopher, and HTTP data objects. Bug Fixes BZ# 853053 Due to a bug in the ConnStateData::noteMoreBodySpaceAvailable() function, child processes of squid aborted upon encountering a failed assertion. An upstream patch has been provided to address this issue and squid child processes no longer abort in the described scenario. BZ# 852863 Due to an upstream patch, which renamed the HTTP header controlling persistent connections from "Proxy-Connection" to "Connection", the NTLM pass-through authentication does not work, thus preventing login. This update introduces the new "http10" option to the squid.conf file, which can be used to enable the change in the patch. This option is set to "off" by default. When set to "on", the NTLM pass-through authentication works properly, thus allowing login attempts to succeed. BZ# 852861 When the IPv6 protocol was disabled and squid tried to handle an HTTP GET request containing an IPv6 address, the squid child process terminated due to signal 6. This bug has been fixed and such requests are now handled as expected. BZ# 855330 The old "stale if hit" logic did not account for cases where the stored stale response became fresh due to a successful re-validation with the origin server. Consequently, incorrect warning messages were returned. With this update, squid no longer marks elements as stale in the described scenario, thus fixing this bug. All users of squid are advised to upgrade to these updated packages, which fix these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/squid |
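Based on the description of BZ#852863 above, the new option would be enabled in /etc/squid/squid.conf roughly as shown below. The directive name and its on/off values come from the erratum text; treat the exact syntax as an assumption to check against the squid.conf documentation shipped with the updated package. The default remains off.

```
# /etc/squid/squid.conf
# Assumed syntax for the option introduced by BZ#852863. Enabling it restores
# the pre-patch "Proxy-Connection" behavior so that NTLM pass-through
# authentication can succeed. Default: off.
http10 on
```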
Chapter 13. Volumes | Chapter 13. Volumes 13.1. Creating Volumes This section shows how to create disk volumes inside a block-based storage pool. In the example below, the virsh vol-create-as command creates a storage volume of a specified size in GB within the guest_images_disk storage pool. Because the command is run once per volume, three volumes are created, as shown in the example; a scripted version of the same loop follows the example. | [
"# virsh vol-create-as guest_images_disk volume1 8 G Vol volume1 created # virsh vol-create-as guest_images_disk volume2 8 G Vol volume2 created # virsh vol-create-as guest_images_disk volume3 8 G Vol volume3 created # virsh vol-list guest_images_disk Name Path ----------------------------------------- volume1 /dev/sdb1 volume2 /dev/sdb2 volume3 /dev/sdb3 # parted -s /dev/sdb print Model: ATA ST3500418AS (scsi) Disk /dev/sdb: 500GB Sector size (logical/physical): 512B/512B Partition Table: gpt Number Start End Size File system Name Flags 2 17.4kB 8590MB 8590MB primary 3 8590MB 17.2GB 8590MB primary 1 21.5GB 30.1GB 8590MB primary"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/chap-Virtualization_Administration_Guide-Storage_Volumes |
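Because the example issues the same vol-create-as call once per volume, the loop can be scripted. A minimal sketch using the pool name, volume names, and size from the example above:

```bash
#!/bin/sh
# Create three 8 GB volumes in the guest_images_disk pool, then list them.
for vol in volume1 volume2 volume3; do
    virsh vol-create-as guest_images_disk "${vol}" 8G
done
virsh vol-list guest_images_disk
```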
Chapter 4. Scaling storage of bare metal OpenShift Data Foundation cluster | Chapter 4. Scaling storage of bare metal OpenShift Data Foundation cluster To scale the storage capacity of your configured Red Hat OpenShift Data Foundation worker nodes on your bare metal cluster, you can increase the capacity by adding three disks at a time. Three disks are needed since OpenShift Data Foundation uses a replica count of 3 to maintain the high availability. So the amount of storage consumed is three times the usable space. Note Usable space might vary when encryption is enabled or replica 2 pools are being used. 4.1. Scaling up a cluster created using local storage devices In order to scale up an OpenShift Data Foundation cluster which was created using local storage devices, a new disk needs to be added to the storage node. It is recommended to have the new disks of the same size as used earlier during the deployment as OpenShift Data Foundation does not support heterogeneous disks/OSD's. For deployments having three failure domains, you can scale up the storage by adding disks in the multiple of three, with the same number of disks coming from nodes in each of the failure domains. For example, if we scale by adding six disks, two disks are taken from nodes in each of the three failure domains. If the number of disks is not in multiples of three, it will only consume the disk to the maximum in the multiple of three while the remaining disks remain unused. For deployments having less than three failure domains, there is flexibility in adding the number of disks. In this case, you can add any number of disks. In order to check if flexible scaling is enabled or not, refer to the Knowledgebase article Verify if flexible scaling is enabled . Note Flexible scaling features get enabled at the time of deployment and can not be enabled or disabled later on. Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Disks to be used for scaling are already attached to the storage node LocalVolumeDiscovery and LocalVolumeSet objects are already created. Procedure To add capacity, you can either use a storage class that you provisioned during the deployment or any other storage class that matches the filter. In the OpenShift Web Console, click Operators Installed Operators . Click OpenShift Data Foundation Operator. Click the Storage Systems tab. Click the Action menu (...) to the visible list to extend the options menu. Select Add Capacity from the options menu. Select the Storage Class for which you added disks or the new storage class depending on your requirement. Available Capacity displayed is based on the local disks available in storage class. Click Add . To check the status, navigate to Storage Data Foundation and verify that the Storage System in the Status card has a green tick. Verification steps Verify the Raw Capacity card. In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Block and File tab, check the Raw Capacity card. Note that the capacity increases based on your selections. Note The raw capacity does not take replication into account and shows the full capacity. Verify that the new OSDs and their corresponding new Persistent Volume Claims (PVCs) are created. 
To view the state of the newly created OSDs: Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. To view the state of the PVCs: Click Storage Persistent Volume Claims from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. Identify the nodes where the new OSD pods are running. <OSD-pod-name> Is the name of the OSD pod. For example: Example output: For each of the nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the selected host(s). <node-name> Is the name of the node. Check for the crypt keyword beside the ocs-deviceset names. Important Cluster reduction is supported only with the Red Hat Support Team's assistance. 4.2. Scaling out storage capacity on a bare metal cluster OpenShift Data Foundation is highly scalable. It can be scaled out by adding new nodes with required storage and enough hardware resources in terms of CPU and RAM. There is no limit on the number of nodes which can be added. Howerver, from the technical support perspective, 2000 nodes is the limit for OpenShift Data Foundation. Scaling out storage capacity can be broken down into two steps Adding new node Scaling up the storage capacity Note OpenShift Data Foundation does not support heterogeneous OSD/Disk sizes. 4.2.1. Adding a node You can add nodes to increase the storage capacity when existing worker nodes are already running at their maximum supported OSDs or there are not enough resources to add new OSDs on the existing nodes. It is always recommended to add nodes in the multiple of three, each of them in different failure domains. While we recommend adding nodes in the multiple of three, you still get the flexibility of adding one node at a time in the flexible scaling deployment. Refer to the Knowledgebase article Verify if flexible scaling is enabled . Note OpenShift Data Foundation does not support heterogeneous disk size and types. The new nodes to be added should have the disk of the same type and size which was used during OpenShift Data Foundation deployment. 4.2.1.1. Adding a node to an installer-provisioned infrastructure Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Procedure Navigate to Compute Machine Sets . On the machine set where you want to add nodes, select Edit Machine Count . Add the amount of nodes, and click Save . Click Compute Nodes and confirm if the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node. For the new node, click Action menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . Note It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them. In case of bare metal installer-provisioned infrastructure deployment, you must expand the cluster first. For instructions, see Expanding the cluster . 
Verification steps Execute the following command the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* 4.2.1.2. Adding a node using a local storage device You can add nodes to increase the storage capacity when existing worker nodes are already running at their maximum supported OSDs or when there are not enough resources to add new OSDs on the existing nodes. Add nodes in the multiple of 3, each of them in different failure domains. Though it is recommended to add nodes in multiples of 3 nodes, you have the flexibility to add one node at a time in flexible scaling deployment. See Knowledgebase article Verify if flexible scaling is enabled Note OpenShift Data Foundation does not support heterogeneous disk size and types. The new nodes to be added should have the disk of the same type and size which was used during initial OpenShift Data Foundation deployment. Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Procedure Depending on the type of infrastructure, perform the following steps: Get a new machine with the required infrastructure. See Platform requirements . Create a new OpenShift Container Platform worker node using the new machine. Check for certificate signing requests (CSRs) that are in Pending state. Approve all the required CSRs for the new node. <Certificate_Name> Is the name of the CSR. Click Compute Nodes , confirm if the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From User interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From Command line interface Apply the OpenShift Data Foundation label to the new node. <new_node_name> Is the name of the new node. Click Operators Installed Operators from the OpenShift Web Console. From the Project drop-down list, make sure to select the project where the Local Storage Operator is installed. Click Local Storage . Click the Local Volume Discovery tab. Beside the LocalVolumeDiscovery , click Action menu (...) Edit Local Volume Discovery . In the YAML, add the hostname of the new node in the values field under the node selector. Click Save . Click the Local Volume Sets tab. Beside the LocalVolumeSet , click Action menu (...) Edit Local Volume Set . In the YAML, add the hostname of the new node in the values field under the node selector . Click Save . Note It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them. Verification steps Execute the following command the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* 4.2.2. Scaling up storage capacity To scale up storage capacity, see Scaling up storage by adding capacity . | [
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/ <OSD-pod-name>",
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm",
"NODE compute-1",
"oc debug node/ <node-name>",
"chroot /host",
"lsblk",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1",
"oc get csr",
"oc adm certificate approve <Certificate_Name>",
"oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/scaling_storage/scaling_storage_of_bare_metal_openshift_data_foundation_cluster |
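The local storage procedure above says to add the new node's hostname to the values field under the node selector of the LocalVolumeDiscovery and LocalVolumeSet objects. A hedged sketch of what that YAML edit typically looks like for a LocalVolumeSet follows; the object name local-block, the namespace, and the hostnames are placeholders, and the surrounding fields may differ in your cluster, so change only the values list of your existing object.

```yaml
apiVersion: local.storage.openshift.io/v1alpha1
kind: LocalVolumeSet
metadata:
  name: local-block                 # placeholder: use your existing object's name
  namespace: openshift-local-storage
spec:
  nodeSelector:
    nodeSelectorTerms:
      - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
              - existing-worker-1
              - existing-worker-2
              - new-worker-3        # hostname of the newly added node
```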