title | content | commands | url
---|---|---|---|
5.10. authconfig | 5.10. authconfig 5.10.1. RHBA-2012:0931 - authconfig bug fix and enhancement update Updated authconfig packages that fix multiple bugs and add two enhancements are now available for Red Hat Enterprise Linux 6. The authconfig packages provide a command line utility and a GUI application that can configure a workstation to be a client for certain network user information and authentication schemes, and other user information and authentication-related options. Bug Fixes BZ# 689717 Prior to this update, parsing of SSSD configuration files failed if the files were not correctly formatted. As a consequence, the authconfig utility could abort unexpectedly. With this update, the error is correctly handled, the configuration file is backed up, and a new file is created. BZ# 708850 Prior to this update, the man page "authconfig(8)" referred to obsolete, non-existent configuration files. This update modifies the man page to point to configuration files that are currently modified by authconfig. BZ# 749700 Prior to this update, a deprecated "krb_kdcip" option was set instead of the "krb5_server" option when the SSSD configuration was updated. This update modifies the SSSD configuration setting to use the "krb5_server" option to set the Kerberos KDC server address. BZ# 755975 Prior to this update, the authconfig command always returned the exit value "1" when the "--savebackup" option was used, due to the handling of non-existent configuration files on the system. With this update, the exit value is "0" if the configuration backup succeeds, even if some configuration files that can be handled by authconfig are not present on the system. Enhancements BZ# 731094 Prior to this update, the authconfig utility did not support the SSSD configuration with the IPA backend. This update makes it possible to join the system to an IPAv2 domain via the ipa-client-install command. BZ# 804615 With this update, the nss_sss module is also used in the "services" entry of the nsswitch.conf file when configuring this file. All users of authconfig are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/authconfig |
Chapter 2. The pcsd Web UI | Chapter 2. The pcsd Web UI This chapter provides an overview of configuring a Red Hat High Availability cluster with the pcsd Web UI. 2.1. pcsd Web UI Setup To set up your system to use the pcsd Web UI to configure a cluster, use the following procedure. Install the Pacemaker configuration tools, as described in Section 1.2, "Installing Pacemaker configuration tools" . On each node that will be part of the cluster, use the passwd command to set the password for user hacluster , using the same password on each node. Start and enable the pcsd daemon on each node: On one node of the cluster, authenticate the nodes that will constitute the cluster with the following command. After executing this command, you will be prompted for a Username and a Password . Specify hacluster as the Username . On any system, open a browser to the following URL, specifying one of the nodes you have authorized (note that this uses the https protocol). This brings up the pcsd Web UI login screen. Log in as user hacluster . This brings up the Manage Clusters page as shown in Figure 2.1, "Manage Clusters page" . Figure 2.1. Manage Clusters page | [
"systemctl start pcsd.service systemctl enable pcsd.service",
"pcs cluster auth node1 node2 ... nodeN",
"https:// nodename :2224"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/ch-pcsd-haar |
Installing | Installing Red Hat Enterprise Linux AI 1.4 Installation documentation on various platforms Red Hat RHEL AI Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.4/html/installing/index |
SystemTap Beginners Guide | SystemTap Beginners Guide Red Hat Enterprise Linux 7 Introduction to SystemTap William Cohen Red Hat Software Engineering [email protected] Don Domingo Red Hat Customer Content Services Vladimir Slavik Red Hat Customer Content Services [email protected] Robert Kratky Red Hat Customer Content Services Jacquelynn East Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_beginners_guide/index |
Chapter 5. Network connections | Chapter 5. Network connections 5.1. Connection Options This section describes how to configure connections. The ConnectionOptions object can be provided to a Client instance when creating a new connection and allows configuration of several different aspects of the resulting Connection instance. ConnectionOptions can be passed to the Connect method on IClient and are used to configure the resulting connection. Example: Configuring authentication ConnectionOptions connectionOptions = new ConnectionOptions(); connectionOptions.User = "user"; connectionOptions.Password = "password"; IConnection connection = client.Connect(serverHost, serverPort, connectionOptions); For a definitive list of options, refer to Connection Options 5.1.1. Connection Transport Options The ConnectionOptions object exposes a set of configuration options for the underlying I/O transport layer, known as the TransportOptions , which allows for fine-grained configuration of network-level options. Example: Configuring transport options ConnectionOptions connectionOptions = new ConnectionOptions(); connectionOptions.TransportOptions.TcpNoDelay = false; IConnection connection = client.Connect(serverHost, serverPort, connectionOptions); For a definitive list of options, refer to Connection Transport Options 5.2. Reconnect and failover When creating a new connection, it is possible to configure that connection to perform automatic connection recovery. Example: Configuring transport reconnection and failover ConnectionOptions connectionOptions = new ConnectionOptions(); connectionOptions.ReconnectOptions.ReconnectEnabled = true; connectionOptions.ReconnectOptions.ReconnectDelay = 30_000; connectionOptions.ReconnectOptions.AddReconnectLocation(<hostname>, <port>); IConnection connection = client.Connect(serverHost, serverPort, connectionOptions); For a definitive list of options, refer to Reconnect and failover | [
"ConnectionOptions connectionOptions = new ConnectionOptions(); connectionOptions.User = \"user\" connectionOptions.Password = \"password\" IConnection connection = client.Connect(serverHost, serverPort, connectionOptions);",
"ConnectionOptions connectionOptions = new ConnectionOptions(); connectionOptions.TransportOptions.TcpNoDelay = false; IConnection connection = client.Connect(serverHost, serverPort, connectionOptions);",
"ConnectionOptions connectionOptions = new ConnectionOptions(); connectionOptions.ReconnectOptions.ReconnectEnabled = true; connectionOptions.ReconnectOptions.ReconnectDelay = 30_000; connectionOptions.ReconnectOptions.AddReconnectLocation(<hostname>, <port>); IConnection connection = client.Connect(serverHost, serverPort, connectionOptions);"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_qpid_proton_dotnet/1.0/html/using_qpid_proton_dotnet/network_connections |
Chapter 22. Performance tuning considerations with DRL | Chapter 22. Performance tuning considerations with DRL The following key concepts or suggested practices can help you optimize DRL rules and decision engine performance. These concepts are summarized in this section as a convenience and are explained in more detail in the cross-referenced documentation, where applicable. This section will expand or change as needed with new releases of Red Hat Process Automation Manager. Define the property and value of pattern constraints from left to right In DRL pattern constraints, ensure that the fact property name is on the left side of the operator and that the value (constant or a variable) is on the right side. The property name must always be the key in the index and not the value. For example, write Person( firstName == "John" ) instead of Person( "John" == firstName ) . Defining the constraint property and value from right to left can hinder decision engine performance. For more information about DRL patterns and constraints, see Section 16.8, "Rule conditions in DRL (WHEN)" . Use equality operators more than other operator types in pattern constraints when possible Although the decision engine supports many DRL operator types that you can use to define your business rule logic, the equality operator == is evaluated most efficiently by the decision engine. Whenever practical, use this operator instead of other operator types. For example, the pattern Person( firstName == "John" ) is evaluated more efficiently than Person( firstName != "OtherName" ) . In some cases, using only equality operators might be impractical, so consider all of your business logic needs and options as you use DRL operators. List the most restrictive rule conditions first For rules with multiple conditions, list the conditions from most to least restrictive so that the decision engine can avoid assessing the entire set of conditions if the more restrictive conditions are not met. For example, the following conditions are part of a travel-booking rule that applies a discount to travelers who book both a flight and a hotel together. In this scenario, customers rarely book hotels with flights to receive this discount, so the hotel condition is rarely met and the rule is rarely executed. Therefore, the first condition ordering is more efficient because it prevents the decision engine from evaluating the flight condition frequently and unnecessarily when the hotel condition is not met. Preferred condition order: hotel and flight Inefficient condition order: flight and hotel For more information about DRL patterns and constraints, see Section 16.8, "Rule conditions in DRL (WHEN)" . Avoid iterating over large collections of objects with excessive from clauses Avoid using the from condition element in DRL rules to iterate over large collections of objects, as shown in the following example: Example conditions with from clause In such cases, the decision engine iterates over the large graph every time the rule condition is evaluated and impedes rule evaluation. Alternatively, instead of adding an object with a large graph that the decision engine must iterate over frequently, add the collection directly to the KIE session and then join the collection in the condition, as shown in the following example: Example conditions without from clause In this example, the decision engine iterates over the list only one time and can evaluate rules more efficiently. 
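As a minimal illustration of the second approach, the following Java sketch shows how the facts might be inserted so that the rewritten condition can join the Employee and Company facts directly. The Company and Employee classes (with a getEmployees() accessor matching the $c.employees property used above) are the hypothetical domain types from this example, and the loader class name is illustrative; KieSession.insert() and fireAllRules() are the standard KIE API calls.

import org.kie.api.runtime.KieSession;

public class FactLoader {

    // Insert the Company fact and each Employee fact individually, instead of
    // passing the whole Company graph through a 'from' clause, so that the
    // decision engine can index and join the Employee facts in the condition.
    public static void insertFacts(KieSession kieSession, Company company) {
        kieSession.insert(company);                      // matched by $c: Company()
        for (Employee employee : company.getEmployees()) {
            kieSession.insert(employee);                 // matched by Employee(salary > 100000.00, company == $c)
        }
        kieSession.fireAllRules();
    }
}

This keeps the traversal of the object graph in application code, where it runs once at insertion time, rather than in the rule network, where it would be repeated whenever the condition is evaluated.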
For more information about the from element or other DRL condition elements, see Section 16.8.7, "Supported rule condition elements in DRL (keywords)" . Use decision engine event listeners instead of System.out.println statements in rules for debug logging You can use System.out.println statements in your rule actions for debug logging and console output, but doing this for many rules can impede rule evaluation. As a more efficient alternative, use the built-in decision engine event listeners when possible. If these listeners do not meet your requirements, use a system logging utility supported by the decision engine, such as Logback, Apache Commons Logging, or Apache Log4j. For more information about supported decision engine event listeners and logging utilities, see Decision engine in Red Hat Process Automation Manager . Use the drools-metric module to identify the obstruction in your rules You can use the drools-metric module to identify slow rules, especially when you process many rules. The drools-metric module can also assist in analyzing the decision engine performance. Note that the drools-metric module is not for production environment use. However, you can perform the analysis in your test environment. To analyze the decision engine performance using drools-metric , first add drools-metric to your project dependencies: Example project dependency for drools-metric <dependency> <groupId>org.drools</groupId> <artifactId>drools-metric</artifactId> </dependency> If you want to use drools-metric to enable trace logging, configure a logger for org.drools.metric.util.MetricLogUtils as shown in the following example: Example logback.xml configuration file <configuration> <logger name="org.drools.metric.util.MetricLogUtils" level="trace"/> ... </configuration> Alternatively, you can use drools-metric to expose the data using Micrometer . To expose the data, enable the Micrometer registry of your choice as shown in the following example: Example project dependency for Micrometer <dependency> <groupId>io.micrometer</groupId> <artifactId>micrometer-registry-jmx</artifactId> <!-- Discover more registries at micrometer.io. --> </dependency> Example Java code for Micrometer Metrics.addRegistry(new JmxMeterRegistry(s -> null, Clock.SYSTEM)); Regardless of whether you want to use logging or Micrometer, you need to enable MetricLogUtils by setting the system property drools.metric.logger.enabled to true . Optionally, you can change the microseconds threshold of metric reporting by setting the drools.metric.logger.threshold system property. Note Only node executions exceeding the threshold are reported. The default value is 500 . After configuring drools-metric to use logging, rule execution produces logs as shown in the following example: Example rule execution output This example includes the following key parameters: evalCount is the number of constraint evaluations against inserted facts during the node execution. When evalCount is used with Micrometer, a counter with the data is called org.drools.metric.evaluation.count . elapsedMicro is the elapsed time of the node execution in microseconds. When elapsedMicro is used with Micrometer, look for a timer called org.drools.metric.elapsed.time . If you find an outstanding evalCount or elapsedMicro log, correlate the node name with ReteDumper.dumpAssociatedRulesRete() output to identify the rule associated with the node. Example ReteDumper usage ReteDumper.dumpAssociatedRulesRete(kbase); Example ReteDumper output | [
"when USDh:hotel() // Rarely booked USDf:flight()",
"when USDf:flight() USDh:hotel() // Rarely booked",
"when USDc: Company() USDe : Employee ( salary > 100000.00) from USDc.employees",
"when USDc: Company(); Employee (salary > 100000.00, company == USDc)",
"<dependency> <groupId>org.drools</groupId> <artifactId>drools-metric</artifactId> </dependency>",
"<configuration> <logger name=\"org.drools.metric.util.MetricLogUtils\" level=\"trace\"/> <configuration>",
"<dependency> <groupId>io.micrometer</groupId> <artifactId>micrometer-registry-jmx</artifactId> <!-- Discover more registries at micrometer.io. --> </dependency>",
"Metrics.addRegitry(new JmxMeterRegistry(s -> null, Clock.SYSTEM));",
"TRACE [JoinNode(6) - [ClassObjectType class=com.sample.Order]], evalCount:1000, elapsedMicro:5962 TRACE [JoinNode(7) - [ClassObjectType class=com.sample.Order]], evalCount:100000, elapsedMicro:95553 TRACE [ AccumulateNode(8) ], evalCount:4999500, elapsedMicro:2172836 TRACE [EvalConditionNode(9)]: cond=com.sample.Rule_Collect_expensive_orders_combination930932360Eval1Invoker@ee2a6922], evalCount:49500, elapsedMicro:18787",
"ReteDumper.dumpAssociatedRulesRete(kbase);",
"[ AccumulateNode(8) ] : [Collect expensive orders combination]"
]
| https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_decision_services_in_red_hat_process_automation_manager/performance-tuning-drl-ref_drl-rules |
Chapter 62. Security | Chapter 62. Security certutil does not return the NSS database password requirements in FIPS mode When creating a new Network Security Services (NSS) database with the certutil tool, the user has nowhere to find out what the database password requirements are when running in FIPS mode. The prompt message does not provide password requirements, and certutil returns only a generic error message: (BZ# 1401809 ) systemd-importd runs as init_t The systemd-importd service uses the NoNewPrivileges security flag in the systemd unit file. This blocks the SELinux domain transition from the init_t domain to the systemd_importd_t domain. (BZ# 1365944 ) The SCAP password length requirement is ignored in the kickstart installation The interactive kickstart installation does not enforce the password length check defined by the SCAP rule and accepts shorter root passwords. To work around this problem, use the --strict option with the pwpolicy root command in the kickstart file. (BZ#1372791) rhnsd.pid is writable by group and others In Red Hat Enterprise Linux 7.4, the default permissions of the /var/run/rhnsd.pid file are set to -rw-rw-rw-. This setting is not secure. To work around this problem, change the permissions of this file to be writable only by the owner: (BZ#1480306) | [
"certutil: could not authenticate to token NSS FIPS 140-2 Certificate DB.: SEC_ERROR_IO: An I/O error occurred during security authorization.",
"chmod go-w /var/run/rhnsd.pid"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.4_release_notes/known_issues_security |
Chapter 36. Managing custom tasks in Business Central | Chapter 36. Managing custom tasks in Business Central Custom tasks (work items) are tasks that can run custom logic. You can customize and reuse custom tasks across multiple business processes or across all projects in Business Central. You can also add custom elements in the designer palette, including name, icon, sub-category, input and output parameters, and documentation. Red Hat Process Automation Manager provides a set of custom tasks within the custom task repository in Business Central. You can enable or disable the default custom tasks and upload custom tasks into Business Central to implement the tasks in the relevant processes. Note Red Hat Process Automation Manager includes a limited set of supported custom tasks. Custom tasks that are not included in Red Hat Process Automation Manager are not supported. Procedure In Business Central, click in the upper-right corner and select Custom Tasks Administration . This page lists the custom task installation settings and available custom tasks for processes in projects throughout Business Central. The custom tasks that you enable on this page become available in the project-level settings where you can then install each custom task to be used in processes. The way in which the custom tasks are installed in a project is determined by the global settings that you enable or disable under Settings on this Custom Tasks Administration page. Under Settings , enable or disable each setting to determine how the available custom tasks are implemented when a user installs them at the project level. The following custom task settings are available: Install as Maven artifact : Uploads the custom task JAR file to the Maven repository that is configured with Business Central, if the file is not already present. Install custom task dependencies into project : Adds any custom task dependencies to the pom.xml file of the project where the task is installed. Use version range when installing custom task into project : Uses a version range instead of a fixed version of a custom task that is added as a project dependency. Example: [7.16,) instead of 7.16.0.Final Enable or disable (set to ON or OFF ) any available custom tasks as needed. Custom tasks that you enable are displayed in project-level settings for all projects in Business Central. Figure 36.1. Enable custom tasks and custom task settings To add a custom task, click Add Custom Task , browse to the relevant JAR file, and click the Upload icon. If a class implements a WorkItemHandler , you can replace annotations with a .wid file by adding the file to Business Central separately. Optional: To remove a custom task, click remove on the row of the custom task you want to remove and click Ok to confirm removal. After you configure all required custom tasks, navigate to a project in Business Central and go to the project Settings Custom Tasks page to view the available custom tasks that you enabled. For each custom task, click Install to make the task available to the processes in that project or click Uninstall to exclude the task from the processes in the project. If you are prompted for additional information when you install a custom task, enter the required information and click Install again. The required parameters for the custom task depend on the type of task. 
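As background for the JAR upload step above, the following is a minimal, hypothetical Java sketch of the kind of WorkItemHandler implementation that such a custom task JAR typically packages. The class name and the "Message" and "Result" parameter names are illustrative only; WorkItemHandler, WorkItem, and WorkItemManager are the standard KIE API interfaces (available through the kie-api dependency).

import java.util.HashMap;
import java.util.Map;
import org.kie.api.runtime.process.WorkItem;
import org.kie.api.runtime.process.WorkItemHandler;
import org.kie.api.runtime.process.WorkItemManager;

// Illustrative handler: reads one input parameter, produces one result,
// and signals completion of the work item back to the process engine.
public class LoggingWorkItemHandler implements WorkItemHandler {

    @Override
    public void executeWorkItem(WorkItem workItem, WorkItemManager manager) {
        Object message = workItem.getParameter("Message");    // input parameter defined by the task
        Map<String, Object> results = new HashMap<>();
        results.put("Result", "Handled: " + message);          // output mapped back to the process
        manager.completeWorkItem(workItem.getId(), results);
    }

    @Override
    public void abortWorkItem(WorkItem workItem, WorkItemManager manager) {
        manager.abortWorkItem(workItem.getId());
    }
}

As noted above, the parameters that Business Central prompts for at installation time depend on the type of task. 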
For example, rule and decision tasks require artifact GAV information (Group ID, Artifact ID, Version), email tasks require host and port access information, and REST tasks require API credentials. Other custom tasks might not require any additional parameters. Figure 36.2. Install custom tasks for use in processes Click Save . Return to the project page, select or add a business process in the project, and in the process designer palette, select the Custom Tasks option to view the available custom tasks that you enabled and installed: Figure 36.3. Access installed custom tasks in process designer | null | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/managing_red_hat_process_automation_manager_and_kie_server_settings/manage-service-tasks-proc_configuring-central |
Hardware Guide | Hardware Guide Red Hat Ceph Storage 8 Hardware selection recommendations for Red Hat Ceph Storage Red Hat Ceph Storage Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/hardware_guide/index |
5.325. system-config-printer | 5.325. system-config-printer 5.325.1. RHBA-2012:0448 - system-config-printer bug fix update Updated system-config-printer packages that fix two bugs are now available for Red Hat Enterprise Linux 6. The system-config-printer package contains a print queue configuration tool with a graphical user interface. Bug Fixes BZ# 739745 Previously, displaying tooltips while being in a main loop recursion caused the system-config-printer utility to terminate unexpectedly. To prevent system-config-printer from crashing, displaying tooltips is now avoided during the main loop recursion. BZ# 744519 Python bindings for the CUPS library were not reliable when threads were used. In particular, a single password callback function was used instead of one for each thread. This, in some cases, caused the system-config-printer utility to terminate unexpectedly with a segmentation fault. With this update, thread local storage is used for the password callback function in Python bindings for the CUPS library. All users of system-config-printer are advised to upgrade to these updated packages, which fix these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/system-config-printer |
Chapter 18. Managing More Code with Make | Chapter 18. Managing More Code with Make The GNU Make utility, commonly abbreviated as Make , is a tool for controlling the generation of executables from source files. Make automatically determines which parts of a complex program have changed and need to be recompiled. Make uses configuration files called Makefiles to control the way programs are built. 18.1. GNU make and Makefile Overview To create a usable form (usually executable files) from the source files of a particular project, perform several necessary steps. Record the actions and their sequence to be able to repeat them later. Red Hat Enterprise Linux contains GNU make , a build system designed for this purpose. Prerequisites Understanding the concepts of compiling and linking GNU make GNU make reads Makefiles which contain the instructions describing the build process. A Makefile contains multiple rules that describe a way to satisfy a certain condition ( target ) with a specific action ( recipe ). Rules can hierarchically depend on another rule. Running make without any options makes it look for a Makefile in the current directory and attempt to reach the default target. The actual Makefile file name can be one of Makefile , makefile , and GNUmakefile . The default target is determined from the Makefile contents. Makefile Details Makefiles use a relatively simple syntax for defining variables and rules , which consist of a target and a recipe . The target specifies the output if a rule is executed. The lines with recipes must start with the tab character. Typically, a Makefile contains rules for compiling source files, a rule for linking the resulting object files, and a target that serves as the entry point at the top of the hierarchy. Consider the following Makefile for building a C program which consists of a single file, hello.c . all: hello hello: hello.o gcc hello.o -o hello hello.o: hello.c gcc -c hello.c -o hello.o This specifies that to reach the target all , the file hello is required. To get hello , one needs hello.o (linked by gcc ), which in turn is created from hello.c (compiled by gcc ). The target all is the default target because it is the first target that does not start with a period. Running make without any arguments is then identical to running make all , if the current directory contains this Makefile . Typical Makefile A more typical Makefile uses variables for generalization of the steps and adds a target "clean" which removes everything but the source files. CC=gcc CFLAGS=-c -Wall SOURCE=hello.c OBJ=$(SOURCE:.c=.o) EXE=hello all: $(SOURCE) $(EXE) $(EXE): $(OBJ) $(CC) $(OBJ) -o $@ %.o: %.c $(CC) $(CFLAGS) $< -o $@ clean: rm -rf $(OBJ) $(EXE) Adding more source files to such a Makefile requires adding them to the line where the SOURCE variable is defined. Additional resources GNU make: Introduction - 2 An Introduction to Makefiles Chapter 15, Building Code with GCC 18.2. Example: Building a C Program Using a Makefile Build a sample C program using a Makefile by following the steps in the example below. 
Prerequisites Understanding of Makefiles and make Procedure Create a directory hellomake and change to this directory: Create a file hello.c with the following contents: #include <stdio.h> int main(int argc, char *argv[]) { printf("Hello, World!\n"); return 0; } Create a file Makefile with the following contents: CC=gcc CFLAGS=-c -Wall SOURCE=hello.c OBJ=$(SOURCE:.c=.o) EXE=hello all: $(SOURCE) $(EXE) $(EXE): $(OBJ) $(CC) $(OBJ) -o $@ %.o: %.c $(CC) $(CFLAGS) $< -o $@ clean: rm -rf $(OBJ) $(EXE) Caution The Makefile recipe lines must start with the tab character. When copying the text above from the browser, you may paste spaces instead. Correct this change manually. Run make : This creates an executable file hello . Run the executable file hello : Run the Makefile target clean to remove the created files: Additional Resources Section 15.8, "Example: Building a C Program with GCC" Section 15.9, "Example: Building a C++ Program with GCC" 18.3. Documentation Resources for make For more information about make , see the resources listed below. Installed Documentation Use the man and info tools to view manual pages and information pages installed on your system: Online Documentation The GNU Make Manual hosted by the Free Software Foundation The Red Hat Developer Toolset User Guide - GNU make | [
"all: hello hello: hello.o gcc hello.o -o hello hello.o: hello.c gcc -c hello.c -o hello.o",
"CC=gcc CFLAGS=-c -Wall SOURCE=hello.c OBJ=USD(SOURCE:.c=.o) EXE=hello all: USD(SOURCE) USD(EXE) USD(EXE): USD(OBJ) USD(CC) USD(OBJ) -o USD@ %.o: %.c USD(CC) USD(CFLAGS) USD< -o USD@ clean: rm -rf USD(OBJ) USD(EXE)",
"mkdir hellomake cd hellomake",
"#include <stdio.h> int main(int argc, char *argv[]) { printf(\"Hello, World!\\n\"); return 0; }",
"CC=gcc CFLAGS=-c -Wall SOURCE=hello.c OBJ=USD(SOURCE:.c=.o) EXE=hello all: USD(SOURCE) USD(EXE) USD(EXE): USD(OBJ) USD(CC) USD(OBJ) -o USD@ %.o: %.c USD(CC) USD(CFLAGS) USD< -o USD@ clean: rm -rf USD(OBJ) USD(EXE)",
"make gcc -c -Wall hello.c -o hello.o gcc hello.o -o hello",
"./hello Hello, World!",
"make clean rm -rf hello.o hello",
"man make info make"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/developer_guide/managing-more-code-make |
Chapter 2. Alertmanager [monitoring.coreos.com/v1] | Chapter 2. Alertmanager [monitoring.coreos.com/v1] Description Alertmanager describes an Alertmanager cluster. Type object Required spec 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Specification of the desired behavior of the Alertmanager cluster. More info: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status status object Most recent observed status of the Alertmanager cluster. Read-only. Not included when requesting from the apiserver, only from the Prometheus Operator API itself. More info: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status 2.1.1. .spec Description Specification of the desired behavior of the Alertmanager cluster. More info: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status Type object Property Type Description additionalPeers array (string) AdditionalPeers allows injecting a set of additional Alertmanagers to peer with to form a highly available cluster. affinity object If specified, the pod's scheduling constraints. alertmanagerConfigNamespaceSelector object Namespaces to be selected for AlertmanagerConfig discovery. If nil, only check own namespace. alertmanagerConfigSelector object AlertmanagerConfigs to be selected for to merge and configure Alertmanager with. alertmanagerConfiguration object EXPERIMENTAL: alertmanagerConfiguration specifies the configuration of Alertmanager. If defined, it takes precedence over the configSecret field. This field may change in future releases. baseImage string Base image that is used to deploy pods, without tag. Deprecated: use 'image' instead clusterAdvertiseAddress string ClusterAdvertiseAddress is the explicit address to advertise in cluster. Needs to be provided for non RFC1918 [1] (public) addresses. [1] RFC1918: https://tools.ietf.org/html/rfc1918 clusterGossipInterval string Interval between gossip attempts. clusterPeerTimeout string Timeout for cluster peering. clusterPushpullInterval string Interval between pushpull attempts. configMaps array (string) ConfigMaps is a list of ConfigMaps in the same namespace as the Alertmanager object, which shall be mounted into the Alertmanager Pods. Each ConfigMap is added to the StatefulSet definition as a volume named configmap-<configmap-name> . The ConfigMaps are mounted into /etc/alertmanager/configmaps/<configmap-name> in the 'alertmanager' container. configSecret string ConfigSecret is the name of a Kubernetes Secret in the same namespace as the Alertmanager object, which contains the configuration for this Alertmanager instance. 
If empty, it defaults to alertmanager-<alertmanager-name> . The Alertmanager configuration should be available under the alertmanager.yaml key. Additional keys from the original secret are copied to the generated secret. If either the secret or the alertmanager.yaml key is missing, the operator provisions an Alertmanager configuration with one empty receiver (effectively dropping alert notifications). containers array Containers allows injecting additional containers. This is meant to allow adding an authentication proxy to an Alertmanager pod. Containers described here modify an operator generated container if they share the same name and modifications are done via a strategic merge patch. The current container names are: alertmanager and config-reloader . Overriding containers is entirely outside the scope of what the maintainers will support and by doing so, you accept that this behaviour may break at any time without notice. containers[] object A single application container that you want to run within a pod. externalUrl string The external URL the Alertmanager instances will be available under. This is necessary to generate correct URLs. This is necessary if Alertmanager is not served from root of a DNS name. forceEnableClusterMode boolean ForceEnableClusterMode ensures Alertmanager does not deactivate the cluster mode when running with a single replica. Use case is e.g. spanning an Alertmanager cluster across Kubernetes clusters with a single replica in each. hostAliases array Pods' hostAliases configuration hostAliases[] object HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pod's hosts file. image string Image if specified has precedence over baseImage, tag and sha combinations. Specifying the version is still necessary to ensure the Prometheus Operator knows what version of Alertmanager is being configured. imagePullSecrets array An optional list of references to secrets in the same namespace to use for pulling prometheus and alertmanager images from registries see http://kubernetes.io/docs/user-guide/images#specifying-imagepullsecrets-on-a-pod imagePullSecrets[] object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. initContainers array InitContainers allows adding initContainers to the pod definition. Those can be used to e.g. fetch secrets for injection into the Alertmanager configuration from external sources. Any errors during the execution of an initContainer will lead to a restart of the Pod. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ Using initContainers for any use case other than secret fetching is entirely outside the scope of what the maintainers will support and by doing so, you accept that this behaviour may break at any time without notice. initContainers[] object A single application container that you want to run within a pod. listenLocal boolean ListenLocal makes the Alertmanager server listen on loopback, so that it does not bind against the Pod IP. Note this is only for the Alertmanager UI, not the gossip communication. logFormat string Log format for Alertmanager to be configured with. logLevel string Log level for Alertmanager to be configured with. minReadySeconds integer Minimum number of seconds for which a newly created pod should be ready without any of its container crashing for it to be considered available. 
Defaults to 0 (pod will be considered available as soon as it is ready) This is an alpha field and requires enabling StatefulSetMinReadySeconds feature gate. nodeSelector object (string) Define which Nodes the Pods are scheduled on. paused boolean If set to true, all actions on the underlying managed objects are not going to be performed, except for delete actions. podMetadata object PodMetadata configures Labels and Annotations which are propagated to the alertmanager pods. portName string Port name used for the pods and governing service. This defaults to web priorityClassName string Priority class assigned to the Pods replicas integer Size is the expected size of the alertmanager cluster. The controller will eventually make the size of the running cluster equal to the expected size. resources object Define resources requests and limits for single Pods. retention string Time duration Alertmanager shall retain data for. Default is '120h', and must match the regular expression [0-9]+(ms|s|m|h) (milliseconds seconds minutes hours). routePrefix string The route prefix Alertmanager registers HTTP handlers for. This is useful, if using ExternalURL and a proxy is rewriting HTTP routes of a request, and the actual ExternalURL is still true, but the server serves requests under a different route prefix. For example for use with kubectl proxy . secrets array (string) Secrets is a list of Secrets in the same namespace as the Alertmanager object, which shall be mounted into the Alertmanager Pods. Each Secret is added to the StatefulSet definition as a volume named secret-<secret-name> . The Secrets are mounted into /etc/alertmanager/secrets/<secret-name> in the 'alertmanager' container. securityContext object SecurityContext holds pod-level security attributes and common container settings. This defaults to the default PodSecurityContext. serviceAccountName string ServiceAccountName is the name of the ServiceAccount to use to run the Prometheus Pods. sha string SHA of Alertmanager container image to be deployed. Defaults to the value of version . Similar to a tag, but the SHA explicitly deploys an immutable container image. Version and Tag are ignored if SHA is set. Deprecated: use 'image' instead. The image digest can be specified as part of the image URL. storage object Storage is the definition of how storage will be used by the Alertmanager instances. tag string Tag of Alertmanager container image to be deployed. Defaults to the value of version . Version is ignored if Tag is set. Deprecated: use 'image' instead. The image tag can be specified as part of the image URL. tolerations array If specified, the pod's tolerations. tolerations[] object The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. topologySpreadConstraints array If specified, the pod's topology spread constraints. topologySpreadConstraints[] object TopologySpreadConstraint specifies how to spread matching pods among the given topology. version string Version the cluster should be on. volumeMounts array VolumeMounts allows configuration of additional VolumeMounts on the output StatefulSet definition. VolumeMounts specified will be appended to other VolumeMounts in the alertmanager container, that are generated as a result of StorageSpec objects. volumes array Volumes allows configuration of additional volumes on the output StatefulSet definition. 
Volumes specified will be appended to other volumes that are generated as a result of StorageSpec objects. volumes[] object Volume represents a named volume in a pod that may be accessed by any container in the pod. web object Defines the web command line flags when starting Alertmanager. 2.1.2. .spec.affinity Description If specified, the pod's scheduling constraints. Type object Property Type Description nodeAffinity object Describes node affinity scheduling rules for the pod. podAffinity object Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). podAntiAffinity object Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)). 2.1.3. .spec.affinity.nodeAffinity Description Describes node affinity scheduling rules for the pod. Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). requiredDuringSchedulingIgnoredDuringExecution object If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. 2.1.4. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. Type array 2.1.5. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). Type object Required preference weight Property Type Description preference object A node selector term, associated with the corresponding weight. weight integer Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100. 2.1.6. 
.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference Description A node selector term, associated with the corresponding weight. Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchFields array A list of node selector requirements by node's fields. matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 2.1.7. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions Description A list of node selector requirements by node's labels. Type array 2.1.8. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 2.1.9. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchFields Description A list of node selector requirements by node's fields. Type array 2.1.10. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 2.1.11. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. Type object Required nodeSelectorTerms Property Type Description nodeSelectorTerms array Required. A list of node selector terms. The terms are ORed. nodeSelectorTerms[] object A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. 2.1.12. 
.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms Description Required. A list of node selector terms. The terms are ORed. Type array 2.1.13. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[] Description A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchFields array A list of node selector requirements by node's fields. matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 2.1.14. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions Description A list of node selector requirements by node's labels. Type array 2.1.15. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 2.1.16. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchFields Description A list of node selector requirements by node's fields. Type array 2.1.17. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 2.1.18. .spec.affinity.podAffinity Description Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. 
The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) requiredDuringSchedulingIgnoredDuringExecution array If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. requiredDuringSchedulingIgnoredDuringExecution[] object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running 2.1.19. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. Type array 2.1.20. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Type object Required podAffinityTerm weight Property Type Description podAffinityTerm object Required. A pod affinity term, associated with the corresponding weight. weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 2.1.21. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm Description Required. A pod affinity term, associated with the corresponding weight. Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. 
The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 2.1.22. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector Description A label query over a set of resources, in this case pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.23. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.24. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.25. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.26. 
.spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.27. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.28. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. Type array 2.1.29. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[] Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 2.1.30. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector Description A label query over a set of resources, in this case pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. 
matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.31. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.32. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.33. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.34. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.35. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.36. .spec.affinity.podAntiAffinity Description Describes pod anti-affinity scheduling rules (e.g. 
avoid putting this pod in the same node, zone, etc. as some other pod(s)). Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) requiredDuringSchedulingIgnoredDuringExecution array If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. requiredDuringSchedulingIgnoredDuringExecution[] object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running 2.1.37. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. Type array 2.1.38. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Type object Required podAffinityTerm weight Property Type Description podAffinityTerm object Required. A pod affinity term, associated with the corresponding weight. weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 2.1.39. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm Description Required. A pod affinity term, associated with the corresponding weight. Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. 
namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 2.1.40. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector Description A label query over a set of resources, in this case pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.41. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.42. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.43. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 
matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.44. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.45. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.46. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. Type array 2.1.47. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[] Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.
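As an illustration of how the anti-affinity fields above fit together in an Alertmanager spec, the following sketch spreads matching pods across nodes; the app.kubernetes.io/name label and its value are assumed examples, not values mandated by this API:

spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app.kubernetes.io/name: alertmanager   # assumed label carried by the peer pods
        # Never co-schedule two matching pods on the same node; a zone label such as
        # topology.kubernetes.io/zone here would spread at zone granularity instead.
        topologyKey: kubernetes.io/hostname

2.1.48.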
.spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector Description A label query over a set of resources, in this case pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.49. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.50. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.51. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.52. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.53. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. 
If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.54. .spec.alertmanagerConfigNamespaceSelector Description Namespaces to be selected for AlertmanagerConfig discovery. If nil, only the Alertmanager object's own namespace is checked. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.55. .spec.alertmanagerConfigNamespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.56. .spec.alertmanagerConfigNamespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.57. .spec.alertmanagerConfigSelector Description AlertmanagerConfigs to be selected and merged to configure Alertmanager. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.58. .spec.alertmanagerConfigSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.59. .spec.alertmanagerConfigSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
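To make the two selector fields concrete, here is a minimal sketch; the label keys and values (alertmanagerConfig: main, monitoring: enabled) are assumed conventions chosen for illustration, not defaults of this API:

spec:
  # Merge every AlertmanagerConfig labelled alertmanagerConfig=main ...
  alertmanagerConfigSelector:
    matchLabels:
      alertmanagerConfig: main
  # ... found in any namespace labelled monitoring=enabled.
  # Omitting this selector restricts discovery to the Alertmanager object's own namespace.
  alertmanagerConfigNamespaceSelector:
    matchLabels:
      monitoring: enabled

2.1.60. .spec.alertmanagerConfiguration Description EXPERIMENTAL: alertmanagerConfiguration specifies the configuration of Alertmanager.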
If defined, it takes precedence over the configSecret field. This field may change in future releases. Type object Property Type Description global object Defines the global parameters of the Alertmanager configuration. name string The name of the AlertmanagerConfig resource which is used to generate the Alertmanager configuration. It must be defined in the same namespace as the Alertmanager object. The operator will not enforce a namespace label for routes and inhibition rules. templates array Custom notification templates. templates[] object SecretOrConfigMap allows specifying data as a Secret or a ConfigMap. Fields are mutually exclusive. 2.1.61. .spec.alertmanagerConfiguration.global Description Defines the global parameters of the Alertmanager configuration. Type object Property Type Description httpConfig object HTTP client configuration. resolveTimeout string ResolveTimeout is the default value used by Alertmanager if the alert does not include EndsAt; after this time passes, Alertmanager can declare the alert as resolved if it has not been updated. This has no impact on alerts from Prometheus, as they always include EndsAt. 2.1.62. .spec.alertmanagerConfiguration.global.httpConfig Description HTTP client configuration. Type object Property Type Description authorization object Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. basicAuth object BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. bearerTokenSecret object The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the Alertmanager object and accessible by the Prometheus Operator. followRedirects boolean FollowRedirects specifies whether the client should follow HTTP 3xx redirects. oauth2 object OAuth2 client credentials used to fetch a token for the targets. proxyURL string Optional proxy URL. tlsConfig object TLS configuration for the client. 2.1.63. .spec.alertmanagerConfiguration.global.httpConfig.authorization Description Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. Type object Property Type Description credentials object The secret's key that contains the credentials of the request type string Set the authentication type. Defaults to Bearer; Basic will cause an error. 2.1.64. .spec.alertmanagerConfiguration.global.httpConfig.authorization.credentials Description The secret's key that contains the credentials of the request Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 2.1.65. .spec.alertmanagerConfiguration.global.httpConfig.basicAuth Description BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. Type object Property Type Description password object The secret in the service monitor namespace that contains the password for authentication. username object The secret in the service monitor namespace that contains the username for authentication.
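A sketch of how these fields combine is shown below; the AlertmanagerConfig name (global-config), the Secret name (http-auth), and its keys are assumptions made for the example, not values defined by this API:

spec:
  alertmanagerConfiguration:
    # AlertmanagerConfig resource in the same namespace as this Alertmanager object
    name: global-config
    global:
      resolveTimeout: 5m
      httpConfig:
        followRedirects: true
        # basicAuth and authorization are mutually exclusive; basicAuth takes precedence if both are set
        basicAuth:
          username:
            name: http-auth   # assumed Secret name
            key: username
          password:
            name: http-auth
            key: password

2.1.66.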
.spec.alertmanagerConfiguration.global.httpConfig.basicAuth.password Description The secret in the service monitor namespace that contains the password for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 2.1.67. .spec.alertmanagerConfiguration.global.httpConfig.basicAuth.username Description The secret in the service monitor namespace that contains the username for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 2.1.68. .spec.alertmanagerConfiguration.global.httpConfig.bearerTokenSecret Description The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the Alertmanager object and accessible by the Prometheus Operator. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 2.1.69. .spec.alertmanagerConfiguration.global.httpConfig.oauth2 Description OAuth2 client credentials used to fetch a token for the targets. Type object Required clientId clientSecret tokenUrl Property Type Description clientId object The secret or configmap containing the OAuth2 client id clientSecret object The secret containing the OAuth2 client secret endpointParams object (string) Parameters to append to the token URL scopes array (string) OAuth2 scopes used for the token request tokenUrl string The URL to fetch the token from 2.1.70. .spec.alertmanagerConfiguration.global.httpConfig.oauth2.clientId Description The secret or configmap containing the OAuth2 client id Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 2.1.71. .spec.alertmanagerConfiguration.global.httpConfig.oauth2.clientId.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 2.1.72. .spec.alertmanagerConfiguration.global.httpConfig.oauth2.clientId.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 
optional boolean Specify whether the Secret or its key must be defined 2.1.73. .spec.alertmanagerConfiguration.global.httpConfig.oauth2.clientSecret Description The secret containing the OAuth2 client secret Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 2.1.74. .spec.alertmanagerConfiguration.global.httpConfig.tlsConfig Description TLS configuration for the client. Type object Property Type Description ca object Struct containing the CA cert to use for the targets. cert object Struct containing the client cert file for the targets. insecureSkipVerify boolean Disable target certificate validation. keySecret object Secret containing the client key file for the targets. serverName string Used to verify the hostname for the targets. 2.1.75. .spec.alertmanagerConfiguration.global.httpConfig.tlsConfig.ca Description Struct containing the CA cert to use for the targets. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 2.1.76. .spec.alertmanagerConfiguration.global.httpConfig.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 2.1.77. .spec.alertmanagerConfiguration.global.httpConfig.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 2.1.78. .spec.alertmanagerConfiguration.global.httpConfig.tlsConfig.cert Description Struct containing the client cert file for the targets. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 2.1.79. .spec.alertmanagerConfiguration.global.httpConfig.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 2.1.80. .spec.alertmanagerConfiguration.global.httpConfig.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 2.1.81. .spec.alertmanagerConfiguration.global.httpConfig.tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 2.1.82. .spec.alertmanagerConfiguration.templates Description Custom notification templates. Type array 2.1.83. .spec.alertmanagerConfiguration.templates[] Description SecretOrConfigMap allows to specify data as a Secret or ConfigMap. Fields are mutually exclusive. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 2.1.84. .spec.alertmanagerConfiguration.templates[].configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 2.1.85. .spec.alertmanagerConfiguration.templates[].secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 2.1.86. .spec.containers Description Containers allows injecting additional containers. This is meant to allow adding an authentication proxy to an Alertmanager pod. Containers described here modify an operator generated container if they share the same name and modifications are done via a strategic merge patch. The current container names are: alertmanager and config-reloader . Overriding containers is entirely outside the scope of what the maintainers will support and by doing so, you accept that this behaviour may break at any time without notice. Type array 2.1.87. .spec.containers[] Description A single application container that you want to run within a pod. Type object Required name Property Type Description args array (string) Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references USD(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. 
More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command array (string) Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references USD(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env array List of environment variables to set in the container. Cannot be updated. env[] object EnvVar represents an environment variable present in a Container. envFrom array List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. envFrom[] object EnvFromSource represents the source of a set of ConfigMaps image string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images lifecycle object Actions that the management system should take in response to container lifecycle events. Cannot be updated. livenessProbe object Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes name string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports array List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. ports[] object ContainerPort represents a network port in a single container. readinessProbe object Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes resources object Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ securityContext object SecurityContext defines the security options the container should be run with. 
If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ startupProbe object StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container processes that reads from stdin will never receive an EOF. Default is false terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices array volumeDevices is the list of block devices to be used by the container. volumeDevices[] object volumeDevice describes a mapping of a raw block device within a container. volumeMounts array Pod volumes to mount into the container's filesystem. Cannot be updated. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. 2.1.88. .spec.containers[].env Description List of environment variables to set in the container. Cannot be updated. Type array 2.1.89. .spec.containers[].env[] Description EnvVar represents an environment variable present in a Container. Type object Required name Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. 
value string Variable references USD(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". valueFrom object Source for the environment variable's value. Cannot be used if value is not empty. 2.1.90. .spec.containers[].env[].valueFrom Description Source for the environment variable's value. Cannot be used if value is not empty. Type object Property Type Description configMapKeyRef object Selects a key of a ConfigMap. fieldRef object Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. secretKeyRef object Selects a key of a secret in the pod's namespace 2.1.91. .spec.containers[].env[].valueFrom.configMapKeyRef Description Selects a key of a ConfigMap. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 2.1.92. .spec.containers[].env[].valueFrom.fieldRef Description Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 2.1.93. .spec.containers[].env[].valueFrom.resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 2.1.94. .spec.containers[].env[].valueFrom.secretKeyRef Description Selects a key of a secret in the pod's namespace Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 2.1.95. .spec.containers[].envFrom Description List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. 
All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. Type array 2.1.96. .spec.containers[].envFrom[] Description EnvFromSource represents the source of a set of ConfigMaps Type object Property Type Description configMapRef object The ConfigMap to select from prefix string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. secretRef object The Secret to select from 2.1.97. .spec.containers[].envFrom[].configMapRef Description The ConfigMap to select from Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap must be defined 2.1.98. .spec.containers[].envFrom[].secretRef Description The Secret to select from Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret must be defined 2.1.99. .spec.containers[].lifecycle Description Actions that the management system should take in response to container lifecycle events. Cannot be updated. Type object Property Type Description postStart object PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks preStop object PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks 2.1.100. .spec.containers[].lifecycle.postStart Description PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. 2.1.101. .spec.containers[].lifecycle.postStart.exec Description Exec specifies the action to take. 
Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 2.1.102. .spec.containers[].lifecycle.postStart.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 2.1.103. .spec.containers[].lifecycle.postStart.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 2.1.104. .spec.containers[].lifecycle.postStart.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name value string The header field value 2.1.105. .spec.containers[].lifecycle.postStart.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 2.1.106. .spec.containers[].lifecycle.preStop Description PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. 2.1.107. .spec.containers[].lifecycle.preStop.exec Description Exec specifies the action to take. 
Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 2.1.108. .spec.containers[].lifecycle.preStop.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 2.1.109. .spec.containers[].lifecycle.preStop.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 2.1.110. .spec.containers[].lifecycle.preStop.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name value string The header field value 2.1.111. .spec.containers[].lifecycle.preStop.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 2.1.112. .spec.containers[].livenessProbe Description Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. 
The grace period is the duration in seconds between the time the processes running in the pod are sent a termination signal and the time the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be a non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 2.1.113. .spec.containers[].livenessProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 2.1.114. .spec.containers[].livenessProbe.grpc Description GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 2.1.115. .spec.containers[].livenessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 2.1.116. .spec.containers[].livenessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 2.1.117. .spec.containers[].livenessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name value string The header field value 2.1.118. .spec.containers[].livenessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.
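For illustration only, the probe fields above could be set on an injected sidecar like this; the container name, image, port, and path are hypothetical, and overriding operator-generated containers remains unsupported, as noted for .spec.containers:

spec:
  containers:
  - name: auth-proxy                        # hypothetical injected sidecar
    image: registry.example.com/auth-proxy:1.0
    livenessProbe:
      httpGet:
        path: /healthz                      # assumed health endpoint
        port: 8443
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 1
      failureThreshold: 3

2.1.119.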
.spec.containers[].ports Description List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. Type array 2.1.120. .spec.containers[].ports[] Description ContainerPort represents a network port in a single container. Type object Required containerPort Property Type Description containerPort integer Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. hostIP string What host IP to bind the external port to. hostPort integer Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. name string If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. protocol string Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP". 2.1.121. .spec.containers[].readinessProbe Description Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. 
More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 2.1.122. .spec.containers[].readinessProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 2.1.123. .spec.containers[].readinessProbe.grpc Description GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 2.1.124. .spec.containers[].readinessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 2.1.125. .spec.containers[].readinessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 2.1.126. .spec.containers[].readinessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name value string The header field value 2.1.127. .spec.containers[].readinessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 2.1.128. .spec.containers[].resources Description Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ Type object Property Type Description limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
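The limits and requests properties above appear in the CRD schema as integer-or-string quantities; in the Go client they are resource.Quantity values on a core/v1 Container. The fragment below is a minimal sketch only, assuming the k8s.io/api and k8s.io/apimachinery modules are available; the container name, image, and quantity values are illustrative placeholders, not values taken from this reference.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// Hypothetical sidecar container; the name, image, and quantities are
	// illustrative only and are not defined by this API reference.
	c := corev1.Container{
		Name:  "config-reloader",
		Image: "example.com/config-reloader:latest",
		Resources: corev1.ResourceRequirements{
			// requests: the minimum resources the scheduler reserves for the container.
			Requests: corev1.ResourceList{
				corev1.ResourceCPU:    resource.MustParse("100m"),
				corev1.ResourceMemory: resource.MustParse("64Mi"),
			},
			// limits: the maximum resources the container may consume.
			Limits: corev1.ResourceList{
				corev1.ResourceCPU:    resource.MustParse("500m"),
				corev1.ResourceMemory: resource.MustParse("256Mi"),
			},
		},
	}
	fmt.Printf("%s requests cpu=%s memory=%s\n",
		c.Name,
		c.Resources.Requests.Cpu().String(),
		c.Resources.Requests.Memory().String())
}
```

If Requests were omitted here, they would default to the Limits values, as described in the table above.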
2.1.129. .spec.containers[].securityContext Description SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ Type object Property Type Description allowPrivilegeEscalation boolean AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows. capabilities object The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. privileged boolean Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows. procMount string procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows. readOnlyRootFilesystem boolean Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seccompProfile object The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. windowsOptions object The Windows specific settings applied to all containers.
If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. 2.1.130. .spec.containers[].securityContext.capabilities Description The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description add array (string) Added capabilities drop array (string) Removed capabilities 2.1.131. .spec.containers[].securityContext.seLinuxOptions Description The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 2.1.132. .spec.containers[].securityContext.seccompProfile Description The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must only be set if type is "Localhost". type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. 2.1.133. .spec.containers[].securityContext.windowsOptions Description The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). 
In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 2.1.134. .spec.containers[].startupProbe Description StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 2.1.135. .spec.containers[].startupProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. 
Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 2.1.136. .spec.containers[].startupProbe.grpc Description GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 2.1.137. .spec.containers[].startupProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 2.1.138. .spec.containers[].startupProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 2.1.139. .spec.containers[].startupProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name value string The header field value 2.1.140. .spec.containers[].startupProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 2.1.141. .spec.containers[].volumeDevices Description volumeDevices is the list of block devices to be used by the container. Type array 2.1.142. .spec.containers[].volumeDevices[] Description volumeDevice describes a mapping of a raw block device within a container. Type object Required devicePath name Property Type Description devicePath string devicePath is the path inside of the container that the device will be mapped to. name string name must match the name of a persistentVolumeClaim in the pod 2.1.143. .spec.containers[].volumeMounts Description Pod volumes to mount into the container's filesystem. Cannot be updated. Type array 2.1.144. .spec.containers[].volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required mountPath name Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). 
subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 2.1.145. .spec.hostAliases Description Pods' hostAliases configuration Type array 2.1.146. .spec.hostAliases[] Description HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pod's hosts file. Type object Required hostnames ip Property Type Description hostnames array (string) Hostnames for the above IP address. ip string IP address of the host file entry. 2.1.147. .spec.imagePullSecrets Description An optional list of references to secrets in the same namespace to use for pulling prometheus and alertmanager images from registries see http://kubernetes.io/docs/user-guide/images#specifying-imagepullsecrets-on-a-pod Type array 2.1.148. .spec.imagePullSecrets[] Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 2.1.149. .spec.initContainers Description InitContainers allows adding initContainers to the pod definition. Those can be used to e.g. fetch secrets for injection into the Alertmanager configuration from external sources. Any errors during the execution of an initContainer will lead to a restart of the Pod. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ Using initContainers for any use case other than secret fetching is entirely outside the scope of what the maintainers will support and by doing so, you accept that this behaviour may break at any time without notice. Type array 2.1.150. .spec.initContainers[] Description A single application container that you want to run within a pod. Type object Required name Property Type Description args array (string) Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command array (string) Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated.
More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env array List of environment variables to set in the container. Cannot be updated. env[] object EnvVar represents an environment variable present in a Container. envFrom array List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. envFrom[] object EnvFromSource represents the source of a set of ConfigMaps image string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images lifecycle object Actions that the management system should take in response to container lifecycle events. Cannot be updated. livenessProbe object Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes name string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports array List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. ports[] object ContainerPort represents a network port in a single container. readinessProbe object Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes resources object Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ securityContext object SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ startupProbe object StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. 
More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container process that reads from stdin will never receive an EOF. Default is false. terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices array volumeDevices is the list of block devices to be used by the container. volumeDevices[] object volumeDevice describes a mapping of a raw block device within a container. volumeMounts array Pod volumes to mount into the container's filesystem. Cannot be updated. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. 2.1.151. .spec.initContainers[].env Description List of environment variables to set in the container. Cannot be updated. Type array 2.1.152. .spec.initContainers[].env[] Description EnvVar represents an environment variable present in a Container. Type object Required name Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. value string Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". valueFrom object Source for the environment variable's value. Cannot be used if value is not empty.
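The env, envFrom, and valueFrom fields described in the preceding tables (and broken down further in the sections that follow) accept standard core/v1 values. Below is a minimal sketch, assuming the k8s.io/api module is available; the container, image, Secret, and ConfigMap names are hypothetical placeholders and are not defined by this reference. It shows a literal value, a secretKeyRef, a downward-API fieldRef, and an envFrom source with a prefix.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Hypothetical init container that fetches a secret for the Alertmanager
	// configuration; every name here (container, image, Secret, ConfigMap, keys)
	// is an illustrative placeholder, not a value taken from this API reference.
	initC := corev1.Container{
		Name:  "fetch-slack-url",
		Image: "example.com/secret-fetcher:latest",
		Env: []corev1.EnvVar{
			// Plain literal value.
			{Name: "TARGET_DIR", Value: "/work"},
			// Value taken from a key of a Secret in the same namespace.
			{
				Name: "SLACK_API_URL",
				ValueFrom: &corev1.EnvVarSource{
					SecretKeyRef: &corev1.SecretKeySelector{
						LocalObjectReference: corev1.LocalObjectReference{Name: "alertmanager-slack"},
						Key:                  "api-url",
					},
				},
			},
			// Value taken from a field of the pod itself (downward API).
			{
				Name: "POD_NAMESPACE",
				ValueFrom: &corev1.EnvVarSource{
					FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"},
				},
			},
		},
		// envFrom pulls in every key of a ConfigMap, optionally with a prefix.
		EnvFrom: []corev1.EnvFromSource{
			{
				Prefix: "FETCHER_",
				ConfigMapRef: &corev1.ConfigMapEnvSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "fetcher-settings"},
				},
			},
		},
	}
	fmt.Println("init container", initC.Name, "defines", len(initC.Env), "env vars")
}
```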
2.1.153. .spec.initContainers[].env[].valueFrom Description Source for the environment variable's value. Cannot be used if value is not empty. Type object Property Type Description configMapKeyRef object Selects a key of a ConfigMap. fieldRef object Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'], metadata.annotations['<KEY>'], spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. secretKeyRef object Selects a key of a secret in the pod's namespace 2.1.154. .spec.initContainers[].env[].valueFrom.configMapKeyRef Description Selects a key of a ConfigMap. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 2.1.155. .spec.initContainers[].env[].valueFrom.fieldRef Description Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'], metadata.annotations['<KEY>'], spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 2.1.156. .spec.initContainers[].env[].valueFrom.resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 2.1.157. .spec.initContainers[].env[].valueFrom.secretKeyRef Description Selects a key of a secret in the pod's namespace Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 2.1.158. .spec.initContainers[].envFrom Description List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. Type array 2.1.159. .spec.initContainers[].envFrom[] Description EnvFromSource represents the source of a set of ConfigMaps Type object Property Type Description configMapRef object The ConfigMap to select from prefix string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER.
secretRef object The Secret to select from 2.1.160. .spec.initContainers[].envFrom[].configMapRef Description The ConfigMap to select from Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap must be defined 2.1.161. .spec.initContainers[].envFrom[].secretRef Description The Secret to select from Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret must be defined 2.1.162. .spec.initContainers[].lifecycle Description Actions that the management system should take in response to container lifecycle events. Cannot be updated. Type object Property Type Description postStart object PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks preStop object PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks 2.1.163. .spec.initContainers[].lifecycle.postStart Description PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. 2.1.164. .spec.initContainers[].lifecycle.postStart.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 2.1.165. .spec.initContainers[].lifecycle.postStart.httpGet Description HTTPGet specifies the http request to perform. 
Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 2.1.166. .spec.initContainers[].lifecycle.postStart.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 2.1.167. .spec.initContainers[].lifecycle.postStart.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name value string The header field value 2.1.168. .spec.initContainers[].lifecycle.postStart.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 2.1.169. .spec.initContainers[].lifecycle.preStop Description PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. 2.1.170. .spec.initContainers[].lifecycle.preStop.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 2.1.171. .spec.initContainers[].lifecycle.preStop.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. 
You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 2.1.172. .spec.initContainers[].lifecycle.preStop.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 2.1.173. .spec.initContainers[].lifecycle.preStop.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name value string The header field value 2.1.174. .spec.initContainers[].lifecycle.preStop.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 2.1.175. .spec.initContainers[].livenessProbe Description Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. 
spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 2.1.176. .spec.initContainers[].livenessProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 2.1.177. .spec.initContainers[].livenessProbe.grpc Description GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 2.1.178. .spec.initContainers[].livenessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 2.1.179. .spec.initContainers[].livenessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 2.1.180. .spec.initContainers[].livenessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name value string The header field value 2.1.181. .spec.initContainers[].livenessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 2.1.182. .spec.initContainers[].ports Description List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. Type array 2.1.183. .spec.initContainers[].ports[] Description ContainerPort represents a network port in a single container. 
Type object Required containerPort Property Type Description containerPort integer Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. hostIP string What host IP to bind the external port to. hostPort integer Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. name string If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. protocol string Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP". 2.1.184. .spec.initContainers[].readinessProbe Description Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 2.1.185. .spec.initContainers[].readinessProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. 
Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 2.1.186. .spec.initContainers[].readinessProbe.grpc Description GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 2.1.187. .spec.initContainers[].readinessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 2.1.188. .spec.initContainers[].readinessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 2.1.189. .spec.initContainers[].readinessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name value string The header field value 2.1.190. .spec.initContainers[].readinessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 2.1.191. .spec.initContainers[].resources Description Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ Type object Property Type Description limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 2.1.192. .spec.initContainers[].securityContext Description SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ Type object Property Type Description allowPrivilegeEscalation boolean AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. 
AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows. capabilities object The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. privileged boolean Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows. procMount string procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows. readOnlyRootFilesystem boolean Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seccompProfile object The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. windowsOptions object The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. 2.1.193. .spec.initContainers[].securityContext.capabilities Description The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. 
Type object Property Type Description add array (string) Added capabilities drop array (string) Removed capabilities 2.1.194. .spec.initContainers[].securityContext.seLinuxOptions Description The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 2.1.195. .spec.initContainers[].securityContext.seccompProfile Description The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must only be set if type is "Localhost". type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. 2.1.196. .spec.initContainers[].securityContext.windowsOptions Description The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 2.1.197. .spec.initContainers[].startupProbe Description StartupProbe indicates that the Pod has successfully initialized. 
If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 2.1.198. .spec.initContainers[].startupProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 2.1.199. .spec.initContainers[].startupProbe.grpc Description GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). 
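The startupProbe object described here has the same shape wherever it appears in a container definition. A minimal sketch of the probe block on its own, assuming a hypothetical /healthz endpoint on port 8080:

startupProbe:
  httpGet:
    path: /healthz          # hypothetical health endpoint
    port: 8080              # hypothetical container port
    scheme: HTTP
  failureThreshold: 30      # tolerate up to 30 failed checks before startup is considered failed
  periodSeconds: 10         # probe every 10 seconds
  timeoutSeconds: 1         # per-probe timeout

A generous failureThreshold multiplied by periodSeconds gives slow-starting containers time to initialize without loosening the liveness settings that apply afterwards.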
If this is not specified, the default behavior is defined by gRPC. 2.1.200. .spec.initContainers[].startupProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 2.1.201. .spec.initContainers[].startupProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 2.1.202. .spec.initContainers[].startupProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name value string The header field value 2.1.203. .spec.initContainers[].startupProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 2.1.204. .spec.initContainers[].volumeDevices Description volumeDevices is the list of block devices to be used by the container. Type array 2.1.205. .spec.initContainers[].volumeDevices[] Description volumeDevice describes a mapping of a raw block device within a container. Type object Required devicePath name Property Type Description devicePath string devicePath is the path inside of the container that the device will be mapped to. name string name must match the name of a persistentVolumeClaim in the pod 2.1.206. .spec.initContainers[].volumeMounts Description Pod volumes to mount into the container's filesystem. Cannot be updated. Type array 2.1.207. .spec.initContainers[].volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required mountPath name Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references USD(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 2.1.208. .spec.podMetadata Description PodMetadata configures Labels and Annotations which are propagated to the alertmanager pods. 
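Continuing the hypothetical resource from the earlier sketch, the podMetadata field can propagate extra labels and annotations to the Alertmanager pods; the keys and values below are hypothetical:

spec:
  podMetadata:
    labels:
      app.kubernetes.io/part-of: example-monitoring   # hypothetical label copied to the pods
    annotations:
      example.com/owner: sre-team                     # hypothetical annotation copied to the pods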
Type object Property Type Description annotations object (string) Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations labels object (string) Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels name string Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names 2.1.209. .spec.resources Description Define resources requests and limits for single Pods. Type object Property Type Description limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 2.1.210. .spec.securityContext Description SecurityContext holds pod-level security attributes and common container settings. This defaults to the default PodSecurityContext. Type object Property Type Description fsGroup integer A special supplemental group that applies to all containers in a pod. Some volume types allow the Kubelet to change the ownership of that volume to be owned by the pod: 1. The owning GID will be the FSGroup 2. The setgid bit is set (new files created in the volume will be owned by FSGroup) 3. The permission bits are OR'd with rw-rw---- If unset, the Kubelet will not modify the ownership and permissions of any volume. Note that this field cannot be set when spec.os.name is windows. fsGroupChangePolicy string fsGroupChangePolicy defines behavior of changing ownership and permission of the volume before being exposed inside Pod. This field will only apply to volume types which support fsGroup based ownership(and permissions). It will have no effect on ephemeral volume types such as: secret, configmaps and emptydir. Valid values are "OnRootMismatch" and "Always". If not specified, "Always" is used. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 
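A minimal sketch of the pod-level securityContext fields introduced above, using hypothetical numeric IDs; where both are set, the container-level settings shown earlier take precedence:

spec:
  securityContext:
    runAsNonRoot: true
    runAsGroup: 65534                    # hypothetical GID for the container processes
    fsGroup: 65534                       # supported volumes become group-owned by this GID
    fsGroupChangePolicy: OnRootMismatch  # only re-own the volume when its root does not match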
runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object The SELinux context to be applied to all containers. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. seccompProfile object The seccomp options to use by the containers in this pod. Note that this field cannot be set when spec.os.name is windows. supplementalGroups array (integer) A list of groups applied to the first process run in each container, in addition to the container's primary GID. If unspecified, no groups will be added to any container. Note that this field cannot be set when spec.os.name is windows. sysctls array Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows. sysctls[] object Sysctl defines a kernel parameter to be set windowsOptions object The Windows specific settings applied to all containers. If unspecified, the options within a container's SecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. 2.1.211. .spec.securityContext.seLinuxOptions Description The SELinux context to be applied to all containers. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 2.1.212. .spec.securityContext.seccompProfile Description The seccomp options to use by the containers in this pod. Note that this field cannot be set when spec.os.name is windows. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must only be set if type is "Localhost". type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. 2.1.213. .spec.securityContext.sysctls Description Sysctls hold a list of namespaced sysctls used for the pod. 
Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows. Type array 2.1.214. .spec.securityContext.sysctls[] Description Sysctl defines a kernel parameter to be set Type object Required name value Property Type Description name string Name of a property to set value string Value of a property to set 2.1.215. .spec.securityContext.windowsOptions Description The Windows specific settings applied to all containers. If unspecified, the options within a container's SecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 2.1.216. .spec.storage Description Storage is the definition of how storage will be used by the Alertmanager instances. Type object Property Type Description disableMountSubPath boolean Deprecated: subPath usage will be disabled by default in a future release, this option will become unnecessary. DisableMountSubPath allows to remove any subPath usage in volume mounts. emptyDir object EmptyDirVolumeSource to be used by the Prometheus StatefulSets. If specified, used in place of any volumeClaimTemplate. More info: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir ephemeral object EphemeralVolumeSource to be used by the Prometheus StatefulSets. This is a beta field in k8s 1.21, for lower versions, starting with k8s 1.19, it requires enabling the GenericEphemeralVolume feature gate. More info: https://kubernetes.io/docs/concepts/storage/ephemeral-volumes/#generic-ephemeral-volumes volumeClaimTemplate object A PVC spec to be used by the Prometheus StatefulSets. 2.1.217. .spec.storage.emptyDir Description EmptyDirVolumeSource to be used by the Prometheus StatefulSets. If specified, used in place of any volumeClaimTemplate. More info: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir Type object Property Type Description medium string medium represents what type of storage medium should back this directory. The default is "" which means to use the node's default medium. Must be an empty string (default) or Memory. 
More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir sizeLimit integer-or-string sizeLimit is the total amount of local storage required for this EmptyDir volume. The size limit is also applicable for memory medium. The maximum usage on memory medium EmptyDir would be the minimum value between the SizeLimit specified here and the sum of memory limits of all containers in a pod. The default is nil which means that the limit is undefined. More info: http://kubernetes.io/docs/user-guide/volumes#emptydir 2.1.218. .spec.storage.ephemeral Description EphemeralVolumeSource to be used by the Prometheus StatefulSets. This is a beta field in k8s 1.21, for lower versions, starting with k8s 1.19, it requires enabling the GenericEphemeralVolume feature gate. More info: https://kubernetes.io/docs/concepts/storage/ephemeral-volumes/#generic-ephemeral-volumes Type object Property Type Description volumeClaimTemplate object Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil. 2.1.219. .spec.storage.ephemeral.volumeClaimTemplate Description Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil. Type object Required spec Property Type Description metadata object May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. spec object The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here. 2.1.220. 
.spec.storage.ephemeral.volumeClaimTemplate.metadata Description May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. Type object 2.1.221. .spec.storage.ephemeral.volumeClaimTemplate.spec Description The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here. Type object Property Type Description accessModes array (string) accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 dataSource object dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. If the AnyVolumeDataSource feature gate is enabled, this field will always have the same contents as the DataSourceRef field. dataSourceRef object dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any local object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the DataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, both fields (DataSource and DataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. There are two important differences between DataSource and DataSourceRef: * While DataSource only allows two specific types of objects, DataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While DataSource ignores disallowed values (dropping them), DataSourceRef preserves all values, and generates an error if a disallowed value is specified. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. resources object resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources selector object selector is a label query over volumes to consider for binding. storageClassName string storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 volumeMode string volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. volumeName string volumeName is the binding reference to the PersistentVolume backing this claim. 2.1.222. 
.spec.storage.ephemeral.volumeClaimTemplate.spec.dataSource Description dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. If the AnyVolumeDataSource feature gate is enabled, this field will always have the same contents as the DataSourceRef field. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 2.1.223. .spec.storage.ephemeral.volumeClaimTemplate.spec.dataSourceRef Description dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any local object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the DataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, both fields (DataSource and DataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. There are two important differences between DataSource and DataSourceRef: * While DataSource only allows two specific types of objects, DataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While DataSource ignores disallowed values (dropping them), DataSourceRef preserves all values, and generates an error if a disallowed value is specified. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 2.1.224. .spec.storage.ephemeral.volumeClaimTemplate.spec.resources Description resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources Type object Property Type Description limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 2.1.225. 
.spec.storage.ephemeral.volumeClaimTemplate.spec.selector Description selector is a label query over volumes to consider for binding. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.226. .spec.storage.ephemeral.volumeClaimTemplate.spec.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.227. .spec.storage.ephemeral.volumeClaimTemplate.spec.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.228. .spec.storage.volumeClaimTemplate Description A PVC spec to be used by the Prometheus StatefulSets. Type object Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata object EmbeddedMetadata contains metadata relevant to an EmbeddedResource. spec object Spec defines the desired characteristics of a volume requested by a pod author. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims status object Status represents the current information/status of a persistent volume claim. Read-only. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims 2.1.229. .spec.storage.volumeClaimTemplate.metadata Description EmbeddedMetadata contains metadata relevant to an EmbeddedResource. Type object Property Type Description annotations object (string) Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations labels object (string) Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. 
More info: http://kubernetes.io/docs/user-guide/labels name string Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names 2.1.230. .spec.storage.volumeClaimTemplate.spec Description Spec defines the desired characteristics of a volume requested by a pod author. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims Type object Property Type Description accessModes array (string) accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 dataSource object dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. If the AnyVolumeDataSource feature gate is enabled, this field will always have the same contents as the DataSourceRef field. dataSourceRef object dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any local object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the DataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, both fields (DataSource and DataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. There are two important differences between DataSource and DataSourceRef: * While DataSource only allows two specific types of objects, DataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While DataSource ignores disallowed values (dropping them), DataSourceRef preserves all values, and generates an error if a disallowed value is specified. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. resources object resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources selector object selector is a label query over volumes to consider for binding. storageClassName string storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 volumeMode string volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. volumeName string volumeName is the binding reference to the PersistentVolume backing this claim. 2.1.231. 
.spec.storage.volumeClaimTemplate.spec.dataSource Description dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. If the AnyVolumeDataSource feature gate is enabled, this field will always have the same contents as the DataSourceRef field. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 2.1.232. .spec.storage.volumeClaimTemplate.spec.dataSourceRef Description dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any local object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the DataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, both fields (DataSource and DataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. There are two important differences between DataSource and DataSourceRef: * While DataSource only allows two specific types of objects, DataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While DataSource ignores disallowed values (dropping them), DataSourceRef preserves all values, and generates an error if a disallowed value is specified. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 2.1.233. .spec.storage.volumeClaimTemplate.spec.resources Description resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources Type object Property Type Description limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 2.1.234. 
.spec.storage.volumeClaimTemplate.spec.selector Description selector is a label query over volumes to consider for binding. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.235. .spec.storage.volumeClaimTemplate.spec.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.236. .spec.storage.volumeClaimTemplate.spec.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.237. .spec.storage.volumeClaimTemplate.status Description Status represents the current information/status of a persistent volume claim. Read-only. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims Type object Property Type Description accessModes array (string) accessModes contains the actual access modes the volume backing the PVC has. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 allocatedResources integer-or-string allocatedResources tracks the storage capacity allocated to a PVC. It may be larger than the actual capacity when a volume expansion operation is requested. For storage quota, the larger value from allocatedResources and PVC.spec.resources is used. If allocatedResources is not set, PVC.spec.resources alone is used for quota calculation. If a volume expansion capacity request is lowered, allocatedResources is only lowered if there are no expansion operations in progress and if the actual volume capacity is equal to or lower than the requested capacity. This is an alpha field and requires enabling the RecoverVolumeExpansionFailure feature. capacity integer-or-string capacity represents the actual resources of the underlying volume. conditions array conditions is the current Condition of the persistent volume claim. If the underlying persistent volume is being resized then the Condition will be set to 'ResizeStarted'. conditions[] object PersistentVolumeClaimCondition contains details about the state of the PVC. phase string phase represents the current phase of PersistentVolumeClaim. resizeStatus string resizeStatus stores the status of the resize operation. ResizeStatus is not set by default but when expansion is complete resizeStatus is set to an empty string by the resize controller or kubelet. This is an alpha field and requires enabling the RecoverVolumeExpansionFailure feature. 2.1.238.
.spec.storage.volumeClaimTemplate.status.conditions Description conditions is the current Condition of the persistent volume claim. If the underlying persistent volume is being resized then the Condition will be set to 'ResizeStarted'. Type array 2.1.239. .spec.storage.volumeClaimTemplate.status.conditions[] Description PersistentVolumeClaimCondition contains details about the state of the PVC. Type object Required status type Property Type Description lastProbeTime string lastProbeTime is the time we probed the condition. lastTransitionTime string lastTransitionTime is the time the condition transitioned from one status to another. message string message is the human-readable message indicating details about the last transition. reason string reason is a unique, short, machine-understandable string that gives the reason for the condition's last transition. If it reports "ResizeStarted" that means the underlying persistent volume is being resized. status string type string PersistentVolumeClaimConditionType is a valid value of PersistentVolumeClaimCondition.Type 2.1.240. .spec.tolerations Description If specified, the pod's tolerations. Type array 2.1.241. .spec.tolerations[] Description The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. Type object Property Type Description effect string Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. key string Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. operator string Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category. tolerationSeconds integer TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. value string Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string. 2.1.242. .spec.topologySpreadConstraints Description If specified, the pod's topology spread constraints. Type array 2.1.243. .spec.topologySpreadConstraints[] Description TopologySpreadConstraint specifies how to spread matching pods among the given topology. Type object Required maxSkew topologyKey whenUnsatisfiable Property Type Description labelSelector object LabelSelector is used to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select the pods over which spreading will be calculated. The keys are used to look up values from the incoming pod labels, those key-value labels are ANDed with labelSelector to select the group of existing pods over which spreading will be calculated for the incoming pod. Keys that don't exist in the incoming pod labels will be ignored. A null or empty list means only match against labelSelector.
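The storage and tolerations fields covered above often appear together in the same spec. A minimal sketch, assuming a hypothetical StorageClass name and taint key:

spec:
  storage:
    volumeClaimTemplate:
      spec:
        storageClassName: fast-ssd       # hypothetical StorageClass
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi                # requested size of each PVC
  tolerations:
    - key: example.com/dedicated         # hypothetical taint key
      operator: Equal
      value: monitoring
      effect: NoSchedule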
maxSkew integer MaxSkew describes the degree to which pods may be unevenly distributed. When whenUnsatisfiable=DoNotSchedule , it is the maximum permitted difference between the number of matching pods in the target topology and the global minimum. The global minimum is the minimum number of matching pods in an eligible domain or zero if the number of eligible domains is less than MinDomains. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 2/2/1: In this case, the global minimum is 1. | zone1 | zone2 | zone3 | | P P | P P | P | - if MaxSkew is 1, incoming pod can only be scheduled to zone3 to become 2/2/2; scheduling it onto zone1(zone2) would make the ActualSkew(3-1) on zone1(zone2) violate MaxSkew(1). - if MaxSkew is 2, incoming pod can be scheduled onto any zone. When whenUnsatisfiable=ScheduleAnyway , it is used to give higher precedence to topologies that satisfy it. It's a required field. Default value is 1 and 0 is not allowed. minDomains integer MinDomains indicates a minimum number of eligible domains. When the number of eligible domains with matching topology keys is less than minDomains, Pod Topology Spread treats "global minimum" as 0, and then the calculation of Skew is performed. And when the number of eligible domains with matching topology keys equals or greater than minDomains, this value has no effect on scheduling. As a result, when the number of eligible domains is less than minDomains, scheduler won't schedule more than maxSkew Pods to those domains. If value is nil, the constraint behaves as if MinDomains is equal to 1. Valid values are integers greater than 0. When value is not nil, WhenUnsatisfiable must be DoNotSchedule. For example, in a 3-zone cluster, MaxSkew is set to 2, MinDomains is set to 5 and pods with the same labelSelector spread as 2/2/2: | zone1 | zone2 | zone3 | | P P | P P | P P | The number of domains is less than 5(MinDomains), so "global minimum" is treated as 0. In this situation, new pod with the same labelSelector cannot be scheduled, because computed skew will be 3(3 - 0) if new Pod is scheduled to any of the three zones, it will violate MaxSkew. This is a beta field and requires the MinDomainsInPodTopologySpread feature gate to be enabled (enabled by default). nodeAffinityPolicy string NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector when calculating pod topology spread skew. Options are: - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations. If this value is nil, the behavior is equivalent to the Honor policy. This is a alpha-level feature enabled by the NodeInclusionPolicyInPodTopologySpread feature flag. nodeTaintsPolicy string NodeTaintsPolicy indicates how we will treat node taints when calculating pod topology spread skew. Options are: - Honor: nodes without taints, along with tainted nodes for which the incoming pod has a toleration, are included. - Ignore: node taints are ignored. All nodes are included. If this value is nil, the behavior is equivalent to the Ignore policy. This is a alpha-level feature enabled by the NodeInclusionPolicyInPodTopologySpread feature flag. topologyKey string TopologyKey is the key of node labels. Nodes that have a label with this key and identical values are considered to be in the same topology. We consider each <key, value> as a "bucket", and try to put balanced number of pods into each bucket. 
We define a domain as a particular instance of a topology. Also, we define an eligible domain as a domain whose nodes meet the requirements of nodeAffinityPolicy and nodeTaintsPolicy. e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology. And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology. It's a required field. whenUnsatisfiable string WhenUnsatisfiable indicates how to deal with a pod if it doesn't satisfy the spread constraint. - DoNotSchedule (default) tells the scheduler not to schedule it. - ScheduleAnyway tells the scheduler to schedule the pod in any location, but giving higher precedence to topologies that would help reduce the skew. A constraint is considered "Unsatisfiable" for an incoming pod if and only if every possible node assignment for that pod would violate "MaxSkew" on some topology. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 3/1/1: | zone1 | zone2 | zone3 | | P P P | P | P | If WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled to zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies MaxSkew(1). In other words, the cluster can still be imbalanced, but scheduler won't make it more imbalanced. It's a required field. 2.1.244. .spec.topologySpreadConstraints[].labelSelector Description LabelSelector is used to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.245. .spec.topologySpreadConstraints[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.246. .spec.topologySpreadConstraints[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.247. .spec.volumeMounts Description VolumeMounts allows configuration of additional VolumeMounts on the output StatefulSet definition. VolumeMounts specified will be appended to other VolumeMounts in the alertmanager container, that are generated as a result of StorageSpec objects. Type array 2.1.248. .spec.volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. 
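Putting the topology spread constraint fields above together, a minimal sketch that keeps the pods evenly spread across zones with a maximum skew of 1; the pod label used in the selector is hypothetical:

spec:
  topologySpreadConstraints:
    - maxSkew: 1                                   # at most one pod of difference between zones
      topologyKey: topology.kubernetes.io/zone     # each zone is one topology domain
      whenUnsatisfiable: DoNotSchedule             # refuse to schedule rather than increase the skew
      labelSelector:
        matchLabels:
          app.kubernetes.io/name: alertmanager     # hypothetical label carried by the pods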
Type object Required mountPath name Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references USD(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 2.1.249. .spec.volumes Description Volumes allows configuration of additional volumes on the output StatefulSet definition. Volumes specified will be appended to other volumes that are generated as a result of StorageSpec objects. Type array 2.1.250. .spec.volumes[] Description Volume represents a named volume in a pod that may be accessed by any container in the pod. Type object Required name Property Type Description awsElasticBlockStore object awsElasticBlockStore represents an AWS Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore azureDisk object azureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. azureFile object azureFile represents an Azure File Service mount on the host and bind mount to the pod. cephfs object cephFS represents a Ceph FS mount on the host that shares a pod's lifetime cinder object cinder represents a cinder volume attached and mounted on kubelets host machine. More info: https://examples.k8s.io/mysql-cinder-pd/README.md configMap object configMap represents a configMap that should populate this volume csi object csi (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature). downwardAPI object downwardAPI represents downward API about the pod that should populate this volume emptyDir object emptyDir represents a temporary directory that shares a pod's lifetime. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir ephemeral object ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed. Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim). Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod. Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information. 
A pod can use both types of ephemeral volumes and persistent volumes at the same time. fc object fc represents a Fibre Channel resource that is attached to a kubelet's host machine and then exposed to the pod. flexVolume object flexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. flocker object flocker represents a Flocker volume attached to a kubelet's host machine. This depends on the Flocker control service being running gcePersistentDisk object gcePersistentDisk represents a GCE Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk gitRepo object gitRepo represents a git repository at a particular revision. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container. glusterfs object glusterfs represents a Glusterfs mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/glusterfs/README.md hostPath object hostPath represents a pre-existing file or directory on the host machine that is directly exposed to the container. This is generally used for system agents or other privileged things that are allowed to see the host machine. Most containers will NOT need this. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath --- TODO(jonesdl) We need to restrict who can use host directory mounts and who can/can not mount host directories as read/write. iscsi object iscsi represents an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://examples.k8s.io/volumes/iscsi/README.md name string name of the volume. Must be a DNS_LABEL and unique within the pod. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names nfs object nfs represents an NFS mount on the host that shares a pod's lifetime More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs persistentVolumeClaim object persistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims photonPersistentDisk object photonPersistentDisk represents a PhotonController persistent disk attached and mounted on kubelets host machine portworxVolume object portworxVolume represents a portworx volume attached and mounted on kubelets host machine projected object projected items for all in one resources secrets, configmaps, and downward API quobyte object quobyte represents a Quobyte mount on the host that shares a pod's lifetime rbd object rbd represents a Rados Block Device mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/rbd/README.md scaleIO object scaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes. secret object secret represents a secret that should populate this volume. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret storageos object storageOS represents a StorageOS volume attached and mounted on Kubernetes nodes. vsphereVolume object vsphereVolume represents a vSphere volume attached and mounted on kubelets host machine 2.1.251. 
.spec.volumes[].awsElasticBlockStore Description awsElasticBlockStore represents an AWS Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore Type object Required volumeID Property Type Description fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore TODO: how do we prevent errors in the filesystem from compromising the machine partition integer partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). readOnly boolean readOnly value true will force the readOnly setting in VolumeMounts. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore volumeID string volumeID is unique ID of the persistent disk resource in AWS (Amazon EBS volume). More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore 2.1.252. .spec.volumes[].azureDisk Description azureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. Type object Required diskName diskURI Property Type Description cachingMode string cachingMode is the Host Caching mode: None, Read Only, Read Write. diskName string diskName is the Name of the data disk in the blob storage diskURI string diskURI is the URI of data disk in the blob storage fsType string fsType is Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. kind string kind expected values are Shared: multiple blob disks per storage account Dedicated: single blob disk per storage account Managed: azure managed data disk (only in managed availability set). defaults to shared readOnly boolean readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. 2.1.253. .spec.volumes[].azureFile Description azureFile represents an Azure File Service mount on the host and bind mount to the pod. Type object Required secretName shareName Property Type Description readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretName string secretName is the name of secret that contains Azure Storage Account Name and Key shareName string shareName is the azure share Name 2.1.254. .spec.volumes[].cephfs Description cephFS represents a Ceph FS mount on the host that shares a pod's lifetime Type object Required monitors Property Type Description monitors array (string) monitors is Required: Monitors is a collection of Ceph monitors More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it path string path is Optional: Used as the mounted root, rather than the full Ceph tree, default is / readOnly boolean readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. 
More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretFile string secretFile is Optional: SecretFile is the path to key ring for User, default is /etc/ceph/user.secret More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretRef object secretRef is Optional: SecretRef is reference to the authentication secret for User, default is empty. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it user string user is optional: User is the rados user name, default is admin More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it 2.1.255. .spec.volumes[].cephfs.secretRef Description secretRef is Optional: SecretRef is reference to the authentication secret for User, default is empty. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 2.1.256. .spec.volumes[].cinder Description cinder represents a cinder volume attached and mounted on kubelets host machine. More info: https://examples.k8s.io/mysql-cinder-pd/README.md Type object Required volumeID Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://examples.k8s.io/mysql-cinder-pd/README.md readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/mysql-cinder-pd/README.md secretRef object secretRef is optional: points to a secret object containing parameters used to connect to OpenStack. volumeID string volumeID used to identify the volume in cinder. More info: https://examples.k8s.io/mysql-cinder-pd/README.md 2.1.257. .spec.volumes[].cinder.secretRef Description secretRef is optional: points to a secret object containing parameters used to connect to OpenStack. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 2.1.258. .spec.volumes[].configMap Description configMap represents a configMap that should populate this volume Type object Property Type Description defaultMode integer defaultMode is optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. 
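As a sketch of the configMap volume fields in this subsection, a ConfigMap could be projected with an explicit key-to-path mapping as follows; the volume name, ConfigMap name, and key are hypothetical.

```yaml
spec:
  volumes:
  - name: custom-templates            # hypothetical volume name
    configMap:
      name: alertmanager-templates    # hypothetical ConfigMap in the same namespace
      defaultMode: 0644               # octal form; the decimal value 420 is equivalent
      items:
      - key: slack.tmpl               # only this key is projected
        path: templates/slack.tmpl
      optional: true                  # do not fail volume setup if the ConfigMap is absent
```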
items[] object Maps a string key to a path within a volume. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean optional specify whether the ConfigMap or its keys must be defined 2.1.259. .spec.volumes[].configMap.items Description items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 2.1.260. .spec.volumes[].configMap.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 2.1.261. .spec.volumes[].csi Description csi (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature). Type object Required driver Property Type Description driver string driver is the name of the CSI driver that handles this volume. Consult with your admin for the correct name as registered in the cluster. fsType string fsType to mount. Ex. "ext4", "xfs", "ntfs". If not provided, the empty value is passed to the associated CSI driver which will determine the default filesystem to apply. nodePublishSecretRef object nodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secret references are passed. readOnly boolean readOnly specifies a read-only configuration for the volume. Defaults to false (read/write). volumeAttributes object (string) volumeAttributes stores driver-specific properties that are passed to the CSI driver. Consult your driver's documentation for supported values. 2.1.262. .spec.volumes[].csi.nodePublishSecretRef Description nodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secret references are passed. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 2.1.263. 
.spec.volumes[].downwardAPI Description downwardAPI represents downward API about the pod that should populate this volume Type object Property Type Description defaultMode integer Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array Items is a list of downward API volume file items[] object DownwardAPIVolumeFile represents information to create the file containing the pod field 2.1.264. .spec.volumes[].downwardAPI.items Description Items is a list of downward API volume file Type array 2.1.265. .spec.volumes[].downwardAPI.items[] Description DownwardAPIVolumeFile represents information to create the file containing the pod field Type object Required path Property Type Description fieldRef object Required: Selects a field of the pod: only annotations, labels, name and namespace are supported. mode integer Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string Required: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..' resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. 2.1.266. .spec.volumes[].downwardAPI.items[].fieldRef Description Required: Selects a field of the pod: only annotations, labels, name and namespace are supported. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 2.1.267. .spec.volumes[].downwardAPI.items[].resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 2.1.268. .spec.volumes[].emptyDir Description emptyDir represents a temporary directory that shares a pod's lifetime. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir Type object Property Type Description medium string medium represents what type of storage medium should back this directory. The default is "" which means to use the node's default medium. Must be an empty string (default) or Memory. 
More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir sizeLimit integer-or-string sizeLimit is the total amount of local storage required for this EmptyDir volume. The size limit is also applicable for memory medium. The maximum usage on memory medium EmptyDir would be the minimum value between the SizeLimit specified here and the sum of memory limits of all containers in a pod. The default is nil which means that the limit is undefined. More info: http://kubernetes.io/docs/user-guide/volumes#emptydir 2.1.269. .spec.volumes[].ephemeral Description ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed. Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim). Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod. Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information. A pod can use both types of ephemeral volumes and persistent volumes at the same time. Type object Property Type Description volumeClaimTemplate object Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil. 2.1.270. .spec.volumes[].ephemeral.volumeClaimTemplate Description Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to updated with an owner reference to the pod once the pod exists. 
Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil. Type object Required spec Property Type Description metadata object May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. spec object The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here. 2.1.271. .spec.volumes[].ephemeral.volumeClaimTemplate.metadata Description May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. Type object 2.1.272. .spec.volumes[].ephemeral.volumeClaimTemplate.spec Description The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here. Type object Property Type Description accessModes array (string) accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 dataSource object dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. If the AnyVolumeDataSource feature gate is enabled, this field will always have the same contents as the DataSourceRef field. dataSourceRef object dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any local object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the DataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, both fields (DataSource and DataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. There are two important differences between DataSource and DataSourceRef: * While DataSource only allows two specific types of objects, DataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While DataSource ignores disallowed values (dropping them), DataSourceRef preserves all values, and generates an error if a disallowed value is specified. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. resources object resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources selector object selector is a label query over volumes to consider for binding. 
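A minimal sketch of a generic ephemeral volume using the volumeClaimTemplate fields described here; the volume name is hypothetical and the StorageClass (storageClassName, documented next) is assumed to be one available in the cluster.

```yaml
spec:
  volumes:
  - name: scratch                                    # hypothetical volume name
    ephemeral:
      volumeClaimTemplate:
        metadata:
          labels:
            app.kubernetes.io/part-of: alertmanager  # copied into the generated PVC
        spec:
          accessModes: ["ReadWriteOnce"]
          storageClassName: standard                 # assumed StorageClass; adjust to the cluster
          resources:
            requests:
              storage: 1Gi
```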
storageClassName string storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 volumeMode string volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. volumeName string volumeName is the binding reference to the PersistentVolume backing this claim. 2.1.273. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.dataSource Description dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. If the AnyVolumeDataSource feature gate is enabled, this field will always have the same contents as the DataSourceRef field. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 2.1.274. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.dataSourceRef Description dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any local object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the DataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, both fields (DataSource and DataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. There are two important differences between DataSource and DataSourceRef: * While DataSource only allows two specific types of objects, DataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While DataSource ignores disallowed values (dropping them), DataSourceRef preserves all values, and generates an error if a disallowed value is specified. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 2.1.275. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.resources Description resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources Type object Property Type Description limits integer-or-string Limits describes the maximum amount of compute resources allowed. 
More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 2.1.276. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.selector Description selector is a label query over volumes to consider for binding. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.277. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.278. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.279. .spec.volumes[].fc Description fc represents a Fibre Channel resource that is attached to a kubelet's host machine and then exposed to the pod. Type object Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. TODO: how do we prevent errors in the filesystem from compromising the machine lun integer lun is Optional: FC target lun number readOnly boolean readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. targetWWNs array (string) targetWWNs is Optional: FC target worldwide names (WWNs) wwids array (string) wwids Optional: FC volume world wide identifiers (wwids) Either wwids or combination of targetWWNs and lun must be set, but not both simultaneously. 2.1.280. .spec.volumes[].flexVolume Description flexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. Type object Required driver Property Type Description driver string driver is the name of the driver to use for this volume. fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". The default filesystem depends on FlexVolume script. options object (string) options is Optional: this field holds extra command options if any. 
readOnly boolean readOnly is Optional: defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object secretRef is Optional: secretRef is reference to the secret object containing sensitive information to pass to the plugin scripts. This may be empty if no secret object is specified. If the secret object contains more than one secret, all secrets are passed to the plugin scripts. 2.1.281. .spec.volumes[].flexVolume.secretRef Description secretRef is Optional: secretRef is reference to the secret object containing sensitive information to pass to the plugin scripts. This may be empty if no secret object is specified. If the secret object contains more than one secret, all secrets are passed to the plugin scripts. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 2.1.282. .spec.volumes[].flocker Description flocker represents a Flocker volume attached to a kubelet's host machine. This depends on the Flocker control service being running Type object Property Type Description datasetName string datasetName is Name of the dataset stored as metadata name on the dataset for Flocker should be considered as deprecated datasetUUID string datasetUUID is the UUID of the dataset. This is unique identifier of a Flocker dataset 2.1.283. .spec.volumes[].gcePersistentDisk Description gcePersistentDisk represents a GCE Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk Type object Required pdName Property Type Description fsType string fsType is filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk TODO: how do we prevent errors in the filesystem from compromising the machine partition integer partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk pdName string pdName is unique name of the PD resource in GCE. Used to identify the disk in GCE. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk 2.1.284. .spec.volumes[].gitRepo Description gitRepo represents a git repository at a particular revision. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container. Type object Required repository Property Type Description directory string directory is the target directory name. Must not contain or start with '..'. If '.' is supplied, the volume directory will be the git repository. Otherwise, if specified, the volume will contain the git repository in the subdirectory with the given name. 
repository string repository is the URL revision string revision is the commit hash for the specified revision. 2.1.285. .spec.volumes[].glusterfs Description glusterfs represents a Glusterfs mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/glusterfs/README.md Type object Required endpoints path Property Type Description endpoints string endpoints is the endpoint name that details Glusterfs topology. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod path string path is the Glusterfs volume path. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod readOnly boolean readOnly here will force the Glusterfs volume to be mounted with read-only permissions. Defaults to false. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod 2.1.286. .spec.volumes[].hostPath Description hostPath represents a pre-existing file or directory on the host machine that is directly exposed to the container. This is generally used for system agents or other privileged things that are allowed to see the host machine. Most containers will NOT need this. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath --- TODO(jonesdl) We need to restrict who can use host directory mounts and who can/can not mount host directories as read/write. Type object Required path Property Type Description path string path of the directory on the host. If the path is a symlink, it will follow the link to the real path. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath type string type for HostPath Volume Defaults to "" More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath 2.1.287. .spec.volumes[].iscsi Description iscsi represents an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://examples.k8s.io/volumes/iscsi/README.md Type object Required iqn lun targetPortal Property Type Description chapAuthDiscovery boolean chapAuthDiscovery defines whether support iSCSI Discovery CHAP authentication chapAuthSession boolean chapAuthSession defines whether support iSCSI Session CHAP authentication fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#iscsi TODO: how do we prevent errors in the filesystem from compromising the machine initiatorName string initiatorName is the custom iSCSI Initiator Name. If initiatorName is specified with iscsiInterface simultaneously, new iSCSI interface <target portal>:<volume name> will be created for the connection. iqn string iqn is the target iSCSI Qualified Name. iscsiInterface string iscsiInterface is the interface Name that uses an iSCSI transport. Defaults to 'default' (tcp). lun integer lun represents iSCSI Target Lun number. portals array (string) portals is the iSCSI Target Portal List. The portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. secretRef object secretRef is the CHAP Secret for iSCSI target and initiator authentication targetPortal string targetPortal is iSCSI Target Portal. 
The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). 2.1.288. .spec.volumes[].iscsi.secretRef Description secretRef is the CHAP Secret for iSCSI target and initiator authentication Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 2.1.289. .spec.volumes[].nfs Description nfs represents an NFS mount on the host that shares a pod's lifetime More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs Type object Required path server Property Type Description path string path that is exported by the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs readOnly boolean readOnly here will force the NFS export to be mounted with read-only permissions. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs server string server is the hostname or IP address of the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs 2.1.290. .spec.volumes[].persistentVolumeClaim Description persistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims Type object Required claimName Property Type Description claimName string claimName is the name of a PersistentVolumeClaim in the same namespace as the pod using this volume. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims readOnly boolean readOnly Will force the ReadOnly setting in VolumeMounts. Default false. 2.1.291. .spec.volumes[].photonPersistentDisk Description photonPersistentDisk represents a PhotonController persistent disk attached and mounted on kubelets host machine Type object Required pdID Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. pdID string pdID is the ID that identifies Photon Controller persistent disk 2.1.292. .spec.volumes[].portworxVolume Description portworxVolume represents a portworx volume attached and mounted on kubelets host machine Type object Required volumeID Property Type Description fsType string fSType represents the filesystem type to mount Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs". Implicitly inferred to be "ext4" if unspecified. readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. volumeID string volumeID uniquely identifies a Portworx volume 2.1.293. .spec.volumes[].projected Description projected items for all in one resources secrets, configmaps, and downward API Type object Property Type Description defaultMode integer defaultMode are the mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. 
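The sources list, documented next, can combine several projection types in a single projected volume. A rough sketch follows; the volume, ConfigMap, and Secret names and the token audience are hypothetical.

```yaml
spec:
  volumes:
  - name: combined-config             # hypothetical volume name
    projected:
      defaultMode: 0440
      sources:
      - configMap:
          name: alertmanager-extra    # hypothetical ConfigMap
      - secret:
          name: alertmanager-tls      # hypothetical Secret
          items:
          - key: tls.crt
            path: certs/tls.crt
      - serviceAccountToken:
          audience: vault             # hypothetical audience
          expirationSeconds: 3600     # must be at least 600 seconds
          path: token
```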
sources array sources is the list of volume projections sources[] object Projection that may be projected along with other supported volume types 2.1.294. .spec.volumes[].projected.sources Description sources is the list of volume projections Type array 2.1.295. .spec.volumes[].projected.sources[] Description Projection that may be projected along with other supported volume types Type object Property Type Description configMap object configMap information about the configMap data to project downwardAPI object downwardAPI information about the downwardAPI data to project secret object secret information about the secret data to project serviceAccountToken object serviceAccountToken is information about the serviceAccountToken data to project 2.1.296. .spec.volumes[].projected.sources[].configMap Description configMap information about the configMap data to project Type object Property Type Description items array items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean optional specify whether the ConfigMap or its keys must be defined 2.1.297. .spec.volumes[].projected.sources[].configMap.items Description items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 2.1.298. .spec.volumes[].projected.sources[].configMap.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 2.1.299. .spec.volumes[].projected.sources[].downwardAPI Description downwardAPI information about the downwardAPI data to project Type object Property Type Description items array Items is a list of DownwardAPIVolume file items[] object DownwardAPIVolumeFile represents information to create the file containing the pod field 2.1.300. 
.spec.volumes[].projected.sources[].downwardAPI.items Description Items is a list of DownwardAPIVolume file Type array 2.1.301. .spec.volumes[].projected.sources[].downwardAPI.items[] Description DownwardAPIVolumeFile represents information to create the file containing the pod field Type object Required path Property Type Description fieldRef object Required: Selects a field of the pod: only annotations, labels, name and namespace are supported. mode integer Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string Required: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..' resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. 2.1.302. .spec.volumes[].projected.sources[].downwardAPI.items[].fieldRef Description Required: Selects a field of the pod: only annotations, labels, name and namespace are supported. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 2.1.303. .spec.volumes[].projected.sources[].downwardAPI.items[].resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 2.1.304. .spec.volumes[].projected.sources[].secret Description secret information about the secret data to project Type object Property Type Description items array items if unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean optional field specify whether the Secret or its key must be defined 2.1.305. .spec.volumes[].projected.sources[].secret.items Description items if unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. 
If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 2.1.306. .spec.volumes[].projected.sources[].secret.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 2.1.307. .spec.volumes[].projected.sources[].serviceAccountToken Description serviceAccountToken is information about the serviceAccountToken data to project Type object Required path Property Type Description audience string audience is the intended audience of the token. A recipient of a token must identify itself with an identifier specified in the audience of the token, and otherwise should reject the token. The audience defaults to the identifier of the apiserver. expirationSeconds integer expirationSeconds is the requested duration of validity of the service account token. As the token approaches expiration, the kubelet volume plugin will proactively rotate the service account token. The kubelet will start trying to rotate the token if the token is older than 80 percent of its time to live or if the token is older than 24 hours. Defaults to 1 hour and must be at least 10 minutes. path string path is the path relative to the mount point of the file to project the token into. 2.1.308. .spec.volumes[].quobyte Description quobyte represents a Quobyte mount on the host that shares a pod's lifetime Type object Required registry volume Property Type Description group string group to map volume access to Default is no group readOnly boolean readOnly here will force the Quobyte volume to be mounted with read-only permissions. Defaults to false. registry string registry represents a single or multiple Quobyte Registry services specified as a string as host:port pair (multiple entries are separated with commas) which acts as the central registry for volumes tenant string tenant owning the given Quobyte volume in the Backend Used with dynamically provisioned Quobyte volumes, value is set by the plugin user string user to map volume access to Defaults to serviceaccount user volume string volume is a string that references an already created Quobyte volume by name. 2.1.309. .spec.volumes[].rbd Description rbd represents a Rados Block Device mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/rbd/README.md Type object Required image monitors Property Type Description fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. 
More info: https://kubernetes.io/docs/concepts/storage/volumes#rbd TODO: how do we prevent errors in the filesystem from compromising the machine image string image is the rados image name. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it keyring string keyring is the path to key ring for RBDUser. Default is /etc/ceph/keyring. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it monitors array (string) monitors is a collection of Ceph monitors. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it pool string pool is the rados pool name. Default is rbd. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it secretRef object secretRef is name of the authentication secret for RBDUser. If provided overrides keyring. Default is nil. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it user string user is the rados user name. Default is admin. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it 2.1.310. .spec.volumes[].rbd.secretRef Description secretRef is name of the authentication secret for RBDUser. If provided overrides keyring. Default is nil. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 2.1.311. .spec.volumes[].scaleIO Description scaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes. Type object Required gateway secretRef system Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Default is "xfs". gateway string gateway is the host address of the ScaleIO API Gateway. protectionDomain string protectionDomain is the name of the ScaleIO Protection Domain for the configured storage. readOnly boolean readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object secretRef references to the secret for ScaleIO user and other sensitive information. If this is not provided, Login operation will fail. sslEnabled boolean sslEnabled Flag enable/disable SSL communication with Gateway, default false storageMode string storageMode indicates whether the storage for a volume should be ThickProvisioned or ThinProvisioned. Default is ThinProvisioned. storagePool string storagePool is the ScaleIO Storage Pool associated with the protection domain. system string system is the name of the storage system as configured in ScaleIO. volumeName string volumeName is the name of a volume already created in the ScaleIO system that is associated with this volume source. 2.1.312. .spec.volumes[].scaleIO.secretRef Description secretRef references to the secret for ScaleIO user and other sensitive information. If this is not provided, Login operation will fail. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 2.1.313. .spec.volumes[].secret Description secret represents a secret that should populate this volume. 
More info: https://kubernetes.io/docs/concepts/storage/volumes#secret Type object Property Type Description defaultMode integer defaultMode is Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. optional boolean optional field specify whether the Secret or its keys must be defined secretName string secretName is the name of the secret in the pod's namespace to use. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret 2.1.314. .spec.volumes[].secret.items Description items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 2.1.315. .spec.volumes[].secret.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 2.1.316. .spec.volumes[].storageos Description storageOS represents a StorageOS volume attached and mounted on Kubernetes nodes. Type object Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object secretRef specifies the secret to use for obtaining the StorageOS API credentials. If not specified, default values will be attempted. volumeName string volumeName is the human-readable name of the StorageOS volume. Volume names are only unique within a namespace. 
volumeNamespace string volumeNamespace specifies the scope of the volume within StorageOS. If no namespace is specified then the Pod's namespace will be used. This allows the Kubernetes name scoping to be mirrored within StorageOS for tighter integration. Set VolumeName to any name to override the default behaviour. Set to "default" if you are not using namespaces within StorageOS. Namespaces that do not pre-exist within StorageOS will be created. 2.1.317. .spec.volumes[].storageos.secretRef Description secretRef specifies the secret to use for obtaining the StorageOS API credentials. If not specified, default values will be attempted. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 2.1.318. .spec.volumes[].vsphereVolume Description vsphereVolume represents a vSphere volume attached and mounted on kubelets host machine Type object Required volumePath Property Type Description fsType string fsType is filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. storagePolicyID string storagePolicyID is the storage Policy Based Management (SPBM) profile ID associated with the StoragePolicyName. storagePolicyName string storagePolicyName is the storage Policy Based Management (SPBM) profile name. volumePath string volumePath is the path that identifies vSphere volume vmdk 2.1.319. .spec.web Description Defines the web command line flags when starting Alertmanager. Type object Property Type Description httpConfig object Defines HTTP parameters for web server. tlsConfig object Defines the TLS parameters for HTTPS. 2.1.320. .spec.web.httpConfig Description Defines HTTP parameters for web server. Type object Property Type Description headers object List of headers that can be added to HTTP responses. http2 boolean Enable HTTP/2 support. Note that HTTP/2 is only supported with TLS. When TLSConfig is not configured, HTTP/2 will be disabled. Whenever the value of the field changes, a rolling update will be triggered. 2.1.321. .spec.web.httpConfig.headers Description List of headers that can be added to HTTP responses. Type object Property Type Description contentSecurityPolicy string Set the Content-Security-Policy header to HTTP responses. Unset if blank. strictTransportSecurity string Set the Strict-Transport-Security header to HTTP responses. Unset if blank. Please make sure that you use this with care as this header might force browsers to load Prometheus and the other applications hosted on the same domain and subdomains over HTTPS. https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Strict-Transport-Security xContentTypeOptions string Set the X-Content-Type-Options header to HTTP responses. Unset if blank. Accepted value is nosniff. https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Content-Type-Options xFrameOptions string Set the X-Frame-Options header to HTTP responses. Unset if blank. Accepted values are deny and sameorigin. https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Frame-Options xXSSProtection string Set the X-XSS-Protection header to all responses. Unset if blank. https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-XSS-Protection 2.1.322. .spec.web.tlsConfig Description Defines the TLS parameters for HTTPS. 
Type object Required cert keySecret Property Type Description cert object Contains the TLS certificate for the server. cipherSuites array (string) List of supported cipher suites for TLS versions up to TLS 1.2. If empty, Go default cipher suites are used. Available cipher suites are documented in the go documentation: https://golang.org/pkg/crypto/tls/#pkg-constants clientAuthType string Server policy for client authentication. Maps to ClientAuth Policies. For more detail on clientAuth options: https://golang.org/pkg/crypto/tls/#ClientAuthType client_ca object Contains the CA certificate for client certificate authentication to the server. curvePreferences array (string) Elliptic curves that will be used in an ECDHE handshake, in preference order. Available curves are documented in the go documentation: https://golang.org/pkg/crypto/tls/#CurveID keySecret object Secret containing the TLS key for the server. maxVersion string Maximum TLS version that is acceptable. Defaults to TLS13. minVersion string Minimum TLS version that is acceptable. Defaults to TLS12. preferServerCipherSuites boolean Controls whether the server selects the client's most preferred cipher suite, or the server's most preferred cipher suite. If true then the server's preference, as expressed in the order of elements in cipherSuites, is used. 2.1.323. .spec.web.tlsConfig.cert Description Contains the TLS certificate for the server. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 2.1.324. .spec.web.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 2.1.325. .spec.web.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 2.1.326. .spec.web.tlsConfig.client_ca Description Contains the CA certificate for client certificate authentication to the server. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 2.1.327. .spec.web.tlsConfig.client_ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 2.1.328. .spec.web.tlsConfig.client_ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 2.1.329. .spec.web.tlsConfig.keySecret Description Secret containing the TLS key for the server. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 2.1.330. .status Description Most recent observed status of the Alertmanager cluster. Read-only. Not included when requesting from the apiserver, only from the Prometheus Operator API itself. More info: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status Type object Required availableReplicas paused replicas unavailableReplicas updatedReplicas Property Type Description availableReplicas integer Total number of available pods (ready for at least minReadySeconds) targeted by this Alertmanager cluster. paused boolean Represents whether any actions on the underlying managed objects are being performed. Only delete actions will be performed. replicas integer Total number of non-terminated pods targeted by this Alertmanager cluster (their labels match the selector). unavailableReplicas integer Total number of unavailable pods targeted by this Alertmanager cluster. updatedReplicas integer Total number of non-terminated pods targeted by this Alertmanager cluster that have the desired version spec. 2.2. API endpoints The following API endpoints are available: /apis/monitoring.coreos.com/v1/alertmanagers GET : list objects of kind Alertmanager /apis/monitoring.coreos.com/v1/namespaces/{namespace}/alertmanagers DELETE : delete collection of Alertmanager GET : list objects of kind Alertmanager POST : create an Alertmanager /apis/monitoring.coreos.com/v1/namespaces/{namespace}/alertmanagers/{name} DELETE : delete an Alertmanager GET : read the specified Alertmanager PATCH : partially update the specified Alertmanager PUT : replace the specified Alertmanager 2.2.1. /apis/monitoring.coreos.com/v1/alertmanagers Table 2.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind Alertmanager Table 2.2. HTTP responses HTTP code Reponse body 200 - OK AlertmanagerList schema 401 - Unauthorized Empty 2.2.2. /apis/monitoring.coreos.com/v1/namespaces/{namespace}/alertmanagers Table 2.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 2.4. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. 
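Before the per-method details below, note that the parameters in Table 2.1 (and the equivalent entries in the query parameter tables that follow) are passed as ordinary URL query parameters. The short Java sketch below is not part of the original reference: it only assembles such a URL for the cluster-wide list path from section 2.2.1, the API server address and label selector value are illustrative assumptions, and the snippet can be run as-is in jshell (Java 10 or later).

import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Label selector values may contain characters such as '=' and ',', so URL-encode them.
String selector = URLEncoder.encode("app.kubernetes.io/instance=main", StandardCharsets.UTF_8);

// Cluster-wide list path (section 2.2.1) with the labelSelector and limit parameters from Table 2.1.
String url = "https://api.cluster.example.com:6443"
        + "/apis/monitoring.coreos.com/v1/alertmanagers"
        + "?labelSelector=" + selector
        + "&limit=50";
System.out.println(url);

The same encoding rules apply to the namespaced paths documented in the rest of this section.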
HTTP method DELETE Description delete collection of Alertmanager Table 2.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 2.6. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Alertmanager Table 2.7. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 2.8. HTTP responses HTTP code Reponse body 200 - OK AlertmanagerList schema 401 - Unauthorized Empty HTTP method POST Description create an Alertmanager Table 2.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.10. Body parameters Parameter Type Description body Alertmanager schema Table 2.11. 
HTTP responses HTTP code Reponse body 200 - OK Alertmanager schema 201 - Created Alertmanager schema 202 - Accepted Alertmanager schema 401 - Unauthorized Empty 2.2.3. /apis/monitoring.coreos.com/v1/namespaces/{namespace}/alertmanagers/{name} Table 2.12. Global path parameters Parameter Type Description name string name of the Alertmanager namespace string object name and auth scope, such as for teams and projects Table 2.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an Alertmanager Table 2.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 2.15. Body parameters Parameter Type Description body DeleteOptions schema Table 2.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Alertmanager Table 2.17. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 2.18. HTTP responses HTTP code Reponse body 200 - OK Alertmanager schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Alertmanager Table 2.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . 
fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.20. Body parameters Parameter Type Description body Patch schema Table 2.21. HTTP responses HTTP code Reponse body 200 - OK Alertmanager schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Alertmanager Table 2.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.23. Body parameters Parameter Type Description body Alertmanager schema Table 2.24. HTTP responses HTTP code Reponse body 200 - OK Alertmanager schema 201 - Created Alertmanager schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/monitoring_apis/alertmanager-monitoring-coreos-com-v1 |
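The endpoints above can be exercised with any HTTP client. The following sketch is not part of the original reference; it is a minimal Java (JDK 11+ java.net.http) call against the namespaced list endpoint documented in section 2.2.2. The API server URL, the token environment variable, and the namespace are assumptions you must replace, and the default HttpClient only trusts the cluster CA if that CA is present in the JVM trust store.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ListAlertmanagers {
    public static void main(String[] args) throws Exception {
        // Assumptions: replace with your API server, a bearer token with RBAC access to alertmanagers,
        // and the namespace you want to query.
        String apiServer = "https://api.cluster.example.com:6443";
        String token = System.getenv("K8S_TOKEN");
        String namespace = "openshift-monitoring";

        // GET /apis/monitoring.coreos.com/v1/namespaces/{namespace}/alertmanagers (section 2.2.2)
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(apiServer + "/apis/monitoring.coreos.com/v1/namespaces/"
                        + namespace + "/alertmanagers"))
                .header("Authorization", "Bearer " + token)
                .header("Accept", "application/json")
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // 200 - OK returns an AlertmanagerList; 401 - Unauthorized returns an empty body.
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}

Reading, patching, or replacing a single object follows the same pattern against the /alertmanagers/{name} path from section 2.2.3.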
Chapter 47. HL7 | Chapter 47. HL7 The HL7 component is used for working with the HL7 MLLP protocol and HL7 v2 messages using the HAPI library . This component supports the following: HL7 MLLP codec for Mina HL7 MLLP codec for Netty Type Converter from/to HAPI and String HL7 DataFormat using the HAPI library 47.1. Dependencies When using hl7 with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-hl7-starter</artifactId> </dependency> 47.2. HL7 MLLP protocol HL7 is often used with the HL7 MLLP protocol, which is a text-based protocol transmitted over TCP sockets. This component ships with Mina and Netty codecs that conform to the MLLP protocol so you can easily expose an HL7 listener accepting HL7 requests over the TCP transport layer. To expose an HL7 listener service, the camel-mina or camel-netty component is used with the HL7MLLPCodec (Mina) or HL7MLLPNettyDecoder/HL7MLLPNettyEncoder (Netty). The HL7 MLLP codec can be configured as follows: Name Default Value Description startByte 0x0b The start byte that precedes the HL7 payload. endByte1 0x1c The first end byte that terminates the HL7 payload. endByte2 0x0d The second end byte that terminates the HL7 payload. charset JVM Default The encoding (a charset name ) to use for the codec. If not provided, Camel will use the JVM default Charset . produceString true If true, the codec creates a string using the defined charset. If false, the codec sends a plain byte array into the route, so that the HL7 Data Format can determine the actual charset from the HL7 message content. convertLFtoCR false Will convert \n to \r ( 0x0d , 13 decimal) as HL7 stipulates \r as segment terminators. The HAPI library requires the use of \r . 47.2.1. Exposing an HL7 listener using Mina In the Spring XML file, we configure a mina endpoint to listen for HL7 requests using TCP on port 8888 : <endpoint id="hl7MinaListener" uri="mina:tcp://localhost:8888?sync=true&codec=#hl7codec"/> sync=true indicates that this listener is synchronous and therefore will return an HL7 response to the caller. The HL7 codec is set up with codec=#hl7codec . Note that hl7codec is just a Spring bean ID, so it could be named mygreatcodecforhl7 or any other name. The codec is also set up in the Spring XML file: <bean id="hl7codec" class="org.apache.camel.component.hl7.HL7MLLPCodec"> <property name="charset" value="iso-8859-1"/> </bean> The endpoint hl7MinaListener can then be used in a route as a consumer, as this Java DSL example illustrates: from("hl7MinaListener") .bean("patientLookupService"); This is a very simple route that will listen for HL7 and route it to a service named patientLookupService . This is also a Spring bean ID, configured in the Spring XML as: <bean id="patientLookupService" class="com.mycompany.healthcare.service.PatientLookupService"/> The business logic can be implemented in POJO classes that do not depend on Camel, as shown here: import ca.uhn.hl7v2.HL7Exception; import ca.uhn.hl7v2.model.Message; import ca.uhn.hl7v2.model.v24.segment.QRD; public class PatientLookupService { public Message lookupPatient(Message input) throws HL7Exception { QRD qrd = (QRD)input.get("QRD"); String patientId = qrd.getWhoSubjectFilter(0).getIDNumber().getValue(); // find patient data based on the patient id and create a HL7 model object with the response Message response = ... create and set response data return response } 47.2.2.
Exposing an HL7 listener using Netty (available from Camel 2.15 onwards) In the Spring XML file, we configure a netty endpoint to listen for HL7 requests using TCP on port 8888 : <endpoint id="hl7NettyListener" uri="netty:tcp://localhost:8888?sync=true&encoders=#hl7encoder&decoders=#hl7decoder"/> sync=true indicates that this listener is synchronous and therefore will return an HL7 response to the caller. The HL7 codec is set up with encoders=#hl7encoder and decoders=#hl7decoder . Note that hl7encoder and hl7decoder are just bean IDs, so they could be named differently. The beans can be set in the Spring XML file: <bean id="hl7decoder" class="org.apache.camel.component.hl7.HL7MLLPNettyDecoderFactory"/> <bean id="hl7encoder" class="org.apache.camel.component.hl7.HL7MLLPNettyEncoderFactory"/> The endpoint hl7NettyListener can then be used in a route as a consumer, as this Java DSL example illustrates: from("hl7NettyListener") .bean("patientLookupService"); 47.3. HL7 Model using java.lang.String or byte[] The HL7 MLLP codec uses plain String as its data format. Camel uses its Type Converter to convert between String and the HAPI HL7 model objects, but you can use the plain String objects if you prefer, for instance if you wish to parse the data yourself. You can also let both the Mina and Netty codecs use a plain byte[] as their data format by setting the produceString property to false. The Type Converter is also capable of converting the byte[] to/from HAPI HL7 model objects. 47.4. HL7v2 Model using HAPI The HL7v2 model uses Java objects from the HAPI library. Using this library, you can encode and decode from the EDI format (ER7) that is mostly used with HL7v2. The sample below is a request to look up a patient with the patient ID 0101701234 . MSH|^~\\&|MYSENDER|MYRECEIVER|MYAPPLICATION||200612211200||QRY^A19|1234|P|2.4 QRD|200612211200|R|I|GetPatient|||1^RD|0101701234|DEM|| Using the HL7 model you can work with a ca.uhn.hl7v2.model.Message object, e.g. to retrieve a patient ID: Message msg = exchange.getIn().getBody(Message.class); QRD qrd = (QRD)msg.get("QRD"); String patientId = qrd.getWhoSubjectFilter(0).getIDNumber().getValue(); // 0101701234 This is powerful when combined with the HL7 listener, because you don't have to work with byte[] , String or any other simple object formats. You can just use the HAPI HL7v2 model objects. If you know the message type in advance, you can be more type-safe: QRY_A19 msg = exchange.getIn().getBody(QRY_A19.class); String patientId = msg.getQRD().getWhoSubjectFilter(0).getIDNumber().getValue(); 47.5. HL7 DataFormat The camel-hl7 JAR ships with an HL7 data format that can be used to marshal or unmarshal HL7 model objects. The HL7 data format supports a single option, which is listed below. Name Default Java Type Description validate Boolean Whether to validate the HL7 message. This is true by default. marshal = from Message to byte stream (can be used when responding using the HL7 MLLP codec) unmarshal = from byte stream to Message (can be used when receiving streamed data from the HL7 MLLP codec) To use the data format, simply create an instance and invoke the marshal or unmarshal operation in the route builder: DataFormat hl7 = new HL7DataFormat(); from("direct:hl7in") .marshal(hl7) .to("jms:queue:hl7out"); In the sample above, the HL7 message is marshalled from a HAPI Message object to a byte stream and put on a JMS queue.
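If you need to accept messages that do not pass HAPI validation, the single validate option shown above can be switched off on the data format instance. The following fragment is a sketch rather than part of the original chapter: like the other snippets in this chapter it belongs inside a RouteBuilder configure() method, the endpoint URIs are placeholders, and the assumption is that setValidate(false) simply disables the validate option documented above.

HL7DataFormat lenientHl7 = new HL7DataFormat();
// Disable HAPI validation so non-conformant messages are still marshalled
// instead of failing the exchange (assumption: the default is validate=true, as documented above).
lenientHl7.setValidate(false);

from("direct:lenientHl7In")
    .marshal(lenientHl7)
    .to("jms:queue:hl7out");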
The next example performs the opposite operation: DataFormat hl7 = new HL7DataFormat(); from("jms:queue:hl7out") .unmarshal(hl7) .to("patientLookupService"); Here we unmarshal the byte stream into a HAPI Message object that is passed to our patient lookup service. 47.5.1. Segment separators Unmarshalling no longer automatically fixes segment separators by converting \n to \r . If you need this conversion, org.apache.camel.component.hl7.HL7#convertLFToCR provides a handy Expression for this purpose. 47.5.2. Charset Both marshal and unmarshal evaluate the charset provided in the field MSH-18 . If this field is empty, by default the charset contained in the corresponding Camel charset property/header is assumed. You can even change this default behavior by overriding the guessCharsetName method when inheriting from the HL7DataFormat class. There is a shorthand syntax in Camel for well-known data formats that are commonly used, so you do not need to create an instance of the HL7DataFormat object: from("direct:hl7in") .marshal().hl7() .to("jms:queue:hl7out"); from("jms:queue:hl7out") .unmarshal().hl7() .to("patientLookupService"); 47.6. Message Headers The unmarshal operation adds these fields from the MSH segment as headers on the Camel message: Key MSH field Example CamelHL7SendingApplication MSH-3 MYSERVER CamelHL7SendingFacility MSH-4 MYSERVERAPP CamelHL7ReceivingApplication MSH-5 MYCLIENT CamelHL7ReceivingFacility MSH-6 MYCLIENTAPP CamelHL7Timestamp MSH-7 20071231235900 CamelHL7Security MSH-8 null CamelHL7MessageType MSH-9-1 ADT CamelHL7TriggerEvent MSH-9-2 A01 CamelHL7MessageControl MSH-10 1234 CamelHL7ProcessingId MSH-11 P CamelHL7VersionId MSH-12 2.4 CamelHL7Context - contains the HapiContext that was used to parse the message CamelHL7Charset MSH-18 UNICODE UTF-8 All headers except CamelHL7Context are String types. If a header value is missing, its value is null . 47.7. Dependencies To use HL7 in your Camel routes you'll need to add a dependency on camel-hl7 listed above, which implements this data format. The HAPI library is split into a base library and several structure libraries, one for each HL7v2 message version: v2.1 structures library v2.2 structures library v2.3 structures library v2.3.1 structures library v2.4 structures library v2.5 structures library v2.5.1 structures library v2.6 structures library By default camel-hl7 only references the HAPI base library . Applications are responsible for including structure libraries themselves. For example, if an application works with HL7v2 message versions 2.4 and 2.5 then the following dependencies must be added: <dependency> <groupId>ca.uhn.hapi</groupId> <artifactId>hapi-structures-v24</artifactId> <version>2.2</version> <!-- use the same version as your hapi-base version --> </dependency> <dependency> <groupId>ca.uhn.hapi</groupId> <artifactId>hapi-structures-v25</artifactId> <version>2.2</version> <!-- use the same version as your hapi-base version --> </dependency> Alternatively, an OSGi bundle containing the base library, all structures libraries and required dependencies (on the bundle classpath) can be downloaded from the central Maven repository . <dependency> <groupId>ca.uhn.hapi</groupId> <artifactId>hapi-osgi-base</artifactId> <version>2.2</version> </dependency> 47.8. Spring Boot Auto-Configuration The component supports 4 options, which are listed below. Name Description Default Type camel.dataformat.hl7.enabled Whether to enable auto configuration of the hl7 data format. This is enabled by default.
Boolean camel.dataformat.hl7.validate Whether to validate the HL7 message. This is true by default. true Boolean camel.language.hl7terser.enabled Whether to enable auto configuration of the hl7terser language. This is enabled by default. Boolean camel.language.hl7terser.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-hl7-starter</artifactId> </dependency>",
"<endpoint id=\"hl7MinaListener\" uri=\"mina:tcp://localhost:8888?sync=true&codec=#hl7codec\"/>",
"<bean id=\"hl7codec\" class=\"org.apache.camel.component.hl7.HL7MLLPCodec\"> <property name=\"charset\" value=\"iso-8859-1\"/> </bean>",
"from(\"hl7MinaListener\") .bean(\"patientLookupService\");",
"<bean id=\"patientLookupService\" class=\"com.mycompany.healthcare.service.PatientLookupService\"/>",
"import ca.uhn.hl7v2.HL7Exception; import ca.uhn.hl7v2.model.Message; import ca.uhn.hl7v2.model.v24.segment.QRD; public class PatientLookupService { public Message lookupPatient(Message input) throws HL7Exception { QRD qrd = (QRD)input.get(\"QRD\"); String patientId = qrd.getWhoSubjectFilter(0).getIDNumber().getValue(); // find patient data based on the patient id and create a HL7 model object with the response Message response = ... create and set response data return response }",
"<endpoint id=\"hl7NettyListener\" uri=\"netty:tcp://localhost:8888?sync=true&encoders=#hl7encoder&decoders=#hl7decoder\"/>",
"<bean id=\"hl7decoder\" class=\"org.apache.camel.component.hl7.HL7MLLPNettyDecoderFactory\"/> <bean id=\"hl7encoder\" class=\"org.apache.camel.component.hl7.HL7MLLPNettyEncoderFactory\"/>",
"from(\"hl7NettyListener\") .bean(\"patientLookupService\");",
"MSH|^~\\\\&|MYSENDER|MYRECEIVER|MYAPPLICATION||200612211200||QRY^A19|1234|P|2.4 QRD|200612211200|R|I|GetPatient|||1^RD|0101701234|DEM||",
"Message msg = exchange.getIn().getBody(Message.class); QRD qrd = (QRD)msg.get(\"QRD\"); String patientId = qrd.getWhoSubjectFilter(0).getIDNumber().getValue(); // 0101701234",
"QRY_A19 msg = exchange.getIn().getBody(QRY_A19.class); String patientId = msg.getQRD().getWhoSubjectFilter(0).getIDNumber().getValue();",
"DataFormat hl7 = new HL7DataFormat(); from(\"direct:hl7in\") .marshal(hl7) .to(\"jms:queue:hl7out\");",
"DataFormat hl7 = new HL7DataFormat(); from(\"jms:queue:hl7out\") .unmarshal(hl7) .to(\"patientLookupService\");",
"from(\"direct:hl7in\") .marshal().hl7() .to(\"jms:queue:hl7out\"); from(\"jms:queue:hl7out\") .unmarshal().hl7() .to(\"patientLookupService\");",
"<dependency> <groupId>ca.uhn.hapi</groupId> <artifactId>hapi-structures-v24</artifactId> <version>2.2</version> <!-- use the same version as your hapi-base version --> </dependency> <dependency> <groupId>ca.uhn.hapi</groupId> <artifactId>hapi-structures-v25</artifactId> <version>2.2</version> <!-- use the same version as your hapi-base version --> </dependency>",
"<dependency> <groupId>ca.uhn.hapi</groupId> <artifactId>hapi-osgi-base</artifactId> <version>2.2</version> </dependency>"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-hl7-dataformat-starter |
function::stack_size | function::stack_size Name function::stack_size - Return the size of the kernel stack Synopsis Arguments None Description This function returns the size of the kernel stack. | [
"stack_size:long()"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-stack-size |
Chapter 2. Dependency management | Chapter 2. Dependency management A specific Red Hat build of Apache Camel for Quarkus release is supposed to work only with a specific Quarkus release. 2.1. Quarkus tooling for starting a new project The easiest and most straightforward way to get the dependency versions right in a new project is to use one of the Quarkus tools: code.quarkus.redhat.com - an online project generator, Quarkus Maven plugin These tools allow you to select extensions and scaffold a new Maven project. Tip The universe of available extensions spans Quarkus Core, Camel Quarkus and several other third party participating projects, such as Hazelcast, Cassandra, Kogito and OptaPlanner. The generated pom.xml will look similar to the following: <project> ... <properties> <quarkus.platform.artifact-id>quarkus-bom</quarkus.platform.artifact-id> <quarkus.platform.group-id>com.redhat.quarkus.platform</quarkus.platform.group-id> <quarkus.platform.version> <!-- The latest 3.8.x version from https://maven.repository.redhat.com/ga/com/redhat/quarkus/platform/quarkus-bom --> </quarkus.platform.version> ... </properties> <dependencyManagement> <dependencies> <!-- The BOMs managing the dependency versions --> <dependency> <groupId>${quarkus.platform.group-id}</groupId> <artifactId>quarkus-bom</artifactId> <version>${quarkus.platform.version}</version> <type>pom</type> <scope>import</scope> </dependency> <dependency> <groupId>${quarkus.platform.group-id}</groupId> <artifactId>quarkus-camel-bom</artifactId> <version>${quarkus.platform.version}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> <dependencies> <!-- The extensions you chose in the project generator tool --> <dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-sql</artifactId> <!-- No explicit version required here and below --> </dependency> ... </dependencies> ... </project> Note BOM stands for "Bill of Materials" - it is a pom.xml whose main purpose is to manage the versions of artifacts so that end users importing the BOM in their projects do not need to care which particular versions of the artifacts are supposed to work together. In other words, having a BOM imported in the <dependencyManagement> section of your pom.xml allows you to avoid specifying versions for the dependencies managed by the given BOM. Which particular BOMs end up in the pom.xml file depends on the extensions you have selected in the generator tool. The generator tools take care to select a minimal consistent set. If you choose to add an extension at a later point that is not managed by any of the BOMs in your pom.xml file, you do not need to search for the appropriate BOM manually. With the quarkus-maven-plugin you can select the extension, and the tool adds the appropriate BOM as required. You can also use the quarkus-maven-plugin to upgrade the BOM versions. The com.redhat.quarkus.platform BOMs are aligned with each other, which means that if an artifact is managed in more than one BOM, it is always managed with the same version. This has the advantage that application developers do not need to care about the compatibility of the individual artifacts that may come from various independent projects. 2.2. Combining with other BOMs When combining camel-quarkus-bom with any other BOM, think carefully about the order in which you import them, because the order of imports defines the precedence. That is,
if my-foo-bom is imported before camel-quarkus-bom , then the versions defined in my-foo-bom take precedence. This might or might not be what you want, depending on whether there are any overlaps between my-foo-bom and camel-quarkus-bom and depending on whether those versions with higher precedence work with the rest of the artifacts managed in camel-quarkus-bom . | [
"<project> <properties> <quarkus.platform.artifact-id>quarkus-bom</quarkus.platform.artifact-id> <quarkus.platform.group-id>com.redhat.quarkus.platform</quarkus.platform.group-id> <quarkus.platform.version> <!-- The latest 3.8.x version from https://maven.repository.redhat.com/ga/com/redhat/quarkus/platform/quarkus-bom --> </quarkus.platform.version> </properties> <dependencyManagement> <dependencies> <!-- The BOMs managing the dependency versions --> <dependency> <groupId>USD{quarkus.platform.group-id}</groupId> <artifactId>quarkus-bom</artifactId> <version>USD{quarkus.platform.version}</version> <type>pom</type> <scope>import</scope> </dependency> <dependency> <groupId>USD{quarkus.platform.group-id}</groupId> <artifactId>quarkus-camel-bom</artifactId> <version>USD{quarkus.platform.version}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> <dependencies> <!-- The extensions you chose in the project generator tool --> <dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-sql</artifactId> <!-- No explicit version required here and below --> </dependency> </dependencies> </project>"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/developing_applications_with_red_hat_build_of_apache_camel_for_quarkus/camel-quarkus-extensions-dependency-management |
B.35. kabi-whitelists | B.35. kabi-whitelists B.35.1. RHBA-2010:0856 - kabi-whitelists bug fix update An updated kabi-whitelists package that fixes a bug is now available. The kabi-whitelists package contains reference files documenting interfaces provided by the Red Hat Enterprise Linux 6 kernel that are considered to be stable by Red Hat kernel engineering, and safe for longer term use by third party loadable device drivers, as well as for other purposes. Bug Fix BZ# 643570 Two exported kernel symbols were removed from the final version of the Kernel Application Binary Interface (kABI) whitelists package in Red Hat Enterprise Linux 6. All users are advised to upgrade to this updated package, which fixes this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/kabi-whitelists |
Chapter 7. PriorityLevelConfiguration [flowcontrol.apiserver.k8s.io/v1beta1] | Chapter 7. PriorityLevelConfiguration [flowcontrol.apiserver.k8s.io/v1beta1] 7.1. API endpoints The following API endpoints are available: | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/schedule_and_quota_apis/prioritylevelconfiguration-flowcontrol-apiserver-k8s-io-v1beta1 |
7.179. redhat-support-tool | 7.179. redhat-support-tool 7.179.1. RHBA-2015:1406 - redhat-support-tool and redhat-support-lib-python update Updated redhat-support-tool and redhat-support-lib-python packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The redhat-support-tool utility facilitates console-based access to Red Hat's subscriber services and gives Red Hat subscribers more venues for accessing the content and services available to them as Red Hat customers. Further, it enables Red Hat customers to integrate and automate their helpdesk services with our subscription services. Bug Fixes BZ# 1198411 Previously, bugs in the redhat-support-lib-python library caused the "addattachment" command to fail with an error message "TypeError: unhashable type" when files were uploaded using FTP through an HTTP proxy configured to proxy FTP. As a consequence, attachments could not be sent to the Red Hat FTP dropbox if redhat-support-tool was configured to use an HTTP proxy and the "-f" option was used with the "addattachment" command. The underlying redhat-support-lib-python code has been fixed, and the "redhat-support-tool addattachment -f" command now successfully uploads files to the Red Hat FTP dropbox in this scenario. BZ# 1146360 Due to bugs in redhat-support-lib-python, the "addattachment" command failed with an error message "unknown URL type" when files were uploaded to the Customer Portal using an HTTP proxy. Consequently, attachments could not be added to cases if redhat-support-tool was configured to use an HTTP proxy. This bug has been fixed, and the "redhat-support-tool addattachment" command now successfully uploads files to the Customer Portal through an HTTP proxy. BZ# 1198616 When retrieving case information from the Customer Portal using the /rs/case Representational State Transfer (REST) endpoint, the case group number was included in the response but the case group name was not. Consequently, when viewing the case details with the "redhat-support-tool getcase" command, the case group number and name were not displayed. With this update, an additional call to the /rs/groups endpoint has been added, and "redhat-support-tool getcase" now displays the case group name along with other case information. BZ# 1104722 Previously, the way redhat-support-tool stored Customer Portal passwords was inconsistent in terms of encoding and decoding. As a consequence, certain passwords could not be decoded correctly. With this update, the method of decoding the stored Customer Portal passwords has been made consistent with how the passwords were encoded, and the described problem no longer occurs. Users of redhat-support-tool and redhat-support-lib-python are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-redhat-support-tool |
Appendix B. Provisioning FIPS-Compliant Hosts | Appendix B. Provisioning FIPS-Compliant Hosts Satellite supports provisioning hosts that comply with the National Institute of Standards and Technology's Security Requirements for Cryptographic Modules standard, reference number FIPS 140-2, referred to here as FIPS. To enable the provisioning of hosts that are FIPS-compliant, complete the following tasks: Change the provisioning password hashing algorithm for the operating system Create a host group and set a host group parameter to enable FIPS For more information, see Creating a Host Group in the Managing Hosts guide. The provisioned hosts have the FIPS-compliant settings applied. To confirm that these settings are enabled, complete the steps in Section B.3, "Verifying FIPS Mode is Enabled" . B.1. Changing the Provisioning Password Hashing Algorithm To provision FIPS-compliant hosts, you must first set the password hashing algorithm that you use in provisioning to SHA256. This configuration setting must be applied for each operating system you want to deploy as FIPS-compliant. Procedure Identify the Operating System IDs: Update each operating system's password hash value. Note that you cannot use a comma-separated list of values. B.2. Setting the FIPS-Enabled Parameter To provision a FIPS-compliant host, you must create a host group and set the host group parameter fips_enabled to true . If this is not set to true , or is absent, the FIPS-specific changes do not apply to the system. You can set this parameter when you provision a host or for a host group. To set this parameter when provisioning a host, append --parameters fips_enabled=true to the Hammer command. For more information, see the output of the command hammer hostgroup set-parameter --help . B.3. Verifying FIPS Mode is Enabled To verify these FIPS compliance changes have been successful, you must provision a host and check its configuration. Procedure Log on to the host as root or with an admin-level account. Enter the following command: A value of 1 confirms that FIPS mode is enabled. | [
"hammer os list",
"hammer os update --password-hash SHA256 --title \" My_Operating_System \"",
"hammer hostgroup set-parameter --hostgroup \" My_Host_Group \" --name fips_enabled --value \"true\"",
"cat /proc/sys/crypto/fips_enabled"
]
| https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/provisioning_hosts/provisioning_fips_compliant_hosts_provisioning |
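A quick way to script the check from Section B.3 across newly provisioned hosts is sketched below; the host name is hypothetical and root SSH access is assumed.

# Verify FIPS mode on a provisioned host (the expected output is "1"):
ssh root@fips-host.example.com 'cat /proc/sys/crypto/fips_enabled'
# The same flag is also exposed through sysctl:
ssh root@fips-host.example.com 'sysctl crypto.fips_enabled'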
Chapter 6. RHEL 8.1.0 release | Chapter 6. RHEL 8.1.0 release 6.1. New features This part describes new features and major enhancements introduced in Red Hat Enterprise Linux 8.1. 6.1.1. Installer and image creation Modules can now be disabled during Kickstart installation With this enhancement, users can now disable a module to prevent the installation of packages from the module. To disable a module during Kickstart installation, use the command: module --name=foo --stream=bar --disable (BZ#1655523) Support for the repo.git section to blueprints is now available A new repo.git blueprint section allows users to include extra files in their image build. The files must be hosted in git repository that is accessible from the lorax-composer build server. ( BZ#1709594 ) Image Builder now supports image creation for more cloud providers With this update, the Image Builder expanded the number of Cloud Providers that the Image Builder can create an image for. As a result, now you can create RHEL images that can be deployed also on Google Cloud and Alibaba Cloud as well as run the custom instances on these platforms. ( BZ#1689140 ) 6.1.2. Software management dnf-utils has been renamed to yum-utils With this update, the dnf-utils package, that is a part of the YUM stack, has been renamed to yum-utils . For compatibility reasons, the package can still be installed using the dnf-utils name, and will automatically replace the original package when upgrading your system. (BZ#1722093) 6.1.3. Subscription management subscription-manager now reports the role, usage and add-ons values With this update, the subscription-manager can now display the Role, Usage and Add-ons values for each subscription available in the current organization, which is registered to either the Customer Portal or to the Satellite. To show the available subscriptions with the addition of Role, Usage and Add-ons values for those subscriptions use: To show the consumed subscriptions including the additional Role, Usage and Add-ons values use: (BZ#1665167) 6.1.4. Infrastructure services tuned rebased to version 2.12 The tuned packages have been upgraded to upstream version 2.12, which provides a number of bug fixes and enhancements over the version, notably: Handling of devices that have been removed and reattached has been fixed. Support for negation of CPU list has been added. Performance of runtime kernel parameter configuration has been improved by switching from the sysctl tool to a new implementation specific to Tuned . ( BZ#1685585 ) chrony rebased to version 3.5 The chrony packages have been upgraded to upstream version 3.5, which provides a number of bug fixes and enhancements over the version, notably: Support for more accurate synchronization of the system clock with hardware timestamping in RHEL 8.1 kernel has been added. Hardware timestamping has received significant improvements. The range of available polling intervals has been extended. The filter option has been added to NTP sources. ( BZ#1685469 ) New FRRouting routing protocol stack is available With this update, Quagga has been replaced by Free Range Routing ( FRRouting , or FRR ), which is a new routing protocol stack. FRR is provided by the frr package available in the AppStream repository. FRR provides TCP/IP-based routing services with support for multiple IPv4 and IPv6 routing protocols, such as BGP , IS-IS , OSPF , PIM , and RIP . 
With FRR installed, the system can act as a dedicated router, which exchanges routing information with other routers in either internal or external network. (BZ#1657029) GNU enscript now supports ISO-8859-15 encoding With this update, support for ISO-8859-15 encoding has been added into the GNU enscript program. ( BZ#1664366 ) Improved accuracy of measuring system clock offset in phc2sys The phc2sys program from the linuxptp packages now supports a more accurate method for measuring the offset of the system clock. (BZ#1677217) ptp4l now supports team interfaces in active-backup mode With this update, support for team interfaces in active-backup mode has been added into the PTP Boundary/Ordinary Clock (ptp4l). (BZ#1685467) The PTP time synchronization on macvlan interfaces is now supported This update adds support for hardware timestamping on macvlan interfaces into the Linux kernel. As a result, macvlan interfaces can now use the Precision Time Protocol (PTP) for time synchronization. (BZ#1664359) 6.1.5. Security New package: fapolicyd The fapolicyd software framework introduces a form of application whitelisting and blacklisting based on a user-defined policy. The application whitelisting feature provides one of the most efficient ways to prevent running untrusted and possibly malicious applications on the system. The fapolicyd framework provides the following components: fapolicyd service fapolicyd command-line utilities yum plugin rule language Administrator can define the allow and deny execution rules, both with possibility of auditing, based on a path, hash, MIME type, or trust for any application. Note that every fapolicyd setup affects overall system performance. The performance hit varies depending on the use case. The application whitelisting slow-downs the open() and exec() system calls, and therefore primarily affects applications that perform such system calls frequently. See the fapolicyd(8) , fapolicyd.rules(5) , and fapolicyd.conf(5) man pages for more information. (BZ#1673323) New package: udica The new udica package provides a tool for generation SELinux policies for containers. With udica , you can create a tailored security policy for better control of how a container accesses host system resources, such as storage, devices, and network. This enables you to harden your container deployments against security violations and it also simplifies achieving and maintaining regulatory compliance. See the Creating SELinux policies for containers section in the RHEL 8 Using SELinux title for more information. (BZ#1673643) SELinux user-space tools updated to version 2.9 The libsepol , libselinux , libsemanage , policycoreutils , checkpolicy , and mcstrans SELinux user-space tools have been upgraded to the latest upstream release 2.9, which provides many bug fixes and enhancements over the version. ( BZ#1672638 , BZ#1672642 , BZ#1672637 , BZ#1672640 , BZ#1672635 , BZ#1672641 ) SETools updated to version 4.2.2 The SETools collection of tools and libraries has been upgraded to the latest upstream release 4.2.2, which provides the following changes: Removed source policy references from man pages, as loading source policies is no longer supported Fixed a performance regression in alias loading ( BZ#1672631 ) selinux-policy rebased to 3.14.3 The selinux-policy package has been upgraded to upstream version 3.14.3, which provides a number of bug fixes and enhancements to the allow rules over the version. 
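To try the fapolicyd framework introduced under "New package: fapolicyd" above, the usual install-and-enable sequence applies; this is a minimal sketch and does not cover writing custom allow or deny rules.

# Install the daemon, its command-line utilities and the yum plugin:
yum install fapolicyd
# Start the service now and enable it at boot; open() and exec() calls are then
# checked against the configured policy:
systemctl enable --now fapolicyd
systemctl status fapolicyd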
( BZ#1673107 ) A new SELinux type: boltd_t A new SELinux type, boltd_t , confines boltd , a system daemon for managing Thunderbolt 3 devices. As a result, boltd now runs as a confined service in SELinux enforcing mode. (BZ#1684103) A new SELinux policy class: bpf A new SELinux policy class, bpf , has been introduced. The bpf class enables users to control the Berkeley Packet Filter (BPF) flow through SElinux, and allows inspection and simple manipulation of Extended Berkeley Packet Filter (eBPF) programs and maps controlled by SELinux. (BZ#1673056) OpenSCAP rebased to version 1.3.1 The openscap packages have been upgraded to upstream version 1.3.1, which provides many bug fixes and enhancements over the version, most notably: Support for SCAP 1.3 source data streams: evaluating, XML schemas, and validation Tailoring files are included in ARF result files OVAL details are always shown in HTML reports, users do not have to provide the --oval-results option HTML report displays OVAL test details also for OVAL tests included from other OVAL definitions using the OVAL extend_definition element OVAL test IDs are shown in HTML reports Rule IDs are shown in HTML guides ( BZ#1718826 ) OpenSCAP now supports SCAP 1.3 The OpenSCAP suite now supports data streams conforming to the latest version of the SCAP standard - SCAP 1.3. You can now use SCAP 1.3 data streams, such as those contained in the scap-security-guide package, in the same way as SCAP 1.2 data streams without any additional usability restrictions. ( BZ#1709429 ) scap-security-guide rebased to version 0.1.46 The scap-security-guide packages have been upgraded to upstream version 0.1.46, which provides many bug fixes and enhancements over the version, most notably: * SCAP content conforms to the latest version of SCAP standard, SCAP 1.3 * SCAP content supports UBI images ( BZ#1718839 ) OpenSSH rebased to 8.0p1 The openssh packages have been upgraded to upstream version 8.0p1, which provides many bug fixes and enhancements over the version, most notably: Increased default RSA key size to 3072 bits for the ssh-keygen tool Removed support for the ShowPatchLevel configuration option Applied numerous GSSAPI key exchange code fixes, such as the fix of Kerberos cleanup procedures Removed fall back to the sshd_net_t SELinux context Added support for Match final blocks Fixed minor issues in the ssh-copy-id command Fixed Common Vulnerabilities and Exposures (CVE) related to the scp utility (CVE-2019-6111, CVE-2018-20685, CVE-2019-6109) Note, that this release introduces minor incompatibility of scp as mitigation of CVE-2019-6111. If your scripts depend on advanced bash expansions of the path during an scp download, you can use the -T switch to turn off these mitigations temporarily when connecting to trusted servers. ( BZ#1691045 ) libssh now complies with the system-wide crypto-policies The libssh client and server now automatically load the /etc/libssh/libssh_client.config file and the /etc/libssh/libssh_server.config , respectively. This configuration file includes the options set by the system-wide crypto-policies component for the libssh back end and the options set in the /etc/ssh/ssh_config or /etc/ssh/sshd_config OpenSSH configuration file. With automatic loading of the configuration file, libssh now use the system-wide cryptographic settings set by crypto-policies . This change simplifies control over the set of used cryptographic algorithms by applications. 
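Because libssh now follows the system-wide cryptographic policy, the active policy can be inspected and switched with the update-crypto-policies tool; a short sketch:

# Show the currently active system-wide policy (for example DEFAULT, LEGACY, FUTURE or FIPS):
update-crypto-policies --show
# Switch the policy; libssh picks the change up through /etc/libssh/libssh_client.config
# and /etc/libssh/libssh_server.config the next time an application loads the library:
update-crypto-policies --set DEFAULT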
(BZ#1610883, BZ#1610884) An option for rsyslog to preserve case of FROMHOST is available This update to the rsyslog service introduces the option to manage letter case preservation of the FROMHOST property for the imudp and imtcp modules. Setting the preservecase value to on means the FROMHOST property is handled in a case sensitive manner. To avoid breaking existing configurations, the default values of preservecase are on for imtcp and off for imudp . (BZ#1614181) 6.1.6. Networking PMTU discovery and route redirection is now supported with VXLAN and GENEVE tunnels The kernel in Red Hat Enterprise Linux (RHEL) 8.0 did not handle Internet Control Message Protocol (ICMP) and ICMPv6 messages for Virtual Extensible LAN (VXLAN) and Generic Network Virtualization Encapsulation (GENEVE) tunnels. As a consequence, Path MTU (PMTU) discovery and route redirection was not supported with VXLAN and GENEVE tunnels in RHEL releases prior to 8.1. With this update, the kernel handles ICMP "Destination Unreachable" and "Redirect Message", as well as ICMPv6 "Packet Too Big" and "Destination Unreachable" error messages by adjusting the PMTU and modifying forwarding information. As a result, RHEL 8.1 supports PMTU discovery and route redirection with VXLAN and GENEVE tunnels. (BZ#1652222) Notable changes in XDP and networking eBPF features in kernel The XDP and the networking eBPF features in the kernel package have been upgraded to upstream version 5.0, which provides a number of bug fixes and enhancements over the version: eBPF programs can now better interact with the TCP/IP stack, perform flow dissection, have wider range of bpf helpers available, and have access to new map types. XDP metadata are now available to AF_XDP sockets. (BZ#1687459) The new PTP_SYS_OFFSET_EXTENDED control for ioctl() improves the accuracy of measured system-PHC ofsets This enhancement adds the PTP_SYS_OFFSET_EXTENDED control for more accurate measurements of the system precision time protocol (PTP) hardware clock (PHC) offset to the ioctl() function. The PTP_SYS_OFFSET control which, for example, the chrony service uses to measure the offset between a PHC and the system clock is not accurate enough. With the new PTP_SYS_OFFSET_EXTENDED control, drivers can isolate the reading of the lowest bits. This improves the accuracy of the measured offset. Network drivers typically read multiple PCI registers, and the driver does not read the lowest bits of the PHC time stamp between two readings of the system clock. (BZ#1677215) ipset rebased to version 7.1 The ipset packages have been upgraded to upstream version 7.1, which provides a number of bug fixes and enhancements over the version: The ipset protocol version 7 introduces the IPSET_CMD_GET_BYNAME and IPSET_CMD_GET_BYINDEX operations. Additionally, the user space component can now detect the exact compatibility level that the kernel component supports. A significant number of bugs have been fixed, such as memory leaks and use-after-free bugs. (BZ#1649090) 6.1.7. Kernel Kernel version in RHEL 8.1 Red Hat Enterprise Linux 8.1 is distributed with the kernel version 4.18.0-147. (BZ#1797671) Live patching for the kernel is now available Live patching for the kernel, kpatch , provides a mechanism to patch the running kernel without rebooting or restarting any processes. Live kernel patches will be provided for selected minor release streams of RHEL covered under the Extended Update Support (EUS) policy to remediate Critical and Important CVEs. 
To subscribe to the kpatch stream for the RHEL 8.1 version of the kernel, install the kpatch-patch-4_18_0-147 package provided by the RHEA-2019:3695 advisory. For more information, see Applying patches with kernel live patching in Managing, monitoring and updating the kernel. (BZ#1763780) Extended Berkeley Packet Filter in RHEL 8 Extended Berkeley Packet Filter (eBPF) is an in-kernel virtual machine that allows code execution in the kernel space, in the restricted sandbox environment with access to a limited set of functions. The virtual machine executes special assembly-like code. The code is then loaded to the kernel and translated to the native machine code with just-in-time compilation. There are numerous components shipped by Red Hat that utilize the eBPF virtual machine. Each component is in a different development phase, and thus not all components are currently fully supported. In RHEL 8.1, the BPF Compiler Collection (BCC) tools package is fully supported on the AMD and Intel 64-bit architectures. The BCC tools package is a collection of dynamic kernel tracing utilities that use the eBPF virtual machine. The following eBPF components are currently available as a Technology Preview: The BCC tools package on the following architectures: the 64-bit ARM architecture, IBM Power Systems, Little Endian, and IBM Z The BCC library on all architectures The bpftrace tracing language The eXpress Data Path (XDP) feature For details regarding the Technology Preview components, see Section 6.5.2, "Kernel" . (BZ#1780124) Red Hat Enterprise Linux 8 now supports early kdump The early kdump feature allows the crash kernel and initramfs to load early enough to capture the vmcore information even for early crashes. For more details about early kdump , see the /usr/share/doc/kexec-tools/early-kdump-howto.txt file. (BZ#1520209) RHEL 8 now supports ipcmni_extend A new kernel command line parameter ipcmni_extend has been added to Red Hat Enterprise Linux 8. The parameter extends a number of unique System V Inter-process Communication (IPC) identifiers from the current maximum of 32 KB (15 bits) up to 16 MB (24 bits). As a result, users whose applications produce a lot of shared memory segments are able to create a stronger IPC identifier without exceeding the 32 KB limit. Note that in some cases using ipcmni_extend results in a small performance overhead and it should be used only if the applications need more than 32 KB of unique IPC identifier. (BZ#1710480) The persistent memory initialization code supports parallel initialization The persistent memory initialization code enables parallel initialization on systems with multiple nodes of persistent memory. The parallel initialization greatly reduces the overall memory initialization time on systems with large amounts of persistent memory. As a result, these systems can now boot much faster. (BZ#1634343) TPM userspace tool has been updated to the last version The tpm2-tools userspace tool has been updated to version 2.0. With this update, tpm2-tools is able to fix many defects. ( BZ#1664498 ) The rngd daemon is now able to run with non-root privileges The random number generator daemon ( rngd ) checks whether data supplied by the source of randomness is sufficiently random and then stores the data in the kernel's random-number entropy pool. With this update, rngd is able to run with non-root user privileges to enhance system security. 
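To confirm that the rngd daemon described above is running and feeding the kernel entropy pool, something like the following can be used; rngd ships in the rng-tools package.

# Install and enable the random number generator daemon:
yum install rng-tools
systemctl enable --now rngd
# Check the entropy currently available in the kernel pool:
cat /proc/sys/kernel/random/entropy_avail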
( BZ#1692435 ) Full support for the ibmvnic driver With the introduction of Red Hat Enterprise Linux 8.0, the IBM Virtual Network Interface Controller (vNIC) driver for IBM POWER architectures, ibmvnic , was available as a Technology Preview. vNIC is a PowerVM virtual networking technology that delivers enterprise capabilities and simplifies network management. It is a high-performance, efficient technology that when combined with SR-IOV NIC provides bandwidth control Quality of Service (QoS) capabilities at the virtual NIC level. vNIC significantly reduces virtualization overhead, resulting in lower latencies and fewer server resources, including CPU and memory, required for network virtualization. Starting with Red Hat Enterprise Linux 8.1 the ibmvnic device driver is fully supported on IBM POWER9 systems. (BZ#1665717) Intel (R) Omni-Path Architecture (OPA) Host Software Intel Omni-Path Architecture (OPA) host software is fully supported in Red Hat Enterprise Linux 8.1. Intel OPA provides Host Fabric Interface (HFI) hardware with initialization and setup for high performance data transfers (high bandwidth, high message rate, low latency) between compute and I/O nodes in a clustered environment. ( BZ#1766186 ) UBSan has been enabled in the debug kernel in RHEL 8 The Undefined Behavior Sanitizer ( UBSan ) utility exposes undefined behavior flaws in C code languages at runtime. This utility has now been enabled in the debug kernel because the compiler behavior was, in some cases, different than developers' expectations. Especially, in the case of compiler optimization, where subtle, obscure bugs would appear. As a result, running the debug kernel with UBSan enabled allows the system to easily detect such bugs. (BZ#1571628) The fadump infrastructure now supports re-registering in RHEL 8 The support has been added for re-registering (unregistering and registering) of the firmware-assisted dump ( fadump ) infrastructure after any memory hot add/remove operation to update the crash memory ranges. The feature aims to prevent the system from potential racing issues during unregistering and registering fadump from userspace during udev events. (BZ#1710288) The determine_maximum_mpps.sh script has been introduced in RHEL for Real Time 8 The determine_maximum_mpps.sh script has been introduced to help use the queuelat test program. The script executes queuelat to determine the maximum packets per second a machine can handle. ( BZ#1686494 ) kernel-rt source tree now matches the latest RHEL 8 tree The kernel-rt sources have been upgraded to be based on the latest Red Hat Enterprise Linux kernel source tree, which provides a number of bug fixes and enhancements over the version. ( BZ#1678887 ) The ssdd test has been added to RHEL for Real Time 8 The ssdd test has been added to enable stress testing of the tracing subsystem. The test runs multiple tracing threads to verify locking is correct within the tracing system. ( BZ#1666351 ) 6.1.8. Hardware enablement Memory Mode for Optane DC Persistent Memory technology is fully supported Intel Optane DC Persistent Memory storage devices provide data center-class persistent memory technology, which can significantly increase transaction throughput. To use the Memory Mode technology, your system does not require any special drivers or specific certification. Memory Mode is transparent to the operating system. 
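When Memory Mode is active, the Optane DC capacity is presented to the operating system as ordinary system RAM, so a rough sanity check from the OS side is simply to look at the visible memory; a sketch:

# The totals reported by the kernel include the Optane capacity used in Memory Mode:
free -h
lsmem --summary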
( BZ#1718422 ) IBM Z now supports system boot signature verification Secure Boot allows the system firmware to check the authenticity of cryptographic keys that were used to sign the kernel space code. As a result,the feature improves security since only code from trusted vendors can be executed. Note that IBM z15 is required to use Secure Boot. (BZ#1659399) 6.1.9. File systems and storage Support for Data Integrity Field/Data Integrity Extension (DIF/DIX) DIF/DIX is supported on configurations where the hardware vendor has qualified it and provides full support for the particular host bus adapter (HBA) and storage array configuration on RHEL. DIF/DIX is not supported on the following configurations: It is not supported for use on the boot device. It is not supported on virtualized guests. Red Hat does not support using the Automatic Storage Management library (ASMLib) when DIF/DIX is enabled. DIF/DIX is enabled or disabled at the storage device, which involves various layers up to (and including) the application. The method for activating the DIF on storage devices is device-dependent. For further information on the DIF/DIX feature, see What is DIF/DIX . (BZ#1649493) Optane DC memory systems now supports EDAC reports Previously, EDAC was not reporting memory corrected/uncorrected events if the memory address was within a NVDIMM module. With this update, EDAC can properly report the events with the correct memory module information. (BZ#1571534) The VDO Ansible module has been moved to Ansible packages Previously, the VDO Ansible module was provided by the vdo RPM package. Starting with this release, the module is provided by the ansible package instead. The original location of the VDO Ansible module file was: The new location of the file is: The vdo package continues to distribute Ansible playbooks. For more information on Ansible, see http://docs.ansible.com/ . ( BZ#1669534 ) Aero adapters are now fully supported The following Aero adapters, previously available as a Technology Preview, are now fully supported: PCI ID 0x1000:0x00e2 and 0x1000:0x00e6, controlled by the mpt3sas driver PCI ID 0x1000:Ox10e5 and 0x1000:0x10e6, controlled by the megaraid_sas driver (BZ#1663281) LUKS2 now supports online re-encryption The Linux Unified Key Setup version 2 (LUKS2) format now supports re-encrypting encrypted devices while the devices are in use. For example, you do not have to unmount the file system on the device to perform the following tasks: Change the volume key Change the encryption algorithm When encrypting a non-encrypted device, you must still unmount the file system, but the encryption is now significantly faster. You can remount the file system after a short initialization of the encryption. Additionally, the LUKS2 re-encryption is now more resilient. You can select between several options that prioritize performance or data protection during the re-encryption process. To perform the LUKS2 re-encryption, use the cryptsetup reencrypt subcommand. Red Hat no longer recommends using the cryptsetup-reencrypt utility for the LUKS2 format. Note that the LUKS1 format does not support online re-encryption, and the cryptsetup reencrypt subcommand is not compatible with LUKS1. To encrypt or re-encrypt a LUKS1 device, use the cryptsetup-reencrypt utility. For more information on disk encryption, see Encrypting block devices using LUKS . 
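For the LUKS2 online re-encryption described above, the new cryptsetup reencrypt subcommand operates directly on the device; the device name below is hypothetical, and the resilience option shown is one of the performance versus data-protection trade-offs mentioned in the note.

# Re-encrypt an open LUKS2 device in place (a new volume key is generated) while
# the file system on it stays mounted:
cryptsetup reencrypt /dev/sdb1
# Prefer stronger crash protection over speed during the operation:
cryptsetup reencrypt --resilience journal /dev/sdb1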
( BZ#1676622 ) New features of ext4 available in RHEL 8 In RHEL8, following are the new fully supported features of ext4: Non-default features: project quota mmp Non-default mount options: bsddf|minixdf grpid|bsdgroups and nogrpid|sysvgroups resgid=n and resuid=n errors={continue|remount-ro|panic} commit=nrsec max_batch_time=usec min_batch_time=usec grpquota|noquota|quota|usrquota prjquota dax lazytime|nolazytime discard|nodiscard init_itable|noinit_itable jqfmt={vfsold|vfsv0|vfsv1} usrjquota=aquota.user|grpjquota=aquota.group For more information on features and mount options, see the ext4 man page. Other ext4 features, mount options or both, or combination of features, mount options or both may not be fully supported by Red Hat. If your special workload requires a feature or mount option that is not fully supported in the Red Hat release, contact Red Hat support to evaluate it for inclusion in our supported list. (BZ#1741531) NVMe over RDMA now supports an Infiniband in the target mode for IBM Coral systems In RHEL 8.1, NVMe over RDMA now supports an Infiniband in the target mode for IBM Coral systems, with a single NVMe PCIe add in card as the target. ( BZ#1721683 ) 6.1.10. High availability and clusters Pacemaker now defaults the concurrent-fencing cluster property to true If multiple cluster nodes need to be fenced at the same time, and they use different configured fence devices, Pacemaker will now execute the fencing simultaneously, rather than serialized as before. This can result in greatly sped up recovery in a large cluster when multiple nodes must be fenced. ( BZ#1715426 ) Extending a shared logical volume no longer requires a refresh on every cluster node With this release, extending a shared logical volume no longer requires a refresh on every cluster node after running the lvextend command on one cluster node. For the full procedure to extend the size of a GFS2 file system, see Growing a GFS2 file system . (BZ#1649086) Maximum size of a supported RHEL HA cluster increased from 16 to 32 nodes With this release, Red Hat supports cluster deployments of up to 32 full cluster nodes. (BZ#1693491) Commands for adding, changing, and removing corosync links have been added to pcs The Kronosnet (knet) protocol now allows you to add and remove knet links in running clusters. To support this feature, the pcs command now provides commands to add, change, and remove knet links and to change a upd/udpu link in an existing cluster. For information on adding and modifying links in an existing cluster, see Adding and modifying links in an existing cluster . (BZ#1667058) 6.1.11. Dynamic programming languages, web and database servers A new module stream: php:7.3 RHEL 8.1 introduces PHP 7.3 , which provides a number of new features and enhancements. Notable changes include: Enhanced and more flexible heredoc and nowdoc syntaxes The PCRE extension upgraded to PCRE2 Improved multibyte string handling Support for LDAP controls Improved FastCGI Process Manager (FPM) logging Several deprecations and backward incompatible changes For more information, see Migrating from PHP 7.2.x to PHP 7.3.x . Note that the RHEL 8 version of PHP 7.3 does not support the Argon2 password hashing algorithm. To install the php:7.3 stream, use: If you want to upgrade from the php:7.2 stream, see Switching to a later stream . ( BZ#1653109 ) A new module stream: ruby:2.6 A new module stream, ruby:2.6 , is now available. 
Ruby 2.6.3 , included in RHEL 8.1, provides numerous new features, enhancements, bug and security fixes, and performance improvements over version 2.5 distributed in RHEL 8.0. Notable enhancements include: Constant names are now allowed to begin with a non-ASCII capital letter. Support for an endless range has been added. A new Binding#source_location method has been provided. USDSAFE is now a process global state and it can be set back to 0 . The following performance improvements have been implemented: The Proc#call and block.call processes have been optimized. A new garbage collector managed heap, Transient heap ( theap ), has been introduced. Native implementations of coroutines for individual architectures have been introduced. In addition, Ruby 2.5 , provided by the ruby:2.5 stream, has been upgraded to version 2.5.5, which provides a number of bug and security fixes. To install the ruby:2.6 stream, use: If you want to upgrade from the ruby:2.5 stream, see Switching to a later stream . (BZ#1672575) A new module stream: nodejs:12 RHEL 8.1 introduces Node.js 12 , which provides a number of new features and enhancements over version 10. Notable changes include: The V8 engine upgraded to version 7.4 A new default HTTP parser, llhttp (no longer experimental) Integrated capability of heap dump generation Support for ECMAScript 2015 (ES6) modules Improved support for native modules Worker threads no longer require a flag A new experimental diagnostic report feature Improved performance To install the nodejs:12 stream, use: If you want to upgrade from the nodejs:10 stream, see Switching to a later stream . (BZ#1685191) Judy-devel available in CRB The Judy-devel package is now available as a part of the mariadb-devel:10.3 module in the CodeReady Linux Builder repository (CRB) . As a result, developers are now able to build applications with the Judy library. To install the Judy-devel package, enable the mariadb-devel:10.3 module first: (BZ#1657053) FIPS compliance in Python 3 This update adds support for OpenSSL FIPS mode to Python 3 . Namely: In FIPS mode, the blake2 , sha3 , and shake hashes use the OpenSSL wrappers and do not offer extended functionality (such as keys, tree hashing, or custom digest size). In FIPS mode, the hmac.HMAC class can be instantiated only with an OpenSSL wrapper or a string with OpenSSL hash name as the digestmod argument. The argument must be specified (instead of defaulting to the md5 algorithm). Note that hash functions support the usedforsecurity argument, which allows using insecure hashes in OpenSSL FIPS mode. The user is responsible for ensuring compliance with any relevant standards. ( BZ#1731424 ) FIPS compliance changes in python3-wheel This update of the python3-wheel package removes a built-in implementation for signing and verifying data that is not compliant with FIPS. (BZ#1731526) A new module stream: nginx:1.16 The nginx 1.16 web and proxy server, which provides a number of new features and enhancements over version 1.14, is now available. 
For example: Numerous updates related to SSL (loading of SSL certificates and secret keys from variables, variable support in the ssl_certificate and ssl_certificate_key directives, a new ssl_early_data directive) New keepalive -related directives A new random directive for distributed load balancing New parameters and improvements to existing directives (port ranges for the listen directive, a new delay parameter for the limit_req directive, which enables two-stage rate limiting) A new USDupstream_bytes_sent variable Improvements to User Datagram Protocol (UDP) proxying Other notable changes include: In the nginx:1.16 stream, the nginx package does not require the nginx-all-modules package, therefore nginx modules must be installed explicitly. When you install nginx as module, the nginx-all-modules package is installed as a part of the common profile, which is the default profile. The ssl directive has been deprecated; use the ssl parameter for the listen directive instead. nginx now detects missing SSL certificates during configuration testing. When using a host name in the listen directive, nginx now creates listening sockets for all addresses that the host name resolves to. To install the nginx:1.16 stream, use: If you want to upgrade from the nginx:1.14 stream, see Switching to a later stream . (BZ#1690292) perl-IO-Socket-SSL rebased to version 2.066 The perl-IO-Socket-SSL package has been upgraded to version 2.066, which provides a number of bug fixes and enhancements over the version, for example: Improved support for TLS 1.3, notably a session reuse and an automatic post-handshake authentication on the client side Added support for multiple curves, automatic setting of curves, partial trust chains, and support for RSA and ECDSA certificates on the same domain (BZ#1632600) perl-Net-SSLeay rebased to version 1.88 The perl-Net-SSLeay package has been upgraded to version 1.88, which provides multiple bug fixes and enhancements. Notable changes include: Improved compatibility with OpenSSL 1.1.1, such as manipulating a stack of certificates and X509 stores, and selecting elliptic curves and groups Improved compatibility with TLS 1.3, for example, a session reuse and a post-handshake authentication Fixed memory leak in the cb_data_advanced_put() subroutine. (BZ#1632597) 6.1.12. Compilers and development tools GCC Toolset 9 available Red Hat Enterprise Linux 8.1 introduces GCC Toolset 9, an Application Stream containing more up-to-date versions of development tools. The following tools and versions are provided by GCC Toolset 9: Tool Version GCC 9.1.1 GDB 8.3 Valgrind 3.15.0 SystemTap 4.1 Dyninst 10.1.0 binutils 2.32 elfutils 0.176 dwz 0.12 make 4.2.1 strace 5.1 ltrace 0.7.91 annobin 8.79 GCC Toolset 9 is available as an Application Stream in the form of a Software Collection in the AppStream repository. GCC Toolset is a set of tools similar to Red Hat Developer Toolset for RHEL 7. To install GCC Toolset 9: To run a tool from GCC Toolset 9: To run a shell session where tool versions from GCC Toolset 9 take precedence over system versions of these tools: For detailed instructions regarding usage, see Using GCC Toolset . 
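The module stream notes in section 6.1.11 and the GCC Toolset 9 item above reference installation commands without listing them; below is a sketch of the usual invocations, assuming the standard stream and package names.

# Application Stream modules mentioned above:
yum module install php:7.3
yum module install ruby:2.6
yum module install nodejs:12
yum module install nginx:1.16
# Judy-devel lives in the CodeReady Linux Builder repository, which must be enabled separately:
yum module enable mariadb-devel:10.3 && yum install Judy-devel

# GCC Toolset 9:
yum install gcc-toolset-9
scl enable gcc-toolset-9 'gcc --version'   # run a single tool from the toolset
scl enable gcc-toolset-9 bash              # open a shell where the toolset versions take precedence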
(BZ#1685482) Upgraded compiler toolsets The following compiler toolsets, distributed as Application Streams, have been upgraded with RHEL 8.1: Clang and LLVM Toolset, which provides the LLVM compiler infrastructure framework, the Clang compiler for the C and C++ languages, the LLDB debugger, and related tools for code analysis, to version 8.0.1 Rust Toolset, which provides the Rust programming language compiler rustc , the cargo build tool and dependency manager, and required libraries, to version 1.37 Go Toolset, which provides the Go ( golang ) programming language tools and libraries, to version 1.12.8. (BZ#1731502, BZ#1691975, BZ#1680091, BZ#1677819, BZ#1681643) SystemTap rebased to version 4.1 The SystemTap instrumentation tool has been updated to upstream version 4.1. Notable improvements include: The eBPF runtime backend can handle more features of the scripting language such as string variables and rich formatted printing. Performance of the translator has been significantly improved. More types of data in optimized C code can now be extracted with DWARF4 debuginfo constructs. ( BZ#1675740 ) General availability of the DHAT tool Red Hat Enterprise Linux 8.1 introduces the general availability of the DHAT tool. It is based on the valgrind tool version 3.15.0. You can find changes/improvements in valgrind tool functionality below: use --tool=dhat instead of --tool=exp-dhat , --show-top-n and --sort-by options have been removed because dhat tool now prints the minimal data after the program ends, a new viewer dh_view.html , which is a JavaScript programm, contains the profile results. A short message explains how to view the results after the run is ended, the documentation for a viewer is located: /usr/libexec/valgrind/dh_view.html , the documentation for the DHAT tool is located: /usr/share/doc/valgrind/html/dh-manual.html , the support for amd64 (x86_64): the RDRAND and F16C insn set extensions is added, in cachegrind the cg_annotate command has a new option, --show-percs , which prints percentages to all event counts, in callgrind the callgrind_annotate command has a new option, --show-percs , which prints percentages to all event counts, in massif the default value for --read-inline-info is now yes , in memcheck option --xtree-leak=yes , which outputs leak result in xtree format, automatically activates the option --show-leak-kinds=all , the new option --show-error-list=no|yes displays the list of the detected errors and the used suppression at the end of the run. Previously, the user could specify the option -v for valgrind command, which shows a lot of information that might be confusing. The option -s is an equivalent to the option --show-error-list=yes . (BZ#1683715) elfutils rebased to version 0.176 The elfutils packages have been updated to upstream version 0.176. This version brings various bug fixes, and resolves the following vulnerabilities: CVE-2019-7146 CVE-2019-7149 CVE-2019-7150 CVE-2019-7664 CVE-2019-7665 Notable improvements include: The libdw library has been extended with the dwelf_elf_begin() function which is a variant of elf_begin() that handles compressed files. A new --reloc-debug-sections-only option has been added to the eu-strip tool to resolve all trivial relocations between debug sections in place without any other stripping. This functionality is relevant only for ET_REL files in certain circumstances. (BZ#1683705) Additional memory allocation checks in glibc Application memory corruption is a leading cause of application and security defects. 
Early detection of such corruption, balanced against the cost of detection, can provide significant benefits to application developers. To improve detection, six additional memory corruption checks have been added to the malloc metadata in the GNU C Library ( glibc ), which is the core C library in RHEL. These additional checks have been added at a very low cost to runtime performance. (BZ#1651283) GDB can access more POWER8 registers With this update, the GNU debugger (GDB) and its remote stub gdbserver can access the following additional registers and register sets of the POWER8 processor line of IBM: PPR DSCR TAR EBB/PMU HTM (BZ#1187581) binutils disassembler can handle NFP binary files The disassembler tool from the binutils package has been extended to handle binary files for the Netronome Flow Processor (NFP) hardware series. This functionality is required to enable further features in the bpftool Berkeley Packet Filter (BPF) code compiler. (BZ#1644391) Partially writable GOT sections are now supported on the IBM Z architecture The IBM Z binaries using the "lazy binding" feature of the loader can now be hardened by generating partially writable Global offset table (GOT) sections. These binaries require a read-write GOT, but not all entries to be writable. This update provides protection for the entries from potential attacks. (BZ#1525406) binutils now supports Arch13 processors of IBM Z This update adds support for the extensions related to the Arch13 processors into the binutils packages on IBM Z architecture. As a result, it is now possible to build kernels that can use features available in arch13-enabled CPUs on IBM Z. (BZ#1659437) Dyninst rebased to version 10.1.0 The Dyninst instrumentation library has been updated to upstream version 10.1.0. Notable changes include: Dyninst supports the Linux PowerPC Little Endian ( ppcle ) and 64-bit ARM ( aarch64 ) architectures. Start-up time has been improved by using parallel code analysis. (BZ#1648441) Date formatting updates for the Japanese Reiwa era The GNU C Library now provides correct Japanese era name formatting for the Reiwa era starting on May 1st, 2019. The time handling API data has been updated, including the data used by the strftime and strptime functions. All APIs will correctly print the Reiwa era including when strftime is used along with one of the era conversion specifiers such as %EC , %EY , or %Ey . (BZ#1577438) Performance Co-Pilot rebased to version 4.3.2 In RHEL 8.1, the Performance Co-Pilot (PCP) tool has been updated to upstream version 4.3.2. Notable improvements include: New metrics have been added - Linux kernel entropy, pressure stall information, Nvidia GPU statistics, and more. Tools such as pcp-dstat , pcp-atop , the perfevent PMDA, and others have been updated to report the new metrics. The pmseries and pmproxy utilities for a performant PCP integration with Grafana have been updated. This release is backward compatible for libraries, over-the-wire protocol and on-disk PCP archive format. ( BZ#1685302 ) 6.1.13. Identity Management IdM now supports Ansible roles and modules for installation and management This update introduces the ansible-freeipa package, which provides Ansible roles and modules for Identity Management (IdM) deployment and management. You can use Ansible roles to install and uninstall IdM servers, replicas, and clients. You can use Ansible modules to manage IdM groups, topology, and users. There are also example playbooks available. 
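As a pointer for the ansible-freeipa item above: the package installs its roles under the standard Ansible role path and ships sample playbooks; the paths and playbook name below are assumptions based on the usual packaging layout, not taken from the original note.

yum install ansible-freeipa
# Roles (ipaserver, ipareplica, ipaclient) are expected under /usr/share/ansible/roles/
# and sample playbooks under /usr/share/doc/ansible-freeipa/playbooks/ (paths assumed):
ls /usr/share/ansible/roles/
ansible-playbook -i inventory /usr/share/doc/ansible-freeipa/playbooks/install-client.yml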
This update simplifies the installation and configuration of IdM based solutions. (JIRA:RHELPLAN-2542) New tool to test the overall fitness of IdM deployment: Healthcheck This update introduces the Healthcheck tool in Identity Management (IdM). The tool provides tests verifying that the current IdM server is configured and running correctly. The major areas currently covered are: * Certificate configuration and expiration dates * Replication errors * Replication topology * AD Trust configuration * Service status * File permissions of important configuration files * Filesystem space The Healthcheck tool is available in the command-line interface (CLI). (JIRA:RHELPLAN-13066) IdM now supports renewing expired system certificates when the server is offline With this enhancement, administrators can renew expired system certificates when Identity Management (IdM) is offline. When a system certificate expires, IdM fails to start. The new ipa-cert-fix command replaces the workaround to manually set the date back to proceed with the renewal process. As a result, the downtime and support costs reduce in the mentioned scenario. (JIRA:RHELPLAN-13074) Identity Management supports trust with Windows Server 2019 When using Identity Management, you can now establish a supported forest trust to Active Directory forests that run by Windows Server 2019. The supported forest and domain functional levels are unchanged and supported up to level Windows Server 2016. (JIRA:RHELPLAN-15036) samba rebased to version 4.10.4 The samba packages have been upgraded to upstream version 4.10.4, which provides a number of bug fixes and enhancements over the version: Samba 4.10 fully supports Python 3. Note that future Samba versions will not have any runtime support for Python 2. The JavaScript Object Notation (JSON) logging feature now logs the Windows event ID and logon type for authentication messages. The new vfs_glusterfs_fuse file system in user space (FUSE) module improves the performance when Samba accesses a GlusterFS volume. To enable this module, add glusterfs_fuse to the vfs_objects parameter of the share in the /etc/samba/smb.conf file. Note that vfs_glusterfs_fuse does not replace the existing vfs_glusterfs module. The server message block (SMB) client Python bindings are now deprecated and will be removed in a future Samba release. This only affects users who use the Samba Python bindings to write their own utilities. Samba automatically updates its tdb database files when the smbd , nmbd , or winbind service starts. Back up the databases files before starting Samba. Note that Red Hat does not support downgrading tdb database files. For further information about notable changes, read the upstream release notes before updating: https://www.samba.org/samba/history/samba-4.10.0.html (BZ#1638001) Updated system-wide certificate store location for OpenLDAP The default location for trusted CAs for OpenLDAP has been updated to use the system-wide certificate store ( /etc/pki/ca-trust/source ) instead of /etc/openldap/certs . This change has been made to simplify the setting up of CA trust. No additional setup is required to set up CA trust, unless you have service-specific requirements. For example, if you require an LDAP server's certificate to be only trusted for LDAP client connections, in this case you must set up the CA certificates as you did previously. 
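Following the OpenLDAP change above, a CA certificate only needs to be added to the system-wide store to be trusted for LDAP client connections; a minimal sketch (the certificate file name is hypothetical):

cp example-ca.crt /etc/pki/ca-trust/source/anchors/
update-ca-trust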
(JIRA:RHELPLAN-7109) New ipa-crl-generation commands have been introduced to simplify managing IdM CRL master This update introduces the ipa-crl-generation status/enable/disable commands. These commands, run by the root user, simplify work with the Certificate Revocation List (CRL) in IdM. Previously, moving the CRL generation master from one IdM CA server to another was a lengthy, manual and error-prone procedure. The ipa-crl-generation status command checks if the current host is the CRL generation master. The ipa-crl-generation enable command makes the current host the CRL generation master in IdM if the current host is an IdM CA server. The ipa-crl-generation disable command stops CRL generation on the current host. Additionally, the ipa-server-install --uninstall command now includes a safeguard checking whether the host is the CRL generation master. This way, IdM ensures that the system administrator does not remove the CRL generation master from the topology. (JIRA:RHELPLAN-13068) OpenID Connect support in keycloak-httpd-client-install The keycloak-httpd-client-install identity provider previously supported only the SAML (Security Assertion Markup Language) authentication with the mod_auth_mellon authentication module. This rebase introduces the mod_auth_openidc authentication module support, which allows you to configure also the OpenID Connect authentication. The keycloak-httpd-client-install identity provider allows an apache instance to be configured as an OpenID Connect client by configuring mod_auth_openidc . (BZ#1553890) Setting up IdM as a hidden replica is now available as a Technology Preview This enhancement enables administrators to set up an Identity Management (IdM) replica as a hidden replica. A hidden replica is an IdM server that has all services running and available. However, it is not advertised to other clients or masters because no SRV records exist for the services in DNS, and LDAP server roles are not enabled. Therefore, clients cannot use service discovery to detect hidden replicas. Hidden replicas are primarily designed for dedicated services that can otherwise disrupt clients. For example, a full backup of IdM requires to shut down all IdM services on the master or replica. Since no clients use a hidden replica, administrators can temporarily shut down the services on this host without affecting any clients. Other use cases include high-load operations on the IdM API or the LDAP server, such as a mass import or extensive queries. To install a new hidden replica, use the ipa-replica-install --hidden-replica command. To change the state of an existing replica, use the ipa server-state command. ( BZ#1719767 ) SSSD now enforces AD GPOs by default The default setting for the SSSD option ad_gpo_access_control is now enforcing . In RHEL 8, SSSD enforces access control rules based on Active Directory Group Policy Objects (GPOs) by default. Red Hat recommends ensuring GPOs are configured correctly in Active Directory before upgrading from RHEL 7 to RHEL 8. If you would not like to enforce GPOs, change the value of the ad_gpo_access_control option in the /etc/sssd/sssd.conf file to permissive . (JIRA:RHELPLAN-51289) 6.1.14. Desktop Modified workspace switcher in GNOME Classic Workspace switcher in the GNOME Classic environment has been modified. The switcher is now located in the right part of the bottom bar, and it is designed as a horizontal strip of thumbnails. Switching between workspaces is possible by clicking on the required thumbnail. 
Alternatively, you can also use the combination of Ctrl + Alt + down/up arrow keys to switch between workspaces. The content of the active workspace is shown in the left part of the bottom bar in form of the window list . When you press the Super key within the particular workspace, you can see the window picker , which includes all windows that are open in this workspace. However, the window picker no longer displays the following elements that were available in the release of RHEL: dock (vertical bar on the left side of the screen) workspace switcher (vertical bar on the right side of the screen) search entry For particular tasks that were previously achieved with the help of these elements, adopt the following approaches: To launch applications, instead of using dock , you can: Use the Applications menu on the top bar Press the kdb:[Alt + F2] keys to make the Enter a Command screen appear, and write the name of the executable into this screen. To switch between workspaces, instead of using the vertical workspace switcher , use the horizontal workspace switcher in the right bottom bar. If you require the search entry or the vertical workspace switcher , use GNOME Standard environment instead of GNOME Classic. ( BZ#1704360 ) 6.1.15. Graphics infrastructures DRM rebased to Linux kernel version 5.1 The Direct Rendering Manager (DRM) kernel graphics subsystem has been rebased to upstream Linux kernel version 5.1, which provides a number of bug fixes and enhancements over the version. Most notably: The mgag200 driver has been updated. The driver continues providing support for HPE Proliant Gen10 Systems, which use Matrox G200 eH3 GPUs. The updated driver also supports current and new Dell EMC PowerEdge Servers. The nouveau driver has been updated to provide hardware enablement to current and future Lenovo platforms that use NVIDIA GPUs. The i915 display driver has been updated for continued support of current and new Intel GPUs. Bug fixes for Aspeed AST BMC display chips have been added. Support for AMD Raven 2 set of Accelerated Processing Units (APUs) has been added. Support for AMD Picasso APUs has been added. Support for AMD Vega GPUs has been added. Support for Intel Amber Lake-Y and Intel Comet Lake-U GPUs has been added. (BZ#1685552) Support for AMD Picasso graphic cards This update introduces the amdgpu graphics driver. As a result AMD Picasso graphics cards are now fully supported on RHEL 8. (BZ#1685427) 6.1.16. The web console Enabling and disabling SMT Simultaneous Multi-Threading (SMT) configuration is now available in RHEL 8. Disabling SMT in the web console allows you to mitigate a class of CPU security vulnerabilities such as: Microarchitectural Data Sampling L1 Terminal Fault Attack ( BZ#1678956 ) Adding a search box in the Services page The Services page now has a search box for filtering services by: Name Description State In addition, service states have been merged into one list. The switcher buttons at the top of the page have also been changed to tabs to improve user experience of the Services page. ( BZ#1657752 ) Adding support for firewall zones The firewall settings on the Networking page now supports: Adding and removing zones Adding or removing services to arbitrary zones and Configuring custom ports in addition to firewalld services. ( BZ#1678473 ) Adding improvements to Virtual Machines configuration With this update, the RHEL 8 web console includes a lot of improvements in the Virtual Machines page. 
You can now: Manage various types of storage pools Configure VM autostart Import existing qcow images Install VMs through PXE boot Change memory allocation Pause/resume VMs Configure cache characteristics (directsync, writeback) Change the boot order ( BZ#1658847 ) 6.1.17. Red Hat Enterprise Linux system roles A new storage role added to RHEL system roles The storage role has been added to RHEL system roles provided by the rhel-system-roles package. The storage role can be used to manage local storage using Ansible. Currently, the storage role supports the following types of tasks: Managing file systems on whole disks Managing LVM volume groups Managing logical volumes and their file systems For more information, see Managing file systems and Configuring and managing logical volumes . (BZ#1691966) 6.1.18. Virtualization WALinuxAgent rebased to version 2.2.38 The WALinuxAgent package has been upgraded to upstream version 2.2.38, which provides a number of bug fixes and enhancements over the version. In addition, WALinuxAgent is no longer compatible with Python 2, and applications dependant on Python 2. As a result, applications and extensions written in Python 2 will need to be converted to Python 3 to establish compatibility with WALinuxAgent . ( BZ#1722848 ) Windows automatically finds the needed virtio-win drivers Windows can now automatically find the virtio-win drivers it needs from the driver ISO without requiring the user to select the folder in which they are located. ( BZ#1223668 ) KVM supports 5-level paging With Red Hat Enterprise Linux 8, KVM virtualization supports the 5-level paging feature. On selected host CPUs, this significantly increases the physical and virtual address space that the host and guest systems can use. (BZ#1526548) Smart card sharing is now supported on Windows guests with ActivClient drivers This update adds support for smart card sharing in virtual machines (VMs) that use a Windows guest OS and ActivClient drivers. This enables smart card authentication for user logins using emulated or shared smart cards on these VMs. (BZ#1615840) New options have been added for virt-xml The virt-xml utility can now use the following command-line options: --no-define - Changes done to the virtual machine (VM) by the virt-xml command are not saved into persistent configuration. --start - Starts the VM after performing requested changes. Using these two options together allows users to change the configuration of a VM and start the VM with the new configuration without making the changes persistent. For example, the following command changes the boot order of the testguest VM to network for the boot, and initiates the boot: (JIRA:RHELPLAN-13960) IBM z14 GA2 CPUs supported by KVM With this update, KVM supports the IBM z14 GA2 CPU model. This makes it possible to create virtual machines on IBM z14 GA2 hosts that use RHEL 8 as the host OS with an IBM z14 GA2 CPU in the guest. (JIRA:RHELPLAN-13649) Nvidia NVLink2 is now compatible with virtual machines on IBM POWER9 Nvidia VGPUs that support the NVLink2 feature can now be assigned to virtual machines (VMs) running in a RHEL 8 host on an IBM POWER9 system. This makes it possible for these VMs to use the full performance potential of NVLink2. (JIRA:RHELPLAN-12811) 6.2. 
New Drivers Network Drivers Serial Line Internet Protocol support (slip.ko.xz) Platform CAN bus driver for Bosch C_CAN controller (c_can_platform.ko.xz) virtual CAN interface (vcan.ko.xz) Softing DPRAM CAN driver (softing.ko.xz) serial line CAN interface (slcan.ko.xz) CAN driver for EMS Dr. Thomas Wuensche CAN/USB interfaces (ems_usb.ko.xz) CAN driver for esd CAN-USB/2 and CAN-USB/Micro interfaces (esd_usb2.ko.xz) Socket-CAN driver for SJA1000 on the platform bus (sja1000_platform.ko.xz) Socket-CAN driver for PLX90xx PCI-bridge cards with the SJA1000 chips (plx_pci.ko.xz) Socket-CAN driver for EMS CPC-PCI/PCIe/104P CAN cards (ems_pci.ko.xz) Socket-CAN driver for KVASER PCAN PCI cards (kvaser_pci.ko.xz) Intel(R) 2.5G Ethernet Linux Driver (igc.ko.xz) Realtek 802.11ac wireless PCI driver (rtwpci.ko.xz) Realtek 802.11ac wireless core module (rtw88.ko.xz) MediaTek MT76 devices support (mt76.ko.xz) MediaTek MT76x0U (USB) support (mt76x0u.ko.xz) MediaTek MT76x2U (USB) support (mt76x2u.ko.xz) Graphics Drivers and Miscellaneous Drivers Virtual Kernel Mode Setting (vkms.ko.xz) Intel GTT (Graphics Translation Table) routines (intel-gtt.ko.xz) Xen frontend/backend page directory based shared buffer handling (xen-front-pgdir-shbuf.ko.xz) LED trigger for audio mute control (ledtrig-audio.ko.xz) Host Wireless Adapter Radio Control Driver (hwa-rc.ko.xz) Network Block Device (nbd.ko.xz) Pericom PI3USB30532 Type-C mux driver (pi3usb30532.ko.xz) Fairchild FUSB302 Type-C Chip Driver (fusb302.ko.xz) TI TPS6598x USB Power Delivery Controller Driver (tps6598x.ko.xz) Intel PCH Thermal driver (intel_pch_thermal.ko.xz) PCIe AER software error injector (aer_inject.ko.xz) Simple stub driver for PCI SR-IOV PF device (pci-pf-stub.ko.xz) mISDN Digital Audio Processing support (mISDN_dsp.ko.xz) ISDN layer 1 for Cologne Chip HFC-4S/8S chips (hfc4s8s_l1.ko.xz) ISDN4Linux: Call diversion support (dss1_divert.ko.xz) CAPI4Linux: Userspace /dev/capi20 interface (capi.ko.xz) USB Driver for Gigaset 307x (bas_gigaset.ko.xz) ISDN4Linux: Driver for HYSDN cards (hysdn.ko.xz) mISDN Digital Audio Processing support (mISDN_dsp.ko.xz) mISDN driver for Winbond w6692 based cards (w6692.ko.xz) mISDN driver for CCD's hfc-pci based cards (hfcpci.ko.xz) mISDN driver for hfc-4s/hfc-8s/hfc-e1 based cards (hfcmulti.ko.xz) mISDN driver for NETJet (netjet.ko.xz) mISDN driver for AVM FRITZ!CARD PCI ISDN cards (avmfritz.ko.xz) Storage Drivers NVMe over Fabrics TCP host (nvme-tcp.ko.xz) NVMe over Fabrics TCP target (nvmet-tcp.ko.xz) device-mapper writecache target (dm-writecache.ko.xz) 6.3. Updated Drivers Network Driver Updates QLogic FastLinQ 4xxxx Ethernet Driver (qede.ko.xz) has been updated to version 8.37.0.20. QLogic FastLinQ 4xxxx Core Module (qed.ko.xz) has been updated to version 8.37.0.20. Broadcom BCM573xx network driver (bnxt_en.ko.xz) has been updated to version 1.10.0. QLogic BCM57710/57711/57711E/57712/57712_MF/57800/57800_MF/57810/57810_MF/57840/57840_MF Driver (bnx2x.ko.xz) has been updated to version 1.713.36-0. Intel(R) Gigabit Ethernet Network Driver (igb.ko.xz) has been updated to version 5.6.0-k. Intel(R) 10 Gigabit Virtual Function Network Driver (ixgbevf.ko.xz) has been updated to version 4.1.0-k-rh8.1.0. Intel(R) 10 Gigabit PCI Express Network Driver (ixgbe.ko.xz) has been updated to version 5.1.0-k-rh8.1.0. Intel(R) Ethernet Switch Host Interface Driver (fm10k.ko.xz) has been updated to version 0.26.1-k. Intel(R) Ethernet Connection E800 Series Linux Driver (ice.ko.xz) has been updated to version 0.7.4-k. 
Intel(R) Ethernet Connection XL710 Network Driver (i40e.ko.xz) has been updated to version 2.8.20-k. The Netronome Flow Processor (NFP) driver (nfp.ko.xz) has been updated to version 4.18.0-147.el8.x86_64. Elastic Network Adapter (ENA) (ena.ko.xz) has been updated to version 2.0.3K. Graphics and Miscellaneous Driver Updates Standalone drm driver for the VMware SVGA device (vmwgfx.ko.xz) has been updated to version 2.15.0.0. hpe watchdog driver (hpwdt.ko.xz) has been updated to version 2.0.2. Storage Driver Updates Driver for HP Smart Array Controller version 3.4.20-170-RH3 (hpsa.ko.xz) has been updated to version 3.4.20-170-RH3. LSI MPT Fusion SAS 3.0 Device Driver (mpt3sas.ko.xz) has been updated to version 28.100.00.00. Emulex LightPulse Fibre Channel SCSI driver 12.2.0.3 (lpfc.ko.xz) has been updated to version 0:12.2.0.3. QLogic QEDF 25/40/50/100Gb FCoE Driver (qedf.ko.xz) has been updated to version 8.37.25.20. Cisco FCoE HBA Driver (fnic.ko.xz) has been updated to version 1.6.0.47. QLogic Fibre Channel HBA Driver (qla2xxx.ko.xz) has been updated to version 10.01.00.15.08.1-k1. Driver for Microsemi Smart Family Controller version 1.2.6-015 (smartpqi.ko.xz) has been updated to version 1.2.6-015. QLogic FastLinQ 4xxxx iSCSI Module (qedi.ko.xz) has been updated to version 8.33.0.21. Broadcom MegaRAID SAS Driver (megaraid_sas.ko.xz) has been updated to version 07.707.51.00-rc1. 6.4. Bug fixes This part describes bugs fixed in Red Hat Enterprise Linux 8.1 that have a significant impact on users. 6.4.1. Installer and image creation Using the version or inst.version kernel boot parameters no longer stops the installation program Previously, booting the installation program from the kernel command line using the version or inst.version boot parameters printed the version, for example anaconda 30.25.6 , and stopped the installation program. With this update, the version and inst.version parameters are ignored when the installation program is booted from the kernel command line, and as a result, the installation program is not stopped. (BZ#1637472) The xorg-x11-drv-fbdev , xorg-x11-drv-vesa , and xorg-x11-drv-vmware video drivers are now installed by default Previously, workstations with specific models of NVIDIA graphics cards and workstations with specific AMD accelerated processing units did not display the graphical login window after a RHEL 8.0 Server installation. This issue also impacted virtual machines relying on EFI for graphics support, such as Hyper-V. With this update, the xorg-x11-drv-fbdev , xorg-x11-drv-vesa , and xorg-x11-drv-vmware video drivers are installed by default and the graphical login window is displayed after a RHEL 8.0 and later Server installation. (BZ#1687489) Rescue mode no longer fails without displaying an error message Previously, running rescue mode on a system with no Linux partitions resulted in the installation program failing with an exception. With this update, the installation program displays the error message "You don't have any Linux partitions" when a system with no Linux partitions is detected. (BZ#1628653) The installation program now sets the lvm_metadata_backup Blivet flag for image installations Previously, the installation program failed to set the lvm_metadata_backup Blivet flag for image installations. As a consequence, LVM backup files were located in the /etc/lvm/ subdirectory after an image installation. 
With this update, the installation program sets the lvm_metadata_backup Blivet flag, and as a result, there are no LVM backup files located in the /etc/lvm/ subdirectory after an image installation. (BZ#1673901)
The RHEL 8 installation program now handles strings from RPM
Previously, when the python3-rpm library returned a string, the installation program failed with an exception. With this update, the installation program can now handle strings from RPM. ( BZ#1689909 )
The inst.repo kernel boot parameter now works for a repository on a hard drive that has a non-root path
Previously, the RHEL 8 installation process could not proceed without manual intervention if the inst.repo=hd:<device>:<path> kernel boot parameter was pointing to a repository (not an ISO image) on a hard drive, and a non-root (/) path was used. With this update, the installation program can now propagate any <path> for a repository located on a hard drive, ensuring the installation proceeds as normal. ( BZ#1689194 )
The --changesok option now allows the installation program to change the root password
Previously, using the --changesok option when installing Red Hat Enterprise Linux 8 from a Kickstart file did not allow the installation program to change the root password. With this update, the --changesok option is successfully passed by Kickstart, and as a result, users specifying the pwpolicy root --changesok option in their Kickstart file can now change the root password using the GUI, even if the password has already been set by Kickstart. (BZ#1584145)
Image building no longer fails when using the lorax-composer API
Previously, when using the lorax-composer API from a subscribed RHEL system, the image building process always failed because Anaconda could not access the repositories: the subscription certificates from the host were not passed through. To fix the issue, update the lorax-composer , pykickstart , and Anaconda packages, which allows the supported CDN certificates to be passed through. ( BZ#1663950 )
6.4.2. Shells and command-line tools
systemd in debug mode no longer produces unnecessary log messages
When using the systemd system and service manager in debug mode, systemd previously produced unnecessary and harmless debug log messages. With this update, systemd has been fixed to no longer produce these unnecessary debug messages. ( BZ#1658691 )
6.4.3. Security
fapolicyd no longer prevents RHEL updates
When an update replaces the binary of a running application, the kernel modifies the application binary path in memory by appending the " (deleted)" suffix. Previously, the fapolicyd file access policy daemon treated such applications as untrusted, and prevented them from opening and executing any other files. As a consequence, the system was sometimes unable to boot after applying updates. With the release of the RHBA-2020:5241 advisory, fapolicyd ignores the suffix in the binary path so the binary can match the trust database. As a result, fapolicyd enforces the rules correctly and the update process can finish. (BZ#1897092)
SELinux no longer prevents Tomcat from sending emails
Prior to this update, the SELinux policy did not allow the tomcat_t and pki_tomcat_t domains to connect to SMTP ports. Consequently, SELinux prevented applications on the Tomcat server from sending emails. With this update of the selinux-policy packages, the policy allows processes from the Tomcat domains to access SMTP ports, and SELinux no longer prevents applications on Tomcat from sending emails.
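To verify the change described above, you can query the active policy for the relevant allow rule with the sesearch utility from the setools-console package; this command is an illustrative check, not part of the original advisory:
# sesearch -A -s tomcat_t -t smtp_port_t -c tcp_socket -p name_connect
A matching allow rule in the output indicates that processes in the tomcat_t domain may connect to ports labeled smtp_port_t.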
(BZ#1687798) lockdev now runs correctly with SELinux Previously, the lockdev tool could not transition into the lockdev_t context even though the SELinux policy for lockdev_t was defined. As a consequence, lockdev was allowed to run in the 'unconfined_t' domain when used by the root user. This introduced vulnerabilities into the system. With this update, the transition into lockdev_t has been defined, and lockdev can now be used correctly with SELinux in enforcing mode. (BZ#1673269) iotop now runs correctly with SELinux Previously, the iotop tool could not transition into the iotop_t context even though the SELinux policy for iotop_t was defined. As a consequence, iotop was allowed to run in the 'unconfined_t' domain when used by the root user. This introduced vulnerabilities into the system. With this update, the transition into iotop_t has been defined, and iotop can now be used correctly with SELinux in enforcing mode. (BZ#1671241) SELinux now properly handles NFS 'crossmnt' The NFS protocol with the crossmnt option automatically creates internal mounts when a process accesses a subdirectory already used as a mount point on the server. Previously, this caused SELinux to check whether the process accessing an NFS mounted directory had a mount permission, which caused AVC denials. In the current version, SELinux permission checking skips these internal mounts. As a result, accessing an NFS directory that is mounted on the server side does not require mount permission. (BZ#1647723) An SELinux policy reload no longer causes false ENOMEM errors Reloading the SELinux policy previously caused the internal security context lookup table to become unresponsive. Consequently, when the kernel encountered a new security context during a policy reload, the operation failed with a false "Out of memory" (ENOMEM) error. With this update, the internal Security Identifier (SID) lookup table has been redesigned and no longer freezes. As a result, the kernel no longer returns misleading ENOMEM errors during an SELinux policy reload. (BZ#1656787) Unconfined domains can now use smc_socket Previously, the SELinux policy did not have the allow rules for the smc_socket class. Consequently, SELinux blocked an access to smc_socket for the unconfined domains. With this update, the allow rules have been added to the SELinux policy. As a result, the unconfined domains can use smc_socket . (BZ#1683642) Kerberos cleanup procedures are now compatible with GSSAPIDelegateCredentials and default cache from krb5.conf Previously, when the default_ccache_name option was configured in the krb5.conf file, the kerberos credentials were not cleaned up with the GSSAPIDelegateCredentials and GSSAPICleanupCredentials options set. This bug is now fixed by updating the source code to clean up credential caches in the described use cases. After the configuration, the credential cache gets cleaned up on exit if the user configures it. ( BZ#1683295 ) OpenSSH now correctly handles PKCS #11 URIs for keys with mismatching labels Previously, specifying PKCS #11 URIs with the object part (key label) could prevent OpenSSH from finding related objects in PKCS #11. With this update, the label is ignored if the matching objects are not found, and keys are matched only by their IDs. As a result, OpenSSH is now able to use keys on smart cards referenced using full PKCS #11 URIs. 
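As an illustration of the PKCS #11 URI handling described above, a key stored on a smart card can be referenced directly on the ssh command line. The module path, object ID, and host below are placeholder values, not taken from this document:
$ ssh -i "pkcs11:id=%01?module-path=/usr/lib64/pkcs11/opensc-pkcs11.so" user@host.example.com
The same URI can also be used with the IdentityFile option in ~/.ssh/config.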
(BZ#1671262) SSH connections with VMware-hosted systems now work properly The version of the OpenSSH suite introduced a change of the default IP Quality of Service (IPQoS) flags in SSH packets, which was not correctly handled by the VMware virtualization platform. Consequently, it was not possible to establish an SSH connection with systems on VMware. The problem has been fixed in VMWare Workstation 15, and SSH connections with VMware-hosted systems now work correctly. (BZ#1651763) curve25519-sha256 is now supported by default in OpenSSH Previously, the curve25519-sha256 SSH key exchange algorithm was missing in the system-wide crypto policies configurations for the OpenSSH client and server even though it was compliant with the default policy level. As a consequence, if a client or a server used curve25519-sha256 and this algorithm was not supported by the host, the connection might fail. This update of the crypto-policies package fixes the bug, and SSH connections no longer fail in the described scenario. ( BZ#1678661 ) Ansible playbooks for OSPP and PCI-DSS profiles no longer exit after encountering a failure Previously, Ansible remediations for the Security Content Automation Protocol (OSPP) and the Payment Card Industry Data Security Standard (PCI-DSS) profiles failed due to incorrect ordering and other errors in the remediations. This update fixes the ordering and errors in generated Ansible remediation playbooks, and Ansible remediations now work correctly. ( BZ#1741455 ) Audit transport=KRB5 now works properly Prior to this update, Audit KRB5 transport mode did not work correctly. Consequently, Audit remote logging using the Kerberos peer authentication did not work. With this update, the problem has been fixed, and Audit remote logging now works properly in the described scenario. ( BZ#1730382 ) 6.4.4. Networking The kernel now supports destination MAC addresses in bitmap:ipmac , hash:ipmac , and hash:mac IP set types Previously, the kernel implementation of the bitmap:ipmac , hash:ipmac , and hash:mac IP set types only allowed matching on the source MAC address, while destination MAC addresses could be specified, but were not matched against set entries. As a consequence, administrators could create iptables rules that used a destination MAC address in one of these IP set types, but packets matching the given specification were not actually classified. With this update, the kernel compares the destination MAC address and returns a match if the specified classification corresponds to the destination MAC address of a packet. As a result, rules that match packets against the destination MAC address now work correctly. (BZ#1649087) The gnome-control-center application now supports editing advanced IPsec settings Previously, the gnome-control-center application only displayed the advanced options of IPsec VPN connections. Consequently, users could not change these settings. With this update, the fields in the advanced settings are now editable, and users can save the changes. ( BZ#1697329 ) The TRACE target in the iptables-extensions(8) man page has been updated Previously, the description of the TRACE target in the iptables-extensions(8) man page referred only to the compat variant, but Red Hat Enterprise Linux 8 uses the nf_tables variant. As a consequence, the man page did not reference the xtables-monitor command-line utility to display TRACE events. The man page has been updated and, as a result, now mentions xtables-monitor . 
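To see the TRACE target and xtables-monitor working together, you can add a temporary rule in the raw table and watch the trace events; the ICMP match is only an example, and the rule should be removed afterwards:
# iptables -t raw -I PREROUTING -p icmp -j TRACE
# xtables-monitor --trace
# iptables -t raw -D PREROUTING -p icmp -j TRACE
Each traced packet is printed together with the chain and rule it traverses.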
( BZ#1658734 ) Error logging in the ipset service has been improved Previously, the ipset service did not report configuration errors with a meaningful severity in the systemd logs. The severity level for invalid configuration entries was only informational , and the service did not report errors for an unusable configuration. As a consequence, it was difficult for administrators to identify and troubleshoot issues in the ipset service's configuration. With this update, ipset reports configuration issues as warnings in systemd logs and, if the service fails to start, it logs an entry with the error severity including further details. As a result, it is now easier to troubleshoot issues in the configuration of the ipset service. ( BZ#1683711 ) The ipset service now ignores invalid configuration entries during startup The ipset service stores configurations as sets in separate files. Previously, when the service started, it restored the configuration from all sets in a single operation, without filtering invalid entries that can be inserted by manually editing a set. As a consequence, if a single configuration entry was invalid, the service did not restore further unrelated sets. The problem has been fixed. As a result, the ipset service detects and removes invalid configuration entries during the restore operation, and ignores invalid configuration entries. ( BZ#1683713 ) The ipset list command reports consistent memory for hash set types When you add entries to a hash set type, the ipset utility must resize the in-memory representation to for new entries by allocating an additional memory block. Previously, ipset set the total per-set allocated size to only the size of the new block instead of adding the value to the current in-memory size. As a consequence, the ip list command reported an inconsistent memory size. With this update, ipset correctly calculates the in-memory size. As a result, the ipset list command now displays the correct in-memory size of the set, and the output matches the actual allocated memory for hash set types. (BZ#1714111) The kernel now correctly updates PMTU when receiving ICMPv6 Packet Too Big message In certain situations, such as for link-local addresses, more than one route can match a source address. Previously, the kernel did not check the input interface when receiving Internet Control Message Protocol Version 6 (ICMPv6) packets. Therefore, the route lookup could return a destination that did not match the input interface. Consequently, when receiving an ICMPv6 Packet Too Big message, the kernel could update the Path Maximum Transmission Unit (PMTU) for a different input interface. With this update, the kernel checks the input interface during the route lookup. As a result, the kernel now updates the correct destination based on the source address and PMTU works as expected in the described scenario. (BZ#1721961) The /etc/hosts.allow and /etc/hosts.deny files no longer contain outdated references to removed tcp_wrappers Previously, the /etc/hosts.allow and /etc/hosts.deny files contained outdated information about the tcp_wrappers package. The files are removed in RHEL 8 as they are no longer needed for tcp_wrappers which is removed. ( BZ#1663556 ) 6.4.5. Kernel tpm2-abrmd-selinux now has a proper dependency on selinux-policy-targeted Previously, the tpm2-abrmd-selinux package had a dependency on the selinux-policy-base package instead of the selinux-policy-targeted package. 
Consequently, if a system had selinux-policy-minimum installed instead of selinux-policy-targeted , installation of the tpm2-abrmd-selinux package failed. This update fixes the bug, and tpm2-abrmd-selinux can be installed correctly in the described scenario. (BZ#1642000)
All /sys/kernel/debug files can be accessed
Previously, the return value for the "Operation not permitted" (EPERM) error remained set until the end of the function regardless of the error. Consequently, any attempts to access certain /sys/kernel/debug (debugfs) files failed with an unwarranted EPERM error. This update moves the EPERM return value to the following block. As a result, debugfs files can be accessed without problems in the described scenario. (BZ#1686755)
NICs are no longer affected by a bug in the qede driver for the 41000 and 45000 FastLinQ series
Previously, firmware upgrade and debug data collection operations failed due to a bug in the qede driver for the 41000 and 45000 FastLinQ series. This made the NIC unusable, and a reboot (PCI reset) of the host was required to make the NIC operational again. This issue could occur in the following scenarios:
during an upgrade of the firmware of the NIC using the inbox driver
during the collection of debug data, for example when running the ethtool -d ethx command, or while running an sosreport command that included ethtool -d ethx
during the initiation of automatic debug data collection by the inbox driver, such as on an I/O timeout, a Mail Box Command time-out, or a Hardware Attention
To fix this issue, Red Hat released an erratum via a Red Hat Bug Advisory (RHBA). Before the release of the RHBA, it was recommended to create a case at https://access.redhat.com/support to request a supported fix. (BZ#1697310)
The generic EDAC GHES driver now detects which DIMM reported an error
Previously, the EDAC GHES driver was not able to detect which DIMM reported an error, and an error message appeared as a consequence. The driver has now been updated to scan the DMI (SMBIOS) tables to detect the specific DIMM that matches the Desktop Management Interface (DMI) handle 0x<ADDRESS> . As a result, EDAC GHES correctly detects which specific DIMM reported a hardware error. (BZ#1721386)
podman is able to checkpoint containers in RHEL 8
Previously, the version of the Checkpoint and Restore In Userspace (CRIU) package was outdated. Consequently, CRIU did not support container checkpoint and restore functionality, and the podman utility failed to checkpoint containers, displaying an error message when the podman container checkpoint command was run. This update fixes the problem by upgrading the version of the CRIU package. As a result, podman now supports container checkpoint and restore functionality. (BZ#1689746)
early-kdump and standard kdump no longer fail if the add_dracutmodules+=earlykdump option is used in dracut.conf
Previously, an inconsistency occurred between the kernel version being installed for early-kdump and the kernel version the initramfs was generated for. As a consequence, booting failed when early-kdump was enabled. In addition, if early-kdump detected that it was being included in a standard kdump initramfs image, it forced an exit. Therefore, the standard kdump service also failed when trying to rebuild the kdump initramfs if early-kdump was added as a default dracut module. As a consequence, early-kdump and standard kdump both failed. With this update, early-kdump uses a consistent kernel name during the installation; only the version differs from that of the running kernel.
Also, the standard kdump service will forcibly drop early-kdump to avoid image generation failure. As a result, early-kdump and standard kdump no longer fail in the described scenario. (BZ#1662911) The first kernel with SME enabled now succeeds in dumping the vmcore Previously, the encrypted memory in the first kernel with the active Secure Memory Encryption (SME) feature caused a failure of the kdump mechanism. Consequently, the first kernel was not able to dump the contents (vmcore) of its memory. With this update, the ioremap_encrypted() function has been added to remap the encrypted memory and modify the related code. As a result, the encrypted first kernel's memory is now properly accessed, and the vmcore can be dumped and parsed by the crash tools in the described scenario. (BZ#1564427) The first kernel with SEV enabled now succeeds in dumping the vmcore Previously, the encrypted memory in the first kernel with the active Secure Encrypted Virtualization (SEV) feature caused a failure of the kdump mechanism. Consequently, the first kernel was not able to dump the contents (vmcore) of its memory. With this update, the ioremap_encrypted() function has been added to remap the encrypted memory and modify the related code. As a result, the first kernel's encrypted memory is now properly accessed, and the vmcore can be dumped and parsed by the crash tools in the described scenario. (BZ#1646810) Kernel now reserves more space for SWIOTLB Previously, when Secure Encrypted Virtualization (SEV) or Secure Memory Encryption (SME) features was enabled in the kernel, the Software Input Output Translation Lookaside Buffer (SWIOTLB) technology had to be enabled as well and consumed a significant amount of memory. Consequently, the capture kernel failed to boot or got an out-of-memory error. This update fixes the bug by reserving extra crashkernel memory for SWIOTLB while SEV/SME is active. As a result, the capture kernel has more memory reserved for SWIOTLB and the bug no longer appears in the described scenario. (BZ#1728519) C-state transitions can now be disabled during hwlatdetect runs To achieve real-time performance, the hwlatdetect utility needs to be able to disable power saving in the CPU during test runs. This update allows hwlatdetect to turn off C-state transitions for the duration of the test run and hwlatdetect is now able to detect hardware latencies more accurately. ( BZ#1707505 ) 6.4.6. Hardware enablement The openmpi package can be installed now Previously, a rebase on opensm package changed its soname mechanism. As a consequence, the openmpi package could not be installed due to unresolved dependencies. This update fixes the problem. As a result, the openmpi package can be installed now without any issue. (BZ#1717289) 6.4.7. File systems and storage The RHEL 8 installation program now uses the entry ID to set the default boot entry Previously, the RHEL 8 installation program used the index of the first boot entry as the default, instead of using the entry ID. As a consequence, adding a new boot entry became the default, as it was sorted first and set to the first index. With this update, the installation program uses the entry ID to set the default boot entry, and as a result, the default entry is not changed, even if boot entries are added and sorted before the default. 
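If you want to check which entry the installer selected as the default, the GRUB environment and the default kernel can be inspected after installation; these commands are a general illustration rather than part of the original note:
# grub2-editenv list
# grubby --default-kernel
The saved_entry value printed by grub2-editenv corresponds to the ID of the default boot entry.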
( BZ#1671047 ) The system now boots successfully when SME is enabled with smartpqi Previously, the system failed to boot on certain AMD machines when the Secure Memory Encryption (SME) feature was enabled and the root disk was using the smartpqi driver. When the boot failed, the system displayed a message similar to the following in the boot log: This problem was caused by the smartpqi driver, which was falling back to the Software Input Output Translation Lookaside Buffer (SWIOTLB) because the coherent Direct Memory Access (DMA) mask was not set. With this update, the coherent DMA mask is now correctly set. As a result, the system now boots successfully when SME is enabled on machines that use the smartpqi driver for the root disk. (BZ#1712272) FCoE LUNs do not disappear after being created on the bnx2fc cards Previously, after creating a FCoE LUN on the bnx2fc cards, the FCoE LUNs were not attached correctly. As a consequence, FCoE LUNs disappeared after being created on the bnx2fc cards on RHEL 8.0. With this update, FCoE LUNs are attached correctly. As a result, it is now possible to discover the FCoE LUNs after they are created on the bnx2fc cards. (BZ#1685894) VDO volumes no longer lose deduplication advice after moving to a different-endian platform Previously, the Universal Deduplication Service (UDS) index lost all deduplication advice after moving the VDO volume to a platform that used a different endian. As a consequence, VDO was unable to deduplicate newly written data against the data that was stored before you moved the volume, leading to lower space savings. With this update, you can now move VDO volumes between platforms that use different endians without losing deduplication advice. ( BZ#1696492 ) kdump service works on large IBM POWER systems Previously, RHEL8 kdump kernel did not start. As a consequence, the kdump initrd file on large IBM POWER systems was not created. With this update, squashfs-tools-4.3-19.el8 component is added. This update adds a limit (128) to the number of CPUs which the squashfs-tools-4.3-19.el8 component can use from the available pool (instead of using all the available CPUs). This fixes the running out of resources error. As a result, kdump service now works on large IBM POWER systems. (BZ#1716278) Verbosity debug options now added to nfs.conf Previously, the /etc/nfs.conf file and the nfs.conf(5) man page did not include the following options: verbosity rpc-verbosity As a consequence, users were unaware of the availability of these debug flags. With this update, these flags are now included in the [gssd] section of the /etc/nfs.conf file and are also documented in the nfs.conf(8) man page. (BZ#1668026) 6.4.8. Dynamic programming languages, web and database servers Socket::inet_aton() can now be used from multiple threads safely Previously, the Socket::inet_aton() function, used for resolving a domain name from multiple Perl threads, called the unsafe gethostbyname() glibc function. Consequently, an incorrect IPv4 address was occasionally returned, or the Perl interpreter terminated unexpectedly. With this update, the Socket::inet_aton() implementation has been changed to use the thread-safe getaddrinfo() glibc function instead of gethostbyname() . As a result, the inet_aton() function from Perl Socket module can be used from multiple threads safely. ( BZ#1699793 , BZ#1699958 ) 6.4.9. 
Compilers and development tools gettext returns untranslated text even when out of memory Previously, the gettext() function for text localization returned the NULL value instead of text when out of memory, resulting in applications lacking text output or labels. The bug has been fixed and now, gettext() - returns untranslated text when out of memory as expected. ( BZ#1663035 ) The locale command now warns about LOCPATH being set whenever it encounters an error during execution Previously, the locale command did not provide any diagnostics for the LOCPATH environment variable when it encountered errors due to an invalid LOCPATH . The locale command is now set to warn that LOCPATH has been set any time it encounters an error during execution. As a result, locale now reports LOCPATH along with any underlying errors that it encounters. ( BZ#1701605 ) gdb now can read and correctly represent z registers in core files on aarch64 SVE Previously, the gdb component failed to read z registers from core files with aarch64 scalable vector extension (SVE) architecture. With this update, the gdb component is now able to read z registers from core files. As a result, the info register command successfully shows the z register contents. (BZ#1669953) GCC rebased to version 8.3.1 The GNU Compiler Collection (GCC) has been updated to upstream version 8.3.1. This version brings a large number of miscellaneous bug fixes. ( BZ#1680182 ) 6.4.10. Identity Management FreeRADIUS now resolves hostnames pointing to IPv6 addresses In RHEL 8 versions of FreeRADIUS, the ipaddr utility only supported IPv4 addresses. Consequently, for the radiusd daemon to resolve IPv6 addresses, a manual update of the configuration was required after an upgrade of the system from RHEL 7 to RHEL 8. This update fixes the underlying code, and ipaddr in FreeRADIUS now uses IPv6 addresses, too. ( BZ#1685546 ) The Nuxwdog service no longer fails to start the PKI server in HSM environments Previously, due to bugs, the keyutils package was not installed as a dependency of the pki-core package. Additionally, the Nuxwdog watchdog service failed to start the public key infrastructure (PKI) server in environments that use a hardware security module (HSM). These problems have been fixed. As a result, the required keyutils package is now installed automatically as a dependency, and Nuxwdog starts the PKI server as expected in environments with HSM. ( BZ#1695302 ) The IdM server now works correctly in the FIPS mode Previously, the SSL connector for Tomcat server was incompletely implemented. As a consequence, the Identity Management (IdM) server with an installed certificate server did not work on machines with the FIPS mode enabled. This bug has been fixed by adding JSSTrustManager and JSSKeyManager . As a result, the IdM server works correctly in the described scenario. Note that there are several bugs that prevent the IdM server from running in the FIPS mode in RHEL 8. This update fixes just one of them. ( BZ#1673296 ) The KCM credential cache is now suitable for a large number of credentials in a single credential cache Previously, if the Kerberos Credential Manager (KCM) contained a large number of credentials, Kerberos operations, such as kinit , failed due to a limitation of the size of entries in the database and the number of these entries. 
This update introduces the following new configuration options to the kcm section of the sssd.conf file: max_ccaches (integer) max_uid_ccaches (integer) max_ccache_size (integer) As a result, KCM can now handle a large number of credentials in a single ccache. For further information on the configuration options, see sssd-kcm man page . (BZ#1448094) Samba no longer denies access when using the sss ID mapping plug-in Previously, when you ran Samba on the domain member with this configuration and added a configuration that used the sss ID mapping back end to the /etc/samba/smb.conf file to share directories, changes in the ID mapping back end caused errors. Consequently, Samba denied access to files in certain cases, even if the user or group existed and it was known by SSSD. The problem has been fixed. As a result, Samba no longer denies access when using the sss plug-in. ( BZ#1657665 ) Default SSSD time-out values no longer conflict with each other Previously, there was a conflict between the default time-out values. The default values for the following options have been changed to improve the failover capability: dns_resolver_op_timeout - set to 2s (previously 6s) dns_resolver_timeout - set to 4s (previously 6s) ldap_opt_timeout - set to 8s (previously 6s) Also, a new dns_resolver_server_timeout option, with default value of 1000 ms has been added, which specifies the time out duration for SSSD to switch from one DNS server to another. (BZ#1382750) 6.4.11. Desktop systemctl isolate multi-user.target now displays the console prompt When running the systemctl isolate multi-user.target command from GNOME Terminal in a GNOME Desktop session, only a cursor was displayed, and not the console prompt. This update fixes gdm , and the console prompt is now displayed as expected in the described situation. ( BZ#1678627 ) 6.4.12. Graphics infrastructures The 'i915' display driver now supports display configurations up to 3x4K. Previously, it was not possible to have display configurations larger than 2x4K when using the 'i915' display driver in an Xorg session. With this update, the 'i915' driver now supports display configurations up to 3x4K. (BZ#1664969) Linux guests no longer display an error when initializing the GPU driver Previously, Linux guests returned a warning when initializing the GPU driver. This happened because Intel Graphics Virtualization Technology -g (GVT -g) only simulates the DisplayPort (DP) interface for guest and leaves the 'EDP_PSR_IMR' and 'EDP_PSR_IIR' registers as default memory-mapped I/O (MMIO) read/write registers. To resolve this issue, handlers have been added to these registers and the warning is no longer returned. (BZ#1643980) 6.4.13. The web console It is possible to login to RHEL web console with session_recording shell Previously, it was not possible for users of the tlog shell (which enables session recording) to log in to the RHEL web console. This update fixes the bug. The workaround of adding the tlog-rec-session shell to /etc/shells/ should be reverted after installing this update. (BZ#1631905) 6.4.14. Virtualization Hot-plugging PCI devices to a pcie-to-pci bridge controller works correctly Previously, if a guest virtual machine configuration contained a pcie-to-pci-bridge controller that had no endpoint devices attached to it at the time the guest was started, hot-plugging new devices to that controller was not possible. This update improves how hot-plugging legacy PCI devices on a PCIe system is handled, which prevents the problem from occurring. 
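A minimal sketch of the hot-plug scenario described above, assuming a running guest named testguest and the default libvirt network, is attaching a legacy PCI network interface to the live guest:
# virsh attach-interface testguest --type network --source default --model rtl8139 --live
Because rtl8139 is a conventional PCI device, libvirt typically places it on the pcie-to-pci-bridge controller of a Q35-based guest.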
( BZ#1619884 ) Enabling nested virtualization no longer blocks live migration Previously, the nested virtualization feature was incompatible with live migration. As a consequence, enabling nested virtualization on a RHEL 8 host prevented migrating any virtual machines (VMs) from the host, as well as saving VM state snapshots to disk. This update fixes the described problem, and the impacted VMs are now possible to migrate. ( BZ#1689216 ) 6.4.15. Supportability redhat-support-tool now creates an sosreport archive Previously, the redhat-support-tool utility was unable to create an sosreport archive. The workaround was running the sosreport command separately and then entering the redhat-support-tool addattachment -c command to upload the archive. Users can also use the web UI on Customer Portal which creates the customer case and uploads the sosreport archive. In addition, command options such as findkerneldebugs , btextract , analyze , or diagnose do not work as expected and will be fixed in a future release. ( BZ#1688274 ) 6.5. Technology Previews This part provides a list of all Technology Previews available in Red Hat Enterprise Linux 8.1. For information on Red Hat scope of support for Technology Preview features, see Technology Preview Features Support Scope . 6.5.1. Networking TIPC has full support The Transparent Inter Process Communication ( TIPC ) is a protocol specially designed for efficient communication within clusters of loosely paired nodes. It works as a kernel module and provides a tipc tool in iproute2 package to allow designers to create applications that can communicate quickly and reliably with other applications regardless of their location within the cluster. This feature is now fully supported in RHEL 8. (BZ#1581898) eBPF for tc available as a Technology Preview As a Technology Preview, the Traffic Control (tc) kernel subsystem and the tc tool can attach extended Berkeley Packet Filtering (eBPF) programs as packet classifiers and actions for both ingress and egress queueing disciplines. This enables programmable packet processing inside the kernel network data path. ( BZ#1699825 ) nmstate available as a Technology Preview Nmstate is a network API for hosts. The nmstate packages, available as a Technology Preview, provide a library and the nmstatectl command-line utility to manage host network settings in a declarative manner. The networking state is described by a pre-defined schema. Reporting of the current state and changes to the desired state both conform to the schema. For further details, see the /usr/share/doc/nmstate/README.md file and the examples in the /usr/share/doc/nmstate/examples directory. (BZ#1674456) AF_XDP available as a Technology Preview Address Family eXpress Data Path ( AF_XDP ) socket is designed for high-performance packet processing. It accompanies XDP and grants efficient redirection of programmatically selected packets to user space applications for further processing. (BZ#1633143) XDP available as a Technology Preview The eXpress Data Path (XDP) feature, which is available as a Technology Preview, provides a means to attach extended Berkeley Packet Filter (eBPF) programs for high-performance packet processing at an early point in the kernel ingress data path, allowing efficient programmable packet analysis, filtering, and manipulation. (BZ#1503672) KTLS available as a Technology Preview In Red Hat Enterprise Linux 8, Kernel Transport Layer Security (KTLS) is provided as a Technology Preview. 
KTLS handles TLS records using the symmetric encryption or decryption algorithms in the kernel for the AES-GCM cipher. KTLS also provides the interface for offloading TLS record encryption to Network Interface Controllers (NICs) that support this functionality. (BZ#1570255)
The systemd-resolved service is now available as a Technology Preview
The systemd-resolved service provides name resolution to local applications. The service implements a caching and validating DNS stub resolver, as well as a Link-Local Multicast Name Resolution (LLMNR) and Multicast DNS resolver and responder. Note that, even if the systemd package provides systemd-resolved , this service is an unsupported Technology Preview. (BZ#1906489)
6.5.2. Kernel
Control Group v2 available as a Technology Preview in RHEL 8
The Control Group v2 mechanism is a unified hierarchy control group. Control Group v2 organizes processes hierarchically and distributes system resources along the hierarchy in a controlled and configurable manner. Unlike the previous version, Control Group v2 has only a single hierarchy. This single hierarchy enables the Linux kernel to:
Categorize processes based on the role of their owner.
Eliminate issues with conflicting policies of multiple hierarchies.
Control Group v2 supports numerous controllers:
CPU controller regulates the distribution of CPU cycles. This controller implements:
Weight and absolute bandwidth limit models for normal scheduling policy.
Absolute bandwidth allocation model for real time scheduling policy.
Memory controller regulates the memory distribution. Currently, the following types of memory usages are tracked:
Userland memory - page cache and anonymous memory.
Kernel data structures such as dentries and inodes.
TCP socket buffers.
I/O controller regulates the distribution of I/O resources.
Writeback controller interacts with both Memory and I/O controllers and is Control Group v2 specific.
The information above is based on https://www.kernel.org/doc/Documentation/cgroup-v2.txt . You can refer to the same link to obtain more information about particular Control Group v2 controllers. ( BZ#1401552 )
kexec fast reboot as a Technology Preview
The kexec fast reboot feature continues to be available as a Technology Preview. Rebooting is now significantly faster thanks to kexec fast reboot . To use this feature, load the kexec kernel manually, and then reboot the operating system. ( BZ#1769727 )
eBPF available as a Technology Preview
Extended Berkeley Packet Filter (eBPF) is an in-kernel virtual machine that allows code execution in the kernel space, in a restricted sandbox environment with access to a limited set of functions. The virtual machine includes a new system call bpf() , which supports creating various types of maps, and also allows loading programs in a special assembly-like code. The code is then loaded to the kernel and translated to the native machine code with just-in-time compilation. Note that the bpf() syscall can be successfully used only by a user with the CAP_SYS_ADMIN capability, such as the root user. See the bpf (2) man page for more information. The loaded programs can be attached onto a variety of points (sockets, tracepoints, packet reception) to receive and process data. There are numerous components shipped by Red Hat that utilize the eBPF virtual machine. Each component is in a different development phase, and thus not all components are currently fully supported.
All components are available as a Technology Preview, unless a specific component is indicated as supported. The following notable eBPF components are currently available as a Technology Preview: The BPF Compiler Collection (BCC) tools package, a collection of dynamic kernel tracing utilities that use the eBPF virtual machine. The BCC tools package is available as a Technology Preview on the following architectures: the 64-bit ARM architecture, IBM Power Systems, Little Endian, and IBM Z. Note that it is fully supported on the AMD and Intel 64-bit architectures. bpftrace , a high-level tracing language that utilizes the eBPF virtual machine. The eXpress Data Path (XDP) feature, a networking technology that enables fast packet processing in the kernel using the eBPF virtual machine. (BZ#1559616) Soft-RoCE available as a Technology Preview Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE) is a network protocol which implements RDMA over Ethernet. Soft-RoCE is the software implementation of RoCE which supports two protocol versions, RoCE v1 and RoCE v2. The Soft-RoCE driver, rdma_rxe , is available as an unsupported Technology Preview in RHEL 8. (BZ#1605216) 6.5.3. Hardware enablement The igc driver available as a Technology Preview for RHEL 8 The igc Intel 2.5G Ethernet Linux wired LAN driver is now available on all architectures for RHEL 8 as a Technology Preview. The ethtool utility also supports igc wired LANs. (BZ#1495358) 6.5.4. File systems and storage NVMe/TCP is available as a Technology Preview Accessing and sharing Nonvolatile Memory Express (NVMe) storage over TCP/IP networks (NVMe/TCP) and its corresponding nvme-tcp.ko and nvmet-tcp.ko kernel modules have been added as a Technology Preview. The use of NVMe/TCP as either a storage client or a target is manageable with tools provided by the nvme-cli and nvmetcli packages. NVMe/TCP provides a storage transport option along with the existing NVMe over Fabrics (NVMe-oF) transport, which include Remote Direct Memory Access (RDMA) and Fibre Channel (NVMe/FC). (BZ#1696451) File system DAX is now available for ext4 and XFS as a Technology Preview In Red Hat Enterprise Linux 8.1, file system DAX is available as a Technology Preview. DAX provides a means for an application to directly map persistent memory into its address space. To use DAX, a system must have some form of persistent memory available, usually in the form of one or more Non-Volatile Dual In-line Memory Modules (NVDIMMs), and a file system that supports DAX must be created on the NVDIMM(s). Also, the file system must be mounted with the dax mount option. Then, an mmap of a file on the dax-mounted file system results in a direct mapping of storage into the application's address space. (BZ#1627455) OverlayFS OverlayFS is a type of union file system. It enables you to overlay one file system on top of another. Changes are recorded in the upper file system, while the lower file system remains unmodified. This allows multiple users to share a file-system image, such as a container or a DVD-ROM, where the base image is on read-only media. OverlayFS remains a Technology Preview under most circumstances. As such, the kernel logs warnings when this technology is activated. Full support is available for OverlayFS when used with supported container engines ( podman , cri-o , or buildah ) under the following restrictions: OverlayFS is supported for use only as a container engine graph driver or other specialized use cases, such as squashed kdump initramfs. 
Its use is supported primarily for container COW content, not for persistent storage. You must place any persistent storage on non-OverlayFS volumes. You can use only the default container engine configuration: one level of overlay, one lowerdir, and both lower and upper levels are on the same file system. Only XFS is currently supported for use as a lower layer file system. Additionally, the following rules and limitations apply to using OverlayFS: The OverlayFS kernel ABI and user-space behavior are not considered stable, and might change in future updates. OverlayFS provides a restricted set of the POSIX standards. Test your application thoroughly before deploying it with OverlayFS. The following cases are not POSIX-compliant: Lower files opened with O_RDONLY do not receive st_atime updates when the files are read. Lower files opened with O_RDONLY , then mapped with MAP_SHARED are inconsistent with subsequent modification. Fully compliant st_ino or d_ino values are not enabled by default on RHEL 8, but you can enable full POSIX compliance for them with a module option or mount option. To get consistent inode numbering, use the xino=on mount option. You can also use the redirect_dir=on and index=on options to improve POSIX compliance. These two options make the format of the upper layer incompatible with an overlay without these options. That is, you might get unexpected results or errors if you create an overlay with redirect_dir=on or index=on , unmount the overlay, then mount the overlay without these options. To determine whether an existing XFS file system is eligible for use as an overlay, use the following command and see if the ftype=1 option is enabled: SELinux security labels are enabled by default in all supported container engines with OverlayFS. Several known issues are associated with OverlayFS in this release. For details, see Non-standard behavior in the Linux kernel documentation . For more information about OverlayFS, see the Linux kernel documentation . (BZ#1690207) Stratis is now available as a Technology Preview Stratis is a new local storage manager. It provides managed file systems on top of pools of storage with additional features to the user. Stratis enables you to more easily perform storage tasks such as: Manage snapshots and thin provisioning Automatically grow file system sizes as needed Maintain file systems To administer Stratis storage, use the stratis utility, which communicates with the stratisd background service. Stratis is provided as a Technology Preview. For more information, see the Stratis documentation: Setting up Stratis file systems . (JIRA:RHELPLAN-1212) A Samba server, available to IdM and AD users logged into IdM hosts, can now be set up on an IdM domain member as a Technology Preview With this update, you can now set up a Samba server on an Identity Management (IdM) domain member. The new ipa-client-samba utility provided by the same-named package adds a Samba-specific Kerberos service principal to IdM and prepares the IdM client. For example, the utility creates the /etc/samba/smb.conf with the ID mapping configuration for the sss ID mapping back end. As a result, administrators can now set up Samba on an IdM domain member. Due to IdM Trust Controllers not supporting the Global Catalog Service, AD-enrolled Windows hosts cannot find IdM users and groups in Windows. Additionally, IdM Trust Controllers do not support resolving IdM groups using the Distributed Computing Environment / Remote Procedure Calls (DCE/RPC) protocols. 
As a consequence, AD users can only access the Samba shares and printers from IdM clients. For details, see Setting up Samba on an IdM domain member . (JIRA:RHELPLAN-13195) 6.5.5. High availability and clusters Pacemaker podman bundles available as a Technology Preview Pacemaker container bundles now run on the podman container platform, with the container bundle feature being available as a Technology Preview. There is one exception to this feature being Technology Preview: Red Hat fully supports the use of Pacemaker bundles for Red Hat Openstack. (BZ#1619620) Heuristics in corosync-qdevice available as a Technology Preview Heuristics are a set of commands executed locally on startup, cluster membership change, successful connect to corosync-qnetd , and, optionally, on a periodic basis. When all commands finish successfully on time (their return error code is zero), heuristics have passed; otherwise, they have failed. The heuristics result is sent to corosync-qnetd where it is used in calculations to determine which partition should be quorate. ( BZ#1784200 ) New fence-agents-heuristics-ping fence agent As a Technology Preview, Pacemaker now supports the fence_heuristics_ping agent. This agent aims to open a class of experimental fence agents that do no actual fencing by themselves but instead exploit the behavior of fencing levels in a new way. If the heuristics agent is configured on the same fencing level as the fence agent that does the actual fencing but is configured before that agent in sequence, fencing issues an off action on the heuristics agent before it attempts to do so on the agent that does the fencing. If the heuristics agent gives a negative result for the off action it is already clear that the fencing level is not going to succeed, causing Pacemaker fencing to skip the step of issuing the off action on the agent that does the fencing. A heuristics agent can exploit this behavior to prevent the agent that does the actual fencing from fencing a node under certain conditions. A user might want to use this agent, especially in a two-node cluster, when it would not make sense for a node to fence the peer if it can know beforehand that it would not be able to take over the services properly. For example, it might not make sense for a node to take over services if it has problems reaching the networking uplink, making the services unreachable to clients, a situation which a ping to a router might detect in that case. (BZ#1775847) 6.5.6. Identity Management Identity Management JSON-RPC API available as Technology Preview An API is available for Identity Management (IdM). To view the API, IdM also provides an API browser as Technology Preview. In Red Hat Enterprise Linux 7.3, the IdM API was enhanced to enable multiple versions of API commands. Previously, enhancements could change the behavior of a command in an incompatible way. Users are now able to continue using existing tools and scripts even if the IdM API changes. This enables: Administrators to use or later versions of IdM on the server than on the managing client. Developers to use a specific version of an IdM call, even if the IdM version changes on the server. In all cases, the communication with the server is possible, regardless if one side uses, for example, a newer version that introduces new options for a feature. For details on using the API, see Using the Identity Management API to Communicate with the IdM Server (TECHNOLOGY PREVIEW) . 
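One simple way to observe the JSON-RPC calls behind the IdM command-line tools is to raise the client verbosity; the user name below is only an example:
$ ipa -vv user-show admin
With -vv , the ipa client prints the JSON-RPC request and response bodies it exchanges with the server, which is helpful when exploring the versioned API described above.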
( BZ#1664719 ) DNSSEC available as Technology Preview in IdM Identity Management (IdM) servers with integrated DNS now support DNS Security Extensions (DNSSEC), a set of extensions to DNS that enhance security of the DNS protocol. DNS zones hosted on IdM servers can be automatically signed using DNSSEC. The cryptographic keys are automatically generated and rotated. Users who decide to secure their DNS zones with DNSSEC are advised to read and follow these documents: DNSSEC Operational Practices, Version 2: http://tools.ietf.org/html/rfc6781#section-2 Secure Domain Name System (DNS) Deployment Guide: http://dx.doi.org/10.6028/NIST.SP.800-81-2 DNSSEC Key Rollover Timing Considerations: http://tools.ietf.org/html/rfc7583 Note that IdM servers with integrated DNS use DNSSEC to validate DNS answers obtained from other DNS servers. This might affect the availability of DNS zones that are not configured in accordance with recommended naming practices. ( BZ#1664718 ) 6.5.7. Graphics infrastructures VNC remote console available as a Technology Preview for the 64-bit ARM architecture On the 64-bit ARM architecture, the Virtual Network Computing (VNC) remote console is available as a Technology Preview. Note that the rest of the graphics stack is currently unverified for the 64-bit ARM architecture. (BZ#1698565) 6.5.8. Red Hat Enterprise Linux system roles The postfix role of RHEL system roles available as a Technology Preview Red Hat Enterprise Linux system roles provides a configuration interface for Red Hat Enterprise Linux subsystems, which makes system configuration easier through the inclusion of Ansible Roles. This interface enables managing system configurations across multiple versions of Red Hat Enterprise Linux, as well as adopting new major releases. The rhel-system-roles packages are distributed through the AppStream repository. The postfix role is available as a Technology Preview. The following roles are fully supported: kdump network selinux storage timesync For more information, see the Knowledgebase article about RHEL system roles . (BZ#1812552) rhel-system-roles-sap available as a Technology Preview The rhel-system-roles-sap package provides Red Hat Enterprise Linux (RHEL) system roles for SAP, which can be used to automate the configuration of a RHEL system to run SAP workloads. These roles greatly reduce the time to configure a system to run SAP workloads by automatically applying the optimal settings that are based on best practices outlined in relevant SAP Notes. Access is limited to RHEL for SAP Solutions offerings. Please contact Red Hat Customer Support if you need assistance with your subscription. The following new roles in the rhel-system-roles-sap package are available as a Technology Preview: sap-preconfigure sap-netweaver-preconfigure sap-hana-preconfigure For more information, see Red Hat Enterprise Linux system roles for SAP . Note: RHEL 8.1 for SAP Solutions is scheduled to be validated for use with SAP HANA on Intel 64 architecture and IBM POWER9. Other SAP applications and database products, for example, SAP NetWeaver and SAP ASE, can use RHEL 8.1 features. Please consult SAP Notes 2369910 and 2235581 for the latest information about validated releases and SAP support. (BZ#1660832) rhel-system-roles-sap rebased to version 1.1.1 With the RHBA-2019:4258 advisory, the rhel-system-roles-sap package has been updated to provide multiple bug fixes. 
Notably:
SAP system roles work on hosts with non-English locales
kernel.pid_max is set by the sysctl module
nproc is set to unlimited for HANA (see SAP note 2772999 step 9)
hard process limit is set before soft process limit
code that sets process limits now works identically to role sap-preconfigure
handlers/main.yml only works for non-uefi systems and is silently ignored on uefi systems
removed unused dependency on rhel-system-roles
removed libssh2 from the sap_hana_preconfigure_packages
added further checks to avoid failures when certain CPU settings are not supported
converted all true and false to lowercase
updated minimum package handling
host name and domain name set correctly
many minor fixes
The rhel-system-roles-sap package is available as a Technology Preview. (BZ#1766622)
6.5.9. Virtualization
Select Intel network adapters now support SR-IOV in RHEL guests on Hyper-V
As a Technology Preview, Red Hat Enterprise Linux guest operating systems running on a Hyper-V hypervisor can now use the single-root I/O virtualization (SR-IOV) feature for Intel network adapters supported by the ixgbevf and iavf drivers. This feature is enabled when the following conditions are met:
SR-IOV support is enabled for the network interface controller (NIC)
SR-IOV support is enabled for the virtual NIC
SR-IOV support is enabled for the virtual switch
The virtual function (VF) from the NIC is attached to the virtual machine.
The feature is currently supported with Microsoft Windows Server 2019 and 2016. (BZ#1348508)
KVM virtualization is usable in RHEL 8 Hyper-V virtual machines
As a Technology Preview, nested KVM virtualization can now be used on the Microsoft Hyper-V hypervisor. As a result, you can create virtual machines on a RHEL 8 guest system running on a Hyper-V host. Note that currently, this feature only works on Intel systems. In addition, nested virtualization is in some cases not enabled by default on Hyper-V. To enable it, see the following Microsoft documentation: https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/user-guide/nested-virtualization (BZ#1519039)
AMD SEV for KVM virtual machines
As a Technology Preview, RHEL 8 introduces the Secure Encrypted Virtualization (SEV) feature for AMD EPYC host machines that use the KVM hypervisor. If enabled on a virtual machine (VM), SEV encrypts VM memory so that the host cannot access data on the VM. This increases the security of the VM if the host is successfully infected by malware. Note that the number of VMs that can use this feature at a time on a single host is determined by the host hardware. Current AMD EPYC processors support up to 15 running VMs using SEV. Also note that for VMs with SEV configured to be able to boot, you must also configure the VM with a hard memory limit. To do so, add a hard limit in the <memtune> element of the VM's XML configuration, for example:
<memtune>
  <hard_limit unit='KiB'>N</hard_limit>
</memtune>
The recommended value for N is equal to or greater than the guest RAM + 256 MiB. For example, if the guest is assigned 2 GiB RAM, N should be 2359296 or greater. (BZ#1501618, BZ#1501607, JIRA:RHELPLAN-7677)
Intel vGPU
As a Technology Preview, it is now possible to divide a physical Intel GPU device into multiple virtual devices referred to as mediated devices . These mediated devices can then be assigned to multiple virtual machines (VMs) as virtual GPUs. As a result, these VMs share the performance of a single physical Intel GPU. Note that only selected Intel GPUs are compatible with the vGPU feature.
In addition, assigning a physical GPU to VMs makes it impossible for the host to use the GPU, and may prevent graphical display output on the host from working. (BZ#1528684) Nested virtualization now available on IBM POWER 9 As a Technology Preview, it is now possible to use the nested virtualization features on RHEL 8 host machines running on IBM POWER 9 systems. Nested virtualization enables KVM virtual machines (VMs) to act as hypervisors, which allows for running VMs inside VMs. Note that nested virtualization also remains a Technology Preview on AMD64 and Intel 64 systems. Also note that for nested virtualization to work on IBM POWER 9, the host, the guest, and the nested guests currently all need to run one of the following operating systems: RHEL 8 RHEL 7 for POWER 9 (BZ#1505999, BZ#1518937) Creating nested virtual machines As a Technology Preview, nested virtualization is available for KVM virtual machines (VMs) in RHEL 8. With this feature, a VM that runs on a physical host can act as a hypervisor, and host its own VMs. Note that nested virtualization is only available on AMD64 and Intel 64 architectures, and the nested host must be a RHEL 7 or RHEL 8 VM. (JIRA:RHELPLAN-14047) 6.5.10. Containers The podman-machine command is unsupported The podman-machine command for managing virtual machines, is available only as a Technology Preview. Instead, run Podman directly from the command line. (JIRA:RHELDOCS-16861) 6.6. Deprecated functionality This part provides an overview of functionality that has been deprecated in Red Hat Enterprise Linux 8.1. Deprecated devices are fully supported, which means that they are tested and maintained, and their support status remains unchanged within Red Hat Enterprise Linux 8. However, these devices will likely not be supported in the major version release, and are not recommended for new deployments on the current or future major versions of RHEL. For the most recent list of deprecated functionality within a particular major release, see the latest version of release documentation. For information about the length of support, see Red Hat Enterprise Linux Life Cycle and Red Hat Enterprise Linux Application Streams Life Cycle . A package can be deprecated and not recommended for further use. Under certain circumstances, a package can be removed from the product. Product documentation then identifies more recent packages that offer functionality similar, identical, or more advanced to the one deprecated, and provides further recommendations. For information regarding functionality that is present in RHEL 7 but has been removed in RHEL 8, see Considerations in adopting RHEL 8 . For information regarding functionality that is present in RHEL 8 but has been removed in RHEL 9, see Considerations in adopting RHEL 9 . 6.6.1. Installer and image creation Several Kickstart commands and options have been deprecated Using the following commands and options in RHEL 8 Kickstart files will print a warning in the logs. auth or authconfig device deviceprobe dmraid install lilo lilocheck mouse multipath bootloader --upgrade ignoredisk --interactive partition --active reboot --kexec Where only specific options are listed, the base command and its other options are still available and not deprecated. For more details and related changes in Kickstart, see the Kickstart changes section of the Considerations in adopting RHEL 8 document. 
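As an illustration of the Kickstart changes described above, a legacy Kickstart file that used the deprecated install and auth commands can be expressed with their RHEL 8 replacements roughly as follows. This is a minimal sketch: the repository URL and the authselect profile are placeholder assumptions, not values taken from this document.
    # Deprecated RHEL 7 style (still accepted in RHEL 8, but logs warnings):
    #   install
    #   auth --enableshadow --passalgo=sha512
    # RHEL 8 style:
    url --url=http://example.com/rhel8/BaseOS/x86_64/os/
    authselect select sssd with-mkhomedir --force
The authselect Kickstart command hands its arguments to the authselect tool, so the profile and features follow the regular authselect syntax.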
(BZ#1642765) The --interactive option of the ignoredisk Kickstart command has been deprecated Using the --interactive option in future releases of Red Hat Enterprise Linux will result in a fatal installation error. It is recommended that you modify your Kickstart file to remove the option. (BZ#1637872)
6.6.2. Software management
The rpmbuild --sign command has been deprecated With this update, the rpmbuild --sign command has become deprecated. Using this command in future releases of Red Hat Enterprise Linux can result in an error. It is recommended that you use the rpmsign command instead. ( BZ#1688849 )
6.6.3. Security
TLS 1.0 and TLS 1.1 are deprecated The TLS 1.0 and TLS 1.1 protocols are disabled in the DEFAULT system-wide cryptographic policy level. If your scenario, for example, a video conferencing application in the Firefox web browser, requires using the deprecated protocols, switch the system-wide cryptographic policy to the LEGACY level:
    update-crypto-policies --set LEGACY
For more information, see the Strong crypto defaults in RHEL 8 and deprecation of weak crypto algorithms Knowledgebase article on the Red Hat Customer Portal and the update-crypto-policies(8) man page. ( BZ#1660839 )
DSA is deprecated in RHEL 8 The Digital Signature Algorithm (DSA) is considered deprecated in Red Hat Enterprise Linux 8. Authentication mechanisms that depend on DSA keys do not work in the default configuration. Note that OpenSSH clients do not accept DSA host keys even in the LEGACY system-wide cryptographic policy level. (BZ#1646541)
SSL2 Client Hello has been deprecated in NSS The Transport Layer Security (TLS) protocol version 1.2 and earlier allow starting a negotiation with a Client Hello message formatted in a way that is backward compatible with the Secure Sockets Layer (SSL) protocol version 2. Support for this feature in the Network Security Services (NSS) library has been deprecated and it is disabled by default. Applications that require support for this feature need to use the new SSL_ENABLE_V2_COMPATIBLE_HELLO API to enable it. Support for this feature may be removed completely in future releases of Red Hat Enterprise Linux 8. (BZ#1645153)
TPM 1.2 is deprecated The Trusted Platform Module (TPM) secure cryptoprocessor standard version was updated to version 2.0 in 2016. TPM 2.0 provides many improvements over TPM 1.2, and it is not backward compatible with the previous version. TPM 1.2 is deprecated in RHEL 8, and it might be removed in the next major release. (BZ#1657927)
6.6.4. Networking
Network scripts are deprecated in RHEL 8 Network scripts are deprecated in Red Hat Enterprise Linux 8 and they are no longer provided by default. The basic installation provides a new version of the ifup and ifdown scripts which call the NetworkManager service through the nmcli tool. In Red Hat Enterprise Linux 8, to run the ifup and the ifdown scripts, NetworkManager must be running. Note that custom commands in /sbin/ifup-local, ifdown-pre-local and ifdown-local scripts are not executed. If any of these scripts are required, the installation of the deprecated network scripts in the system is still possible with the following command:
    yum install network-scripts
The ifup and ifdown scripts link to the installed legacy network scripts. Calling the legacy network scripts shows a warning about their deprecation. (BZ#1647725)
6.6.5. Kernel
Diskless boot has been deprecated Diskless booting allows multiple systems to share a root filesystem via the network. While convenient, it is prone to introducing network latency in realtime workloads.
With a future minor update of RHEL for Real Time 8, the diskless booting will no longer be supported. ( BZ#1748980 ) The rdma_rxe Soft-RoCE driver is deprecated Software Remote Direct Memory Access over Converged Ethernet (Soft-RoCE), also known as RXE, is a feature that emulates Remote Direct Memory Access (RDMA). In RHEL 8, the Soft-RoCE feature is available as an unsupported Technology Preview. However, due to stability issues, this feature has been deprecated and will be removed in RHEL 9. (BZ#1878207) 6.6.6. Hardware enablement The qla3xxx driver is deprecated The qla3xxx driver has been deprecated in RHEL 8. The driver will likely not be supported in future major releases of this product, and thus it is not recommended for new deployments. (BZ#1658840) The dl2k , dnet , ethoc , and dlci drivers are deprecated The dl2k , dnet , ethoc , and dlci drivers have been deprecated in RHEL 8. The drivers will likely not be supported in future major releases of this product, and thus they are not recommended for new deployments. (BZ#1660627) 6.6.7. File systems and storage The elevator kernel command line parameter is deprecated The elevator kernel command line parameter was used in earlier RHEL releases to set the disk scheduler for all devices. In RHEL 8, the parameter is deprecated. The upstream Linux kernel has removed support for the elevator parameter, but it is still available in RHEL 8 for compatibility reasons. Note that the kernel selects a default disk scheduler based on the type of device. This is typically the optimal setting. If you require a different scheduler, Red Hat recommends that you use udev rules or the Tuned service to configure it. Match the selected devices and switch the scheduler only for those devices. For more information, see Setting the disk scheduler . (BZ#1665295) NFSv3 over UDP has been disabled The NFS server no longer opens or listens on a User Datagram Protocol (UDP) socket by default. This change affects only NFS version 3 because version 4 requires the Transmission Control Protocol (TCP). NFS over UDP is no longer supported in RHEL 8. (BZ#1592011) 6.6.8. Desktop The libgnome-keyring library has been deprecated The libgnome-keyring library has been deprecated in favor of the libsecret library, as libgnome-keyring is not maintained upstream, and does not follow the necessary cryptographic policies for RHEL. The new libsecret library is the replacement that follows the necessary security standards. (BZ#1607766) 6.6.9. Graphics infrastructures AGP graphics cards are no longer supported Graphics cards using the Accelerated Graphics Port (AGP) bus are not supported in Red Hat Enterprise Linux 8. Use the graphics cards with PCI-Express bus as the recommended replacement. (BZ#1569610) 6.6.10. The web console The web console no longer supports incomplete translations The RHEL web console no longer provides translations for languages that have translations available for less than 50 % of the Console's translatable strings. If the browser requests translation to such a language, the user interface will be in English instead. ( BZ#1666722 ) 6.6.11. Virtualization virt-manager has been deprecated The Virtual Machine Manager application, also known as virt-manager , has been deprecated. The RHEL 8 web console, also known as Cockpit , is intended to become its replacement in a subsequent release. It is, therefore, recommended that you use the web console for managing virtualization in a GUI. 
Note, however, that some features available in virt-manager may not be yet available the RHEL 8 web console. (JIRA:RHELPLAN-10304) Virtual machine snapshots are not properly supported in RHEL 8 The current mechanism of creating virtual machine (VM) snapshots has been deprecated, as it is not working reliably. As a consequence, it is recommended not to use VM snapshots in RHEL 8. Note that a new VM snapshot mechanism is under development and will be fully implemented in a future minor release of RHEL 8. ( BZ#1686057 ) The Cirrus VGA virtual GPU type has been deprecated With a future major update of Red Hat Enterprise Linux, the Cirrus VGA GPU device will no longer be supported in KVM virtual machines. Therefore, Red Hat recommends using the stdvga , virtio-vga , or qxl devices instead of Cirrus VGA. (BZ#1651994) 6.6.12. Deprecated packages The following packages have been deprecated and will probably not be included in a future major release of Red Hat Enterprise Linux: 389-ds-base-legacy-tools authd custodia hostname libidn net-tools network-scripts nss-pam-ldapd sendmail yp-tools ypbind ypserv 6.7. Known issues This part describes known issues in Red Hat Enterprise Linux 8. 6.7.1. Installer and image creation The auth and authconfig Kickstart commands require the AppStream repository The authselect-compat package is required by the auth and authconfig Kickstart commands during installation. Without this package, the installation fails if auth or authconfig are used. However, by design, the authselect-compat package is only available in the AppStream repository. To work around this problem, verify that the BaseOS and AppStream repositories are available to the installer or use the authselect Kickstart command during installation. (BZ#1640697) The reboot --kexec and inst.kexec commands do not provide a predictable system state Performing a RHEL installation with the reboot --kexec Kickstart command or the inst.kexec kernel boot parameters do not provide the same predictable system state as a full reboot. As a consequence, switching to the installed system without rebooting can produce unpredictable results. Note that the kexec feature is deprecated and will be removed in a future release of Red Hat Enterprise Linux. (BZ#1697896) Anaconda installation includes low limits of minimal resources setting requirements Anaconda initiates the installation on systems with minimal resource settings required available and do not provide message warning about the required resources for performing the installation successfully. As a result, the installation can fail and the output errors do not provide clear messages for possible debug and recovery. To work around this problem, make sure that the system has the minimal resources settings required for installation: 2GB memory on PPC64(LE) and 1GB on x86_64. As a result, it should be possible to perform a successful installation. (BZ#1696609) Installation fails when using the reboot --kexec command The RHEL 8 installation fails when using a Kickstart file that contains the reboot --kexec command. To avoid the problem, use the reboot command instead of reboot --kexec in your Kickstart file. ( BZ#1672405 ) Support secure boot for s390x in the installer RHEL 8.1 provides support for preparing boot disks for use in IBM Z environments that enforce the use of secure boot. The capabilities of the server and Hypervisor used during installation determine if the resulting on-disk format contains secure boot support or not. 
There is no way to influence the on-disk format during installation. Consequently, if you install RHEL 8.1 in an environment that supports secure boot, the system is unable to boot when moved to an environment lacking secure boot support, as it is done in some fail-over scenarios. To work around this problem, you need to configure the zipl tool that controls the on-disk boot format. zipl can be configured to write the on-disk format even if the environment in which it is run supports secure boot. Perform the following manual steps as root user once the installation of RHEL 8.1 is completed: Edit the configuration file /etc/zipl.conf Add a line containing "secure=0" to the section labelled "defaultboot". Run the zipl tool without parameters After performing these steps, the on-disk format of the RHEL 8.1 boot disk will no longer contain secure boot support. As a result, the installation can be booted in environments that lack secure boot support. (BZ#1659400) RHEL 8 initial setup cannot be performed via SSH Currently, the RHEL 8 initial setup interface does not display when logged in to the system using SSH. As a consequence, it is impossible to perform the initial setup on a RHEL 8 machine managed via SSH. To work around this problem, perform the initial setup in the main system console (ttyS0) and, afterwards, log in using SSH. (BZ#1676439) The default value for the secure= boot option is not set to auto Currently, the default value for the secure= boot option is not set to auto. As a consequence, the secure boot feature is not available because the current default is disabled. To work around this problem, manually set secure=auto in the [defaultboot] section of the /etc/zipl.conf file. As a result, the secure boot feature is made available. For more information, see the zipl.conf man page. (BZ#1750326) Copying the content of the Binary DVD.iso file to a partition omits the .treeinfo and .discinfo files During local installation, while copying the content of the RHEL 8 Binary DVD.iso image file to a partition, the * in the cp <path>/\* <mounted partition>/dir command fails to copy the .treeinfo and .discinfo files. These files are required for a successful installation. As a result, the BaseOS and AppStream repositories are not loaded, and a debug-related log message in the anaconda.log file is the only record of the problem. To work around the problem, copy the missing .treeinfo and .discinfo files to the partition. (BZ#1687747) Self-signed HTTPS server cannot be used in Kickstart installation Currently, the installer fails to install from a self-signed https server when the installation source is specified in the kickstart file and the --noverifyssl option is used: To work around this problem, append the inst.noverifyssl parameter to the kernel command line when starting the kickstart installation. For example: (BZ#1745064) 6.7.2. Software management yum repolist ends on first unavailable repository with skip_if_unavailable=false The repository configuration option skip_if_unavailable is by default set as follows: This setting forces the yum repolist command to end on first unavailable repository with an error and exit status 1. Consequently, yum repolist does not continue listing available repositiories. Note that it is possible to override this setting in each repository's *.repo file. However, if you want to keep the default settings, you can work around the problem by using yum repolist with the following option: (BZ#1697472) 6.7.3. 
Subscription management syspurpose addons have no effect on the subscription-manager attach --auto output. In Red Hat Enterprise Linux 8, four attributes of the syspurpose command-line tool have been added: role , usage , service_level_agreement and addons . Currently, only role , usage and service_level_agreement affect the output of running the subscription-manager attach --auto command. Users who attempt to set values to the addons argument will not observe any effect on the subscriptions that are auto-attached. (BZ#1687900) 6.7.4. Shells and command-line tools Applications using Wayland protocol cannot be forwarded to remote display servers In Red Hat Enterprise Linux 8.1, most applications use the Wayland protocol by default instead of the X11 protocol. As a consequence, the ssh server cannot forward the applications that use the Wayland protocol but is able to forward the applications that use the X11 protocol to a remote display server. To work around this problem, set the environment variable GDK_BACKEND=x11 before starting the applications. As a result, the application can be forwarded to remote display servers. ( BZ#1686892 ) systemd-resolved.service fails to start on boot The systemd-resolved service occasionally fails to start on boot. If this happens, restart the service manually after the boot finishes by using the following command: However, the failure of systemd-resolved on boot does not impact any other services. (BZ#1640802) 6.7.5. Infrastructure services Support for DNSSEC in dnsmasq The dnsmasq package introduces Domain Name System Security Extensions (DNSSEC) support for verifying hostname information received from root servers. Note that DNSSEC validation in dnsmasq is not compliant with FIPS 140-2. Do not enable DNSSEC in dnsmasq on Federal Information Processing Standard (FIPS) systems, and use the compliant validating resolver as a forwarder on the localhost. (BZ#1549507) 6.7.6. Security redhat-support-tool does not work with the FUTURE crypto policy Because a cryptographic key used by a certificate on the Customer Portal API does not meet the requirements by the FUTURE system-wide cryptographic policy, the redhat-support-tool utility does not work with this policy level at the moment. To work around this problem, use the DEFAULT crypto policy while connecting to the Customer Portal API. ( BZ#1802026 ) SELINUX=disabled in /etc/selinux/config does not work properly Disabling SELinux using the SELINUX=disabled option in the /etc/selinux/config results in a process in which the kernel boots with SELinux enabled and switches to disabled mode later in the boot process. This might cause memory leaks and race conditions and consequently also kernel panics. To work around this problem, disable SELinux by adding the selinux=0 parameter to the kernel command line as described in the Changing SELinux modes at boot time section of the Using SELinux title if your scenario really requires to completely disable SELinux. (JIRA:RHELPLAN-34199) libselinux-python is available only through its module The libselinux-python package contains only Python 2 bindings for developing SELinux applications and it is used for backward compatibility. For this reason, libselinux-python is no longer available in the default RHEL 8 repositories through the dnf install libselinux-python command. 
To work around this problem, enable both the libselinux-python and python27 modules, and install the libselinux-python package and its dependencies with the following commands: Alternatively, install libselinux-python using its install profile with a single command: As a result, you can install libselinux-python using the respective module. (BZ#1666328) udica processes UBI 8 containers only when started with --env container=podman The Red Hat Universal Base Image 8 (UBI 8) containers set the container environment variable to the oci value instead of the podman value. This prevents the udica tool from analyzing a container JavaScript Object Notation (JSON) file. To work around this problem, start a UBI 8 container using a podman command with the --env container=podman parameter. As a result, udica can generate an SELinux policy for a UBI 8 container only when you use the described workaround. ( BZ#1763210 ) Removing the rpm-plugin-selinux package leads to removing all selinux-policy packages from the system Removing the rpm-plugin-selinux package disables SELinux on the machine. It also removes all selinux-policy packages from the system. Repeated installation of the rpm-plugin-selinux package then installs the selinux-policy-minimum SELinux policy, even if the selinux-policy-targeted policy was previously present on the system. However, the repeated installation does not update the SELinux configuration file to account for the change in policy. As a consequence, SELinux is disabled even upon reinstallation of the rpm-plugin-selinux package. To work around this problem: Enter the umount /sys/fs/selinux/ command. Manually install the missing selinux-policy-targeted package. Edit the /etc/selinux/config file so that the policy is equal to SELINUX=enforcing . Enter the command load_policy -i . As a result, SELinux is enabled and running the same policy as before. (BZ#1641631) SELinux prevents systemd-journal-gatewayd to call newfstatat() on shared memory files created by corosync SELinux policy does not contain a rule that allows the systemd-journal-gatewayd daemon to access files created by the corosync service. As a consequence, SELinux denies systemd-journal-gatewayd to call the newfstatat() function on shared memory files created by corosync . To work around this problem, create a local policy module with an allow rule which enables the described scenario. See the audit2allow(1) man page for more information on generating SELinux policy allow and dontaudit rules. As a result of the workaround, systemd-journal-gatewayd can call the function on shared memory files created by corosync with SELinux in enforcing mode. (BZ#1746398) Negative effects of the default logging setup on performance The default logging environment setup might consume 4 GB of memory or even more and adjustments of rate-limit values are complex when systemd-journald is running with rsyslog . See the Negative effects of the RHEL default logging setup on performance and their mitigations Knowledgebase article for more information. (JIRA:RHELPLAN-10431) Parameter not known errors in the rsyslog output with config.enabled In the rsyslog output, an unexpected bug occurs in configuration processing errors using the config.enabled directive. As a consequence, parameter not known errors are displayed while using the config.enabled directive except for the include() statements. To work around this problem, set config.enabled=on or use include() statements. 
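For the systemd-journal-gatewayd and corosync SELinux denial described above, the local policy module can be generated from the recorded AVC messages along the following lines; the module name is arbitrary and the denials on a given system may differ, so review the generated .te file before installing the module.
    # Build a local module from recent AVC denials and install it
    ausearch -m AVC -ts recent | audit2allow -M local-journal-gatewayd
    semodule -i local-journal-gatewayd.pp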
(BZ#1659383) Certain rsyslog priority strings do not work correctly Support for the GnuTLS priority string for imtcp that allows fine-grained control over encryption is not complete. Consequently, the following priority strings do not work properly in rsyslog : To work around this problem, use only correctly working priority strings: As a result, current configurations must be limited to the strings that work correctly. ( BZ#1679512 ) Connections to servers with SHA-1 signatures do not work with GnuTLS SHA-1 signatures in certificates are rejected by the GnuTLS secure communications library as insecure. Consequently, applications that use GnuTLS as a TLS backend cannot establish a TLS connection to peers that offer such certificates. This behavior is inconsistent with other system cryptographic libraries. To work around this problem, upgrade the server to use certificates signed with SHA-256 or stronger hash, or switch to the LEGACY policy. (BZ#1628553) TLS 1.3 does not work in NSS in FIPS mode TLS 1.3 is not supported on systems working in FIPS mode. As a result, connections that require TLS 1.3 for interoperability do not function on a system working in FIPS mode. To enable the connections, disable the system's FIPS mode or enable support for TLS 1.2 in the peer. ( BZ#1724250 ) OpenSSL incorrectly handles PKCS #11 tokens that does not support raw RSA or RSA-PSS signatures The OpenSSL library does not detect key-related capabilities of PKCS #11 tokens. Consequently, establishing a TLS connection fails when a signature is created with a token that does not support raw RSA or RSA-PSS signatures. To work around the problem, add the following lines after the .include line at the end of the crypto_policy section in the /etc/pki/tls/openssl.cnf file: As a result, a TLS connection can be established in the described scenario. ( BZ#1685470 ) The OpenSSL TLS library does not detect if the PKCS#11 token supports creation of raw RSA or RSA-PSS signatures The TLS-1.3 protocol requires the support for RSA-PSS signature. If the PKCS#11 token does not support raw RSA or RSA-PSS signatures, the server applications which use OpenSSL TLS library will fail to work with the RSA key if it is held by the PKCS#11 token. As a result, TLS communication will fail. To work around this problem, configure server or client to use the TLS-1.2 version as the highest TLS protocol version available. ( BZ#1681178 ) OpenSSL generates a malformed status_request extension in the CertificateRequest message in TLS 1.3 OpenSSL servers send a malformed status_request extension in the CertificateRequest message if support for the status_request extension and client certificate-based authentication are enabled. In such case, OpenSSL does not interoperate with implementations compliant with the RFC 8446 protocol. As a result, clients that properly verify extensions in the 'CertificateRequest' message abort connections with the OpenSSL server. To work around this problem, disable support for the TLS 1.3 protocol on either side of the connection or disable support for status_request on the OpenSSL server. This will prevent the server from sending malformed messages. ( BZ#1749068 ) ssh-keyscan cannot retrieve RSA keys of servers in FIPS mode The SHA-1 algorithm is disabled for RSA signatures in FIPS mode, which prevents the ssh-keyscan utility from retrieving RSA keys of servers operating in that mode. 
To work around this problem, use ECDSA keys instead, or retrieve the keys locally from the /etc/ssh/ssh_host_rsa_key.pub file on the server. ( BZ#1744108 ) scap-security-guide PCI-DSS remediation of Audit rules does not work properly The scap-security-guide package contains a combination of remediation and a check that can result in one of the following scenarios: incorrect remediation of Audit rules scan evaluation containing false positives where passed rules are marked as failed Consequently, during the RHEL 8.1 installation process, scanning of the installed system reports some Audit rules as either failed or errored. To work around this problem, follow the instructions in the RHEL-8.1 workaround for remediating and scanning with the scap-security-guide PCI-DSS profile Knowledgebase article. ( BZ#1754919 ) Certain sets of interdependent rules in SSG can fail Remediation of SCAP Security Guide (SSG) rules in a benchmark can fail due to undefined ordering of rules and their dependencies. If two or more rules need to be executed in a particular order, for example, when one rule installs a component and another rule configures the same component, they can run in the wrong order and remediation reports an error. To work around this problem, run the remediation twice, and the second run fixes the dependent rules. ( BZ#1750755 ) A utility for security and compliance scanning of containers is not available In Red Hat Enterprise Linux 7, the oscap-docker utility can be used for scanning of Docker containers based on Atomic technologies. In Red Hat Enterprise Linux 8, the Docker- and Atomic-related OpenSCAP commands are not available. To work around this problem, see the Using OpenSCAP for scanning containers in RHEL 8 article on the Customer Portal. As a result, you can use only an unsupported and limited way for security and compliance scanning of containers in RHEL 8 at the moment. (BZ#1642373) OpenSCAP does not provide offline scanning of virtual machines and containers Refactoring of OpenSCAP codebase caused certain RPM probes to fail to scan VM and containers file systems in offline mode. For that reason, the following tools were removed from the openscap-utils package: oscap-vm and oscap-chroot . Also, the openscap-containers package was completely removed. (BZ#1618489) OpenSCAP rpmverifypackage does not work correctly The chdir and chroot system calls are called twice by the rpmverifypackage probe. Consequently, an error occurs when the probe is utilized during an OpenSCAP scan with custom Open Vulnerability and Assessment Language (OVAL) content. To work around this problem, do not use the rpmverifypackage_test OVAL test in your content or use only the content from the scap-security-guide package where rpmverifypackage_test is not used. (BZ#1646197) SCAP Workbench fails to generate results-based remediations from tailored profiles The following error occurs when trying to generate results-based remediation roles from a customized profile using the SCAP Workbench tool: To work around this problem, use the oscap command with the --tailoring-file option. (BZ#1640715) OSCAP Anaconda Addon does not install all packages in text mode The OSCAP Anaconda Addon plugin cannot modify the list of packages selected for installation by the system installer if the installation is running in text mode. 
Consequently, when a security policy profile is specified using Kickstart and the installation is running in text mode, any additional packages required by the security policy are not installed during installation. To work around this problem, either run the installation in graphical mode or specify all packages that are required by the security policy profile in the security policy in the %packages section in your Kickstart file. As a result, packages that are required by the security policy profile are not installed during RHEL installation without one of the described workarounds, and the installed system is not compliant with the given security policy profile. ( BZ#1674001 ) OSCAP Anaconda Addon does not correctly handle customized profiles The OSCAP Anaconda Addon plugin does not properly handle security profiles with customizations in separate files. Consequently, the customized profile is not available in the RHEL graphical installation even when you properly specify it in the corresponding Kickstart section. To work around this problem, follow the instructions in the Creating a single SCAP data stream from an original DS and a tailoring file Knowledgebase article. As a result of this workaround, you can use a customized SCAP profile in the RHEL graphical installation. (BZ#1691305) 6.7.7. Networking The formatting of the verbose output of arptables now matches the format of the utility on RHEL 7 In RHEL 8, the iptables-arptables package provides an nftables -based replacement of the arptables utility. Previously, the verbose output of arptables separated counter values only with a comma, while arptables on RHEL 7 separated the described output with both a space and a comma. As a consequence, if you used scripts created on RHEL 7 that parsed the output of the arptables -v -L command, you had to adjust these scripts. This incompatibility has been fixed. As a result, arptables on RHEL 8.1 now also separates counter values with both a space and a comma. (BZ#1676968) nftables does not support multi-dimensional IP set types The nftables packet-filtering framework does not support set types with concatenations and intervals. Consequently, you cannot use multi-dimensional IP set types, such as hash:net,port , with nftables . To work around this problem, use the iptables framework with the ipset tool if you require multi-dimensional IP set types. (BZ#1593711) IPsec network traffic fails during IPsec offloading when GRO is disabled IPsec offloading is not expected to work when Generic Receive Offload (GRO) is disabled on the device. If IPsec offloading is configured on a network interface and GRO is disabled on that device, IPsec network traffic fails. To work around this problem, keep GRO enabled on the device. (BZ#1649647) 6.7.8. Kernel The i40iw module does not load automatically on boot Due to many i40e NICs not supporting iWarp and the i40iw module not fully supporting suspend/resume, this module is not automatically loaded by default to ensure suspend/resume works properly. To work around this problem, manually edit the /lib/udev/rules.d/90-rdma-hw-modules.rules file to enable automated load of i40iw . Also note that if there is another RDMA device installed with a i40e device on the same machine, the non-i40e RDMA device triggers the rdma service, which loads all enabled RDMA stack modules, including the i40iw module. 
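For the nftables limitation with multi-dimensional IP set types noted above, the legacy iptables framework together with ipset still provides types such as hash:net,port. A brief sketch, with a placeholder set name and example addresses:
    ipset create allowed_svcs hash:net,port
    ipset add allowed_svcs 192.0.2.0/24,tcp:443
    iptables -A INPUT -m set --match-set allowed_svcs src,dst -j ACCEPT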
(BZ#1623712) Network interface is renamed to kdump-<interface-name> when fadump is used When firmware-assisted dump ( fadump ) is utilized to capture a vmcore and store it to a remote machine using SSH or NFS protocol, the network interface is renamed to kdump-<interface-name> if <interface-name> is generic, for example, *eth#, or net#. This problem occurs because the vmcore capture scripts in the initial RAM disk ( initrd ) add the kdump- prefix to the network interface name to secure persistent naming. The same initrd is used also for a regular boot, so the interface name is changed for the production kernel too. (BZ#1745507) Systems with a large amount of persistent memory experience delays during the boot process Systems with a large amount of persistent memory take a long time to boot because the initialization of the memory is serialized. Consequently, if there are persistent memory file systems listed in the /etc/fstab file, the system might timeout while waiting for devices to become available. To work around this problem, configure the DefaultTimeoutStartSec option in the /etc/systemd/system.conf file to a sufficiently large value. (BZ#1666538) KSM sometimes ignores NUMA memory policies When the kernel shared memory (KSM) feature is enabled with the merge_across_nodes=1 parameter, KSM ignores memory policies set by the mbind() function, and may merge pages from some memory areas to Non-Uniform Memory Access (NUMA) nodes that do not match the policies. To work around this problem, disable KSM or set the merge_across_nodes parameter to 0 if using NUMA memory binding with QEMU. As a result, NUMA memory policies configured for the KVM VM will work as expected. (BZ#1153521) The system enters the emergency mode at boot-time when fadump is enabled The system enters the emergency mode when fadump ( kdump ) or dracut squash module is enabled in the initramfs scheme because systemd manager fails to fetch the mount information and configure the LV partition to mount. To work around this problem, add the following kernel command line parameter rd.lvm.lv=<VG>/<LV> to discover and mount the failed LV partition appropriately. As a result, the system will boot successfully in the described scenario. (BZ#1750278) Using irqpoll in the kdump kernel command line causes a vmcore generation failure Due to an existing underlying problem with the nvme driver on the 64-bit ARM architectures running on the Amazon Web Services (AWS) cloud platforms, the vmcore generation fails if the irqpoll kdump command line argument is provided to the first kernel. Consequently, no vmcore is dumped in the /var/crash/ directory after a kernel crash. To work around this problem: Add irqpoll to the KDUMP_COMMANDLINE_REMOVE key in the /etc/sysconfig/kdump file. Restart the kdump service by running the systemctl restart kdump command. As a result, the first kernel correctly boots and the vmcore is expected to be captured upon the kernel crash. (BZ#1654962) Debug kernel fails to boot in crash capture environment in RHEL 8 Due to memory-demanding nature of the debug kernel, a problem occurs when the debug kernel is in use and a kernel panic is triggered. As a consequence, the debug kernel is not able to boot as the capture kernel, and a stack trace is generated instead. To work around this problem, increase the crash kernel memory accordingly. As a result, the debug kernel successfully boots in the crash capture environment. 
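One way to increase the crash kernel memory mentioned in the debug-kernel note above is to enlarge the crashkernel= reservation on the kernel command line, for example with grubby; the 512M value is only an illustrative assumption and must be sized for the system in question.
    grubby --update-kernel=ALL --args="crashkernel=512M"
    reboot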
(BZ#1659609) softirq changes can cause the localhost interface to drop UDP packets when under heavy load Changes in the Linux kernel's software interrupt ( softirq ) handling are done to reduce denial of service (DOS) effects. Consequently, this leads to situations where the localhost interface drops User Datagram Protocol (UDP) packets under heavy load. To work around this problem, increase the size of the network device backlog buffer to value 6000: In Red Hat tests, this value was sufficient to prevent packet loss. More heavily loaded systems might require larger backlog values. Increased backlogs have the effect of potentially increasing latency on the localhost interface. The result is to increase the buffer and allow more packets to be waiting for processing, which reduces the chances of dropping localhost packets. (BZ#1779337) 6.7.9. Hardware enablement The HP NMI watchdog in some cases does not generate a crash dump The hpwdt driver for the HP NMI watchdog is sometimes not able to claim a non-maskable interrupt (NMI) generated by the HPE watchdog timer because the NMI was instead consumed by the perfmon driver. As a consequence, hpwdt in some cases cannot call a panic to generate a crash dump. (BZ#1602962) Installing RHEL 8.1 on a test system configured with a QL41000 card results in a kernel panic While installing RHEL 8.1 on a test system configured with a QL41000 card, the system is unable to handle the kernel NULL pointer dereference at 000000000000003c card. As a consequence, it results in a kernel panic error. There is no work around available for this issue. (BZ#1743456) The cxgb4 driver causes crash in the kdump kernel The kdump kernel crashes while trying to save information in the vmcore file. Consequently, the cxgb4 driver prevents the kdump kernel from saving a core for later analysis. To work around this problem, add the "novmcoredd" parameter to the kdump kernel command line to allow saving core files. (BZ#1708456) 6.7.10. File systems and storage Certain SCSI drivers might sometimes use an excessive amount of memory Certain SCSI drivers use a larger amount of memory than in RHEL 7. In certain cases, such as vPort creation on a Fibre Channel host bus adapter (HBA), the memory usage might be excessive, depending upon the system configuration. The increased memory usage is caused by memory preallocation in the block layer. Both the multiqueue block device scheduling (BLK-MQ) and the multiqueue SCSI stack (SCSI-MQ) preallocate memory for each I/O request in RHEL 8, leading to the increased memory usage. (BZ#1698297) VDO cannot suspend until UDS has finished rebuilding When a Virtual Data Optimizer (VDO) volume starts after an unclean system shutdown, it rebuilds the Universal Deduplication Service (UDS) index. If you try to suspend the VDO volume using the dmsetup suspend command while the UDS index is rebuilding, the suspend command might become unresponsive. The command finishes only after the rebuild is done. The unresponsiveness is noticeable only with VDO volumes that have a large UDS index, which causes the rebuild to take a longer time. ( BZ#1737639 ) An NFS 4.0 patch can result in reduced performance under an open-heavy workload Previously, a bug was fixed that, in some cases, could cause an NFS open operation to overlook the fact that a file had been removed or renamed on the server. However, the fix may cause slower performance with workloads that require many open operations. 
To work around this problem, it might help to use NFS version 4.1 or higher, which have been improved to grant delegations to clients in more cases, allowing clients to perform open operations locally, quickly, and safely. (BZ#1748451) 6.7.11. Dynamic programming languages, web and database servers nginx cannot load server certificates from hardware security tokens The nginx web server supports loading TLS private keys from hardware security tokens directly from PKCS#11 modules. However, it is currently impossible to load server certificates from hardware security tokens through the PKCS#11 URI. To work around this problem, store server certificates on the file system ( BZ#1668717 ) php-fpm causes SELinux AVC denials when php-opcache is installed with PHP 7.2 When the php-opcache package is installed, the FastCGI Process Manager ( php-fpm ) causes SELinux AVC denials. To work around this problem, change the default configuration in the /etc/php.d/10-opcache.ini file to the following: Note that this problem affects only the php:7.2 stream, not the php:7.3 one. ( BZ#1670386 ) 6.7.12. Compilers and development tools The ltrace tool does not report function calls Because of improvements to binary hardening applied to all RHEL components, the ltrace tool can no longer detect function calls in binary files coming from RHEL components. As a consequence, ltrace output is empty because it does not report any detected calls when used on such binary files. There is no workaround currently available. As a note, ltrace can correctly report calls in custom binary files built without the respective hardening flags. (BZ#1618748) 6.7.13. Identity Management AD users with expired accounts can be allowed to log in when using GSSAPI authentication The accountExpires attribute that SSSD uses to see whether an account has expired is not replicated to the global catalog by default. As a result, users with expired accounts can log in when using GSSAPI authentication. To work around this problem, the global catalog support can be disabled by specifying ad_enable_gc=False in the sssd.conf file. With this setting, users with expired accounts will be denied access when using GSSAPI authentication. Note that SSSD connects to each LDAP server individually in this scenario, which can increase the connection count. (BZ#1081046) Using the cert-fix utility with the --agent-uid pkidbuser option breaks Certificate System Using the cert-fix utility with the --agent-uid pkidbuser option corrupts the LDAP configuration of Certificate System. As a consequence, Certificate System might become unstable and manual steps are required to recover the system. ( BZ#1729215 ) Changing /etc/nsswitch.conf requires a manual system reboot Any change to the /etc/nsswitch.conf file, for example running the authselect select profile_id command, requires a system reboot so that all relevant processes use the updated version of the /etc/nsswitch.conf file. If a system reboot is not possible, restart the service that joins your system to Active Directory, which is the System Security Services Daemon (SSSD) or winbind . ( BZ#1657295 ) No information about required DNS records displayed when enabling support for AD trust in IdM When enabling support for Active Directory (AD) trust in Red Hat Enterprise Linux Identity Management (IdM) installation with external DNS management, no information about required DNS records is displayed. Forest trust to AD is not successful until the required DNS records are added. 
To work around this problem, run the 'ipa dns-update-system-records --dry-run' command to obtain a list of all DNS records required by IdM. When external DNS for IdM domain defines the required DNS records, establishing forest trust to AD is possible. ( BZ#1665051 )
SSSD returns incorrect LDAP group membership for local users If the System Security Services Daemon (SSSD) serves users from the local files, the files provider does not include group memberships from other domains. As a consequence, if a local user is a member of an LDAP group, the id local_user command does not return the user's LDAP group membership. To work around the problem, either revert the order of the databases where the system is looking up the group membership of users in the /etc/nsswitch.conf file, replacing sss files with files sss, or disable the implicit files domain by adding enable_files_domain = False to the [sssd] section in the /etc/sssd/sssd.conf file. As a result, id local_user returns correct LDAP group membership for local users. ( BZ#1652562 )
Default PAM settings for systemd-user have changed in RHEL 8 which may influence SSSD behavior The Pluggable authentication modules (PAM) stack has changed in Red Hat Enterprise Linux 8. For example, the systemd user session now starts a PAM conversation using the systemd-user PAM service. This service now recursively includes the system-auth PAM service, which may include the pam_sss.so interface. This means that the SSSD access control is always called. Be aware of the change when designing access control rules for RHEL 8 systems. For example, you can add the systemd-user service to the allowed services list. Please note that for some access control mechanisms, such as IPA HBAC or AD GPOs, the systemd-user service has been added to the allowed services list by default and you do not need to take any action. ( BZ#1669407 )
SSSD does not correctly handle multiple certificate matching rules with the same priority If a given certificate matches multiple certificate matching rules with the same priority, the System Security Services Daemon (SSSD) uses only one of the rules. As a workaround, use a single certificate matching rule whose LDAP filter consists of the filters of the individual rules concatenated with the | (or) operator. For examples of certificate matching rules, see the sss-certmap(5) man page. (BZ#1447945)
Private groups fail to be created with auto_private_group = hybrid when multiple domains are defined Private groups fail to be created with the option auto_private_group = hybrid when multiple domains are defined and the hybrid option is used by any domain other than the first one. If an implicit files domain is defined along with an AD or LDAP domain in the sssd.conf file and is not marked as MPG_HYBRID, then SSSD fails to create a private group for a user who has uid=gid and the group with this gid does not exist in AD or LDAP. The sssd_nss responder checks for the value of the auto_private_groups option in the first domain only. As a consequence, in setups where multiple domains are configured, which includes the default setup on RHEL 8, the option auto_private_group has no effect. To work around this problem, set enable_files_domain = false in the sssd section of sssd.conf, as shown in the sketch below. As a result, if the enable_files_domain option is set to false, then sssd does not add a domain with id_provider=files at the start of the list of active domains, and therefore this bug does not occur.
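A minimal sketch of the sssd.conf change for the auto_private_group issue above; the domain name is a placeholder and any other existing settings in the [sssd] section stay as they are.
    [sssd]
    enable_files_domain = false
    domains = example.com
    services = nss, pam
After editing the file, restart SSSD, for example with systemctl restart sssd, so that the change takes effect.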
(BZ#1754871) python-ply is not FIPS compatible The YACC module of the python-ply package uses the MD5 hashing algorithm to generate the fingerprint of a YACC signature. However, FIPS mode blocks the use of MD5, which is only allowed in non-security contexts. As a consequence, python-ply is not FIPS compatible. On a system in FIPS mode, all calls to ply.yacc.yacc() fail with the error message: The problem affects python-pycparser and some use cases of python-cffi . To work around this problem, modify the line 2966 of the file /usr/lib/python3.6/site-packages/ply/yacc.py , replacing sig = md5() with sig = md5(usedforsecurity=False) . As a result, python-ply can be used in FIPS mode. ( BZ#1747490 ) SSSD retrieves incomplete list of members if the group size exceeds 1500 members During the integration of SSSD with Active Directory, SSSD retrieves incomplete group member lists when the group size exceeds 1500 members. This issue occurs because Active Directory's MaxValRange policy, which restricts the number of members retrievable in a single query, is set to 1500 by default. To work around this problem, change the MaxValRange setting in Active Directory to accommodate larger group sizes. (JIRA:RHELDOCS-19603) 6.7.14. Desktop Limitations of the Wayland session With Red Hat Enterprise Linux 8, the GNOME environment and the GNOME Display Manager (GDM) use Wayland as the default session type instead of the X11 session, which was used with the major version of RHEL. The following features are currently unavailable or do not work as expected under Wayland : Multi-GPU setups are not supported under Wayland . X11 configuration utilities, such as xrandr , do not work under Wayland due to its different approach to handling, resolutions, rotations, and layout. You can configure the display features using GNOME settings. Screen recording and remote desktop require applications to support the portal API on Wayland . Certain legacy applications do not support the portal API. Pointer accessibility is not available on Wayland . No clipboard manager is available. GNOME Shell on Wayland ignores keyboard grabs issued by most legacy X11 applications. You can enable an X11 application to issue keyboard grabs using the /org/gnome/mutter/wayland/xwayland-grab-access-rules GSettings key. By default, GNOME Shell on Wayland enables the following applications to issue keyboard grabs: GNOME Boxes Vinagre Xephyr virt-manager , virt-viewer , and remote-viewer vncviewer Wayland inside guest virtual machines (VMs) has stability and performance problems. RHEL automatically falls back to the X11 session when running in a VM. If you upgrade to RHEL 8 from a RHEL 7 system where you used the X11 GNOME session, your system continues to use X11 . The system also automatically falls back to X11 when the following graphics drivers are in use: The proprietary NVIDIA driver The cirrus driver The mga driver The aspeed driver You can disable the use of Wayland manually: To disable Wayland in GDM, set the WaylandEnable=false option in the /etc/gdm/custom.conf file. To disable Wayland in the GNOME session, select the legacy X11 option by using the cogwheel menu on the login screen after entering your login name. For more details on Wayland , see https://wayland.freedesktop.org/ . ( BZ#1797409 ) Drag-and-drop does not work between desktop and applications Due to a bug in the gnome-shell-extensions package, the drag-and-drop functionality does not currently work between desktop and applications. 
Support for this feature will be added back in a future release. ( BZ#1717947 ) Disabling flatpak repositories from Software Repositories is not possible Currently, it is not possible to disable or remove flatpak repositories in the Software Repositories tool in the GNOME Software utility. ( BZ#1668760 ) Generation 2 RHEL 8 virtual machines sometimes fail to boot on Hyper-V Server 2016 hosts When using RHEL 8 as the guest operating system on a virtual machine (VM) running on a Microsoft Hyper-V Server 2016 host, the VM in some cases fails to boot and returns to the GRUB boot menu. In addition, the following error is logged in the Hyper-V event log: This error occurs due to a UEFI firmware bug on the Hyper-V host. To work around this problem, use Hyper-V Server 2019 as the host. (BZ#1583445) GNOME Shell on Wayland performs slowly when using a software renderer When using a software renderer, GNOME Shell as a Wayland compositor ( GNOME Shell on Wayland ) does not use a cacheable framebuffer for rendering the screen. Consequently, GNOME Shell on Wayland is slow. To workaround the problem, go to the GNOME Display Manager (GDM) login screen and switch to a session that uses the X11 protocol instead. As a result, the Xorg display server, which uses cacheable memory, is used, and GNOME Shell on Xorg in the described situation performs faster compared to GNOME Shell on Wayland . (BZ#1737553) System crash may result in fadump configuration loss This issue is observed on systems where firmware-assisted dump (fadump) is enabled, and the boot partition is located on a journaling file system such as XFS. A system crash might cause the boot loader to load an older initrd that does not have the dump capturing support enabled. Consequently, after recovery, the system does not capture the vmcore file, which results in fadump configuration loss. To work around this problem: If /boot is a separate partition, perform the following: Restart the kdump service Run the following commands as the root user, or using a user account with CAP_SYS_ADMIN rights: If /boot is not a separate partition, reboot the system. (BZ#1723501) Potential risk when using the default value for ldap_id_use_start_tls option When using ldap:// without TLS for identity lookups, it can pose a risk for an attack vector. Particularly a man-in-the-middle (MITM) attack which could allow an attacker to impersonate a user by altering, for example, the UID or GID of an object returned in an LDAP search. Currently, the SSSD configuration option to enforce TLS, ldap_id_use_start_tls , defaults to false . Ensure that your setup operates in a trusted environment and decide if it is safe to use unencrypted communication for id_provider = ldap . Note id_provider = ad and id_provider = ipa are not affected as they use encrypted connections protected by SASL and GSSAPI. If it is not safe to use unencrypted communication, enforce TLS by setting the ldap_id_use_start_tls option to true in the /etc/sssd/sssd.conf file. The default behavior is planned to be changed in a future release of RHEL. (JIRA:RHELPLAN-155168) 6.7.15. Graphics infrastructures radeon fails to reset hardware correctly The radeon kernel driver currently does not reset hardware in the kexec context correctly. Instead, radeon falls over, which causes the rest of the kdump service to fail. To work around this problem, blacklist radeon in kdump by adding the following line to the /etc/kdump.conf file: Restart the machine and kdump . 
After starting kdump , the force_rebuild 1 line may be removed from the configuration file. Note that in this scenario, no graphics will be available during kdump , but kdump will work successfully. (BZ#1694705) 6.7.16. The web console Unprivileged users can access the Subscriptions page If a non-administrator navigates to the Subscriptions page of the web console, the web console displays a generic error message "Cockpit had an unexpected internal error". To work around this problem, sign in to the web console with a privileged user and make sure to check the Reuse my password for privileged tasks checkbox. ( BZ#1674337 ) 6.7.17. Virtualization Using cloud-init to provision virtual machines on Microsoft Azure fails Currently, it is not possible to use the cloud-init utility to provision a RHEL 8 virtual machine (VM) on the Microsoft Azure platform. To work around this problem, use one of the following methods: Use the WALinuxAgent package instead of cloud-init to provision VMs on Microsoft Azure. Add the following setting to the [main] section in the /etc/NetworkManager/NetworkManager.conf file: (BZ#1641190) RHEL 8 virtual machines on RHEL 7 hosts in some cases cannot be viewed in higher resolution than 1920x1200 Currently, when using a RHEL 8 virtual machine (VM) running on a RHEL 7 host system, certain methods of displaying the the graphical output of the VM, such as running the application in kiosk mode, cannot use greater resolution than 1920x1200. As a consequence, displaying VMs using those methods only works in resolutions up to 1920x1200, even if the host hardware supports higher resolutions. (BZ#1635295) Low GUI display performance in RHEL 8 virtual machines on a Windows Server 2019 host When using RHEL 8 as a guest operating system in graphical mode on a Windows Server 2019 host, the GUI display performance is low, and connecting to a console output of the guest currently takes significantly longer than expected. This is a known issue on Windows 2019 hosts and is pending a fix by Microsoft. To work around this problem, connect to the guest using SSH or use Windows Server 2016 as the host. (BZ#1706541) Installing RHEL virtual machines sometimes fails Under certain circumstances, RHEL 7 and RHEL 8 virtual machines created using the virt-install utility fail to boot if the --location option is used. To work around this problem, use the --extra-args option instead and specify an installation tree reachable by the network, for example: This ensures that the RHEL installer finds the installation files correctly. (BZ#1677019) Displaying multiple monitors of virtual machines that use Wayland is not possible with QXL Using the remote-viewer utility to display more than one monitor of a virtual machine (VM) that is using the Wayland display server causes the VM to become unresponsive and the Waiting for display status message to be displayed indefinitely. To work around this problem, use virtio-gpu instead of qxl as the GPU device for VMs that use Wayland. (BZ#1642887) virsh iface-\* commands do not work consistently Currently, virsh iface-* commands, such as virsh iface-start and virsh iface-destroy , frequently fail due to configuration dependencies. Therefore, it is recommended not to use virsh iface-\* commands for configuring and managing host network connections. Instead, use the NetworkManager program and its related management applications. 
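As recommended above, host interface management should go through NetworkManager rather than the virsh iface-* commands. For example, with nmcli (the connection name and address are placeholders):
    nmcli connection show
    nmcli connection up enp1s0
    nmcli connection modify enp1s0 ipv4.method manual ipv4.addresses 192.0.2.10/24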
(BZ#1664592) Customizing an ESXi VM using cloud-init and rebooting the VM causes IP setting loss and makes booting the VM very slow Currently, if the cloud-init service is used to modify a virtual machine (VM) that runs on the VMware ESXi hypervisor to use static IP and the VM is then cloned, the new cloned VM in some cases takes a very long time to reboot. This is caused by cloud-init rewriting the VM's static IP to DHCP and then searching for an available datasource. To work around this problem, you can uninstall cloud-init after the VM is booted for the first time. As a result, the subsequent reboots will not be slowed down. (BZ#1666961, BZ#1706482 ) RHEL 8 virtual machines sometimes cannot boot on Witherspoon hosts RHEL 8 virtual machines (VMs) that use the pseries-rhel7.6.0-sxxm machine type in some cases fail to boot on Power9 S922LC for HPC hosts (also known as Witherspoon) that use the DD2.2 or DD2.3 CPU. Attempting to boot such a VM instead generates the following error message: To work around this problem, configure the virtual machine's XML configuration as follows: ( BZ#1732726 , BZ#1751054 ) IBM POWER virtual machines do not work correctly with zero memory NUMA nodes Currently, when an IBM POWER virtual machine (VM) running on a RHEL 8 host is configured with a NUMA node that uses zero memory ( memory='0' ), the VM cannot boot. Therefore, Red Hat strongly recommends not using IBM POWER VMs with zero-memory NUMA nodes on RHEL 8. (BZ#1651474) Migrating a POWER9 guest from a RHEL 7-ALT host to RHEL 8 fails Currently, when migrating a POWER9 virtual machine from a RHEL 7-ALT host system to RHEL 8, the migration becomes unresponsive with a "Migration status: active" status. To work around this problem, disable Transparent Huge Pages (THP) on the RHEL 7-ALT host, which enables the migration to complete successfully. (BZ#1741436) SMT CPU topology is not detected by VMs when using host passthrough mode on AMD EPYC When a virtual machine (VM) boots with the CPU host passthrough mode on an AMD EPYC host, the TOPOEXT CPU feature flag is not present. Consequently, the VM is not able to detect a virtual CPU topology with multiple threads per core. To work around this problem, boot the VM with the EPYC CPU model instead of host passthrough. ( BZ#1740002 ) Virtual machines sometimes fail to start when using many virtio-blk disks Adding a large number of virtio-blk devices to a virtual machine (VM) may exhaust the number of interrupt vectors available in the platform. If this occurs, the VM's guest OS fails to boot, and displays a dracut-initqueue[392]: Warning: Could not boot error. ( BZ#1719687 ) | [
"subscription-manager list --available",
"subscription-manager list --consumed",
"/usr/share/doc/vdo/examples/ansible/vdo.py",
"/usr/lib/python3.6/site-packages/ansible/modules/system/vdo.py",
"yum module install php:7.3",
"yum module install ruby:2.6",
"yum module install nodejs:12",
"yum module enable mariadb-devel:10.3 yum install Judy-devel",
"yum module install nginx:1.16",
"yum install gcc-toolset-9",
"scl enable gcc-toolset-9 tool",
"scl enable gcc-toolset-9 bash",
"virt-xml testguest --start --no-define --edit --boot network",
"\"Failed to add rule for system call ...\"",
"DIMM location: not present. DMI handle: 0x<ADDRESS>",
"'checkpointing a container requires at least CRIU 31100'",
"smartpqi 0000:23:00.0: failed to allocate PQI error buffer",
"xfs_info /mount-point | grep ftype",
"<memtune> <hard_limit unit='KiB'>N</hard_limit> </memtune>",
"update-crypto-policies --set LEGACY",
"~]# yum install network-scripts",
"Example contents of the `zipl.conf` file after the change:",
"[defaultboot] defaultauto prompt=1 timeout=5 target=/boot secure=0",
"url --url=https://SERVER/PATH --noverifyssl",
"inst.ks=<URL> inst.noverifyssl",
"skip_if_unavailable=false",
"--setopt=*.skip_if_unavailable=True",
"systemctl start systemd-resolved",
"dnf module enable libselinux-python dnf install libselinux-python",
"dnf module install libselinux-python:2.8/common",
"NONE:+VERS-ALL:-VERS-TLS1.3:+MAC-ALL:+DHE-RSA:+AES-256-GCM:+SIGN-RSA-SHA384:+COMP-ALL:+GROUP-ALL",
"NONE:+VERS-ALL:-VERS-TLS1.3:+MAC-ALL:+ECDHE-RSA:+AES-128-CBC:+SIGN-RSA-SHA1:+COMP-ALL:+GROUP-ALL",
"SignatureAlgorithms = RSA+SHA256:RSA+SHA512:RSA+SHA384:ECDSA+SHA256:ECDSA+SHA512:ECDSA+SHA384 MaxProtocol = TLSv1.2",
"Error generating remediation role .../remediation.sh: Exit code of oscap was 1: [output truncated]",
"echo 6000 > /proc/sys/net/core/netdev_max_backlog",
"opcache.huge_code_pages=0",
"enable_files_domain=False",
"\"UnboundLocalError: local variable 'sig' referenced before assignment\"",
"The guest operating system reported that it failed with the following error code: 0x1E",
"fsfreeze -f fsfreeze -u",
"dracut_args --omit-drivers \"radeon\" force_rebuild 1",
"[main] dhcp=dhclient",
"--extra-args=\"inst.repo=https://some/url/tree/path\"",
"qemu-kvm: Requested safe indirect branch capability level not supported by kvm",
"<domain type='qemu' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'> <qemu:commandline> <qemu:arg value='-machine'/> <qemu:arg value='cap-ibs=workaround'/> </qemu:commandline>"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.1_release_notes/RHEL-8_1_0_release |
24.5. Certificate Authority ACL Rules | 24.5. Certificate Authority ACL Rules Certificate Authority access control list (CA ACL) rules define which profiles can be used to issue certificates to which users, services, or hosts. By associating profiles, principals, and groups, CA ACLs permit principals or groups to request certificates using particular profiles: an ACL can permit access to multiple profiles an ACL can have multiple users, services, hosts, user groups, and host groups associated with it For example, using CA ACLs, the administrator can restrict use of a profile intended for employees working from an office located in London only to users that are members of the London office-related group. Note By combining certificate profiles, described in Section 24.4, "Certificate Profiles" , and CA ACLs, the administrator can define and control access to custom certificate profiles. For a description of using profiles and CA ACLs to issue user certificates, see Section 24.6, "Using Certificate Profiles and ACLs to Issue User Certificates with the IdM CAs" . 24.5.1. CA ACL Management from the Command Line The caacl plug-in for management of CA ACL rules allows privileged users to add, display, modify, or delete a specified CA ACL. To display all commands supported by the plug-in, run the ipa caacl command: Note that to perform the caacl operations, you must be operating as a user who has the required permissions. IdM includes the following CA ACL-related permissions by default: System: Read CA ACLs Enables the user to read all attributes of the CA ACL. System: Add CA ACL Enables the user to add a new CA ACL. System: Delete CA ACL Enables the user to delete an existing CA ACL. System: Modify CA ACL Enables the user to modify an attribute of the CA ACL and to disable or enable the CA ACL. System: Manage CA ACL membership Enables the user to manage the CA, profile, user, host, and service membership in the CA ACL. All these permissions are included in the default CA Administrator privilege. For more information on IdM role-based access controls and managing permissions, see Section 10.4, "Defining Role-Based Access Controls" . This section describes only the most important aspects of using the ipa caacl commands for CA ACL management. For complete information about a command, run it with the --help option added, for example: Creating CA ACLs To create a new CA ACL, use the ipa caacl-add command. Running the command without any options starts an interactive session in which the ipa caacl-add script prompts your for the required information about the new CA ACL. New CA ACLs are enabled by default. The most notable options accepted by ipa caacl-add are the options that associate a CA ACL with a CA, certificate profile, user, host, or service category: --cacat --profilecat --usercat --hostcat --servicecat IdM only accepts the all value with these options, which associates the CA ACL with all CAs, profiles, users, hosts, or services. For example, to associate the CA ACL with all users and user groups: CA, profile, user, host, and service categories are an alternative to adding particular objects or groups of objects to a CA ACL, which is described in the section called "Adding Entries to CA ACLs and Removing Entries from CA ACLs" . Note that it is not possible to use a category and also add objects or groups of the same type; for example, you cannot use the --usercat=all option and then add a user to the CA ACL with the ipa caacl-add-user --users= user_name command. 
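As an illustrative sequence only — the ACL, profile, and group names are placeholders — creating a CA ACL and scoping it to a certificate profile and a user group typically combines the commands described above:
ipa caacl-add smime_acl
ipa caacl-add-profile smime_acl --certprofiles smime
ipa caacl-add-user smime_acl --groups smime_users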
Note Requesting a certificate for a user or group using a certificate profile fails if the user or group are not added to the corresponding CA ACL. For example: You must either add the user or group to the CA ACL, as described in the section called "Adding Entries to CA ACLs and Removing Entries from CA ACLs" , or associate the CA ACL with the all user category. Displaying CA ACLs To display all CA ACLs, use the ipa caacl-find command: Note that ipa caacl-find accepts the --cacat , --profilecat , --usercat , --hostcat , and --servicecat options, which can be used to filter the results of the search to CA ACLs with the corresponding CA, certificate profile, user, host, or service category. Note that IdM only accepts the all category with these options. For more information about the options, see the section called "Creating CA ACLs" . To display information about a particular CA ACL, use the ipa caacl-show command: Modifying CA ACLs To modify an existing CA ACL, use the ipa caacl-mod command. Pass the required modifications using the command-line options accepted by ipa caacl-mod . For example, to modify the description of a CA ACL and associate the CA ACL with all certificate profiles: The most notable options accepted by ipa caacl-mod are the --cacat , --profilecat , --usercat , --hostcat , and --servicecat options. For a description of these options, see the section called "Creating CA ACLs" . Disabling and Enabling CA ACLs To disable a CA ACL, use the ipa caacl-disable command: A disabled CA ACL is not applied and cannot be used to request a certificate. Disabling a CA ACL does not remove it from IdM. To enable a disabled CA ACL, use the ipa caacl-enable command: Deleting CA ACLs To remove an existing CA ACL, use the ipa caacl-del command: Adding Entries to CA ACLs and Removing Entries from CA ACLs Using the ipa caacl-add-* and ipa caacl-remove-* commands, you can add new entries to a CA ACL or remove existing entries. ipa caacl-add-ca and ipa caacl-remove-ca Adds or removes a CA. ipa caacl-add-host and ipa caacl-remove-host Adds or removes a host or host group. ipa caacl-add-profile and ipa caacl-remove-profile Adds or removes a profile. ipa caacl-add-service and ipa caacl-remove-service Adds or removes a service. ipa caacl-add-user and ipa caacl-remove-user Adds or removes a user or group. For example: Note that it is not possible to add an object or a group of objects to a CA ACL and also use a category of the same object, as described in the section called "Creating CA ACLs" ; these settings are mutually exclusive. For example, if you attempt to run the ipa caacl-add-user --users= user_name command on a CA ACL specified with the --usercat=all option, the command fails: Note Requesting a certificate for a user or group using a certificate profile fails if the user or group are not added to the corresponding CA ACL. For example: You must either add the user or group to the CA ACL, or associate the CA ACL with the all user category, as described in the section called "Creating CA ACLs" . For detailed information on the required syntax for these commands and the available options, run the commands with the --help option added. For example: 24.5.2. CA ACL Management from the Web UI To manage CA ACLs from the IdM web UI: Open the Authentication tab and the Certificates subtab. Open the CA ACLs section. Figure 24.9. 
CA ACL Rules Management in the Web UI In the CA ACLs section, you can add new CA ACLs, display information about existing CA ACLs, modify their attributes, as well as enable, disable, or delete selected CA ACLs. For example, to modify an existing CA ACL: Click on the name of the CA ACL to open the CA ACL configuration page. In the CA ACL configuration page, fill in the required information. The Profiles and Permitted to have certificates issued sections allow you to associate the CA ACL with certificate profiles, users or user groups, hosts or host groups, or services. You can either add these objects using the Add buttons, or select the Anyone option to associate the CA ACL with all users, hosts, or services. Click Save to confirm the new configuration. Figure 24.10. Modifying a CA ACL Rule in the Web UI | [
"ipa caacl Manage CA ACL rules. EXAMPLES: Create a CA ACL \"test\" that grants all users access to the \"UserCert\" profile: ipa caacl-add test --usercat=all ipa caacl-add-profile test --certprofiles UserCert Display the properties of a named CA ACL: ipa caacl-show test Create a CA ACL to let user \"alice\" use the \"DNP3\" profile on \"DNP3-CA\": ipa caacl-add alice_dnp3 ipa caacl-add-ca alice_dnp3 --cas DNP3-CA ipa caacl-add-profile alice_dnp3 --certprofiles DNP3 ipa caacl-add-user alice_dnp3 --user=alice",
"ipa caacl-mod --help Usage: ipa [global-options] caacl-mod NAME [options] Modify a CA ACL. Options: -h, --help show this help message and exit --desc=STR Description --cacat=['all'] CA category the ACL applies to --profilecat=['all'] Profile category the ACL applies to",
"ipa caacl-add ACL name: smime_acl ------------------------ Added CA ACL \"smime_acl\" ------------------------ ACL name: smime_acl Enabled: TRUE",
"ipa caacl-add ca_acl_name --usercat=all",
"ipa cert-request CSR-FILE --principal user --profile-id profile_id ipa: ERROR Insufficient access: Principal 'user' is not permitted to use CA '.' with profile 'profile_id' for certificate issuance.",
"ipa caacl-find ----------------- 2 CA ACLs matched ----------------- ACL name: hosts_services_caIPAserviceCert Enabled: TRUE",
"ipa caacl-show ca_acl_name ACL name: ca_acl_name Enabled: TRUE Host category: all",
"ipa caacl-mod ca_acl_name --desc=\"New description\" --profilecat=all --------------------------- Modified CA ACL \"ca_acl_name\" --------------------------- ACL name: smime_acl Description: New description Enabled: TRUE Profile category: all",
"ipa caacl-disable ca_acl_name --------------------------- Disabled CA ACL \"ca_acl_name\" ---------------------------",
"ipa caacl-enable ca_acl_name --------------------------- Enabled CA ACL \"ca_acl_name\" ---------------------------",
"ipa caacl-del ca_acl_name",
"ipa caacl-add-user ca_acl_name --groups= group_name",
"ipa caacl-add-user ca_acl_name --users= user_name ipa: ERROR: users cannot be added when user category='all'",
"ipa cert-request CSR-FILE --principal user --profile-id profile_id ipa: ERROR Insufficient access: Principal 'user' is not permitted to use CA '.' with profile 'profile_id' for certificate issuance.",
"ipa caacl-add-user --help"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/ca-acls |
Keeping Red Hat OpenStack Platform Updated | Keeping Red Hat OpenStack Platform Updated Red Hat OpenStack Platform 17.0 Performing minor updates of Red Hat OpenStack Platform OpenStack Documentation Team [email protected] Abstract You can perform a minor update of your Red Hat OpenStack Platform (RHOSP) environment to keep it updated with the latest packages and containers. | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/keeping_red_hat_openstack_platform_updated/index |
8.3. Working with Transaction History | 8.3. Working with Transaction History The yum history command allows users to review information about a timeline of Yum transactions, the dates and times they occurred, the number of packages affected, whether transactions succeeded or were aborted, and if the RPM database was changed between transactions. Additionally, this command can be used to undo or redo certain transactions. 8.3.1. Listing Transactions To display a list of twenty most recent transactions, as root , either run yum history with no additional arguments, or type the following at a shell prompt: yum history list To display all transactions, add the all keyword: yum history list all To display only transactions in a given range, use the command in the following form: yum history list start_id .. end_id You can also list only transactions regarding a particular package or packages. To do so, use the command with a package name or a glob expression: yum history list glob_expression For example, the list of the first five transactions looks as follows: All forms of the yum history list command produce tabular output with each row consisting of the following columns: ID - an integer value that identifies a particular transaction. Login user - the name of the user whose login session was used to initiate a transaction. This information is typically presented in the Full Name < username> form. For transactions that were not issued by a user (such as an automatic system update), System <unset> is used instead. Date and time - the date and time when a transaction was issued. Action(s) - a list of actions that were performed during a transaction as described in Table 8.1, "Possible values of the Action(s) field" . Altered - the number of packages that were affected by a transaction, possibly followed by additional information as described in Table 8.2, "Possible values of the Altered field" . Table 8.1. Possible values of the Action(s) field Action Abbreviation Description Downgrade D At least one package has been downgraded to an older version. Erase E At least one package has been removed. Install I At least one new package has been installed. Obsoleting O At least one package has been marked as obsolete. Reinstall R At least one package has been reinstalled. Update U At least one package has been updated to a newer version. Table 8.2. Possible values of the Altered field Symbol Description < Before the transaction finished, the rpmdb database was changed outside Yum. > After the transaction finished, the rpmdb database was changed outside Yum. * The transaction failed to finish. # The transaction finished successfully, but yum returned a non-zero exit code. E The transaction finished successfully, but an error or a warning was displayed. P The transaction finished successfully, but problems already existed in the rpmdb database. s The transaction finished successfully, but the --skip-broken command-line option was used and certain packages were skipped. Yum also allows you to display a summary of all past transactions. To do so, run the command in the following form as root : yum history summary To display only transactions in a given range, type: yum history summary start_id .. 
end_id Similarly to the yum history list command, you can also display a summary of transactions regarding a certain package or packages by supplying a package name or a glob expression: yum history summary glob_expression For instance, a summary of the transaction history displayed above would look like the following: All forms of the yum history summary command produce simplified tabular output similar to the output of yum history list . As shown above, both yum history list and yum history summary are oriented towards transactions, and although they allow you to display only transactions related to a given package or packages, they lack important details, such as package versions. To list transactions from the perspective of a package, run the following command as root : yum history package-list glob_expression For example, to trace the history of subscription-manager and related packages, type the following at a shell prompt: In this example, three packages were installed during the initial system installation: subscription-manager , subscription-manager-firstboot , and subscription-manager-gnome . In the third transaction, all these packages were updated from version 0.95.11 to version 0.95.17. | [
"~]# yum history list 1..5 Loaded plugins: product-id, refresh-packagekit, subscription-manager ID | Login user | Date and time | Action(s) | Altered ------------------------------------------------------------------------------- 5 | Jaromir ... <jhradilek> | 2011-07-29 15:33 | Install | 1 4 | Jaromir ... <jhradilek> | 2011-07-21 15:10 | Install | 1 3 | Jaromir ... <jhradilek> | 2011-07-16 15:27 | I, U | 73 2 | System <unset> | 2011-07-16 15:19 | Update | 1 1 | System <unset> | 2011-07-16 14:38 | Install | 1106 history list",
"~]# yum history summary 1..5 Loaded plugins: product-id, refresh-packagekit, subscription-manager Login user | Time | Action(s) | Altered ------------------------------------------------------------------------------- Jaromir ... <jhradilek> | Last day | Install | 1 Jaromir ... <jhradilek> | Last week | Install | 1 Jaromir ... <jhradilek> | Last 2 weeks | I, U | 73 System <unset> | Last 2 weeks | I, U | 1107 history summary",
"~]# yum history package-list subscription-manager\\* Loaded plugins: product-id, refresh-packagekit, subscription-manager ID | Action(s) | Package ------------------------------------------------------------------------------- 3 | Updated | subscription-manager-0.95.11-1.el6.x86_64 3 | Update | 0.95.17-1.el6_1.x86_64 3 | Updated | subscription-manager-firstboot-0.95.11-1.el6.x86_64 3 | Update | 0.95.17-1.el6_1.x86_64 3 | Updated | subscription-manager-gnome-0.95.11-1.el6.x86_64 3 | Update | 0.95.17-1.el6_1.x86_64 1 | Install | subscription-manager-0.95.11-1.el6.x86_64 1 | Install | subscription-manager-firstboot-0.95.11-1.el6.x86_64 1 | Install | subscription-manager-gnome-0.95.11-1.el6.x86_64 history package-list"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-Yum-Transaction_History |
Chapter 8. Debezium connector for PostgreSQL | Chapter 8. Debezium connector for PostgreSQL The Debezium PostgreSQL connector captures row-level changes in the schemas of a PostgreSQL database. For information about the PostgreSQL versions that are compatible with the connector, see the Debezium Supported Configurations page . The first time it connects to a PostgreSQL server or cluster, the connector takes a consistent snapshot of all schemas. After that snapshot is complete, the connector continuously captures row-level changes that insert, update, and delete database content and that were committed to a PostgreSQL database. The connector generates data change event records and streams them to Kafka topics. For each table, the default behavior is that the connector streams all generated events to a separate Kafka topic for that table. Applications and services consume data change event records from that topic. Information and procedures for using a Debezium PostgreSQL connector is organized as follows: Section 8.1, "Overview of Debezium PostgreSQL connector" Section 8.2, "How Debezium PostgreSQL connectors work" Section 8.3, "Descriptions of Debezium PostgreSQL connector data change events" Section 8.4, "How Debezium PostgreSQL connectors map data types" Section 8.5, "Setting up PostgreSQL to run a Debezium connector" Section 8.6, "Deployment of Debezium PostgreSQL connectors" Section 8.7, "Monitoring Debezium PostgreSQL connector performance" Section 8.8, "How Debezium PostgreSQL connectors handle faults and problems" 8.1. Overview of Debezium PostgreSQL connector PostgreSQL's logical decoding feature was introduced in version 9.4. It is a mechanism that allows the extraction of the changes that were committed to the transaction log and the processing of these changes in a user-friendly manner with the help of an output plug-in . The output plug-in enables clients to consume the changes. The PostgreSQL connector contains two main parts that work together to read and process database changes: pgoutput is the standard logical decoding output plug-in in PostgreSQL 10+. This is the only supported logical decoding output plug-in in this Debezium release. This plug-in is maintained by the PostgreSQL community, and used by PostgreSQL itself for logical replication . This plug-in is always present so no additional libraries need to be installed. The Debezium connector interprets the raw replication event stream directly into change events. Java code (the actual Kafka Connect connector) that reads the changes produced by the logical decoding output plug-in by using PostgreSQL's streaming replication protocol and the PostgreSQL JDBC driver . The connector produces a change event for every row-level insert, update, and delete operation that was captured and sends change event records for each table in a separate Kafka topic. Client applications read the Kafka topics that correspond to the database tables of interest, and can react to every row-level event they receive from those topics. PostgreSQL normally purges write-ahead log (WAL) segments after some period of time. This means that the connector does not have the complete history of all changes that have been made to the database. Therefore, when the PostgreSQL connector first connects to a particular PostgreSQL database, it starts by performing a consistent snapshot of each of the database schemas. After the connector completes the snapshot, it continues streaming changes from the exact point at which the snapshot was made. 
This way, the connector starts with a consistent view of all of the data, and does not omit any changes that were made while the snapshot was being taken. The connector is tolerant of failures. As the connector reads changes and produces events, it records the WAL position for each event. If the connector stops for any reason (including communication failures, network problems, or crashes), upon restart the connector continues reading the WAL where it last left off. This includes snapshots. If the connector stops during a snapshot, the connector begins a new snapshot when it restarts. Important The connector relies on and reflects the PostgreSQL logical decoding feature, which has the following limitations: Logical decoding does not support DDL changes. This means that the connector is unable to report DDL change events back to consumers. Logical decoding replication slots are supported on only primary servers. When there is a cluster of PostgreSQL servers, the connector can run on only the active primary server. It cannot run on hot or warm standby replicas. If the primary server fails or is demoted, the connector stops. After the primary server has recovered, you can restart the connector. If a different PostgreSQL server has been promoted to primary , adjust the connector configuration before restarting the connector. Behavior when things go wrong describes how the connector responds if there is a problem. Important Debezium currently supports databases with UTF-8 character encoding only. With a single byte character encoding, it is not possible to correctly process strings that contain extended ASCII code characters. 8.2. How Debezium PostgreSQL connectors work To optimally configure and run a Debezium PostgreSQL connector, it is helpful to understand how the connector performs snapshots, streams change events, determines Kafka topic names, and uses metadata. Details are in the following topics: Section 8.2.2, "How Debezium PostgreSQL connectors perform database snapshots" Section 8.2.3, "Ad hoc snapshots" Section 8.2.4, "Incremental snapshots" Section 8.2.5, "How Debezium PostgreSQL connectors stream change event records" Section 8.2.6, "Default names of Kafka topics that receive Debezium PostgreSQL change event records" Section 8.2.7, "Debezium PostgreSQL connector-generated events that represent transaction boundaries" 8.2.1. Security for PostgreSQL connector To use the Debezium connector to stream changes from a PostgreSQL database, the connector must operate with specific privileges in the database. Although one way to grant the necessary privileges is to provide the user with superuser privileges, doing so potentially exposes your PostgreSQL data to unauthorized access. Rather than granting excessive privileges to the Debezium user, it is best to create a dedicated Debezium replication user to which you grant specific privileges. For more information about configuring privileges for the Debezium PostgreSQL user, see Setting up permissions . For more information about PostgreSQL logical replication security, see the PostgreSQL documentation . 8.2.2. How Debezium PostgreSQL connectors perform database snapshots Most PostgreSQL servers are configured to not retain the complete history of the database in the WAL segments. This means that the PostgreSQL connector would be unable to see the entire history of the database by reading only the WAL. Consequently, the first time that the connector starts, it performs an initial consistent snapshot of the database. 
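For orientation, a minimal Kafka Connect registration for this connector might look like the following sketch; the host name, credentials, topic prefix, and table list are placeholder assumptions, and snapshot.mode is shown only to make the snapshot behavior explicit (its options are described in the table that follows):
{
  "name": "fulfillment-connector",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "database.hostname": "postgres.example.com",
    "database.port": "5432",
    "database.user": "debezium",
    "database.password": "<password>",
    "database.dbname": "postgres",
    "topic.prefix": "fulfillment",
    "plugin.name": "pgoutput",
    "snapshot.mode": "initial",
    "table.include.list": "inventory.customers"
  }
}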
You can find more information about snapshots in the following sections: Section 8.2.3, "Ad hoc snapshots" Section 8.2.4, "Incremental snapshots" Default workflow behavior of initial snapshots The default behavior for performing a snapshot consists of the following steps. You can change this behavior by setting the snapshot.mode connector configuration property to a value other than initial . Start a transaction with a SERIALIZABLE, READ ONLY, DEFERRABLE isolation level to ensure that subsequent reads in this transaction are against a single consistent version of the data. Any changes to the data due to subsequent INSERT , UPDATE , and DELETE operations by other clients are not visible to this transaction. Read the current position in the server's transaction log. Scan the database tables and schemas, generate a READ event for each row and write that event to the appropriate table-specific Kafka topic. Commit the transaction. Record the successful completion of the snapshot in the connector offsets. If the connector fails, is rebalanced, or stops after Step 1 begins but before Step 5 completes, upon restart the connector begins a new snapshot. After the connector completes its initial snapshot, the PostgreSQL connector continues streaming from the position that it read in Step 2. This ensures that the connector does not miss any updates. If the connector stops again for any reason, upon restart, the connector continues streaming changes from where it previously left off. Table 8.1. Options for the snapshot.mode connector configuration property Option Description always The connector always performs a snapshot when it starts. After the snapshot completes, the connector continues streaming changes from step 3 in the above sequence. This mode is useful in these situations: It is known that some WAL segments have been deleted and are no longer available. After a cluster failure, a new primary has been promoted. The always snapshot mode ensures that the connector does not miss any changes that were made after the new primary had been promoted but before the connector was restarted on the new primary. never The connector never performs snapshots. When a connector is configured this way, its behavior when it starts is as follows. If there is a previously stored LSN in the Kafka offsets topic, the connector continues streaming changes from that position. If no LSN has been stored, the connector starts streaming changes from the point in time when the PostgreSQL logical replication slot was created on the server. The never snapshot mode is useful only when you know all data of interest is still reflected in the WAL. initial (default) The connector performs a database snapshot when no Kafka offsets topic exists. After the database snapshot completes the Kafka offsets topic is written. If there is a previously stored LSN in the Kafka offsets topic, the connector continues streaming changes from that position. initial_only The connector performs a database snapshot and stops before streaming any change event records. If the connector had started but did not complete a snapshot before stopping, the connector restarts the snapshot process and stops when the snapshot completes. exported Deprecated, all modes are lockless. 8.2.3. Ad hoc snapshots By default, a connector runs an initial snapshot operation only after it starts for the first time. Following this initial snapshot, under normal circumstances, the connector does not repeat the snapshot process. 
Any future change event data that the connector captures comes in through the streaming process only. However, in some situations the data that the connector obtained during the initial snapshot might become stale, lost, or incomplete. To provide a mechanism for recapturing table data, Debezium includes an option to perform ad hoc snapshots. The following changes in a database might be cause for performing an ad hoc snapshot: The connector configuration is modified to capture a different set of tables. Kafka topics are deleted and must be rebuilt. Data corruption occurs due to a configuration error or some other problem. You can re-run a snapshot for a table for which you previously captured a snapshot by initiating a so-called ad-hoc snapshot . Ad hoc snapshots require the use of signaling tables . You initiate an ad hoc snapshot by sending a signal request to the Debezium signaling table. When you initiate an ad hoc snapshot of an existing table, the connector appends content to the topic that already exists for the table. If a previously existing topic was removed, Debezium can create a topic automatically if automatic topic creation is enabled. Ad hoc snapshot signals specify the tables to include in the snapshot. The snapshot can capture the entire contents of the database, or capture only a subset of the tables in the database. Also, the snapshot can capture a subset of the contents of the table(s) in the database. You specify the tables to capture by sending an execute-snapshot message to the signaling table. Set the type of the execute-snapshot signal to incremental , and provide the names of the tables to include in the snapshot, as described in the following table: Table 8.2. Example of an ad hoc execute-snapshot signal record Field Default Value type incremental Specifies the type of snapshot that you want to run. Setting the type is optional. Currently, you can request only incremental snapshots. data-collections N/A An array that contains regular expressions matching the fully-qualified names of the table to be snapshotted. The format of the names is the same as for the signal.data.collection configuration option. additional-condition N/A An optional string, which specifies a condition based on the column(s) of the table(s), to capture a subset of the contents of the table(s). surrogate-key N/A An optional string that specifies the column name that the connector uses as the primary key of a table during the snapshot process. Triggering an ad hoc snapshot You initiate an ad hoc snapshot by adding an entry with the execute-snapshot signal type to the signaling table. After the connector processes the message, it begins the snapshot operation. The snapshot process reads the first and last primary key values and uses those values as the start and end point for each table. Based on the number of entries in the table, and the configured chunk size, Debezium divides the table into chunks, and proceeds to snapshot each chunk, in succession, one at a time. Currently, the execute-snapshot action type triggers incremental snapshots only. For more information, see Incremental snapshots . 8.2.4. Incremental snapshots To provide flexibility in managing snapshots, Debezium includes a supplementary snapshot mechanism, known as incremental snapshotting . Incremental snapshots rely on the Debezium mechanism for sending signals to a Debezium connector . 
In an incremental snapshot, instead of capturing the full state of a database all at once, as in an initial snapshot, Debezium captures each table in phases, in a series of configurable chunks. You can specify the tables that you want the snapshot to capture and the size of each chunk . The chunk size determines the number of rows that the snapshot collects during each fetch operation on the database. The default chunk size for incremental snapshots is 1024 rows. As an incremental snapshot proceeds, Debezium uses watermarks to track its progress, maintaining a record of each table row that it captures. This phased approach to capturing data provides the following advantages over the standard initial snapshot process: You can run incremental snapshots in parallel with streamed data capture, instead of postponing streaming until the snapshot completes. The connector continues to capture near real-time events from the change log throughout the snapshot process, and neither operation blocks the other. If the progress of an incremental snapshot is interrupted, you can resume it without losing any data. After the process resumes, the snapshot begins at the point where it stopped, rather than recapturing the table from the beginning. You can run an incremental snapshot on demand at any time, and repeat the process as needed to adapt to database updates. For example, you might re-run a snapshot after you modify the connector configuration to add a table to its table.include.list property. Incremental snapshot process When you run an incremental snapshot, Debezium sorts each table by primary key and then splits the table into chunks based on the configured chunk size . Working chunk by chunk, it then captures each table row in a chunk. For each row that it captures, the snapshot emits a READ event. That event represents the value of the row when the snapshot for the chunk began. As a snapshot proceeds, it's likely that other processes continue to access the database, potentially modifying table records. To reflect such changes, INSERT , UPDATE , or DELETE operations are committed to the transaction log as usual. Similarly, the ongoing Debezium streaming process continues to detect these change events and emits corresponding change event records to Kafka. How Debezium resolves collisions among records with the same primary key In some cases, the UPDATE or DELETE events that the streaming process emits are received out of sequence. That is, the streaming process might emit an event that modifies a table row before the snapshot captures the chunk that contains the READ event for that row. When the snapshot eventually emits the corresponding READ event for the row, its value is already superseded. To ensure that incremental snapshot events that arrive out of sequence are processed in the correct logical order, Debezium employs a buffering scheme for resolving collisions. Only after collisions between the snapshot events and the streamed events are resolved does Debezium emit an event record to Kafka. Snapshot window To assist in resolving collisions between late-arriving READ events and streamed events that modify the same table row, Debezium employs a so-called snapshot window . The snapshot window demarcates the interval during which an incremental snapshot captures data for a specified table chunk. Before the snapshot window for a chunk opens, Debezium follows its usual behavior and emits events from the transaction log directly downstream to the target Kafka topic.
But from the moment that the snapshot for a particular chunk opens, until it closes, Debezium performs a de-duplication step to resolve collisions between events that have the same primary key. For each data collection, Debezium emits two types of events, and stores the records for them both in a single destination Kafka topic. The snapshot records that it captures directly from a table are emitted as READ operations. Meanwhile, as users continue to update records in the data collection, and the transaction log is updated to reflect each commit, Debezium emits UPDATE or DELETE operations for each change. As the snapshot window opens, and Debezium begins processing a snapshot chunk, it delivers snapshot records to a memory buffer. During the snapshot window, the primary keys of the READ events in the buffer are compared to the primary keys of the incoming streamed events. If no match is found, the streamed event record is sent directly to Kafka. If Debezium detects a match, it discards the buffered READ event, and writes the streamed record to the destination topic, because the streamed event logically supersedes the static snapshot event. After the snapshot window for the chunk closes, the buffer contains only READ events for which no related transaction log events exist. Debezium emits these remaining READ events to the table's Kafka topic. The connector repeats the process for each snapshot chunk. Warning The Debezium connector for PostgreSQL does not support schema changes while an incremental snapshot is running. If a schema change is performed before the incremental snapshot starts but after the signal is sent, set the passthrough configuration option database.autosave to conservative so that the schema change is processed correctly. 8.2.4.1. Triggering an incremental snapshot Currently, the only way to initiate an incremental snapshot is to send an ad hoc snapshot signal to the signaling table on the source database. You submit a signal to the signaling table as SQL INSERT queries. After Debezium detects the change in the signaling table, it reads the signal, and runs the requested snapshot operation. The query that you submit specifies the tables to include in the snapshot, and, optionally, specifies the kind of snapshot operation. Currently, the only valid option for snapshot operations is the default value, incremental . To specify the tables to include in the snapshot, provide a data-collections array that lists the tables or an array of regular expressions used to match tables, for example, {"data-collections": ["public.MyFirstTable", "public.MySecondTable"]} The data-collections array for an incremental snapshot signal has no default value. If the data-collections array is empty, Debezium detects that no action is required and does not perform a snapshot. Note If the name of a table that you want to include in a snapshot contains a dot ( . ) in the name of the database, schema, or table, to add the table to the data-collections array, you must escape each part of the name in double quotes. For example, to include a table that exists in the public schema and that has the name My.Table , use the following format: "public"."My.Table" . Prerequisites Signaling is enabled . A signaling data collection exists on the source database. The signaling data collection is specified in the signal.data.collection property.
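As a sketch of what these prerequisites can look like in practice — the schema and table names are placeholders, and the column sizes follow the commonly documented signaling-table layout rather than anything quoted in this chapter:
CREATE TABLE myschema.debezium_signal (
    id VARCHAR(42) PRIMARY KEY,
    type VARCHAR(32) NOT NULL,
    data VARCHAR(2048) NULL
);
-- referenced from the connector configuration as:
-- "signal.data.collection": "myschema.debezium_signal"
The SQL examples that follow assume this table name.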
Using a source signaling channel to trigger an incremental snapshot Send a SQL query to add the ad hoc incremental snapshot request to the signaling table: INSERT INTO <signalTable> (id, type, data) VALUES ( '<id>' , '<snapshotType>' , '{"data-collections": [" <tableName> "," <tableName> "],"type":" <snapshotType> ","additional-condition":" <additional-condition> "}'); For example, INSERT INTO myschema.debezium_signal (id, type, data) 1 values ('ad-hoc-1', 2 'execute-snapshot', 3 '{"data-collections": ["schema1.table1", "schema2.table2"], 4 "type":"incremental"}, 5 "additional-condition":"color=blue"}'); 6 The values of the id , type , and data parameters in the command correspond to the fields of the signaling table . The following table describes the parameters in the example: Table 8.3. Descriptions of fields in a SQL command for sending an incremental snapshot signal to the signaling table Item Value Description 1 myschema.debezium_signal Specifies the fully-qualified name of the signaling table on the source database. 2 ad-hoc-1 The id parameter specifies an arbitrary string that is assigned as the id identifier for the signal request. Use this string to identify logging messages to entries in the signaling table. Debezium does not use this string. Rather, during the snapshot, Debezium generates its own id string as a watermarking signal. 3 execute-snapshot The type parameter specifies the operation that the signal is intended to trigger. 4 data-collections A required component of the data field of a signal that specifies an array of table names or regular expressions to match table names to include in the snapshot. The array lists regular expressions which match tables by their fully-qualified names, using the same format as you use to specify the name of the connector's signaling table in the signal.data.collection configuration property. 5 incremental An optional type component of the data field of a signal that specifies the kind of snapshot operation to run. Currently, the only valid option is the default value, incremental . If you do not specify a value, the connector runs an incremental snapshot. 6 additional-condition An optional string, which specifies a condition based on the column(s) of the table(s), to capture a subset of the contents of the tables. For more information about the additional-condition parameter, see Ad hoc incremental snapshots with additional-condition . Ad hoc incremental snapshots with additional-condition If you want a snapshot to include only a subset of the content in a table, you can modify the signal request by appending an additional-condition parameter to the snapshot signal. The SQL query for a typical snapshot takes the following form: SELECT * FROM <tableName> .... By adding an additional-condition parameter, you append a WHERE condition to the SQL query, as in the following example: SELECT * FROM <tableName> WHERE <additional-condition> .... 
The following example shows a SQL query to send an ad hoc incremental snapshot request with an additional condition to the signaling table: INSERT INTO <signalTable> (id, type, data) VALUES ( '<id>' , '<snapshotType>' , '{"data-collections": [" <tableName> "," <tableName> "],"type":" <snapshotType> ","additional-condition":" <additional-condition> "}'); For example, suppose you have a products table that contains the following columns: id (primary key) color quantity If you want an incremental snapshot of the products table to include only the data items where color=blue , you can use the following SQL statement to trigger the snapshot: INSERT INTO myschema.debezium_signal (id, type, data) VALUES('ad-hoc-1', 'execute-snapshot', '{"data-collections": ["schema1.products"],"type":"incremental", "additional-condition":"color=blue"}'); The additional-condition parameter also enables you to pass conditions that are based on more than one column. For example, using the products table from the example, you can submit a query that triggers an incremental snapshot that includes the data of only those items for which color=blue and quantity>10 : INSERT INTO myschema.debezium_signal (id, type, data) VALUES('ad-hoc-1', 'execute-snapshot', '{"data-collections": ["schema1.products"],"type":"incremental", "additional-condition":"color=blue AND quantity>10"}'); The following example, shows the JSON for an incremental snapshot event that is captured by a connector. Example: Incremental snapshot event message { "before":null, "after": { "pk":"1", "value":"New data" }, "source": { ... "snapshot":"incremental" 1 }, "op":"r", 2 "ts_ms":"1620393591654", "transaction":null } Item Field name Description 1 snapshot Specifies the type of snapshot operation to run. Currently, the only valid option is the default value, incremental . Specifying a type value in the SQL query that you submit to the signaling table is optional. If you do not specify a value, the connector runs an incremental snapshot. 2 op Specifies the event type. The value for snapshot events is r , signifying a READ operation. 8.2.4.2. Using the Kafka signaling channel to trigger an incremental snapshot You can send a message to the configured Kafka topic to request the connector to run an ad hoc incremental snapshot. The key of the Kafka message must match the value of the topic.prefix connector configuration option. The value of the message is a JSON object with type and data fields. The signal type is execute-snapshot , and the data field must have the following fields: Table 8.4. Execute snapshot data fields Field Default Value type incremental The type of the snapshot to be executed. Currently Debezium supports only the incremental type. See the section for more details. data-collections N/A An array of comma-separated regular expressions that match the fully-qualified names of tables to include in the snapshot. Specify the names by using the same format as is required for the signal.data.collection configuration option. additional-condition N/A An optional string that specifies a condition that the connector evaluates to designate a subset of columns to include in a snapshot. An example of the execute-snapshot Kafka message: Ad hoc incremental snapshots with additional-condition Debezium uses the additional-condition field to select a subset of a table's content. Typically, when Debezium runs a snapshot, it runs a SQL query such as: SELECT * FROM <tableName> ... . 
When the snapshot request includes an additional-condition , the additional-condition is appended to the SQL query, for example: SELECT * FROM <tableName> WHERE <additional-condition> ... . For example, given a products table with the columns id (primary key), color , and brand , if you want a snapshot to include only content for which color='blue' , when you request the snapshot, you could append an additional-condition statement to filter the content: You can use the additional-condition statement to pass conditions based on multiple columns. For example, using the same products table as in the example, if you want a snapshot to include only the content from the products table for which color='blue' , and brand='MyBrand' , you could send the following request: 8.2.4.3. Stopping an incremental snapshot You can also stop an incremental snapshot by sending a signal to the table on the source database. You submit a stop snapshot signal to the table by sending a SQL INSERT query. After Debezium detects the change in the signaling table, it reads the signal, and stops the incremental snapshot operation if it's in progress. The query that you submit specifies the snapshot operation of incremental , and, optionally, the tables to remove from the currently running snapshot. Prerequisites Signaling is enabled . A signaling data collection exists on the source database. The signaling data collection is specified in the signal.data.collection property. Using a source signaling channel to stop an incremental snapshot Send a SQL query to stop the ad hoc incremental snapshot to the signaling table: INSERT INTO <signalTable> (id, type, data) values ( '<id>' , 'stop-snapshot', '{"data-collections": [" <tableName> "," <tableName> "],"type":"incremental"}'); For example, INSERT INTO myschema.debezium_signal (id, type, data) 1 values ('ad-hoc-1', 2 'stop-snapshot', 3 '{"data-collections": ["schema1.table1", "schema2.table2"], 4 "type":"incremental"}'); 5 The values of the id , type , and data parameters in the signal command correspond to the fields of the signaling table . The following table describes the parameters in the example: Table 8.5. Descriptions of fields in a SQL command for sending a stop incremental snapshot signal to the signaling table Item Value Description 1 myschema.debezium_signal Specifies the fully-qualified name of the signaling table on the source database. 2 ad-hoc-1 The id parameter specifies an arbitrary string that is assigned as the id identifier for the signal request. Use this string to identify logging messages to entries in the signaling table. Debezium does not use this string. 3 stop-snapshot The type parameter specifies the operation that the signal is intended to trigger. 4 data-collections An optional component of the data field of a signal that specifies an array of table names or regular expressions to match table names to remove from the snapshot. The array lists regular expressions which match tables by their fully-qualified names, using the same format as you use to specify the name of the connector's signaling table in the signal.data.collection configuration property. If this component of the data field is omitted, the signal stops the entire incremental snapshot that is in progress. 5 incremental A required component of the data field of a signal that specifies the kind of snapshot operation that is to be stopped. Currently, the only valid option is incremental . If you do not specify a type value, the signal fails to stop the incremental snapshot.
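For instance — reusing the placeholder signaling table from the example above — omitting the data-collections array stops the entire incremental snapshot that is in progress:
INSERT INTO myschema.debezium_signal (id, type, data) VALUES ('ad-hoc-stop-all', 'stop-snapshot', '{"type":"incremental"}');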
8.2.4.4. Using the Kafka signaling channel to stop an incremental snapshot You can send a signal message to the configured Kafka signaling topic to stop an ad hoc incremental snapshot. The key of the Kafka message must match the value of the topic.prefix connector configuration option. The value of the message is a JSON object with type and data fields. The signal type is stop-snapshot , and the data field must have the following fields: Table 8.6. Execute snapshot data fields Field Default Value type incremental The type of the snapshot to be executed. Currently Debezium supports only the incremental type. See the section for more details. data-collections N/A An optional array of comma-separated regular expressions that match the fully-qualified names of the tables to include in the snapshot. Specify the names by using the same format as is required for the signal.data.collection configuration option. The following example shows a typical stop-snapshot Kafka message: 8.2.5. How Debezium PostgreSQL connectors stream change event records The PostgreSQL connector typically spends the vast majority of its time streaming changes from the PostgreSQL server to which it is connected. This mechanism relies on PostgreSQL's replication protocol . This protocol enables clients to receive changes from the server as they are committed in the server's transaction log at certain positions, which are referred to as Log Sequence Numbers (LSNs). Whenever the server commits a transaction, a separate server process invokes a callback function from the logical decoding plug-in . This function processes the changes from the transaction, converts them to a specific format (Protobuf or JSON in the case of Debezium plug-in) and writes them on an output stream, which can then be consumed by clients. The Debezium PostgreSQL connector acts as a PostgreSQL client. When the connector receives changes it transforms the events into Debezium create , update , or delete events that include the LSN of the event. The PostgreSQL connector forwards these change events in records to the Kafka Connect framework, which is running in the same process. The Kafka Connect process asynchronously writes the change event records in the same order in which they were generated to the appropriate Kafka topic. Periodically, Kafka Connect records the most recent offset in another Kafka topic. The offset indicates source-specific position information that Debezium includes with each event. For the PostgreSQL connector, the LSN recorded in each change event is the offset. When Kafka Connect gracefully shuts down, it stops the connectors, flushes all event records to Kafka, and records the last offset received from each connector. When Kafka Connect restarts, it reads the last recorded offset for each connector, and starts each connector at its last recorded offset. When the connector restarts, it sends a request to the PostgreSQL server to send the events starting just after that position. Note The PostgreSQL connector retrieves schema information as part of the events sent by the logical decoding plug-in. However, the connector does not retrieve information about which columns compose the primary key. The connector obtains this information from the JDBC metadata (side channel). If the primary key definition of a table changes (by adding, removing or renaming primary key columns), there is a tiny period of time when the primary key information from JDBC is not synchronized with the change event that the logical decoding plug-in generates. 
During this tiny period, a message could be created with an inconsistent key structure. To prevent this inconsistency, update primary key structures as follows: Put the database or an application into a read-only mode. Let Debezium process all remaining events. Stop Debezium. Update the primary key definition in the relevant table. Put the database or the application into read/write mode. Restart Debezium. PostgreSQL 10+ logical decoding support ( pgoutput ) As of PostgreSQL 10+, there is a logical replication stream mode, called pgoutput , that is natively supported by PostgreSQL. This means that a Debezium PostgreSQL connector can consume that replication stream without the need for additional plug-ins. This is particularly valuable for environments where installation of plug-ins is not supported or not allowed. For more information, see Setting up PostgreSQL . 8.2.6. Default names of Kafka topics that receive Debezium PostgreSQL change event records By default, the PostgreSQL connector writes change events for all INSERT , UPDATE , and DELETE operations that occur in a table to a single Apache Kafka topic that is specific to that table. The connector uses the following convention to name change event topics: topicPrefix.schemaName.tableName The following list provides definitions for the components of the default name: topicPrefix The topic prefix as specified by the topic.prefix configuration property. schemaName The name of the database schema in which the change event occurred. tableName The name of the database table in which the change event occurred. For example, suppose that fulfillment is the logical server name in the configuration for a connector that is capturing changes in a PostgreSQL installation that has a postgres database and an inventory schema that contains four tables: products , products_on_hand , customers , and orders . The connector would stream records to these four Kafka topics: fulfillment.inventory.products fulfillment.inventory.products_on_hand fulfillment.inventory.customers fulfillment.inventory.orders Now suppose that the tables are not part of a specific schema but were created in the default public PostgreSQL schema. The names of the Kafka topics would be: fulfillment.public.products fulfillment.public.products_on_hand fulfillment.public.customers fulfillment.public.orders The connector applies similar naming conventions to label its transaction metadata topics . If the default topic names do not meet your requirements, you can configure custom topic names. To configure custom topic names, you specify regular expressions in the logical topic routing SMT. For more information about using the logical topic routing SMT to customize topic naming, see Topic routing . 8.2.7. Debezium PostgreSQL connector-generated events that represent transaction boundaries Debezium can generate events that represent transaction boundaries and that enrich data change event messages. Limits on when Debezium receives transaction metadata Debezium registers and receives metadata only for transactions that occur after you deploy the connector. Metadata for transactions that occur before you deploy the connector is not available. For every transaction BEGIN and END , Debezium generates an event that contains the following fields: status BEGIN or END . id String representation of the unique transaction identifier composed of the Postgres transaction ID itself and the LSN of the given operation, separated by a colon, i.e. the format is txID:LSN .
ts_ms The time of a transaction boundary event ( BEGIN or END event) at the data source. If the data source does not provide Debezium with the event time, then the field instead represents the time at which Debezium processes the event. event_count (for END events) Total number of events emitted by the transaction. data_collections (for END events) An array of pairs of data_collection and event_count elements that indicates the number of events that the connector emits for changes that originate from a data collection. Example { "status": "BEGIN", "id": "571:53195829", "ts_ms": 1486500577125, "event_count": null, "data_collections": null } { "status": "END", "id": "571:53195832", "ts_ms": 1486500577691, "event_count": 2, "data_collections": [ { "data_collection": "s1.a", "event_count": 1 }, { "data_collection": "s2.a", "event_count": 1 } ] } Unless overridden via the topic.transaction option, transaction events are written to the topic named <topic.prefix> .transaction . Change data event enrichment When transaction metadata is enabled, the data message Envelope is enriched with a new transaction field. This field provides information about every event in the form of a composite of fields: id String representation of the unique transaction identifier. total_order The absolute position of the event among all events generated by the transaction. data_collection_order The per-data collection position of the event among all events that were emitted by the transaction. The following is an example of a message: { "before": null, "after": { "pk": "2", "aa": "1" }, "source": { ... }, "op": "c", "ts_ms": "1580390884335", "transaction": { "id": "571:53195832", "total_order": "1", "data_collection_order": "1" } } 8.3. Descriptions of Debezium PostgreSQL connector data change events The Debezium PostgreSQL connector generates a data change event for each row-level INSERT , UPDATE , and DELETE operation. Each event contains a key and a value. The structure of the key and the value depends on the table that was changed. Debezium and Kafka Connect are designed around continuous streams of event messages . However, the structure of these events may change over time, which can be difficult for consumers to handle. To address this, each event contains the schema for its content or, if you are using a schema registry, a schema ID that a consumer can use to obtain the schema from the registry. This makes each event self-contained. The following skeleton JSON shows the basic four parts of a change event. However, how you configure the Kafka Connect converter that you choose to use in your application determines the representation of these four parts in change events. A schema field is in a change event only when you configure the converter to produce it. Likewise, the event key and event payload are in a change event only if you configure a converter to produce it. If you use the JSON converter and you configure it to produce all four basic change event parts, change events have this structure: { "schema": { 1 ... }, "payload": { 2 ... }, "schema": { 3 ... }, "payload": { 4 ... }, } Table 8.7. Overview of change event basic content Item Field name Description 1 schema The first schema field is part of the event key. It specifies a Kafka Connect schema that describes what is in the event key's payload portion. In other words, the first schema field describes the structure of the primary key, or the unique key if the table does not have a primary key, for the table that was changed.
It is possible to override the table's primary key by setting the message.key.columns connector configuration property . In this case, the first schema field describes the structure of the key identified by that property. 2 payload The first payload field is part of the event key. It has the structure described by the schema field and it contains the key for the row that was changed. 3 schema The second schema field is part of the event value. It specifies the Kafka Connect schema that describes what is in the event value's payload portion. In other words, the second schema describes the structure of the row that was changed. Typically, this schema contains nested schemas. 4 payload The second payload field is part of the event value. It has the structure described by the schema field and it contains the actual data for the row that was changed. The default behavior is that the connector streams change event records to topics with names that are the same as the event's originating table . Note Starting with Kafka 0.10, Kafka can optionally record the event key and value with the timestamp at which the message was created (recorded by the producer) or written to the log by Kafka. Warning The PostgreSQL connector ensures that all Kafka Connect schema names adhere to the Avro schema name format . This means that the logical server name must start with a Latin letter or an underscore, that is, a-z, A-Z, or _. Each remaining character in the logical server name and each character in the schema and table names must be a Latin letter, a digit, or an underscore, that is, a-z, A-Z, 0-9, or _. If there is an invalid character it is replaced with an underscore character. This can lead to unexpected conflicts if the logical server name, a schema name, or a table name contains invalid characters, and the only characters that distinguish names from one another are invalid and thus replaced with underscores. Details are in the following topics: Section 8.3.1, "About keys in Debezium PostgreSQL change events" Section 8.3.2, "About values in Debezium PostgreSQL change events" 8.3.1. About keys in Debezium PostgreSQL change events For a given table, the change event's key has a structure that contains a field for each column in the primary key of the table at the time the event was created. Alternatively, if the table has REPLICA IDENTITY set to FULL or USING INDEX , there is a field for each unique key constraint. Consider a customers table defined in the public database schema and the example of a change event key for that table. Example table CREATE TABLE customers ( id SERIAL, first_name VARCHAR(255) NOT NULL, last_name VARCHAR(255) NOT NULL, email VARCHAR(255) NOT NULL, PRIMARY KEY(id) ); Example change event key If the topic.prefix connector configuration property has the value PostgreSQL_server , every change event for the customers table while it has this definition has the same key structure, which in JSON looks like this: { "schema": { 1 "type": "struct", "name": "PostgreSQL_server.public.customers.Key", 2 "optional": false, 3 "fields": [ 4 { "name": "id", "index": "0", "schema": { "type": "INT32", "optional": "false" } } ] }, "payload": { 5 "id": "1" }, } Table 8.8. Description of change event key Item Field name Description 1 schema The schema portion of the key specifies a Kafka Connect schema that describes what is in the key's payload portion. 2 PostgreSQL_server.public.customers.Key Name of the schema that defines the structure of the key's payload.
This schema describes the structure of the primary key for the table that was changed. Key schema names have the format connector-name . schema-name . table-name . Key . In this example: PostgreSQL_server is the name of the connector that generated this event. public is the schema that contains the table that was changed. customers is the table that was updated. 3 optional Indicates whether the event key must contain a value in its payload field. In this example, a value in the key's payload is required. A value in the key's payload field is optional when a table does not have a primary key. 4 fields Specifies each field that is expected in the payload , including each field's name, index, and schema. 5 payload Contains the key for the row for which this change event was generated. In this example, the key contains a single id field whose value is 1 . Note Although the column.exclude.list and column.include.list connector configuration properties allow you to capture only a subset of table columns, all columns in a primary or unique key are always included in the event's key. Warning If the table does not have a primary or unique key, then the change event's key is null. The rows in a table without a primary or unique key constraint cannot be uniquely identified. 8.3.2. About values in Debezium PostgreSQL change events The value in a change event is a bit more complicated than the key. Like the key, the value has a schema section and a payload section. The schema section contains the schema that describes the Envelope structure of the payload section, including its nested fields. Change events for operations that create, update or delete data all have a value payload with an envelope structure. Consider the same sample table that was used to show an example of a change event key: CREATE TABLE customers ( id SERIAL, first_name VARCHAR(255) NOT NULL, last_name VARCHAR(255) NOT NULL, email VARCHAR(255) NOT NULL, PRIMARY KEY(id) ); The value portion of a change event for a change to this table varies according to the REPLICA IDENTITY setting and the operation that the event is for. Details follow in these sections: Replica identity create events update events Primary key updates delete events Tombstone events Replica identity REPLICA IDENTITY is a PostgreSQL-specific table-level setting that determines the amount of information that is available to the logical decoding plug-in for UPDATE and DELETE events. More specifically, the setting of REPLICA IDENTITY controls what (if any) information is available for the values of the table columns involved, whenever an UPDATE or DELETE event occurs. There are four possible values for REPLICA IDENTITY : DEFAULT - The default behavior is that UPDATE and DELETE events contain the values for the primary key columns of a table if that table has a primary key. For an UPDATE event, only the primary key columns with changed values are present. If a table does not have a primary key, the connector does not emit UPDATE or DELETE events for that table. For a table without a primary key, the connector emits only create events. Typically, a table without a primary key is used for appending messages to the end of the table, which means that UPDATE and DELETE events are not useful. NOTHING - Emitted events for UPDATE and DELETE operations do not contain any information about the value of any table column. FULL - Emitted events for UPDATE and DELETE operations contain the values of all columns in the table.
INDEX index-name - Emitted events for UPDATE and DELETE operations contain the values of the columns contained in the specified index. UPDATE events also contain the indexed columns with the updated values. create events The following example shows the value portion of a change event that the connector generates for an operation that creates data in the customers table: { "schema": { 1 "type": "struct", "fields": [ { "type": "struct", "fields": [ { "type": "int32", "optional": false, "field": "id" }, { "type": "string", "optional": false, "field": "first_name" }, { "type": "string", "optional": false, "field": "last_name" }, { "type": "string", "optional": false, "field": "email" } ], "optional": true, "name": "PostgreSQL_server.inventory.customers.Value", 2 "field": "before" }, { "type": "struct", "fields": [ { "type": "int32", "optional": false, "field": "id" }, { "type": "string", "optional": false, "field": "first_name" }, { "type": "string", "optional": false, "field": "last_name" }, { "type": "string", "optional": false, "field": "email" } ], "optional": true, "name": "PostgreSQL_server.inventory.customers.Value", "field": "after" }, { "type": "struct", "fields": [ { "type": "string", "optional": false, "field": "version" }, { "type": "string", "optional": false, "field": "connector" }, { "type": "string", "optional": false, "field": "name" }, { "type": "int64", "optional": false, "field": "ts_ms" }, { "type": "boolean", "optional": true, "default": false, "field": "snapshot" }, { "type": "string", "optional": false, "field": "db" }, { "type": "string", "optional": false, "field": "schema" }, { "type": "string", "optional": false, "field": "table" }, { "type": "int64", "optional": true, "field": "txId" }, { "type": "int64", "optional": true, "field": "lsn" }, { "type": "int64", "optional": true, "field": "xmin" } ], "optional": false, "name": "io.debezium.connector.postgresql.Source", 3 "field": "source" }, { "type": "string", "optional": false, "field": "op" }, { "type": "int64", "optional": true, "field": "ts_ms" } ], "optional": false, "name": "PostgreSQL_server.inventory.customers.Envelope" 4 }, "payload": { 5 "before": null, 6 "after": { 7 "id": 1, "first_name": "Anne", "last_name": "Kretchmar", "email": "[email protected]" }, "source": { 8 "version": "2.3.4.Final", "connector": "postgresql", "name": "PostgreSQL_server", "ts_ms": 1559033904863, "snapshot": true, "db": "postgres", "sequence": "[\"24023119\",\"24023128\"]", "schema": "public", "table": "customers", "txId": 555, "lsn": 24023128, "xmin": null }, "op": "c", 9 "ts_ms": 1559033904863 10 } } Table 8.9. Descriptions of create event value fields Item Field name Description 1 schema The value's schema, which describes the structure of the value's payload. A change event's value schema is the same in every change event that the connector generates for a particular table. 2 name In the schema section, each name field specifies the schema for a field in the value's payload. PostgreSQL_server.inventory.customers.Value is the schema for the payload's before and after fields. This schema is specific to the customers table. Names of schemas for before and after fields are of the form logicalName . tableName .Value , which ensures that the schema name is unique in the database. This means that when using the Avro converter , the resulting Avro schema for each table in each logical source has its own evolution and history. 3 name io.debezium.connector.postgresql.Source is the schema for the payload's source field. 
This schema is specific to the PostgreSQL connector. The connector uses it for all events that it generates. 4 name PostgreSQL_server.inventory.customers.Envelope is the schema for the overall structure of the payload, where PostgreSQL_server is the connector name, inventory is the database, and customers is the table. 5 payload The value's actual data. This is the information that the change event is providing. It may appear that the JSON representations of the events are much larger than the rows they describe. This is because the JSON representation must include the schema and the payload portions of the message. However, by using the Avro converter , you can significantly decrease the size of the messages that the connector streams to Kafka topics. 6 before An optional field that specifies the state of the row before the event occurred. When the op field is c for create, as it is in this example, the before field is null since this change event is for new content. Note Whether or not this field is available is dependent on the REPLICA IDENTITY setting for each table. 7 after An optional field that specifies the state of the row after the event occurred. In this example, the after field contains the values of the new row's id , first_name , last_name , and email columns. 8 source Mandatory field that describes the source metadata for the event. This field contains information that you can use to compare this event with other events, with regard to the origin of the events, the order in which the events occurred, and whether events were part of the same transaction. The source metadata includes: Debezium version Connector type and name Database and table that contains the new row Stringified JSON array of additional offset information. The first value is always the last committed LSN, the second value is always the current LSN. Either value may be null . Schema name If the event was part of a snapshot ID of the transaction in which the operation was performed Offset of the operation in the database log Timestamp for when the change was made in the database 9 op Mandatory string that describes the type of operation that caused the connector to generate the event. In this example, c indicates that the operation created a row. Valid values are: c = create u = update d = delete r = read (applies to only snapshots) t = truncate m = message 10 ts_ms Optional field that displays the time at which the connector processed the event. The time is based on the system clock in the JVM running the Kafka Connect task. In the source object, ts_ms indicates the time that the change was made in the database. By comparing the value for payload.source.ts_ms with the value for payload.ts_ms , you can determine the lag between the source database update and Debezium. update events The value of a change event for an update in the sample customers table has the same schema as a create event for that table. Likewise, the event value's payload has the same structure. However, the event value payload contains different values in an update event. Here is an example of a change event value in an event that the connector generates for an update in the customers table: { "schema": { ... 
}, "payload": { "before": { 1 "id": 1 }, "after": { 2 "id": 1, "first_name": "Anne Marie", "last_name": "Kretchmar", "email": "[email protected]" }, "source": { 3 "version": "2.3.4.Final", "connector": "postgresql", "name": "PostgreSQL_server", "ts_ms": 1559033904863, "snapshot": false, "db": "postgres", "schema": "public", "table": "customers", "txId": 556, "lsn": 24023128, "xmin": null }, "op": "u", 4 "ts_ms": 1465584025523 5 } } Table 8.10. Descriptions of update event value fields Item Field name Description 1 before An optional field that contains values that were in the row before the database commit. In this example, only the primary key column, id , is present because the table's REPLICA IDENTITY setting is, by default, DEFAULT . + For an update event to contain the values of all columns in the row, you would have to change the customers table by running ALTER TABLE customers REPLICA IDENTITY FULL . 2 after An optional field that specifies the state of the row after the event occurred. In this example, the first_name value is now Anne Marie . 3 source Mandatory field that describes the source metadata for the event. The source field structure has the same fields as in a create event, but some values are different. The source metadata includes: Debezium version Connector type and name Database and table that contains the new row Schema name If the event was part of a snapshot (always false for update events) ID of the transaction in which the operation was performed Offset of the operation in the database log Timestamp for when the change was made in the database 4 op Mandatory string that describes the type of operation. In an update event value, the op field value is u , signifying that this row changed because of an update. 5 ts_ms Optional field that displays the time at which the connector processed the event. The time is based on the system clock in the JVM running the Kafka Connect task. In the source object, ts_ms indicates the time that the change was made in the database. By comparing the value for payload.source.ts_ms with the value for payload.ts_ms , you can determine the lag between the source database update and Debezium. Note Updating the columns for a row's primary/unique key changes the value of the row's key. When a key changes, Debezium outputs three events: a DELETE event and a tombstone event with the old key for the row, followed by an event with the new key for the row. Details are in the section. Primary key updates An UPDATE operation that changes a row's primary key field(s) is known as a primary key change. For a primary key change, in place of sending an UPDATE event record, the connector sends a DELETE event record for the old key and a CREATE event record for the new (updated) key. These events have the usual structure and content, and in addition, each one has a message header related to the primary key change: The DELETE event record has __debezium.newkey as a message header. The value of this header is the new primary key for the updated row. The CREATE event record has __debezium.oldkey as a message header. The value of this header is the (old) primary key that the updated row had. delete events The value in a delete change event has the same schema portion as create and update events for the same table. The payload portion in a delete event for the sample customers table looks like this: { "schema": { ... 
}, "payload": { "before": { 1 "id": 1 }, "after": null, 2 "source": { 3 "version": "2.3.4.Final", "connector": "postgresql", "name": "PostgreSQL_server", "ts_ms": 1559033904863, "snapshot": false, "db": "postgres", "schema": "public", "table": "customers", "txId": 556, "lsn": 46523128, "xmin": null }, "op": "d", 4 "ts_ms": 1465581902461 5 } } Table 8.11. Descriptions of delete event value fields Item Field name Description 1 before Optional field that specifies the state of the row before the event occurred. In a delete event value, the before field contains the values that were in the row before it was deleted with the database commit. In this example, the before field contains only the primary key column because the table's REPLICA IDENTITY setting is DEFAULT . 2 after Optional field that specifies the state of the row after the event occurred. In a delete event value, the after field is null , signifying that the row no longer exists. 3 source Mandatory field that describes the source metadata for the event. In a delete event value, the source field structure is the same as for create and update events for the same table. Many source field values are also the same. In a delete event value, the ts_ms and lsn field values, as well as other values, might have changed. But the source field in a delete event value provides the same metadata: Debezium version Connector type and name Database and table that contained the deleted row Schema name If the event was part of a snapshot (always false for delete events) ID of the transaction in which the operation was performed Offset of the operation in the database log Timestamp for when the change was made in the database 4 op Mandatory string that describes the type of operation. The op field value is d , signifying that this row was deleted. 5 ts_ms Optional field that displays the time at which the connector processed the event. The time is based on the system clock in the JVM running the Kafka Connect task. In the source object, ts_ms indicates the time that the change was made in the database. By comparing the value for payload.source.ts_ms with the value for payload.ts_ms , you can determine the lag between the source database update and Debezium. A delete change event record provides a consumer with the information it needs to process the removal of this row. Warning For a consumer to be able to process a delete event generated for a table that does not have a primary key, set the table's REPLICA IDENTITY to FULL . When a table does not have a primary key and the table's REPLICA IDENTITY is set to DEFAULT or NOTHING , a delete event has no before field. PostgreSQL connector events are designed to work with Kafka log compaction . Log compaction enables removal of some older messages as long as at least the most recent message for every key is kept. This lets Kafka reclaim storage space while ensuring that the topic contains a complete data set and can be used for reloading key-based state. Tombstone events When a row is deleted, the delete event value still works with log compaction, because Kafka can remove all earlier messages that have that same key. However, for Kafka to remove all messages that have that same key, the message value must be null . To make this possible, the PostgreSQL connector follows a delete event with a special tombstone event that has the same key but a null value. truncate events A truncate change event signals that a table has been truncated. 
The message key is null in this case, the message value looks like this: { "schema": { ... }, "payload": { "source": { 1 "version": "2.3.4.Final", "connector": "postgresql", "name": "PostgreSQL_server", "ts_ms": 1559033904863, "snapshot": false, "db": "postgres", "schema": "public", "table": "customers", "txId": 556, "lsn": 46523128, "xmin": null }, "op": "t", 2 "ts_ms": 1559033904961 3 } } Table 8.12. Descriptions of truncate event value fields Item Field name Description 1 source Mandatory field that describes the source metadata for the event. In a truncate event value, the source field structure is the same as for create , update , and delete events for the same table, provides this metadata: Debezium version Connector type and name Database and table that contains the new row Schema name If the event was part of a snapshot (always false for delete events) ID of the transaction in which the operation was performed Offset of the operation in the database log Timestamp for when the change was made in the database 2 op Mandatory string that describes the type of operation. The op field value is t , signifying that this table was truncated. 3 ts_ms Optional field that displays the time at which the connector processed the event. The time is based on the system clock in the JVM running the Kafka Connect task. In the source object, ts_ms indicates the time that the change was made in the database. By comparing the value for payload.source.ts_ms with the value for payload.ts_ms , you can determine the lag between the source database update and Debezium. In case a single TRUNCATE statement applies to multiple tables, one truncate change event record for each truncated table will be emitted. Note that since truncate events represent a change made to an entire table and don't have a message key, unless you're working with topics with a single partition, there are no ordering guarantees for the change events pertaining to a table ( create , update , etc.) and truncate events for that table. For instance a consumer may receive an update event only after a truncate event for that table, when those events are read from different partitions. message events This event type is only supported through the pgoutput plugin on Postgres 14+ ( Postgres Documentation ) A message event signals that a generic logical decoding message has been inserted directly into the WAL typically with the pg_logical_emit_message function. The message key is a Struct with a single field named prefix in this case, carrying the prefix specified when inserting the message. The message value looks like this for transactional messages: { "schema": { ... }, "payload": { "source": { 1 "version": "2.3.4.Final", "connector": "postgresql", "name": "PostgreSQL_server", "ts_ms": 1559033904863, "snapshot": false, "db": "postgres", "schema": "", "table": "", "txId": 556, "lsn": 46523128, "xmin": null }, "op": "m", 2 "ts_ms": 1559033904961, 3 "message": { 4 "prefix": "foo", "content": "Ymfy" } } } Unlike other event types, non-transactional messages will not have any associated BEGIN or END transaction events. The message value looks like this for non-transactional messages: { "schema": { ... }, "payload": { "source": { 1 "version": "2.3.4.Final", "connector": "postgresql", "name": "PostgreSQL_server", "ts_ms": 1559033904863, "snapshot": false, "db": "postgres", "schema": "", "table": "", "lsn": 46523128, "xmin": null }, "op": "m", 2 "ts_ms": 1559033904961 3 "message": { 4 "prefix": "foo", "content": "Ymfy" } } Table 8.13. 
Descriptions of message event value fields Item Field name Description 1 source Mandatory field that describes the source metadata for the event. In a message event value, the source field structure will not have table or schema information for any message events and will only have txId if the message event is transactional. Debezium version Connector type and name Database name Schema name (always "" for message events) Table name (always "" for message events) If the event was part of a snapshot (always false for message events) ID of the transaction in which the operation was performed ( null for non-transactional message events) Offset of the operation in the database log Transactional messages: Timestamp for when the message was inserted into the WAL Non-Transactional messages; Timestamp for when the connector encounters the message 2 op Mandatory string that describes the type of operation. The op field value is m , signifying that this is a message event. 3 ts_ms Optional field that displays the time at which the connector processed the event. The time is based on the system clock in the JVM running the Kafka Connect task. For transactional message events, the ts_ms attribute of the source object indicates the time that the change was made in the database for transactional message events. By comparing the value for payload.source.ts_ms with the value for payload.ts_ms , you can determine the lag between the source database update and Debezium. For non-transactional message events, the source object's ts_ms indicates time at which the connector encounters the message event, while the payload.ts_ms indicates the time at which the connector processed the event. This difference is due to the fact that the commit timestamp is not present in Postgres's generic logical message format and non-transactional logical messages are not preceded by a BEGIN event (which has timestamp information). 4 message Field that contains the message metadata Prefix (text) Content (byte array that is encoded based on the binary handling mode setting) 8.4. How Debezium PostgreSQL connectors map data types The PostgreSQL connector represents changes to rows with events that are structured like the table in which the row exists. The event contains a field for each column value. How that value is represented in the event depends on the PostgreSQL data type of the column. The following sections describe how the connector maps PostgreSQL data types to a literal type and a semantic type in event fields. literal type describes how the value is literally represented using Kafka Connect schema types: INT8 , INT16 , INT32 , INT64 , FLOAT32 , FLOAT64 , BOOLEAN , STRING , BYTES , ARRAY , MAP , and STRUCT . semantic type describes how the Kafka Connect schema captures the meaning of the field using the name of the Kafka Connect schema for the field. If the default data type conversions do not meet your needs, you can create a custom converter for the connector. Details are in the following sections: Basic types Temporal types TIMESTAMP type Decimal types HSTORE type Domain types Network address types PostGIS types Toasted values Basic types The following table describes how the connector maps basic types. Table 8.14. Mappings for PostgreSQL basic data types PostgreSQL data type Literal type (schema type) Semantic type (schema name) and Notes BOOLEAN BOOLEAN n/a BIT(1) BOOLEAN n/a BIT( > 1) BYTES io.debezium.data.Bits The length schema parameter contains an integer that represents the number of bits. 
The resulting byte[] contains the bits in little-endian form and is sized to contain the specified number of bits. For example, numBytes = n/8 + (n % 8 == 0 ? 0 : 1) where n is the number of bits. BIT VARYING[(M)] BYTES io.debezium.data.Bits The length schema parameter contains an integer that represents the number of bits (2^31 - 1 in case no length is given for the column). The resulting byte[] contains the bits in little-endian form and is sized based on the content. The specified size (M) is stored in the length parameter of the io.debezium.data.Bits type. SMALLINT , SMALLSERIAL INT16 n/a INTEGER , SERIAL INT32 n/a BIGINT , BIGSERIAL , OID INT64 n/a REAL FLOAT32 n/a DOUBLE PRECISION FLOAT64 n/a CHAR[(M)] STRING n/a VARCHAR[(M)] STRING n/a CHARACTER[(M)] STRING n/a CHARACTER VARYING[(M)] STRING n/a TIMESTAMPTZ , TIMESTAMP WITH TIME ZONE STRING io.debezium.time.ZonedTimestamp A string representation of a timestamp with timezone information, where the timezone is GMT. TIMETZ , TIME WITH TIME ZONE STRING io.debezium.time.ZonedTime A string representation of a time value with timezone information, where the timezone is GMT. INTERVAL [P] INT64 io.debezium.time.MicroDuration (default) The approximate number of microseconds for a time interval using the 365.25 / 12.0 formula for days per month average. INTERVAL [P] STRING io.debezium.time.Interval (when interval.handling.mode is set to string ) The string representation of the interval value that follows the pattern P<years>Y<months>M<days>DT<hours>H<minutes>M<seconds>S , for example, P1Y2M3DT4H5M6.78S . BYTEA BYTES or STRING n/a Either the raw bytes (the default), a base64-encoded string, or a base64-url-safe-encoded String, or a hex-encoded string, based on the connector's binary handling mode setting. Debezium only supports Postgres bytea_output configuration of value hex . For more information about PostgreSQL binary data types, see the PostgreSQL documentation . JSON , JSONB STRING io.debezium.data.Json Contains the string representation of a JSON document, array, or scalar. XML STRING io.debezium.data.Xml Contains the string representation of an XML document. UUID STRING io.debezium.data.Uuid Contains the string representation of a PostgreSQL UUID value. POINT STRUCT io.debezium.data.geometry.Point Contains a structure with two FLOAT64 fields, (x,y) . Each field represents the coordinates of a geometric point. LTREE STRING io.debezium.data.Ltree Contains the string representation of a PostgreSQL LTREE value. CITEXT STRING n/a INET STRING n/a INT4RANGE STRING n/a Range of integer. INT8RANGE STRING n/a Range of bigint . NUMRANGE STRING n/a Range of numeric . TSRANGE STRING n/a Contains the string representation of a timestamp range without a time zone. TSTZRANGE STRING n/a Contains the string representation of a timestamp range with the local system time zone. DATERANGE STRING n/a Contains the string representation of a date range. It always has an exclusive upper-bound. ENUM STRING io.debezium.data.Enum Contains the string representation of the PostgreSQL ENUM value. The set of allowed values is maintained in the allowed schema parameter. Temporal types Other than PostgreSQL's TIMESTAMPTZ and TIMETZ data types, which contain time zone information, how temporal types are mapped depends on the value of the time.precision.mode connector configuration property. 
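To show where this property fits, the following configuration fragment is a minimal, hypothetical sketch rather than an example taken from this chapter: the connector class and the time.precision.mode property are the items of interest, and the connection values (hostname, port, user, password, database name, topic prefix) are placeholders.
{
  "name": "inventory-connector",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "database.hostname": "postgres.example.com",
    "database.port": "5432",
    "database.user": "debezium",
    "database.password": "dbz-secret",
    "database.dbname": "postgres",
    "topic.prefix": "PostgreSQL_server",
    "time.precision.mode": "adaptive_time_microseconds"
  }
}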
The following sections describe these mappings: time.precision.mode=adaptive time.precision.mode=adaptive_time_microseconds time.precision.mode=connect time.precision.mode=adaptive When the time.precision.mode property is set to adaptive , the default, the connector determines the literal type and semantic type based on the column's data type definition. This ensures that events exactly represent the values in the database. Table 8.15. Mappings when time.precision.mode is adaptive PostgreSQL data type Literal type (schema type) Semantic type (schema name) and Notes DATE INT32 io.debezium.time.Date Represents the number of days since the epoch. TIME(1) , TIME(2) , TIME(3) INT32 io.debezium.time.Time Represents the number of milliseconds past midnight, and does not include timezone information. TIME(4) , TIME(5) , TIME(6) INT64 io.debezium.time.MicroTime Represents the number of microseconds past midnight, and does not include timezone information. TIMESTAMP(1) , TIMESTAMP(2) , TIMESTAMP(3) INT64 io.debezium.time.Timestamp Represents the number of milliseconds since the epoch, and does not include timezone information. TIMESTAMP(4) , TIMESTAMP(5) , TIMESTAMP(6) , TIMESTAMP INT64 io.debezium.time.MicroTimestamp Represents the number of microseconds since the epoch, and does not include timezone information. time.precision.mode=adaptive_time_microseconds When the time.precision.mode configuration property is set to adaptive_time_microseconds , the connector determines the literal type and semantic type for temporal types based on the column's data type definition. This ensures that events exactly represent the values in the database, except all TIME fields are captured as microseconds. Table 8.16. Mappings when time.precision.mode is adaptive_time_microseconds PostgreSQL data type Literal type (schema type) Semantic type (schema name) and Notes DATE INT32 io.debezium.time.Date Represents the number of days since the epoch. TIME([P]) INT64 io.debezium.time.MicroTime Represents the time value in microseconds and does not include timezone information. PostgreSQL allows precision P to be in the range 0-6 to store up to microsecond precision. TIMESTAMP(1) , TIMESTAMP(2) , TIMESTAMP(3) INT64 io.debezium.time.Timestamp Represents the number of milliseconds past the epoch, and does not include timezone information. TIMESTAMP(4) , TIMESTAMP(5) , TIMESTAMP(6) , TIMESTAMP INT64 io.debezium.time.MicroTimestamp Represents the number of microseconds past the epoch, and does not include timezone information. time.precision.mode=connect When the time.precision.mode configuration property is set to connect , the connector uses Kafka Connect logical types. This may be useful when consumers can handle only the built-in Kafka Connect logical types and are unable to handle variable-precision time values. However, since PostgreSQL supports microsecond precision, the events generated by a connector with the connect time precision mode results in a loss of precision when the database column has a fractional second precision value that is greater than 3. Table 8.17. Mappings when time.precision.mode is connect PostgreSQL data type Literal type (schema type) Semantic type (schema name) and Notes DATE INT32 org.apache.kafka.connect.data.Date Represents the number of days since the epoch. TIME([P]) INT64 org.apache.kafka.connect.data.Time Represents the number of milliseconds since midnight, and does not include timezone information. 
PostgreSQL allows P to be in the range 0-6 to store up to microsecond precision, though this mode results in a loss of precision when P is greater than 3. TIMESTAMP([P]) INT64 org.apache.kafka.connect.data.Timestamp Represents the number of milliseconds since the epoch, and does not include timezone information. PostgreSQL allows P to be in the range 0-6 to store up to microsecond precision, though this mode results in a loss of precision when P is greater than 3. TIMESTAMP type The TIMESTAMP type represents a timestamp without time zone information. Such columns are converted into an equivalent Kafka Connect value based on UTC. For example, the TIMESTAMP value "2018-06-20 15:13:16.945104" is represented by an io.debezium.time.MicroTimestamp with the value "1529507596945104" when time.precision.mode is not set to connect . The timezone of the JVM running Kafka Connect and Debezium does not affect this conversion. PostgreSQL supports using +/-infinite values in TIMESTAMP columns. These special values are converted to timestamps with value 9223372036825200000 in case of positive infinity or -9223372036832400000 in case of negative infinity. This behavior mimics the standard behavior of the PostgreSQL JDBC driver. For reference, see the org.postgresql.PGStatement interface. Decimal types The setting of the PostgreSQL connector configuration property decimal.handling.mode determines how the connector maps decimal types. When the decimal.handling.mode property is set to precise , the connector uses the Kafka Connect org.apache.kafka.connect.data.Decimal logical type for all DECIMAL , NUMERIC and MONEY columns. This is the default mode. Table 8.18. Mappings when decimal.handling.mode is precise PostgreSQL data type Literal type (schema type) Semantic type (schema name) and Notes NUMERIC[(M[,D])] BYTES org.apache.kafka.connect.data.Decimal The scale schema parameter contains an integer representing how many digits the decimal point was shifted. DECIMAL[(M[,D])] BYTES org.apache.kafka.connect.data.Decimal The scale schema parameter contains an integer representing how many digits the decimal point was shifted. MONEY[(M[,D])] BYTES org.apache.kafka.connect.data.Decimal The scale schema parameter contains an integer representing how many digits the decimal point was shifted. The scale schema parameter is determined by the money.fraction.digits connector configuration property. There is an exception to this rule. When the NUMERIC or DECIMAL types are used without scale constraints, the values coming from the database have a different (variable) scale for each value. In this case, the connector uses io.debezium.data.VariableScaleDecimal , which contains both the value and the scale of the transferred value. Table 8.19. Mappings of DECIMAL and NUMERIC types when there are no scale constraints PostgreSQL data type Literal type (schema type) Semantic type (schema name) and Notes NUMERIC STRUCT io.debezium.data.VariableScaleDecimal Contains a structure with two fields: scale of type INT32 that contains the scale of the transferred value and value of type BYTES containing the original value in an unscaled form. DECIMAL STRUCT io.debezium.data.VariableScaleDecimal Contains a structure with two fields: scale of type INT32 that contains the scale of the transferred value and value of type BYTES containing the original value in an unscaled form. 
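As a concrete illustration of the precise mode encoding, consider a hypothetical column price NUMERIC(8,2) that holds the value 123.45; neither the column nor the values below come from this chapter's sample tables. With the JSON converter, the field schema and payload entry would be expected to look roughly like the following sketch: the unscaled value 12345 is transmitted as base64-encoded big-endian bytes ( MDk= ), and the scale 2 is carried as a schema parameter.
{
  "type": "bytes",
  "optional": true,
  "name": "org.apache.kafka.connect.data.Decimal",
  "version": 1,
  "parameters": {
    "scale": "2"
  },
  "field": "price"
}
The corresponding payload entry is "price": "MDk=" , and a consumer recovers 123.45 by decoding the bytes into the unscaled integer and applying the scale.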
When the decimal.handling.mode property is set to double , the connector represents all DECIMAL , NUMERIC and MONEY values as Java double values and encodes them as shown in the following table. Table 8.20. Mappings when decimal.handling.mode is double PostgreSQL data type Literal type (schema type) Semantic type (schema name) NUMERIC[(M[,D])] FLOAT64 DECIMAL[(M[,D])] FLOAT64 MONEY[(M[,D])] FLOAT64 The last possible setting for the decimal.handling.mode configuration property is string . In this case, the connector represents DECIMAL , NUMERIC and MONEY values as their formatted string representation, and encodes them as shown in the following table. Table 8.21. Mappings when decimal.handling.mode is string PostgreSQL data type Literal type (schema type) Semantic type (schema name) NUMERIC[(M[,D])] STRING DECIMAL[(M[,D])] STRING MONEY[(M[,D])] STRING PostgreSQL supports NaN (not a number) as a special value to be stored in DECIMAL / NUMERIC values when the setting of decimal.handling.mode is string or double . In this case, the connector encodes NaN as either Double.NaN or the string constant NAN . HSTORE type The setting of the PostgreSQL connector configuration property hstore.handling.mode determines how the connector maps HSTORE values. When the hstore.handling.mode property is set to json (the default), the connector represents HSTORE values as string representations of JSON values and encodes them as shown in the following table. When the hstore.handling.mode property is set to map , the connector uses the MAP schema type for HSTORE values. Table 8.22. Mappings for HSTORE data type PostgreSQL data type Literal type (schema type) Semantic type (schema name) and Notes HSTORE STRING io.debezium.data.Json Example: output representation using the JSON converter is {"key" : "val"} HSTORE MAP n/a Example: output representation using the JSON converter is {"key" : "val"} Domain types PostgreSQL supports user-defined types that are based on other underlying types. When such column types are used, Debezium exposes the column's representation based on the full type hierarchy. Important Capturing changes in columns that use PostgreSQL domain types requires special consideration. When a column is defined to contain a domain type that extends one of the default database types and the domain type defines a custom length or scale, the generated schema inherits that defined length or scale. When a column is defined to contain a domain type that extends another domain type that defines a custom length or scale, the generated schema does not inherit the defined length or scale because that information is not available in the PostgreSQL driver's column metadata. Network address types PostgreSQL has data types that can store IPv4, IPv6, and MAC addresses. It is better to use these types instead of plain text types to store network addresses. Network address types offer input error checking and specialized operators and functions. Table 8.23. Mappings for network address types PostgreSQL data type Literal type (schema type) Semantic type (schema name) and Notes INET STRING n/a IPv4 and IPv6 hosts and networks CIDR STRING n/a IPv4 and IPv6 networks MACADDR STRING n/a MAC addresses MACADDR8 STRING n/a MAC addresses in EUI-64 format PostGIS types The PostgreSQL connector supports all PostGIS data types . Table 8.24.
Mappings of PostGIS data types PostGIS data type Literal type (schema type) Semantic type (schema name) and Notes GEOMETRY (planar) STRUCT io.debezium.data.geometry.Geometry Contains a structure with two fields: srid (INT32) - Spatial Reference System Identifier that defines what type of geometry object is stored in the structure. wkb (BYTES) - A binary representation of the geometry object encoded in the Well-Known-Binary format. For format details, see Open Geospatial Consortium Simple Features Access specification . GEOGRAPHY (spherical) STRUCT io.debezium.data.geometry.Geography Contains a structure with two fields: srid (INT32) - Spatial Reference System Identifier that defines what type of geography object is stored in the structure. wkb (BYTES) - A binary representation of the geometry object encoded in the Well-Known-Binary format. For format details, see Open Geospatial Consortium Simple Features Access specification . Toasted values PostgreSQL has a hard limit on the page size. This means that values that are larger than around 8 KBs need to be stored by using TOAST storage . This impacts replication messages that are coming from the database. Values that were stored by using the TOAST mechanism and that have not been changed are not included in the message, unless they are part of the table's replica identity. There is no safe way for Debezium to read the missing value out-of-bands directly from the database, as this would potentially lead to race conditions. Consequently, Debezium follows these rules to handle toasted values: Tables with REPLICA IDENTITY FULL - TOAST column values are part of the before and after fields in change events just like any other column. Tables with REPLICA IDENTITY DEFAULT - When receiving an UPDATE event from the database, any unchanged TOAST column value that is not part of the replica identity is not contained in the event. Similarly, when receiving a DELETE event, no TOAST columns, if any, are in the before field. As Debezium cannot safely provide the column value in this case, the connector returns a placeholder value as defined by the connector configuration property, unavailable.value.placeholder . Default values If a default value is specified for a column in the database schema, the PostgreSQL connector will attempt to propagate this value to the Kafka schema whenever possible. Most common data types are supported, including: BOOLEAN Numeric types ( INT , FLOAT , NUMERIC , etc.) Text types ( CHAR , VARCHAR , TEXT , etc.) Temporal types ( DATE , TIME , INTERVAL , TIMESTAMP , TIMESTAMPTZ ) JSON , JSONB , XML UUID Note that for temporal types, parsing of the default value is provided by PostgreSQL libraries; therefore, any string representation which is normally supported by PostgreSQL should also be supported by the connector. In the case that the default value is generated by a function rather than being directly specified in-line, the connector will instead export the equivalent of 0 for the given data type. These values include: FALSE for BOOLEAN 0 with appropriate precision, for numeric types Empty string for text/XML types {} for JSON types 1970-01-01 for DATE , TIMESTAMP , TIMESTAMPTZ types 00:00 for TIME EPOCH for INTERVAL 00000000-0000-0000-0000-000000000000 for UUID This support currently extends only to explicit usage of functions. For example, CURRENT_TIMESTAMP(6) is supported with parentheses, but CURRENT_TIMESTAMP is not. 
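To make the distinction concrete, the following sketch is hypothetical and does not come from this chapter's sample tables. For a nullable column declared as status VARCHAR(20) DEFAULT 'pending' , the literal default would be expected to surface in the field schema of the event value, in the same way that the snapshot source field carries a default in the earlier create event example:
{
  "type": "string",
  "optional": true,
  "default": "pending",
  "field": "status"
}
By contrast, for a column declared as created_at TIMESTAMP(6) DEFAULT CURRENT_TIMESTAMP(6) , the connector exports the equivalent of 0 for the data type, so the schema default corresponds to the epoch ( 1970-01-01 ) rather than an evaluated timestamp.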
Important Support for the propagation of default values exists primarily to allow for safe schema evolution when using the PostgreSQL connector with a schema registry that enforces compatibility between schema versions. Due to this primary concern, as well as the refresh behaviours of the different plug-ins, the default value present in the Kafka schema is not guaranteed to always be in-sync with the default value in the database schema. Default values may appear 'late' in the Kafka schema, depending on when/how a given plugin triggers refresh of the in-memory schema. Values may never appear/be skipped in the Kafka schema if the default changes multiple times in-between refreshes. Default values may appear 'early' in the Kafka schema, if a schema refresh is triggered while the connector has records waiting to be processed. This is due to the column metadata being read from the database at refresh time, rather than being present in the replication message. This may occur if the connector is behind and a refresh occurs, or on connector start if the connector was stopped for a time while updates continued to be written to the source database. This behaviour may be unexpected, but it is still safe. Only the schema definition is affected, while the real values present in the message will remain consistent with what was written to the source database. 8.5. Setting up PostgreSQL to run a Debezium connector This release of Debezium supports only the native pgoutput logical replication stream. To set up PostgreSQL so that it uses the pgoutput plug-in, you must enable a replication slot, and configure a user with sufficient privileges to perform the replication. Details are in the following topics: Section 8.5.1, "Configuring a replication slot for the Debezium pgoutput plug-in" Section 8.5.2, "Setting up PostgreSQL permissions for the Debezium connector" Section 8.5.3, "Setting privileges to enable Debezium to create PostgreSQL publications" Section 8.5.4, "Configuring PostgreSQL to allow replication with the Debezium connector host" Section 8.5.5, "Configuring PostgreSQL to manage Debezium WAL disk space consumption" Section 8.5.6, "Upgrading PostgreSQL databases that Debezium captures from" 8.5.1. Configuring a replication slot for the Debezium pgoutput plug-in PostgreSQL's logical decoding uses replication slots. To configure a replication slot, specify the following in the postgresql.conf file:
wal_level=logical
max_wal_senders=1
max_replication_slots=1
These settings instruct the PostgreSQL server as follows: wal_level - Use logical decoding with the write-ahead log. max_wal_senders - Use a maximum of one separate process for processing WAL changes. max_replication_slots - Allow a maximum of one replication slot to be created for streaming WAL changes. Replication slots are guaranteed to retain all WAL entries that are required for Debezium even during Debezium outages. Consequently, it is important to closely monitor replication slots to avoid: Too much disk consumption Any conditions, such as catalog bloat, that can happen if a replication slot stays unused for too long For more information, see the PostgreSQL documentation for replication slots . Note Familiarity with the mechanics and configuration of the PostgreSQL write-ahead log is helpful for using the Debezium PostgreSQL connector. 8.5.2. Setting up PostgreSQL permissions for the Debezium connector Setting up a PostgreSQL server to run a Debezium connector requires a database user that can perform replications.
Replication can be performed only by a database user that has appropriate permissions and only for a configured number of hosts. Although, by default, superusers have the necessary REPLICATION and LOGIN roles, as mentioned in Security , it is best not to provide the Debezium replication user with elevated privileges. Instead, create a Debezium user that has the minimum required privileges. Prerequisites PostgreSQL administrative permissions. Procedure To provide a user with replication permissions, define a PostgreSQL role that has at least the REPLICATION and LOGIN permissions, and then grant that role to the user. For example: CREATE ROLE <name> REPLICATION LOGIN; 8.5.3. Setting privileges to enable Debezium to create PostgreSQL publications Debezium streams change events for PostgreSQL source tables from publications that are created for the tables. Publications contain a filtered set of change events that are generated from one or more tables. The data in each publication is filtered based on the publication specification. The specification can be created by the PostgreSQL database administrator or by the Debezium connector. To permit the Debezium PostgreSQL connector to create publications and specify the data to replicate to them, the connector must operate with specific privileges in the database. There are several options for determining how publications are created. In general, it is best to manually create publications for the tables that you want to capture, before you set up the connector. However, you can configure your environment in a way that permits Debezium to create publications automatically, and to specify the data that is added to them. Debezium uses include list and exclude list properties to specify how data is inserted in the publication. For more information about the options for enabling Debezium to create publications, see publication.autocreate.mode . For Debezium to create a PostgreSQL publication, it must run as a user that has the following privileges: Replication privileges in the database to add the table to a publication. CREATE privileges on the database to add publications. SELECT privileges on the tables to copy the initial table data. Table owners automatically have SELECT permission for the table. To add tables to a publication, the user must be an owner of the table. But because the source table already exists, you need a mechanism to share ownership with the original owner. To enable shared ownership, you create a PostgreSQL replication group, and then add the existing table owner and the replication user to the group. Procedure Create a replication group. CREATE ROLE <replication_group> ; Add the original owner of the table to the group. GRANT REPLICATION_GROUP TO <original_owner> ; Add the Debezium replication user to the group. GRANT REPLICATION_GROUP TO <replication_user> ; Transfer ownership of the table to <replication_group> . ALTER TABLE <table_name> OWNER TO REPLICATION_GROUP; For Debezium to specify the capture configuration, the value of publication.autocreate.mode must be set to filtered . 8.5.4. Configuring PostgreSQL to allow replication with the Debezium connector host To enable Debezium to replicate PostgreSQL data, you must configure the database to permit replication with the host that runs the PostgreSQL connector. To specify the clients that are permitted to replicate with the database, add entries to the PostgreSQL host-based authentication file, pg_hba.conf . 
For more information about the pg_hba.conf file, see the PostgreSQL documentation. Procedure Add entries to the pg_hba.conf file to specify the Debezium connector hosts that can replicate with the database host. For example, pg_hba.conf file example:
local   replication     <youruser>                          trust   1
host    replication     <youruser>  127.0.0.1/32            trust   2
host    replication     <youruser>  ::1/128                 trust   3
1 Instructs the server to allow replication for <youruser> locally, that is, on the server machine. 2 Instructs the server to allow <youruser> on localhost to receive replication changes using IPV4 . 3 Instructs the server to allow <youruser> on localhost to receive replication changes using IPV6 . Note For more information about network masks, see the PostgreSQL documentation . 8.5.5. Configuring PostgreSQL to manage Debezium WAL disk space consumption In certain cases, it is possible for PostgreSQL disk space consumed by WAL files to spike or increase out of usual proportions. There are several possible reasons for this situation: The LSN up to which the connector has received data is available in the confirmed_flush_lsn column of the server's pg_replication_slots view. Data that is older than this LSN is no longer available, and the database is responsible for reclaiming the disk space. Also in the pg_replication_slots view, the restart_lsn column contains the LSN of the oldest WAL that the connector might require. If the value for confirmed_flush_lsn is regularly increasing and the value of restart_lsn lags, then the database needs to reclaim the space. The database typically reclaims disk space in batch blocks. This is expected behavior and no action by a user is necessary. There are many updates in a database that is being tracked but only a tiny number of updates are related to the table(s) and schema(s) for which the connector is capturing changes. This situation can be easily solved with periodic heartbeat events. Set the heartbeat.interval.ms connector configuration property. The PostgreSQL instance contains multiple databases and one of them is a high-traffic database. Debezium captures changes in another database that is low-traffic in comparison to the other database. Debezium then cannot confirm the LSN as replication slots work per-database and Debezium is not invoked. As WAL is shared by all databases, the amount used tends to grow until an event is emitted by the database for which Debezium is capturing changes. To overcome this, it is necessary to: Enable periodic heartbeat record generation with the heartbeat.interval.ms connector configuration property. Regularly emit change events from the database for which Debezium is capturing changes. A separate process would then periodically update the table by either inserting a new row or repeatedly updating the same row. PostgreSQL then invokes Debezium, which confirms the latest LSN and allows the database to reclaim the WAL space. This task can be automated by means of the heartbeat.action.query connector configuration property. Setting up multiple connectors for the same database server Debezium uses replication slots to stream changes from a database. These replication slots maintain the current position in the form of an LSN (Log Sequence Number), which is a pointer to a location in the WAL being consumed by the Debezium connector. This helps PostgreSQL keep the WAL available until it is processed by Debezium.
A single replication slot can exist only for a single consumer or process, because different consumers might have different states and need data from different positions. Because a replication slot can be used by only a single connector, it is essential to create a unique replication slot for each Debezium connector. Note that when a connector is not active, PostgreSQL might allow another connector to consume the replication slot, which is dangerous: a slot emits each change just once, so this can lead to data loss. In addition to a replication slot, Debezium uses a publication to stream events when it uses the pgoutput plug-in. Like a replication slot, a publication is defined at the database level for a set of tables. Thus, you need a unique publication for each connector, unless the connectors work on the same set of tables. For more information about the options for enabling Debezium to create publications, see publication.autocreate.mode . See slot.name and publication.name for information about how to set a unique replication slot name and publication name for each connector.
8.5.6. Upgrading PostgreSQL databases that Debezium captures from When you upgrade the PostgreSQL database that Debezium uses, you must take specific steps to protect against data loss and to ensure that Debezium continues to operate. In general, Debezium is resilient to interruptions caused by network failures and other outages. For example, when a database server that a connector monitors stops or crashes, after the connector re-establishes communication with the PostgreSQL server, it continues to read from the last position recorded by the log sequence number (LSN) offset. The connector retrieves information about the last recorded offset from the Kafka Connect offsets topic, and queries the configured PostgreSQL replication slot for a log sequence number (LSN) with the same value. For the connector to start and to capture change events from a PostgreSQL database, a replication slot must be present. However, as part of the PostgreSQL upgrade process, replication slots are removed, and the original slots are not restored after the upgrade completes. As a result, when the connector restarts and requests the last known offset from the replication slot, PostgreSQL cannot return the information. You can create a new replication slot, but you must do more than create a new slot to guard against data loss. A new replication slot can provide the LSNs only for changes that occur after you create the slot; it cannot provide the offsets for events that occurred before the upgrade. When the connector restarts, it first requests the last known offset from the Kafka offsets topic. It then sends a request to the replication slot to return information for the offset retrieved from the offsets topic. But the new replication slot cannot provide the information that the connector needs to resume streaming from the expected position. The connector then skips any existing change events in the log, and only resumes streaming from the most recent position in the log. This can lead to silent data loss: the connector emits no records for the skipped events, and it does not provide any information to indicate that events were skipped. For guidance about how to perform a PostgreSQL database upgrade so that Debezium can continue to capture events while minimizing the risk of data loss, see the following procedure. Procedure Temporarily stop applications that write to the database, or put them into a read-only mode. Back up the database.
Temporarily disable write access to the database. Verify that any changes that occurred in the database before you blocked write operations are saved to the write-ahead log (WAL), and that the WAL LSN is reflected on the replication slot. Provide the connector with enough time to capture all event records that are written to the replication slot. This step ensures that all change events that occurred before the downtime are accounted for, and that they are saved to Kafka. Verify that the connector has finished consuming entries from the replication slot by checking the value of the flushed LSN. Shut down the connector gracefully by stopping Kafka Connect. Kafka Connect stops the connectors, flushes all event records to Kafka, and records the last offset received from each connector. Note As an alternative to stopping the entire Kafka Connect cluster, you can stop the connector by deleting it. Do not remove the offset topic, because it might be shared by other Kafka connectors. Later, after you restore write access to the database and you are ready to restart the connector, you must recreate the connector. As a PostgreSQL administrator, drop the replication slot on the primary database server. Do not use the slot.drop.on.stop property to drop the replication slot. This property is for testing only. Stop the database. Perform the upgrade using an approved PostgreSQL upgrade procedure, such as pg_upgrade , or pg_dump and pg_restore . (Optional) Use a standard Kafka tool to remove the connector offsets from the offset storage topic. For an example of how to remove connector offsets, see how to remove connector offsets in the Debezium community FAQ. Restart the database. As a PostgreSQL administrator, create a Debezium logical replication slot on the database. You must create the slot before enabling writes to the database. Otherwise, Debezium cannot capture the changes, resulting in data loss. For information about setting up a replication slot, see Section 8.5.1, "Configuring a replication slot for the Debezium pgoutput plug-in" . Verify that the publication that defines the tables for Debezium to capture is still present after the upgrade. If the publication is not available, connect to the database as a PostgreSQL administrator to create a new publication. If it was necessary to create a new publication in the step, update the Debezium connector configuration to add the name of the new publication to the publication.name property. In the connector configuration, rename the connector. In the connector configuration, set slot.name to the name of the Debezium replication slot. Verify that the new replication slot is available. Restore write access to the database and restart any applications that write to the database. In the connector configuration, set the snapshot.mode property to never , and then restart the connector. Note If you were unable to verify that Debezium finished reading all database changes in Step 6, you can configure the connector to perform a new snapshot by setting snapshot.mode=initial . If necessary, you can confirm whether the connector read all changes from the replication slot by checking the contents of a database backup that was taken immediately before the upgrade. Additional resources Configuring replication slots for Debezium . 8.6. Deployment of Debezium PostgreSQL connectors You can use either of the following methods to deploy a Debezium PostgreSQL connector: Use AMQ Streams to automatically create an image that includes the connector plug-in . 
This is the preferred method. Build a custom Kafka Connect container image from a Dockerfile . Additional resources Section 8.6.5, "Descriptions of Debezium PostgreSQL connector configuration properties" 8.6.1. PostgreSQL connector deployment using AMQ Streams Beginning with Debezium 1.7, the preferred method for deploying a Debezium connector is to use AMQ Streams to build a Kafka Connect container image that includes the connector plug-in. During the deployment process, you create and use the following custom resources (CRs): A KafkaConnect CR that defines your Kafka Connect instance and includes information about the connector artifacts needs to include in the image. A KafkaConnector CR that provides details that include information the connector uses to access the source database. After AMQ Streams starts the Kafka Connect pod, you start the connector by applying the KafkaConnector CR. In the build specification for the Kafka Connect image, you can specify the connectors that are available to deploy. For each connector plug-in, you can also specify other components that you want to make available for deployment. For example, you can add Service Registry artifacts, or the Debezium scripting component. When AMQ Streams builds the Kafka Connect image, it downloads the specified artifacts, and incorporates them into the image. The spec.build.output parameter in the KafkaConnect CR specifies where to store the resulting Kafka Connect container image. Container images can be stored in a Docker registry, or in an OpenShift ImageStream. To store images in an ImageStream, you must create the ImageStream before you deploy Kafka Connect. ImageStreams are not created automatically. Note If you use a KafkaConnect resource to create a cluster, afterwards you cannot use the Kafka Connect REST API to create or update connectors. You can still use the REST API to retrieve information. Additional resources Configuring Kafka Connect in Using AMQ Streams on OpenShift. Creating a new container image automatically using AMQ Streams in Deploying and Managing AMQ Streams on OpenShift. 8.6.2. Using AMQ Streams to deploy a Debezium PostgreSQL connector With earlier versions of AMQ Streams, to deploy Debezium connectors on OpenShift, you were required to first build a Kafka Connect image for the connector. The current preferred method for deploying connectors on OpenShift is to use a build configuration in AMQ Streams to automatically build a Kafka Connect container image that includes the Debezium connector plug-ins that you want to use. During the build process, the AMQ Streams Operator transforms input parameters in a KafkaConnect custom resource, including Debezium connector definitions, into a Kafka Connect container image. The build downloads the necessary artifacts from the Red Hat Maven repository or another configured HTTP server. The newly created container is pushed to the container registry that is specified in .spec.build.output , and is used to deploy a Kafka Connect cluster. After AMQ Streams builds the Kafka Connect image, you create KafkaConnector custom resources to start the connectors that are included in the build. Prerequisites You have access to an OpenShift cluster on which the cluster Operator is installed. The AMQ Streams Operator is running. An Apache Kafka cluster is deployed as documented in Deploying and Upgrading AMQ Streams on OpenShift . Kafka Connect is deployed on AMQ Streams You have a Red Hat Integration license. 
The OpenShift oc CLI client is installed or you have access to the OpenShift Container Platform web console. Depending on how you intend to store the Kafka Connect build image, you need registry permissions or you must create an ImageStream resource: To store the build image in an image registry, such as Red Hat Quay.io or Docker Hub An account and permissions to create and manage images in the registry. To store the build image as a native OpenShift ImageStream An ImageStream resource is deployed to the cluster for storing new container images. You must explicitly create an ImageStream for the cluster. ImageStreams are not available by default. For more information about ImageStreams, see Managing image streams on OpenShift Container Platform . Procedure Log in to the OpenShift cluster. Create a Debezium KafkaConnect custom resource (CR) for the connector, or modify an existing one. For example, create a KafkaConnect CR with the name dbz-connect.yaml that specifies the metadata.annotations and spec.build properties. The following example shows an excerpt from a dbz-connect.yaml file that describes a KafkaConnect custom resource. Example 8.1. A dbz-connect.yaml file that defines a KafkaConnect custom resource that includes a Debezium connector In the example that follows, the custom resource is configured to download the following artifacts: The Debezium PostgreSQL connector archive. The Service Registry archive. The Service Registry is an optional component. Add the Service Registry component only if you intend to use Avro serialization with the connector. The Debezium scripting SMT archive and the associated scripting engine that you want to use with the Debezium connector. The SMT archive and scripting language dependencies are optional components. Add these components only if you intend to use the Debezium content-based routing SMT or filter SMT . apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: debezium-kafka-connect-cluster annotations: strimzi.io/use-connector-resources: "true" 1 spec: version: 3.5.0 build: 2 output: 3 type: imagestream 4 image: debezium-streams-connect:latest plugins: 5 - name: debezium-connector-postgres artifacts: - type: zip 6 url: https://maven.repository.redhat.com/ga/io/debezium/debezium-connector-postgres/2.3.4.Final-redhat-00001/debezium-connector-postgres-2.3.4.Final-redhat-00001-plugin.zip 7 - type: zip url: https://maven.repository.redhat.com/ga/io/apicurio/apicurio-registry-distro-connect-converter/2.4.4.Final-redhat- <build-number> /apicurio-registry-distro-connect-converter-2.4.4.Final-redhat- <build-number> .zip 8 - type: zip url: https://maven.repository.redhat.com/ga/io/debezium/debezium-scripting/2.3.4.Final-redhat-00001/debezium-scripting-2.3.4.Final-redhat-00001.zip 9 - type: jar url: https://repo1.maven.org/maven2/org/codehaus/groovy/groovy/3.0.11/groovy-3.0.11.jar 10 - type: jar url: https://repo1.maven.org/maven2/org/codehaus/groovy/groovy-jsr223/3.0.11/groovy-jsr223-3.0.11.jar - type: jar url: https://repo1.maven.org/maven2/org/codehaus/groovy/groovy-json3.0.11/groovy-json-3.0.11.jar bootstrapServers: debezium-kafka-cluster-kafka-bootstrap:9093 ... Table 8.25. Descriptions of Kafka Connect configuration settings Item Description 1 Sets the strimzi.io/use-connector-resources annotation to "true" to enable the Cluster Operator to use KafkaConnector resources to configure connectors in this Kafka Connect cluster. 
2 The spec.build configuration specifies where to store the build image and lists the plug-ins to include in the image, along with the location of the plug-in artifacts. 3 The build.output specifies the registry in which the newly built image is stored. 4 Specifies the name and image name for the image output. Valid values for output.type are docker to push into a container registry such as Docker Hub or Quay, or imagestream to push the image to an internal OpenShift ImageStream. To use an ImageStream, an ImageStream resource must be deployed to the cluster. For more information about specifying the build.output in the KafkaConnect configuration, see the AMQ Streams Build schema reference in Configuring AMQ Streams on OpenShift. 5 The plugins configuration lists all of the connectors that you want to include in the Kafka Connect image. For each entry in the list, specify a plug-in name , and information for about the artifacts that are required to build the connector. Optionally, for each connector plug-in, you can include other components that you want to be available for use with the connector. For example, you can add Service Registry artifacts, or the Debezium scripting component. 6 The value of artifacts.type specifies the file type of the artifact specified in the artifacts.url . Valid types are zip , tgz , or jar . Debezium connector archives are provided in .zip file format. The type value must match the type of the file that is referenced in the url field. 7 The value of artifacts.url specifies the address of an HTTP server, such as a Maven repository, that stores the file for the connector artifact. Debezium connector artifacts are available in the Red Hat Maven repository. The OpenShift cluster must have access to the specified server. 8 (Optional) Specifies the artifact type and url for downloading the Service Registry component. Include the Service Registry artifact, only if you want the connector to use Apache Avro to serialize event keys and values with the Service Registry, instead of using the default JSON converter. 9 (Optional) Specifies the artifact type and url for the Debezium scripting SMT archive to use with the Debezium connector. Include the scripting SMT only if you intend to use the Debezium content-based routing SMT or filter SMT To use the scripting SMT, you must also deploy a JSR 223-compliant scripting implementation, such as groovy. 10 (Optional) Specifies the artifact type and url for the JAR files of a JSR 223-compliant scripting implementation, which is required by the Debezium scripting SMT. Important If you use AMQ Streams to incorporate the connector plug-in into your Kafka Connect image, for each of the required scripting language components artifacts.url must specify the location of a JAR file, and the value of artifacts.type must also be set to jar . Invalid values cause the connector fails at runtime. To enable use of the Apache Groovy language with the scripting SMT, the custom resource in the example retrieves JAR files for the following libraries: groovy groovy-jsr223 (scripting agent) groovy-json (module for parsing JSON strings) As an alternative, the Debezium scripting SMT also supports the use of the JSR 223 implementation of GraalVM JavaScript. Apply the KafkaConnect build specification to the OpenShift cluster by entering the following command: oc create -f dbz-connect.yaml Based on the configuration specified in the custom resource, the Streams Operator prepares a Kafka Connect image to deploy. 
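While the build runs, you can optionally watch its progress from a terminal. The following commands are a sketch; the resource name debezium-kafka-connect-cluster and the namespace debezium are taken from the preceding example and are assumptions that you replace with the values used in your environment.

oc get kafkaconnect debezium-kafka-connect-cluster -n debezium -o yaml
oc get pods -n debezium

When the KafkaConnect resource reports a Ready condition in its status and the Kafka Connect pods are running, the build has finished.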
After the build completes, the Operator pushes the image to the specified registry or ImageStream, and starts the Kafka Connect cluster. The connector artifacts that you listed in the configuration are available in the cluster. Create a KafkaConnector resource to define an instance of each connector that you want to deploy. For example, create the following KafkaConnector CR, and save it as postgresql-inventory-connector.yaml .
Example 8.2. postgresql-inventory-connector.yaml file that defines the KafkaConnector custom resource for a Debezium connector
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  labels:
    strimzi.io/cluster: debezium-kafka-connect-cluster
  name: inventory-connector-postgresql 1
spec:
  class: io.debezium.connector.postgresql.PostgresConnector 2
  tasksMax: 1 3
  config: 4
    database.hostname: postgresql.debezium-postgresql.svc.cluster.local 5
    database.port: 5432 6
    database.user: debezium 7
    database.password: dbz 8
    database.dbname: mydatabase 9
    topic.prefix: inventory-connector-postgresql 10
    table.include.list: public.inventory 11
    ...
Table 8.26. Descriptions of connector configuration settings Item Description
1 The name of the connector to register with the Kafka Connect cluster.
2 The name of the connector class.
3 The number of tasks that can operate concurrently.
4 The connector's configuration.
5 The address of the host database instance.
6 The port number of the database instance.
7 The name of the account that Debezium uses to connect to the database.
8 The password that Debezium uses to connect to the database user account.
9 The name of the database to capture changes from.
10 The topic prefix for the database instance or cluster. The specified name must be formed only from alphanumeric characters or underscores. Because the topic prefix is used as the prefix for any Kafka topics that receive change events from this connector, the name must be unique among the connectors in the cluster. This namespace is also used in the names of related Kafka Connect schemas, and the namespaces of a corresponding Avro schema if you integrate the connector with the Avro converter .
11 The list of tables from which the connector captures change events.
Create the connector resource by running the following command: oc create -n <namespace> -f <kafkaConnector> .yaml For example, oc create -n debezium -f postgresql-inventory-connector.yaml The connector is registered to the Kafka Connect cluster and starts to run against the database that is specified by spec.config.database.dbname in the KafkaConnector CR. After the connector pod is ready, Debezium is running. You are now ready to verify the Debezium PostgreSQL deployment .
8.6.3. Deploying a Debezium PostgreSQL connector by building a custom Kafka Connect container image from a Dockerfile To deploy a Debezium PostgreSQL connector, you need to build a custom Kafka Connect container image that contains the Debezium connector archive and push this container image to a container registry. You then need to create two custom resources (CRs): A KafkaConnect CR that defines your Kafka Connect instance. The image property in the CR specifies the name of the container image that you create to run your Debezium connector. You apply this CR to the OpenShift instance where Red Hat AMQ Streams is deployed. AMQ Streams offers operators and images that bring Apache Kafka to OpenShift. A KafkaConnector CR that defines your Debezium PostgreSQL connector. Apply this CR to the same OpenShift instance where you applied the KafkaConnect CR.
Prerequisites PostgreSQL is running and you performed the steps to set up PostgreSQL to run a Debezium connector . AMQ Streams is deployed on OpenShift and is running Apache Kafka and Kafka Connect. For more information, see Deploying and Upgrading AMQ Streams on OpenShift . Podman or Docker is installed. You have an account and permissions to create and manage containers in the container registry (such as quay.io or docker.io ) to which you plan to add the container that will run your Debezium connector. Procedure Create the Debezium PostgreSQL container for Kafka Connect: Create a Dockerfile that uses registry.redhat.io/amq-streams-kafka-35-rhel8:2.5.0 as the base image. For example, from a terminal window, enter the following command: cat <<EOF >debezium-container-for-postgresql.yaml 1 FROM registry.redhat.io/amq-streams-kafka-35-rhel8:2.5.0 USER root:root RUN mkdir -p /opt/kafka/plugins/debezium 2 RUN cd /opt/kafka/plugins/debezium/ \ && curl -O https://maven.repository.redhat.com/ga/io/debezium/debezium-connector-postgres/2.3.4.Final-redhat-00001/debezium-connector-postgres-2.3.4.Final-redhat-00001-plugin.zip \ && unzip debezium-connector-postgres-2.3.4.Final-redhat-00001-plugin.zip \ && rm debezium-connector-postgres-2.3.4.Final-redhat-00001-plugin.zip RUN cd /opt/kafka/plugins/debezium/ USER 1001 EOF Item Description 1 You can specify any file name that you want. 2 Specifies the path to your Kafka Connect plug-ins directory. If your Kafka Connect plug-ins directory is in a different location, replace this path with the actual path of your directory. The command creates a Dockerfile with the name debezium-container-for-postgresql.yaml in the current directory. Build the container image from the debezium-container-for-postgresql.yaml Docker file that you created in the step. From the directory that contains the file, open a terminal window and enter one of the following commands: podman build -t debezium-container-for-postgresql:latest . docker build -t debezium-container-for-postgresql:latest . The build command builds a container image with the name debezium-container-for-postgresql . Push your custom image to a container registry such as quay.io or an internal container registry. The container registry must be available to the OpenShift instance where you want to deploy the image. Enter one of the following commands: podman push <myregistry.io> /debezium-container-for-postgresql:latest docker push <myregistry.io> /debezium-container-for-postgresql:latest Create a new Debezium PostgreSQL KafkaConnect custom resource (CR). For example, create a KafkaConnect CR with the name dbz-connect.yaml that specifies annotations and image properties. The following example shows an excerpt from a dbz-connect.yaml file that describes a KafkaConnect custom resource. apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster annotations: strimzi.io/use-connector-resources: "true" 1 spec: image: debezium-container-for-postgresql 2 ... Item Description 1 metadata.annotations indicates to the Cluster Operator that KafkaConnector resources are used to configure connectors in this Kafka Connect cluster. 2 spec.image specifies the name of the image that you created to run your Debezium connector. This property overrides the STRIMZI_DEFAULT_KAFKA_CONNECT_IMAGE variable in the Cluster Operator. 
Apply your KafkaConnect CR to the OpenShift Kafka instance by running the following command: oc create -f dbz-connect.yaml This updates your Kafka Connect environment in OpenShift to add a Kafka Connector instance that specifies the name of the image that you created to run your Debezium connector. Create a KafkaConnector custom resource that configures your Debezium PostgreSQL connector instance. You configure a Debezium PostgreSQL connector in a .yaml file that specifies the configuration properties for the connector. The connector configuration might instruct Debezium to produce events for a subset of the schemas and tables, or it might set properties so that Debezium ignores, masks, or truncates values in specified columns that are sensitive, too large, or not needed. For the complete list of the configuration properties that you can set for the Debezium PostgreSQL connector, see PostgreSQL connector properties . The following example shows an excerpt from a custom resource that configures a Debezium connector that connects to a PostgreSQL server host, 192.168.99.100 , on port 5432 . This host has a database named sampledb , a schema named public , and inventory-connector-postgresql is the server's logical name. inventory-connector.yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: inventory-connector-postgresql 1
  labels:
    strimzi.io/cluster: my-connect-cluster
spec:
  class: io.debezium.connector.postgresql.PostgresConnector
  tasksMax: 1 2
  config: 3
    database.hostname: 192.168.99.100 4
    database.port: 5432
    database.user: debezium
    database.password: dbz
    database.dbname: sampledb
    topic.prefix: inventory-connector-postgresql 5
    schema.include.list: public 6
    plugin.name: pgoutput 7
    ...
1 The name of the connector.
2 Only one task should operate at any one time. Because the PostgreSQL connector reads the PostgreSQL server's write-ahead log (WAL), using a single connector task ensures proper order and event handling. The Kafka Connect service uses connectors to start one or more tasks that do the work, and it automatically distributes the running tasks across the cluster of Kafka Connect services. If any of the services stop or crash, those tasks will be redistributed to running services.
3 The connector's configuration.
4 The name of the database host that is running the PostgreSQL server. In this example, the database host name is 192.168.99.100 .
5 A unique topic prefix. The server name is the logical identifier for the PostgreSQL server or cluster of servers. This name is used as the prefix for all Kafka topics that receive change event records.
6 The connector captures changes in only the public schema. It is possible to configure the connector to capture changes in only the tables that you choose. For more information, see table.include.list .
7 The name of the PostgreSQL logical decoding plug-in installed on the PostgreSQL server. While the only supported value for PostgreSQL 10 and later is pgoutput , you must explicitly set plugin.name to pgoutput .
Create your connector instance with Kafka Connect. For example, if you saved your KafkaConnector resource in the inventory-connector.yaml file, you would run the following command: oc apply -f inventory-connector.yaml This registers inventory-connector-postgresql and the connector starts to run against the sampledb database as defined in the KafkaConnector CR. Results After the connector starts, it performs a consistent snapshot of the PostgreSQL server databases that the connector is configured for.
The connector then starts generating data change events for row-level operations and streaming change event records to Kafka topics. 8.6.4. Verifying that the Debezium PostgreSQL connector is running If the connector starts correctly without errors, it creates a topic for each table that the connector is configured to capture. Downstream applications can subscribe to these topics to retrieve information events that occur in the source database. To verify that the connector is running, you perform the following operations from the OpenShift Container Platform web console, or through the OpenShift CLI tool (oc): Verify the connector status. Verify that the connector generates topics. Verify that topics are populated with events for read operations ("op":"r") that the connector generates during the initial snapshot of each table. Prerequisites A Debezium connector is deployed to AMQ Streams on OpenShift. The OpenShift oc CLI client is installed. You have access to the OpenShift Container Platform web console. Procedure Check the status of the KafkaConnector resource by using one of the following methods: From the OpenShift Container Platform web console: Navigate to Home Search . On the Search page, click Resources to open the Select Resource box, and then type KafkaConnector . From the KafkaConnectors list, click the name of the connector that you want to check, for example inventory-connector-postgresql . In the Conditions section, verify that the values in the Type and Status columns are set to Ready and True . From a terminal window: Enter the following command: oc describe KafkaConnector <connector-name> -n <project> For example, oc describe KafkaConnector inventory-connector-postgresql -n debezium The command returns status information that is similar to the following output: Example 8.3. KafkaConnector resource status Name: inventory-connector-postgresql Namespace: debezium Labels: strimzi.io/cluster=debezium-kafka-connect-cluster Annotations: <none> API Version: kafka.strimzi.io/v1beta2 Kind: KafkaConnector ... Status: Conditions: Last Transition Time: 2021-12-08T17:41:34.897153Z Status: True Type: Ready Connector Status: Connector: State: RUNNING worker_id: 10.131.1.124:8083 Name: inventory-connector-postgresql Tasks: Id: 0 State: RUNNING worker_id: 10.131.1.124:8083 Type: source Observed Generation: 1 Tasks Max: 1 Topics: inventory-connector-postgresql.inventory inventory-connector-postgresql.inventory.addresses inventory-connector-postgresql.inventory.customers inventory-connector-postgresql.inventory.geom inventory-connector-postgresql.inventory.orders inventory-connector-postgresql.inventory.products inventory-connector-postgresql.inventory.products_on_hand Events: <none> Verify that the connector created Kafka topics: From the OpenShift Container Platform web console. Navigate to Home Search . On the Search page, click Resources to open the Select Resource box, and then type KafkaTopic . From the KafkaTopics list, click the name of the topic that you want to check, for example, inventory-connector-postgresql.inventory.orders---ac5e98ac6a5d91e04d8ec0dc9078a1ece439081d . In the Conditions section, verify that the values in the Type and Status columns are set to Ready and True . From a terminal window: Enter the following command: oc get kafkatopics The command returns status information that is similar to the following output: Example 8.4. KafkaTopic resource status Check topic content. 
From a terminal window, enter the following command: oc exec -n <project> -it <kafka-cluster> -- /opt/kafka/bin/kafka-console-consumer.sh \ > --bootstrap-server localhost:9092 \ > --from-beginning \ > --property print.key=true \ > --topic= <topic-name > For example, oc exec -n debezium -it debezium-kafka-cluster-kafka-0 -- /opt/kafka/bin/kafka-console-consumer.sh \ > --bootstrap-server localhost:9092 \ > --from-beginning \ > --property print.key=true \ > --topic=inventory-connector-postgresql.inventory.products_on_hand The format for specifying the topic name is the same as the oc describe command returns in Step 1, for example, inventory-connector-postgresql.inventory.addresses . For each event in the topic, the command returns information that is similar to the following output: Example 8.5. Content of a Debezium change event In the preceding example, the payload value shows that the connector snapshot generated a read ( "op" ="r" ) event from the table inventory.products_on_hand . The "before" state of the product_id record is null , indicating that no value exists for the record. The "after" state shows a quantity of 3 for the item with product_id 101 . 8.6.5. Descriptions of Debezium PostgreSQL connector configuration properties The Debezium PostgreSQL connector has many configuration properties that you can use to achieve the right connector behavior for your application. Many properties have default values. Information about the properties is organized as follows: Required configuration properties Advanced configuration properties Pass-through configuration properties The following configuration properties are required unless a default value is available. Table 8.27. Required connector configuration properties Property Default Description name No default Unique name for the connector. Attempting to register again with the same name will fail. This property is required by all Kafka Connect connectors. connector.class No default The name of the Java class for the connector. Always use a value of io.debezium.connector.postgresql.PostgresConnector for the PostgreSQL connector. tasks.max 1 The maximum number of tasks that should be created for this connector. The PostgreSQL connector always uses a single task and therefore does not use this value, so the default is always acceptable. plugin.name decoderbufs The name of the PostgreSQL logical decoding plug-in installed on the PostgreSQL server. The only supported value is pgoutput . You must explicitly set plugin.name to pgoutput . slot.name debezium The name of the PostgreSQL logical decoding slot that was created for streaming changes from a particular plug-in for a particular database/schema. The server uses this slot to stream events to the Debezium connector that you are configuring. Slot names must conform to PostgreSQL replication slot naming rules , which state: "Each replication slot has a name, which can contain lower-case letters, numbers, and the underscore character." slot.drop.on.stop false Whether or not to delete the logical replication slot when the connector stops in a graceful, expected way. The default behavior is that the replication slot remains configured for the connector when the connector stops. When the connector restarts, having the same replication slot enables the connector to start processing where it left off. Set to true in only testing or development environments. Dropping the slot allows the database to discard WAL segments. 
When the connector restarts it performs a new snapshot or it can continue from a persistent offset in the Kafka Connect offsets topic. publication.name dbz_publication The name of the PostgreSQL publication created for streaming changes when using pgoutput . This publication is created at start-up if it does not already exist and it includes all tables . Debezium then applies its own include/exclude list filtering, if configured, to limit the publication to change events for the specific tables of interest. The connector user must have superuser permissions to create this publication, so it is usually preferable to create the publication before starting the connector for the first time. If the publication already exists, either for all tables or configured with a subset of tables, Debezium uses the publication as it is defined. database.hostname No default IP address or hostname of the PostgreSQL database server. database.port 5432 Integer port number of the PostgreSQL database server. database.user No default Name of the PostgreSQL database user for connecting to the PostgreSQL database server. database.password No default Password to use when connecting to the PostgreSQL database server. database.dbname No default The name of the PostgreSQL database from which to stream the changes. topic.prefix No default Topic prefix that provides a namespace for the particular PostgreSQL database server or cluster in which Debezium is capturing changes. The prefix should be unique across all other connectors, since it is used as a topic name prefix for all Kafka topics that receive records from this connector. Only alphanumeric characters, hyphens, dots and underscores must be used in the database server logical name. Warning Do not change the value of this property. If you change the name value, after a restart, instead of continuing to emit events to the original topics, the connector emits subsequent events to topics whose names are based on the new value. schema.include.list No default An optional, comma-separated list of regular expressions that match names of schemas for which you want to capture changes. Any schema name not included in schema.include.list is excluded from having its changes captured. By default, all non-system schemas have their changes captured. To match the name of a schema, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire identifier for the schema; it does not match substrings that might be present in a schema name. If you include this property in the configuration, do not also set the schema.exclude.list property. schema.exclude.list No default An optional, comma-separated list of regular expressions that match names of schemas for which you do not want to capture changes. Any schema whose name is not included in schema.exclude.list has its changes captured, with the exception of system schemas. To match the name of a schema, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire identifier for the schema; it does not match substrings that might be present in a schema name. If you include this property in the configuration, do not set the schema.include.list property. table.include.list No default An optional, comma-separated list of regular expressions that match fully-qualified table identifiers for tables whose changes you want to capture. 
When this property is set, the connector captures changes only from the specified tables. Each identifier is of the form schemaName . tableName . By default, the connector captures changes in every non-system table in each schema whose changes are being captured. To match the name of a table, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire identifier for the table; it does not match substrings that might be present in a table name. If you include this property in the configuration, do not also set the table.exclude.list property. table.exclude.list No default An optional, comma-separated list of regular expressions that match fully-qualified table identifiers for tables whose changes you do not want to capture. Each identifier is of the form schemaName . tableName . When this property is set, the connector captures changes from every table that you do not specify. To match the name of a table, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire identifier for the table; it does not match substrings that might be present in a table name. If you include this property in the configuration, do not set the table.include.list property. column.include.list No default An optional, comma-separated list of regular expressions that match the fully-qualified names of columns that should be included in change event record values. Fully-qualified names for columns are of the form schemaName . tableName . columnName . To match the name of a column, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the expression is used to match the entire name string of the column; it does not match substrings that might be present in a column name. If you include this property in the configuration, do not also set the column.exclude.list property. column.exclude.list No default An optional, comma-separated list of regular expressions that match the fully-qualified names of columns that should be excluded from change event record values. Fully-qualified names for columns are of the form schemaName . tableName . columnName . To match the name of a column, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the expression is used to match the entire name string of the column; it does not match substrings that might be present in a column name. If you include this property in the configuration, do not set the column.include.list property. skip.messages.without.change false Specifies whether to skip publishing messages when there is no change in included columns. This would essentially filter messages if there is no change in columns included as per column.include.list or column.exclude.list properties. Note: Only works when REPLICA IDENTITY of the table is set to FULL time.precision.mode adaptive Time, date, and timestamps can be represented with different kinds of precision: adaptive captures the time and timestamp values exactly as in the database using either millisecond, microsecond, or nanosecond precision values based on the database column's type. adaptive_time_microseconds captures the date, datetime and timestamp values exactly as in the database using either millisecond, microsecond, or nanosecond precision values based on the database column's type. 
An exception is TIME type fields, which are always captured as microseconds. connect always represents time and timestamp values by using Kafka Connect's built-in representations for Time , Date , and Timestamp , which use millisecond precision regardless of the database columns' precision. For more information, see temporal values . decimal.handling.mode precise Specifies how the connector should handle values for DECIMAL and NUMERIC columns: precise represents values by using java.math.BigDecimal to represent values in binary form in change events. double represents values by using double values, which might result in a loss of precision but which is easier to use. string encodes values as formatted strings, which are easy to consume but semantic information about the real type is lost. For more information, see Decimal types . hstore.handling.mode map Specifies how the connector should handle values for hstore columns: map represents values by using MAP . json represents values by using json string . This setting encodes values as formatted strings such as {"key" : "val"} . For more information, see PostgreSQL HSTORE type . interval.handling.mode numeric Specifies how the connector should handle values for interval columns: numeric represents intervals using approximate number of microseconds. string represents intervals exactly by using the string pattern representation P<years>Y<months>M<days>DT<hours>H<minutes>M<seconds>S . For example: P1Y2M3DT4H5M6.78S . For more information, see PostgreSQL basic types . database.sslmode prefer Whether to use an encrypted connection to the PostgreSQL server. Options include: disable uses an unencrypted connection. allow attempts to use an unencrypted connection first and, failing that, a secure (encrypted) connection. prefer attempts to use a secure (encrypted) connection first and, failing that, an unencrypted connection. require uses a secure (encrypted) connection, and fails if one cannot be established. verify-ca behaves like require but also verifies the server TLS certificate against the configured Certificate Authority (CA) certificates, or fails if no valid matching CA certificates are found. verify-full behaves like verify-ca but also verifies that the server certificate matches the host to which the connector is trying to connect. For more information, see the PostgreSQL documentation . database.sslcert No default The path to the file that contains the SSL certificate for the client. For more information, see the PostgreSQL documentation . database.sslkey No default The path to the file that contains the SSL private key of the client. For more information, see the PostgreSQL documentation . database.sslpassword No default The password to access the client private key from the file specified by database.sslkey . For more information, see the PostgreSQL documentation . database.sslrootcert No default The path to the file that contains the root certificate(s) against which the server is validated. For more information, see the PostgreSQL documentation . database.tcpKeepAlive true Enable TCP keep-alive probe to verify that the database connection is still alive. For more information, see the PostgreSQL documentation . tombstones.on.delete true Controls whether a delete event is followed by a tombstone event. true - a delete operation is represented by a delete event and a subsequent tombstone event. false - only a delete event is emitted. 
After a source record is deleted, emitting a tombstone event (the default behavior) allows Kafka to completely delete all events that pertain to the key of the deleted row in case log compaction is enabled for the topic. column.truncate.to. length .chars n/a An optional, comma-separated list of regular expressions that match the fully-qualified names of character-based columns. Set this property if you want to truncate the data in a set of columns when it exceeds the number of characters specified by the length in the property name. Set length to a positive integer value, for example, column.truncate.to.20.chars . The fully-qualified name of a column observes the following format: <schemaName> . <tableName> . <columnName> . To match the name of a column, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the column; the expression does not match substrings that might be present in a column name. You can specify multiple properties with different lengths in a single configuration. column.mask.with. length .chars n/a An optional, comma-separated list of regular expressions that match the fully-qualified names of character-based columns. Set this property if you want the connector to mask the values for a set of columns, for example, if they contain sensitive data. Set length to a positive integer to replace data in the specified columns with the number of asterisk ( * ) characters specified by the length in the property name. Set length to 0 (zero) to replace data in the specified columns with an empty string. The fully-qualified name of a column observes the following format: schemaName . tableName . columnName . To match the name of a column, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the column; the expression does not match substrings that might be present in a column name. You can specify multiple properties with different lengths in a single configuration. column.mask.hash. hashAlgorithm .with.salt. salt ; column.mask.hash.v2. hashAlgorithm .with.salt. salt n/a An optional, comma-separated list of regular expressions that match the fully-qualified names of character-based columns. Fully-qualified names for columns are of the form <schemaName> . <tableName> . <columnName> . To match the name of a column Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the column; the expression does not match substrings that might be present in a column name. In the resulting change event record, the values for the specified columns are replaced with pseudonyms. A pseudonym consists of the hashed value that results from applying the specified hashAlgorithm and salt . Based on the hash function that is used, referential integrity is maintained, while column values are replaced with pseudonyms. Supported hash functions are described in the MessageDigest section of the Java Cryptography Architecture Standard Algorithm Name Documentation. In the following example, CzQMA0cB5K is a randomly selected salt. If necessary, the pseudonym is automatically shortened to the length of the column. The connector configuration can include multiple properties that specify different hash algorithms and salts. 
Depending on the hashAlgorithm used, the salt selected, and the actual data set, the resulting data set might not be completely masked. Hashing strategy version 2 should be used to ensure fidelity if the value is being hashed in different places or systems. column.propagate.source.type n/a An optional, comma-separated list of regular expressions that match the fully-qualified names of columns for which you want the connector to emit extra parameters that represent column metadata. When this property is set, the connector adds the following fields to the schema of event records: __debezium.source.column.type __debezium.source.column.length __debezium.source.column.scale These parameters propagate a column's original type name and length (for variable-width types), respectively. Enabling the connector to emit this extra data can assist in properly sizing specific numeric or character-based columns in sink databases. The fully-qualified name of a column observes one of the following formats: databaseName . tableName . columnName , or databaseName . schemaName . tableName . columnName . To match the name of a column, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the column; the expression does not match substrings that might be present in a column name. datatype.propagate.source.type n/a An optional, comma-separated list of regular expressions that specify the fully-qualified names of data types that are defined for columns in a database. When this property is set, for columns with matching data types, the connector emits event records that include the following extra fields in their schema: __debezium.source.column.type __debezium.source.column.length __debezium.source.column.scale These parameters propagate a column's original type name and length (for variable-width types), respectively. Enabling the connector to emit this extra data can assist in properly sizing specific numeric or character-based columns in sink databases. The fully-qualified name of a column observes one of the following formats: databaseName . tableName . typeName , or databaseName . schemaName . tableName . typeName . To match the name of a data type, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the data type; the expression does not match substrings that might be present in a type name. For the list of PostgreSQL-specific data type names, see the PostgreSQL data type mappings . message.key.columns empty string A list of expressions that specify the columns that the connector uses to form custom message keys for change event records that it publishes to the Kafka topics for specified tables. By default, Debezium uses the primary key column of a table as the message key for records that it emits. In place of the default, or to specify a key for tables that lack a primary key, you can configure custom message keys based on one or more columns. To establish a custom message key for a table, list the table, followed by the columns to use as the message key. Each list entry takes the following format: <fully-qualified_tableName> : <keyColumn> , <keyColumn> To base a table key on multiple column names, insert commas between the column names. Each fully-qualified table name is a regular expression in the following format: <schemaName> . 
<tableName> The property can include entries for multiple tables. Use a semicolon to separate table entries in the list. The following example sets the message key for the tables inventory.customers and purchase.orders : inventory.customers:pk1,pk2;(.*).purchaseorders:pk3,pk4 For the table inventory.customer , the columns pk1 and pk2 are specified as the message key. For the purchaseorders tables in any schema, the columns pk3 and pk4 server as the message key. There is no limit to the number of columns that you use to create custom message keys. However, it's best to use the minimum number that are required to specify a unique key. Note that having this property set and REPLICA IDENTITY set to DEFAULT on the tables, will cause the tombstone events to not be created properly if the key columns are not part of the primary key of the table. Setting REPLICA IDENTITY to FULL is the only solution. publication.autocreate.mode all_tables Applies only when streaming changes by using the pgoutput plug-in . The setting determines how creation of a publication should work. Specify one of the following values: all_tables - If a publication exists, the connector uses it. If a publication does not exist, the connector creates a publication for all tables in the database for which the connector is capturing changes. For the connector to create a publication it must access the database through a database user account that has permission to create publications and perform replications. You grant the required permission by using the following SQL command CREATE PUBLICATION <publication_name> FOR ALL TABLES; . disabled - The connector does not attempt to create a publication. A database administrator or the user configured to perform replications must have created the publication before running the connector. If the connector cannot find the publication, the connector throws an exception and stops. filtered - If a publication exists, the connector uses it. If no publication exists, the connector creates a new publication for tables that match the current filter configuration as specified by the schema.include.list , schema.exclude.list , and table.include.list , and table.exclude.list connector configuration properties. For example: CREATE PUBLICATION <publication_name> FOR TABLE <tbl1, tbl2, tbl3> . If the publication exists, the connector updates the publication for tables that match the current filter configuration. For example: ALTER PUBLICATION <publication_name> SET TABLE <tbl1, tbl2, tbl3> . replica.identity.autoset.values empty string The setting determines the value for replica identity at table level. This option will overwrite the existing value in database. A comma-separated list of regular expressions that match fully-qualified tables and replica identity value to be used in the table. Each expression must match the pattern '<fully-qualified table name>:<replica identity>', where the table name could be defined as ( SCHEMA_NAME.TABLE_NAME ), and the replica identity values are: DEFAULT - Records the old values of the columns of the primary key, if any. This is the default for non-system tables. INDEX index_name - Records the old values of the columns covered by the named index, that must be unique, not partial, not deferrable, and include only columns marked NOT NULL. If this index is dropped, the behavior is the same as NOTHING. FULL - Records the old values of all columns in the row. NOTHING - Records no information about the old row. This is the default for system tables. 
For example, binary.handling.mode bytes Specifies how binary ( bytea ) columns should be represented in change events: bytes represents binary data as byte array. base64 represents binary data as base64-encoded strings. base64-url-safe represents binary data as base64-url-safe-encoded strings. hex represents binary data as hex-encoded (base16) strings. schema.name.adjustment.mode none Specifies how schema names should be adjusted for compatibility with the message converter used by the connector. Possible settings: none does not apply any adjustment. avro replaces the characters that cannot be used in the Avro type name with underscore. avro_unicode replaces the underscore or characters that cannot be used in the Avro type name with corresponding unicode like _uxxxx. Note: _ is an escape sequence like backslash in Java field.name.adjustment.mode none Specifies how field names should be adjusted for compatibility with the message converter used by the connector. Possible settings: none does not apply any adjustment. avro replaces the characters that cannot be used in the Avro type name with underscore. avro_unicode replaces the underscore or characters that cannot be used in the Avro type name with corresponding unicode like _uxxxx. Note: _ is an escape sequence like backslash in Java For more information, see Avro naming . money.fraction.digits 2 Specifies how many decimal digits should be used when converting Postgres money type to java.math.BigDecimal , which represents the values in change events. Applicable only when decimal.handling.mode is set to precise . message.prefix.include.list No default An optional, comma-separated list of regular expressions that match the names of the logical decoding message prefixes that you want the connector to capture. By default, the connector captures all logical decoding messages. When this property is set, the connector captures only logical decoding message with the prefixes specified by the property. All other logical decoding messages are excluded. To match the name of a message prefix, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire message prefix string; the expression does not match substrings that might be present in a prefix. If you include this property in the configuration, do not also set the message.prefix.exclude.list property. For information about the structure of message events and about their ordering semantics, see message events . message.prefix.exclude.list No default An optional, comma-separated list of regular expressions that match the names of the logical decoding message prefixes that you do not want the connector to capture. When this property is set, the connector does not capture logical decoding messages that use the specified prefixes. All other messages are captured. To exclude all logical decoding messages, set the value of this property to .* . To match the name of a message prefix, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire message prefix string; the expression does not match substrings that might be present in a prefix. If you include this property in the configuration, do not also set message.prefix.include.list property. For information about the structure of message events and about their ordering semantics, see message events . 
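To show how the required properties in the preceding table fit together, the following excerpt is a minimal sketch of the config section of a KafkaConnector custom resource. It is not a complete or authoritative configuration; the host name, credentials, database, slot, publication, and table names are illustrative assumptions.

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: inventory-connector-postgresql
  labels:
    strimzi.io/cluster: my-connect-cluster
spec:
  class: io.debezium.connector.postgresql.PostgresConnector
  tasksMax: 1
  config:
    plugin.name: pgoutput
    slot.name: debezium_inventory
    publication.name: dbz_inventory_publication
    publication.autocreate.mode: filtered
    database.hostname: 192.168.99.100
    database.port: 5432
    database.user: debezium
    database.password: dbz
    database.dbname: sampledb
    topic.prefix: inventory-connector-postgresql
    schema.include.list: public
    table.include.list: public.customers,public.orders

Although slot.name , publication.name , and publication.autocreate.mode have defaults, setting them explicitly, as in this sketch, helps avoid collisions when more than one connector captures changes from the same database server.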
The following advanced configuration properties have defaults that work in most situations and therefore rarely need to be specified in the connector's configuration. Table 8.28. Advanced connector configuration properties Property Default Description converters No default Enumerates a comma-separated list of the symbolic names of the custom converter instances that the connector can use. For example, isbn You must set the converters property to enable the connector to use a custom converter. For each converter that you configure for a connector, you must also add a .type property, which specifies the fully-qualified name of the class that implements the converter interface. The .type property uses the following format: <converterSymbolicName> .type For example, If you want to further control the behavior of a configured converter, you can add one or more configuration parameters to pass values to the converter. To associate any additional configuration parameter with a converter, prefix the parameter names with the symbolic name of the converter. For example, snapshot.mode initial Specifies the criteria for performing a snapshot when the connector starts: initial - The connector performs a snapshot only when no offsets have been recorded for the logical server name. always - The connector performs a snapshot each time the connector starts. never - The connector never performs snapshots. When a connector is configured this way, its behavior when it starts is as follows. If there is a previously stored LSN in the Kafka offsets topic, the connector continues streaming changes from that position. If no LSN has been stored, the connector starts streaming changes from the point in time when the PostgreSQL logical replication slot was created on the server. The never snapshot mode is useful only when you know all data of interest is still reflected in the WAL. initial_only - The connector performs an initial snapshot and then stops, without processing any subsequent changes. exported - deprecated For more information, see the table of snapshot.mode options . snapshot.include.collection.list All tables specified in table.include.list An optional, comma-separated list of regular expressions that match the fully-qualified names ( <schemaName>.<tableName> ) of the tables to include in a snapshot. The specified items must be named in the connector's table.include.list property. This property takes effect only if the connector's snapshot.mode property is set to a value other than never . This property does not affect the behavior of incremental snapshots. To match the name of a table, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the table; it does not match substrings that might be present in a table name. snapshot.lock.timeout.ms 10000 Positive integer value that specifies the maximum amount of time (in milliseconds) to wait to obtain table locks when performing a snapshot. If the connector cannot acquire table locks in this time interval, the snapshot fails. How the connector performs snapshots provides details. snapshot.select.statement.overrides No default Specifies the table rows to include in a snapshot. Use the property if you want a snapshot to include only a subset of the rows in a table. This property affects snapshots only. It does not apply to events that the connector reads from the log.
The property contains a comma-separated list of fully-qualified table names in the form <schemaName>.<tableName> . For example, "snapshot.select.statement.overrides": "inventory.products,customers.orders" For each table in the list, add a further configuration property that specifies the SELECT statement for the connector to run on the table when it takes a snapshot. The specified SELECT statement determines the subset of table rows to include in the snapshot. Use the following format to specify the name of this SELECT statement property: snapshot.select.statement.overrides. <schemaName> . <tableName> . For example, snapshot.select.statement.overrides.customers.orders . Example: From a customers.orders table that includes the soft-delete column, delete_flag , add the following properties if you want a snapshot to include only those records that are not soft-deleted: In the resulting snapshot, the connector includes only the records for which delete_flag = 0 . event.processing.failure.handling.mode fail Specifies how the connector should react to exceptions during processing of events: fail propagates the exception, indicates the offset of the problematic event, and causes the connector to stop. warn logs the offset of the problematic event, skips that event, and continues processing. skip skips the problematic event and continues processing. max.batch.size 2048 Positive integer value that specifies the maximum size of each batch of events that the connector processes. max.queue.size 8192 Positive integer value that specifies the maximum number of records that the blocking queue can hold. When Debezium reads events streamed from the database, it places the events in the blocking queue before it writes them to Kafka. The blocking queue can provide backpressure for reading change events from the database in cases where the connector ingests messages faster than it can write them to Kafka, or when Kafka becomes unavailable. Events that are held in the queue are disregarded when the connector periodically records offsets. Always set the value of max.queue.size to be larger than the value of max.batch.size . max.queue.size.in.bytes 0 A long integer value that specifies the maximum volume of the blocking queue in bytes. By default, volume limits are not specified for the blocking queue. To specify the number of bytes that the queue can consume, set this property to a positive long value. If max.queue.size is also set, writing to the queue is blocked when the size of the queue reaches the limit specified by either property. For example, if you set max.queue.size=1000 , and max.queue.size.in.bytes=5000 , writing to the queue is blocked after the queue contains 1000 records, or after the volume of the records in the queue reaches 5000 bytes. poll.interval.ms 500 Positive integer value that specifies the number of milliseconds the connector should wait for new change events to appear before it starts processing a batch of events. Defaults to 500 milliseconds. include.unknown.datatypes false Specifies connector behavior when the connector encounters a field whose data type is unknown. The default behavior is that the connector omits the field from the change event and logs a warning. Set this property to true if you want the change event to contain an opaque binary representation of the field. This lets consumers decode the field. You can control the exact representation by setting the binary handling mode property. 
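As a hedged sketch only, the failure-handling, queue, and unknown-data-type properties described above might be tuned as follows; the byte limit is an arbitrary example value, and max.queue.size is kept larger than max.batch.size as the documentation requires:
"event.processing.failure.handling.mode": "warn",
"max.batch.size": "2048",
"max.queue.size": "8192",
"max.queue.size.in.bytes": "104857600",
"include.unknown.datatypes": "true"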
Note Consumers risk backward compatibility issues when include.unknown.datatypes is set to true . Not only may the database-specific binary representation change between releases, but if the data type is eventually supported by Debezium, the data type will be sent downstream in a logical type, which would require adjustments by consumers. In general, when encountering unsupported data types, create a feature request so that support can be added. database.initial.statements No default A semicolon separated list of SQL statements that the connector executes when it establishes a JDBC connection to the database. To use a semicolon as a character and not as a delimiter, specify two consecutive semicolons, ;; . The connector may establish JDBC connections at its own discretion. Consequently, this property is useful for configuration of session parameters only, and not for executing DML statements. The connector does not execute these statements when it creates a connection for reading the transaction log. status.update.interval.ms 10000 Frequency for sending replication connection status updates to the server, given in milliseconds. The property also controls how frequently the database status is checked to detect a dead connection in case the database was shut down. heartbeat.interval.ms 0 Controls how frequently the connector sends heartbeat messages to a Kafka topic. The default behavior is that the connector does not send heartbeat messages. Heartbeat messages are useful for monitoring whether the connector is receiving change events from the database. Heartbeat messages might help decrease the number of change events that need to be re-sent when a connector restarts. To send heartbeat messages, set this property to a positive integer, which indicates the number of milliseconds between heartbeat messages. Heartbeat messages are needed when there are many updates in a database that is being tracked but only a tiny number of updates are related to the table(s) and schema(s) for which the connector is capturing changes. In this situation, the connector reads from the database transaction log as usual but rarely emits change records to Kafka. This means that no offset updates are committed to Kafka and the connector does not have an opportunity to send the latest retrieved LSN to the database. The database retains WAL files that contain events that have already been processed by the connector. Sending heartbeat messages enables the connector to send the latest retrieved LSN to the database, which allows the database to reclaim disk space being used by no longer needed WAL files. heartbeat.action.query No default Specifies a query that the connector executes on the source database when the connector sends a heartbeat message. This is useful for resolving the situation described in WAL disk space consumption , where capturing changes from a low-traffic database on the same host as a high-traffic database prevents Debezium from processing WAL records and thus acknowledging WAL positions with the database. To address this situation, create a heartbeat table in the low-traffic database, and set this property to a statement that inserts records into that table, for example: INSERT INTO test_heartbeat_table (text) VALUES ('test_heartbeat') This allows the connector to receive changes from the low-traffic database and acknowledge their LSNs, which prevents unbounded WAL growth on the database host. 
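Combining the heartbeat properties described above, a minimal sketch might look like the following; the 10-second interval is an assumed example value, and the INSERT statement reuses the heartbeat table shown in this section:
"heartbeat.interval.ms": "10000",
"heartbeat.action.query": "INSERT INTO test_heartbeat_table (text) VALUES ('test_heartbeat')"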
schema.refresh.mode columns_diff Specifies the conditions that trigger a refresh of the in-memory schema for a table. columns_diff is the safest mode. It ensures that the in-memory schema stays in sync with the database table's schema at all times. columns_diff_exclude_unchanged_toast instructs the connector to refresh the in-memory schema cache if there is a discrepancy with the schema derived from the incoming message, unless unchanged TOASTable data fully accounts for the discrepancy. This setting can significantly improve connector performance if there are frequently-updated tables that have TOASTed data that are rarely part of updates. However, it is possible for the in-memory schema to become outdated if TOASTable columns are dropped from the table. snapshot.delay.ms No default An interval in milliseconds that the connector should wait before performing a snapshot when the connector starts. If you are starting multiple connectors in a cluster, this property is useful for avoiding snapshot interruptions, which might cause re-balancing of connectors. snapshot.fetch.size 10240 During a snapshot, the connector reads table content in batches of rows. This property specifies the maximum number of rows in a batch. slot.stream.params No default Semicolon-separated list of parameters to pass to the configured logical decoding plug-in. For example, add-tables=public.table,public.table2;include-lsn=true . slot.max.retries 6 If connecting to a replication slot fails, this is the maximum number of consecutive attempts to connect. slot.retry.delay.ms 10000 (10 seconds) The number of milliseconds to wait between retry attempts when the connector fails to connect to a replication slot. unavailable.value.placeholder __debezium_unavailable_value Specifies the constant that the connector provides to indicate that the original value is a toasted value that is not provided by the database. If the setting of unavailable.value.placeholder starts with the hex: prefix, the rest of the string is expected to represent hexadecimally encoded octets. For more information, see toasted values . provide.transaction.metadata false Determines whether the connector generates events with transaction boundaries and enriches change event envelopes with transaction metadata. Specify true if you want the connector to do this. For more information, see Transaction metadata . flush.lsn.source true Determines whether the connector should commit the LSN of the processed records in the source PostgreSQL database so that the WAL logs can be deleted. Specify false if you do not want the connector to do this. Note that if this property is set to false , the LSN is not acknowledged by Debezium, and as a result the WAL logs are not cleared, which might result in disk space issues. In that case, the user is expected to handle the acknowledgement of the LSN outside of Debezium. retriable.restart.connector.wait.ms 10000 (10 seconds) The number of milliseconds to wait before restarting a connector after a retriable error occurs. skipped.operations t A comma-separated list of operation types that will be skipped during streaming. The operations include: c for inserts/create, u for updates, d for deletes, t for truncates, and none to not skip any operations. By default, truncate operations are skipped. signal.data.collection No default value Fully-qualified name of the data collection that is used to send signals to the connector. Use the following format to specify the collection name: <schemaName> .
<tableName> signal.enabled.channels source List of the signaling channel names that are enabled for the connector. By default, the following channels are available: source kafka file jmx notification.enabled.channels No default List of notification channel names that are enabled for the connector. By default, the following channels are available: sink log jmx incremental.snapshot.chunk.size 1024 The maximum number of rows that the connector fetches and reads into memory during an incremental snapshot chunk. Increasing the chunk size provides greater efficiency, because the snapshot runs fewer snapshot queries of a greater size. However, larger chunk sizes also require more memory to buffer the snapshot data. Adjust the chunk size to a value that provides the best performance in your environment. xmin.fetch.interval.ms 0 How often, in milliseconds, the XMIN is read from the replication slot. The XMIN value provides the lower bound of where a new replication slot could start from. The default value of 0 disables XMIN tracking. topic.naming.strategy io.debezium.schema.SchemaTopicNamingStrategy The name of the TopicNamingStrategy class that should be used to determine the topic name for data change, schema change, transaction, and heartbeat events; defaults to SchemaTopicNamingStrategy . topic.delimiter . Specifies the delimiter for the topic name; defaults to . . topic.cache.size 10000 The size of the bounded concurrent hash map that is used to hold topic names. This cache helps to determine the topic name that corresponds to a given data collection. topic.heartbeat.prefix __debezium-heartbeat Controls the name of the topic to which the connector sends heartbeat messages. The topic name has this pattern: topic.heartbeat.prefix . topic.prefix For example, if the topic prefix is fulfillment , the default topic name is __debezium-heartbeat.fulfillment . topic.transaction transaction Controls the name of the topic to which the connector sends transaction metadata messages. The topic name has this pattern: topic.prefix . topic.transaction For example, if the topic prefix is fulfillment , the default topic name is fulfillment.transaction . snapshot.max.threads 1 Specifies the number of threads that the connector uses when performing an initial snapshot. To enable parallel initial snapshots, set the property to a value greater than 1. In a parallel initial snapshot, the connector processes multiple tables concurrently. Important Parallel initial snapshots is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . errors.max.retries -1 The maximum number of retries on retriable errors (e.g. connection errors) before failing (-1 = no limit, 0 = disabled, > 0 = number of retries). Pass-through connector configuration properties The connector also supports pass-through configuration properties that are used when creating the Kafka producer and consumer. Be sure to consult the Kafka documentation for all of the configuration properties for Kafka producers and consumers.
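For illustration only, the signaling and topic-related properties described above might be combined as follows; the values shown simply restate the documented defaults for a fulfillment topic prefix and reuse the myschema.debezium_signal signaling table from the earlier signaling examples:
"topic.prefix": "fulfillment",
"topic.delimiter": ".",
"topic.heartbeat.prefix": "__debezium-heartbeat",
"topic.transaction": "transaction",
"signal.enabled.channels": "source",
"signal.data.collection": "myschema.debezium_signal"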
The PostgreSQL connector does use the new consumer configuration properties . Debezium connector Kafka signals configuration properties Debezium provides a set of signal.* properties that control how the connector interacts with the Kafka signals topic. The following table describes the Kafka signal properties. Table 8.29. Kafka signals configuration properties Property Default Description signal.kafka.topic <topic.prefix>-signal The name of the Kafka topic that the connector monitors for ad hoc signals. Note If automatic topic creation is disabled, you must manually create the required signaling topic. A signaling topic is required to preserve signal ordering. The signaling topic must have a single partition. signal.kafka.groupId kafka-signal The name of the group ID that is used by Kafka consumers. signal.kafka.bootstrap.servers No default A list of host/port pairs that the connector uses for establishing an initial connection to the Kafka cluster. Each pair references the Kafka cluster that is used by the Debezium Kafka Connect process. signal.kafka.poll.timeout.ms 100 An integer value that specifies the maximum number of milliseconds that the connector waits when polling signals. Debezium connector pass-through signals Kafka consumer client configuration properties The Debezium connector provides for pass-through configuration of the signals Kafka consumer. Pass-through signal properties begin with the prefix signal.consumer.* . For example, the connector passes properties such as signal.consumer.security.protocol=SSL to the Kafka consumer. Debezium strips the prefixes from the properties before it passes the properties to the Kafka signals consumer. Debezium connector sink notifications configuration properties The following table describes the notification properties. Table 8.30. Sink notification configuration properties Property Default Description notification.sink.topic.name No default The name of the topic that receives notifications from Debezium. This property is required when you configure the notification.enabled.channels property to include sink as one of the enabled notification channels. 8.7. Monitoring Debezium PostgreSQL connector performance The Debezium PostgreSQL connector provides two types of metrics that are in addition to the built-in support for JMX metrics that Zookeeper, Kafka, and Kafka Connect provide. Snapshot metrics provide information about connector operation while performing a snapshot. Streaming metrics provide information about connector operation when the connector is capturing changes and streaming change event records. Debezium monitoring documentation provides details about how to expose these metrics by using JMX. 8.7.1. Monitoring Debezium during snapshots of PostgreSQL databases The MBean is debezium.postgres:type=connector-metrics,context=snapshot,server= <topic.prefix> . Snapshot metrics are not exposed unless a snapshot operation is active, or a snapshot has occurred since the last connector start. The following table lists the snapshot metrics that are available. Attributes Type Description LastEvent string The last snapshot event that the connector has read. MilliSecondsSinceLastEvent long The number of milliseconds since the connector has read and processed the most recent event. TotalNumberOfEventsSeen long The total number of events that this connector has seen since it was last started or reset. NumberOfEventsFiltered long The number of events that have been filtered by include/exclude list filtering rules configured on the connector.
CapturedTables string[] The list of tables that are captured by the connector. QueueTotalCapacity int The length of the queue used to pass events between the snapshotter and the main Kafka Connect loop. QueueRemainingCapacity int The free capacity of the queue used to pass events between the snapshotter and the main Kafka Connect loop. TotalTableCount int The total number of tables that are being included in the snapshot. RemainingTableCount int The number of tables that the snapshot has yet to copy. SnapshotRunning boolean Whether the snapshot was started. SnapshotPaused boolean Whether the snapshot was paused. SnapshotAborted boolean Whether the snapshot was aborted. SnapshotCompleted boolean Whether the snapshot completed. SnapshotDurationInSeconds long The total number of seconds that the snapshot has taken so far, even if not complete. This also includes the time during which the snapshot was paused. SnapshotPausedDurationInSeconds long The total number of seconds that the snapshot was paused. If the snapshot was paused several times, the paused time adds up. RowsScanned Map<String, Long> Map containing the number of rows scanned for each table in the snapshot. Tables are incrementally added to the Map during processing. The map is updated after every 10,000 scanned rows and upon completing a table. MaxQueueSizeInBytes long The maximum buffer of the queue in bytes. This metric is available if max.queue.size.in.bytes is set to a positive long value. CurrentQueueSizeInBytes long The current volume, in bytes, of records in the queue. The connector also provides the following additional snapshot metrics when an incremental snapshot is executed: Attributes Type Description ChunkId string The identifier of the current snapshot chunk. ChunkFrom string The lower bound of the primary key set defining the current chunk. ChunkTo string The upper bound of the primary key set defining the current chunk. TableFrom string The lower bound of the primary key set of the currently snapshotted table. TableTo string The upper bound of the primary key set of the currently snapshotted table. 8.7.2. Monitoring Debezium PostgreSQL connector record streaming The MBean is debezium.postgres:type=connector-metrics,context=streaming,server= <topic.prefix> . The following table lists the streaming metrics that are available. Attributes Type Description LastEvent string The last streaming event that the connector has read. MilliSecondsSinceLastEvent long The number of milliseconds since the connector has read and processed the most recent event. TotalNumberOfEventsSeen long The total number of events that this connector has seen since the last start or metrics reset. TotalNumberOfCreateEventsSeen long The total number of create events that this connector has seen since the last start or metrics reset. TotalNumberOfUpdateEventsSeen long The total number of update events that this connector has seen since the last start or metrics reset. TotalNumberOfDeleteEventsSeen long The total number of delete events that this connector has seen since the last start or metrics reset. NumberOfEventsFiltered long The number of events that have been filtered by include/exclude list filtering rules configured on the connector. CapturedTables string[] The list of tables that are captured by the connector. QueueTotalCapacity int The length of the queue used to pass events between the streamer and the main Kafka Connect loop. QueueRemainingCapacity int The free capacity of the queue used to pass events between the streamer and the main Kafka Connect loop.
Connected boolean Flag that denotes whether the connector is currently connected to the database server. MilliSecondsBehindSource long The number of milliseconds between the last change event's timestamp and the connector processing it. The values will incorporate any differences between the clocks on the machines where the database server and the connector are running. NumberOfCommittedTransactions long The number of processed transactions that were committed. SourceEventPosition Map<String, String> The coordinates of the last received event. LastTransactionId string Transaction identifier of the last processed transaction. MaxQueueSizeInBytes long The maximum buffer of the queue in bytes. This metric is available if max.queue.size.in.bytes is set to a positive long value. CurrentQueueSizeInBytes long The current volume, in bytes, of records in the queue. 8.8. How Debezium PostgreSQL connectors handle faults and problems Debezium is a distributed system that captures all changes in multiple upstream databases; it never misses or loses an event. When the system is operating normally or being managed carefully, Debezium provides exactly once delivery of every change event record. If a fault does happen, the system does not lose any events. However, while it is recovering from the fault, it might repeat some change events. In these abnormal situations, Debezium, like Kafka, provides at least once delivery of change events. Details are in the following sections: Configuration and startup errors PostgreSQL becomes unavailable Cluster failures Kafka Connect process stops gracefully Kafka Connect process crashes Kafka becomes unavailable Connector is stopped for a duration Configuration and startup errors In the following situations, the connector fails when trying to start, reports an error or exception in the log, and stops running: The connector's configuration is invalid. The connector cannot successfully connect to PostgreSQL by using the specified connection parameters. The connector is restarting from a previously-recorded position in the PostgreSQL WAL (by using the LSN) and PostgreSQL no longer has that history available. In these cases, the error message has details about the problem and possibly a suggested workaround. After you correct the configuration or address the PostgreSQL problem, restart the connector. PostgreSQL becomes unavailable When the connector is running, the PostgreSQL server that it is connected to could become unavailable for any number of reasons. If this happens, the connector fails with an error and stops. When the server is available again, restart the connector. The PostgreSQL connector externally stores the last processed offset in the form of a PostgreSQL LSN. After a connector restarts and connects to a server instance, the connector communicates with the server to continue streaming from that particular offset. This offset is available as long as the Debezium replication slot remains intact. Never drop a replication slot on the primary server or you will lose data. For information about failure cases in which a slot has been removed, see the following section. Cluster failures As of release 12, PostgreSQL allows logical replication slots only on primary servers . This means that you can point a Debezium PostgreSQL connector to only the active primary server of a database cluster. Also, replication slots themselves are not propagated to replicas. If the primary server goes down, a new primary must be promoted.
Note Some managed PostgreSQL services (AWS RDS and GCP CloudSQL for example) implement replication to a standby via disk replication. This means that the replication slot does get replicated and will remain available after a failover. The new primary must have a replication slot that is configured for use by the pgoutput plug-in and the database in which you want to capture changes. Only then can you point the connector to the new server and restart the connector. There are important caveats when failovers occur, and you should pause Debezium until you can verify that you have an intact replication slot that has not lost data. After a failover: There must be a process that re-creates the Debezium replication slot before allowing the application to write to the new primary. This is crucial. Without this process, your application can miss change events. You might need to verify that Debezium was able to read all changes in the slot before the old primary failed . One reliable method of recovering and verifying whether any changes were lost is to recover a backup of the failed primary to the point immediately before it failed. While this can be administratively difficult, it allows you to inspect the replication slot for any unconsumed changes. Kafka Connect process stops gracefully Suppose that Kafka Connect is being run in distributed mode and a Kafka Connect process is stopped gracefully. Prior to shutting down that process, Kafka Connect migrates the process's connector tasks to another Kafka Connect process in that group. The new connector tasks start processing exactly where the prior tasks stopped. There is a short delay in processing while the connector tasks are stopped gracefully and restarted on the new processes. Kafka Connect process crashes If the Kafka Connect process stops unexpectedly, any connector tasks it was running terminate without recording their most recently processed offsets. When Kafka Connect is being run in distributed mode, Kafka Connect restarts those connector tasks on other processes. However, PostgreSQL connectors resume from the last offset that was recorded by the earlier processes. This means that the new replacement tasks might generate some of the same change events that were processed just prior to the crash. The number of duplicate events depends on the offset flush period and the volume of data changes just before the crash. Because there is a chance that some events might be duplicated during a recovery from failure, consumers should always anticipate some duplicate events. Debezium changes are idempotent, so a sequence of events always results in the same state. In each change event record, Debezium connectors insert source-specific information about the origin of the event, including the PostgreSQL server's time of the event, the ID of the server transaction, and the position in the write-ahead log where the transaction changes were written. Consumers can keep track of this information, especially the LSN, to determine whether an event is a duplicate. Kafka becomes unavailable As the connector generates change events, the Kafka Connect framework records those events in Kafka by using the Kafka producer API. Periodically, at a frequency that you specify in the Kafka Connect configuration, Kafka Connect records the latest offset that appears in those change events. If the Kafka brokers become unavailable, the Kafka Connect process that is running the connectors repeatedly tries to reconnect to the Kafka brokers.
In other words, the connector tasks pause until a connection can be re-established, at which point the connectors resume exactly where they left off. Connector is stopped for a duration If the connector is gracefully stopped, the database can continue to be used. Any changes are recorded in the PostgreSQL WAL. When the connector restarts, it resumes streaming changes where it left off. That is, it generates change event records for all database changes that were made while the connector was stopped. A properly configured Kafka cluster is able to handle massive throughput. Kafka Connect is written according to Kafka best practices, and given enough resources a Kafka Connect connector can also handle very large numbers of database change events. Because of this, after being stopped for a while, when a Debezium connector restarts, it is very likely to catch up with the database changes that were made while it was stopped. How quickly this happens depends on the capabilities and performance of Kafka and the volume of changes being made to the data in PostgreSQL. | [
"INSERT INTO <signalTable> (id, type, data) VALUES ( '<id>' , '<snapshotType>' , '{\"data-collections\": [\" <tableName> \",\" <tableName> \"],\"type\":\" <snapshotType> \",\"additional-condition\":\" <additional-condition> \"}');",
"INSERT INTO myschema.debezium_signal (id, type, data) 1 values ('ad-hoc-1', 2 'execute-snapshot', 3 '{\"data-collections\": [\"schema1.table1\", \"schema2.table2\"], 4 \"type\":\"incremental\"}, 5 \"additional-condition\":\"color=blue\"}'); 6",
"SELECT * FROM <tableName> .",
"SELECT * FROM <tableName> WHERE <additional-condition> .",
"INSERT INTO <signalTable> (id, type, data) VALUES ( '<id>' , '<snapshotType>' , '{\"data-collections\": [\" <tableName> \",\" <tableName> \"],\"type\":\" <snapshotType> \",\"additional-condition\":\" <additional-condition> \"}');",
"INSERT INTO myschema.debezium_signal (id, type, data) VALUES('ad-hoc-1', 'execute-snapshot', '{\"data-collections\": [\"schema1.products\"],\"type\":\"incremental\", \"additional-condition\":\"color=blue\"}');",
"INSERT INTO myschema.debezium_signal (id, type, data) VALUES('ad-hoc-1', 'execute-snapshot', '{\"data-collections\": [\"schema1.products\"],\"type\":\"incremental\", \"additional-condition\":\"color=blue AND quantity>10\"}');",
"{ \"before\":null, \"after\": { \"pk\":\"1\", \"value\":\"New data\" }, \"source\": { \"snapshot\":\"incremental\" 1 }, \"op\":\"r\", 2 \"ts_ms\":\"1620393591654\", \"transaction\":null }",
"Key = `test_connector` Value = `{\"type\":\"execute-snapshot\",\"data\": {\"data-collections\": [\"schema1.table1\", \"schema1.table2\"], \"type\": \"INCREMENTAL\"}}`",
"Key = `test_connector` Value = `{\"type\":\"execute-snapshot\",\"data\": {\"data-collections\": [\"schema1.products\"], \"type\": \"INCREMENTAL\", \"additional-condition\":\"color='blue'\"}}`",
"Key = `test_connector` Value = `{\"type\":\"execute-snapshot\",\"data\": {\"data-collections\": [\"schema1.products\"], \"type\": \"INCREMENTAL\", \"additional-condition\":\"color='blue' AND brand='MyBrand'\"}}`",
"INSERT INTO <signalTable> (id, type, data) values ( '<id>' , 'stop-snapshot', '{\"data-collections\": [\" <tableName> \",\" <tableName> \"],\"type\":\"incremental\"}');",
"INSERT INTO myschema.debezium_signal (id, type, data) 1 values ('ad-hoc-1', 2 'stop-snapshot', 3 '{\"data-collections\": [\"schema1.table1\", \"schema2.table2\"], 4 \"type\":\"incremental\"}'); 5",
"Key = `test_connector` Value = `{\"type\":\"stop-snapshot\",\"data\": {\"data-collections\": [\"schema1.table1\", \"schema1.table2\"], \"type\": \"INCREMENTAL\"}}`",
"{ \"status\": \"BEGIN\", \"id\": \"571:53195829\", \"ts_ms\": 1486500577125, \"event_count\": null, \"data_collections\": null } { \"status\": \"END\", \"id\": \"571:53195832\", \"ts_ms\": 1486500577691, \"event_count\": 2, \"data_collections\": [ { \"data_collection\": \"s1.a\", \"event_count\": 1 }, { \"data_collection\": \"s2.a\", \"event_count\": 1 } ] }",
"{ \"before\": null, \"after\": { \"pk\": \"2\", \"aa\": \"1\" }, \"source\": { }, \"op\": \"c\", \"ts_ms\": \"1580390884335\", \"transaction\": { \"id\": \"571:53195832\", \"total_order\": \"1\", \"data_collection_order\": \"1\" } }",
"{ \"schema\": { 1 }, \"payload\": { 2 }, \"schema\": { 3 }, \"payload\": { 4 }, }",
"CREATE TABLE customers ( id SERIAL, first_name VARCHAR(255) NOT NULL, last_name VARCHAR(255) NOT NULL, email VARCHAR(255) NOT NULL, PRIMARY KEY(id) );",
"{ \"schema\": { 1 \"type\": \"struct\", \"name\": \"PostgreSQL_server.public.customers.Key\", 2 \"optional\": false, 3 \"fields\": [ 4 { \"name\": \"id\", \"index\": \"0\", \"schema\": { \"type\": \"INT32\", \"optional\": \"false\" } } ] }, \"payload\": { 5 \"id\": \"1\" }, }",
"CREATE TABLE customers ( id SERIAL, first_name VARCHAR(255) NOT NULL, last_name VARCHAR(255) NOT NULL, email VARCHAR(255) NOT NULL, PRIMARY KEY(id) );",
"{ \"schema\": { 1 \"type\": \"struct\", \"fields\": [ { \"type\": \"struct\", \"fields\": [ { \"type\": \"int32\", \"optional\": false, \"field\": \"id\" }, { \"type\": \"string\", \"optional\": false, \"field\": \"first_name\" }, { \"type\": \"string\", \"optional\": false, \"field\": \"last_name\" }, { \"type\": \"string\", \"optional\": false, \"field\": \"email\" } ], \"optional\": true, \"name\": \"PostgreSQL_server.inventory.customers.Value\", 2 \"field\": \"before\" }, { \"type\": \"struct\", \"fields\": [ { \"type\": \"int32\", \"optional\": false, \"field\": \"id\" }, { \"type\": \"string\", \"optional\": false, \"field\": \"first_name\" }, { \"type\": \"string\", \"optional\": false, \"field\": \"last_name\" }, { \"type\": \"string\", \"optional\": false, \"field\": \"email\" } ], \"optional\": true, \"name\": \"PostgreSQL_server.inventory.customers.Value\", \"field\": \"after\" }, { \"type\": \"struct\", \"fields\": [ { \"type\": \"string\", \"optional\": false, \"field\": \"version\" }, { \"type\": \"string\", \"optional\": false, \"field\": \"connector\" }, { \"type\": \"string\", \"optional\": false, \"field\": \"name\" }, { \"type\": \"int64\", \"optional\": false, \"field\": \"ts_ms\" }, { \"type\": \"boolean\", \"optional\": true, \"default\": false, \"field\": \"snapshot\" }, { \"type\": \"string\", \"optional\": false, \"field\": \"db\" }, { \"type\": \"string\", \"optional\": false, \"field\": \"schema\" }, { \"type\": \"string\", \"optional\": false, \"field\": \"table\" }, { \"type\": \"int64\", \"optional\": true, \"field\": \"txId\" }, { \"type\": \"int64\", \"optional\": true, \"field\": \"lsn\" }, { \"type\": \"int64\", \"optional\": true, \"field\": \"xmin\" } ], \"optional\": false, \"name\": \"io.debezium.connector.postgresql.Source\", 3 \"field\": \"source\" }, { \"type\": \"string\", \"optional\": false, \"field\": \"op\" }, { \"type\": \"int64\", \"optional\": true, \"field\": \"ts_ms\" } ], \"optional\": false, \"name\": \"PostgreSQL_server.inventory.customers.Envelope\" 4 }, \"payload\": { 5 \"before\": null, 6 \"after\": { 7 \"id\": 1, \"first_name\": \"Anne\", \"last_name\": \"Kretchmar\", \"email\": \"[email protected]\" }, \"source\": { 8 \"version\": \"2.3.4.Final\", \"connector\": \"postgresql\", \"name\": \"PostgreSQL_server\", \"ts_ms\": 1559033904863, \"snapshot\": true, \"db\": \"postgres\", \"sequence\": \"[\\\"24023119\\\",\\\"24023128\\\"]\", \"schema\": \"public\", \"table\": \"customers\", \"txId\": 555, \"lsn\": 24023128, \"xmin\": null }, \"op\": \"c\", 9 \"ts_ms\": 1559033904863 10 } }",
"{ \"schema\": { ... }, \"payload\": { \"before\": { 1 \"id\": 1 }, \"after\": { 2 \"id\": 1, \"first_name\": \"Anne Marie\", \"last_name\": \"Kretchmar\", \"email\": \"[email protected]\" }, \"source\": { 3 \"version\": \"2.3.4.Final\", \"connector\": \"postgresql\", \"name\": \"PostgreSQL_server\", \"ts_ms\": 1559033904863, \"snapshot\": false, \"db\": \"postgres\", \"schema\": \"public\", \"table\": \"customers\", \"txId\": 556, \"lsn\": 24023128, \"xmin\": null }, \"op\": \"u\", 4 \"ts_ms\": 1465584025523 5 } }",
"{ \"schema\": { ... }, \"payload\": { \"before\": { 1 \"id\": 1 }, \"after\": null, 2 \"source\": { 3 \"version\": \"2.3.4.Final\", \"connector\": \"postgresql\", \"name\": \"PostgreSQL_server\", \"ts_ms\": 1559033904863, \"snapshot\": false, \"db\": \"postgres\", \"schema\": \"public\", \"table\": \"customers\", \"txId\": 556, \"lsn\": 46523128, \"xmin\": null }, \"op\": \"d\", 4 \"ts_ms\": 1465581902461 5 } }",
"{ \"schema\": { ... }, \"payload\": { \"source\": { 1 \"version\": \"2.3.4.Final\", \"connector\": \"postgresql\", \"name\": \"PostgreSQL_server\", \"ts_ms\": 1559033904863, \"snapshot\": false, \"db\": \"postgres\", \"schema\": \"public\", \"table\": \"customers\", \"txId\": 556, \"lsn\": 46523128, \"xmin\": null }, \"op\": \"t\", 2 \"ts_ms\": 1559033904961 3 } }",
"{ \"schema\": { ... }, \"payload\": { \"source\": { 1 \"version\": \"2.3.4.Final\", \"connector\": \"postgresql\", \"name\": \"PostgreSQL_server\", \"ts_ms\": 1559033904863, \"snapshot\": false, \"db\": \"postgres\", \"schema\": \"\", \"table\": \"\", \"txId\": 556, \"lsn\": 46523128, \"xmin\": null }, \"op\": \"m\", 2 \"ts_ms\": 1559033904961, 3 \"message\": { 4 \"prefix\": \"foo\", \"content\": \"Ymfy\" } } }",
"{ \"schema\": { ... }, \"payload\": { \"source\": { 1 \"version\": \"2.3.4.Final\", \"connector\": \"postgresql\", \"name\": \"PostgreSQL_server\", \"ts_ms\": 1559033904863, \"snapshot\": false, \"db\": \"postgres\", \"schema\": \"\", \"table\": \"\", \"lsn\": 46523128, \"xmin\": null }, \"op\": \"m\", 2 \"ts_ms\": 1559033904961 3 \"message\": { 4 \"prefix\": \"foo\", \"content\": \"Ymfy\" } }",
"wal_level=logical max_wal_senders=1 max_replication_slots=1",
"CREATE ROLE <name> REPLICATION LOGIN;",
"CREATE ROLE <replication_group> ;",
"GRANT REPLICATION_GROUP TO <original_owner> ;",
"GRANT REPLICATION_GROUP TO <replication_user> ;",
"ALTER TABLE <table_name> OWNER TO REPLICATION_GROUP;",
"local replication <youruser> trust 1 host replication <youruser> 127.0.0.1/32 trust 2 host replication <youruser> ::1/128 trust 3",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: debezium-kafka-connect-cluster annotations: strimzi.io/use-connector-resources: \"true\" 1 spec: version: 3.5.0 build: 2 output: 3 type: imagestream 4 image: debezium-streams-connect:latest plugins: 5 - name: debezium-connector-postgres artifacts: - type: zip 6 url: https://maven.repository.redhat.com/ga/io/debezium/debezium-connector-postgres/2.3.4.Final-redhat-00001/debezium-connector-postgres-2.3.4.Final-redhat-00001-plugin.zip 7 - type: zip url: https://maven.repository.redhat.com/ga/io/apicurio/apicurio-registry-distro-connect-converter/2.4.4.Final-redhat- <build-number> /apicurio-registry-distro-connect-converter-2.4.4.Final-redhat- <build-number> .zip 8 - type: zip url: https://maven.repository.redhat.com/ga/io/debezium/debezium-scripting/2.3.4.Final-redhat-00001/debezium-scripting-2.3.4.Final-redhat-00001.zip 9 - type: jar url: https://repo1.maven.org/maven2/org/codehaus/groovy/groovy/3.0.11/groovy-3.0.11.jar 10 - type: jar url: https://repo1.maven.org/maven2/org/codehaus/groovy/groovy-jsr223/3.0.11/groovy-jsr223-3.0.11.jar - type: jar url: https://repo1.maven.org/maven2/org/codehaus/groovy/groovy-json3.0.11/groovy-json-3.0.11.jar bootstrapServers: debezium-kafka-cluster-kafka-bootstrap:9093",
"create -f dbz-connect.yaml",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: labels: strimzi.io/cluster: debezium-kafka-connect-cluster name: inventory-connector-postgresql 1 spec: class: io.debezium.connector.postgresql.PostgresConnector 2 tasksMax: 1 3 config: 4 database.hostname: postgresql.debezium-postgresql.svc.cluster.local 5 database.port: 5432 6 database.user: debezium 7 database.password: dbz 8 database.dbname: mydatabase 9 topic.prefix: inventory-connector-postgresql 10 table.include.list: public.inventory 11",
"create -n <namespace> -f <kafkaConnector> .yaml",
"create -n debezium -f {context}-inventory-connector.yaml",
"cat <<EOF >debezium-container-for-postgresql.yaml 1 FROM registry.redhat.io/amq-streams-kafka-35-rhel8:2.5.0 USER root:root RUN mkdir -p /opt/kafka/plugins/debezium 2 RUN cd /opt/kafka/plugins/debezium/ && curl -O https://maven.repository.redhat.com/ga/io/debezium/debezium-connector-postgres/2.3.4.Final-redhat-00001/debezium-connector-postgres-2.3.4.Final-redhat-00001-plugin.zip && unzip debezium-connector-postgres-2.3.4.Final-redhat-00001-plugin.zip && rm debezium-connector-postgres-2.3.4.Final-redhat-00001-plugin.zip RUN cd /opt/kafka/plugins/debezium/ USER 1001 EOF",
"build -t debezium-container-for-postgresql:latest .",
"docker build -t debezium-container-for-postgresql:latest .",
"push <myregistry.io> /debezium-container-for-postgresql:latest",
"docker push <myregistry.io> /debezium-container-for-postgresql:latest",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster annotations: strimzi.io/use-connector-resources: \"true\" 1 spec: image: debezium-container-for-postgresql 2",
"create -f dbz-connect.yaml",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: inventory-connector-postgresql 1 labels: strimzi.io/cluster: my-connect-cluster spec: class: io.debezium.connector.postgresql.PostgresConnector tasksMax: 1 2 config: 3 database.hostname: 192.168.99.100 4 database.port: 5432 database.user: debezium database.password: dbz database.dbname: sampledb topic.prefix: inventory-connector-postgresql 5 schema.include.list: public 6 plugin.name: pgoutput 7",
"apply -f inventory-connector.yaml",
"describe KafkaConnector <connector-name> -n <project>",
"describe KafkaConnector inventory-connector-postgresql -n debezium",
"Name: inventory-connector-postgresql Namespace: debezium Labels: strimzi.io/cluster=debezium-kafka-connect-cluster Annotations: <none> API Version: kafka.strimzi.io/v1beta2 Kind: KafkaConnector Status: Conditions: Last Transition Time: 2021-12-08T17:41:34.897153Z Status: True Type: Ready Connector Status: Connector: State: RUNNING worker_id: 10.131.1.124:8083 Name: inventory-connector-postgresql Tasks: Id: 0 State: RUNNING worker_id: 10.131.1.124:8083 Type: source Observed Generation: 1 Tasks Max: 1 Topics: inventory-connector-postgresql.inventory inventory-connector-postgresql.inventory.addresses inventory-connector-postgresql.inventory.customers inventory-connector-postgresql.inventory.geom inventory-connector-postgresql.inventory.orders inventory-connector-postgresql.inventory.products inventory-connector-postgresql.inventory.products_on_hand Events: <none>",
"get kafkatopics",
"NAME CLUSTER PARTITIONS REPLICATION FACTOR READY connect-cluster-configs debezium-kafka-cluster 1 1 True connect-cluster-offsets debezium-kafka-cluster 25 1 True connect-cluster-status debezium-kafka-cluster 5 1 True consumer-offsets---84e7a678d08f4bd226872e5cdd4eb527fadc1c6a debezium-kafka-cluster 50 1 True inventory-connector-postgresql--a96f69b23d6118ff415f772679da623fbbb99421 debezium-kafka-cluster 1 1 True inventory-connector-postgresql.inventory.addresses---1b6beaf7b2eb57d177d92be90ca2b210c9a56480 debezium-kafka-cluster 1 1 True inventory-connector-postgresql.inventory.customers---9931e04ec92ecc0924f4406af3fdace7545c483b debezium-kafka-cluster 1 1 True inventory-connector-postgresql.inventory.geom---9f7e136091f071bf49ca59bf99e86c713ee58dd5 debezium-kafka-cluster 1 1 True inventory-connector-postgresql.inventory.orders---ac5e98ac6a5d91e04d8ec0dc9078a1ece439081d debezium-kafka-cluster 1 1 True inventory-connector-postgresql.inventory.products---df0746db116844cee2297fab611c21b56f82dcef debezium-kafka-cluster 1 1 True inventory-connector-postgresql.inventory.products_on_hand---8649e0f17ffcc9212e266e31a7aeea4585e5c6b5 debezium-kafka-cluster 1 1 True schema-changes.inventory debezium-kafka-cluster 1 1 True strimzi-store-topic---effb8e3e057afce1ecf67c3f5d8e4e3ff177fc55 debezium-kafka-cluster 1 1 True strimzi-topic-operator-kstreams-topic-store-changelog---b75e702040b99be8a9263134de3507fc0cc4017b debezium-kafka-cluster 1 1 True",
"exec -n <project> -it <kafka-cluster> -- /opt/kafka/bin/kafka-console-consumer.sh > --bootstrap-server localhost:9092 > --from-beginning > --property print.key=true > --topic= <topic-name >",
"exec -n debezium -it debezium-kafka-cluster-kafka-0 -- /opt/kafka/bin/kafka-console-consumer.sh > --bootstrap-server localhost:9092 > --from-beginning > --property print.key=true > --topic=inventory-connector-postgresql.inventory.products_on_hand",
"{\"schema\":{\"type\":\"struct\",\"fields\":[{\"type\":\"int32\",\"optional\":false,\"field\":\"product_id\"}],\"optional\":false,\"name\":\"inventory-connector-postgresql.inventory.products_on_hand.Key\"},\"payload\":{\"product_id\":101}} {\"schema\":{\"type\":\"struct\",\"fields\":[{\"type\":\"struct\",\"fields\":[{\"type\":\"int32\",\"optional\":false,\"field\":\"product_id\"},{\"type\":\"int32\",\"optional\":false,\"field\":\"quantity\"}],\"optional\":true,\"name\":\"inventory-connector-postgresql.inventory.products_on_hand.Value\",\"field\":\"before\"},{\"type\":\"struct\",\"fields\":[{\"type\":\"int32\",\"optional\":false,\"field\":\"product_id\"},{\"type\":\"int32\",\"optional\":false,\"field\":\"quantity\"}],\"optional\":true,\"name\":\"inventory-connector-postgresql.inventory.products_on_hand.Value\",\"field\":\"after\"},{\"type\":\"struct\",\"fields\":[{\"type\":\"string\",\"optional\":false,\"field\":\"version\"},{\"type\":\"string\",\"optional\":false,\"field\":\"connector\"},{\"type\":\"string\",\"optional\":false,\"field\":\"name\"},{\"type\":\"int64\",\"optional\":false,\"field\":\"ts_ms\"},{\"type\":\"string\",\"optional\":true,\"name\":\"io.debezium.data.Enum\",\"version\":1,\"parameters\":{\"allowed\":\"true,last,false\"},\"default\":\"false\",\"field\":\"snapshot\"},{\"type\":\"string\",\"optional\":false,\"field\":\"db\"},{\"type\":\"string\",\"optional\":true,\"field\":\"sequence\"},{\"type\":\"string\",\"optional\":true,\"field\":\"table\"},{\"type\":\"int64\",\"optional\":false,\"field\":\"server_id\"},{\"type\":\"string\",\"optional\":true,\"field\":\"gtid\"},{\"type\":\"string\",\"optional\":false,\"field\":\"file\"},{\"type\":\"int64\",\"optional\":false,\"field\":\"pos\"},{\"type\":\"int32\",\"optional\":false,\"field\":\"row\"},{\"type\":\"int64\",\"optional\":true,\"field\":\"thread\"},{\"type\":\"string\",\"optional\":true,\"field\":\"query\"}],\"optional\":false,\"name\":\"io.debezium.connector.postgresql.Source\",\"field\":\"source\"},{\"type\":\"string\",\"optional\":false,\"field\":\"op\"},{\"type\":\"int64\",\"optional\":true,\"field\":\"ts_ms\"},{\"type\":\"struct\",\"fields\":[{\"type\":\"string\",\"optional\":false,\"field\":\"id\"},{\"type\":\"int64\",\"optional\":false,\"field\":\"total_order\"},{\"type\":\"int64\",\"optional\":false,\"field\":\"data_collection_order\"}],\"optional\":true,\"field\":\"transaction\"}],\"optional\":false,\"name\": \"inventory-connector-postgresql.inventory.products_on_hand.Envelope\" }, \"payload\" :{ \"before\" : null , \"after\" :{ \"product_id\":101,\"quantity\":3 },\"source\":{\"version\":\"2.3.4.Final-redhat-00001\",\"connector\":\"postgresql\",\"name\":\"inventory-connector-postgresql\",\"ts_ms\":1638985247805,\"snapshot\":\"true\",\"db\":\"inventory\",\"sequence\":null,\"table\":\"products_on_hand\",\"server_id\":0,\"gtid\":null,\"file\":\"postgresql-bin.000003\",\"pos\":156,\"row\":0,\"thread\":null,\"query\":null}, \"op\" : \"r\" ,\"ts_ms\":1638985247805,\"transaction\":null}}",
"column.mask.hash.SHA-256.with.salt.CzQMA0cB5K = inventory.orders.customerName, inventory.shipment.customerName",
"schema1.*:FULL,schema2.table2:NOTHING,schema2.table3:INDEX idx_name",
"isbn.type: io.debezium.test.IsbnConverter",
"isbn.schema.name: io.debezium.postgresql.type.Isbn",
"\"snapshot.select.statement.overrides\": \"customer.orders\", \"snapshot.select.statement.overrides.customer.orders\": \"SELECT * FROM [customers].[orders] WHERE delete_flag = 0 ORDER BY id DESC\""
]
| https://docs.redhat.com/en/documentation/red_hat_integration/2023.q4/html/debezium_user_guide/debezium-connector-for-postgresql |
Chapter 3. Configuring Fencing with Conga | Chapter 3. Configuring Fencing with Conga This chapter describes how to configure fencing in Red Hat High Availability Add-On using Conga . Note Conga is a graphical user interface that you can use to administer the Red Hat High Availability Add-On. Note, however, that in order to use this interface effectively you need to have a good and clear understanding of the underlying concepts. Learning about cluster configuration by exploring the available features in the user interface is not recommended, as it may result in a system that is not robust enough to keep all services running when components fail. 3.1. Configuring Fence Daemon Properties Clicking on the Fence Daemon tab displays the Fence Daemon Properties page, which provides an interface for configuring Post Fail Delay and Post Join Delay . The values you configure for these parameters are general fencing properties for the cluster. To configure specific fence devices for the nodes of the cluster, use the Fence Devices menu item of the cluster display, as described in Section 3.2, "Configuring Fence Devices" . The Post Fail Delay parameter is the number of seconds the fence daemon ( fenced ) waits before fencing a node (a member of the fence domain) after the node has failed. The Post Fail Delay default value is 0 . Its value may be varied to suit cluster and network performance. The Post Join Delay parameter is the number of seconds the fence daemon ( fenced ) waits before fencing a node after the node joins the fence domain. The Post Join Delay default value is 6 . A typical setting for Post Join Delay is between 20 and 30 seconds, but can vary according to cluster and network performance. Enter the values required and click Apply for changes to take effect. Note For more information about Post Join Delay and Post Fail Delay , see the fenced (8) man page. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/fence_configuration_guide/ch-config-conga-ca
Builds using BuildConfig | Builds using BuildConfig OpenShift Container Platform 4.17 Builds Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/builds_using_buildconfig/index |
6.2. Debugging P2V conversions | 6.2. Debugging P2V conversions Problems encountered during P2V conversion can be more easily explained to engineers or support services if debugging messages are enabled when running virt-p2v . P2V debugging is available in Red Hat Enterprise Linux 6.5 and above. To enable P2V debugging, select the Enable server-side debugging check box on the convert screen in the virt-p2v client before clicking the Convert button. This instructs the server to write LIBGUESTFS_TRACE and LIBGUESTFS_DEBUG output during the virt-p2v conversion process. Refer to Chapter 5, Converting physical machines to virtual machines for instructions on using virt-p2v . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/v2v_guide/sect-p2v_debug |
Chapter 21. Introducing distributed tracing | Chapter 21. Introducing distributed tracing Distributed tracing tracks the progress of transactions between applications in a distributed system. In a microservices architecture, tracing tracks the progress of transactions between services. Trace data is useful for monitoring application performance and investigating issues with target systems and end-user applications. In AMQ Streams, tracing facilitates the end-to-end tracking of messages: from source systems to Kafka, and then from Kafka to target systems and applications. Distributed tracing complements the monitoring of metrics in Grafana dashboards, as well as the component loggers. Support for tracing is built in to the following Kafka components: MirrorMaker to trace messages from a source cluster to a target cluster Kafka Connect to trace messages consumed and produced by Kafka Connect Kafka Bridge to trace messages between Kafka and HTTP client applications Tracing is not supported for Kafka brokers. You enable and configure tracing for these components through their custom resources. You add tracing configuration using spec.template properties. You enable tracing by specifying a tracing type using the spec.tracing.type property: opentelemetry Specify type: opentelemetry to use OpenTelemetry. By default, OpenTelemetry uses the OTLP (OpenTelemetry Protocol) exporter and endpoint to get trace data. You can specify other tracing systems supported by OpenTelemetry, including Jaeger tracing. To do this, you change the OpenTelemetry exporter and endpoint in the tracing configuration. jaeger Specify type: jaeger to use OpenTracing and the Jaeger client to get trace data. Note Support for type: jaeger tracing is deprecated. The Jaeger clients are now retired and the OpenTracing project archived. As such, we cannot guarantee their support for future Kafka versions. If possible, we will maintain the support for type: jaeger tracing until June 2023 and remove it afterwards. Please migrate to OpenTelemetry as soon as possible. 21.1. Tracing options Use OpenTelemetry or OpenTracing (deprecated) with the Jaeger tracing system. OpenTelemetry and OpenTracing provide API specifications that are independent from the tracing or monitoring system. You use the APIs to instrument application code for tracing. Instrumented applications generate traces for individual requests across the distributed system. Traces are composed of spans that define specific units of work over time. Jaeger is a tracing system for microservices-based distributed systems. Jaeger implements the tracing APIs and provides client libraries for instrumentation. The Jaeger user interface allows you to query, filter, and analyze trace data. The Jaeger user interface showing a simple query Additional resources Jaeger documentation OpenTelemetry documentation OpenTracing documentation 21.2. Environment variables for tracing Use environment variables when you are enabling tracing for Kafka components or initializing a tracer for Kafka clients. Tracing environment variables are subject to change. For the latest information, see the OpenTelemetry documentation and OpenTracing documentation . The following tables describe the key environment variables for setting up a tracer. Table 21.1. OpenTelemetry environment variables Property Required Description OTEL_SERVICE_NAME Yes The name of the Jaeger tracing service for OpenTelemetry. OTEL_EXPORTER_JAEGER_ENDPOINT Yes The endpoint of the Jaeger exporter used for tracing.
OTEL_TRACES_EXPORTER Yes The exporter used for tracing. Set to otlp by default. If using Jaeger tracing, you need to set this environment variable as jaeger . If you are using another tracing implementation, specify the exporter used . Table 21.2. OpenTracing environment variables Property Required Description JAEGER_SERVICE_NAME Yes The name of the Jaeger tracer service. JAEGER_AGENT_HOST No The hostname for communicating with the jaeger-agent through the User Datagram Protocol (UDP). JAEGER_AGENT_PORT No The port used for communicating with the jaeger-agent through UDP. 21.3. Setting up distributed tracing Enable distributed tracing in Kafka components by specifying a tracing type in the custom resource. Instrument tracers in Kafka clients for end-to-end tracking of messages. To set up distributed tracing, follow these procedures in order: Enable tracing for MirrorMaker, Kafka Connect, and the Kafka Bridge Set up tracing for clients: Initialize a Jaeger tracer for Kafka clients Instrument clients with tracers: Instrument producers and consumers for tracing Instrument Kafka Streams applications for tracing 21.3.1. Prerequisites Before setting up distributed tracing, make sure Jaeger backend components are deployed to your OpenShift cluster. We recommend using the Jaeger operator for deploying Jaeger on your OpenShift cluster. For deployment instructions, see the Jaeger documentation . Note Setting up tracing for applications and systems beyond AMQ Streams is outside the scope of this content. 21.3.2. Enabling tracing in MirrorMaker, Kafka Connect, and Kafka Bridge resources Distributed tracing is supported for MirrorMaker, MirrorMaker 2, Kafka Connect, and the AMQ Streams Kafka Bridge. Configure the custom resource of the component to specify and enable a tracer service. Enabling tracing in a resource triggers the following events: Interceptor classes are updated in the integrated consumers and producers of the component. For MirrorMaker, MirrorMaker 2, and Kafka Connect, the tracing agent initializes a tracer based on the tracing configuration defined in the resource. For the Kafka Bridge, a tracer based on the tracing configuration defined in the resource is initialized by the Kafka Bridge itself. You can enable tracing that uses OpenTelemetry or OpenTracing. Tracing in MirrorMaker and MirrorMaker 2 For MirrorMaker and MirrorMaker 2, messages are traced from the source cluster to the target cluster. The trace data records messages entering and leaving the MirrorMaker or MirrorMaker 2 component. Tracing in Kafka Connect For Kafka Connect, only messages produced and consumed by Kafka Connect are traced. To trace messages sent between Kafka Connect and external systems, you must configure tracing in the connectors for those systems. Tracing in the Kafka Bridge For the Kafka Bridge, messages produced and consumed by the Kafka Bridge are traced. Incoming HTTP requests from client applications to send and receive messages through the Kafka Bridge are also traced. To have end-to-end tracing, you must configure tracing in your HTTP clients. Procedure Perform these steps for each KafkaMirrorMaker , KafkaMirrorMaker2 , KafkaConnect , and KafkaBridge resource. In the spec.template property, configure the tracer service. Use the tracing environment variables as template configuration properties. For OpenTelemetry, set the spec.tracing.type property to opentelemetry . For OpenTracing, set the spec.tracing.type property to jaeger . 
Example tracing configuration for Kafka Connect using OpenTelemetry apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: #... template: connectContainer: env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: "http://otlp-host:4317" tracing: type: opentelemetry #... Example tracing configuration for MirrorMaker using OpenTelemetry apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker metadata: name: my-mirror-maker spec: #... template: mirrorMakerContainer: env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: "http://otlp-host:4317" tracing: type: opentelemetry #... Example tracing configuration for MirrorMaker 2 using OpenTelemetry apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mm2-cluster spec: #... template: connectContainer: env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: "http://otlp-host:4317" tracing: type: opentelemetry #... Example tracing configuration for the Kafka Bridge using OpenTelemetry apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: #... template: bridgeContainer: env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: "http://otlp-host:4317" tracing: type: opentelemetry #... Example tracing configuration for Kafka Connect using OpenTracing apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: #... template: connectContainer: env: - name: JAEGER_SERVICE_NAME value: my-jaeger-service - name: JAEGER_AGENT_HOST value: jaeger-agent-name - name: JAEGER_AGENT_PORT value: "6831" tracing: type: jaeger #... Example tracing configuration for MirrorMaker using OpenTracing apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker metadata: name: my-mirror-maker spec: #... template: mirrorMakerContainer: env: - name: JAEGER_SERVICE_NAME value: my-jaeger-service - name: JAEGER_AGENT_HOST value: jaeger-agent-name - name: JAEGER_AGENT_PORT value: "6831" tracing: type: jaeger #... Example tracing configuration for MirrorMaker 2 using OpenTracing apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mm2-cluster spec: #... template: connectContainer: env: - name: JAEGER_SERVICE_NAME value: my-jaeger-service - name: JAEGER_AGENT_HOST value: jaeger-agent-name - name: JAEGER_AGENT_PORT value: "6831" tracing: type: jaeger #... Example tracing configuration for the Kafka Bridge using OpenTracing apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: #... template: bridgeContainer: env: - name: JAEGER_SERVICE_NAME value: my-jaeger-service - name: JAEGER_AGENT_HOST value: jaeger-agent-name - name: JAEGER_AGENT_PORT value: "6831" tracing: type: jaeger #... Create or update the resource: oc apply -f <resource_configuration_file> 21.3.3. Initializing tracing for Kafka clients Initialize a tracer, then instrument your client applications for distributed tracing. You can instrument Kafka producer and consumer clients, and Kafka Streams API applications. You can initialize a tracer for OpenTracing or OpenTelemetry. Configure and initialize a tracer using a set of tracing environment variables . 
Procedure In each client application add the dependencies for the tracer: Add the Maven dependencies to the pom.xml file for the client application: Dependencies for OpenTelemetry <dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-sdk-extension-autoconfigure</artifactId> <version>1.19.0.redhat-00002</version> </dependency> <dependency> <groupId>io.opentelemetry.instrumentation</groupId> <artifactId>opentelemetry-kafka-clients-{OpenTelemetryKafkaClient}</artifactId> <version>1.19.0.redhat-00002</version> </dependency> <dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-exporter-otlp</artifactId> <version>1.19.0.redhat-00002</version> </dependency> Dependencies for OpenTracing <dependency> <groupId>io.jaegertracing</groupId> <artifactId>jaeger-client</artifactId> <version>1.8.1.redhat-00002</version> </dependency> <dependency> <groupId>io.opentracing.contrib</groupId> <artifactId>opentracing-kafka-client</artifactId> <version>0.1.15.redhat-00006</version> </dependency> Define the configuration of the tracer using the tracing environment variables . Create a tracer, which is initialized with the environment variables: Creating a tracer for OpenTelemetry OpenTelemetry ot = GlobalOpenTelemetry.get(); Creating a tracer for OpenTracing Tracer tracer = Configuration.fromEnv().getTracer(); Register the tracer as a global tracer: GlobalTracer.register(tracer); Instrument your client: Section 21.3.4, "Instrumenting producers and consumers for tracing" Section 21.3.5, "Instrumenting Kafka Streams applications for tracing" 21.3.4. Instrumenting producers and consumers for tracing Instrument application code to enable tracing in Kafka producers and consumers. Use a decorator pattern or interceptors to instrument your Java producer and consumer application code for tracing. You can then record traces when messages are produced or retrieved from a topic. OpenTelemetry and OpenTracing instrumentation projects provide classes that support instrumentation of producers and consumers. Decorator instrumentation For decorator instrumentation, create a modified producer or consumer instance for tracing. Decorator instrumentation is different for OpenTelemetry and OpenTracing. Interceptor instrumentation For interceptor instrumentation, add the tracing capability to the consumer or producer configuration. Interceptor instrumentation is the same for OpenTelemetry and OpenTracing. Prerequisites You have initialized tracing for the client . You enable instrumentation in producer and consumer applications by adding the tracing JARs as dependencies to your project. Procedure Perform these steps in the application code of each producer and consumer application. Instrument your client application code using either a decorator pattern or interceptors. To use a decorator pattern, create a modified producer or consumer instance to send or receive messages. You pass the original KafkaProducer or KafkaConsumer class. 
Example decorator instrumentation for OpenTelemetry // Producer instance Producer < String, String > op = new KafkaProducer < > ( configs, new StringSerializer(), new StringSerializer() ); KafkaTracing tracing = KafkaTracing.create(GlobalOpenTelemetry.get()); Producer < String, String > producer = tracing.wrap(op); producer.send(...); //consumer instance Consumer<String, String> oc = new KafkaConsumer<>( configs, new StringDeserializer(), new StringDeserializer() ); Consumer<String, String> consumer = tracing.wrap(oc); consumer.subscribe(Collections.singleton("mytopic")); ConsumerRecords<Integer, String> records = consumer.poll(1000); ConsumerRecord<Integer, String> record = ... SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer); Example decorator instrumentation for OpenTracing //producer instance KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps); TracingKafkaProducer<Integer, String> tracingProducer = new TracingKafkaProducer<>(producer, tracer); tracingProducer.send(...) //consumer instance KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps); TracingKafkaConsumer<Integer, String> tracingConsumer = new TracingKafkaConsumer<>(consumer, tracer); tracingConsumer.subscribe(Collections.singletonList("mytopic")); ConsumerRecords<Integer, String> records = tracingConsumer.poll(1000); ConsumerRecord<Integer, String> record = ... SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer); To use interceptors, set the interceptor class in the producer or consumer configuration. You use the KafkaProducer and KafkaConsumer classes in the usual way. The TracingProducerInterceptor and TracingConsumerInterceptor interceptor classes take care of the tracing capability. Example producer configuration using interceptors senderProps.put(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingProducerInterceptor.class.getName()); KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps); producer.send(...); Example consumer configuration using interceptors consumerProps.put(ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingConsumerInterceptor.class.getName()); KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps); consumer.subscribe(Collections.singletonList("messages")); ConsumerRecords<Integer, String> records = consumer.poll(1000); ConsumerRecord<Integer, String> record = ... SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer); 21.3.5. Instrumenting Kafka Streams applications for tracing Instrument application code to enable tracing in Kafka Streams API applications. Use a decorator pattern or interceptors to instrument your Kafka Streams API applications for tracing. You can then record traces when messages are produced or retrieved from a topic. Decorator instrumentation For decorator instrumentation, create a modified Kafka Streams instance for tracing. The OpenTracing instrumentation project provides a TracingKafkaClientSupplier class that supports instrumentation of Kafka Streams. You create a wrapped instance of the TracingKafkaClientSupplier supplier interface, which provides tracing instrumentation for Kafka Streams. For OpenTelemetry, the process is the same but you need to create a custom TracingKafkaClientSupplier class to provide the support. Interceptor instrumentation For interceptor instrumentation, add the tracing capability to the Kafka Streams producer and consumer configuration. 
Prerequisites You have initialized tracing for the client . You enable instrumentation in Kafka Streams applications by adding the tracing JARs as dependencies to your project. To instrument Kafka Streams with OpenTelemetry, you'll need to write a custom TracingKafkaClientSupplier . The custom TracingKafkaClientSupplier can extend Kafka's DefaultKafkaClientSupplier , overriding the producer and consumer creation methods to wrap the instances with the telemetry-related code. Example custom TracingKafkaClientSupplier private class TracingKafkaClientSupplier extends DefaultKafkaClientSupplier { @Override public Producer<byte[], byte[]> getProducer(Map<String, Object> config) { KafkaTelemetry telemetry = KafkaTelemetry.create(GlobalOpenTelemetry.get()); return telemetry.wrap(super.getProducer(config)); } @Override public Consumer<byte[], byte[]> getConsumer(Map<String, Object> config) { KafkaTelemetry telemetry = KafkaTelemetry.create(GlobalOpenTelemetry.get()); return telemetry.wrap(super.getConsumer(config)); } @Override public Consumer<byte[], byte[]> getRestoreConsumer(Map<String, Object> config) { return this.getConsumer(config); } @Override public Consumer<byte[], byte[]> getGlobalConsumer(Map<String, Object> config) { return this.getConsumer(config); } } Procedure Perform these steps for each Kafka Streams API application. To use a decorator pattern, create an instance of the TracingKafkaClientSupplier supplier interface, then provide the supplier interface to KafkaStreams . Example decorator instrumentation KafkaClientSupplier supplier = new TracingKafkaClientSupplier(tracer); KafkaStreams streams = new KafkaStreams(builder.build(), new StreamsConfig(config), supplier); streams.start(); To use interceptors, set the interceptor class in the Kafka Streams producer and consumer configuration. The TracingProducerInterceptor and TracingConsumerInterceptor interceptor classes take care of the tracing capability. Example producer and consumer configuration using interceptors props.put(StreamsConfig.PRODUCER_PREFIX + ProducerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingProducerInterceptor.class.getName()); props.put(StreamsConfig.CONSUMER_PREFIX + ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingConsumerInterceptor.class.getName()); 21.3.6. Introducing a different OpenTelemetry tracing system Instead of the default OTLP system, you can specify other tracing systems that are supported by OpenTelemetry. You do this by adding the required artifacts to the Kafka image provided with AMQ Streams. Any required implementation-specific environment variables must also be set. You then enable the new tracing implementation using the OTEL_TRACES_EXPORTER environment variable. This procedure shows how to implement Zipkin tracing. Procedure Add the tracing artifacts to the /opt/kafka/libs/ directory of the AMQ Streams Kafka image. You can use the Kafka container image on the Red Hat Ecosystem Catalog as a base image for creating a new custom image. OpenTelemetry artifact for Zipkin io.opentelemetry:opentelemetry-exporter-zipkin Set the tracing exporter and endpoint for the new tracing implementation. Example Zipkin tracer configuration apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mm2-cluster spec: #... template: connectContainer: env: - name: OTEL_SERVICE_NAME value: my-zipkin-service - name: OTEL_EXPORTER_ZIPKIN_ENDPOINT value: http://zipkin-exporter-host-name:9411/api/v2/spans 1 - name: OTEL_TRACES_EXPORTER value: zipkin 2 tracing: type: opentelemetry #... 
1 Specifies the Zipkin endpoint to connect to. 2 The Zipkin exporter. 21.3.7. Custom span names A tracing span is a logical unit of work in Jaeger, with an operation name, start time, and duration. Spans have built-in names, but you can specify custom span names in your Kafka client instrumentation where used. Specifying custom span names is optional and only applies when using a decorator pattern in producer and consumer client instrumentation or Kafka Streams instrumentation . 21.3.7.1. Specifying span names for OpenTelemetry Custom span names cannot be specified directly with OpenTelemetry. Instead, you retrieve span names by adding code to your client application to extract additional tags and attributes. Example code to extract attributes //Defines attribute extraction for a producer private static class ProducerAttribExtractor implements AttributesExtractor < ProducerRecord < ? , ? > , Void > { @Override public void onStart(AttributesBuilder attributes, ProducerRecord < ? , ? > producerRecord) { set(attributes, AttributeKey.stringKey("prod_start"), "prod1"); } @Override public void onEnd(AttributesBuilder attributes, ProducerRecord < ? , ? > producerRecord, @Nullable Void unused, @Nullable Throwable error) { set(attributes, AttributeKey.stringKey("prod_end"), "prod2"); } } //Defines attribute extraction for a consumer private static class ConsumerAttribExtractor implements AttributesExtractor < ConsumerRecord < ? , ? > , Void > { @Override public void onStart(AttributesBuilder attributes, ConsumerRecord < ? , ? > producerRecord) { set(attributes, AttributeKey.stringKey("con_start"), "con1"); } @Override public void onEnd(AttributesBuilder attributes, ConsumerRecord < ? , ? > producerRecord, @Nullable Void unused, @Nullable Throwable error) { set(attributes, AttributeKey.stringKey("con_end"), "con2"); } } //Extracts the attributes public static void main(String[] args) throws Exception { Map < String, Object > configs = new HashMap < > (Collections.singletonMap(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")); System.setProperty("otel.traces.exporter", "jaeger"); System.setProperty("otel.service.name", "myapp1"); KafkaTracing tracing = KafkaTracing.newBuilder(GlobalOpenTelemetry.get()) .addProducerAttributesExtractors(new ProducerAttribExtractor()) .addConsumerAttributesExtractors(new ConsumerAttribExtractor()) .build(); 21.3.7.2. Specifying span names for OpenTracing To specify custom span names for OpenTracing, pass a BiFunction object as an additional argument when instrumenting producers and consumers. For more information on built-in names and specifying custom span names to instrument client application code in a decorator pattern, see the OpenTracing Apache Kafka client instrumentation . | [
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: # template: connectContainer: env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: \"http://otlp-host:4317\" tracing: type: opentelemetry #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker metadata: name: my-mirror-maker spec: # template: mirrorMakerContainer: env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: \"http://otlp-host:4317\" tracing: type: opentelemetry #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mm2-cluster spec: # template: connectContainer: env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: \"http://otlp-host:4317\" tracing: type: opentelemetry #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # template: bridgeContainer: env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: \"http://otlp-host:4317\" tracing: type: opentelemetry #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: # template: connectContainer: env: - name: JAEGER_SERVICE_NAME value: my-jaeger-service - name: JAEGER_AGENT_HOST value: jaeger-agent-name - name: JAEGER_AGENT_PORT value: \"6831\" tracing: type: jaeger #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker metadata: name: my-mirror-maker spec: # template: mirrorMakerContainer: env: - name: JAEGER_SERVICE_NAME value: my-jaeger-service - name: JAEGER_AGENT_HOST value: jaeger-agent-name - name: JAEGER_AGENT_PORT value: \"6831\" tracing: type: jaeger #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mm2-cluster spec: # template: connectContainer: env: - name: JAEGER_SERVICE_NAME value: my-jaeger-service - name: JAEGER_AGENT_HOST value: jaeger-agent-name - name: JAEGER_AGENT_PORT value: \"6831\" tracing: type: jaeger #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # template: bridgeContainer: env: - name: JAEGER_SERVICE_NAME value: my-jaeger-service - name: JAEGER_AGENT_HOST value: jaeger-agent-name - name: JAEGER_AGENT_PORT value: \"6831\" tracing: type: jaeger #",
"apply -f <resource_configuration_file>",
"<dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-sdk-extension-autoconfigure</artifactId> <version>1.19.0.redhat-00002</version> </dependency> <dependency> <groupId>io.opentelemetry.instrumentation</groupId> <artifactId>opentelemetry-kafka-clients-{OpenTelemetryKafkaClient}</artifactId> <version>1.19.0.redhat-00002</version> </dependency> <dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-exporter-otlp</artifactId> <version>1.19.0.redhat-00002</version> </dependency>",
"<dependency> <groupId>io.jaegertracing</groupId> <artifactId>jaeger-client</artifactId> <version>1.8.1.redhat-00002</version> </dependency> <dependency> <groupId>io.opentracing.contrib</groupId> <artifactId>opentracing-kafka-client</artifactId> <version>0.1.15.redhat-00006</version> </dependency>",
"OpenTelemetry ot = GlobalOpenTelemetry.get();",
"Tracer tracer = Configuration.fromEnv().getTracer();",
"GlobalTracer.register(tracer);",
"// Producer instance Producer < String, String > op = new KafkaProducer < > ( configs, new StringSerializer(), new StringSerializer() ); Producer < String, String > producer = tracing.wrap(op); KafkaTracing tracing = KafkaTracing.create(GlobalOpenTelemetry.get()); producer.send(...); //consumer instance Consumer<String, String> oc = new KafkaConsumer<>( configs, new StringDeserializer(), new StringDeserializer() ); Consumer<String, String> consumer = tracing.wrap(oc); consumer.subscribe(Collections.singleton(\"mytopic\")); ConsumerRecords<Integer, String> records = consumer.poll(1000); ConsumerRecord<Integer, String> record = SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer);",
"//producer instance KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps); TracingKafkaProducer<Integer, String> tracingProducer = new TracingKafkaProducer<>(producer, tracer); TracingKafkaProducer.send(...) //consumer instance KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps); TracingKafkaConsumer<Integer, String> tracingConsumer = new TracingKafkaConsumer<>(consumer, tracer); tracingConsumer.subscribe(Collections.singletonList(\"mytopic\")); ConsumerRecords<Integer, String> records = tracingConsumer.poll(1000); ConsumerRecord<Integer, String> record = SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer);",
"senderProps.put(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingProducerInterceptor.class.getName()); KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps); producer.send(...);",
"consumerProps.put(ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingConsumerInterceptor.class.getName()); KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps); consumer.subscribe(Collections.singletonList(\"messages\")); ConsumerRecords<Integer, String> records = consumer.poll(1000); ConsumerRecord<Integer, String> record = SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer);",
"private class TracingKafkaClientSupplier extends DefaultKafkaClientSupplier { @Override public Producer<byte[], byte[]> getProducer(Map<String, Object> config) { KafkaTelemetry telemetry = KafkaTelemetry.create(GlobalOpenTelemetry.get()); return telemetry.wrap(super.getProducer(config)); } @Override public Consumer<byte[], byte[]> getConsumer(Map<String, Object> config) { KafkaTelemetry telemetry = KafkaTelemetry.create(GlobalOpenTelemetry.get()); return telemetry.wrap(super.getConsumer(config)); } @Override public Consumer<byte[], byte[]> getRestoreConsumer(Map<String, Object> config) { return this.getConsumer(config); } @Override public Consumer<byte[], byte[]> getGlobalConsumer(Map<String, Object> config) { return this.getConsumer(config); } }",
"KafkaClientSupplier supplier = new TracingKafkaClientSupplier(tracer); KafkaStreams streams = new KafkaStreams(builder.build(), new StreamsConfig(config), supplier); streams.start();",
"props.put(StreamsConfig.PRODUCER_PREFIX + ProducerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingProducerInterceptor.class.getName()); props.put(StreamsConfig.CONSUMER_PREFIX + ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingConsumerInterceptor.class.getName());",
"io.opentelemetry:opentelemetry-exporter-zipkin",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mm2-cluster spec: # template: connectContainer: env: - name: OTEL_SERVICE_NAME value: my-zipkin-service - name: OTEL_EXPORTER_ZIPKIN_ENDPOINT value: http://zipkin-exporter-host-name:9411/api/v2/spans 1 - name: OTEL_TRACES_EXPORTER value: zipkin 2 tracing: type: opentelemetry #",
"//Defines attribute extraction for a producer private static class ProducerAttribExtractor implements AttributesExtractor < ProducerRecord < ? , ? > , Void > { @Override public void onStart(AttributesBuilder attributes, ProducerRecord < ? , ? > producerRecord) { set(attributes, AttributeKey.stringKey(\"prod_start\"), \"prod1\"); } @Override public void onEnd(AttributesBuilder attributes, ProducerRecord < ? , ? > producerRecord, @Nullable Void unused, @Nullable Throwable error) { set(attributes, AttributeKey.stringKey(\"prod_end\"), \"prod2\"); } } //Defines attribute extraction for a consumer private static class ConsumerAttribExtractor implements AttributesExtractor < ConsumerRecord < ? , ? > , Void > { @Override public void onStart(AttributesBuilder attributes, ConsumerRecord < ? , ? > producerRecord) { set(attributes, AttributeKey.stringKey(\"con_start\"), \"con1\"); } @Override public void onEnd(AttributesBuilder attributes, ConsumerRecord < ? , ? > producerRecord, @Nullable Void unused, @Nullable Throwable error) { set(attributes, AttributeKey.stringKey(\"con_end\"), \"con2\"); } } //Extracts the attributes public static void main(String[] args) throws Exception { Map < String, Object > configs = new HashMap < > (Collections.singletonMap(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, \"localhost:9092\")); System.setProperty(\"otel.traces.exporter\", \"jaeger\"); System.setProperty(\"otel.service.name\", \"myapp1\"); KafkaTracing tracing = KafkaTracing.newBuilder(GlobalOpenTelemetry.get()) .addProducerAttributesExtractors(new ProducerAttribExtractor()) .addConsumerAttributesExtractors(new ConsumerAttribExtractor()) .build();"
]
| https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/deploying_and_managing_amq_streams_on_openshift/assembly-distributed-tracing-str |
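As a supplement to the environment variable tables in the tracing chapter above, the following is a minimal sketch of exporting the OpenTelemetry variables before starting an instrumented Kafka client; the service name and endpoint values reuse the placeholders from the earlier examples, and the application JAR name is an assumption for illustration only.
# Tracer configuration read by the OpenTelemetry SDK at client start-up
export OTEL_SERVICE_NAME=my-otel-service
export OTEL_EXPORTER_OTLP_ENDPOINT=http://otlp-host:4317
# otlp is the default exporter; set this to jaeger only when using the Jaeger exporter
export OTEL_TRACES_EXPORTER=otlp
# Start the instrumented client application in the same shell
java -jar my-kafka-client.jar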
Providing Feedback on Red Hat Documentation | Providing Feedback on Red Hat Documentation We appreciate your input on our documentation. Please let us know how we could make it better. You can submit feedback by filing a ticket in Bugzilla: Navigate to the Bugzilla website. In the Component field, use Documentation . In the Description field, enter your suggestion for improvement. Include a link to the relevant parts of the documentation. Click Submit Bug . | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/installing_capsule_server/providing-feedback-on-red-hat-documentation_capsule |
B.32.2. RHSA-2011:0290 - Moderate: java-1.6.0-ibm security update | B.32.2. RHSA-2011:0290 - Moderate: java-1.6.0-ibm security update Updated java-1.6.0-ibm packages that fix one security issue are now available for Red Hat Enterprise Linux 4 Extras, and Red Hat Enterprise Linux 5 and 6 Supplementary. The Red Hat Security Response Team has rated this update as having moderate security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. The IBM 1.6.0 Java release includes the IBM Java 2 Runtime Environment and the IBM Java 2 Software Development Kit. CVE-2010-4476 A denial of service flaw was found in the way certain strings were converted to Double objects. A remote attacker could use this flaw to cause Java based applications to hang, for example, if they parsed Double values in a specially-crafted HTTP request. All users of java-1.6.0-ibm are advised to upgrade to these updated packages, containing the IBM 1.6.0 SR9 Java release. All running instances of IBM Java must be restarted for the update to take effect. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/rhsa-2011-0290 |
Chapter 7. Removing Service Telemetry Framework from the Red Hat OpenShift Container Platform environment | Chapter 7. Removing Service Telemetry Framework from the Red Hat OpenShift Container Platform environment Remove Service Telemetry Framework (STF) from a Red Hat OpenShift Container Platform environment if you no longer require the STF functionality. To remove STF from the Red Hat OpenShift Container Platform environment, you must perform the following tasks: Delete the namespace. Remove the cert-manager Operator. Remove the Cluster Observability Operator. 7.1. Deleting the namespace To remove the operational resources for STF from Red Hat OpenShift Container Platform, delete the namespace. Procedure Run the oc delete command: USD oc delete project service-telemetry Verify that the resources have been deleted from the namespace: USD oc get all No resources found. 7.2. Removing the cert-manager Operator for Red Hat OpenShift If you are not using the cert-manager Operator for Red Hat OpenShift for any other applications, delete the Subscription, ClusterServiceVersion, and CustomResourceDefinitions. For more information about removing the cert-manager Operator for Red Hat OpenShift, see Removing cert-manager Operator for Red Hat OpenShift in the OpenShift Container Platform Documentation . Additional resources Deleting Operators from a cluster . 7.3. Removing the Cluster Observability Operator If you are not using the Cluster Observability Operator for any other applications, delete the Subscription, ClusterServiceVersion, and CustomResourceDefinitions. For more information about removing the Cluster Observability Operator, see Uninstalling the Cluster Observability Operator using the web console in the OpenShift Container Platform Documentation . Additional resources Deleting Operators from a cluster . | [
"oc delete project service-telemetry",
"oc get all No resources found."
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/service_telemetry_framework_1.5/assembly-removing-stf-from-the-openshift-environment_assembly |
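The Operator removal steps referenced in the chapter above follow the same general command-line pattern. The following is a hedged sketch using placeholder values; the actual Subscription, ClusterServiceVersion, and namespace names vary by installation, so list them first before deleting anything.
# List Operator subscriptions to find the exact names and namespaces
oc get subscriptions --all-namespaces
# Delete the Subscription and its ClusterServiceVersion (placeholder values)
oc delete subscription <subscription_name> -n <operator_namespace>
oc delete clusterserviceversion <csv_name> -n <operator_namespace>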
Appendix A. Using your subscription | Appendix A. Using your subscription AMQ Streams is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal. Accessing Your Account Go to access.redhat.com . If you do not already have an account, create one. Log in to your account. Activating a Subscription Go to access.redhat.com . Navigate to My Subscriptions . Navigate to Activate a subscription and enter your 16-digit activation number. Downloading Zip and Tar Files To access zip or tar files, use the customer portal to find the relevant files for download. If you are using RPM packages, this step is not required. Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads . Locate the AMQ Streams for Apache Kafka entries in the INTEGRATION AND AUTOMATION category. Select the desired AMQ Streams product. The Software Downloads page opens. Click the Download link for your component. Installing packages with DNF To install a package and all the package dependencies, use: dnf install <package_name> To install a previously-downloaded package from a local directory, use: dnf install <path_to_download_package> Revised on 2024-08-12 11:04:59 UTC | [
"dnf install <package_name>",
"dnf install <path_to_download_package>"
]
| https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/deploying_and_managing_amq_streams_on_openshift/using_your_subscription |
Part II. Clair on Red Hat Quay | Part II. Clair on Red Hat Quay This guide contains procedures for running Clair on Red Hat Quay in both standalone and OpenShift Container Platform Operator deployments. | null | https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/vulnerability_reporting_with_clair_on_red_hat_quay/testing-clair-with-quay |
High Availability Add-On Reference | High Availability Add-On Reference Red Hat Enterprise Linux 7 Reference guide for configuration and management of the High Availability Add-On Steven Levine Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/index
Chapter 1. Red Hat OpenShift Service on AWS quick start guide | Chapter 1. Red Hat OpenShift Service on AWS quick start guide Note If you are looking for a comprehensive getting started guide for Red Hat OpenShift Service on AWS (ROSA), see Comprehensive guide to getting started with Red Hat OpenShift Service on AWS . For additional information on ROSA installation, see Installing Red Hat OpenShift Service on AWS (ROSA) interactive walkthrough . Follow this guide to quickly create a Red Hat OpenShift Service on AWS (ROSA) cluster using Red Hat OpenShift Cluster Manager on the Red Hat Hybrid Cloud Console , grant user access, deploy your first application, and learn how to revoke user access and delete your cluster. The procedures in this document enable you to create a cluster that uses AWS Security Token Service (STS). For more information about using AWS STS with ROSA clusters, see Using the AWS Security Token Service . 1.1. Prerequisites You reviewed the introduction to Red Hat OpenShift Service on AWS (ROSA) , and the documentation on ROSA architecture models and architecture concepts . You have read the documentation on limits and scalability and the guidelines for planning your environment . You have reviewed the detailed AWS prerequisites for ROSA with STS . You have the AWS service quotas that are required to run a ROSA cluster . 1.2. Setting up the environment Before you create a Red Hat OpenShift Service on AWS (ROSA) cluster, you must set up your environment by completing the following tasks: Verify ROSA prerequisites against your AWS and Red Hat accounts. Install and configure the required command line interface (CLI) tools. Verify the configuration of the CLI tools. You can follow the procedures in this section to complete these setup requirements. Verifying ROSA prerequisites Use the steps in this procedure to enable Red Hat OpenShift Service on AWS (ROSA) in your AWS account. Prerequisites You have a Red Hat account. You have an AWS account. Note Consider using a dedicated AWS account to run production clusters. If you are using AWS Organizations, you can use an AWS account within your organization or create a new one . Procedure Sign in to the AWS Management Console . Navigate to the ROSA service . Click Get started . The Verify ROSA prerequisites page opens. Under ROSA enablement , ensure that a green check mark and You previously enabled ROSA are displayed. If not, follow these steps: Select the checkbox beside I agree to share my contact information with Red Hat . Click Enable ROSA . After a short wait, a green check mark and You enabled ROSA message are displayed. Under Service Quotas , ensure that a green check and Your quotas meet the requirements for ROSA are displayed. If you see Your quotas don't meet the minimum requirements , take note of the quota type and the minimum listed in the error message. See Amazon's documentation on requesting a quota increase for guidance. It may take several hours for Amazon to approve your quota request. Under ELB service-linked role , ensure that a green check mark and AWSServiceRoleForElasticLoadBalancing already exists are displayed. Click Continue to Red Hat . The Get started with Red Hat OpenShift Service on AWS (ROSA) page opens in a new tab. You have already completed Step 1 on this page, and can now continue with Step 2. Additional resources Troubleshoot ROSA enablement errors Installing and configuring the required CLI tools Several command line interface (CLI) tools are required to deploy and work with your cluster. 
Prerequisites You have an AWS account. You have a Red Hat account. Procedure Log in to your Red Hat and AWS accounts to access the download page for each required tool. Log in to your Red Hat account at console.redhat.com . Log in to your AWS account at aws.amazon.com . Install and configure the latest AWS CLI ( aws ). Install the AWS CLI by following the AWS Command Line Interface documentation appropriate for your workstation. Configure the AWS CLI by specifying your aws_access_key_id , aws_secret_access_key , and region in the .aws/credentials file. For more information, see AWS Configuration basics in the AWS documentation. Note You can optionally use the AWS_DEFAULT_REGION environment variable to set the default AWS region. Query the AWS API to verify that the AWS CLI is installed and configured correctly: USD aws sts get-caller-identity --output text Example output <aws_account_id> arn:aws:iam::<aws_account_id>:user/<username> <aws_user_id> Install and configure the latest ROSA CLI ( rosa ). Navigate to Downloads . Find Red Hat OpenShift Service on AWS command line interface ( rosa ) in the list of tools and click Download . The rosa-linux.tar.gz file is downloaded to your default download location. Extract the rosa binary file from the downloaded archive. The following example extracts the binary from a Linux tar archive: USD tar xvf rosa-linux.tar.gz Move the rosa binary file to a directory in your execution path. In the following example, the /usr/local/bin directory is included in the path of the user: USD sudo mv rosa /usr/local/bin/rosa Verify that the ROSA CLI is installed correctly by querying the rosa version: USD rosa version Example output 1.2.47 Your ROSA CLI is up to date. Log in to the ROSA CLI using an offline access token. Run the login command: USD rosa login Example output To login to your Red Hat account, get an offline access token at https://console.redhat.com/openshift/token/rosa ? Copy the token and paste it here: Navigate to the URL listed in the command output to view your offline access token. Enter the offline access token at the command line prompt to log in. ? Copy the token and paste it here: ******************* [full token length omitted] Note In the future you can specify the offline access token by using the --token="<offline_access_token>" argument when you run the rosa login command. Verify that you are logged in and confirm that your credentials are correct before proceeding: USD rosa whoami Example output AWS Account ID: <aws_account_number> AWS Default Region: us-east-1 AWS ARN: arn:aws:iam::<aws_account_number>:user/<aws_user_name> OCM API: https://api.openshift.com OCM Account ID: <red_hat_account_id> OCM Account Name: Your Name OCM Account Username: <username> OCM Account Email: <email_address> OCM Organization ID: <org_id> OCM Organization Name: Your organization OCM Organization External ID: <external_org_id> Install and configure the latest OpenShift CLI ( oc ). Use the ROSA CLI to download the oc CLI. The following command downloads the latest version of the CLI to the current working directory: USD rosa download openshift-client Extract the oc binary file from the downloaded archive. The following example extracts the files from a Linux tar archive: USD tar xvf openshift-client-linux.tar.gz Move the oc binary to a directory in your execution path. 
In the following example, the /usr/local/bin directory is included in the path of the user: USD sudo mv oc /usr/local/bin/oc Verify that the oc CLI is installed correctly: USD rosa verify openshift-client Example output I: Verifying whether OpenShift command-line tool is available... I: Current OpenShift Client Version: 4.17.3 1.3. Creating a ROSA cluster with AWS STS using the default auto mode Red Hat OpenShift Cluster Manager is a managed service on the Red Hat Hybrid Cloud Console where you can install, modify, operate, and upgrade your Red Hat OpenShift clusters. This service allows you to work with all of your organization's clusters from a single dashboard. The procedures in this document use the auto modes in OpenShift Cluster Manager to immediately create the required Identity and Access Management (IAM) resources using the current AWS account. The required resources include the account-wide IAM roles and policies, cluster-specific Operator roles and policies, and OpenID Connect (OIDC) identity provider. When using the OpenShift Cluster Manager Hybrid Cloud Console to create a Red Hat OpenShift Service on AWS (ROSA) cluster that uses the STS, you can select the default options to create the cluster quickly. Before you can use the OpenShift Cluster Manager Hybrid Cloud Console to deploy ROSA with STS clusters, you must associate your AWS account with your Red Hat organization and create the required account-wide STS roles and policies. Overview of the default cluster specifications You can quickly create a Red Hat OpenShift Service on AWS (ROSA) cluster with the Security Token Service (STS) by using the default installation options. The following summary describes the default cluster specifications. Table 1.1. Default ROSA with STS cluster specifications Component Default specifications Accounts and roles Default IAM role prefix: ManagedOpenShift No cluster admin role created Cluster settings Default cluster version: Latest Default AWS region for installations using the Red Hat OpenShift Cluster Manager Hybrid Cloud Console: us-east-1 (US East, North Virginia) Availability: Single zone for the data plane EC2 Instance Metadata Service (IMDS) is enabled and allows the use of IMDSv1 or IMDSv2 (token optional) Monitoring for user-defined projects: Enabled Encryption Cloud storage is encrypted at rest Additional etcd encryption is not enabled The default AWS Key Management Service (KMS) key is used as the encryption key for persistent data Control plane node configuration Control plane node instance type: m5.2xlarge (8 vCPU, 32 GiB RAM) Control plane node count: 3 Infrastructure node configuration Infrastructure node instance type: r5.xlarge (4 vCPU, 32 GiB RAM) Infrastructure node count: 2 Compute node machine pool Compute node instance type: m5.xlarge (4 vCPU 16, GiB RAM) Compute node count: 2 Autoscaling: Not enabled No additional node labels Networking configuration Cluster privacy: Public You must have configured your own Virtual Private Cloud (VPC) No cluster-wide proxy is configured Classless Inter-Domain Routing (CIDR) ranges Machine CIDR: 10.0.0.0/16 Service CIDR: 172.30.0.0/16 Pod CIDR: 10.128.0.0/14 Host prefix: /23 Cluster roles and policies Mode used to create the Operator roles and the OpenID Connect (OIDC) provider: auto Note For installations that use OpenShift Cluster Manager on the Hybrid Cloud Console, the auto mode requires an admin-privileged OpenShift Cluster Manager role. 
Default Operator role prefix: <cluster_name>-<4_digit_random_string> Cluster update strategy Individual updates 1 hour grace period for node draining Understanding AWS account association Before you can use Red Hat OpenShift Cluster Manager on the Red Hat Hybrid Cloud Console to create Red Hat OpenShift Service on AWS (ROSA) clusters that use the AWS Security Token Service (STS), you must associate your AWS account with your Red Hat organization. You can associate your account by creating and linking the following IAM roles. OpenShift Cluster Manager role Create an OpenShift Cluster Manager IAM role and link it to your Red Hat organization. You can apply basic or administrative permissions to the OpenShift Cluster Manager role. The basic permissions enable cluster maintenance using OpenShift Cluster Manager. The administrative permissions enable automatic deployment of the cluster-specific Operator roles and the OpenID Connect (OIDC) provider using OpenShift Cluster Manager. User role Create a user IAM role and link it to your Red Hat user account. The Red Hat user account must exist in the Red Hat organization that is linked to your OpenShift Cluster Manager role. The user role is used by Red Hat to verify your AWS identity when you use the OpenShift Cluster Manager Hybrid Cloud Console to install a cluster and the required STS resources. Associating your AWS account with your Red Hat organization Before using Red Hat OpenShift Cluster Manager on the Red Hat Hybrid Cloud Console to create Red Hat OpenShift Service on AWS (ROSA) clusters that use the AWS Security Token Service (STS), create an OpenShift Cluster Manager IAM role and link it to your Red Hat organization. Then, create a user IAM role and link it to your Red Hat user account in the same Red Hat organization. Procedure Create an OpenShift Cluster Manager role and link it to your Red Hat organization: Note To enable automatic deployment of the cluster-specific Operator roles and the OpenID Connect (OIDC) provider using the OpenShift Cluster Manager Hybrid Cloud Console, you must apply the administrative privileges to the role by choosing the Admin OCM role command in the Accounts and roles step of creating a ROSA cluster. For more information about the basic and administrative privileges for the OpenShift Cluster Manager role, see Understanding AWS account association . Note If you choose the Basic OCM role command in the Accounts and roles step of creating a ROSA cluster in the OpenShift Cluster Manager Hybrid Cloud Console, you must deploy a ROSA cluster using manual mode. You will be prompted to configure the cluster-specific Operator roles and the OpenID Connect (OIDC) provider in a later step. USD rosa create ocm-role Select the default values at the prompts to quickly create and link the role. Create a user role and link it to your Red Hat user account: USD rosa create user-role Select the default values at the prompts to quickly create and link the role. Note The Red Hat user account must exist in the Red Hat organization that is linked to your OpenShift Cluster Manager role. Creating the account-wide STS roles and policies Before using the Red Hat OpenShift Cluster Manager Hybrid Cloud Console to create Red Hat OpenShift Service on AWS (ROSA) clusters that use the AWS Security Token Service (STS), create the required account-wide STS roles and policies, including the Operator policies. 
Procedure If they do not exist in your AWS account, create the required account-wide STS roles and policies: USD rosa create account-roles Select the default values at the prompts to quickly create the roles and policies. Creating a cluster with the default options using OpenShift Cluster Manager When using Red Hat OpenShift Cluster Manager on the Red Hat Hybrid Cloud Console to create a Red Hat OpenShift Service on AWS (ROSA) cluster that uses the AWS Security Token Service (STS), you can select the default options to create the cluster quickly. You can also use the admin OpenShift Cluster Manager IAM role to enable automatic deployment of the cluster-specific Operator roles and the OpenID Connect (OIDC) provider. Procedure Navigate to OpenShift Cluster Manager and select Create cluster . On the Create an OpenShift cluster page, select Create cluster in the Red Hat OpenShift Service on AWS (ROSA) row. Verify that your AWS account ID is listed in the Associated AWS accounts drop-down menu and that the installer, support, worker, and control plane account role Amazon Resource Names (ARNs) are listed on the Accounts and roles page. Note If your AWS account ID is not listed, check that you have successfully associated your AWS account with your Red Hat organization. If your account role ARNs are not listed, check that the required account-wide STS roles exist in your AWS account. Click . On the Cluster details page, provide a name for your cluster in the Cluster name field. Leave the default values in the remaining fields and click . Note Cluster creation generates a domain prefix as a subdomain for your provisioned cluster on openshiftapps.com . If the cluster name is less than or equal to 15 characters, that name is used for the domain prefix. If the cluster name is longer than 15 characters, the domain prefix is randomly generated as a 15-character string. To customize the subdomain, select the Create custom domain prefix checkbox, and enter your domain prefix name in the Domain prefix field. To deploy a cluster quickly, leave the default options in the Cluster settings , Networking , Cluster roles and policies , and Cluster updates pages and click on each page. On the Review your ROSA cluster page, review the summary of your selections and click Create cluster to start the installation. Optional: On the Overview tab, you can enable the delete protection feature by selecting Enable , which is located directly under Delete Protection: Disabled . This will prevent your cluster from being deleted. To disable delete protection, select Disable . By default, clusters are created with the delete protection feature disabled. Verification You can check the progress of the installation in the Overview page for your cluster. You can view the installation logs on the same page. Your cluster is ready when the Status in the Details section of the page is listed as Ready . Note If the installation fails or the cluster State does not change to Ready after about 40 minutes, check the installation troubleshooting documentation for details. For more information, see Troubleshooting installations . For steps to contact Red Hat Support for assistance, see Getting support for Red Hat OpenShift Service on AWS . 1.4. Creating a cluster administrator user for quick cluster access Before configuring an identity provider, you can create a user with cluster-admin privileges for immediate access to your Red Hat OpenShift Service on AWS (ROSA) cluster. 
Note The cluster administrator user is useful when you need quick access to a newly deployed cluster. However, consider configuring an identity provider and granting cluster administrator privileges to the identity provider users as required. For more information about setting up an identity provider for your ROSA cluster, see Configuring an identity provider and granting cluster access . Procedure Create a cluster administrator user: USD rosa create admin --cluster=<cluster_name> 1 1 Replace <cluster_name> with the name of your cluster. Example output W: It is recommended to add an identity provider to login to this cluster. See 'rosa create idp --help' for more information. I: Admin account has been added to cluster '<cluster_name>'. I: Please securely store this generated password. If you lose this password you can delete and recreate the cluster admin user. I: To login, run the following command: oc login https://api.example-cluster.wxyz.p1.openshiftapps.com:6443 --username cluster-admin --password d7Rca-Ba4jy-YeXhs-WU42J I: It may take up to a minute for the account to become active. Note It might take approximately one minute for the cluster-admin user to become active. Additional resource For steps to log in to the ROSA web console, see Accessing a cluster through the web console . 1.5. Configuring an identity provider and granting cluster access Red Hat OpenShift Service on AWS (ROSA) includes a built-in OAuth server. After your ROSA cluster is created, you must configure OAuth to use an identity provider. You can then add members to your configured identity provider to grant them access to your cluster. You can also grant the identity provider users with cluster-admin or dedicated-admin privileges as required. Configuring an identity provider You can configure different identity provider types for your Red Hat OpenShift Service on AWS (ROSA) cluster. Supported types include GitHub, GitHub Enterprise, GitLab, Google, LDAP, OpenID Connect and htpasswd identity providers. Important The htpasswd identity provider option is included only to enable the creation of a single, static administration user. htpasswd is not supported as a general-use identity provider for Red Hat OpenShift Service on AWS. The following procedure configures a GitHub identity provider as an example. Procedure Go to github.com and log in to your GitHub account. If you do not have an existing GitHub organization to use for identity provisioning for your ROSA cluster, create one. Follow the steps in the GitHub documentation . Configure a GitHub identity provider for your cluster that is restricted to the members of your GitHub organization. Configure an identity provider using the interactive mode: USD rosa create idp --cluster=<cluster_name> --interactive 1 1 Replace <cluster_name> with the name of your cluster. Example output I: Interactive mode enabled. Any optional fields can be left empty and a default will be selected. ? Type of identity provider: github ? Identity provider name: github-1 ? Restrict to members of: organizations ? GitHub organizations: <github_org_name> 1 ? 
To use GitHub as an identity provider, you must first register the application: - Open the following URL: https://github.com/organizations/<github_org_name>/settings/applications/new?oauth_application%5Bcallback_url%5D=https%3A%2F%2Foauth-openshift.apps.<cluster_name>/<random_string>.p1.openshiftapps.com%2Foauth2callback%2Fgithub-1&oauth_application%5Bname%5D=<cluster_name>&oauth_application%5Burl%5D=https%3A%2F%2Fconsole-openshift-console.apps.<cluster_name>/<random_string>.p1.openshiftapps.com - Click on 'Register application' ... 1 Replace <github_org_name> with the name of your GitHub organization. Follow the URL in the output and select Register application to register a new OAuth application in your GitHub organization. By registering the application, you enable the OAuth server that is built into ROSA to authenticate members of your GitHub organization into your cluster. Note The fields in the Register a new OAuth application GitHub form are automatically filled with the required values through the URL defined by the ROSA CLI. Use the information from your GitHub OAuth application page to populate the remaining rosa create idp interactive prompts. Continued example output ... ? Client ID: <github_client_id> 1 ? Client Secret: [? for help] <github_client_secret> 2 ? GitHub Enterprise Hostname (optional): ? Mapping method: claim 3 I: Configuring IDP for cluster '<cluster_name>' I: Identity Provider 'github-1' has been created. It will take up to 1 minute for this configuration to be enabled. To add cluster administrators, see 'rosa grant user --help'. To login into the console, open https://console-openshift-console.apps.<cluster_name>.<random_string>.p1.openshiftapps.com and click on github-1. 1 Replace <github_client_id> with the client ID for your GitHub OAuth application. 2 Replace <github_client_secret> with a client secret for your GitHub OAuth application. 3 Specify claim as the mapping method. Note It might take approximately two minutes for the identity provider configuration to become active. If you have configured a cluster-admin user, you can watch the OAuth pods redeploy with the updated configuration by running oc get pods -n openshift-authentication --watch . Enter the following command to verify that the identity provider has been configured correctly: USD rosa list idps --cluster=<cluster_name> Example output NAME TYPE AUTH URL github-1 GitHub https://oauth-openshift.apps.<cluster_name>.<random_string>.p1.openshiftapps.com/oauth2callback/github-1 Additional resource For detailed steps to configure each of the supported identity provider types, see Configuring identity providers for STS . Granting user access to a cluster You can grant a user access to your Red Hat OpenShift Service on AWS (ROSA) cluster by adding them to your configured identity provider. You can configure different types of identity providers for your ROSA cluster. The following example procedure adds a user to a GitHub organization that is configured for identity provision to the cluster. Procedure Navigate to github.com and log in to your GitHub account. Invite users that require access to the ROSA cluster to your GitHub organization. Follow the steps in Inviting users to join your organization in the GitHub documentation. Granting administrator privileges to a user After you have added a user to your configured identity provider, you can grant the user cluster-admin or dedicated-admin privileges for your Red Hat OpenShift Service on AWS (ROSA) cluster. 
Procedure To configure cluster-admin privileges for an identity provider user: Grant the user cluster-admin privileges: USD rosa grant user cluster-admin --user=<idp_user_name> --cluster=<cluster_name> 1 1 Replace <idp_user_name> and <cluster_name> with the name of the identity provider user and your cluster name. Example output I: Granted role 'cluster-admins' to user '<idp_user_name>' on cluster '<cluster_name>' Verify if the user is listed as a member of the cluster-admins group: USD rosa list users --cluster=<cluster_name> Example output ID GROUPS <idp_user_name> cluster-admins To configure dedicated-admin privileges for an identity provider user: Grant the user dedicated-admin privileges: USD rosa grant user dedicated-admin --user=<idp_user_name> --cluster=<cluster_name> Example output I: Granted role 'dedicated-admins' to user '<idp_user_name>' on cluster '<cluster_name>' Verify if the user is listed as a member of the dedicated-admins group: USD rosa list users --cluster=<cluster_name> Example output ID GROUPS <idp_user_name> dedicated-admins Additional resources Cluster administration role Customer administrator user 1.6. Accessing a cluster through the web console After you have created a cluster administrator user or added a user to your configured identity provider, you can log into your Red Hat OpenShift Service on AWS (ROSA) cluster through the web console. Procedure Obtain the console URL for your cluster: USD rosa describe cluster -c <cluster_name> | grep Console 1 1 Replace <cluster_name> with the name of your cluster. Example output Console URL: https://console-openshift-console.apps.example-cluster.wxyz.p1.openshiftapps.com Go to the console URL in the output of the preceding step and log in. If you created a cluster-admin user, log in by using the provided credentials. If you configured an identity provider for your cluster, select the identity provider name in the Log in with... dialog and complete any authorization requests that are presented by your provider. 1.7. Deploying an application from the Developer Catalog From the Red Hat OpenShift Service on AWS web console, you can deploy a test application from the Developer Catalog and expose it with a route. Prerequisites You logged in to the Red Hat Hybrid Cloud Console . You created a Red Hat OpenShift Service on AWS cluster. You configured an identity provider for your cluster. You added your user account to the configured identity provider. Procedure Go to the Cluster List page in OpenShift Cluster Manager . Click the options icon (...) to the cluster you want to view. Click Open console . Your cluster console opens in a new browser window. Log in to your Red Hat account with your configured identity provider credentials. In the Administrator perspective, select Home Projects Create Project . Enter a name for your project and optionally add a Display Name and Description . Click Create to create the project. Switch to the Developer perspective and select +Add . Verify that the selected Project is the one that you just created. In the Developer Catalog dialog, select All services . In the Developer Catalog page, select Languages JavaScript from the menu. Click Node.js , and then click Create to open the Create Source-to-Image application page. Note You might need to click Clear All Filters to display the Node.js option. In the Git section, click Try sample . Add a unique name in the Name field. The value will be used to name the associated resources. Confirm that Deployment and Create a route are selected. 
Click Create to deploy the application. It will take a few minutes for the pods to deploy. Optional: Check the status of the pods in the Topology pane by selecting your Node.js app and reviewing its sidebar. You must wait for the nodejs build to complete and for the nodejs pod to be in a Running state before continuing. When the deployment is complete, click the route URL for the application, which has a format similar to the following: A new tab in your browser opens with a message similar to the following: Optional: Delete the application and clean up the resources that you created: In the Administrator perspective, navigate to Home Projects . Click the action menu for your project and select Delete Project . 1.8. Revoking administrator privileges and user access You can revoke cluster-admin or dedicated-admin privileges from a user by using the Red Hat OpenShift Service on AWS (ROSA) CLI, rosa . To revoke cluster access from a user, you must remove the user from your configured identity provider. Follow the procedures in this section to revoke administrator privileges or cluster access from a user. Revoking administrator privileges from a user Follow the steps in this section to revoke cluster-admin or dedicated-admin privileges from a user. Procedure To revoke cluster-admin privileges from an identity provider user: Revoke the cluster-admin privilege: USD rosa revoke user cluster-admin --user=<idp_user_name> --cluster=<cluster_name> 1 1 Replace <idp_user_name> and <cluster_name> with the name of the identity provider user and your cluster name. Example output ? Are you sure you want to revoke role cluster-admins from user <idp_user_name> in cluster <cluster_name>? Yes I: Revoked role 'cluster-admins' from user '<idp_user_name>' on cluster '<cluster_name>' Verify that the user is not listed as a member of the cluster-admins group: USD rosa list users --cluster=<cluster_name> Example output W: There are no users configured for cluster '<cluster_name>' To revoke dedicated-admin privileges from an identity provider user: Revoke the dedicated-admin privilege: USD rosa revoke user dedicated-admin --user=<idp_user_name> --cluster=<cluster_name> Example output ? Are you sure you want to revoke role dedicated-admins from user <idp_user_name> in cluster <cluster_name>? Yes I: Revoked role 'dedicated-admins' from user '<idp_user_name>' on cluster '<cluster_name>' Verify that the user is not listed as a member of the dedicated-admins group: USD rosa list users --cluster=<cluster_name> Example output W: There are no users configured for cluster '<cluster_name>' Revoking user access to a cluster You can revoke cluster access for an identity provider user by removing them from your configured identity provider. You can configure different types of identity providers for your ROSA cluster. The following example procedure revokes cluster access for a member of a GitHub organization that is configured for identity provision to the cluster. Procedure Navigate to github.com and log in to your GitHub account. Remove the user from your GitHub organization. Follow the steps in Removing a member from your organization in the GitHub documentation. 1.9. Deleting a ROSA cluster and the AWS STS resources You can delete a ROSA cluster that uses the AWS Security Token Service (STS) by using the Red Hat OpenShift Service on AWS (ROSA) CLI, rosa . 
You can also use the ROSA CLI to delete the AWS Identity and Access Management (IAM) account-wide roles, the cluster-specific Operator roles, and the OpenID Connect (OIDC) provider. To delete the account-wide inline and Operator policies, you can use the AWS IAM Console. Important Account-wide IAM roles and policies might be used by other ROSA clusters in the same AWS account. You must only remove the resources if they are not required by other clusters. Procedure Delete a cluster and watch the logs, replacing <cluster_name> with the name or ID of your cluster: USD rosa delete cluster --cluster=<cluster_name> --watch Important You must wait for the cluster deletion to complete before you remove the IAM roles, policies, and OIDC provider. The account-wide roles are required to delete the resources created by the installer. The cluster-specific Operator roles are required to clean up the resources created by the OpenShift Operators. The Operators use the OIDC provider to authenticate. Delete the OIDC provider that the cluster Operators use to authenticate: USD rosa delete oidc-provider -c <cluster_id> --mode auto 1 1 Replace <cluster_id> with the ID of the cluster. Note You can use the -y option to automatically answer yes to the prompts. Delete the cluster-specific Operator IAM roles: USD rosa delete operator-roles -c <cluster_id> --mode auto 1 1 Replace <cluster_id> with the ID of the cluster. Delete the account-wide roles: Important Account-wide IAM roles and policies might be used by other ROSA clusters in the same AWS account. You must only remove the resources if they are not required by other clusters. USD rosa delete account-roles --prefix <prefix> --mode auto 1 1 You must include the --prefix argument. Replace <prefix> with the prefix of the account-wide roles to delete. If you did not specify a custom prefix when you created the account-wide roles, specify the default prefix, ManagedOpenShift . Delete the account-wide inline and Operator IAM policies that you created for ROSA deployments that use STS: Log in to the AWS IAM Console . Navigate to Access management Policies and select the checkbox for one of the account-wide policies. With the policy selected, click on Actions Delete to open the delete policy dialog. Enter the policy name to confirm the deletion and select Delete to delete the policy. Repeat this step to delete each of the account-wide inline and Operator policies for the cluster. 1.10. Next steps Adding services to a cluster using the OpenShift Cluster Manager console Managing compute nodes Configuring the monitoring stack 1.11. Additional resources For more information about setting up accounts and ROSA clusters using AWS STS, see Understanding the ROSA with STS deployment workflow . For more information about setting up accounts and ROSA clusters without using AWS STS, see Understanding the ROSA deployment workflow . For more information about upgrading your cluster, see Upgrading ROSA Classic clusters . | [
"aws sts get-caller-identity --output text",
"<aws_account_id> arn:aws:iam::<aws_account_id>:user/<username> <aws_user_id>",
"tar xvf rosa-linux.tar.gz",
"sudo mv rosa /usr/local/bin/rosa",
"rosa version",
"1.2.47 Your ROSA CLI is up to date.",
"rosa login",
"To login to your Red Hat account, get an offline access token at https://console.redhat.com/openshift/token/rosa ? Copy the token and paste it here:",
"? Copy the token and paste it here: ******************* [full token length omitted]",
"rosa whoami",
"AWS Account ID: <aws_account_number> AWS Default Region: us-east-1 AWS ARN: arn:aws:iam::<aws_account_number>:user/<aws_user_name> OCM API: https://api.openshift.com OCM Account ID: <red_hat_account_id> OCM Account Name: Your Name OCM Account Username: [email protected] OCM Account Email: [email protected] OCM Organization ID: <org_id> OCM Organization Name: Your organization OCM Organization External ID: <external_org_id>",
"rosa download openshift-client",
"tar xvf openshift-client-linux.tar.gz",
"sudo mv oc /usr/local/bin/oc",
"rosa verify openshift-client",
"I: Verifying whether OpenShift command-line tool is available I: Current OpenShift Client Version: 4.17.3",
"rosa create ocm-role",
"rosa create user-role",
"rosa create account-roles",
"rosa create admin --cluster=<cluster_name> 1",
"W: It is recommended to add an identity provider to login to this cluster. See 'rosa create idp --help' for more information. I: Admin account has been added to cluster '<cluster_name>'. I: Please securely store this generated password. If you lose this password you can delete and recreate the cluster admin user. I: To login, run the following command: oc login https://api.example-cluster.wxyz.p1.openshiftapps.com:6443 --username cluster-admin --password d7Rca-Ba4jy-YeXhs-WU42J I: It may take up to a minute for the account to become active.",
"rosa create idp --cluster=<cluster_name> --interactive 1",
"I: Interactive mode enabled. Any optional fields can be left empty and a default will be selected. ? Type of identity provider: github ? Identity provider name: github-1 ? Restrict to members of: organizations ? GitHub organizations: <github_org_name> 1 ? To use GitHub as an identity provider, you must first register the application: - Open the following URL: https://github.com/organizations/<github_org_name>/settings/applications/new?oauth_application%5Bcallback_url%5D=https%3A%2F%2Foauth-openshift.apps.<cluster_name>/<random_string>.p1.openshiftapps.com%2Foauth2callback%2Fgithub-1&oauth_application%5Bname%5D=<cluster_name>&oauth_application%5Burl%5D=https%3A%2F%2Fconsole-openshift-console.apps.<cluster_name>/<random_string>.p1.openshiftapps.com - Click on 'Register application'",
"? Client ID: <github_client_id> 1 ? Client Secret: [? for help] <github_client_secret> 2 ? GitHub Enterprise Hostname (optional): ? Mapping method: claim 3 I: Configuring IDP for cluster '<cluster_name>' I: Identity Provider 'github-1' has been created. It will take up to 1 minute for this configuration to be enabled. To add cluster administrators, see 'rosa grant user --help'. To login into the console, open https://console-openshift-console.apps.<cluster_name>.<random_string>.p1.openshiftapps.com and click on github-1.",
"rosa list idps --cluster=<cluster_name>",
"NAME TYPE AUTH URL github-1 GitHub https://oauth-openshift.apps.<cluster_name>.<random_string>.p1.openshiftapps.com/oauth2callback/github-1",
"rosa grant user cluster-admin --user=<idp_user_name> --cluster=<cluster_name> 1",
"I: Granted role 'cluster-admins' to user '<idp_user_name>' on cluster '<cluster_name>'",
"rosa list users --cluster=<cluster_name>",
"ID GROUPS <idp_user_name> cluster-admins",
"rosa grant user dedicated-admin --user=<idp_user_name> --cluster=<cluster_name>",
"I: Granted role 'dedicated-admins' to user '<idp_user_name>' on cluster '<cluster_name>'",
"rosa list users --cluster=<cluster_name>",
"ID GROUPS <idp_user_name> dedicated-admins",
"rosa describe cluster -c <cluster_name> | grep Console 1",
"Console URL: https://console-openshift-console.apps.example-cluster.wxyz.p1.openshiftapps.com",
"https://nodejs-<project>.<cluster_name>.<hash>.<region>.openshiftapps.com/",
"Welcome to your Node.js application on OpenShift",
"rosa revoke user cluster-admin --user=<idp_user_name> --cluster=<cluster_name> 1",
"? Are you sure you want to revoke role cluster-admins from user <idp_user_name> in cluster <cluster_name>? Yes I: Revoked role 'cluster-admins' from user '<idp_user_name>' on cluster '<cluster_name>'",
"rosa list users --cluster=<cluster_name>",
"W: There are no users configured for cluster '<cluster_name>'",
"rosa revoke user dedicated-admin --user=<idp_user_name> --cluster=<cluster_name>",
"? Are you sure you want to revoke role dedicated-admins from user <idp_user_name> in cluster <cluster_name>? Yes I: Revoked role 'dedicated-admins' from user '<idp_user_name>' on cluster '<cluster_name>'",
"rosa list users --cluster=<cluster_name>",
"W: There are no users configured for cluster '<cluster_name>'",
"rosa delete cluster --cluster=<cluster_name> --watch",
"rosa delete oidc-provider -c <cluster_id> --mode auto 1",
"rosa delete operator-roles -c <cluster_id> --mode auto 1",
"rosa delete account-roles --prefix <prefix> --mode auto 1"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/getting_started/rosa-quickstart-guide-ui |
Chapter 41. Maven settings and repositories for Red Hat Process Automation Manager | Chapter 41. Maven settings and repositories for Red Hat Process Automation Manager When you create a Red Hat Process Automation Manager project, Business Central uses the Maven repositories that are configured for Business Central. You can use the Maven global or user settings to direct all Red Hat Process Automation Manager projects to retrieve dependencies from the public Red Hat Process Automation Manager repository by modifying the Maven project object model (POM) file ( pom.xml ). You can also configure Business Central and KIE Server to use an external Maven repository or prepare a Maven mirror for offline use. For more information about Red Hat Process Automation Manager packaging and deployment options, see Packaging and deploying an Red Hat Process Automation Manager project . 41.1. Configuring Maven using the project configuration file ( pom.xml ) To use Maven for building and managing your Red Hat Process Automation Manager projects, you must create and configure the POM file ( pom.xml ). This file holds configuration information for your project. For more information, see Apache Maven Project . Procedure Generate a Maven project. A pom.xml file is automatically generated when you create a Maven project. Edit the pom.xml file to add more dependencies and new repositories. Maven downloads all of the JAR files and the dependent JAR files from the Maven repository when you compile and package your project. Find the schema for the pom.xml file at http://maven.apache.org/maven-v4_0_0.xsd . For more information about POM files, see Apache Maven Project POM . 41.2. Modifying the Maven settings file Red Hat Process Automation Manager uses Maven settings.xml file to configure it's Maven execution. You must create and activate a profile in the settings.xml file and declare the Maven repositories used by your Red Hat Process Automation Manager projects. For information about the Maven settings.xml file, see the Apache Maven Project Setting Reference . Procedure In the settings.xml file, declare the repositories that your Red Hat Process Automation Manager projects use. Usually, this is either the online Red Hat Process Automation Manager Maven repository or the Red Hat Process Automation Manager Maven repository that you download from the Red Hat Customer Portal and any repositories for custom artifacts that you want to use. Ensure that Business Central or KIE Server is configured to use the settings.xml file. For example, specify the kie.maven.settings.custom=<SETTINGS_FILE_PATH> property where <SETTINGS_FILE_PATH> is the path to the settings.xml file. On Red Hat JBoss Web Server, for KIE Server add -Dkie.maven.settings.custom=<SETTINGS_FILE_PATH> to the CATALINA_OPTS section of the setenv.sh (Linux) or setenv.bat (Windows) file. For standalone Business Central, enter the following command: 41.3. Adding Maven dependencies for Red Hat Process Automation Manager To use the correct Maven dependencies in your Red Hat Process Automation Manager project, add the Red Hat Business Automation bill of materials (BOM) files to the project's pom.xml file. The Red Hat Business Automation BOM applies to both Red Hat Decision Manager and Red Hat Process Automation Manager. When you add the BOM files, the correct versions of transitive dependencies from the provided Maven repositories are included in the project. 
For more information about the Red Hat Business Automation BOM, see What is the mapping between Red Hat Process Automation Manager and the Maven library version? . Procedure Declare the Red Hat Business Automation BOM in the pom.xml file: <dependencyManagement> <dependencies> <dependency> <groupId>com.redhat.ba</groupId> <artifactId>ba-platform-bom</artifactId> <version>7.13.5.redhat-00002</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> <dependencies> <!-- Your dependencies --> </dependencies> Declare dependencies required for your project in the <dependencies> tag. After you import the product BOM into your project, the versions of the user-facing product dependencies are defined so you do not need to specify the <version> sub-element of these <dependency> elements. However, you must use the <dependency> element to declare dependencies which you want to use in your project. For standalone projects that are not authored in Business Central, specify all dependencies required for your projects. In projects that you author in Business Central, the basic decision engine and process engine dependencies are provided automatically by Business Central. For a basic Red Hat Process Automation Manager project, declare the following dependencies, depending on the features that you want to use: Embedded process engine dependencies <!-- Public KIE API --> <dependency> <groupId>org.kie</groupId> <artifactId>kie-api</artifactId> </dependency> <!-- Core dependencies for process engine --> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-flow</artifactId> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-flow-builder</artifactId> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-bpmn2</artifactId> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-runtime-manager</artifactId> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-persistence-jpa</artifactId> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-query-jpa</artifactId> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-audit</artifactId> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-kie-services</artifactId> </dependency> <!-- Dependency needed for default WorkItemHandler implementations. --> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-workitems-core</artifactId> </dependency> <!-- Logging dependency. You can use any logging framework compatible with slf4j. --> <dependency> <groupId>ch.qos.logback</groupId> <artifactId>logback-classic</artifactId> <version>USD{logback.version}</version> </dependency> For a Red Hat Process Automation Manager project that uses CDI, you typically declare the following dependencies: CDI-enabled process engine dependencies <dependency> <groupId>org.kie</groupId> <artifactId>kie-api</artifactId> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-kie-services</artifactId> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-services-cdi</artifactId> </dependency> For a basic Red Hat Process Automation Manager project, declare the following dependencies: Embedded decision engine dependencies <dependency> <groupId>org.drools</groupId> <artifactId>drools-compiler</artifactId> </dependency> <!-- Dependency for persistence support. 
--> <dependency> <groupId>org.drools</groupId> <artifactId>drools-persistence-jpa</artifactId> </dependency> <!-- Dependencies for decision tables, templates, and scorecards. For other assets, declare org.drools:business-central-models-* dependencies. --> <dependency> <groupId>org.drools</groupId> <artifactId>drools-decisiontables</artifactId> </dependency> <dependency> <groupId>org.drools</groupId> <artifactId>drools-templates</artifactId> </dependency> <dependency> <groupId>org.drools</groupId> <artifactId>drools-scorecards</artifactId> </dependency> <!-- Dependency for loading KJARs from a Maven repository using KieScanner. --> <dependency> <groupId>org.kie</groupId> <artifactId>kie-ci</artifactId> </dependency> To use KIE Server, declare the following dependencies: Client application KIE Server dependencies <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-client</artifactId> </dependency> To create a remote client for Red Hat Process Automation Manager, declare the following dependency: Client dependency <dependency> <groupId>org.uberfire</groupId> <artifactId>uberfire-rest-client</artifactId> </dependency> When creating a JAR file that includes assets, such as rules and process definitions, specify the packaging type for your Maven project as kjar and use org.kie:kie-maven-plugin to process the kjar packaging type located under the <project> element. In the following example, USD{kie.version} is the Maven library version listed in What is the mapping between Red Hat Process Automation Manager and the Maven library version? : <packaging>kjar</packaging> <build> <plugins> <plugin> <groupId>org.kie</groupId> <artifactId>kie-maven-plugin</artifactId> <version>USD{kie.version}</version> <extensions>true</extensions> </plugin> </plugins> </build> 41.4. Preparing a Maven mirror repository for offline use If your Red Hat Process Automation Manager deployment does not have outgoing access to the public Internet, you must prepare a Maven repository with a mirror of all the necessary artifacts and make this repository available to your environment. Note You do not need to complete this procedure if your Red Hat Process Automation Manager deployment is connected to the Internet. Prerequisites A computer that has outgoing access to the public Internet is available. Procedure On the computer that has an outgoing connection to the public Internet, complete the following steps: Navigate to the Software Downloads page in the Red Hat Customer Portal (login required), and select the product and version from the drop-down options: Product: Process Automation Manager Version: 7.13.5 Download and extract the Red Hat Process Automation Manager 7.13.5 Offliner Content List ( rhpam-7.13.5-offliner.zip ) product deliverable file. Extract the contents of the rhpam-7.13.5-offliner.zip file into any directory. Change to the directory and enter the following command: This command creates the repository subdirectory and downloads the necessary artifacts into this subdirectory. This is the mirror repository. If a message reports that some downloads have failed, run the same command again. If downloads fail again, contact Red Hat support. If you developed services outside of Business Central and they have additional dependencies, add the dependencies to the mirror repository. If you developed the services as Maven projects, you can use the following steps to prepare these dependencies automatically. Complete the steps on the computer that has an outgoing connection to the public Internet. 
Create a backup of the local Maven cache directory ( ~/.m2/repository ) and then clear the directory. Build the source of your projects using the mvn clean install command. For every project, enter the following command to ensure that Maven downloads all runtime dependencies for all the artifacts generated by the project: Replace /path/to/project/pom.xml with the path of the pom.xml file of the project. Copy the contents of the local Maven cache directory ( ~/.m2/repository ) to the repository subdirectory that was created. Copy the contents of the repository subdirectory to a directory on the computer on which you deployed Red Hat Process Automation Manager. This directory becomes the offline Maven mirror repository. Create and configure a settings.xml file for your Red Hat Process Automation Manager deployment as described in Section 41.2, "Modifying the Maven settings file" . Make the following changes in the settings.xml file: Under the <profile> tag, if a <repositories> or <pluginRepositores> tag is missing, add the missing tags. Under <repositories> add the following content: <repository> <id>offline-repository</id> <url>file:///path/to/repo</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> Replace /path/to/repo with the full path to the local Maven mirror repository directory. Under <pluginRepositories> add the following content: <repository> <id>offline-plugin-repository</id> <url>file:///path/to/repo</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> Replace /path/to/repo with the full path to the local Maven mirror repository directory. | [
"java -jar rhpam-7.13.5-business-central-standalone.jar --cli-script=application-script.cli -Dkie.maven.settings.custom=<SETTINGS_FILE_PATH>",
"<dependencyManagement> <dependencies> <dependency> <groupId>com.redhat.ba</groupId> <artifactId>ba-platform-bom</artifactId> <version>7.13.5.redhat-00002</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> <dependencies> <!-- Your dependencies --> </dependencies>",
"<!-- Public KIE API --> <dependency> <groupId>org.kie</groupId> <artifactId>kie-api</artifactId> </dependency> <!-- Core dependencies for process engine --> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-flow</artifactId> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-flow-builder</artifactId> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-bpmn2</artifactId> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-runtime-manager</artifactId> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-persistence-jpa</artifactId> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-query-jpa</artifactId> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-audit</artifactId> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-kie-services</artifactId> </dependency> <!-- Dependency needed for default WorkItemHandler implementations. --> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-workitems-core</artifactId> </dependency> <!-- Logging dependency. You can use any logging framework compatible with slf4j. --> <dependency> <groupId>ch.qos.logback</groupId> <artifactId>logback-classic</artifactId> <version>USD{logback.version}</version> </dependency>",
"<dependency> <groupId>org.kie</groupId> <artifactId>kie-api</artifactId> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-kie-services</artifactId> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-services-cdi</artifactId> </dependency>",
"<dependency> <groupId>org.drools</groupId> <artifactId>drools-compiler</artifactId> </dependency> <!-- Dependency for persistence support. --> <dependency> <groupId>org.drools</groupId> <artifactId>drools-persistence-jpa</artifactId> </dependency> <!-- Dependencies for decision tables, templates, and scorecards. For other assets, declare org.drools:business-central-models-* dependencies. --> <dependency> <groupId>org.drools</groupId> <artifactId>drools-decisiontables</artifactId> </dependency> <dependency> <groupId>org.drools</groupId> <artifactId>drools-templates</artifactId> </dependency> <dependency> <groupId>org.drools</groupId> <artifactId>drools-scorecards</artifactId> </dependency> <!-- Dependency for loading KJARs from a Maven repository using KieScanner. --> <dependency> <groupId>org.kie</groupId> <artifactId>kie-ci</artifactId> </dependency>",
"<dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-client</artifactId> </dependency>",
"<dependency> <groupId>org.uberfire</groupId> <artifactId>uberfire-rest-client</artifactId> </dependency>",
"<packaging>kjar</packaging> <build> <plugins> <plugin> <groupId>org.kie</groupId> <artifactId>kie-maven-plugin</artifactId> <version>USD{kie.version}</version> <extensions>true</extensions> </plugin> </plugins> </build>",
"./offline-repo-builder.sh offliner.txt",
"mvn -e -DskipTests dependency:go-offline -f /path/to/project/pom.xml --batch-mode -Djava.net.preferIPv4Stack=true",
"<repository> <id>offline-repository</id> <url>file:///path/to/repo</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository>",
"<repository> <id>offline-plugin-repository</id> <url>file:///path/to/repo</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository>"
]
| https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/installing_and_configuring_red_hat_process_automation_manager/maven-repo-using-con_install-on-jws |
Chapter 2. New Features and Enhancements | Chapter 2. New Features and Enhancements 2.1. Security Support for automatic update of credentials in a credential store Elytron now automates adding and updating a credential to a previously defined credential store when you configure a credential reference that specifies both the store and clear-text attributes. With this update, you do not need to add a credential to an existing credential store before you can reference it from a credential-reference . The automated process reduces the number of steps you need to perform for referencing new credentials in different subsystems. New role mapper regex-role-mapper in Elytron Elytron now provides a new role mapper, regex-role-mapper , to define a regular expression (regex) based mapping of security roles. You can use regex-role-mapper to translate a list of roles to simpler roles. For example: *-admin to admin *-user to user With regex-role-mapper , you do not need to implement your own custom component to translate security roles. For more information, see regex-role-mapper Attributes . Accessing IP address of remote client You can now add the source-address-role-decoder role decoder to the elytron subsystem. By configuring this role decoder, you can gain additional information from a remote client when making authorization decisions. The source-address-role-decoder extracts the IP address of a remote client and checks that it matches the IP address specified in the pattern attribute or the source-address attribute. If the IP address of the remote client matches the IP address specified in either attribute, the roles attribute then assigns roles to the user. When you have configured source-address-role-decoder , you can reference it in the role-decoder attribute of the security domain . The aggregate-role-decoder role decoder The aggregate-role-decoder consists of two or more role decoders. After each specified role decoder completes its operation, it adds roles to the aggregate-role-decoder . You can use aggregate-role-decoder to make authorization decisions by adding role decoders that assign roles for a user. Further, aggregate-role-decoder provides you with a convenient way to aggregate the roles returned from each role decoder. Using TLS protocol version 1.3 with JDK 11 Elytron now provides the ability to use Transport Layer Security (TLS) Protocol version 1.3 for JBoss EAP running against JDK 11. TLS 1.3 is disabled by default. You can enable TLS 1.3 by configuring the new cipher-suite-names attribute in the SSL Context resource definition in the elytron subsystem. Compared with TLS 1.2, you might experience reduced performance when running TLS 1.3 with JDK 11. Diminished performance might occur when a very large number of TLS 1.3 requests are being made. A system upgrade to a newer JDK version can improve performance. Test your setup with TLS 1.3 for performance degradation before enabling it in production. Enable support for the TLS 1.3 protocol with the OpenSSL provider for TLS JBoss EAP 7.4 includes support for the Transport Layer Security (TLS) protocol version 1.3. The use of TLS 1.3 protocol with the OpenSSL provider for TLS is disabled by default. You can enable OpenSSL provider for TLS by either configuring the providers attribute in the ssl-context configuration or by registering the OpenSSL provider ahead of all globally registered providers using the initial-providers attribute in the Elytron subsystem configuration. 
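For either TLS provider, the TLS 1.3 cipher suites are selected with the cipher-suite-names attribute of the SSL context resource. The following management CLI lines are a minimal sketch only; they assume an existing server-ssl-context named httpsSSC, which is an illustrative name rather than a default:
/subsystem=elytron/server-ssl-context=httpsSSC:write-attribute(name=cipher-suite-names, value="TLS_AES_256_GCM_SHA384:TLS_AES_128_GCM_SHA256")
/subsystem=elytron/server-ssl-context=httpsSSC:write-attribute(name=protocols, value=["TLSv1.3","TLSv1.2"])
reload
Keeping TLSv1.2 in the protocols list preserves compatibility with clients that cannot negotiate TLS 1.3.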
You can enable support for the TLS 1.3 protocol with the OpenSSL provider for TLS by configuring the cipher-suite-names attribute in the ssl-context configuration. Compared with TLS 1.2, you might experience reduced performance when running TLS 1.3 with JDK 11. Diminished performance might occur when a very large number of TLS 1.3 requests are being made. A system upgrade to a newer JDK version can improve performance. Test your setup with TLS 1.3 for performance degradation before enabling it in production. Re-enable support for the TLS 1.1 protocol in your JDK configuration Newer versions of JDK might disable the Transport Layer Security (TLS) protocol version 1.1 by default. If your JBoss EAP 7.4 configuration must comply with the Federal Information Processing Standard (FIPS), you might need to re-enable support for the TLS 1.1 protocol in your JDK configuration. For more information about TLS protocols compatible with JBoss EAP 7.4, see the Red Hat JBoss Enterprise Application Platform (EAP) 7 Supported Configurations page on the Red Hat Customer Portal. Using SSH credentials to connect to a remote Git SSH repository With JBoss EAP 7.4, you can use SSH credentials to connect to a remote Git SSH repository. This repository can manage your server configuration data, properties files, and deployments. You must use the elytron configuration file to specify SSH credentials. You can then start your standalone server instance and have a remote Git SSH repository manage your server configuration file history. If necessary, you can generate SSH keys by using one of the following methods: The elytron-tool.sh script The OpenSSH command line For information about connecting to a remote Git SSH repository, see Using a remote Git SSH repository . New principal transformer added to the elytron subsystem JBoss EAP 7.4 includes a new principal transformer, case-principal-transformer , in the elytron subsystem. You can use the case-principal-transformer to change a principal's username to either uppercase or lowercase characters. Ability to automatically generate a self-signed certificate With JBoss EAP 7.4, you can automatically generate a self-signed certificate. Use a self-signed certificate only in a test environment. Do not use a self-signed certificate in a production environment. To use this new feature, in the undertow subsystem, update the configuration of the http-listener . After you update the configuration, and if no keystore file exists, the first time JBoss EAP receives an HTTPS request, the system automatically generates a self-signed certificate. JBoss EAP logs a warning when a self-signed certificate is used. Configuration of multiple security realms to support failover With JBoss EAP 7.4, you can configure a failover security realm. If the security realm is not available, JBoss EAP uses the failover realm. The following code illustrates an example configuration: <failover-realm name="myfailoverrealm" delegate-realm="LdapRealm" failover-realm="LocalRealm" /> Distributed identities across multiple security realms With JBoss EAP 7.4, you can configure a distributed security realm, which sequentially invokes a list of configured realms until a realm with the identity is found. 
The following code illustrates an example configuration: <distributed-realm name="mymainrealm" realms="realm1 realm2 realm3" /> Access to external credentials over HTTP in the elytron subsystem With JBoss EAP 7.4, JBoss EAP can authenticate a user based on credentials established externally when using HTTP authentication. To use this capability, configure a security domain to use the External mechanism when authenticating users. Use the Elytron client authentication configuration with the RESTEasy client The JBoss EAP 7.4 release integrates the RESTEasy client with the Elytron client. The RESTEasy client uses authentication information, such as credentials, bearer tokens, and SSL configurations, from an Elytron client configuration. You can specify the Elytron client configuration that the RESTEasy client can use in the following ways: By providing the wildfly-config.xml file to the Elytron client. The Elytron client searches the class path for wildfly-config.xml or META-INF/wildfly-config.xml . Alternatively, you can use the wildfly.config.url system property to specify the path for the wildfly-config.xml file. By using the Elytron client API to programmatically specify the authentication configuration. Secret key credential store for providing initial secret key You can now provide an initial secret key to the application server process using a new type of credential store named secret-key-credential-store . With this credential store, you get more robust security than password-based encryption because you can now manage your own initial secret. For information about providing an initial key to JBoss EAP, see Providing an initial key to JBoss EAP to unlock secured resources . Additionally, you can now generate secret keys, and also export and import previously generated secret keys, for all credential stores. You can also use existing credential stores for storing secret keys, and management operations to maintain them. For more information, see Credential store operations using the JBoss EAP management CLI . Encrypted expressions for securing security-sensitive strings You can now use encrypted expressions to securely store security-sensitive strings in the management model. Elytron encrypts plain text strings using Advanced Encryption Standard (AES) encryption and decrypts the encrypted expression dynamically at runtime using a SecretKey key stored in a credential store. You can configure encrypted expressions using the new resource expression-encryption in the elytron subsystem. Use the create-expression management operation to create encrypted expressions. For information about encrypted expressions, see Encrypted expressions in Elytron . Note Use the credential store for storing passwords. The password vault is deprecated and will be removed in a future release. Updates to elytron-tool You can use the elytron-tool with both the existing and new credential stores. Use the credential-store command to manage secret keys and to generate encrypted tokens for use in expressions. 2.2. Server management Support for Microsoft Windows Server 2019 You can use the Microsoft Windows Server 2019 virtual operating system while using JBoss EAP 7.4 in Microsoft Azure. Use a global directory to distribute shared libraries across deployments In JBoss EAP 7.3 and earlier versions, you could not create and configure a global directory to distribute shared libraries across deployments running on a server. These capabilities have been added to the ee subsystem. 
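As a minimal management CLI sketch, registering a global directory might look like the following; the directory name shared-libs and the path /opt/jboss/shared-libs are illustrative assumptions, not defaults:
/subsystem=ee/global-directory=shared-libs:add(path="/opt/jboss/shared-libs")
reload
After the reload, JAR files placed in that directory are made available to every deployment on the server.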
A global directory offers a better alternative to the global module approach. For example, if you want to change the name of a library listed in a global module, you must remove the global module, change the library's name, and then add the library to a new global module. If you change the name of a library that is listed in the global directory, you only need to restart the server to make the library name change available for all deployments. Using a global directory is also a better solution if you want to share multiple libraries across deployed applications. For more information, see Define global modules in the JBoss EAP Configuration Guide . Support for read-only server configuration directories In JBoss EAP 7.3 and earlier versions, servers fail to start if the configuration directory is configured as read-only. JBoss EAP 7.4 introduces the ability to use a read-only server configuration directory. If the configuration directory is read-only, include the --read-only-server-config switch in a command to start the server. Ability to pass JBoss Module parameters In the configuration files for JBoss EAP 7.3 and earlier versions, JBoss Modules did not include the ability to pass module parameters. In the script configuration files for JBoss EAP 7.4 you can now add a MODULE_OPTS=-javaagent:my-agent.jar environment variable to pass JBoss Module parameters. You can use this capability when you previously were required to add the log manager on the boot class path. Infinispan APIs Previously, the Infinispan APIs were flagged as private within EAP as they are a part of the Red Hat Data Grid project. These APIs are now fully included and supported in JBoss EAP7.4. The modules included are: org.infinispan org.infinispan.client.hotrod org.infinispan.commons Configurable option to allow requests during startup Added the option for a graceful startup mode for when user requests need to occur earlier in the startup process. This is supported for both managed domains, and standalone servers. For servers in a managed domain, the server-group element now supports the graceful-startup argument. The default for this is set to true . In a standalone server, set the command line option --graceful-startup=false to the required value. Configurable common script file added You can now use the file, common.conf , to customize your JBoss EAP instance environments. The file allows you to set common environment variables usable by all scripts in the USDJBOSS_HOME/bin directory. You can add the file to USDJBOSS_HOME/bin or add the path to the file in the COMMON_CONF environment variable. This functionality does support batch scripts and powershell scripts, with common.conf.bat and common.conf.ps1 respectively. 2.3. Management CLI Enhancement to the command CLI command The CLI command command has a new --node-child argument that you can use to edit the properties or manage the operations of a specific child node. Note Before you use the --node-child argument, check that the child node exists in the management model. Use the command add --node-child --help CLI command to view a description of the --node-child argument. New role decoder added to the elytron subsystem In JBoss EAP 7.4, you can use the management CLI to add the source-address-role-decoder role decoder to the elytron subsystem. By configuring this role decoder in the mappers element, you can gain additional information from a remote client when making authorization decisions. 
You can configure the following attributes for source-address-role-decoder : Attribute Description pattern A regular expression that specifies the IP address of a remote client or the IP addresses of remote clients to match. source-address Specifies the IP address of the remote client. roles Provides the list of roles to assign to a user if the IP address of the remote client matches the values specified in the pattern attribute or the source-address attribute. Exposing runtime statistics for managed executor services In the JBoss EAP release, runtime statistics were not available for managed executor services in the ee subsystem. You can now monitor the performance of managed executor services by viewing the runtime statistics generated with the new management CLI attributes. The following management CLI attributes have been added: active-thread-count : the approximate number of threads that are actively executing tasks completed-task-count : the approximate total number of tasks that have completed execution hung-thread-count : the number of executor threads that are hung max-thread-count : the largest number of executor threads current-queue-size : the current size of the executor's task queue task-count : the approximate total number of tasks that have ever been submitted for execution thread-count : the current number of executor threads Terminating hung tasks You can now manually attempt to terminate hung tasks in the EE subsystem. To do this, run the following command: A new attribute, hung-task-termination-period , is added to the managed-executor-service You can now automatically attempt to terminate hung tasks in the EE subsystem. A new attribute, hung-task-termination-period , is added to the managed-scheduled-executor-service resources to facilitate this. hung-task-termination-period : the period, in milliseconds, for attempting hung tasks automatic termination, by cancelling such tasks, and interrupting their executing threads. If value is 0, which is the default, hung tasks are never cancelled. Using property replacement for permissions files Users upgrading from JBoss EAP 6 to JBoss EAP 7 were unable to migrate file permissions in the Java policy file to the permissions.xml or jboss-permissions.xml files. It was not possible to use property replacement to migrate file permissions in the permissions.xml and jboss-permissions.xml files. You can now use property replacement for the permissions.xml and jboss-permissions.xml files. The property replacement for jboss-permissions.xml and permissions.xml files can be enabled or disabled using the jboss-descriptor-property-replacement and spec-descriptor-property-replacement attributes in the ee subsystem. Configuring RESTEasy parameters You can now use the JBoss EAP management CLI to change the settings for RESTEasy parameters. A global change applies the updated settings to new deployments as web.xml context parameters. You can modify the settings of a parameter by using the :write-attribute operation with the /subsystem=jaxrs resource in the management CLI. For example: Note When you change the settings of a parameter, the updated settings only apply to new deployments. Restart the server to apply the new settings to current deployments. See the RESTEasy Configuration Parameters table for details about RESTEasy elements. Configuring RESTEasy providers In RESTEasy, certain built-in providers are enabled by default. 
You can now use the new RESTEasy parameter resteasy.disable.providers in the JBoss EAP management CLI to disable specific built-in providers. The following example demonstrates how to disable the built-in provider FileProvider : You can use the resteasy.disable.providers parameter with the pre-existing parameter resteasy.use.builtin.providers to customize a specific provider configuration that applies to all new deployments. Note When you change the settings of the resteasy.disable.providers parameter, the updated settings only apply to new deployments. Restart the server to apply the new settings to current deployments. 2.4. Management console New role decoder added to the elytron subsystem In JBoss EAP 7.4, you can use the management console to add the source-address-role-decoder role decoder to the elytron subsystem. By configuring this role decoder in the mappers element, you gain additional information from a remote client when you make authorization decisions. You can configure the following attributes for source-address-role-decoder : Attribute Description pattern A regular expression that specifies the IP address of a remote client or the IP addresses of remote clients to match. source-address Specifies the IP address of the remote client. roles Provides the list of roles to assign to a user if the IP address of the remote client matches the values specified in the pattern attribute or the source-address attribute. 2.5. Logging The Apache Log4j2 API In JBoss EAP 7.4, you can use an Apache Log4j2 API instead of an Apache Log4j API to send application logging messages to your JBoss LogManager implementation. The JBoss EAP 7.4 release supports the Log4j2 API, but the release does not support the Apache Log4j2 Core implementation, org.apache.logging.log4j:log4j-core , or its configuration files. 2.6. Infinispan subsystem Using Infinispan APIs in deployments You can now use the Infinispan subsystem to create remote and embedded JBoss EAP caches without the need to install separate modules. This allows you to perform read and write operations to caches from your deployment but does not include support for all Data Grid capabilities or the following APIs: org.infinispan.query org.infinispan.counter.api org.infinispan.lock Additionally, the Data Grid CDI modules are not available from the Infinispan subsystem. If you have questions about using JBoss EAP with any Data Grid features or capabilities, contact the Red Hat support team. 2.7. ejb3 subsystem Default global stateful session bean timeout value in the ejb3 subsystem In the ejb3 subsystem, you can now configure a default global timeout value for all stateful session beans (SFSBs) that are deployed on your server instance by using the default-stateful-bean-session-timeout attribute. This attribute is located in the JBoss EAP server configuration file. You can configure the attribute using the Management CLI. Attribute behavior varies according to the server mode. For example: When running in the standalone server, the configured value gets applied to all SFSBs deployed on the application server. When running in the managed domain, all SFSBs that are deployed on server instances within server groups receive concurrent timeout values. Note When you change the global timeout value for the attribute, the updated settings only apply to new deployments. Reload the server to apply the new settings to current deployments. By default, the attribute value is set at -1 milliseconds, which means that deployed SFSBs are configured to never time out.
However, you can configure two other types of valid values for the attribute, as follows: When the value is 0 , SFSBs are eligible for immediate removal by the ejb container. When the value is greater than 0 , the SFSBs remain idle for the specified time before they are eligible for removal by the ejb container. You can still use the pre-existing @StatefulTimeout annotation or the stateful-timeout element, which is located in the ejb-jar.xml deployment descriptor, to configure the timeout value for an SFSB. However, setting such a configuration overrides the default global timeout value to the SFSB. Forcing Jakarta Enterprise Beans timer refresh in database-data-store You can now set the wildfly.ejb.timer.refresh.enabled flag using the EE interceptor. When an application calls the TimerService.getAllTimers() method, JBoss EAP checks this flag. If this flag is set to true , JBoss EAP refreshes the Jakarta Enterprise Beans timers from database before returning the result. In the JBoss EAP releases, the Jakarta Enterprise Beans timer reading could be refreshed in a database using the refresh-interval attribute found in database-data-store . Users could set the refresh-interval attribute value in milliseconds to refresh the Jakarta Enterprise Beans timer reading. For information about Jakarta Enterprise Beans clustered database backed-timers, see Jakarta Enterprise Beans clustered database timers in the Developing Jakarta Enterprise Beans Applications guide. Access to runtime information from Jakarta Enterprise Beans With JBoss EAP 7.4, you can access runtime data for Jakarta Enterprise Beans. Stateful session beans, stateless session beans, and singleton beans each return different runtime information. For example, the following command returns runtime data for a stateless session bean: Dynamic discovery of Jakarta Enterprise Beans over HTTP With JBoss EAP 7.4, you can use dynamic discovery of Jakarta Enterprise Beans over HTTP. To use this capability, add a configuration similar to the following to the ejb-remote profile: <remote connector-ref="http-remoting-connector" thread-pool-name="default"> <channel-creation-options> <option name="MAX_OUTBOUND_MESSAGES" value="1234" type="remoting"/> </channel-creation-options> <profiles> <profile name="my-profile"> <remote-http-connection name="ejb-http-connection" uri="http://127.0.0.1:8180/wildfly-services"/> </profile> </profiles> </remote> Global configuration of compression for remote Jakarta Enterprise Beans calls With JBoss EAP 7.4, you can configure compression of calls to remote Jakarta Enterprise Beans globally. To configure compression globally on a stand-alone client, specify the default.compression property in the jboss-ejb-client.properties file. To configure compression globally on a server, include the default-compression attribute in the <client-conext> element in the jboss-ejb-client.xml descriptor file in the application deployment unit. <jboss-ejb-client xmlns="urn:jboss:ejb-client:1.4"> <client-context default-compression="5"> <profile name="example-profile" /> </client-context> </jboss-ejb-client> New attribute for setting the principal propagation behavior in Elytron In JBoss EAP 7.4, a new optional attribute is added to the application-security-domain element of the ejb3 subsystem. With the new attribute, legacy-compliant-principal-propagation , you can control the principal propagation behavior of your Jakarta Enterprise Beans application that uses Elytron security. 
The default value of legacy-compliant-principal-propagation is true . Therefore, the principal propagation behavior is legacy security subsystem compliant by default. If you configure the attribute to false , Elytron provides any local unsecured Jakarta Enterprise Beans that have no incoming run as identity with an anonymous principal. This configuration complies with Elytron's behavior. For information about Elytron integration with the ejb subsystem, see Elytron integration with the ejb subsystem in the Developing Jakarta Enterprise Beans Applications guide. 2.8. Hibernate Configuring the wildfly.jpa.skipquerydetach persistence unit property You can configure the wildfly.jpa.skipquerydetach persistence unit property from the persistence.xml file of a container-managed persistence context. The default value for wildfly.jpa.skipquerydetach is false . Use this setting to set a transaction-scoped persistence context to immediately detach query results from an open persistence context. Configure wildfly.jpa.skipquerydetach as true , to set a transaction-scoped persistence context to detach query results when a persistence context is closed. This enables a non-standard specification extension. For applications that have the non-standalone specification extension jboss.as.jpa.deferdetach set as true , you can also set wildfly.jpa.skipquerydetach as true . 2.9. Web services Integrating Elytron with web services clients You can now configure web services clients to use the Elytron client configuration to obtain its credentials, authentication method, and SSL context. When you use JBossWS API to assign any configuration properties to the web services client, then the username, password, and SSL context from the Elytron client are also loaded and configured. The following authentication methods are configurable: UsernameToken Profile authentication HTTP Basic authentication TLS protocol You can use the <webservices/> element in wildfly-config.xml to specify that the credentials are for either HTTP Basic authentication,g UsernameToken Profile authentication or both. Ability for RESTEasy 3.x to access all standard MicroProfile ConfigSources RESTEasy 3.x can now access all standard MicroProfile ConfigSources . The following additional ConfigSources are also added to RESTEasy 3.x: servlet init-params (ordinal 60) filter init-params (ordinal 50) servlet context-params (ordinal 40) Previously, these capabilities were only included in RESTEasy 4.x. With this update, RESTEasy can access configuration parameters with or without the MicroProfile ConfigSources . In the absence of a MicroProfile Config implementation, RESTEasy falls back to the older method of gathering parameters from ServletContext parameters and init parameters. Configuring SameSite cookie attribute You can now configure the SameSite attribute for cookies in the current JBoss EAP release with a samesite-cookie predicated handler in the undertow subsystem. With this handler, you can update your server configuration without having to change your applications. This enhancement supports changes to the processing of cookies that were recently implemented in major web browsers to improve security. Configuring Eclipse MicroProfile REST client API in resteasy CDI modules The Eclipse MicroProfile REST client API is now an optional dependency that you can configure in resteasy CDI modules. 2.10. 
Messaging Duplicate messages on the JMS core bridge In rare instances, for a server with an overloaded target queue, sending large messages over the JMS core bridge might cause duplication of your messages. Ability to pause a topic With JBoss EAP 7.4, you can pause a topic in addition to pausing a queue. When you pause a topic, JBoss EAP receives messages but does not deliver them. When you resume the topic, JBoss EAP delivers the messages. To pause a topic, issue a command similar to the following example: To resume a topic, issue a command similar to the following example: Ability to detect network isolation of broker You can now ping a configurable list of hosts to detect network isolation of the broker. You can use the following parameters to configure this functionality: network-check-NIC network-check-period network-check-timeout network-check-list network-check-URL-list network-check-ping-command network-check-ping6-command For example, to check the network status by pinging the IP address 10.0.0.1 , issue the following command: call-timeout attribute The call-timeout attribute on the JMS core bridge is configurable as a part of ActiveMQ Artemis. In this release, you can configure the call-timeout attribute in JBoss EAP itself with the management API. Red Hat AMQ connection pools Red Hat AMQ recently began supporting connection pools in addition to single thread DB connections. With JBoss EAP 7.4, you can now use a connection pool when using Red Hat AMQ with JBoss EAP. 2.11. Scripts New environment variable for starting your server You can now add the MODULE_OPTS environment variable to the script configuration files in your JBoss EAP 7.4 instance. In a standalone server, use the following files: On RHEL, the startup script uses the EAP_HOME/bin/standalone.conf file. On your Windows server, at the command prompt, use the EAP_HOME\bin\standalone.bat file. On your Windows server, in PowerShell, use the EAP_HOME\bin\standalone.ps1 file. For servers in a domain, you can add the module-options attributes to a host JVM configuration or a server's JVM configuration. The MODULE_OPTS environment variable affects the entire server. For example, if you have a Java agent that requires logging, set the value of MODULE_OPTS to -javaagent:my-agent.jar . This will initialize your agent after you configure logging. 2.12. OpenShift Providing custom Galleon feature-pack support to your JBoss EAP S2I image You can use three new environment variables to provide custom Galleon feature-pack support for your JBoss EAP S2I image. You can use the environment variables outlined in the following table during your S2I build phase: Table 2.1. Custom Galleon feature-pack environment variables Environment variable Description GALLEON_DIR=<PATH> <PATH> is the relative directory to the application root directory that contains your optional Galleon custom content. Directory defaults to galleon . GALLEON_CUSTOM_FEATURE_PACKS_MAVEN_REPO=<PATH> <PATH> is the absolute path to a Maven local repository directory that contains custom feature-packs. Directory defaults to galleon/repository . GALLEON_PROVISION_FEATURE_PACKS=<LIST_OF_GALLEON_FEATURE_PACKS> <LIST_OF_GALLEON_FEATURE_PACKS> is a comma-separated list of your custom Galleon feature-packs identified by Maven coordinates. The listed feature-packs must be compatible with the version of the JBoss EAP 7.4 server present in the builder image.
You can use the GALLEON_PROVISION_LAYERS environment variable to set the Galleon layers, which were defined by your custom feature-packs, for your server. Read-only server configuration directory JBoss EAP supports a read-only server configuration directory. You can use the --read-only-server-config command line parameter to lock down the server configuration when the server configuration directory is a read-only directory. This functionality is available only when running JBoss EAP as a standalone server. Instructions to deploy JBoss EAP quickstarts on OpenShift For a JBoss EAP release, all OpenShift-compatible quickstarts now include instructions to deploy JBoss EAP quickstarts on OpenShift. The readme.html file of the quickstarts include the following sections: Getting Started with OpenShift Prepare OpenShift for Quickstart Deployment Import the Latest JBoss EAP for OpenShift Image Streams and Templates Deploy the JBoss EAP for OpenShift Source-to-Image (S2I) Quickstart to OpenShift OpenShift Post Deployment Tasks New Galleon layer for the Distributable Web subsystem JBoss EAP provides the web-passivation layer to supply the distributable-web subsystem configured with a local web container cache. The web-passivation layer is a decorator layer. 2.13. Red Hat CodeReady Workspaces (CRW) Red Hat CodeReady Workspaces supports JBoss EAP 7.4 development files You can use a JBoss EAP 7.4 development file, YAML file, to define a JBoss EAP development environment on CRW. You can download example JBoss EAP 7.4 development files from the jboss-eap-quickstarts GitHub web page. A development file includes the following components: A browser IDE configuration A list of predefined commands The application runtime environment The location of the repository that you must clone On CRW, you can choose one of the following ways to create a JBoss EAP 7.4 workspace environment: Copy and paste the URL of a JBoss EAP development file directly into the Devfile section of the Get Started page on your CRW dashboard. You must select the Load devfile button to add the development file to your CRW dashboard. Open your CRW instance on OpenShift and enter the URL of a JBoss EAP development file in the Devfile tab on the Workspace menu. Save the development file and then restart your CRW instance. Important If you want to use a Java 8 development file in your JBoss EAP 7.4 workspace environment, do not install a Java 11 plug-in because it conflicts with the Java 8 plug-in. Additional resources For information on how to download example JBoss EAP 7.4 development files, go to the kitchensink-jsp subdirectory in the jboss-eap-quickstarts directory on the GitHub web page. For more information about downloading and installing the latest version of CRW that is compatible with JBoss EAP 7.4, see Installing CodeReady Workspaces in the CRW Installation Guide . For more information about configuring CRW, see Configuring a CodeReady Workspaces 2.9 workspace in the CRW End-user Guide . | [
"batch /subsystem=undertow/server=default-server/https-listener=https:undefine-attribute(name=security-realm) /subsystem=undertow/server=default-server/https-listener=https:write-attribute(name=ssl-context,value=applicationSSC) run-batch reload",
"<failover-realm name=\"myfailoverrealm\" delegate-realm=\"LdapRealm\" failover-realm=\"LocalRealm\" />",
"<distributed-realm name=\"mymainrealm\" realms=\"realm1 realm2 realm3\" />",
"/subsystem=ee/managed-executor-service=default:terminate-hung-tasks()",
"/subsystem=jaxrs:write-attribute(name=resteasy-add-charset, value=false)",
"/subsystem=jaxrs:write-attribute(name=resteasy-disable-providers, value=[org.jboss.resteasy.plugins.providers.FileProvider])",
"/deployment=ejb-management.jar/subsystem=ejb3/stateless-session-bean=ManagedStatelessBean:read-resource(include-runtime)",
"<remote connector-ref=\"http-remoting-connector\" thread-pool-name=\"default\"> <channel-creation-options> <option name=\"MAX_OUTBOUND_MESSAGES\" value=\"1234\" type=\"remoting\"/> </channel-creation-options> <profiles> <profile name=\"my-profile\"> <remote-http-connection name=\"ejb-http-connection\" uri=\"http://127.0.0.1:8180/wildfly-services\"/> </profile> </profiles> </remote>",
"<jboss-ejb-client xmlns=\"urn:jboss:ejb-client:1.4\"> <client-context default-compression=\"5\"> <profile name=\"example-profile\" /> </client-context> </jboss-ejb-client>",
"/subsystem=messaging-activemq/server=default/jms-topic=topic:pause()",
"/subsystem=messaging-activemq/server=default/jms-topic=topic:resume()",
"/subsystem=messaging-activemq/server=default:write-attribute(name=network-check-list, value=\"10.0.0.1\")"
]
| https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/7.4.0_release_notes/new_features_and_enhancements |
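The SameSite cookie support described in section 2.9 above can be illustrated with a short management CLI sketch. This is an illustration rather than text from the release notes: the filter name addSameSite, the Lax mode, and the default-server/default-host targets are assumptions to adjust for your own configuration.
# Sketch: attach the samesite-cookie handler through an Undertow expression filter.
/subsystem=undertow/configuration=filter/expression-filter=addSameSite:add(expression="samesite-cookie(mode=Lax)")
/subsystem=undertow/server=default-server/host=default-host/filter-ref=addSameSite:add
reload
Handling the attribute in the server configuration keeps application code unchanged, which is the point of the enhancement.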
4.3. IPTables/Firewalls | 4.3. IPTables/Firewalls IPTables includes a SECMARK target module. This is used to set the security mark value associated with the packet for use by security subsystems such as SELinux. It is only valid in the mangle table. Refer to the following for example usage: | [
"iptables -t mangle -A INPUT -p tcp --dport 80 -j SECMARK --selctx \\ system_u:object_r:httpd_packet_t:s0"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/migration_planning_guide/sect-migration_guide-networking-iptables |
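The SECMARK example above labels individual packets only. In practice it is usually paired with the CONNSECMARK target, which saves the mark on the connection and restores it on related packets; the following sketch shows one common arrangement and is an illustration, not part of the original example.
# Label inbound HTTP packets, then save the security mark on the connection.
iptables -t mangle -A INPUT -p tcp --dport 80 -j SECMARK --selctx \
    system_u:object_r:httpd_packet_t:s0
iptables -t mangle -A INPUT -p tcp --dport 80 -j CONNSECMARK --save
# Restore the saved mark onto outbound packets belonging to the same connection.
iptables -t mangle -A OUTPUT -p tcp --sport 80 -j CONNSECMARK --restore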
Chapter 3. Important update on odo | Chapter 3. Important update on odo Red Hat does not provide information about odo on the OpenShift Dedicated documentation site. See the documentation maintained by Red Hat and the upstream community for documentation information related to odo . Important For the materials maintained by the upstream community, Red Hat provides support under Cooperative Community Support . | null | https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/cli_tools/developer-cli-odo |
Chapter 5. Update between minor releases To update the current version of Red Hat Hyperconverged Infrastructure for Virtualization 1.8 to the latest version, follow the steps in this section. 5.1. Update workflow Red Hat Hyperconverged Infrastructure for Virtualization is a software solution comprising several different components. Update the components in the following order to minimize disruption to your deployment: Prepare the systems to be updated. Update the Hosted Engine virtual machine and Red Hat Virtualization Manager 4.4. Update the hyperconverged hosts. 5.2. Preparing the systems to update This section describes the steps to prepare the systems for the update procedure. 5.2.1. Update subscriptions You can check which repositories a machine has access to by running the following command as the root user on the Hosted Engine Virtual Machine: Verify that the Hosted Engine virtual machine is subscribed to the following repositories: rhel-8-for-x86_64-baseos-rpms rhel-8-for-x86_64-appstream-rpms rhv-4.4-manager-for-rhel-8-x86_64-rpms fast-datapath-for-rhel-8-x86_64-rpms jb-eap-7.4-for-rhel-8-x86_64-rpms openstack-16.2-cinderlib-for-rhel-8-x86_64-rpms rhceph-4-tools-for-rhel-8-x86_64-rpms Verify that the Hyperconverged host (Red Hat Virtualization Node) is subscribed to the following repository: rhvh-4-for-rhel-8-x86_64-rpms See Enabling the Red Hat Virtualization Manager Repositories for more information on subscribing to the above-mentioned repositories. 5.2.2. Verify that data is not currently being synchronized using geo-replication Perform the following steps to check if geo-replication is in progress: Click the Tasks tab at the bottom right of the Manager. Ensure that there are no ongoing tasks related to data synchronization. If data synchronization tasks are present, wait until they are complete before starting the update process. Remove all the scheduled geo-replication sessions so that synchronization will not occur during the update. Click Storage Domains Select the domain and click on the domain name. Click the Remote Data Sync Setup tab Setup button. A new dialog window to set the geo-replication schedule pops up; set the recurrence to None . 5.3. Updating the Hosted Engine virtual machine and Red Hat Virtualization Manager 4.4 This section describes the steps to update the Hosted Engine Virtual Machine and the Red Hat Virtualization Manager 4.4 before updating the hyperconverged hosts. 5.3.1. Updating the Hosted Engine virtual machine Place the cluster into Global Maintenance mode. Log in to the Web Console of one of the hyperconverged nodes. Click Virtualization Hosted Engine . Click Put this cluster into global maintenance . On the Manager machine, check if updated packages are available. Log in to the Hosted Engine Virtual Machine and run the following command: 5.3.2. Updating the Red Hat Virtualization Manager Log in to the Hosted Engine virtual machine. Upgrade the setup packages using the following command: Update the Red Hat Virtualization Manager with the engine-setup script. The engine-setup script performs the following tasks: Prompts you with configuration questions. Stops the ovirt-engine service. Downloads and installs the updated packages. Backs up and updates the database. Performs post-installation configuration. Starts the ovirt-engine service. Run the engine-setup script and follow the prompts to upgrade the Manager.
This process can take a while and cannot be aborted, Red Hat recommends running it inside a tmux session. When the script completes successfully, the following message appears: Execution of setup completed successfully. Important The update process might take some time. Do not stop the process before it completes. Upgrade all other packages. Important If any kernel packages are updated: Disable global maintenance mode Reboot the machine to complete the update. Remove the cluster from Global Maintenance mode. Log in to the Web Console of one of the hyperconverged nodes Click Virtualization Hosted Engine . Click Remove this cluster from maintenance . 5.4. Upgrading the hyperconverged hosts The upgrade process differs depending on whether your nodes use Red Hat Virtualization version 4.4.1 or version 4.4.2. Use the following command to verify which version you are using: Then follow the appropriate process for your version: Upgrading from Red Hat Virtualization 4.4.2 and later Upgrading from Red Hat Virtualization 4.4.1 and earlier 5.4.1. Upgrading from Red Hat Virtualization 4.4.2 and later Upgrade each hyperconverged host in the cluster, one at a time. For each hyperconverged host in the cluster: Upgrade the hyperconverged host. In the Manager, click Compute Hosts and select a node. Click Installation Upgrade . Click OK to confirm the upgrade. The node is upgraded and rebooted. Verify self-healing is complete. Click the name of the host. Click the Bricks tab. Verify that the Self-Heal Info column shows OK beside all bricks. Update cluster compatibility settings to ensure you can use new features. Log in to the Administrator Portal. Click Cluster and select the cluster name ( Default ). Click Edit . Change Cluster compatibility version to 4.6 . Important Cluster compatibility is not completely updated until the virtual machines have been rebooted. Schedule a maintenance window and move any application virtual machines to maintenance mode before rebooting all virtual machines on each node. Click Compute Data Centers . Click Edit . Change Compatibility version to 4.6 . Update data center compatibility settings to ensure you can use new features. Select Compute Data Centers . Select the appropriate data center. Click Edit . The Edit Data Center dialog box opens. Update Compatibility Version to 4.6 from the dropdown list. 5.4.2. Upgrading from Red Hat Virtualization 4.4.1 and earlier In the Manager, click Compute Hosts and select a node. Click Installation Check for Upgrade. This will trigger a background check on that host for the presence of host update. Once the update is available, there will be a notification to the host about the availability of host update. Move the host to maintenance mode . On the RHV Administration Portal, navigate to Hosts Select the host. Click on Management Maintenance Maintenance Host dialog box opens. On the Maintenance Host dialog box, check the Stop Gluster service box click OK . Once the host is in maintenance mode , click Installation Upgrade . Upgrade Host dialog box opens, make sure to un-check Reboot host after upgrade . Click OK to confirm the upgrade. Wait for the upgrade to complete. Remove the existing LVM filter on the upgraded host before rebooting by using the following command: Reboot the host. Once the host is rebooted, regenerate the LVM filter: Verify self-healing is complete before upgrading the host. Click the name of the host. Click the Bricks tab. 
Verify that the Self-Heal information column of all bricks is listed as OK before upgrading the host. Repeat the above steps on the other hyperconverged hosts. Update cluster compatibility settings to ensure you can use new features. Log in to the Administrator Portal. Click Cluster and select the cluster name ( Default ). Click Edit . Change Cluster compatibility version to 4.6 . Important Cluster compatibility is not completely updated until the virtual machines have been rebooted. Schedule a maintenance window and move any application virtual machines to maintenance mode before rebooting all virtual machines on each node. Click Compute Data Centers . Click Edit . Change Compatibility version to 4.6 . Update data center compatibility settings to ensure you can use new features. Select Compute Data Centers . Select the appropriate data center. Click Edit . The Edit Data Center dialog box opens. Update Compatibility Version to 4.6 from the dropdown list. Important Disable the gluster volume option cluster.lookup-optimize on all the gluster volumes after the update. Troubleshooting The self healing process should start automatically once each hyperconverged host comes up after a reboot. Check for self-heal status using the command: If there are pending self-heal entries for a long time, check the following: Gluster network is up. All brick processes in the volume are up. If there are any brick processes reported to be down, restart the glusterd service on the node where the brick is reported to be down: If the Red Hat Virtualization node is unable to boot and drops in to maintenance shell , then one of the reasons is due to the unstable LVM filter rejecting some of the physical volumes (PVs). Log into the maintenance shell with the root password. Remove the existing LVM filter configuration: Reboot the host. Once the node is up, regenerate the LVM filter: | [
"subscription-manager repos --list-enabled",
"engine-upgrade-check",
"yum update ovirt-engine\\*setup\\* rh\\*vm-setup-plugins",
"engine-setup",
"yum update",
"cat /etc/os-release | grep \"PRETTY_NAME\"",
"sed -i /^filter/d /etc/lvm/lvm.conf",
"vdsm-tool config-lvm-filter -y",
"for volume in `gluster volume list`; do gluster volume set USDvolume cluster.lookup-optimize off; done",
"gluster volume heal <volname> info summary",
"ip addr show <ethernet-interface>",
"gluster volume status <vol>",
"systemctl restart glusterd",
"sed -i /^filter/d /etc/lvm/lvm.conf",
"vdsm-tool config-lvm-filter -y"
]
| https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/upgrading_red_hat_hyperconverged_infrastructure_for_virtualization/update_between_minor_releases |
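Because the troubleshooting section above checks self-heal status one volume at a time, the following small loop, built from the same gluster commands shown in this chapter, can be used to review every volume after a host reboot.
# Check pending self-heal entries on all Gluster volumes (run on any hyperconverged host).
for volume in $(gluster volume list); do
    echo "== ${volume} =="
    gluster volume heal "${volume}" info summary
done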
Chapter 12. Threading and scheduling | Chapter 12. Threading and scheduling AMQ C++ supports full multithreading with C++11 and later. Limited multithreading is possible with older versions of C++. See Section 12.6, "Using older versions of C++" . 12.1. The threading model The container object can handle multiple connections concurrently. When AMQP events occur on connections, the container calls messaging_handler callback functions. Callbacks for any one connection are serialized (not called concurrently), but callbacks for different connections can be safely executed in parallel. You can assign a handler to a connection in container::connect() or listen_handler::on_accept() using the handler connection option. It is recommended to create a separate handler for each connection so that the handler does not need locks or other synchronization to protect it against concurrent use by library threads. If any non-library threads use the handler concurrently, then you need synchronization. 12.2. Thread-safety rules The connection , session , sender , receiver , tracker , and delivery objects are not thread-safe and are subject to the following rules. You must use them only from a messaging_handler callback or a work_queue function. You must not use objects belonging to one connection from a callback for another connection. You can store AMQ C++ objects in member variables for use in a later callback, provided you respect rule two. The message object is a value type with the same threading constraints as a standard C++ built-in type. It cannot be concurrently modified. 12.3. Work queues The work_queue interface provides a safe way to communicate between different connection handlers or between non-library threads and connection handlers. Each connection has an associated work_queue . The work queue is thread-safe (C++11 or greater). Any thread can add work. A work item is a std::function , and bound arguments are called like an event callback. When the library calls the work function, it is serialized safely so that you can treat the work function like an event callback and safely access the handler and AMQ C++ objects stored on it. 12.4. The wake primitive The connection::wake() method allows any thread to prompt activity on a connection by triggering an on_connection_wake() callback. This is the only thread-safe method on connection . wake() is a lightweight, low-level primitive for signaling between threads. It does not carry any code or data, unlike work_queue . Multiple calls to wake() might be coalesced into a single on_connection_wake() . Calls to on_connection_wake() can occur without any application call to wake() since the library uses wake() internally. The semantics of wake() are similar to std::condition_variable::notify_one() . There will be a wakeup, but there must be some shared application state to determine why the wakeup occurred and what, if anything, to do about it. Work queues are easier to use in many instances, but wake() may be useful if you already have your own external thread-safe queues and need an efficient way to wake a connection to check them for data. 12.5. Scheduling deferred work AMQ C++ has the ability to execute code after a delay. You can use this to implement time-based behaviors in your application, such as periodically scheduled work or timeouts. To defer work for a fixed amount of time, use the schedule method to set the delay and register a function defining the work. 
Example: Sending a message after a delay void on_sender_open(proton::sender& snd) override { proton::duration interval {5 * proton::duration::SECOND}; snd.work_queue().schedule(interval, [=] { send(snd); }); } void send(proton::sender snd) { if (snd.credit() > 0) { proton::message msg {"hello"}; snd.send(msg); } } This example uses the schedule method on the work queue of the sender in order to establish it as the execution context for the work. 12.6. Using older versions of C++ Before C++11 there was no standard support for threading in C++. You can use AMQ C++ with threads but with the following limitations. The container does not create threads. It only uses the single thread that calls container::run() . None of the AMQ C++ library classes are thread-safe, including container and work_queue . You need an external lock to use container in multiple threads. The only exception is connection::wake() . It is thread-safe even in older C++. The container::schedule() and work_queue APIs accept C++11 lambda functions to define units of work. If you are using a version of C++ that does not support lambdas, you must use the make_work() function instead. | [
"void on_sender_open(proton::sender& snd) override { proton::duration interval {5 * proton::duration::SECOND}; snd.work_queue().schedule(interval, [=] { send(snd); }); } void send(proton::sender snd) { if (snd.credit() > 0) { proton::message msg {\"hello\"}; snd.send(msg); } }"
]
| https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_the_amq_cpp_client/threading_and_scheduling |
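To complement the schedule() example, the following sketch shows how a non-library thread can hand work to a connection through its work queue. The request_send() helper and the way the sender is obtained are assumptions for illustration; only work_queue::add() and the sender calls come from the API discussed above.
// Sketch: queue a send from any thread; the lambda runs serialized with the
// connection's other callbacks, so it can safely touch the sender.
#include <proton/message.hpp>
#include <proton/sender.hpp>
#include <proton/work_queue.hpp>
#include <string>

void request_send(proton::work_queue& wq, proton::sender snd, const std::string& body) {
    wq.add([=]() {
        if (snd.credit() > 0) {
            proton::message msg {body};
            snd.send(msg);
        }
    });
}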
Chapter 22. KafkaAuthorizationOpa schema reference | Chapter 22. KafkaAuthorizationOpa schema reference Used in: KafkaClusterSpec Full list of KafkaAuthorizationOpa schema properties To use Open Policy Agent authorization, set the type property in the authorization section to the value opa , and configure OPA properties as required. Streams for Apache Kafka uses Open Policy Agent plugin for Kafka authorization as the authorizer. For more information about the format of the input data and policy examples, see Open Policy Agent plugin for Kafka authorization . 22.1. url The URL used to connect to the Open Policy Agent server. The URL has to include the policy which will be queried by the authorizer. Required. 22.2. allowOnError Defines whether a Kafka client should be allowed or denied by default when the authorizer fails to query the Open Policy Agent, for example, when it is temporarily unavailable. Defaults to false - all actions will be denied. 22.3. initialCacheCapacity Initial capacity of the local cache used by the authorizer to avoid querying the Open Policy Agent for every request. Defaults to 5000 . 22.4. maximumCacheSize Maximum capacity of the local cache used by the authorizer to avoid querying the Open Policy Agent for every request. Defaults to 50000 . 22.5. expireAfterMs The expiration of the records kept in the local cache to avoid querying the Open Policy Agent for every request. Defines how often the cached authorization decisions are reloaded from the Open Policy Agent server. In milliseconds. Defaults to 3600000 milliseconds (1 hour). 22.6. tlsTrustedCertificates Trusted certificates for TLS connection to the OPA server. 22.7. superUsers A list of user principals treated as super users, so that they are always allowed without querying the open Policy Agent policy. An example of Open Policy Agent authorizer configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: # ... authorization: type: opa url: http://opa:8181/v1/data/kafka/allow allowOnError: false initialCacheCapacity: 1000 maximumCacheSize: 10000 expireAfterMs: 60000 superUsers: - CN=fred - sam - CN=edward # ... 22.8. KafkaAuthorizationOpa schema properties The type property is a discriminator that distinguishes use of the KafkaAuthorizationOpa type from KafkaAuthorizationSimple , KafkaAuthorizationKeycloak , KafkaAuthorizationCustom . It must have the value opa for the type KafkaAuthorizationOpa . Property Property type Description type string Must be opa . url string The URL used to connect to the Open Policy Agent server. The URL has to include the policy which will be queried by the authorizer. This option is required. allowOnError boolean Defines whether a Kafka client should be allowed or denied by default when the authorizer fails to query the Open Policy Agent, for example, when it is temporarily unavailable). Defaults to false - all actions will be denied. initialCacheCapacity integer Initial capacity of the local cache used by the authorizer to avoid querying the Open Policy Agent for every request Defaults to 5000 . maximumCacheSize integer Maximum capacity of the local cache used by the authorizer to avoid querying the Open Policy Agent for every request. Defaults to 50000 . expireAfterMs integer The expiration of the records kept in the local cache to avoid querying the Open Policy Agent for every request. Defines how often the cached authorization decisions are reloaded from the Open Policy Agent server. In milliseconds. 
Defaults to 3600000 . tlsTrustedCertificates CertSecretSource array Trusted certificates for TLS connection to the OPA server. superUsers string array List of super users, which is specifically a list of user principals that have unlimited access rights. enableMetrics boolean Defines whether the Open Policy Agent authorizer plugin should provide metrics. Defaults to false . | [
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: # authorization: type: opa url: http://opa:8181/v1/data/kafka/allow allowOnError: false initialCacheCapacity: 1000 maximumCacheSize: 10000 expireAfterMs: 60000 superUsers: - CN=fred - sam - CN=edward #"
]
| https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-KafkaAuthorizationOpa-reference |
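When the OPA server is reached over TLS, the tlsTrustedCertificates property described above is typically combined with the other options as in the following sketch; the secret name, certificate file name, and URL are placeholders rather than values from this reference.
# Sketch: OPA authorization over TLS with metrics enabled (names are placeholders).
authorization:
  type: opa
  url: https://opa.example.com:8181/v1/data/kafka/allow
  allowOnError: false
  tlsTrustedCertificates:
    - secretName: opa-server-cert
      certificate: tls.crt
  enableMetrics: true
  superUsers:
    - CN=fred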
4.2. Using SMB shares with SSSD and Winbind | 4.2. Using SMB shares with SSSD and Winbind This section describes how you can use SSSD clients to access and fully use shares based on the Server Message Block (SMB) protocol, also known as the Common Internet File System (CIFS) protocol. Important Using SSSD as a client in IdM or Active Directory domains has certain limitations, and Red Hat does not recommend using SSSD as ID mapping plug-in for Winbind. For further details, see the " What is the support status for Samba file server running on IdM clients or directly enrolled AD clients where SSSD is used as the client daemon " article. SSSD does not support all the services that Winbind provides. For example, SSSD does not support authentication using the NT LAN Manager (NTLM) or NetBIOS name lookup. If you need these services, use Winbind. Note that in Identity Management domains, Kerberos authentication and DNS name lookup are available for the same purposes. 4.2.1. How SSSD Works with SMB The SMB file-sharing protocol is widely used on Windows machines. In Red Hat Enterprise Linux environments with a trust between Identity Management and Active Directory, SSSD enables seamless use of SMB as if it was a standard Linux file system. To access a SMB share, the system must be able to translate Windows SIDs to Linux POSIX UIDs and GIDs. SSSD clients use the SID-to-ID or SID-to-name algorithm, which enables this ID mapping. 4.2.2. Switching Between SSSD and Winbind for SMB Share Access This procedure describes how you can switch between SSSD and Winbind plug-ins that are used for accessing SMB shares from SSSD clients. For Winbind to be able to access SMB shares, you need to have the cifs-utils package installed on your client. To make sure that cifs-utils is installed on your machine: Optional . Find out whether you are currently using SSSD or Winbind to access SMB shares from the SSSD client: If the SSSD plug-in ( cifs_idmap_sss.so ) is installed, it has a higher priority than the Winbind plug-in ( idmapwb.so ) by default. Before switching to the Winbind plug-in, make sure Winbind is running on the system: Before switching to the SSSD plug-in, make sure SSSD is running on the system: To switch to a different plug-in, use the alternatives --set cifs-idmap-plugin command, and specify the path to the required plug-in. For example, to switch to Winbind: Note The 32-bit version platform, such as i686 in RHEL 7, uses the /usr/lib/cifs-utils/ directory instead of /usr/lib64/cifs-utils/ . | [
"rpm -q cifs-utils",
"alternatives --display cifs-idmap-plugin cifs-idmap-plugin - status is auto. link currently points to /usr/lib64/cifs-utils/cifs_idmap_sss.so /usr/lib64/cifs-utils/cifs_idmap_sss.so - priority 20 /usr/lib64/cifs-utils/idmapwb.so - priority 10 Current `best' version is /usr/lib64/cifs-utils/cifs_idmap_sss.so.",
"systemctl is-active winbind.service active",
"systemctl is-active sssd.service active",
"alternatives --set cifs-idmap-plugin /usr/lib64/cifs-utils/idmapwb.so"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/windows_integration_guide/smb-sssd |
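Once the preferred ID-mapping plug-in is selected, an SMB share is mounted with the usual cifs-utils tooling. The following sketch assumes Kerberos authentication and placeholder server, share, and mount point names; the multiuser and cifsacl options are shown because they exercise the SID-to-ID mapping that the selected plug-in provides.
# Obtain a Kerberos ticket, then mount the share (names are placeholders).
kinit [email protected]
mount -t cifs //server.ad.example.com/share /mnt/share -o sec=krb5,multiuser,cifsacl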
Chapter 5. Setting the load balancer for host registration | Chapter 5. Setting the load balancer for host registration You can configure Satellite to register clients through a load balancer when using the host registration feature. You will be able to register hosts to the load balancer instead of Capsule. The load balancer will decide through which Capsule to register the host at the time of request. Upon registration, the subscription manager on the host will be configured to manage content through the load balancer. Prerequisites You configured SSL certificates on all Capsule Servers. For more information, see Chapter 4, Configuring Capsule Servers for load balancing . You enabled Registration and Templates plugins on all Capsule Servers: Procedure On all Capsule Servers, set the registration and template URLs using satellite-installer : In the Satellite web UI, navigate to Infrastructure > Capsules . For each Capsule, click the dropdown menu in the Actions column and select Refresh . | [
"satellite-installer --foreman-proxy-registration true --foreman-proxy-templates true",
"satellite-installer --foreman-proxy-registration-url \"https:// loadbalancer.example.com :9090\" --foreman-proxy-template-url \"http:// loadbalancer.example.com :8000\""
]
| https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/configuring_capsules_with_a_load_balancer/Setting_the_Load_Balancer_for_Host_Registration_load-balancing |
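After registering a host through the load balancer, you can confirm that subscription-manager was pointed at the load balancer rather than at an individual Capsule. The check below is a sketch only; the exact option names and layout of rhsm.conf can vary between versions.
# On the registered host, inspect the server and content endpoints (illustrative check).
grep -E '^(hostname|baseurl)' /etc/rhsm/rhsm.conf
# Both values are expected to reference loadbalancer.example.com.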
Chapter 3. Updating a disconnected Satellite Server Update your air-gapped Satellite setup, where the connected Satellite Server, which synchronizes content from the CDN, is air gapped from a disconnected Satellite Server, to the next minor version. Prerequisites Back up your Satellite Server. For more information, see Backing Up Satellite Server and Capsule Server in Administering Red Hat Satellite . Install reposync , which is required for the updating procedure: Procedure on the connected Satellite Server Ensure that you have synchronized the following repositories in your connected Satellite Server: rhel-8-for-x86_64-baseos-rpms rhel-8-for-x86_64-appstream-rpms satellite-6.15-for-rhel-8-x86_64-rpms satellite-maintenance-6.15-for-rhel-8-x86_64-rpms Download the debug certificate of the organization and store it locally at /etc/pki/katello/certs/org-debug-cert.pem or a location of your choosing. For more information, see Creating an Organization Debug Certificate in Administering Red Hat Satellite . Create a Yum configuration file under /etc/yum.repos.d , such as satellite-disconnected .repo , with the following contents: In the configuration file, complete the following steps: For the sslclientcert and sslclientkey options, replace /etc/pki/katello/certs/org-debug-cert.pem with the location of the downloaded organization debug certificate. For the baseurl option, replace satellite.example.com with the correct FQDN of your connected Satellite Server. For the baseurl option, replace My_Organization with your organization label. Obtain the organization label: Enter the reposync command: This downloads the contents of the repositories from the connected Satellite Server and stores them in the directory ~/Satellite-repos . Verify that the RPMs have been downloaded and the repository data directory is generated in each of the sub-directories of ~/Satellite-repos . Archive the contents of the directory: Use the generated Satellite-repos.tgz file to upgrade the disconnected Satellite Server. Procedure on the disconnected Satellite Server Copy the generated Satellite-repos.tgz file to your disconnected Satellite Server. Extract the archive to anywhere accessible by the root user. In the following example, /root is the extraction location. Create a Yum configuration file under /etc/yum.repos.d with the following repository information: In the configuration file, replace /root/Satellite-repos with the extracted location. Check the available versions to confirm that the next minor version is listed: Use the health check option to determine if the system is ready for upgrade. On first use of this command, satellite-maintain prompts you to enter the hammer admin user credentials and saves them in the /etc/foreman-maintain/foreman-maintain-hammer.yml file. Review the results and address any highlighted error conditions before performing the upgrade. Due to the lengthy update time, use a utility such as tmux to suspend and reattach a communication session. You can then check the upgrade progress without staying connected to the command shell continuously. If you lose connection to the command shell where the upgrade command is running, you can see the logged messages in the /var/log/foreman-installer/satellite.log file to check if the process completed successfully.
Perform the upgrade: Determine if the system needs a reboot: If the command told you to reboot, then reboot the system: Additional resources To restore the backup of the Satellite Server or Capsule Server, see Restoring Satellite Server or Capsule Server from a Backup | [
"dnf install 'dnf-command(reposync)'",
"[rhel-8-for-x86_64-baseos-rpms] name=Red Hat Enterprise Linux 8 for x86_64 - BaseOS (RPMs) baseurl=_https://satellite.example.com_/pulp/content/_My_Organization_/Library/content/dist/rhel8/8/x86_64/baseos/os enabled=1 sslclientcert = /etc/pki/katello/certs/org-debug-cert.pem sslclientkey = /etc/pki/katello/certs/org-debug-cert.pem sslcacert = /etc/pki/katello/certs/katello-server-ca.crt sslverify = 1 [rhel-8-for-x86_64-appstream-rpms] name=Red Hat Enterprise Linux 8 for x86_64 - AppStream (RPMs) baseurl=_https://satellite.example.com_/pulp/content/_My_Organization_/Library/content/dist/rhel8/8/x86_64/appstream/os enabled=1 sslclientcert = /etc/pki/katello/certs/org-debug-cert.pem sslclientkey = /etc/pki/katello/certs/org-debug-cert.pem sslcacert = /etc/pki/katello/certs/katello-server-ca.crt sslverify = 1 [satellite-6.15-for-rhel-8-x86_64-rpms] name=Red Hat Satellite 6.15 for RHEL 8 RPMs x86_64 baseurl=_https://satellite.example.com_/pulp/content/_My_Organization_/Library/content/dist/layered/rhel8/x86_64/satellite/6.15/os enabled=1 sslclientcert = /etc/pki/katello/certs/org-debug-cert.pem sslclientkey = /etc/pki/katello/certs/org-debug-cert.pem sslcacert = /etc/pki/katello/certs/katello-server-ca.crt sslverify = 1 [satellite-maintenance-6.15-for-rhel-8-x86_64-rpms] name=Red Hat Satellite Maintenance 6.15 for RHEL 8 RPMs x86_64 baseurl=_https://satellite.example.com_/pulp/content/_My_Organization_/Library/content/dist/layered/rhel8/x86_64/sat-maintenance/6.15/os enabled=1 sslclientcert = /etc/pki/katello/certs/org-debug-cert.pem sslclientkey = /etc/pki/katello/certs/org-debug-cert.pem sslcacert = /etc/pki/katello/certs/katello-server-ca.crt sslverify = 1",
"hammer organization list",
"dnf reposync --delete --disableplugin=foreman-protector --download-metadata --repoid rhel-8-for-x86_64-appstream-rpms --repoid rhel-8-for-x86_64-baseos-rpms --repoid satellite-maintenance-6.15-for-rhel-8-x86_64-rpms --repoid satellite-6.15-for-rhel-8-x86_64-rpms -n -p ~/Satellite-repos",
"tar czf Satellite-repos.tgz -C ~ Satellite-repos",
"tar zxf Satellite-repos.tgz -C /root",
"[rhel-8-for-x86_64-baseos-rpms] name=Red Hat Enterprise Linux 8 for x86_64 - BaseOS (RPMs) baseurl=file:///root/Satellite-repos/rhel-8-for-x86_64-baseos-rpms enabled=1 [rhel-8-for-x86_64-appstream-rpms] name=Red Hat Enterprise Linux 8 for x86_64 - AppStream (RPMs) baseurl=file:///root/Satellite-repos/rhel-8-for-x86_64-appstream-rpms enabled=1 [satellite-6.15-for-rhel-8-x86_64-rpms] name=Red Hat Satellite 6 for RHEL 8 Server RPMs x86_64 baseurl=file:///root/Satellite-repos/satellite-6.15-for-rhel-8-x86_64-rpms enabled=1 [satellite-maintenance-6.15-for-rhel-8-x86_64-rpms] name=Red Hat Satellite Maintenance 6 for RHEL 8 Server RPMs x86_64 baseurl=file:///root/Satellite-repos/satellite-maintenance-6.15-for-rhel-8-x86_64-rpms enabled=1",
"satellite-maintain upgrade list-versions",
"satellite-maintain upgrade check --target-version 6.15.z --whitelist=\"check-upstream-repository,repositories-validate\"",
"satellite-maintain upgrade run --target-version 6.15.z --whitelist=\"check-upstream-repository,repositories-setup,repositories-validate\"",
"dnf needs-restarting --reboothint",
"reboot"
]
| https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/updating_red_hat_satellite/Updating-Disconnected-satellite_updating |
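Because the procedure recommends running the upgrade inside a utility such as tmux, the following sketch shows one way to do that; the session name is arbitrary and the upgrade command is the one listed earlier in this chapter.
# Start a named tmux session, run the upgrade inside it, and reattach later if needed.
tmux new-session -s satellite-upgrade
satellite-maintain upgrade run --target-version 6.15.z --whitelist="check-upstream-repository,repositories-setup,repositories-validate"
# If the SSH connection drops, reattach with:
tmux attach-session -t satellite-upgrade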
8.262. xfsprogs | 8.262. xfsprogs 8.262.1. RHBA-2014:1564 - xfsprogs bug update Updated xfsprogs packages that fix several bugs are now available for Red Hat Enterprise Linux 6. The xfsprogs packages contain a set of commands to use the XFS file system, including the mkfs.xfs command to construct an XFS file system. Bug Fixes BZ# 1018751 Due to a bug in the underlying source code, an attempt to use the xfs_io "pwrite" command to write to a block device residing on an XFS file system failed with the following error: XFS_IOC_FSGEOMETRY: Inappropriate ioctl for device. This update applies a patch to fix this bug and the command no longer fails in the described scenario. BZ# 1020438 Previously, the thread local data were used incorrectly. As a consequence, when the xfs_repair utility was executed with the ag_stride option, the utility could terminate unexpectedly with a segmentation fault. The underlying source code has been modified to fix this bug and xfs_repair no longer crashes in the described situation. BZ# 1024702 Under certain conditions, the xfs_fsr utility failed to reorganize files with SELinux attributes. With this update, a patch has been provided to address this bug and xfs_fsr can successfully defragment files with SELinux attributes. BZ# 1100107 , BZ# 1104956 When the sector size of the source file system was larger than 512 bytes, the xfs_copy utility could create a corrupted copy of that system. In addition, the utility exited with a non-zero return code in all cases, even if the operation was successful. This update applies a patch to fix these bugs and the utility now works as expected. Users of xfsprogs are advised to upgrade to these updated packages, which fix these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/xfsprogs |
Chapter 54. File Systems | Chapter 54. File Systems The default option specification is not overridden by the host-specific option in /etc/exports When sec=sys is used in the default option section of the /etc/exports file, the options list that follows is not parsed correctly. As a consequence, the default option cannot be overridden by the host-specific option. (BZ# 1359042 ) | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.3_release_notes/known_issues_file_systems |
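For reference, the configuration shape affected by this known issue resembles the following sketch of an /etc/exports line; the path and host names are placeholders. The dash-prefixed list supplies the default options for the line, and the per-host options after it are the ones that may fail to override those defaults when sec=sys is used in the default option section.
# /etc/exports (illustrative example only)
/export -sec=sys,rw client1.example.com(rw,no_root_squash) client2.example.com(ro)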
Chapter 4. Tuning the number of BDB locks | Chapter 4. Tuning the number of BDB locks When a Directory Server instance uses Berkeley Database (BDB), lock mechanism controls how many copies of Directory Server processes can run at the same time. For example, during an import job, Directory Server sets a lock in the /run/lock/dirsrv/slapd-instance_name/imports/ directory to prevent the ns-slapd Directory Server process, another import, or export operations from running. If the server runs out of available locks, Directory Server logs the following error in the /var/log/dirsrv/slapd-instance_name/errors file: libdb: Lock table is out of available locks However, the Directory Server default settings try to prevent the server from running out of locks to avoid data corruption. For details, see Avoiding data corruption by monitoring free database locks 4.1. Avoiding data corruption by monitoring free BDB database locks Running out of Berkeley Database (BDB) locks can lead to data corruption. To avoid this, Directory Server, by default, monitors the remaining number of free locks every 500 milliseconds and, if the number of active database locks is equal or higher than the 90%, Directory Server stops all searches. The following procedure changes the interval to 600 milliseconds and the threshold to 85 percent. Note If you set a too high interval, the server can run out of locks before the monitoring check happens. Setting a too short interval can slow down the server. Prerequisites The Directory Server instance uses BDB. Procedure Set the interval and threshold: Restart the instance: Verification Display the locks monitoring settings: 4.2. Manually monitoring the number of BDB locks Directory Server tracks the current number of Berkeley Database (BDB) locks in the nsslapd-db-current-locks and nsslapd-db-max-locks attributes in cn=database,cn=monitor,cn=ldbm database,cn=plugins,cn=config . Prerequisites The Directory Server instance uses BDB. Procedure To display the number of locks, enter: # ldapsearch -D " cn=Directory Manager " -W -H ldap://server.example.com -x -s sub -b "cn=database,cn=monitor,cn=ldbm database,cn=plugins,cn=config" nsslapd-db-current-locks nsslapd-db-max-locks ... nsslapd-db-current-locks: 37 nsslapd-db-max-locks: 39 4.3. Setting the number of BDB locks using the command line Use the dsconf backend config set command to update the number of Berkeley Database (BDB) locks that Directory Server can use. Prerequisites The Directory Server instance uses BDB. Procedure Set the number of locks: The command sets the number of locks to 20000 . Restart the instance: Verification Display the value of the nsslapd-db-locks parameter: 4.4. Setting the number of BDB locks using the web console You can set the number of Berkeley Database (BDB) locks that Directory Server uses in the global database configuration in the web console. Prerequisites You are logged in to the instance in the web console. The Directory Server instance uses BDB. The following procedure sets the number of locks to 2000 . Procedure Navigate to Database Global Database Configuration Database Locks . Update the Database Locks field to 2000 . Click Save Config . Click Actions Restart Instance . Verification Verify that the new value is present in Database Global Database Configuration Database Locks . | [
"libdb: Lock table is out of available locks",
"dsconf instance_name backend config set --locks-monitoring-enabled on --locks-monitoring-pause 600 --locks-monitoring-threshold 85",
"dsctl instance_name restart",
"dsconf -D \" cn=Directory Manager \" ldap://supplier.example.com backend config get | grep \"nsslapd-db-locks-monitoring\" nsslapd-db-locks-monitoring-enabled: on nsslapd-db-locks-monitoring-threshold: 85 nsslapd-db-locks-monitoring-pause: 600",
"ldapsearch -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x -s sub -b \"cn=database,cn=monitor,cn=ldbm database,cn=plugins,cn=config\" nsslapd-db-current-locks nsslapd-db-max-locks nsslapd-db-current-locks: 37 nsslapd-db-max-locks: 39",
"dsconf instance_name backend config set --locks= 20000",
"dsctl instance_name restart",
"dsconf instance_name backend config get | grep \"nsslapd-db-locks:\" nsslapd-db-locks: 20000"
]
| https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/tuning_the_performance_of_red_hat_directory_server/assembly_tuning-the-number-of-locks_assembly_improving-the-performance-of-views |
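Building on the monitoring attributes described above, the following sketch compares the current lock usage against the configured limit from outside the server; the bind credentials, host name, and threshold handling are illustrative only.
# Warn when current BDB lock usage reaches 90% of the configured nsslapd-db-locks value.
LIMIT=20000   # the nsslapd-db-locks value set earlier in this chapter
CURRENT=$(ldapsearch -D "cn=Directory Manager" -w password -H ldap://server.example.com -x \
  -s sub -b "cn=database,cn=monitor,cn=ldbm database,cn=plugins,cn=config" \
  nsslapd-db-current-locks | awk '/^nsslapd-db-current-locks:/ {print $2}')
if [ "$CURRENT" -ge $((LIMIT * 90 / 100)) ]; then
    echo "BDB locks at ${CURRENT}/${LIMIT}; consider raising nsslapd-db-locks"
fi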
Chapter 7. Known issues | Chapter 7. Known issues See Known Issues for Red Hat JBoss Enterprise Application Platform 8.0 to view the list of known issues for this release. 7.1. Infinispan The /subsystem=distributable-web/infinispan-session-management=*:add operation may fail when executed on a default non-HA server configuration Issue - JBEAP-24997 The /subsystem=distributable-web/infinispan-session-management=*:add operation automatically adds the affinity=primary-owner child resource, which requires the routing=infinispan resource. The operation may fail because the required routing=infinispan resource is not defined in the default non-HA server configurations. Workaround To avoid this invalid intermediate state, execute both infinispan-session-management:add and affinity=local:add operations within a batch. Example: HotRod cannot create distributed sessions for externalization to Infinispan Issue - JBEAP-26062 An interoperability test involving Red Hat JBoss Enterprise Application Platform 8.0 and Red Hat Data Grid on OpenShift Container Platform shows an issue where writes to an Infinispan remote cache causes an internal server error. When the remote-cache-container is configured to use the default marshaller, JBoss Marshalling, cache writes cause HotRod to throw errors because only byte[] instances are supported. Example error message: Workaround Configure the remote-cache-container to use the ProtoStream marshaller marshaller=PROTOSTREAM : Example configuration: 7.2. Datasource configuration MsSQL connection resiliency is not supported Issue - JBEAP-25585 Red Hat JBoss Enterprise Application Platform 8.0 does not support connection resiliency of MsSQL JDBC driver version 10.2.0 and later. Connection resiliency causes the driver to be in an unexpected state for the recovery manager. By default, this driver has connection resiliency enabled and must be manually disabled by the user. Workaround The ConnectRetryCount parameter controls the number of reconnection attempts when there is a connection failure. This parameter is set to 1 by default, enabling connection resiliency. To disable connection resiliency, change the ConnectRetryCount parameter from 1 to 0 . You can set connection properties in the datasource configuration section of the server configuration file standalone.xml or domain.xml . For more information about how to configure datasource settings, see How to configure datasource settings in EAP for OpenShift and How to specify connection properties in the Datasource Configuration for JBoss EAP on the Red Hat Customer Portal. 7.3. Server Management Liveness probe :9990/health/live does not restart pod in case of Deployment Error Issue - JBEAP-24257 In JBoss EAP 7.4, the python liveness probe reports "not alive" when there are deployment errors that would result in restarting the container. In JBoss EAP 8.0, the liveness probe :9990/health/live uses the server management model to determine readiness. If the server-state is running, and there are no boot or deployment errors, then the liveness check reports UP when the server process is running. Therefore, deployment errors can result in a pod that is running but is "not ready". This would only affect applications that have intermittent errors during deployment. If these errors always occur during deployment, the container will never be ready and the pod would be in a CrashLoopBackoff state. Note :9990/health/live is the default liveness probe used by Helm charts and the JBoss EAP operator. 
Workaround If there are deployment errors that result in a pod that is running but is reporting "not ready", examine the server boot process, resolve the deployment issue causing the errors, and then verify that the server deploys correctly. If the deployment errors cannot be fixed, change the startup probe to use the /ready HTTP endpoint so that boot errors will trigger a pod restart. For example, if you deploy a JBoss EAP application with Helm, configure the liveness probe by updating the deploy.livenessProbe field: 7.4. Messaging framework Deprecation of org.apache.activemq.artemis module and warning messages Issue - JBEAP-26188 The org.apache.activemq.artemis module is deprecated in JBoss EAP 8.0. A warning message is triggered when deploying an application that includes this module dependency in either the MANIFEST.MF or jboss-deployment-structure.xml configuration files. For more information, see Deprecated in Red Hat JBoss Enterprise Application Platform (EAP) 8 . Workaround With JBoss EAP 8.0 Update 1, you can prevent logging of these warning messages by replacing the org.apache.activemq.artemis module in your configuration files with the org.apache.activemq.artemis.client public module. For more information, see org.jboss.as.dependency.deprecated ... is using a deprecated module ("org.apache.activemq.artemis") in EAP 8 . 7.5. IBM MQ resource adapters Limitations and known issues of IBM MQ resource adapters IBM MQ resource adapters are supported with some limitations. See Deploying the IBM MQ Resource Adapter for more information. Revised on 2024-02-27 10:08:25 UTC | [
"batch /subsystem=distributable-web/infinispan-session-management=ism-0:add(cache-container=web,granularity=SESSION) /subsystem=distributable-web/infinispan-session-management=ism-0/affinity=local:add() run-batch -v",
"Caused by: java.lang.IllegalArgumentException: Only byte[] instances are supported currently! at [email protected]//org.infinispan.client.hotrod.marshall.BytesOnlyMarshaller.checkByteArray(BytesOnlyMarshaller.java:27)",
"/subsystem=infinispan/remote-cache-container=<RHDG_REMOTE_CACHE_CONTAINER_RESOURCE_NAME>:write-attribute(name=marshaller,value=PROTOSTREAM)",
"deploy: livenessProbe: httpGet: path: /health/ready"
]
| https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/release_notes_for_red_hat_jboss_enterprise_application_platform_8.0/ref-known-issues_assembly-release-notes |
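One way to apply the ConnectRetryCount workaround without editing standalone.xml by hand is through the management CLI, as in the following sketch; the datasource name MsSQLDS is a placeholder for your own datasource.
# Add the connection property to an existing MsSQL datasource, then reload the server.
/subsystem=datasources/data-source=MsSQLDS/connection-properties=ConnectRetryCount:add(value=0)
reload
The same property can instead be placed in the datasource definition in the server configuration file, as the referenced Customer Portal articles describe.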
Chapter 2. Architectures | Chapter 2. Architectures Red Hat Enterprise Linux 8.4 is distributed with the kernel version 4.18.0-305, which provides support for the following architectures: AMD and Intel 64-bit architectures The 64-bit ARM architecture IBM Power Systems, Little Endian 64-bit IBM Z Make sure you purchase the appropriate subscription for each architecture. For more information, see Get Started with Red Hat Enterprise Linux - additional architectures . For a list of available subscriptions, see Subscription Utilization on the Customer Portal. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.4_release_notes/architectures |
function::registers_valid | function::registers_valid Name function::registers_valid - Determines validity of register and u_register in current context Synopsis Arguments None Description This function returns 1 if register and u_register can be used in the current context, or 0 otherwise. For example, registers_valid returns 0 when called from a begin or end probe. | [
"registers_valid:long()"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-registers-valid |
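A short script illustrates the intended guard; the probe point and the register name are illustrative only (register names are architecture-specific, and "rsp" assumes x86_64).
# Only read registers when the current context actually provides them.
probe kernel.function("do_exit") {
    if (registers_valid())
        printf("do_exit: stack pointer = 0x%x\n", register("rsp"))
    else
        printf("do_exit: register values not available in this context\n")
}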
7.5. Defining Audit Rules | 7.5. Defining Audit Rules The Audit system operates on a set of rules that define what is to be captured in the log files. There are three types of Audit rules that can be specified: Control rules - allow the Audit system's behavior and some of its configuration to be modified. File system rules - also known as file watches, allow the auditing of access to a particular file or a directory. System call rules - allow logging of system calls that any specified program makes. Audit rules can be specified on the command line with the auditctl utility (note that these rules are not persistent across reboots), or written in the /etc/audit/audit.rules file. The following two sections summarize both approaches to defining Audit rules. 7.5.1. Defining Audit Rules with the auditctl Utility Note All commands which interact with the Audit service and the Audit log files require root privileges. Ensure you execute these commands as the root user. The auditctl command allows you to control the basic functionality of the Audit system and to define rules that decide which Audit events are logged. Defining Control Rules The following are some of the control rules that allow you to modify the behavior of the Audit system: -b sets the maximum amount of existing Audit buffers in the kernel, for example: -f sets the action that is performed when a critical error is detected, for example: The above configuration triggers a kernel panic in case of a critical error. -e enables and disables the Audit system or locks its configuration, for example: The above command locks the Audit configuration. -r sets the rate of generated messages per second, for example: The above configuration sets no rate limit on generated messages. -s reports the status of the Audit system, for example: -l lists all currently loaded Audit rules, for example: -D deletes all currently loaded Audit rules, for example: Defining File System Rules To define a file system rule, use the following syntax: where: path_to_file is the file or directory that is audited. permissions are the permissions that are logged: r - read access to a file or a directory. w - write access to a file or a directory. x - execute access to a file or a directory. a - change in the file's or directory's attribute. key_name is an optional string that helps you identify which rule or a set of rules generated a particular log entry. Example 7.1. File System Rules To define a rule that logs all write access to, and every attribute change of, the /etc/passwd file, execute the following command: Note that the string following the -k option is arbitrary. To define a rule that logs all write access to, and every attribute change of, all the files in the /etc/selinux/ directory, execute the following command: To define a rule that logs the execution of the /sbin/insmod command, which inserts a module into the Linux kernel, execute the following command: Defining System Call Rules To define a system call rule, use the following syntax: where: action and filter specify when a certain event is logged. action can be either always or never . filter specifies which kernel rule-matching filter is applied to the event. The rule-matching filter can be one of the following: task , exit , user , and exclude . For more information about these filters, see the beginning of Section 7.1, "Audit System Architecture" . system_call specifies the system call by its name. A list of all system calls can be found in the /usr/include/asm/unistd_64.h file. 
Several system calls can be grouped into one rule, each specified after the -S option. field = value specifies additional options that further modify the rule to match events based on a specified architecture, group ID, process ID, and others. For a full listing of all available field types and their values, see the auditctl (8) man page. key_name is an optional string that helps you identify which rule or a set of rules generated a particular log entry. Example 7.2. System Call Rules To define a rule that creates a log entry every time the adjtimex or settimeofday system calls are used by a program, and the system uses the 64-bit architecture, execute the following command: To define a rule that creates a log entry every time a file is deleted or renamed by a system user whose ID is 500 or larger (the -F auid!=4294967295 option is used to exclude users whose login UID is not set), execute the following command: It is also possible to define a file system rule using the system call rule syntax. The following command creates a rule for system calls that is analogous to the -w /etc/shadow -p wa file system rule: | [
"~]# auditctl -b 8192",
"~]# auditctl -f 2",
"~]# auditctl -e 2",
"~]# auditctl -r 0",
"~]# auditctl -s AUDIT_STATUS: enabled=1 flag=2 pid=0 rate_limit=0 backlog_limit=8192 lost=259 backlog=0",
"~]# auditctl -l LIST_RULES: exit,always watch=/etc/localtime perm=wa key=time-change LIST_RULES: exit,always watch=/etc/group perm=wa key=identity LIST_RULES: exit,always watch=/etc/passwd perm=wa key=identity LIST_RULES: exit,always watch=/etc/gshadow perm=wa key=identity ...",
"~]# auditctl -D No rules",
"auditctl -w path_to_file -p permissions -k key_name",
"~]# auditctl -w /etc/passwd -p wa -k passwd_changes",
"~]# auditctl -w /etc/selinux/ -p wa -k selinux_changes",
"~]# auditctl -w /sbin/insmod -p x -k module_insertion",
"auditctl -a action , filter -S system_call -F field = value -k key_name",
"~]# auditctl -a always,exit -F arch=b64 -S adjtimex -S settimeofday -k time_change",
"~]# auditctl -a always,exit -S unlink -S unlinkat -S rename -S renameat -F auid>=500 -F auid!=4294967295 -k delete",
"~]# auditctl -a always,exit -F path=/etc/shadow -F perm=wa"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sec-Defining_Audit_Rules_and_Controls |
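To keep rules across reboots, place them in the /etc/audit/audit.rules file using the same syntax without the auditctl command itself, and use ausearch to confirm that a rule is generating records. The following is a minimal sketch; the passwd_changes and time_change key names are carried over from the examples above and should match whatever -k values you actually use:

# /etc/audit/audit.rules -- persistent equivalents of the auditctl examples
-w /etc/passwd -p wa -k passwd_changes
-a always,exit -F arch=b64 -S adjtimex -S settimeofday -k time_change

# Load the rules from the file and verify that they are active
auditctl -R /etc/audit/audit.rules
auditctl -l

# Search the Audit log for records generated by a particular key
ausearch -k passwd_changes --interpret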
13.4. Configuration examples | 13.4. Configuration examples The following examples provide real-world demonstrations of how SELinux complements the Apache HTTP Server and how full function of the Apache HTTP Server can be maintained. 13.4.1. Running a static site To create a static website, label the .html files for that website with the httpd_sys_content_t type. By default, the Apache HTTP Server cannot write to files that are labeled with the httpd_sys_content_t type. The following example creates a new directory to store files for a read-only website: Use the mkdir utility as root to create a top-level directory: As root, create a /mywebsite/index.html file. Copy and paste the following content into /mywebsite/index.html : To allow the Apache HTTP Server read only access to /mywebsite/ , as well as files and subdirectories under it, label the directory with the httpd_sys_content_t type. Enter the following command as root to add the label change to file-context configuration: Use the restorecon utility as root to make the label changes: For this example, edit the /etc/httpd/conf/httpd.conf file as root. Comment out the existing DocumentRoot option. Add a DocumentRoot "/mywebsite" option. After editing, these options should look as follows: Enter the following command as root to see the status of the Apache HTTP Server. If the server is stopped, start it: If the server is running, restart the service by executing the following command as root (this also applies any changes made to httpd.conf ): Use a web browser to navigate to http://localhost/index.html . The following is displayed: 13.4.2. Sharing NFS and CIFS volumes By default, NFS mounts on the client side are labeled with a default context defined by policy for NFS volumes. In common policies, this default context uses the nfs_t type. Also, by default, Samba shares mounted on the client side are labeled with a default context defined by policy. In common policies, this default context uses the cifs_t type. Depending on policy configuration, services may not be able to read files labeled with the nfs_t or cifs_t types. This may prevent file systems labeled with these types from being mounted and then read or exported by other services. Booleans can be enabled or disabled to control which services are allowed to access the nfs_t and cifs_t types. Enable the httpd_use_nfs Boolean to allow httpd to access and share NFS volumes (labeled with the nfs_t type): Enable the httpd_use_cifs Boolean to allow httpd to access and share CIFS volumes (labeled with the cifs_t type): Note Do not use the -P option if you do not want setsebool changes to persist across reboots. 13.4.3. Sharing files between services Type Enforcement helps prevent processes from accessing files intended for use by another process. For example, by default, Samba cannot read files labeled with the httpd_sys_content_t type, which are intended for use by the Apache HTTP Server. Files can be shared between the Apache HTTP Server, FTP, rsync, and Samba, if the required files are labeled with the public_content_t or public_content_rw_t type. The following example creates a directory and files, and allows that directory and files to be shared (read only) through the Apache HTTP Server, FTP, rsync, and Samba: Use the mkdir utility as root to create a new top-level directory to share files between multiple services: Files and directories that do not match a pattern in file-context configuration may be labeled with the default_t type. 
This type is inaccessible to confined services: As root, create a /shares/index.html file. Copy and paste the following content into /shares/index.html : Labeling /shares/ with the public_content_t type allows read-only access by the Apache HTTP Server, FTP, rsync, and Samba. Enter the following command as root to add the label change to file-context configuration: Use the restorecon utility as root to apply the label changes: To share /shares/ through Samba: Confirm the samba , samba-common , and samba-client packages are installed (version numbers may differ): If any of these packages are not installed, install them by running the following command as root: Edit the /etc/samba/smb.conf file as root. Add the following entry to the bottom of this file to share the /shares/ directory through Samba: A Samba account is required to mount a Samba file system. Enter the following command as root to create a Samba account, where username is an existing Linux user. For example, smbpasswd -a testuser creates a Samba account for the Linux testuser user: If you run the above command, specifying a user name of an account that does not exist on the system, it causes a Cannot locate Unix account for ' username '! error. Start the Samba service: Enter the following command to list the available shares, where username is the Samba account added in step 3. When prompted for a password, enter the password assigned to the Samba account in step 3 (version numbers may differ): Use the mkdir utility to create a new directory. This directory will be used to mount the shares Samba share: Enter the following command as root to mount the shares Samba share to /test/ , replacing username with the user name from step 3: Enter the password for username , which was configured in step 3. View the content of the file, which is being shared through Samba: To share /shares/ through the Apache HTTP Server: Confirm the httpd package is installed (version number may differ): If this package is not installed, use the yum utility as root to install it: Change into the /var/www/html/ directory. Enter the following command as root to create a link (named shares ) to the /shares/ directory: Start the Apache HTTP Server: Use a web browser to navigate to http://localhost/shares . The /shares/index.html file is displayed. By default, the Apache HTTP Server reads an index.html file if it exists. If /shares/ did not have index.html , and instead had file1 , file2 , and file3 , a directory listing would occur when accessing http://localhost/shares : Remove the index.html file: Use the touch utility as root to create three files in /shares/ : Enter the following command as root to see the status of the Apache HTTP Server: If the server is stopped, start it: Use a web browser to navigate to http://localhost/shares . A directory listing is displayed: 13.4.4. Changing port numbers Depending on policy configuration, services may only be allowed to run on certain port numbers. Attempting to change the port a service runs on without changing policy may result in the service failing to start. Use the semanage utility as the root user to list the ports SELinux allows httpd to listen on: By default, SELinux allows httpd to listen on TCP ports 80, 443, 488, 8008, 8009, or 8443. If /etc/httpd/conf/httpd.conf is configured so that httpd listens on any port not listed for http_port_t , httpd fails to start.
To configure httpd to run on a port other than TCP ports 80, 443, 488, 8008, 8009, or 8443: Edit the /etc/httpd/conf/httpd.conf file as root so the Listen option lists a port that is not configured in SELinux policy for httpd . The following example configures httpd to listen on the 10.0.0.1 IP address, and on TCP port 12345: Enter the following command as the root user to add the port to SELinux policy configuration: Confirm that the port is added: If you no longer run httpd on port 12345, use the semanage utility as root to remove the port from policy configuration: | [
"~]# mkdir /mywebsite",
"<html> <h2>index.html from /mywebsite/</h2> </html>",
"~]# semanage fcontext -a -t httpd_sys_content_t \"/mywebsite(/.*)?\"",
"~]# restorecon -R -v /mywebsite restorecon reset /mywebsite context unconfined_u:object_r:default_t:s0->system_u:object_r:httpd_sys_content_t:s0 restorecon reset /mywebsite/index.html context unconfined_u:object_r:default_t:s0->system_u:object_r:httpd_sys_content_t:s0",
"#DocumentRoot \"/var/www/html\" DocumentRoot \"/mywebsite\"",
"~]# systemctl status httpd.service httpd.service - The Apache HTTP Server Loaded: loaded (/usr/lib/systemd/system/httpd.service; disabled) Active: inactive (dead)",
"~]# systemctl start httpd.service",
"~]# systemctl status httpd.service httpd.service - The Apache HTTP Server Loaded: loaded (/usr/lib/systemd/system/httpd.service; disabled) Active: active (running) since Wed 2014-02-05 13:16:46 CET; 2s ago",
"~]# systemctl restart httpd.service",
"index.html from /mywebsite/",
"~]# setsebool -P httpd_use_nfs on",
"~]# setsebool -P httpd_use_cifs on",
"~]# mkdir /shares",
"~]USD ls -dZ /shares drwxr-xr-x root root unconfined_u:object_r:default_t:s0 /shares",
"<html> <body> <p>Hello</p> </body> </html>",
"~]# semanage fcontext -a -t public_content_t \"/shares(/.*)?\"",
"~]# restorecon -R -v /shares/ restorecon reset /shares context unconfined_u:object_r:default_t:s0->system_u:object_r:public_content_t:s0 restorecon reset /shares/index.html context unconfined_u:object_r:default_t:s0->system_u:object_r:public_content_t:s0",
"~]USD rpm -q samba samba-common samba-client samba-3.4.0-0.41.el6.3.i686 samba-common-3.4.0-0.41.el6.3.i686 samba-client-3.4.0-0.41.el6.3.i686",
"~]# yum install package-name",
"[shares] comment = Documents for Apache HTTP Server, FTP, rsync, and Samba path = /shares public = yes writable = no",
"~]# smbpasswd -a testuser New SMB password: Enter a password Retype new SMB password: Enter the same password again Added user testuser.",
"~]# systemctl start smb.service",
"~]USD smbclient -U username -L localhost Enter username 's password: Domain=[ HOSTNAME ] OS=[Unix] Server=[Samba 3.4.0-0.41.el6] Sharename Type Comment --------- ---- ------- shares Disk Documents for Apache HTTP Server, FTP, rsync, and Samba IPCUSD IPC IPC Service (Samba Server Version 3.4.0-0.41.el6) username Disk Home Directories Domain=[ HOSTNAME ] OS=[Unix] Server=[Samba 3.4.0-0.41.el6] Server Comment --------- ------- Workgroup Master --------- -------",
"~]# mkdir /test/",
"~]# mount //localhost/shares /test/ -o user= username",
"~]USD cat /test/index.html <html> <body> <p>Hello</p> </body> </html>",
"~]USD rpm -q httpd httpd-2.2.11-6.i386",
"~]# yum install httpd",
"html]# ln -s /shares/ shares",
"~]# systemctl start httpd.service",
"~]# rm -i /shares/index.html",
"~]# touch /shares/file{1,2,3} ~]# ls -Z /shares/ -rw-r--r-- root root system_u:object_r:public_content_t:s0 file1 -rw-r--r-- root root unconfined_u:object_r:public_content_t:s0 file2 -rw-r--r-- root root unconfined_u:object_r:public_content_t:s0 file3",
"~]# systemctl status httpd.service httpd.service - The Apache HTTP Server Loaded: loaded (/usr/lib/systemd/system/httpd.service; disabled) Active: inactive (dead)",
"~]# systemctl start httpd.service",
"~]# semanage port -l | grep -w http_port_t http_port_t tcp 80, 443, 488, 8008, 8009, 8443",
"Change this to Listen on specific IP addresses as shown below to prevent Apache from glomming onto all bound IP addresses (0.0.0.0) # #Listen 12.34.56.78:80 Listen 10.0.0.1:12345",
"~]# semanage port -a -t http_port_t -p tcp 12345",
"~]# semanage port -l | grep -w http_port_t http_port_t tcp 12345, 80, 443, 488, 8008, 8009, 8443",
"~]# semanage port -d -t http_port_t -p tcp 12345"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/sect-managing_confined_services-the_apache_http_server-configuration_examples |
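The individual steps above can be combined when content is served from a non-default directory on a non-default port. The following is a condensed sketch; the /srv/mysite path and TCP port 8090 are arbitrary examples rather than values required by the policy:

# Label a custom content directory and apply the new context
semanage fcontext -a -t httpd_sys_content_t "/srv/mysite(/.*)?"
restorecon -R -v /srv/mysite

# Allow httpd to bind to an additional port and confirm the change
semanage port -a -t http_port_t -p tcp 8090
semanage port -l | grep -w http_port_t

# Restart httpd after updating DocumentRoot and Listen in httpd.conf
systemctl restart httpd.service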
Chapter 4. address | Chapter 4. address This chapter describes the commands under the address command. 4.1. address group create Create a new Address Group Usage: Table 4.1. Positional arguments Value Summary <name> New address group name Table 4.2. Command arguments Value Summary -h, --help Show this help message and exit --extra-property type=<property_type>,name=<property_name>,value=<property_value> Additional parameters can be passed using this property. Default type of the extra property is string ( str ), but other types can be used as well. Available types are: dict , list , str , bool , int . In case of list type, value can be semicolon-separated list of values. For dict value is semicolon-separated list of the key:value pairs. --description <description> New address group description --address <ip-address> Ip address or cidr (repeat option to set multiple addresses) --project <project> Owner's project (name or id) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. Table 4.3. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 4.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 4.5. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 4.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 4.2. address group delete Delete address group(s) Usage: Table 4.7. Positional arguments Value Summary <address-group> Address group(s) to delete (name or id) Table 4.8. Command arguments Value Summary -h, --help Show this help message and exit 4.3. address group list List address groups Usage: Table 4.9. Command arguments Value Summary -h, --help Show this help message and exit --name <name> List only address groups of given name in output --project <project> List address groups according to their project (name or ID) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. Table 4.10. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 4.11. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 4.12. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 4.13. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. 
you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 4.4. address group set Set address group properties Usage: Table 4.14. Positional arguments Value Summary <address-group> Address group to modify (name or id) Table 4.15. Command arguments Value Summary -h, --help Show this help message and exit --extra-property type=<property_type>,name=<property_name>,value=<property_value> Additional parameters can be passed using this property. Default type of the extra property is string ( str ), but other types can be used as well. Available types are: dict , list , str , bool , int . In case of list type, value can be semicolon-separated list of values. For dict value is semicolon-separated list of the key:value pairs. --name <name> Set address group name --description <description> Set address group description --address <ip-address> Ip address or cidr (repeat option to set multiple addresses) 4.5. address group show Display address group details Usage: Table 4.16. Positional arguments Value Summary <address-group> Address group to display (name or id) Table 4.17. Command arguments Value Summary -h, --help Show this help message and exit Table 4.18. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 4.19. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 4.20. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 4.21. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 4.6. address group unset Unset address group properties Usage: Table 4.22. Positional arguments Value Summary <address-group> Address group to modify (name or id) Table 4.23. Command arguments Value Summary -h, --help Show this help message and exit --address <ip-address> Ip address or cidr (repeat option to unset multiple addresses) 4.7. address scope create Create a new Address Scope Usage: Table 4.24. Positional arguments Value Summary <name> New address scope name Table 4.25. Command arguments Value Summary -h, --help Show this help message and exit --extra-property type=<property_type>,name=<property_name>,value=<property_value> Additional parameters can be passed using this property. Default type of the extra property is string ( str ), but other types can be used as well. Available types are: dict , list , str , bool , int . In case of list type, value can be semicolon-separated list of values. For dict value is semicolon-separated list of the key:value pairs. --ip-version {4,6} Ip version (default is 4) --project <project> Owner's project (name or id) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. 
--share Share the address scope between projects --no-share Do not share the address scope between projects (default) Table 4.26. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 4.27. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 4.28. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 4.29. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 4.8. address scope delete Delete address scope(s) Usage: Table 4.30. Positional arguments Value Summary <address-scope> Address scope(s) to delete (name or id) Table 4.31. Command arguments Value Summary -h, --help Show this help message and exit 4.9. address scope list List address scopes Usage: Table 4.32. Command arguments Value Summary -h, --help Show this help message and exit --name <name> List only address scopes of given name in output --ip-version <ip-version> List address scopes of given ip version networks (4 or 6) --project <project> List address scopes according to their project (name or ID) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --share List address scopes shared between projects --no-share List address scopes not shared between projects Table 4.33. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 4.34. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 4.35. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 4.36. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 4.10. address scope set Set address scope properties Usage: Table 4.37. Positional arguments Value Summary <address-scope> Address scope to modify (name or id) Table 4.38. Command arguments Value Summary -h, --help Show this help message and exit --extra-property type=<property_type>,name=<property_name>,value=<property_value> Additional parameters can be passed using this property. Default type of the extra property is string ( str ), but other types can be used as well. 
Available types are: dict , list , str , bool , int . In case of list type, value can be semicolon-separated list of values. For dict value is semicolon-separated list of the key:value pairs. --name <name> Set address scope name --share Share the address scope between projects --no-share Do not share the address scope between projects 4.11. address scope show Display address scope details Usage: Table 4.39. Positional arguments Value Summary <address-scope> Address scope to display (name or id) Table 4.40. Command arguments Value Summary -h, --help Show this help message and exit Table 4.41. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 4.42. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 4.43. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 4.44. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack address group create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--extra-property type=<property_type>,name=<property_name>,value=<property_value>] [--description <description>] [--address <ip-address>] [--project <project>] [--project-domain <project-domain>] <name>",
"openstack address group delete [-h] <address-group> [<address-group> ...]",
"openstack address group list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--name <name>] [--project <project>] [--project-domain <project-domain>]",
"openstack address group set [-h] [--extra-property type=<property_type>,name=<property_name>,value=<property_value>] [--name <name>] [--description <description>] [--address <ip-address>] <address-group>",
"openstack address group show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <address-group>",
"openstack address group unset [-h] [--address <ip-address>] <address-group>",
"openstack address scope create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--extra-property type=<property_type>,name=<property_name>,value=<property_value>] [--ip-version {4,6}] [--project <project>] [--project-domain <project-domain>] [--share | --no-share] <name>",
"openstack address scope delete [-h] <address-scope> [<address-scope> ...]",
"openstack address scope list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--name <name>] [--ip-version <ip-version>] [--project <project>] [--project-domain <project-domain>] [--share | --no-share]",
"openstack address scope set [-h] [--extra-property type=<property_type>,name=<property_name>,value=<property_value>] [--name <name>] [--share | --no-share] <address-scope>",
"openstack address scope show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <address-scope>"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/command_line_interface_reference/address |
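A typical workflow creates an address scope or group first and then adjusts it with the set and unset subcommands. The following sketch uses made-up names and addresses (scope1, group1, 192.0.2.0/24, 198.51.100.10); substitute values that exist in your environment:

# Create a shared IPv4 address scope
openstack address scope create --ip-version 4 --share scope1

# Create an address group with two entries, then inspect it
openstack address group create --address 192.0.2.0/24 --address 198.51.100.10 group1
openstack address group show group1

# Remove one address, then delete the group when it is no longer needed
openstack address group unset --address 198.51.100.10 group1
openstack address group delete group1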
Distributed compute node and storage deployment | Distributed compute node and storage deployment Red Hat OpenStack Platform 17.0 Deploying Red Hat OpenStack Platform distributed compute node technologies OpenStack Documentation Team [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/distributed_compute_node_and_storage_deployment/index |
probe::sunrpc.svc.recv | probe::sunrpc.svc.recv Name probe::sunrpc.svc.recv - Listen for the RPC request on any socket Synopsis sunrpc.svc.recv Values sv_nrthreads the number of concurrent threads sv_name the service name sv_prog the number of the program timeout the timeout of waiting for data | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-sunrpc-svc-recv |
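For example, the probe can be attached from the command line to print one line for each RPC request a service receives. This is a minimal sketch and assumes the systemtap package and kernel debuginfo matching the running kernel are installed:

# Print the service name, program number, thread count, and timeout per request
stap -e 'probe sunrpc.svc.recv { printf("%s (prog %d): %d threads, timeout %d\n", sv_name, sv_prog, sv_nrthreads, timeout) }'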
2.3. Global Utilization | 2.3. Global Utilization The Global Utilization section shows the system utilization of the CPU, Memory and Storage. Figure 2.3. Global Utilization The top section shows the percentage of the available CPU, memory or storage and the over commit ratio. For example, the over commit ratio for the CPU is calculated by dividing the number of virtual cores by the number of physical cores that are available for the running virtual machines based on the latest data in Data Warehouse. The donut displays the usage in percentage for the CPU, memory or storage and shows the average usage for all hosts based on the average usage in the last 5 minutes. Hovering over a section of the donut will display the value of the selected section. The line graph at the bottom displays the trend in the last 24 hours. Each data point shows the average usage for a specific hour. Hovering over a point on the graph displays the time and the percentage used for the CPU graph and the amount of usage for the memory and storage graphs. 2.3.1. Top Utilized Resources Figure 2.4. Top Utilized Resources (Memory) Clicking the donut in the global utilization section of the Dashboard will display a list of the top utilized resources for the CPU, memory or storage. For CPU and memory the pop-up shows a list of the ten hosts and virtual machines with the highest usage. For storage the pop-up shows a list of the top ten utilized storage domains and virtual machines. The arrow to the right of the usage bar shows the trend of usage for that resource in the last minute. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/sect-global_utilization |
5.3.14. Renaming a Volume Group | 5.3.14. Renaming a Volume Group Use the vgrename command to rename an existing volume group. Either of the following commands renames the existing volume group vg02 to my_volume_group | [
"vgrename /dev/vg02 /dev/my_volume_group",
"vgrename vg02 my_volume_group"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/VG_rename |
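A quick way to verify the rename is to list volume groups before and after with the vgs command; the sketch below assumes the vg02 group from the example and, for illustration, a logical volume named lvol0 under it:

# Confirm the volume group exists, rename it, then confirm the new name
vgs
vgrename vg02 my_volume_group
vgs

# References to the old device path, for example /etc/fstab entries that use
# /dev/vg02/lvol0, must be updated to /dev/my_volume_group/lvol0 afterwards.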
2.6. Removing Control Groups | 2.6. Removing Control Groups Use cgdelete to remove cgroups. Its syntax is similar to that of cgcreate . Run the following command: where: subsystems is a comma-separated list of subsystems. path is the path to the cgroup relative to the root of the hierarchy. For example: cgdelete can also recursively remove all subgroups with the option -r . When you delete a cgroup, all its tasks move to its parent group. | [
"cgdelete subsystems : path",
"~]# cgdelete cpu,net_cls:/test-subgroup"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/resource_management_guide/sec-removing_cgroups |
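For instance, a group that still contains subgroups can be removed in a single step with the -r option, and the result can be checked with lscgroup from the libcgroup tools; the test-subgroup name below is carried over from the example above:

# Remove the cgroup and all of its subgroups recursively
cgdelete -r cpu,net_cls:/test-subgroup

# List the remaining cgroups in the cpu hierarchy to confirm the removal
lscgroup cpu:/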
Chapter 4. AlertRelabelConfig [monitoring.openshift.io/v1] | Chapter 4. AlertRelabelConfig [monitoring.openshift.io/v1] Description AlertRelabelConfig defines a set of relabel configs for alerts. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec describes the desired state of this AlertRelabelConfig object. status object status describes the current state of this AlertRelabelConfig object. 4.1.1. .spec Description spec describes the desired state of this AlertRelabelConfig object. Type object Required configs Property Type Description configs array configs is a list of sequentially evaluated alert relabel configs. configs[] object RelabelConfig allows dynamic rewriting of label sets for alerts. See Prometheus documentation: - https://prometheus.io/docs/prometheus/latest/configuration/configuration/#alert_relabel_configs - https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config 4.1.2. .spec.configs Description configs is a list of sequentially evaluated alert relabel configs. Type array 4.1.3. .spec.configs[] Description RelabelConfig allows dynamic rewriting of label sets for alerts. See Prometheus documentation: - https://prometheus.io/docs/prometheus/latest/configuration/configuration/#alert_relabel_configs - https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config Type object Property Type Description action string action to perform based on regex matching. Must be one of: 'Replace', 'Keep', 'Drop', 'HashMod', 'LabelMap', 'LabelDrop', or 'LabelKeep'. Default is: 'Replace' modulus integer modulus to take of the hash of the source label values. This can be combined with the 'HashMod' action to set 'target_label' to the 'modulus' of a hash of the concatenated 'source_labels'. This is only valid if sourceLabels is not empty and action is not 'LabelKeep' or 'LabelDrop'. regex string regex against which the extracted value is matched. Default is: '(.*)' regex is required for all actions except 'HashMod' replacement string replacement value against which a regex replace is performed if the regular expression matches. This is required if the action is 'Replace' or 'LabelMap' and forbidden for actions 'LabelKeep' and 'LabelDrop'. Regex capture groups are available. Default is: 'USD1' separator string separator placed between concatenated source label values. When omitted, Prometheus will use its default value of ';'. sourceLabels array (string) sourceLabels select values from existing labels. 
Their content is concatenated using the configured separator and matched against the configured regular expression for the 'Replace', 'Keep', and 'Drop' actions. Not allowed for actions 'LabelKeep' and 'LabelDrop'. targetLabel string targetLabel to which the resulting value is written in a 'Replace' action. It is required for 'Replace' and 'HashMod' actions and forbidden for actions 'LabelKeep' and 'LabelDrop'. Regex capture groups are available. 4.1.4. .status Description status describes the current state of this AlertRelabelConfig object. Type object Property Type Description conditions array conditions contains details on the state of the AlertRelabelConfig, may be empty. conditions[] object Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } 4.1.5. .status.conditions Description conditions contains details on the state of the AlertRelabelConfig, may be empty. Type array 4.1.6. .status.conditions[] Description Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } Type object Required lastTransitionTime message reason status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. --- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. 
The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) 4.2. API endpoints The following API endpoints are available: /apis/monitoring.openshift.io/v1/alertrelabelconfigs GET : list objects of kind AlertRelabelConfig /apis/monitoring.openshift.io/v1/namespaces/{namespace}/alertrelabelconfigs DELETE : delete collection of AlertRelabelConfig GET : list objects of kind AlertRelabelConfig POST : create an AlertRelabelConfig /apis/monitoring.openshift.io/v1/namespaces/{namespace}/alertrelabelconfigs/{name} DELETE : delete an AlertRelabelConfig GET : read the specified AlertRelabelConfig PATCH : partially update the specified AlertRelabelConfig PUT : replace the specified AlertRelabelConfig /apis/monitoring.openshift.io/v1/namespaces/{namespace}/alertrelabelconfigs/{name}/status GET : read status of the specified AlertRelabelConfig PATCH : partially update status of the specified AlertRelabelConfig PUT : replace status of the specified AlertRelabelConfig 4.2.1. /apis/monitoring.openshift.io/v1/alertrelabelconfigs HTTP method GET Description list objects of kind AlertRelabelConfig Table 4.1. HTTP responses HTTP code Reponse body 200 - OK AlertRelabelConfigList schema 401 - Unauthorized Empty 4.2.2. /apis/monitoring.openshift.io/v1/namespaces/{namespace}/alertrelabelconfigs HTTP method DELETE Description delete collection of AlertRelabelConfig Table 4.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind AlertRelabelConfig Table 4.3. HTTP responses HTTP code Reponse body 200 - OK AlertRelabelConfigList schema 401 - Unauthorized Empty HTTP method POST Description create an AlertRelabelConfig Table 4.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.5. Body parameters Parameter Type Description body AlertRelabelConfig schema Table 4.6. HTTP responses HTTP code Reponse body 200 - OK AlertRelabelConfig schema 201 - Created AlertRelabelConfig schema 202 - Accepted AlertRelabelConfig schema 401 - Unauthorized Empty 4.2.3. /apis/monitoring.openshift.io/v1/namespaces/{namespace}/alertrelabelconfigs/{name} Table 4.7. Global path parameters Parameter Type Description name string name of the AlertRelabelConfig HTTP method DELETE Description delete an AlertRelabelConfig Table 4.8. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 4.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified AlertRelabelConfig Table 4.10. HTTP responses HTTP code Reponse body 200 - OK AlertRelabelConfig schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified AlertRelabelConfig Table 4.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.12. HTTP responses HTTP code Reponse body 200 - OK AlertRelabelConfig schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified AlertRelabelConfig Table 4.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.14. Body parameters Parameter Type Description body AlertRelabelConfig schema Table 4.15. 
HTTP responses HTTP code Reponse body 200 - OK AlertRelabelConfig schema 201 - Created AlertRelabelConfig schema 401 - Unauthorized Empty 4.2.4. /apis/monitoring.openshift.io/v1/namespaces/{namespace}/alertrelabelconfigs/{name}/status Table 4.16. Global path parameters Parameter Type Description name string name of the AlertRelabelConfig HTTP method GET Description read status of the specified AlertRelabelConfig Table 4.17. HTTP responses HTTP code Reponse body 200 - OK AlertRelabelConfig schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified AlertRelabelConfig Table 4.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.19. HTTP responses HTTP code Reponse body 200 - OK AlertRelabelConfig schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified AlertRelabelConfig Table 4.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.21. Body parameters Parameter Type Description body AlertRelabelConfig schema Table 4.22. 
HTTP responses HTTP code Response body 200 - OK AlertRelabelConfig schema 201 - Created AlertRelabelConfig schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/monitoring_apis/alertrelabelconfig-monitoring-openshift-io-v1
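As a short illustration, the resource below drops any alert whose severity label is info before it is forwarded to Alertmanager. It is a minimal sketch: the object name, the openshift-monitoring namespace, and the label values are assumptions to adapt to your cluster, and the severity values used by your alerting rules may differ:

# Create the relabel config with the oc client
oc apply -f - <<'EOF'
apiVersion: monitoring.openshift.io/v1
kind: AlertRelabelConfig
metadata:
  name: drop-info-alerts
  namespace: openshift-monitoring
spec:
  configs:
  - sourceLabels: [severity]
    regex: info
    action: Drop
EOF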
Chapter 8. Troubleshooting Ceph placement groups | Chapter 8. Troubleshooting Ceph placement groups This section contains information about fixing the most common errors related to the Ceph Placement Groups (PGs). 8.1. Prerequisites Verify your network connection. Ensure that Monitors are able to form a quorum. Ensure that all healthy OSDs are up and in , and the backfilling and recovery processes are finished. 8.2. Most common Ceph placement groups errors The following table lists the most common error messages that are returned by the ceph health detail command. The table provides links to corresponding sections that explain the errors and point to specific procedures to fix the problems. In addition, you can list placement groups that are stuck in a state that is not optimal. See Section 8.3, "Listing placement groups stuck in stale , inactive , or unclean state" for details. 8.2.1. Prerequisites A running Red Hat Ceph Storage cluster. A running Ceph Object Gateway. 8.2.2. Placement group error messages A table of common placement group error messages, and a potential fix. Error message See HEALTH_ERR pgs down Placement groups are down pgs inconsistent Inconsistent placement groups scrub errors Inconsistent placement groups HEALTH_WARN pgs stale Stale placement groups unfound Unfound objects 8.2.3. Stale placement groups The ceph health command lists some Placement Groups (PGs) as stale : What This Means The Monitor marks a placement group as stale when it does not receive any status update from the primary OSD of the placement group's acting set or when other OSDs reported that the primary OSD is down . Usually, PGs enter the stale state after you start the storage cluster and until the peering process completes. However, when the PGs remain stale for longer than expected, it might indicate that the primary OSD for those PGs is down or not reporting PG statistics to the Monitor. When the primary OSD storing stale PGs is back up , Ceph starts to recover the PGs. The mon_osd_report_timeout setting determines how often OSDs report PGs statistics to Monitors. By default, this parameter is set to 0.5 , which means that OSDs report the statistics every half a second. To Troubleshoot This Problem Identify which PGs are stale and on what OSDs they are stored. The error message includes information similar to the following example: Example Troubleshoot any problems with the OSDs that are marked as down . For details, see Down OSDs . Additional Resources The Monitoring Placement Group Sets section in the Administration Guide for Red Hat Ceph Storage 5 8.2.4. Inconsistent placement groups Some placement groups are marked as active + clean + inconsistent and the ceph health detail returns an error message similar to the following one: What This Means When Ceph detects inconsistencies in one or more replicas of an object in a placement group, it marks the placement group as inconsistent . The most common inconsistencies are: Objects have an incorrect size. Objects are missing from one replica after a recovery finished. In most cases, errors during scrubbing cause inconsistency within placement groups. To Troubleshoot This Problem Log in to the Cephadm shell: Example Determine which placement group is in the inconsistent state: Determine why the placement group is inconsistent . 
Start the deep scrubbing process on the placement group: Syntax Replace ID with the ID of the inconsistent placement group, for example: Search the output of the ceph -w for any messages related to that placement group: Syntax Replace ID with the ID of the inconsistent placement group, for example: If the output includes any error messages similar to the following ones, you can repair the inconsistent placement group. See Repairing inconsistent placement groups for details. Syntax If the output includes any error messages similar to the following ones, it is not safe to repair the inconsistent placement group because you can lose data. Open a support ticket in this situation. See Contacting Red Hat support for details. Additional Resources See the Listing placement group inconsistencies in the Red Hat Ceph Storage Troubleshooting Guide . See the Ceph data integrity section in the Red Hat Ceph Storage Architecture Guide . See the Scrubbing the OSD section in the Red Hat Ceph Storage Configuration Guide . 8.2.5. Unclean placement groups The ceph health command returns an error message similar to the following one: What This Means Ceph marks a placement group as unclean if it has not achieved the active+clean state for the number of seconds specified in the mon_pg_stuck_threshold parameter in the Ceph configuration file. The default value of mon_pg_stuck_threshold is 300 seconds. If a placement group is unclean , it contains objects that are not replicated the number of times specified in the osd_pool_default_size parameter. The default value of osd_pool_default_size is 3 , which means that Ceph creates three replicas. Usually, unclean placement groups indicate that some OSDs might be down . To Troubleshoot This Problem Determine which OSDs are down : Troubleshoot and fix any problems with the OSDs. See Down OSDs for details. Additional Resources Listing placement groups stuck in stale inactive or unclean state . 8.2.6. Inactive placement groups The ceph health command returns an error message similar to the following one: What This Means Ceph marks a placement group as inactive if it has not be active for the number of seconds specified in the mon_pg_stuck_threshold parameter in the Ceph configuration file. The default value of mon_pg_stuck_threshold is 300 seconds. Usually, inactive placement groups indicate that some OSDs might be down . To Troubleshoot This Problem Determine which OSDs are down : Troubleshoot and fix any problems with the OSDs. Additional Resources Listing placement groups stuck in stale inactive or unclean state See Down OSDs for details. 8.2.7. Placement groups are down The ceph health detail command reports that some placement groups are down : What This Means In certain cases, the peering process can be blocked, which prevents a placement group from becoming active and usable. Usually, a failure of an OSD causes the peering failures. To Troubleshoot This Problem Determine what blocks the peering process: Syntax Replace ID with the ID of the placement group that is down : Example The recovery_state section includes information on why the peering process is blocked. If the output includes the peering is blocked due to down osds error message, see Down OSDs . If you see any other error message, open a support ticket. See Contacting Red Hat Support service for details. Additional Resources The Ceph OSD peering section in the Red Hat Ceph Storage Administration Guide . 8.2.8. 
Unfound objects The ceph health command returns an error message similar to the following one, containing the unfound keyword: What This Means Ceph marks objects as unfound when it knows these objects or their newer copies exist but it is unable to find them. As a consequence, Ceph cannot recover such objects and proceed with the recovery process. An Example Situation A placement group stores data on osd.1 and osd.2 . osd.1 goes down . osd.2 handles some write operations. osd.1 comes up . A peering process between osd.1 and osd.2 starts, and the objects missing on osd.1 are queued for recovery. Before Ceph copies new objects, osd.2 goes down . As a result, osd.1 knows that these objects exist, but there is no OSD that has a copy of the objects. In this scenario, Ceph is waiting for the failed node to be accessible again, and the unfound objects block the recovery process. To Troubleshoot This Problem Log in to the Cephadm shell: Example Determine which placement group contains unfound objects: List more information about the placement group: Syntax Replace ID with the ID of the placement group containing the unfound objects: Example The might_have_unfound section includes OSDs where Ceph tried to locate the unfound objects: The already probed status indicates that Ceph cannot locate the unfound objects in that OSD. The osd is down status indicates that Ceph cannot contact that OSD. Troubleshoot the OSDs that are marked as down . See Down OSDs for details. If you are unable to fix the problem that causes the OSD to be down , open a support ticket. See Contacting Red Hat Support for service for details. 8.3. Listing placement groups stuck in stale , inactive , or unclean state After a failure, placement groups enter states like degraded or peering . These states indicate normal progression through the failure recovery process. However, if a placement group stays in one of these states for a longer time than expected, it can be an indication of a larger problem. The Monitors report when placement groups get stuck in a state that is not optimal. The mon_pg_stuck_threshold option in the Ceph configuration file determines the number of seconds after which placement groups are considered inactive , unclean , or stale . The following table lists these states together with a short explanation. State What it means Most common causes See inactive The PG has not been able to service read/write requests. Peering problems Inactive placement groups unclean The PG contains objects that are not replicated the desired number of times. Something is preventing the PG from recovering. unfound objects OSDs are down Incorrect configuration Unclean placement groups stale The status of the PG has not been updated by a ceph-osd daemon. OSDs are down Stale placement groups Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure Log into the Cephadm shell: Example List the stuck PGs: Example Additional Resources See the Placement Group States section in the Red Hat Ceph Storage Administration Guide . 8.4. Listing placement group inconsistencies Use the rados utility to list inconsistencies in various replicas of objects. Use the --format=json-pretty option to list a more detailed output. This section covers the listing of: Inconsistent placement group in a pool Inconsistent objects in a placement group Inconsistent snapshot sets in a placement group Prerequisites A running Red Hat Ceph Storage cluster in a healthy state. Root-level access to the node.
Procedure List all the inconsistent placement groups in a pool: Syntax Example List inconsistent objects in a placement group with ID: Syntax Example The following fields are important to determine what causes the inconsistency: name : The name of the object with inconsistent replicas. nspace : The namespace that is a logical separation of a pool. It is empty by default. locator : The key that is used as an alternative to the object name for placement. snap : The snapshot ID of the object. The only writable version of the object is called head . If an object is a clone, this field includes its sequential ID. version : The version ID of the object with inconsistent replicas. Each write operation to an object increments it. errors : A list of errors that indicate inconsistencies between shards without determining which shard or shards are incorrect. See the shard array to further investigate the errors. data_digest_mismatch : The digest of the replica read from one OSD is different from the other OSDs. size_mismatch : The size of a clone or the head object does not match the expectation. read_error : This error indicates inconsistencies caused most likely by disk errors. union_shard_error : The union of all errors specific to shards. These errors are connected to a faulty shard. The errors that end with oi indicate that you have to compare the information from a faulty object to the information of the selected object. See the shard array to further investigate the errors. In the above example, the object replica stored on osd.2 has a different digest than the replicas stored on osd.0 and osd.1 . Specifically, the digest of the replica read from osd.2 is 0xffffffff instead of the expected 0xe978e67f . In addition, the size of the replica read from osd.2 is 0, while the size reported by osd.0 and osd.1 is 968. List inconsistent sets of snapshots: Syntax Example The command returns the following errors: ss_attr_missing : One or more attributes are missing. Attributes are information about snapshots encoded into a snapshot set as a list of key-value pairs. ss_attr_corrupted : One or more attributes fail to decode. clone_missing : A clone is missing. snapset_mismatch : The snapshot set is inconsistent by itself. head_mismatch : The snapshot set indicates that head exists or not, but the scrub results report otherwise. headless : The head of the snapshot set is missing. size_mismatch : The size of a clone or the head object does not match the expectation. Additional Resources Inconsistent placement groups section in the Red Hat Ceph Storage Troubleshooting Guide . Repairing inconsistent placement groups section in the Red Hat Ceph Storage Troubleshooting Guide . 8.5. Repairing inconsistent placement groups Due to an error during deep scrubbing, some placement groups can include inconsistencies. Ceph reports such placement groups as inconsistent : Warning You can repair only certain inconsistencies. Do not repair the placement groups if the Ceph logs include the following errors: Open a support ticket instead. See Contacting Red Hat Support for service for details. Prerequisites Root-level access to the Ceph Monitor node. Procedure Repair the inconsistent placement groups: Syntax Replace ID with the ID of the inconsistent placement group. Additional Resources See the Inconsistent placement groups section in the Red Hat Ceph Storage Troubleshooting Guide . See the Listing placement group inconsistencies section in the Red Hat Ceph Storage Troubleshooting Guide . 8.6.
Increasing the placement group Insufficient Placement Group (PG) count impacts the performance of the Ceph cluster and data distribution. It is one of the main causes of the nearfull osds error messages. The recommended ratio is between 100 and 300 PGs per OSD. This ratio can decrease when you add more OSDs to the cluster. The pg_num and pgp_num parameters determine the PG count. These parameters are configured per each pool, and therefore, you must adjust each pool with low PG count separately. Important Increasing the PG count is the most intensive process that you can perform on a Ceph cluster. This process might have a serious performance impact if not done in a slow and methodical way. Once you increase pgp_num , you will not be able to stop or reverse the process and you must complete it. Consider increasing the PG count outside of business critical processing time allocation, and alert all clients about the potential performance impact. Do not change the PG count if the cluster is in the HEALTH_ERR state. Prerequisites A running Red Hat Ceph Storage cluster in a healthy state. Root-level access to the node. Procedure Reduce the impact of data redistribution and recovery on individual OSDs and OSD hosts: Lower the value of the osd max backfills , osd_recovery_max_active , and osd_recovery_op_priority parameters: Disable the shallow and deep scrubbing: Use the Ceph Placement Groups (PGs) per Pool Calculator to calculate the optimal value of the pg_num and pgp_num parameters. Increase the pg_num value in small increments until you reach the desired value. Determine the starting increment value. Use a very low value that is a power of two, and increase it when you determine the impact on the cluster. The optimal value depends on the pool size, OSD count, and client I/O load. Increment the pg_num value: Syntax Specify the pool name and the new value, for example: Example Monitor the status of the cluster: Example The PGs state will change from creating to active+clean . Wait until all PGs are in the active+clean state. Increase the pgp_num value in small increments until you reach the desired value: Determine the starting increment value. Use a very low value that is a power of two, and increase it when you determine the impact on the cluster. The optimal value depends on the pool size, OSD count, and client I/O load. Increment the pgp_num value: Syntax Specify the pool name and the new value, for example: Monitor the status of the cluster: The PGs state will change through peering , wait_backfill , backfilling , recover , and others. Wait until all PGs are in the active+clean state. Repeat the steps for all pools with insufficient PG count. Set osd max backfills , osd_recovery_max_active , and osd_recovery_op_priority to their default values: Enable the shallow and deep scrubbing: Additional Resources See the Nearfull OSDs See the Monitoring Placement Group Sets section in the Red Hat Ceph Storage Administration Guide . 8.7. Additional Resources See Chapter 3, Troubleshooting networking issues for details. See Chapter 4, Troubleshooting Ceph Monitors for details about troubleshooting the most common errors related to Ceph Monitors. See Chapter 5, Troubleshooting Ceph OSDs for details about troubleshooting the most common errors related to Ceph OSDs. See the Auto-scaling placement groups section in the Red Hat Ceph Storage Storage Strategies Guide for more information on PG autoscaler. | [
"HEALTH_WARN 24 pgs stale; 3/300 in osds are down",
"ceph health detail HEALTH_WARN 24 pgs stale; 3/300 in osds are down pg 2.5 is stuck stale+active+remapped, last acting [2,0] osd.10 is down since epoch 23, last address 192.168.106.220:6800/11080 osd.11 is down since epoch 13, last address 192.168.106.220:6803/11539 osd.12 is down since epoch 24, last address 192.168.106.220:6806/11861",
"HEALTH_ERR 1 pgs inconsistent; 2 scrub errors pg 0.6 is active+clean+inconsistent, acting [0,1,2] 2 scrub errors",
"cephadm shell",
"ceph health detail HEALTH_ERR 1 pgs inconsistent; 2 scrub errors pg 0.6 is active+clean+inconsistent, acting [0,1,2] 2 scrub errors",
"ceph pg deep-scrub ID",
"ceph pg deep-scrub 0.6 instructing pg 0.6 on osd.0 to deep-scrub",
"ceph -w | grep ID",
"ceph -w | grep 0.6 2022-05-26 01:35:36.778215 osd.106 [ERR] 0.6 deep-scrub stat mismatch, got 636/635 objects, 0/0 clones, 0/0 dirty, 0/0 omap, 0/0 hit_set_archive, 0/0 whiteouts, 1855455/1854371 bytes. 2022-05-26 01:35:36.788334 osd.106 [ERR] 0.6 deep-scrub 1 errors",
"PG . ID shard OSD : soid OBJECT missing attr , missing attr _ATTRIBUTE_TYPE PG . ID shard OSD : soid OBJECT digest 0 != known digest DIGEST , size 0 != known size SIZE PG . ID shard OSD : soid OBJECT size 0 != known size SIZE PG . ID deep-scrub stat mismatch, got MISMATCH PG . ID shard OSD : soid OBJECT candidate had a read error, digest 0 != known digest DIGEST",
"PG . ID shard OSD : soid OBJECT digest DIGEST != known digest DIGEST PG . ID shard OSD : soid OBJECT omap_digest DIGEST != known omap_digest DIGEST",
"HEALTH_WARN 197 pgs stuck unclean",
"ceph osd tree",
"HEALTH_WARN 197 pgs stuck inactive",
"ceph osd tree",
"HEALTH_ERR 7 pgs degraded; 12 pgs down; 12 pgs peering; 1 pgs recovering; 6 pgs stuck unclean; 114/3300 degraded (3.455%); 1/3 in osds are down pg 0.5 is down+peering pg 1.4 is down+peering osd.1 is down since epoch 69, last address 192.168.106.220:6801/8651",
"ceph pg ID query",
"ceph pg 0.5 query { \"state\": \"down+peering\", \"recovery_state\": [ { \"name\": \"Started\\/Primary\\/Peering\\/GetInfo\", \"enter_time\": \"2021-08-06 14:40:16.169679\", \"requested_info_from\": []}, { \"name\": \"Started\\/Primary\\/Peering\", \"enter_time\": \"2021-08-06 14:40:16.169659\", \"probing_osds\": [ 0, 1], \"blocked\": \"peering is blocked due to down osds\", \"down_osds_we_would_probe\": [ 1], \"peering_blocked_by\": [ { \"osd\": 1, \"current_lost_at\": 0, \"comment\": \"starting or marking this osd lost may let us proceed\"}]}, { \"name\": \"Started\", \"enter_time\": \"2021-08-06 14:40:16.169513\"} ] }",
"HEALTH_WARN 1 pgs degraded; 78/3778 unfound (2.065%)",
"cephadm shell",
"ceph health detail HEALTH_WARN 1 pgs recovering; 1 pgs stuck unclean; recovery 5/937611 objects degraded (0.001%); 1/312537 unfound (0.000%) pg 3.8a5 is stuck unclean for 803946.712780, current state active+recovering, last acting [320,248,0] pg 3.8a5 is active+recovering, acting [320,248,0], 1 unfound recovery 5/937611 objects degraded (0.001%); **1/312537 unfound (0.000%)**",
"ceph pg ID query",
"ceph pg 3.8a5 query { \"state\": \"active+recovering\", \"epoch\": 10741, \"up\": [ 320, 248, 0], \"acting\": [ 320, 248, 0], <snip> \"recovery_state\": [ { \"name\": \"Started\\/Primary\\/Active\", \"enter_time\": \"2021-08-28 19:30:12.058136\", \"might_have_unfound\": [ { \"osd\": \"0\", \"status\": \"already probed\"}, { \"osd\": \"248\", \"status\": \"already probed\"}, { \"osd\": \"301\", \"status\": \"already probed\"}, { \"osd\": \"362\", \"status\": \"already probed\"}, { \"osd\": \"395\", \"status\": \"already probed\"}, { \"osd\": \"429\", \"status\": \"osd is down\"}], \"recovery_progress\": { \"backfill_targets\": [], \"waiting_on_backfill\": [], \"last_backfill_started\": \"0\\/\\/0\\/\\/-1\", \"backfill_info\": { \"begin\": \"0\\/\\/0\\/\\/-1\", \"end\": \"0\\/\\/0\\/\\/-1\", \"objects\": []}, \"peer_backfill_info\": [], \"backfills_in_flight\": [], \"recovering\": [], \"pg_backend\": { \"pull_from_peer\": [], \"pushing\": []}}, \"scrub\": { \"scrubber.epoch_start\": \"0\", \"scrubber.active\": 0, \"scrubber.block_writes\": 0, \"scrubber.finalizing\": 0, \"scrubber.waiting_on\": 0, \"scrubber.waiting_on_whom\": []}}, { \"name\": \"Started\", \"enter_time\": \"2021-08-28 19:30:11.044020\"}],",
"cephadm shell",
"ceph pg dump_stuck inactive ceph pg dump_stuck unclean ceph pg dump_stuck stale",
"rados list-inconsistent-pg POOL --format=json-pretty",
"rados list-inconsistent-pg data --format=json-pretty [0.6]",
"rados list-inconsistent-obj PLACEMENT_GROUP_ID",
"rados list-inconsistent-obj 0.6 { \"epoch\": 14, \"inconsistents\": [ { \"object\": { \"name\": \"image1\", \"nspace\": \"\", \"locator\": \"\", \"snap\": \"head\", \"version\": 1 }, \"errors\": [ \"data_digest_mismatch\", \"size_mismatch\" ], \"union_shard_errors\": [ \"data_digest_mismatch_oi\", \"size_mismatch_oi\" ], \"selected_object_info\": \"0:602f83fe:::foo:head(16'1 client.4110.0:1 dirty|data_digest|omap_digest s 968 uv 1 dd e978e67f od ffffffff alloc_hint [0 0 0])\", \"shards\": [ { \"osd\": 0, \"errors\": [], \"size\": 968, \"omap_digest\": \"0xffffffff\", \"data_digest\": \"0xe978e67f\" }, { \"osd\": 1, \"errors\": [], \"size\": 968, \"omap_digest\": \"0xffffffff\", \"data_digest\": \"0xe978e67f\" }, { \"osd\": 2, \"errors\": [ \"data_digest_mismatch_oi\", \"size_mismatch_oi\" ], \"size\": 0, \"omap_digest\": \"0xffffffff\", \"data_digest\": \"0xffffffff\" } ] } ] }",
"rados list-inconsistent-snapset PLACEMENT_GROUP_ID",
"rados list-inconsistent-snapset 0.23 --format=json-pretty { \"epoch\": 64, \"inconsistents\": [ { \"name\": \"obj5\", \"nspace\": \"\", \"locator\": \"\", \"snap\": \"0x00000001\", \"headless\": true }, { \"name\": \"obj5\", \"nspace\": \"\", \"locator\": \"\", \"snap\": \"0x00000002\", \"headless\": true }, { \"name\": \"obj5\", \"nspace\": \"\", \"locator\": \"\", \"snap\": \"head\", \"ss_attr_missing\": true, \"extra_clones\": true, \"extra clones\": [ 2, 1 ] } ]",
"HEALTH_ERR 1 pgs inconsistent; 2 scrub errors pg 0.6 is active+clean+inconsistent, acting [0,1,2] 2 scrub errors",
"_PG_._ID_ shard _OSD_: soid _OBJECT_ digest _DIGEST_ != known digest _DIGEST_ _PG_._ID_ shard _OSD_: soid _OBJECT_ omap_digest _DIGEST_ != known omap_digest _DIGEST_",
"ceph pg repair ID",
"ceph tell osd.* injectargs '--osd_max_backfills 1 --osd_recovery_max_active 1 --osd_recovery_op_priority 1'",
"ceph osd set noscrub ceph osd set nodeep-scrub",
"ceph osd pool set POOL pg_num VALUE",
"ceph osd pool set data pg_num 4",
"ceph -s",
"ceph osd pool set POOL pgp_num VALUE",
"ceph osd pool set data pgp_num 4",
"ceph -s",
"ceph tell osd.* injectargs '--osd_max_backfills 1 --osd_recovery_max_active 3 --osd_recovery_op_priority 3'",
"ceph osd unset noscrub ceph osd unset nodeep-scrub"
]
| https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/troubleshooting_guide/troubleshooting-ceph-placement-groups |
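To tie together the two increment loops of the procedure above, the pg_num increase can be scripted. The following is a minimal sketch only: the pool name data, the target of 512, and the 128-PG step are assumptions to adjust for your cluster, the scrub and backfill settings from the procedure should already be applied, and you should keep watching ceph -s while it runs.
# Minimal sketch of the incremental pg_num increase; pool name, target, and step are assumptions.
POOL=data
TARGET=512
STEP=128
current=$(ceph osd pool get "$POOL" pg_num | awk '{print $2}')
while [ "$current" -lt "$TARGET" ]; do
    next=$(( current + STEP ))
    [ "$next" -gt "$TARGET" ] && next=$TARGET
    ceph osd pool set "$POOL" pg_num "$next"
    # Crude wait for the new PGs to settle; continue to monitor 'ceph -s' as well.
    while ceph pg stat | grep -Eq 'creating|peering|backfill|recover'; do
        sleep 30
    done
    current=$next
done
# Repeat the same loop for pgp_num once all PGs are active+clean, as described in the procedure.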
Chapter 46. Performance tuning considerations with Business Central | Chapter 46. Performance tuning considerations with Business Central The following key concepts or suggested practices can help you optimize Business Central configuration and Red Hat Process Automation Manager performance. These concepts are summarized in this section as a convenience and are explained in more detail in the cross-referenced documentation, where applicable. This section will expand or change as needed with new releases of Red Hat Process Automation Manager. Ensure that development mode is enabled during development You can set KIE Server or specific projects in Business Central to use production mode or development mode. By default, KIE Server and all new projects in Business Central are in development mode. This mode provides features that facilitate your development experience, such as flexible project deployment policies, and features that optimize KIE Server performance during development, such as disabled duplicate GAV detection. Use development mode until your Red Hat Process Automation Manager environment is established and completely ready for production mode. For more information about configuring the environment mode or duplicate GAV detection, see the following resources: Chapter 41, Configuring the environment mode in KIE Server and Business Central Packaging and deploying an Red Hat Process Automation Manager project Disable verification and validation of complex guided decision tables The decision table verification and validation feature of Business Central is enabled by default. This feature helps you validate your guided decision tables, but with complex guided decision tables, this feature can hinder decision engine performance. You can disable this feature by setting the org.kie.verification.disable-dtable-realtime-verification system property value to true . For more information about guided decision table validation, see Designing a decision service using guided decision tables . Disable automatic builds if you have many large projects In Business Central, when you navigate between projects in the Project Explorer side panel, the selected project is built automatically so that the Alerts window is updated to show any build errors for the project. If you have large projects or frequently switch between many projects that are under active development, this feature can hinder Business Central and decision engine performance. To disable automatic project builds, set the org.kie.build.disable-project-explorer system property to true . | null | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/managing_red_hat_process_automation_manager_and_kie_server_settings/performance-tuning-business-central-ref_configuring-central |
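Both switches mentioned above are ordinary Java system properties. As a hedged illustration only, on an EAP-based installation they could be appended to JAVA_OPTS in EAP_HOME/bin/standalone.conf; the exact file and variable depend on how Business Central is started, so check your own startup scripts.
# Illustrative only: disable real-time decision table verification and automatic project builds.
JAVA_OPTS="$JAVA_OPTS -Dorg.kie.verification.disable-dtable-realtime-verification=true"
JAVA_OPTS="$JAVA_OPTS -Dorg.kie.build.disable-project-explorer=true"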
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Jira ticket: Log in to the Jira . Click Create in the top navigation bar Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Select Documentation in the Components field. Click Create at the bottom of the dialogue. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/deploying_openshift_data_foundation_using_ibm_power/providing-feedback-on-red-hat-documentation_ibm-power |
27.2. libStorageMgmt Terminology | 27.2. libStorageMgmt Terminology Different array vendors and storage standards use different terminology to refer to similar functionality. This library uses the following terminology. Storage array Any storage system that provides block access (FC, FCoE, iSCSI) or file access through Network Attached Storage (NAS). Volume Storage Area Network (SAN) Storage Arrays can expose a volume to the Host Bus Adapter (HBA) over different transports, such as FC, iSCSI, or FCoE. The host OS treats it as a block device. One volume can be exposed as many disks if multipathing is enabled. This is also known as the Logical Unit Number (LUN), StorageVolume with SNIA terminology, or virtual disk. Pool A group of storage spaces. File systems or volumes can be created from a pool. Pools can be created from disks, volumes, and other pools. A pool may also hold RAID settings or thin provisioning settings. This is also known as a StoragePool with SNIA Terminology. Snapshot A point in time, read only, space efficient copy of data. This is also known as a read only snapshot. Clone A point in time, read writeable, space efficient copy of data. This is also known as a read writeable snapshot. Copy A full bitwise copy of the data. It occupies the full space. Mirror A continuously updated copy (synchronous and asynchronous). Access group Collections of iSCSI, FC, and FCoE initiators which are granted access to one or more storage volumes. This ensures that the storage volumes are accessible only by the specified initiators. This is also known as an initiator group. Access Grant Exposing a volume to a specified access group or initiator. The libStorageMgmt library currently does not support LUN mapping with the ability to choose a specific logical unit number. The libStorageMgmt library allows the storage array to select the available LUN for assignment. If configuring a boot from SAN or masking more than 256 volumes, be sure to read the OS, Storage Array, or HBA documents. Access grant is also known as LUN Masking. System Represents a storage array or a direct attached storage RAID. File system A Network Attached Storage (NAS) storage array can expose a file system to a host OS through an IP network, using either NFS or CIFS protocol. The host OS treats it as a mount point or a folder containing files depending on the client operating system. Disk The physical disk holding the data. This is normally used when creating a pool with RAID settings. This is also known as a DiskDrive using SNIA Terminology. Initiator In Fibre Channel (FC) or Fibre Channel over Ethernet (FCoE), the initiator is the World Wide Port Name (WWPN) or World Wide Node Name (WWNN). In iSCSI, the initiator is the iSCSI Qualified Name (IQN). In NFS or CIFS, the initiator is the host name or the IP address of the host. Child dependency Some arrays have an implicit relationship between the origin (parent volume or file system) and the child (such as a snapshot or a clone). For example, it is impossible to delete the parent if it has one or more dependent children. The API provides methods to determine if any such relationship exists and a method to remove the dependency by replicating the required blocks. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/ch-libstoragemgmt-terminology
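To see how these terms map onto actual objects, the lsmcli client can list them. The sketch below assumes the simulator plug-in (sim://) is installed and that the option spelling matches your version; verify against lsmcli --help, because these invocations are an assumption rather than taken from the section above.
# Assumption: libstoragemgmt and its simulator plug-in are installed; sim:// is a test-only URI.
export LSMCLI_URI=sim://
lsmcli list --type SYSTEMS     # storage arrays (System)
lsmcli list --type POOLS       # Pools
lsmcli list --type VOLUMES     # Volumes (LUNs)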
5.37. crash | 5.37.1. RHBA-2012:0822 - crash bug fix and enhancement update Updated crash packages that fix several bugs and add multiple enhancements are now available for Red Hat Enterprise Linux 6. The crash package provides a self-contained tool that can be used to investigate live systems, and kernel core dumps created from the netdump, diskdump, kdump, and Xen/KVM "virsh dump" facilities from Red Hat Enterprise Linux. The crash package has been upgraded to upstream version 6.0.4, which provides a number of bug fixes and enhancements over the previous version. (BZ# 767257 ) Bug Fixes BZ# 754291 If the kernel was configured with the Completely Fair Scheduler (CFS) Group Scheduling feature enabled (CONFIG_FAIR_GROUP_SCHED=y), the "runq" command of the crash utility did not display all tasks in CPU run queues. This update modifies the crash utility so that all tasks in run queues are now displayed as expected. Also, the "-d" option has been added to the "runq" command, which provides the same debugging information as the /proc/sched_debug file. BZ# 768189 The "bt" command previously did not handle recursive non-maskable interrupts (NMIs) correctly on the Intel 64 and AMD64 architectures. As a consequence, the "bt" command could, under certain circumstances, display a task backtrace in an infinite loop. With this update, the crash utility has been modified to recognize a recursion in the NMI handler and prevent the infinite displaying of a backtrace. BZ# 782837 Under certain circumstances, the number of the "elf_prstatus" entries in the header of the compressed kdump core file could differ from the number of CPUs running when the system crashed. If such a core file was analyzed by the crash utility, crash terminated unexpectedly with a segmentation fault while displaying task backtraces. This update modifies the code so that the "bt" command now displays a backtrace as expected in this scenario. BZ# 797229 Recent changes in the code caused the crash utility to incorrectly recognize compressed kdump dump files for the 64-bit PowerPC architecture as dump files for the 32-bit PowerPC architecture. This caused the crash utility to fail during initialization. This update fixes the problem and the crash utility now recognizes and analyzes the compressed kdump dump files for the 32-bit and 64-bit PowerPC architectures as expected. BZ# 817247 The crash utility did not correctly handle situations when a user page was either swapped out or was not mapped on the IBM System z architecture. As a consequence, the "vm -p" command failed and either a read error occurred or an offset value of a swap device was set incorrectly. With this update, crash displays the correct offset value of the swap device or correctly indicates that the user page is not mapped. BZ# 817248 The crash utility did not correctly handle situations when the "bt -t" and "bt -T" commands were run on an active task on a live system on the IBM System z architecture. Consequently, the commands failed with the "bt: invalid/stale stack pointer for this task: 0" error message. This update modifies the source code so that the "bt -t" and "bt -T" commands execute as expected. Enhancements BZ# 736884 With this update, crash now supports the "sadump" dump file format created by the Fujitsu Stand Alone Dump facility. BZ# 738865 The crash utility has been modified to fully support the "ELF kdump" and "compressed kdump" dump file formats for IBM System z.
BZ# 739096 The makedumpfile facility can be used to filter out specific kernel data when creating a dump file, which can cause the crash utility to behave unpredictably. With this update, the crash utility now displays an early warning message if any part of the kernel has been erased or filtered out by makedumpfile. All users of crash are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/crash |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/installing_and_upgrading_private_automation_hub/making-open-source-more-inclusive |
Extension APIs | Extension APIs OpenShift Container Platform 4.17 Reference guide for extension APIs Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/extension_apis/index |
Chapter 1. Advanced Red Hat Quay configuration | Chapter 1. Advanced Red Hat Quay configuration You can configure your Red Hat Quay after initial deployment using one of the following interfaces: The Red Hat Quay Config Tool. With this tool, a web-based interface for configuring the Red Hat Quay cluster is provided when running the Quay container in config mode. This method is recommended for configuring the Red Hat Quay service. Editing the config.yaml . The config.yaml file contains most configuration information for the Red Hat Quay cluster. Editing the config.yaml file directly is possible, but it is only recommended for advanced tuning and performance features that are not available through the Config Tool. Red Hat Quay API. Some Red Hat Quay features can be configured through the API. This content in this section describes how to use each of the aforementioned interfaces and how to configure your deployment with advanced features. 1.1. Using Red Hat Quay Config Tool to modify Red Hat Quay The Red Hat Quay Config Tool is made available by running a Quay container in config mode alongside the regular Red Hat Quay service. Use the following sections to run the Config Tool from the Red Hat Quay Operator, or to run the Config Tool on host systems from the command line interface (CLI). 1.1.1. Running the Config Tool from the Red Hat Quay Operator When running the Red Hat Quay Operator on OpenShift Container Platform, the Config Tool is readily available to use. Use the following procedure to access the Red Hat Quay Config Tool. Prerequisites You have deployed the Red Hat Quay Operator on OpenShift Container Platform. Procedure. On the OpenShift console, select the Red Hat Quay project, for example, quay-enterprise . In the navigation pane, select Networking Routes . You should see routes to both the Red Hat Quay application and Config Tool, as shown in the following image: Select the route to the Config Tool, for example, example-quayecosystem-quay-config . The Config Tool UI should open in your browser. Select Modify configuration for this cluster to bring up the Config Tool setup, for example: Make the desired changes, and then select Save Configuration Changes . Make any corrections needed by clicking Continue Editing , or, select to continue. When prompted, select Download Configuration . This will download a tarball of your new config.yaml , as well as any certificates and keys used with your Red Hat Quay setup. The config.yaml can be used to make advanced changes to your configuration or use as a future reference. Select Go to deployment rollout Populate the configuration to deployments . Wait for the Red Hat Quay pods to restart for the changes to take effect. 1.1.2. Running the Config Tool from the command line If you are running Red Hat Quay from a host system, you can use the following procedure to make changes to your configuration after the initial deployment. Prerequisites You have installed either podman or docker . Start Red Hat Quay in configuration mode. On the first Quay node, enter the following command: Note To modify an existing config bundle, you can mount your configuration directory into the Quay container. When the Red Hat Quay configuration tool starts, open your browser and navigate to the URL and port used in your configuration file, for example, quay-server.example.com:8080 . Enter your username and password. Modify your Red Hat Quay cluster as desired. 1.1.3. 
Deploying the config tool using TLS certificates You can deploy the config tool with secured TLS certificates by passing environment variables at runtime. This ensures that sensitive data like credentials for the database and storage backend are protected. The public and private keys must contain valid Subject Alternative Names (SANs) for the route that you deploy the config tool on. The paths can be specified using CONFIG_TOOL_PRIVATE_KEY and CONFIG_TOOL_PUBLIC_KEY . If you are running your deployment from a container, the CONFIG_TOOL_PRIVATE_KEY and CONFIG_TOOL_PUBLIC_KEY values are the locations of the certificates inside of the container. For example: USD podman run --rm -it --name quay_config -p 7070:8080 \ -v USD{PRIVATE_KEY_PATH}:/tls/localhost.key \ -v USD{PUBLIC_KEY_PATH}:/tls/localhost.crt \ -e CONFIG_TOOL_PRIVATE_KEY=/tls/localhost.key \ -e CONFIG_TOOL_PUBLIC_KEY=/tls/localhost.crt \ -e DEBUGLOG=true \ -ti config-app:dev 1.2. Using the API to modify Red Hat Quay See the Red Hat Quay API Guide for information on how to access Red Hat Quay API. 1.3. Editing the config.yaml file to modify Red Hat Quay Some advanced configuration features that are not available through the Config Tool can be implemented by editing the config.yaml file directly. Available settings are described in the Schema for Red Hat Quay configuration . The following examples are settings you can change directly in the config.yaml file. 1.3.1. Add name and company to Red Hat Quay sign-in By setting the following field, users are prompted for their name and company when they first sign in. This is an optional field, but can provide you with extra data about your Red Hat Quay users. --- FEATURE_USER_METADATA: true --- 1.3.2. Disable TLS Protocols You can change the SSL_PROTOCOLS setting to remove SSL protocols that you do not want to support in your Red Hat Quay instance. For example, to remove TLS v1 support from the default SSL_PROTOCOLS:['TLSv1','TLSv1.1','TLSv1.2'] , change it to the following: --- SSL_PROTOCOLS : ['TLSv1.1','TLSv1.2'] --- 1.3.3. Rate limit API calls Adding the FEATURE_RATE_LIMITS parameter to the config.yaml file causes nginx to limit certain API calls to 30-per-second. If FEATURE_RATE_LIMITS is not set, API calls are limited to 300-per-second, effectively making them unlimited. Rate limiting is important when you must ensure that the available resources are not overwhelmed with traffic. Some namespaces might require unlimited access, for example, if they are important to CI/CD and take priority. In that scenario, those namespaces might be placed in a list in the config.yaml file using the NON_RATE_LIMITED_NAMESPACES setting. 1.3.4. Adjust database connection pooling Red Hat Quay is composed of many different processes which all run within the same container. Many of these processes interact with the database. With the DB_CONNECTION_POOLING parameter, each process that interacts with the database will contain a connection pool. These per-process connection pools are configured to maintain a maximum of 20 connections. When under heavy load, it is possible to fill the connection pool for every process within a Red Hat Quay container. Under certain deployments and loads, this might require analysis to ensure that Red Hat Quay does not exceed the database's configured maximum connection count. Over time, the connection pools will release idle connections. To release all connections immediately, Red Hat Quay must be restarted.
Database connection pooling can be toggled by setting the DB_CONNECTION_POOLING parameter to true or false . For example: --- DB_CONNECTION_POOLING: true --- When DB_CONNECTION_POOLING is enabled, you can change the maximum size of the connection pool with the DB_CONNECTION_ARGS field in your config.yaml . For example: --- DB_CONNECTION_ARGS: max_connections: 10 --- 1.3.4.1. Database connection arguments You can customize your Red Hat Quay database connection settings within the config.yaml file. These are dependent on your deployment's database driver, for example, psycopg2 for Postgres and pymysql for MySQL. You can also pass in arguments used by Peewee's connection pooling mechanism. For example: --- DB_CONNECTION_ARGS: max_connections: n # Max Connection Pool size. (Connection Pooling only) timeout: n # Time to hold on to connections. (Connection Pooling only) stale_timeout: n # Number of seconds to block when the pool is full. (Connection Pooling only) --- 1.3.4.2. Database SSL configuration Some key-value pairs defined under the DB_CONNECTION_ARGS field are generic, while others are specific to the database. In particular, SSL configuration depends on the database that you are deploying. 1.3.4.2.1. PostgreSQL SSL connection arguments The following YAML shows a sample PostgreSQL SSL configuration: --- DB_CONNECTION_ARGS: sslmode: verify-ca sslrootcert: /path/to/cacert --- The sslmode parameter determines whether, or with what priority, a secure SSL TCP/IP connection will be negotiated with the server. There are six modes for the sslmode parameter: disable : Only try a non-SSL connection. allow : Try a non-SSL connection first. Upon failure, try an SSL connection. prefer : Default. Try an SSL connection first. Upon failure, try a non-SSL connection. require : Only try an SSL connection. If a root CA file is present, verify the connection in the same way as if verify-ca was specified. verify-ca : Only try an SSL connection, and verify that the server certificate is issued by a trusted certificate authority (CA). verify-full : Only try an SSL connection. Verify that the server certificate is issued by a trusted CA, and that the requested server host name matches that in the certificate. For more information about the valid arguments for PostgreSQL, see Database Connection Control Functions . 1.3.4.2.2. MySQL SSL connection arguments The following YAML shows a sample MySQL SSL configuration: --- DB_CONNECTION_ARGS: ssl: ca: /path/to/cacert --- For more information about the valid connection arguments for MySQL, see Connecting to the Server Using URI-Like Strings or Key-Value Pairs . 1.3.4.3. HTTP connection counts You can specify the quantity of simultaneous HTTP connections using environment variables. The environment variables can be specified as a whole, or for a specific component. The default for each is 50 parallel connections per process. See the following YAML for example environment variables: --- WORKER_CONNECTION_COUNT_REGISTRY=n WORKER_CONNECTION_COUNT_WEB=n WORKER_CONNECTION_COUNT_SECSCAN=n WORKER_CONNECTION_COUNT=n --- Note Specifying a count for a specific component will override any value set in the WORKER_CONNECTION_COUNT configuration field. 1.3.4.4. Dynamic process counts To estimate the quantity of dynamically sized processes, the following calculation is used by default. Note Red Hat Quay queries the available CPU count from the entire machine. Any limits applied using Kubernetes or other non-virtualized mechanisms will not affect this behavior.
Red Hat Quay makes its calculation based on the total number of processors on the Node. The default values listed are simply targets, but shall not exceed the maximum or be lower than the minimum. Each of the following process quantities can be overridden using the environment variable specified below: registry - Provides HTTP endpoints to handle registry action minimum: 8 maximum: 64 default: USDCPU_COUNT x 4 environment variable: WORKER_COUNT_REGISTRY web - Provides HTTP endpoints for the web-based interface minimum: 2 maximum: 32 default: USDCPU_COUNT x 2 environment_variable: WORKER_COUNT_WEB secscan - Interacts with Clair minimum: 2 maximum: 4 default: USDCPU_COUNT x 2 environment variable: WORKER_COUNT_SECSCAN 1.3.4.5. Environment variables Red Hat Quay allows overriding default behavior using environment variables. The following table lists and describes each variable and the values they can expect. Table 1.1. Worker count environment variables Variable Description Values WORKER_COUNT_REGISTRY Specifies the number of processes to handle registry requests within the Quay container. Integer between 8 and 64 WORKER_COUNT_WEB Specifies the number of processes to handle UI/Web requests within the container. Integer between 2 and 32 WORKER_COUNT_SECSCAN Specifies the number of processes to handle Security Scanning (for example, Clair) integration within the container. Integer. Because the Operator specifies 2 vCPUs for resource requests and limits, setting this value between 2 and 4 is safe. However, users can run more, for example, 16 , if warranted. DB_CONNECTION_POOLING Toggle database connection pooling. true or false 1.3.4.6. Turning off connection pooling Red Hat Quay deployments with a large amount of user activity can regularly hit the 2k maximum database connection limit. In these cases, connection pooling, which is enabled by default for Red Hat Quay, can cause database connection count to rise exponentially and require you to turn off connection pooling. If turning off connection pooling is not enough to prevent hitting the 2k database connection limit, you need to take additional steps to deal with the problem. If this happens, you might need to increase the maximum database connections to better suit your workload. | [
"podman run --rm -it --name quay_config -p 8080:8080 -v path/to/config-bundle:/conf/stack registry.redhat.io/quay/quay-rhel8:v3.9.10 config <my_secret_password>",
"podman run --rm -it --name quay_config -p 7070:8080 -v USD{PRIVATE_KEY_PATH}:/tls/localhost.key -v USD{PUBLIC_KEY_PATH}:/tls/localhost.crt -e CONFIG_TOOL_PRIVATE_KEY=/tls/localhost.key -e CONFIG_TOOL_PUBLIC_KEY=/tls/localhost.crt -e DEBUGLOG=true -ti config-app:dev",
"--- FEATURE_USER_METADATA: true ---",
"--- SSL_PROTOCOLS : ['TLSv1.1','TLSv1.2'] ---",
"--- DB_CONNECTION_POOLING: true ---",
"--- DB_CONNECTION_ARGS: max_connections: 10 ---",
"--- DB_CONNECTION_ARGS: max_connections: n # Max Connection Pool size. (Connection Pooling only) timeout: n # Time to hold on to connections. (Connection Pooling only) stale_timeout: n # Number of seconds to block when the pool is full. (Connection Pooling only) ---",
"--- DB_CONNECTION_ARGS: sslmode: verify-ca sslrootcert: /path/to/cacert ---",
"--- DB_CONNECTION_ARGS: ssl: ca: /path/to/cacert ---",
"--- WORKER_CONNECTION_COUNT_REGISTRY=n WORKER_CONNECTION_COUNT_WEB=n WORKER_CONNECTION_COUNT_SECSCAN=n WORKER_CONNECTION_COUNT=n ---"
]
| https://docs.redhat.com/en/documentation/red_hat_quay/3.9/html/manage_red_hat_quay/advanced-quay-configuration |
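Because the worker-count and pooling settings described above are plain environment variables, a standalone (podman-based) deployment can set them when the container starts. The following is a sketch only: the ports, the host configuration path, and the worker values are assumptions, and the WORKER_COUNT_* values must stay within the documented ranges.
# Illustrative values only; adjust paths and counts for your deployment.
podman run -d --name quay \
  -p 80:8080 -p 443:8443 \
  -e WORKER_COUNT_REGISTRY=16 \
  -e WORKER_COUNT_WEB=4 \
  -e WORKER_COUNT_SECSCAN=2 \
  -e DB_CONNECTION_POOLING=false \
  -v /opt/quay/config:/conf/stack:Z \
  registry.redhat.io/quay/quay-rhel8:v3.9.10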
Chapter 12. Editing applications | Chapter 12. Editing applications You can edit the configuration and the source code of the application you create using the Topology view. 12.1. Prerequisites You have the appropriate roles and permissions in a project to create and modify applications in OpenShift Container Platform. You have created and deployed an application on OpenShift Container Platform using the Developer perspective . You have logged in to the web console and have switched to the Developer perspective . 12.2. Editing the source code of an application using the Developer perspective You can use the Topology view in the Developer perspective to edit the source code of your application. Procedure In the Topology view, click the Edit Source code icon, displayed at the bottom-right of the deployed application, to access your source code and modify it. Note This feature is available only when you create applications using the From Git , From Catalog , and the From Dockerfile options. If the Eclipse Che Operator is installed in your cluster, a Che workspace ( ) is created and you are directed to the workspace to edit your source code. If it is not installed, you will be directed to the Git repository ( ) your source code is hosted in. 12.3. Editing the application configuration using the Developer perspective You can use the Topology view in the Developer perspective to edit the configuration of your application. Note Currently, only configurations of applications created by using the From Git , Container Image , From Catalog , or From Dockerfile options in the Add workflow of the Developer perspective can be edited. Configurations of applications created by using the CLI or the YAML option from the Add workflow cannot be edited. Prerequisites Ensure that you have created an application using the From Git , Container Image , From Catalog , or From Dockerfile options in the Add workflow. Procedure After you have created an application and it is displayed in the Topology view, right-click the application to see the edit options available. Figure 12.1. Edit application Click Edit application-name to see the Add workflow you used to create the application. The form is pre-populated with the values you had added while creating the application. Edit the necessary values for the application. Note You cannot edit the Name field in the General section, the CI/CD pipelines, or the Create a route to the application field in the Advanced Options section. Click Save to restart the build and deploy a new image. Figure 12.2. Edit and redeploy application | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/building_applications/odc-editing-applications |
Chapter 12. Integrating with Google Cloud Security Command Center | Chapter 12. Integrating with Google Cloud Security Command Center If you are using Google Cloud Security Command Center (Cloud SCC), you can forward alerts from Red Hat Advanced Cluster Security for Kubernetes to Cloud SCC. This guide explains how to integrate Red Hat Advanced Cluster Security for Kubernetes with Cloud SCC. The following steps represent a high-level workflow for integrating Red Hat Advanced Cluster Security for Kubernetes with Cloud SCC. Register a new security source with Google Cloud. Provide the source ID and service account key to Red Hat Advanced Cluster Security for Kubernetes. Identify the policies you want to send notifications for, and update the notification settings for those policies. 12.1. Configuring Google Cloud SCC Start by adding Red Hat Advanced Cluster Security for Kubernetes as a trusted Cloud SCC source. Procedure Follow the Adding vulnerability and threat sources to Cloud Security Command Center guide and add Red Hat Advanced Cluster Security for Kubernetes as a trusted Cloud SCC source. Make a note of the Source ID that Google Cloud creates for your Red Hat Advanced Cluster Security for Kubernetes integration. If you do not see a source ID after registering, you can find it on the Cloud SCC Security Sources page . Create a key for the service account you created, or the existing account you used, in the step. See Google Cloud's guide to creating and managing service account keys for details. 12.2. Configuring Red Hat Advanced Cluster Security for Kubernetes for integrating with Google Cloud SCC You can create a new Google Cloud SCC integration in Red Hat Advanced Cluster Security for Kubernetes by using the source ID and a Google service account. Prerequisites A service account with the Security Center Findings Editor IAM role on the organization level. See Access control with IAM for more information. Either a workload identity or a Service account key (JSON) for the service account. See Creating a service account and Creating service account keys for more information. Procedure In the RHACS portal, go to Platform Configuration Integrations . Scroll down to the Notifier Integrations section and select Google Cloud SCC . Click New Integration ( add icon). Enter a name for Integration Name . Enter the Cloud SCC Source ID . When using a workload identity, check Use workload identity . Otherwise, enter the contents of your service account key file into the Service account key (JSON) field. Select Create to generate the configuration. 12.3. Configuring policy notifications Enable alert notifications for system policies. Procedure In the RHACS portal, go to Platform Configuration Policy Management . Select one or more policies for which you want to send alerts. Under Bulk actions , select Enable notification . In the Enable notification window, select the Google Cloud SCC notifier. Note If you have not configured any other integrations, the system displays a message that no notifiers are configured. Click Enable . Note Red Hat Advanced Cluster Security for Kubernetes sends notifications on an opt-in basis. To receive notifications, you must first assign a notifier to the policy. Notifications are only sent once for a given alert. If you have assigned a notifier to a policy, you will not receive a notification unless a violation generates a new alert. 
Red Hat Advanced Cluster Security for Kubernetes creates a new alert for the following scenarios: A policy violation occurs for the first time in a deployment. A runtime-phase policy violation occurs in a deployment after you resolved the runtime alert for a policy in that deployment. | null | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/integrating/integrate-with-google-cloud-scc |
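The service account key required by the Cloud SCC configuration above can also be generated from the gcloud CLI rather than the console; in this sketch the key file name, service account name, and project are placeholders.
# Placeholders: replace the key file, service account, and project with your own values.
gcloud iam service-accounts keys create rhacs-scc-key.json \
  --iam-account=rhacs-scc@my-project.iam.gserviceaccount.com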
Chapter 8. Important links | Chapter 8. Important links Red Hat AMQ 7 Supported Configurations Red Hat AMQ 7 Component Details Revised on 2022-02-01 16:35:47 UTC | null | https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/release_notes_for_amq_streams_1.6_on_rhel/important-links-str |
function::returnstr | function::returnstr Name function::returnstr - Formats the return value as a string Synopsis Arguments format Variable to determine return type base value Description This function is used by the nd_syscall tapset, and returns a string. Set format equal to 1 for a decimal, 2 for hex, 3 for octal. Note that this function should only be used in dwarfless probes (i.e. 'kprobe.function( " foo " )'). Other probes should use return_str . | [
"returnstr:string(format:long)"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-returnstr |
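As a hedged illustration of the dwarfless usage described above (an assumption of typical use, not taken from the tapset documentation itself), returnstr can be called from a kprobe return probe; the probed kernel function is a placeholder, and format 1 requests a decimal string.
# Prints the return value of each vfs_read call as a decimal string; requires systemtap on the host.
stap -e 'probe kprobe.function("vfs_read").return { printf("vfs_read -> %s\n", returnstr(1)) }'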
Chapter 2. Managing your cluster resources | Chapter 2. Managing your cluster resources You can apply global configuration options in OpenShift Container Platform. Operators apply these configuration settings across the cluster. 2.1. Interacting with your cluster resources You can interact with cluster resources by using the OpenShift CLI ( oc ) tool in OpenShift Container Platform. The cluster resources that you see after running the oc api-resources command can be edited. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have access to the web console or you have installed the oc CLI tool. Procedure To see which configuration Operators have been applied, run the following command: USD oc api-resources -o name | grep config.openshift.io To see what cluster resources you can configure, run the following command: USD oc explain <resource_name>.config.openshift.io To see the configuration of custom resource definition (CRD) objects in the cluster, run the following command: USD oc get <resource_name>.config -o yaml To edit the cluster resource configuration, run the following command: USD oc edit <resource_name>.config -o yaml | [
"oc api-resources -o name | grep config.openshift.io",
"oc explain <resource_name>.config.openshift.io",
"oc get <resource_name>.config -o yaml",
"oc edit <resource_name>.config -o yaml"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/support/managing-cluster-resources |
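As one concrete instance of the placeholder commands above, the cluster-wide proxy configuration (proxy.config.openshift.io, object name cluster) can be inspected and edited; any other resource returned by the first command can be substituted.
# Inspect and edit the cluster-wide proxy configuration as an example resource.
oc explain proxy.config.openshift.io
oc get proxy.config -o yaml
oc edit proxy.config cluster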
27.4. Sample Kickstart Configurations | 27.4. Sample Kickstart Configurations 27.4.1. Advanced Partitioning Example The following is an integrated example showing the clearpart , zerombr , part , raid , volgroup , and logvol Kickstart options in action: Example 27.10. Advanced Partitioning Example This advanced example implements LVM over RAID, as well as the ability to resize various directories for future growth. First, the clearpart command is used on drives hda and hdc to wipe them. The zerombr command initializes unused partition tables. Then, the two drives are partitioned to prepare them for RAID configuration. Each drive is divided into five partitions, and each drive is partitioned into an identical layout. The part uses these pairs of physical partitions to create a software RAID device with RAID1 level (mirroring). The first four RAID devices are used for / (root), /safe , swap and /usr . The fifth, largest pair of partitions is named pv.01 and will be used in the following part as a physical volume for LVM. Finally, the last set of commands first creates a volume group named sysvg on the pv.01 physical volume. Then, three logical volumes ( /var , /var/freespace and /usr/local ) are created and added to the sysvg volume group. The /var and /var/freespace volumes have a set size of 8 GB, and the /usr/local volume uses the --grow option to fill all remaining available space. 27.4.2. User Input Example The following is an example showing how to prompt the user for input, and then read that input and save it as a variable, using bash: Example 27.11. User Input Example Due to the way Kickstart operates, the script must switch to a new virtual terminal before reading input from the user. This is accomplished by the exec < /dev/tty6 > /dev/tty6 2> /dev/tty6 and chvt 6 commands. The read USERINPUT reads input from the user until enter is pressed, and stores it in the variable USERINPUT . The echo -n "You entered:" "USDUSERINPUT" command displays the text You entered: followed by the user's input. Finally, the chvt 1 and exec < /dev/tty1 > /dev/tty1 2> /dev/tty1 commands switch back to the original terminal and allow Kickstart to continue installation. 27.4.3. Example Kickstart file for installing and starting the RNG daemon The following is an example Kickstart file which demonstrates how to install and enable a service, in this case the Random Number Generator (RNG) daemon, which supplies entropy to the system kernel: Example 27.12. Example Kickstart file for installing and starting the RNG daemon The services --enabled=rngd command instructs the installed system to start the RNG daemon each time the system starts. The rng-tools package, which contains the RNG daemon, is then designated for installation. | [
"clearpart --drives=hda,hdc zerombr Raid 1 IDE config part raid.11 --size 1000 --asprimary --ondrive=hda part raid.12 --size 1000 --asprimary --ondrive=hda part raid.13 --size 2000 --asprimary --ondrive=hda part raid.14 --size 8000 --ondrive=hda part raid.15 --size 16384 --grow --ondrive=hda part raid.21 --size 1000 --asprimary --ondrive=hdc part raid.22 --size 1000 --asprimary --ondrive=hdc part raid.23 --size 2000 --asprimary --ondrive=hdc part raid.24 --size 8000 --ondrive=hdc part raid.25 --size 16384 --grow --ondrive=hdc You can add --spares=x raid / --fstype xfs --device root --level=RAID1 raid.11 raid.21 raid /safe --fstype xfs --device safe --level=RAID1 raid.12 raid.22 raid swap --fstype swap --device swap --level=RAID1 raid.13 raid.23 raid /usr --fstype xfs --device usr --level=RAID1 raid.14 raid.24 raid pv.01 --fstype xfs --device pv.01 --level=RAID1 raid.15 raid.25 LVM configuration so that we can resize /var and /usr/local later volgroup sysvg pv.01 logvol /var --vgname=sysvg --size=8000 --name=var logvol /var/freespace --vgname=sysvg --size=8000 --name=freespacetouse logvol /usr/local --vgname=sysvg --size=1 --grow --name=usrlocal",
"%pre exec < /dev/tty6 > /dev/tty6 2> /dev/tty6 chvt 6 IFS=USD'\\n' echo -n \"Enter input: \" read USERINPUT echo echo -n \"You entered:\" \"USDUSERINPUT\" echo chvt 1 exec < /dev/tty1 > /dev/tty1 2> /dev/tty1 %end",
"services --enabled=rngd %packages rng-tools %end"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/sect-kickstart-examples |
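A common extension of the user input example above is to turn the captured value into Kickstart commands by writing a fragment in %pre and pulling it in with %include; in this sketch the use of the value as the host name is only an illustration.
%pre
exec < /dev/tty6 > /dev/tty6 2> /dev/tty6
chvt 6
echo -n "Enter a hostname: "
read USERINPUT
# Write a fragment that the command section includes below.
echo "network --hostname=${USERINPUT}" > /tmp/network.ks
chvt 1
exec < /dev/tty1 > /dev/tty1 2> /dev/tty1
%end

%include /tmp/network.ks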
1.2. Direct Integration | 1.2. Direct Integration You need two components to connect a Linux system to Active Directory (AD). One component interacts with the central identity and authentication source, which is AD in this case. The other component detects available domains and configures the first component to work with the right identity source. There are different options that can be used to retrieve information and perform authentication against AD. Among them are: Native LDAP and Kerberos PAM and NSS modules Among these modules are nss_ldap , pam_ldap , and pam_krb5 . As PAM and NSS modules are loaded into every application process, they directly affect the execution environment. With no caching, offline support, or sufficient protection of access credentials, use of the basic LDAP and Kerberos modules for NSS and PAM is discouraged due to their limited functionality. Samba Winbind Samba Winbind had been a traditional way of connecting Linux systems to AD. Winbind emulates a Windows client on a Linux system and is able to communicate to AD servers. Note that: The Winbind service must be running if you configured Samba as a domain member. Direct integration with Winbind in a multi-forest AD setup requires bidirectional trusts. Remote forests must trust the local forest to ensure that the idmap_ad plug-in handles remote forest users correctly. System Security Services Daemon (SSSD) The primary function of SSSD is to access a remote identity and authentication resource through a common framework that provides caching and offline support to the system. SSSD is highly configurable; it provides PAM and NSS integration and a database to store local users, as well as core and extended user data retrieved from a central server. SSSD is the recommended component to connect a Linux system with an identity server of your choice, be it Active Directory, Identity Management (IdM) in Red Hat Enterprise Linux, or any generic LDAP or Kerberos server. Note that: Direct integration with SSSD works only within a single AD forest by default. Remote forests must trust the local forest to ensure that the idmap_ad plug-in handles remote forest users correctly. The main reason to transition from Winbind to SSSD is that SSSD can be used for both direct and indirect integration and allows to switch from one integration approach to another without significant migration costs. The most convenient way to configure SSSD or Winbind in order to directly integrate a Linux system with AD is to use the realmd service. It allows callers to configure network authentication and domain membership in a standard way. The realmd service automatically discovers information about accessible domains and realms and does not require advanced configuration to join a domain or realm. Direct integration is a simple way to introduce Linux systems to AD environment. However, as the share of Linux systems grows, the deployments usually see the need for a better centralized management of the identity-related policies such as host-based access control, sudo, or SELinux user mappings. At first, the configuration of these aspects of the Linux systems can be maintained in local configuration files. With a growing number of systems though, distribution and management of the configuration files is easier with a provisioning system such as Red Hat Satellite. This approach creates an overhead of changing the configuration files and then distributing them. 
When direct integration does not scale anymore, it is more beneficial to consider indirect integration described in the following section. 1.2.1. Supported Windows Platforms for direct integration You can directly integrate your Linux machine with Active Directory forests that use the following forest and domain functional levels: Forest functional level range: Windows Server 2008 - Windows Server 2016 [1] Domain functional level range: Windows Server 2008 - Windows Server 2016 [1] Direct integration has been tested on the following supported operating systems using the mentioned functional levels: Windows Server 2019 Windows Server 2016 Windows Server 2012 R2 [1] Windows Server 2019 does not introduce a new functional level. The highest functional level that Windows Server 2019 uses is Windows Server 2016. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/windows_integration_guide/summary-direct
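The realmd flow mentioned above usually amounts to two commands; the domain name below is a placeholder, and --client-software=sssd selects SSSD, the recommended component.
# Placeholder domain; 'realm discover' shows what realmd would configure before you join.
realm discover ad.example.com
realm join --client-software=sssd -U Administrator ad.example.com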
Chapter 29. Managing the Kerberos Domain | Chapter 29. Managing the Kerberos Domain This chapter describes managing the Kerberos Key Distribution Center (KDC) component of the Identity Management server. Important Do not use the kadmin or kadmin.local utilities to manage the Identity Management Kerberos policies. Use the native Identity Management command-line tools as described in this guide. If you attempt to manage the Identity Management policies using the mentioned Kerberos tools, some of the operations will not affect the Identity Management configuration stored in its Directory Server instance. 29.1. Managing Kerberos Ticket Policies Kerberos ticket policies in Identity Management set restrictions on ticket duration and renewal. Using the following procedures, you can configure Kerberos ticket policies for the Kerberos Key Distribution Center (KDC) running on your Identity Management server. 29.1.1. Determining the lifetime of a Kerberos Ticket When an Identity Management server determines the lifetime of a ticket to be granted after an Identity Management client has requested a Kerberos ticket on behalf of user_name , several parameters are taken into account. First, client-side evaluation takes place which calculates the value to be requested on the basis of the kinit command and the ticket_lifetime setting in the /etc/krb5.conf file. The value is then sent to the Identity Management server where server-side evaluation takes place. If the requested lifetime is lower than what the global settings allow, the requested lifetime is granted. Otherwise, the lifetime granted is the value which the global settings allow. The lifetime requested by the client on behalf of user_name is determined as follows: On the client side If you explicitly state a value for user_name in the kinit command itself by using the -l option, for example: that value, in this case 90000 seconds, is requested by the client on behalf of user_name . Else, if no lifetime value is passed in as an argument of the kinit user_name command, the value of the ticket_lifetime setting in the client's /etc/krb5.conf file is used by the client on behalf of user_name . If no value is specified in the /etc/krb5.conf file, the default IdM value for initial ticket requests is used, which is 1 day. On the server side Server-side, a two-stage evaluation takes place: The value requested by the client is compared to the --maxlife setting of the user_name -specific Kerberos ticket policies if these policies exist, and the lower value of the two is selected. If user_name -specific Kerberos ticket policies do not exist, the value sent by the client is compared to the --maxlife setting of the Global Kerberos ticket policy, and the lower value of the two is selected. For details on global and user-specific Kerberos ticket policies, see Section 29.1.2, "Global and User-specific Kerberos Ticket Policies" . The value selected in the step is compared to two other values: The value of the max_life setting in the /var/kerberos/krb5kdc/kdc.conf file The value set in the krbMaxTicketLife attribute of the LDAP entry with the distinguished name (DN): krbPrincipalName=krbtgt/ REALM_NAME @ REALM_NAME ,cn= REALM_NAME ,cn=kerberos, domain_name The lowest of the three values is ultimately selected for the lifetime of the Kerberos ticket granted to user_name . 29.1.2. Global and User-specific Kerberos Ticket Policies You can redefine the global Kerberos ticket policy and define additional policies specifically to individual users. 
Global Kerberos ticket policy The global policy applies to all tickets issued within the Identity Management Kerberos realm. User-specific Kerberos ticket policies User-specific policies apply only to the associated user account. For example, a user-specific Kerberos ticket policy can define a longer maximum ticket lifetime for the admin user. User-specific policies take precedence over the global policy. 29.1.3. Configuring the Global Kerberos Ticket Policy To configure the global Kerberos ticket policy, you can use: the Identity Management web UI: see the section called "Web UI: Configuring the Global Kerberos Ticket Policy" the command line: see the section called "Command Line: Configuring the Global Kerberos Ticket Policy" Table 29.1. Supported Kerberos Ticket Policy Attributes Attribute Explanation Example Max renew The period of time (in seconds) during which the user can renew the Kerberos ticket after its expiry. After the renew period, the user must log in using the kinit utility to get a new ticket. To renew the ticket, use the kinit -R command. Max renew = 604800 After the ticket expires, the user can renew it within the 7 days (604,800 seconds). Max life The lifetime of a Kerberos ticket (in seconds). The period during which the Kerberos ticket stays active. Max life = 86400 The ticket expires 24 hours (86,400 seconds) after it was issued. Web UI: Configuring the Global Kerberos Ticket Policy Select Policy Kerberos Ticket Policy . Define the required values: In the Max renew field, enter the maximum renewal period of Kerberos tickets. In the Max life field, enter the maximum lifetime of Kerberos tickets. Figure 29.1. Configuring the Global Kerberos Ticket Policy Click Save . Command Line: Configuring the Global Kerberos Ticket Policy To modify the global Kerberos ticket policy: Use the ipa krbtpolicy-mod command, and pass at least one of the following options: --maxrenew to define the maximum renewal period of Kerberos tickets --maxlife to define the maximum lifetime of Kerberos tickets For example, to change the maximum lifetime: To reset the global Kerberos ticket policy to the original default values: Use the ipa krbtpolicy-reset command. Optional. Use the ipa krbtpolicy-show command to verify the current settings. For details on ipa krbtpolicy-mod and ipa krbtpolicy-reset , pass the --help option with them. 29.1.4. Configuring User-specific Kerberos Ticket Policies To modify the Kerberos ticket policy for a particular user: Use the ipa krbtpolicy-mod user_name command, and pass at least one of the following options: --maxrenew to define the maximum renewal period of Kerberos tickets --maxlife to define the maximum lifetime of Kerberos tickets If you define only one of the attributes, Identity Management will apply the global Kerberos ticket policy value for the other attribute. For example, to change the maximum lifetime for the admin user: Optional. Use the ipa krbtpolicy-show user_name command to display the current values for the specified user. The new policy takes effect immediately on the Kerberos ticket that the user requests, such as when using the kinit utility. To reset a user-specific Kerberos ticket policy, use the ipa krbtpolicy-reset user_name command. The command clears the values defined specifically to the user, after which Identity Management applies the global policy values. For details on ipa krbtpolicy-mod and ipa krbtpolicy-reset , pass the --help option with them. | [
"kinit user_name -l 90000",
"ipa krbtpolicy-mod --maxlife= 80000 Max life: 80000 Max renew: 604800",
"ipa krbtpolicy-mod admin --maxlife= 160000 Max life: 80000 Max renew: 604800"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/kerberos |
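As an illustrative sketch that is not part of this guide, the client-side ticket_lifetime setting referred to above lives in the [libdefaults] section of /etc/krb5.conf; the realm and values are example placeholders:

    # /etc/krb5.conf (client side)
    [libdefaults]
        default_realm = EXAMPLE.COM
        ticket_lifetime = 24h
        renew_lifetime = 7d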
1.2. Overview | 1.2. Overview This document contains information about the known and resolved issues of Red Hat JBoss Data Grid version 6.6.0. Customers are requested to read this documentation prior to installing this version. Report a bug | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/6.6.1_release_notes/overview34 |
5.2. Global Settings | 5.2. Global Settings The global settings configure parameters that apply to all servers running HAProxy. A typical global section may look like the following: In the above configuration, the administrator has configured the service to log all entries to the local syslog server. By default, this could be /var/log/syslog or some user-designated location. The maxconn parameter specifies the maximum number of concurrent connections for the service. By default, the maximum is 2000. The user and group parameters specify the user name and group name to which the haproxy process belongs. Finally, the daemon parameter specifies that haproxy runs as a background process. | [
"global log 127.0.0.1 local2 maxconn 4000 user haproxy group haproxy daemon"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/load_balancer_administration/s1-haproxy-setup-global |
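As a hedged illustration of how these parameters are usually combined, a slightly fuller global section might also define a chroot directory, a PID file, and a stats socket; these extra keywords are standard HAProxy directives but are not part of the example in this section:

    global
        log          127.0.0.1 local2
        chroot       /var/lib/haproxy
        pidfile      /var/run/haproxy.pid
        maxconn      4000
        user         haproxy
        group        haproxy
        daemon
        stats socket /var/lib/haproxy/stats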
4. Related Documentation | 4. Related Documentation For more information about using Red Hat Enterprise Linux, refer to the following resources: Red Hat Enterprise Linux Installation Guide - Provides information regarding installation of Red Hat Enterprise Linux. Red Hat Enterprise Linux Introduction to System Administration - Provides introductory information for new Red Hat Enterprise Linux system administrators. Red Hat Enterprise Linux System Administration Guide - Provides more detailed information about configuring Red Hat Enterprise Linux to suit your particular needs as a user. Red Hat Enterprise Linux Reference Guide - Provides detailed information suited for more experienced users to reference when needed, as opposed to step-by-step instructions. Red Hat Enterprise Linux Security Guide - Details the planning and the tools involved in creating a secured computing environment for the data center, workplace, and home. For more information about Red Hat Cluster Suite for Red Hat Enterprise Linux, refer to the following resources: Red Hat Cluster Suite Overview - Provides a high level overview of the Red Hat Cluster Suite. Configuring and Managing a Red Hat Cluster - Provides information about installing, configuring and managing Red Hat Cluster components. Global File System: Configuration and Administration - Provides information about installing, configuring, and maintaining Red Hat GFS (Red Hat Global File System). LVM Administrator's Guide: Configuration and Administration - Provides a description of the Logical Volume Manager (LVM), including information on running LVM in a clustered environment. Using Device-Mapper Multipath - Provides information about using the Device-Mapper Multipath feature of Red Hat Enterprise Linux. Linux Virtual Server Administration - Provides information on configuring high-performance systems and services with the Linux Virtual Server (LVS). Red Hat Cluster Suite Release Notes - Provides information about the current release of Red Hat Cluster Suite. Red Hat Cluster Suite documentation and other Red Hat documents are available in HTML and PDF versions online at the following location: http://www.redhat.com/docs | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/global_network_block_device/related_documentation-gnbd |
1.3. Dependencies | 1.3. Dependencies The Ruby Software Development Kit has the following dependencies, which you must install manually if you are using gem : libxml2 for parsing and rendering XML libcurl for HTTP transfers C compiler Required header and library files Note You do not need to install the dependency files if you installed the RPM. Install the dependency files: Note If you are using Debian or Ubuntu, use apt-get : | [
"dnf install gcc libcurl-devel libxml2-devel ruby-devel",
"apt-get install gcc libxml2-dev libcurl-dev ruby-dev"
]
| https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/ruby_sdk_guide/dependencies |
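With the dependencies in place, the SDK itself is typically installed as a gem; the gem name ovirt-engine-sdk and the ovirtsdk4 require name are assumptions here and should be confirmed against the installation chapter of this guide:

    # Install the SDK gem (builds the native extension against libxml2 and libcurl)
    gem install ovirt-engine-sdk

    # Quick sanity check that the library loads
    ruby -e "require 'ovirtsdk4'; puts 'SDK loaded'"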
probe::sunrpc.clnt.bind_new_program | probe::sunrpc.clnt.bind_new_program Name probe::sunrpc.clnt.bind_new_program - Bind a new RPC program to an existing client Synopsis sunrpc.clnt.bind_new_program Values progname the name of new RPC program old_prog the number of old RPC program vers the version of new RPC program servername the server machine name old_vers the version of old RPC program old_progname the name of old RPC program prog the number of new RPC program | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-sunrpc-clnt-bind-new-program |
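A minimal SystemTap sketch that uses this probe point and the values documented above might look like the following; it simply prints each rebind as it happens and is illustrative only:

    # rpc_rebind.stp - report RPC clients binding to a new program
    probe sunrpc.clnt.bind_new_program {
        printf("rebind on %s: %s -> %s\n", servername, old_progname, progname)
    }

    # Run with: stap rpc_rebind.stp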
Chapter 3. Red Hat Quay infrastructure | Chapter 3. Red Hat Quay infrastructure Red Hat Quay runs on any physical or virtual infrastructure, either on premises or in a public cloud. Deployments range from simple to massively scaled, like the following: All-in-one setup on a developer notebook Highly available on virtual machines or on OpenShift Container Platform Geographically dispersed across multiple availability zones and regions 3.1. Running Red Hat Quay on standalone hosts You can automate the standalone deployment process by using Ansible or another automation suite. All standalone hosts require a valid Red Hat Enterprise Linux (RHEL) subscription. Proof of Concept deployment Red Hat Quay runs on a machine with image storage, containerized database, Redis, and optionally, Clair security scanning. Highly available setups Red Hat Quay and Clair run in containers across multiple hosts. You can use systemd units to ensure restart on failure or reboot. High availability setups on standalone hosts require customer-provided load balancers, either low-level TCP load balancers or application load balancers, capable of terminating TLS. 3.2. Running Red Hat Quay on OpenShift The Red Hat Quay Operator for OpenShift Container Platform provides the following features: Automated deployment and management of Red Hat Quay with customization options Management of Red Hat Quay and all of its dependencies Automated scaling and updates Integration with existing OpenShift Container Platform processes like GitOps, monitoring, alerting, logging Provision of object storage with limited availability, backed by the multi-cloud object gateway (NooBaa), as part of the Red Hat OpenShift Data Foundation (ODF) Operator. This service does not require an additional subscription. Scaled-out, high availability object storage provided by the ODF Operator. This service requires an additional subscription. Red Hat Quay can run on OpenShift Container Platform infrastructure nodes. As a result, no further subscriptions are required. Running Red Hat Quay on OpenShift Container Platform has the following benefits: Zero to Hero: Simplified deployment of Red Hat Quay and associated components means that you can start using the product immediately Scalability: Use cluster compute capacity to manage demand through automated scaling, based on actual load Simplified Networking: Automated provisioning of load balancers and traffic ingress secured through HTTPS using OpenShift Container Platform TLS certificates and Routes Declarative configuration management: Configurations stored in CustomResource objects for GitOps-friendly lifecycle management Repeatability: Consistency regardless of the number of replicas of Red Hat Quay and Clair OpenShift integration: Additional services to use OpenShift Container Platform Monitoring and Alerting facilities to manage multiple Red Hat Quay deployments on a single cluster 3.3. Integrating standalone Red Hat Quay with OpenShift Container Platform While the Red Hat Quay Operator ensures seamless deployment and management of Red Hat Quay running on OpenShift Container Platform, it is also possible to run Red Hat Quay in standalone mode and then serve content to one or many OpenShift Container Platform clusters, wherever they are running.
Integrating standalone Red Hat Quay with OpenShift Container Platform Several Operators are available to help integrate standalone and Operator-based deployments of Red Hat Quay with OpenShift Container Platform, like the following: Red Hat Quay Cluster Security Operator Relays Red Hat Quay vulnerability scanning results into the OpenShift Container Platform console Red Hat Quay Bridge Operator Ensures seamless integration and user experience by using Red Hat Quay with OpenShift Container Platform in conjunction with OpenShift Container Platform Builds and ImageStreams 3.4. Mirror registry for Red Hat OpenShift The mirror registry for Red Hat OpenShift is a small-scale version of Red Hat Quay that you can use as a target for mirroring the required container images of OpenShift Container Platform for disconnected installations. For disconnected deployments of OpenShift Container Platform, a container registry is required to carry out the installation of the clusters. To run a production-grade registry service on such a cluster, you must create a separate registry deployment to install the first cluster. The mirror registry for Red Hat OpenShift addresses this need and is included in every OpenShift Container Platform subscription. It is available for download on the OpenShift console Downloads page. The mirror registry for Red Hat OpenShift allows users to install a small-scale version of Red Hat Quay and its required components using the mirror-registry command line interface (CLI) tool. The mirror registry for Red Hat OpenShift is deployed automatically with pre-configured local storage and a local database. It also includes auto-generated user credentials and access permissions with a single set of inputs and no additional configuration choices to get started. The mirror registry for Red Hat OpenShift provides a pre-determined network configuration and reports deployed component credentials and access URLs upon success. A limited set of optional configuration inputs like fully qualified domain name (FQDN) services, superuser name and password, and custom TLS certificates are also provided. This provides users with a container registry so that they can easily create an offline mirror of all OpenShift Container Platform release content when running OpenShift Container Platform in restricted network environments. The mirror registry for Red Hat OpenShift is limited to hosting images that are required to install a disconnected OpenShift Container Platform cluster, such as release images or Operator images. It uses local storage. Content built by customers should not be hosted by the mirror registry for Red Hat OpenShift . Unlike Red Hat Quay, the mirror registry for Red Hat OpenShift is not a highly-available registry. Only local file system storage is supported. Using the mirror registry for Red Hat OpenShift with more than one cluster is discouraged, because multiple clusters can create a single point of failure when updating your cluster fleet. It is advised to use the mirror registry for Red Hat OpenShift to install a cluster that can host a production-grade, highly available registry such as Red Hat Quay, which can serve OpenShift Container Platform content to other clusters. More information is available at Creating a mirror registry with mirror registry for Red Hat OpenShift . 3.5. Single compared to multiple registries Many users consider running multiple, distinct registries.
The preferred approach with Red Hat Quay is to have a single, shared registry: If you want a clear separation between development and production images, or a clear separation by content origin, for example, keeping third-party images distinct from internal ones, you can use organizations and repositories, combined with role-based access control (RBAC), to achieve the desired separation. Given that the image registry is a critical component in an enterprise environment, you may be tempted to use distinct deployments to test upgrades of the registry software to newer versions. The Red Hat Quay Operator updates the registry for patch releases as well as minor or major updates. This means that any complicated procedures are automated and, as a result, there is no requirement for you to provision multiple instances of the registry to test the upgrade. With Red Hat Quay, there is no need to have a separate registry for each cluster you deploy. Red Hat Quay is proven to work at scale at Quay.io , and can serve content to thousands of clusters. Even if you have deployments in multiple data centers, you can still use a single Red Hat Quay instance to serve content to multiple physically-close data centers, or use the HA functionality with load balancers to stretch across data centers. Alternatively, you can use the Red Hat Quay geo-replication feature to stretch across physically distant data centers. This requires the provisioning of a global load balancer or DNS-based geo-aware load balancing. One scenario where it may be appropriate to run multiple distinct registries, is when you want to specify different configuration for each registry. In summary, running a shared registry helps you to save storage, infrastructure and operational costs, but a dedicated registry might be needed in specific circumstances. | null | https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/red_hat_quay_architecture/arch-quay-infrastructure |
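As an illustrative sketch only, installing the mirror registry for Red Hat OpenShift is typically a single CLI invocation; the hostname, target directory, and port below are placeholders or assumptions and should be checked against the mirror registry documentation linked above:

    # Install the small-scale registry on the local host
    ./mirror-registry install \
        --quayHostname mirror-registry.example.com \
        --quayRoot /opt/quay

    # The installer prints auto-generated credentials on success; log in with podman
    podman login mirror-registry.example.com:8443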
Chapter 25. Running special container images | Chapter 25. Running special container images You can run some special types of container images. Some container images have built-in labels called runlabels that enable you to run those containers with preset options and arguments. With the podman container runlabel <label> command, you can execute the command defined in the <label> for the container image. Supported labels are install , run and uninstall . 25.1. Opening privileges to the host There are several differences between privileged and non-privileged containers. For example, the toolbox container is a privileged container. Here are examples of privileges that may or may not be open to the host from a container: Privileges : A privileged container disables the security features that isolate the container from the host. You can run a privileged container using the podman run --privileged <image_name> command. You can, for example, delete files and directories mounted from the host that are owned by the root user. Process tables : You can use the podman run --privileged --pid=host <image_name> command to use the host PID namespace for the container. Then you can use the ps -e command within a privileged container to list all processes running on the host. You can pass a process ID from the host to commands that run in the privileged container (for example, kill <PID> ). Network interfaces : By default, a container has only one external network interface and one loopback network interface. You can use the podman run --net=host <image_name> command to access host network interfaces directly from within the container. Inter-process communications : The IPC facility on the host is accessible from within the privileged container. You can run commands such as ipcs to see information about active message queues, shared memory segments, and semaphore sets on the host. 25.2. Container images with runlabels Some Red Hat images include labels that provide pre-set command lines for working with those images. Using the podman container runlabel <label> command, you can execute the command defined in the <label> for the image. Existing runlabels include: install : Sets up the host system before executing the image. Typically, this results in creating files and directories on the host that the container can access when it is run later. run : Identifies podman command line options to use when running the container. Typically, the options will open privileges on the host and mount the host content the container needs to remain permanently on the host. uninstall : Cleans up the host system after you finish running the container. 25.3. Running rsyslog with runlabels The rhel8/rsyslog container image is made to run a containerized version of the rsyslogd daemon. The rsyslog image contains the following runlabels: install , run and uninstall . The following procedure steps you through installing, running, and uninstalling the rsyslog image: Prerequisites The container-tools module is installed. Procedure Pull the rsyslog image: Display the install runlabel for rsyslog : This shows that the command will open privileges to the host, mount the host root filesystem on /host in the container, and run an install.sh script. Run the install runlabel for rsyslog : This creates files on the host system that the rsyslog image will use later.
Display the run runlabel for rsyslog : This shows that the command opens privileges to the host and mounts specific files and directories from the host inside the container when it launches the rsyslog container to run the rsyslogd daemon. Execute the run runlabel for rsyslog : The rsyslog container opens privileges, mounts what it needs from the host, and runs the rsyslogd daemon in the background ( -d ). The rsyslogd daemon begins gathering log messages and directing messages to files in the /var/log directory. Display the uninstall runlabel for rsyslog : Run the uninstall runlabel for rsyslog : Note In this case, the uninstall.sh script just removes the /etc/logrotate.d/syslog file. It does not clean up the configuration files. | [
"podman pull registry.redhat.io/rhel8/rsyslog",
"podman container runlabel install --display rhel8/rsyslog command: podman run --rm --privileged -v /:/host -e HOST=/host -e IMAGE=registry.redhat.io/rhel8/rsyslog:latest -e NAME=rsyslog registry.redhat.io/rhel8/rsyslog:latest /bin/install.sh",
"podman container runlabel install rhel8/rsyslog command: podman run --rm --privileged -v /:/host -e HOST=/host -e IMAGE=registry.redhat.io/rhel8/rsyslog:latest -e NAME=rsyslog registry.redhat.io/rhel8/rsyslog:latest /bin/install.sh Creating directory at /host//etc/pki/rsyslog Creating directory at /host//etc/rsyslog.d Installing file at /host//etc/rsyslog.conf Installing file at /host//etc/sysconfig/rsyslog Installing file at /host//etc/logrotate.d/syslog",
"podman container runlabel run --display rhel8/rsyslog command: podman run -d --privileged --name rsyslog --net=host --pid=host -v /etc/pki/rsyslog:/etc/pki/rsyslog -v /etc/rsyslog.conf:/etc/rsyslog.conf -v /etc/sysconfig/rsyslog:/etc/sysconfig/rsyslog -v /etc/rsyslog.d:/etc/rsyslog.d -v /var/log:/var/log -v /var/lib/rsyslog:/var/lib/rsyslog -v /run:/run -v /etc/machine-id:/etc/machine-id -v /etc/localtime:/etc/localtime -e IMAGE=registry.redhat.io/rhel8/rsyslog:latest -e NAME=rsyslog --restart=always registry.redhat.io/rhel8/rsyslog:latest /bin/rsyslog.sh",
"podman container runlabel run rhel8/rsyslog command: podman run -d --privileged --name rsyslog --net=host --pid=host -v /etc/pki/rsyslog:/etc/pki/rsyslog -v /etc/rsyslog.conf:/etc/rsyslog.conf -v /etc/sysconfig/rsyslog:/etc/sysconfig/rsyslog -v /etc/rsyslog.d:/etc/rsyslog.d -v /var/log:/var/log -v /var/lib/rsyslog:/var/lib/rsyslog -v /run:/run -v /etc/machine-id:/etc/machine-id -v /etc/localtime:/etc/localtime -e IMAGE=registry.redhat.io/rhel8/rsyslog:latest -e NAME=rsyslog --restart=always registry.redhat.io/rhel8/rsyslog:latest /bin/rsyslog.sh 28a0d719ff179adcea81eb63cc90fcd09f1755d5edb121399068a4ea59bd0f53",
"podman container runlabel uninstall --display rhel8/rsyslog command: podman run --rm --privileged -v /:/host -e HOST=/host -e IMAGE=registry.redhat.io/rhel8/rsyslog:latest -e NAME=rsyslog registry.redhat.io/rhel8/rsyslog:latest /bin/uninstall.sh",
"podman container runlabel uninstall rhel8/rsyslog command: podman run --rm --privileged -v /:/host -e HOST=/host -e IMAGE=registry.redhat.io/rhel8/rsyslog:latest -e NAME=rsyslog registry.redhat.io/rhel8/rsyslog:latest /bin/uninstall.sh"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/building_running_and_managing_containers/assembly_running-special-container-images |
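To illustrate the privilege discussion in this chapter, the following commands run a privileged container in the host PID namespace so that the host process table is visible from inside it; the UBI image name is only an example:

    # Start a privileged container that shares the host PID namespace
    podman run -it --privileged --pid=host registry.access.redhat.com/ubi8/ubi /bin/bash

    # Inside the container, list processes running on the host
    ps -e | head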
2.5.3. vmstat | 2.5.3. vmstat For a more concise understanding of system performance, try vmstat . With vmstat , it is possible to get an overview of process, memory, swap, I/O, system, and CPU activity in one line of numbers: The first line divides the fields into six categories, including process, memory, swap, I/O, system, and CPU related statistics. The second line further identifies the contents of each field, making it easy to quickly scan data for specific statistics. The process-related fields are: r -- The number of runnable processes waiting for access to the CPU b -- The number of processes in an uninterruptible sleep state The memory-related fields are: swpd -- The amount of virtual memory used free -- The amount of free memory buff -- The amount of memory used for buffers cache -- The amount of memory used as page cache The swap-related fields are: si -- The amount of memory swapped in from disk so -- The amount of memory swapped out to disk The I/O-related fields are: bi -- Blocks received from a block device bo -- Blocks sent to a block device The system-related fields are: in -- The number of interrupts per second cs -- The number of context switches per second The CPU-related fields are: us -- The percentage of the time the CPU ran user-level code sy -- The percentage of the time the CPU ran system-level code id -- The percentage of the time the CPU was idle wa -- I/O wait When vmstat is run without any options, only one line is displayed. This line contains averages, calculated from the time the system was last booted. However, most system administrators do not rely on the data in this line, as the time over which it was collected varies. Instead, most administrators take advantage of vmstat 's ability to repetitively display resource utilization data at set intervals. For example, the command vmstat 1 displays one new line of utilization data every second, while the command vmstat 1 10 displays one new line per second, but only for the next ten seconds. In the hands of an experienced administrator, vmstat can be used to quickly determine resource utilization and performance issues. But to gain more insight into those issues, a different kind of tool is required -- a tool capable of more in-depth data collection and analysis. | [
"procs memory swap io system cpu r b swpd free buff cache si so bi bo in cs us sy id wa 0 0 5276 315000 130744 380184 1 1 2 24 14 50 1 1 47 0"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s2-resource-tools-vmstat |
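For example, to keep the repetitive samples described above for later review, the interval and count form of the command can be redirected to a file (a simple illustration, not taken from the original text):

    # One line per second for 60 seconds, saved with a timestamp in the filename
    vmstat 1 60 > /tmp/vmstat-$(date +%Y%m%d-%H%M%S).log

    # Review the samples afterwards
    less /tmp/vmstat-*.log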
11.3. Supported Volume Options | 11.3. Supported Volume Options The following table lists available volume options along with their description and default value. Important The default values are subject to change, and may not be the same for all versions of Red Hat Gluster Storage. Table 11.1. Volume Options Option Value Description Allowed Values Default Value auth.allow IP addresses or hostnames of the clients which are allowed to access the volume. Valid hostnames or IP addresses, which includes wild card patterns including * . For example, 192.168.1.* . A list of comma separated addresses is acceptable, but a single hostname must not exceed 256 characters. * (allow all) auth.reject IP addresses or hostnames of FUSE clients that are denied access to a volume. For NFS access control, use nfs.rpc-auth-* options instead. Auth.reject takes precedence and overrides auth.allow. If auth.allow and auth.reject contain the same IP address then auth.reject is considered. Valid hostnames or IP addresses, which includes wild card patterns including * . For example, 192.168.1.* . A list of comma separated addresses is acceptable, but a single hostname must not exceed 256 characters. none (reject none) changelog Enables the changelog translator to record all the file operations. on | off off client.event-threads Specifies the number of network connections to be handled simultaneously by the client processes accessing a Red Hat Gluster Storage node. 1 - 32 2 client.strict-locks With this option enabled, it do not reopen the saved fds after reconnect if POSIX locks are held on them. Hence, subsequent operations on these fds are failed. This is necessary for stricter lock compliance as bricks cleanup any granted locks when a client disconnects. on | off off Important Before enabling client.strict-locks option, upgrade all the servers and clients to RHGS-3.5.5. cluster.background-self-heal-count The maximum number of heal operations that can occur simultaneously. Requests in excess of this number are stored in a queue whose length is defined by cluster.heal-wait-queue-leng . 0-256 8 cluster.brick-multiplex Available as of Red Hat Gluster Storage 3.3 and later. Controls whether to use brick multiplexing on all volumes. Red Hat recommends restarting volumes after enabling or disabling brick multiplexing. When set to off (the default), each brick has its own process and uses its own port. When set to on , bricks that are compatible with each other use the same process and the same port. This reduces per-brick memory usage and port consumption. Brick compatibility is determined at volume start, and depends on volume options shared between bricks. When multiplexing is enabled, restart volumes whenever volume configuration is changed in order to maintain the compatibility of the bricks grouped under a single process. on | off off cluster.consistent-metadata If set to on , the readdirp function in Automatic File Replication feature will always fetch metadata from their respective read children as long as it holds the good copy (the copy that does not need healing) of the file/directory. However, this could cause a reduction in performance where readdirps are involved. This option requires that the volume is remounted on the client to take effect. on | off off cluster.granular-entry-heal If set to enable, stores more granular information about the entries which were created or deleted from a directory while a brick in a replica was down. 
This helps in faster self-heal of directories, especially in use cases where directories with large number of entries are modified by creating or deleting entries. If set to disable, it only stores that the directory needs heal without information about what entries within the directories need to be healed, and thereby requires entire directory crawl to identify the changes. enable | disable enable Important Execute the gluster volume set VOLNAME cluster.granular-entry-heal [enable | disable] command only if the volume is in Created state. If the volume is in any other state other than Created , for example, Started , Stopped , and so on, execute gluster volume heal VOLNAME granular-entry-heal [enable | disable] command to enable or disable granular-entry-heal option. Important For new deployments and existing deployments that upgrade to Red Hat Gluster Storage 3.5.4, the cluster.granular-entry-heal option is enabled by default for the replicate volumes. cluster.heal-wait-queue-leng The maximum number of requests for heal operations that can be queued when heal operations equal to cluster.background-self-heal-count are already in progress. If more heal requests are made when this queue is full, those heal requests are ignored. 0-10000 128 cluster.lookup-optimize If this option is set to on , when a hashed sub-volume does not return a lookup result, negative lookups are optimized by not continuing to look on non-hashed subvolumes. For existing volumes, any directories created after the upgrade will have lookup-optimize behavior enabled. Rebalance operation has to be performed on all existing directories before they can use the lookup optimization. For new volumes, the lookup-optimize behavior is enabled by default, except for the root of the volume. Run a rebalance operation in order to enable lookup-optimize for the root of the volume. on|off on (Red Hat Gluster Storage 3.4 onwards) cluster.max-bricks-per-process The maximum number of bricks that can run on a single instance of glusterfsd process. As of Red Hat Gluster Storage 3.4 Batch 2 Update, the default value of this option is set to 250 . This provides better control of resource usage for container-based workloads. In earlier versions, the default value was 0 , which used a single process for all bricks on the node. Updating the value of this option does not affect currently running bricks. Restart the volume to change this setting for existing bricks. 0 to system maximum (any positive integer greater than 1) 250 cluster.min-free-disk Specifies the percentage of disk space that must be kept free. This may be useful for non-uniform bricks. Percentage of required minimum free disk space. 10% cluster.op-version Allows you to set the operating version of the cluster. The op-version number cannot be downgraded and is set for all volumes in the cluster. The op-version is not listed as part of gluster volume info command output. 30708 | 30712 | 31001 | 31101 | 31302 | 31303 | 31304 | 31305 | 31306 | 70200 Default value depends on Red Hat Gluster Storage version first installed. For Red Hat Gluster Storage 3.5 the value is set to 70200 for a new deployment. cluster.read-freq-threshold Specifies the number of reads, in a promotion/demotion cycle, that would mark a file HOT for promotion. Any file that has read hits less than this value will be considered as COLD and will be demoted. 0-20 0 cluster.self-heal-daemon Specifies whether proactive self-healing on replicated volumes is activated. 
on | off on cluster.server-quorum-ratio Sets the quorum percentage for the trusted storage pool. 0 - 100 >50% cluster.server-quorum-type If set to server, this option enables the specified volume to participate in the server-side quorum. For more information on configuring the server-side quorum , see Section 11.15.1.1, "Configuring Server-Side Quorum" none | server none cluster.quorum-count Specifies the minimum number of bricks that must be available in order for writes to be allowed. This is set on a per-volume basis. This option is used by the cluster.quorum-type option to determine write behavior. Valid values are between 1 and the number of bricks in a replica set. null cluster.quorum-type Determines when the client is allowed to write to a volume. For more information on configuring the client-side quorum , see Section 11.15.1.2, "Configuring Client-Side Quorum" none | fixed | auto auto cluster.shd-max-threads Specifies the number of entries that can be self healed in parallel on each replica by self-heal daemon. 1 - 64 1 cluster.shd-max-threads Specifies the number of entries that can be self healed in parallel on each replica by self-heal daemon. 1 - 64 1 cluster.shd-wait-qlength Specifies the number of entries that must be kept in the queue for self-heal daemon threads to take up as soon as any of the threads are free to heal. This value should be changed based on how much memory self-heal daemon process can use for keeping the set of entries that need to be healed. 1 - 655536 1024 cluster.shd-wait-qlength Specifies the number of entries that must be kept in the dispersed subvolume's queue for self-heal daemon threads to take up as soon as any of the threads are free to heal. This value should be changed based on how much memory self-heal daemon process can use for keeping the set of entries that need to be healed. 1 - 655536 1024 cluster.tier-demote-frequency Specifies how frequently the tier daemon must check for files to demote. 1 - 172800 seconds 3600 seconds cluster.tier-max-files Specifies the maximum number of files that may be migrated in any direction from each node in a given cycle. 1-100000 files 10000 cluster.tier-max-mb Specifies the maximum number of MB that may be migrated in any direction from each node in a given cycle. 1 -100000 (100 GB) 4000 MB cluster.tier-mode If set to cache mode, promotes or demotes files based on whether the cache is full or not, as specified with watermarks. If set to test mode, periodically demotes or promotes files automatically based on access. test | cache cache cluster.tier-promote-frequency Specifies how frequently the tier daemon must check for files to promote. 1- 172800 seconds 120 seconds cluster.use-anonymous-inode When enabled, handles entry heal related issues and heals the directory renames efficiently. on|off on (Red Hat Gluster Storage 3.5.4 onwards) cluster.use-compound-fops When enabled, write transactions that occur as part of Automatic File Replication are modified so that network round trips are reduced, improving performance. on | off off cluster.watermark-hi Upper percentage watermark for promotion. If hot tier fills above this percentage, no promotion will happen and demotion will happen with high probability. 1- 99 % 90% cluster.watermark-low Lower percentage watermark. If hot tier is less full than this, promotion will happen and demotion will not happen. If greater than this, promotion/demotion will happen at a probability relative to how full the hot tier is. 
1- 99 % 75% cluster.write-freq-threshold Specifies the number of writes, in a promotion/demotion cycle, that would mark a file HOT for promotion. Any file that has write hits less than this value will be considered as COLD and will be demoted. 0-20 0 config.transport Specifies the type of transport(s) volume would support communicating over. tcp OR rdma OR tcp,rdma tcp diagnostics.brick-log-buf-size The maximum number of unique log messages that can be suppressed until the timeout or buffer overflow, whichever occurs first on the bricks. 0 and 20 (0 and 20 included) 5 diagnostics.brick-log-flush-timeout The length of time for which the log messages are buffered, before being flushed to the logging infrastructure (gluster or syslog files) on the bricks. 30 - 300 seconds (30 and 300 included) 120 seconds diagnostics.brick-log-format Allows you to configure the log format to log either with a message id or without one on the brick. no-msg-id | with-msg-id with-msg-id diagnostics.brick-log-level Changes the log-level of the bricks. INFO | DEBUG | WARNING | ERROR | CRITICAL | NONE | TRACE info diagnostics.brick-sys-log-level Depending on the value defined for this option, log messages at and above the defined level are generated in the syslog and the brick log files. INFO | WARNING | ERROR | CRITICAL CRITICAL diagnostics.client-log-buf-size The maximum number of unique log messages that can be suppressed until the timeout or buffer overflow, whichever occurs first on the clients. 0 and 20 (0 and 20 included) 5 diagnostics.client-log-flush-timeout The length of time for which the log messages are buffered, before being flushed to the logging infrastructure (gluster or syslog files) on the clients. 30 - 300 seconds (30 and 300 included) 120 seconds diagnostics.client-log-format Allows you to configure the log format to log either with a message ID or without one on the client. no-msg-id | with-msg-id with-msg-id diagnostics.client-log-level Changes the log-level of the clients. INFO | DEBUG | WARNING | ERROR | CRITICAL | NONE | TRACE info diagnostics.client-sys-log-level Depending on the value defined for this option, log messages at and above the defined level are generated in the syslog and the client log files. INFO | WARNING | ERROR | CRITICAL CRITICAL disperse.eager-lock Before a file operation starts, a lock is placed on the file. The lock remains in place until the file operation is complete. After the file operation completes, if eager-lock is on, the lock remains in place either until lock contention is detected, or for 1 second in order to check if there is another request for that file from the same client. If eager-lock is off, locks release immediately after file operations complete, improving performance for some operations, but reducing access efficiency. on | off on disperse.other-eager-lock This option is equivalent to the disperse.eager-lock option but applicable only for non regular files. When multiple clients access a particular directory, disabling disperse.other-eager-lockoption for the volume can improve performance for directory access without compromising performance of I/O's for regular files. on | off on disperse.other-eager-lock-timeout Maximum time (in seconds) that a lock on a non regular entry is held if no new operations on the entry are received. 0-60 1 disperse.shd-max-threads Specifies the number of entries that can be self healed in parallel on each disperse subvolume by self-heal daemon. 
1 - 64 1 disperse.shd-wait-qlength Specifies the number of entries that must be kept in the dispersed subvolume's queue for self-heal daemon threads to take up as soon as any of the threads are free to heal. This value should be changed based on how much memory the self-heal daemon process can use for keeping the set of entries that need to be healed. 1 - 655536 1024 features.ctr_link_consistency Enables a crash consistent way of recording hardlink updates by the Change Time Recorder translator. When recording in a crash consistent way the data operations will experience more latency. on | off off features.ctr-enabled Enables the Change Time Recorder (CTR) translator for a tiered volume. This option is used in conjunction with the features.record-counters option to enable recording write and read heat counters. on | off on features.locks-notify-contention When this option is enabled and a lock request conflicts with a currently granted lock, an upcall notification will be sent to the current owner of the lock to request it to be released as soon as possible. yes | no yes features.locks-notify-contention-delay This value determines the minimum amount of time (in seconds) between upcall contention notifications on the same inode. If multiple lock requests are received during this period, only one upcall will be sent. 0-60 5 features.quota-deem-statfs (Deprecated) See Chapter 9, Managing Directory Quotas for more details. When this option is set to on, it takes the quota limits into consideration while estimating the filesystem size. The limit will be treated as the total size instead of the actual size of the filesystem. on | off on features.read-only Specifies whether to mount the entire volume as read-only for all the clients accessing it. on | off off features.record-counters If set to enabled, the cluster.write-freq-threshold and cluster.read-freq-threshold options define the number of writes and reads to a given file that are needed before triggering migration. on | off on features.shard Enables or disables sharding on the volume. Affects files created after volume configuration. enable | disable disable features.shard-block-size Specifies the maximum size of file pieces when sharding is enabled. Affects files created after volume configuration. 512MB 512MB geo-replication.indexing Enables the marker translator to track the changes in the volume. on | off off network.ping-timeout The time the client waits for a response from the server. If a timeout occurs, all resources held by the server on behalf of the client are cleaned up. When the connection is reestablished, all resources need to be reacquired before the client can resume operations on the server. Additionally, locks are acquired and the lock tables are updated. A reconnect is a very expensive operation and must be avoided. 42 seconds 42 seconds nfs.acl Disabling nfs.acl will remove support for the NFSACL sideband protocol. This is enabled by default. enable | disable enable nfs.addr-namelookup Specifies whether to look up names for incoming client connections. In some configurations, the name server can take too long to reply to DNS queries, resulting in timeouts of mount requests. This option can be used to disable name lookups during address authentication. Note that disabling name lookups will prevent you from using hostnames in the nfs.rpc-auth-* options. on | off off nfs.disable Specifies whether to disable NFS exports of individual volumes.
on | off off nfs.enable-ino32 For NFS clients or applications that do not support 64-bit inode numbers, use this option to make NFS return 32-bit inode numbers instead. Disabled by default, so NFS returns 64-bit inode numbers. This value is global and applies to all the volumes in the trusted storage pool. enable | disable disable nfs.export-volumes Enables or disables exporting entire volumes. If this option is disabled and the nfs.export-dir option is enabled, you can set subdirectories as the only exports. on | off on nfs.mount-rmtab Path to the cache file that contains a list of NFS-clients and the volumes they have mounted. Change the location of this file to a mounted (with glusterfs-fuse, on all storage servers) volume to gain a trusted pool wide view of all NFS-clients that use the volumes. The contents of this file provide the information that can be obtained with the showmount command. Path to a directory /var/lib/glusterd/nfs/rmtab nfs.mount-udp Enable UDP transport for the MOUNT sideband protocol. By default, UDP is not enabled, and MOUNT can only be used over TCP. Some NFS-clients (certain Solaris, HP-UX and others) do not support MOUNT over TCP and enabling nfs.mount-udp makes it possible to use NFS exports provided by Red Hat Gluster Storage. disable | enable disable nfs.nlm By default, the Network Lock Manager (NLMv4) is enabled. Use this option to disable NLM. Red Hat does not recommend disabling this option. on|off on nfs.port Associates glusterFS NFS with a non-default port. 1025-60999 38465- 38467 nfs.ports-insecure Allows client connections from unprivileged ports. By default only privileged ports are allowed. This is a global setting for allowing insecure ports for all exports using a single option. on | off off nfs.rdirplus The default value is on. When this option is turned off, NFS falls back to standard readdir instead of readdirp. Turning this off would result in more lookup and stat requests being sent from the client which may impact performance. on|off on nfs.rpc-auth-allow IP_ADDRESSES A comma separated list of IP addresses allowed to connect to the server. By default, all clients are allowed. Comma separated list of IP addresses accept all nfs.rpc-auth-reject IP_ADDRESSES A comma separated list of addresses not allowed to connect to the server. By default, all connections are allowed. Comma separated list of IP addresses reject none nfs.server-aux-gids When enabled, the NFS-server will resolve the groups of the user accessing the volume. NFSv3 is restricted by the RPC protocol (AUTH_UNIX/AUTH_SYS header) to 16 groups. By resolving the groups on the NFS-server, this limit can be bypassed. on|off off nfs.transport-type Specifies the transport used by GlusterFS NFS server to communicate with bricks. tcp OR rdma tcp open-behind It improves the application's ability to read data from a file by sending success notifications to the application whenever it receives an open call. on | off on performance.cache-max-file-size Sets the maximum file size cached by the io-cache translator. Can be specified using the normal size descriptors of KB, MB, GB, TB, or PB (for example, 6 GB). Size in bytes, or specified using size descriptors. 2 ^ 64-1 bytes performance.cache-min-file-size Sets the minimum file size cached by the io-cache translator. Can be specified using the normal size descriptors of KB, MB, GB, TB, or PB (for example, 6 GB). Size in bytes, or specified using size descriptors.
0 performance.cache-refresh-timeout The number of seconds cached data for a file will be retained. After this timeout, data re-validation will be performed. 0 - 61 seconds 1 second performance.cache-size Size of the read cache. Size in bytes, or specified using size descriptors. 32 MB performance.client-io-threads Improves performance for parallel I/O from a single mount point for dispersed (erasure-coded) volumes by allowing up to 16 threads to be used in parallel. When enabled, 1 thread is used by default, and further threads up to the maximum of 16 are created as required by client workload. This is useful for dispersed and distributed dispersed volumes. This feature is not recommended for distributed, replicated or distributed-replicated volumes. It is disabled by default on replicated and distributed-replicated volume types. on | off on, except for replicated and distributed-replicated volumes performance.flush-behind Specifies whether the write-behind translator performs flush operations in the background by returning (false) success to the application before flush file operations are sent to the backend file system. on | off on performance.io-thread-count The number of threads in the I/O threads translator. 1 - 64 16 performance.lazy-open This option requires open-behind to be on. Perform an open in the backend only when a necessary file operation arrives (for example, write on the file descriptor, unlink of the file). When this option is disabled, perform backend open immediately after an unwinding open. Yes/No Yes performance.md-cache-timeout The time period in seconds which controls when metadata cache has to be refreshed. If the age of cache is greater than this time-period, it is refreshed. Every time cache is refreshed, its age is reset to 0 . 0-600 seconds 1 second performance.nfs-strict-write-ordering Specifies whether to prevent later writes from overtaking earlier writes for NFS, even if the writes do not relate to the same files or locations. on | off off performance.nfs.flush-behind Specifies whether the write-behind translator performs flush operations in the background for NFS by returning (false) success to the application before flush file operations are sent to the backend file system. on | off on performance.nfs.strict-o-direct Specifies whether to attempt to minimize the cache effects of I/O for a file on NFS. When this option is enabled and a file descriptor is opened using the O_DIRECT flag, write-back caching is disabled for writes that affect that file descriptor. When this option is disabled, O_DIRECT has no effect on caching. This option is ignored if performance.write-behind is disabled. on | off off performance.nfs.write-behind-trickling-writes Enables and disables trickling-write strategy for the write-behind translator for NFS clients. on | off on performance.nfs.write-behind-window-size Specifies the size of the write-behind buffer for a single file or inode for NFS. 512 KB - 1 GB 1 MB performance.quick-read To enable/disable quick-read translator in the volume. on | off on performance.rda-cache-limit The value specified for this option is the maximum size of cache consumed by the readdir-ahead translator. This value is global and the total memory consumption by readdir-ahead is capped by this value, irrespective of the number/size of directories cached. 0-1GB 10MB performance.rda-request-size The value specified for this option will be the size of buffer holding directory entries in readdirp response. 
4KB-128KB 128KB performance.resync-failed-syncs-after-fsync If syncing cached writes that were issued before an fsync operation fails, this option configures whether to reattempt the failed sync operations. on | off off performance.strict-o-direct Specifies whether to attempt to minimize the cache effects of I/O for a file. When this option is enabled and a file descriptor is opened using the O_DIRECT flag, write-back caching is disabled for writes that affect that file descriptor. When this option is disabled, O_DIRECT has no effect on caching. This option is ignored if performance.write-behind is disabled. on | off off performance.strict-write-ordering Specifies whether to prevent later writes from overtaking earlier writes, even if the writes do not relate to the same files or locations. on | off off performance.use-anonymous-fd This option requires open-behind to be on. For read operations, use anonymous file descriptor when the original file descriptor is open-behind and not yet opened in the backend. Yes | No Yes performance.write-behind Enables and disables write-behind translator. on | off on performance.write-behind-trickling-writes Enables and disables trickling-write strategy for the write-behind translator for FUSE clients. on | off on performance.write-behind-window-size Specifies the size of the write-behind buffer for a single file or inode. 512 KB - 1 GB 1 MB rebal-throttle Rebalance process is made multithreaded to handle multiple files migration for enhancing the performance. During multiple file migration, there can be a severe impact on storage system performance. The throttling mechanism is provided to manage it. lazy, normal, aggressive normal server.allow-insecure Allows FUSE-based client connections from unprivileged ports. By default, this is enabled, meaning that ports can accept and reject messages from insecure ports. When disabled, only privileged ports are allowed. This is a global setting for allowing insecure ports to be enabled for all FUSE-based exports using a single option. Use nfs.rpc-auth-* options for NFS access control. on | off on server.anongid Value of the GID used for the anonymous user when root-squash is enabled. When root-squash is enabled, all the requests received from the root GID (that is 0) are changed to have the GID of the anonymous user. 0 - 4294967295 65534 (this UID is also known as nfsnobody) server.anonuid Value of the UID used for the anonymous user when root-squash is enabled. When root-squash is enabled, all the requests received from the root UID (that is 0) are changed to have the UID of the anonymous user. 0 - 4294967295 65534 (this UID is also known as nfsnobody) server.event-threads Specifies the number of network connections to be handled simultaneously by the server processes hosting a Red Hat Gluster Storage node. 1 - 32 1 server.gid-timeout The time period in seconds which controls when cached groups has to expire. This is the cache that contains the groups (GIDs) where a specified user (UID) belongs to. This option is used only when server.manage-gids is enabled. 0-4294967295 seconds 2 seconds server.manage-gids Resolve groups on the server-side. By enabling this option, the groups (GIDs) a user (UID) belongs to gets resolved on the server, instead of using the groups that were send in the RPC Call by the client. This option makes it possible to apply permission checks for users that belong to bigger group lists than the protocol supports (approximately 93). 
on|off off server.root-squash Prevents root users from having root privileges, and instead assigns them the privileges of nfsnobody. This squashes the power of the root users, preventing unauthorized modification of files on the Red Hat Gluster Storage servers. This option is used only for glusterFS NFS protocol. on | off off server.statedump-path Specifies the directory in which the statedumpfiles must be stored. /var/run/gluster (for a default installation) Path to a directory ssl.crl-path Specifies the path to a directory containing SSL certificate revocation list (CRL). This list helps the server nodes to stop the nodes with revoked certificates from accessing the cluster. Absolute path of the directory hosting the CRL files. null (No default value. Hence, it is blank until the volume option is set.) storage.fips-mode-rchecksum If enabled, posix_rchecksum uses the FIPS compliant SHA256 checksum, else it uses MD5. on | off on Warning Do not enable the storage.fips-mode-rchecksum option on volumes with clients that use Red Hat Gluster Storage 3.4 or earlier. storage.create-mask Maximum set (upper limit) of permission for the files that will be created. 0000 - 0777 0777 storage. create-directory-mask Maximum set (upper limit) of permission for the directories that will be created. 0000 - 0777 0777 storage.force-create-mode Minimum set (lower limit) of permission for the files that will be created. 0000 - 0777 0000 storage.force-directory-mode Minimum set (lower limit) of permission for the directories that will be created. 0000 - 0777 0000 Important Behavior is undefined in terms of calculated file access mode when both a mask and a matching forced mode are set simultaneously, create-directory-mask and force-directory-mode or create-mask and force-create-mode . storage.health-check-interval Sets the time interval in seconds for a filesystem health check. You can set it to 0 to disable. The POSIX translator on the bricks performs a periodic health check. If this check fails, the file system exported by the brick is not usable anymore and the brick process (glusterfsd) logs a warning and exits. 0-4294967295 seconds 30 seconds storage.health-check-timeout Sets the time interval in seconds to wait for aio_write to finish for health check. Set to 0 to disable. 0-4294967295 seconds 20 seconds storage.owner-gid Sets the GID for the bricks of the volume. This option may be required when some of the applications need the brick to have a specific GID to function correctly. Example: For QEMU integration the UID/GID must be qemu:qemu, that is, 107:107 (107 is the UID and GID of qemu). Any integer greater than or equal to -1. The GID of the bricks are not changed. This is denoted by -1. storage.owner-uid Sets the UID for the bricks of the volume. This option may be required when some of the applications need the brick to have a specific UID to function correctly. Example: For QEMU integration the UID/GID must be qemu:qemu , that is, 107:107 (107 is the UID and GID of qemu). Any integer greater than or equal to -1. The UID of the bricks are not changed. This is denoted by -1. storage.reserve The POSIX translator includes an option that allow users to reserve disk space on the bricks. This option ensures that enough space is retained to allow users to expand disks or cluster when the bricks are nearly full. The option does this by preventing new file creation when the disk has the storage.reserve percentage/size or less free space. 
The storage.reserve option accepts a value either as a percentage or in MB/GB. To reconfigure this volume option from MB/GB to percentage, or from percentage to MB/GB, set the same volume option again; the most recently set value takes effect. If set to 0, storage.reserve is disabled. 0-100% (applicable if the parameter is a percentage) or nKB/MB/GB (applicable when a size is used as the parameter), where 'n' is the positive integer that needs to be reserved. Respective examples: gluster volume set <vol-name> storage.reserve 15% or gluster volume set <vol-name> storage.reserve 100GB 1% (1% of the brick size) Note Be mindful of the brick size while setting the storage.reserve option in MB/GB. For example, in a case where the value for the volume option is >= brick size, the entire brick will be reserved. The option works at sub-volume level. transport.listen-backlog The maximum number of established TCP socket requests queued and waiting to be accepted at any one time. 0 to system maximum 1024 | null | https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/volume_option_table |
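All of the options listed above are applied with the same gluster volume set syntax and can be inspected or reverted afterwards. A minimal sketch follows, assuming an existing volume named myvol; the volume name and the values shown (cache size, thread count, reserve percentage) are illustrative placeholders, not tuning recommendations.
# Set some of the options described above on an existing volume
gluster volume set myvol performance.cache-size 256MB
gluster volume set myvol server.event-threads 4
gluster volume set myvol storage.reserve 15%
# Inspect a single option, or dump every option with its current value
gluster volume get myvol storage.reserve
gluster volume get myvol all
# Return an option to its default value
gluster volume reset myvol performance.cache-size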
Chapter 5. Managing Load-balancing service instance logs | Chapter 5. Managing Load-balancing service instance logs You can enable tenant flow logging or suppress logging to the amphora local file system. You can also forward administrative or tenant flow logs to syslog receivers in a set of containers or to other syslog receivers at endpoints of your choosing. When you choose to use the TCP syslog protocol, you can specify one or more secondary endpoints for administrative and tenant log offloading in the event that the primary endpoint fails. In addition, you can control a range of other logging features such as, setting the syslog facility value, changing the tenant flow log format, and widening the scope of administrative logging to include logs from sources like the kernel and from cron. Section 5.1, "Enabling Load-balancing service instance administrative log offloading" Section 5.2, "Enabling tenant flow log offloading for Load-balancing service instances" Section 5.3, "Disabling Load-balancing service instance tenant flow logging" Section 5.4, "Disabling Load-balancing service instance local log storage" Section 5.5, "Heat parameters for Load-balancing service instance logging" Section 5.6, "Load-balancing service instance tenant flow log format" 5.1. Enabling Load-balancing service instance administrative log offloading By default, Load-balancing service instances (amphorae) store logs on the local machine in the systemd journal. However, you can specify that the amphorae offload logs to syslog receivers to aggregate administrative logs. Log offloading enables administrators to go to one location for logs, and retain logs when the amphorae are rotated. Procedure Log in to the undercloud host as the stack user. Source the undercloud credentials file: Create a custom YAML environment file. Example In the YAML environment file under parameter_defaults , set OctaviaLogOffload to true . Note The amphorae offload administrative logs use the syslog facility value of local1 , by default, unless you specify another value with the OctaviaAdminLogFacility parameter. The valid values are 0 - 7. Example The amphorae forward only load balancer-related administrative logs, such as the haproxy admin logs, keepalived, and amphora agent logs. If you want to configure the amphorae to send all of the administrative logs from the amphorae, such as the kernel, system, and security logs, set OctaviaForwardAllLogs to true . Example Choose the log protocol: UDP (default) or TCP. In the event that the primary endpoint fails, the amphorae sends its logs to a secondary endpoint only when the log protocol is TCP. The amphorae use a set of default containers defined by the Orchestration service (heat) that contain syslog receivers listening for log messages. If you want to use a different set of endpoints, you can specify those with the OctaviaAdminLogTargets parameter. The endpoints configured for tenant flow log offloading can be the same endpoints used for administrative log offloading. Also, if your log offload protocol is TCP, in the event that the first endpoint is unreachable, the amphorae will try the additional endpoints in the order that you list them until a connection succeeds. Example By default, when you enable log offloading, tenant flow logs are also offloaded. If you want to disable tenant flow log offloading, set the OctaviaConnectionLogging to false . Example Run the deployment command and include the core heat templates, environment files, and this new custom environment file. 
Important The order of the environment files is important as the parameters and resources defined in subsequent environment files take precedence. Example Verification Unless you specified specific endpoints with the OctaviaAdminLogTargets or OctaviaTenantLogTargets , the amphorae offload logs to the RHOSP Controller in the same location as the other RHOSP logs ( /var/log/containers/octavia-amphorae/ ). Check the appropriate location for the presence of the following log files: octavia-amphora.log -- Log file for the administrative log. (if enabled) octavia-tenant-traffic.log -- Log file for the tenant traffic flow log. Additional resources Section 5.5, "Heat parameters for Load-balancing service instance logging" Environment files in the Customizing your Red Hat OpenStack Platform deployment guide Including environment files in overcloud creation in the Customizing your Red Hat OpenStack Platform deployment guide 5.2. Enabling tenant flow log offloading for Load-balancing service instances By default, Load-balancing service instances (amphorae) store logs on the local machine in the systemd journal. However, you can specify alternate syslog receiver endpoints instead. Because tenant flow logs grow in size depending on the number of tenant connections, ensure that the alternate syslog receivers contain sufficient disk space. By default, tenant flow log offloading is automatically enabled when administrative log offloading is enabled. To turn off tenant flow log offloading, set the OctaviaConnectionLogging parameter to false . Important Tenant flow logging can produce a large number of syslog messages depending on how many connections the load balancers are receiving. Tenant flow logging produces one log entry for each connection to the load balancer. Monitor log volume and configure your syslog receivers appropriately based on the expected number of connections that your load balancers manage. Procedure Log in to the undercloud host as the stack user. Source the undercloud credentials file: Locate the environment file in which the OctaviaConnectionLogging parameter is set: If you do not find the file, create an environment file: Add the OctaviaLogOffload and OctaviaConnectionLogging parameters to the parameter_defaults section of the environment file and set the values to true : Note The amphorae use the syslog facility default value of local0 to offload tenant flow logs unless you use the OctaviaTenantLogFacility parameter to specify another value. The valid values are 0 - 7. Optional: To change the default endpoint that the amphorae use for both tenant and administrative log offloading, use OctaviaTenantLogTargets and OctaviaAdminLogTargets , respectively. The amphorae use a set of default containers that contain syslog receivers that listen for log messages. Also, if your log offload protocol is TCP, in the event that the first endpoint is unreachable, the amphorae will try the additional endpoints in the order that you list them until a connection succeeds. Run the deployment command and include the core heat templates, environment files, and the custom environment file you modified. Important The order of the environment files is important because the parameters and resources defined in subsequent environment files take precedence. Verification Unless you specified specific endpoints with the OctaviaAdminLogTargets or OctaviaTenantLogTargets , the amphorae offload logs to the RHOSP Controller in the same location as the other RHOSP logs ( /var/log/containers/octavia-amphorae/ ). 
Check the appropriate location for the presence of the following log files: octavia-amphora.log -- Log file for the administrative log. octavia-tenant-traffic.log -- Log file for the tenant traffic flow log. Additional resources Section 5.5, "Heat parameters for Load-balancing service instance logging" Environment files in the Customizing your Red Hat OpenStack Platform deployment guide Including environment files in overcloud creation in the Customizing your Red Hat OpenStack Platform deployment guide Section 5.6, "Load-balancing service instance tenant flow log format" 5.3. Disabling Load-balancing service instance tenant flow logging Tenant flow log offloading for Load-balancing service instances (amphorae) is automatically enabled when you enable administrative log offloading. To keep administrative log offloading enabled and to disable tenant flow logging, you must set the OctaviaConnectionLogging parameter to false . When the OctaviaConnectionLogging parameter is false , the amphorae do not write tenant flow logs to the disk inside the amphorae, nor offload any logs to syslog receivers listening elsewhere. Procedure Log in to the undercloud host as the stack user. Source the undercloud credentials file: Locate the YAML custom environment file in which amphora logging is configured. Example In the custom environment file, under parameter_defaults , set OctaviaConnectionLogging to false . Example Run the deployment command and include the core heat templates, environment files, and the custom environment file in which you set OctaviaConnectionLogging to false . Important The order of the environment files is important as the parameters and resources defined in subsequent environment files take precedence. Example Verification Unless you specified specific endpoints with the OctaviaAdminLogTargets or OctaviaTenantLogTargets , the amphorae offload logs to the RHOSP Controller in the same location as the other RHOSP logs ( /var/log/containers/octavia-amphorae/ ). Check the appropriate location for the absence of octavia-tenant-traffic.log . Additional resources Environment files in the Customizing your Red Hat OpenStack Platform deployment guide Including environment files in overcloud creation in the Customizing your Red Hat OpenStack Platform deployment guide 5.4. Disabling Load-balancing service instance local log storage Even when you configure Load-balancing service instances (amphorae) to offload administrative and tenant flow logs, the amphorae continue to write these logs to the disk inside the amphorae. To improve the performance of the load balancer, you can stop logging locally. Important If you disable logging locally, you also disable all log storage in the amphora, including kernel, system, and security logging. Note If you disable local log storage and the OctaviaLogOffload parameter is set to false, ensure that you set OctaviaConnectionLogging to false for improved load balancing performance. Procedure Log in to the undercloud host as the stack user. Source the undercloud credentials file: Create a custom YAML environment file. Example In the environment file under parameter_defaults , set OctaviaDisableLocalLogStorage to true . Run the deployment command and include the core heat templates, environment files, and this new custom environment file. Important The order of the environment files is important as the parameters and resources defined in subsequent environment files take precedence.
Example Verification On the amphora instance, check the location where log files are written, and verify that no new log files are being written. Additional resources Environment files in the Customizing your Red Hat OpenStack Platform deployment guide Including environment files in overcloud creation in the Customizing your Red Hat OpenStack Platform deployment guide 5.5. Heat parameters for Load-balancing service instance logging When you want to configure Load-balancing service instance (amphora) logging, you set values for one or more Orchestration service (heat) parameters that control logging and run the openstack overcloud deploy command. These heat parameters for amphora logging enable you to control features such as turning on log offloading, defining custom endpoints to offload logs to, setting the syslog facility value for logs, and so on. Table 5.1. Heat parameters for all logs Parameter Default Description OctaviaLogOffload false When true , instances offload their logs. If no endpoints are specified, then, by default, the instance offloads its log to the same location as the other RHOSP logs ( /var/log/containers/octavia-amphorae/ ). OctaviaDisableLocalLogStorage false When true , instances do not store logs on the instance host filesystem. This includes all kernel, system, and security logs. OctaviaForwardAllLogs false When true , instances forward all log messages to the administrative log endpoints, including non-load balancing related logs such as the cron and kernel logs. For instances to recognize OctaviaForwardAllLogs , you must also enable OctaviaLogOffload . Table 5.2. Heat parameters for admin logging Parameter Default Description OctaviaAdminLogTargets No value. A comma-delimited list of syslog endpoints (<host>:<port>) to receive administrative log messages. These endpoints can be a container, VM, or physical host that is running a process that is listening for the log messages on the specified port. When OctaviaAdminLogTargets is not present, the instance offloads its log to the same location as the other RHOSP logs ( /var/log/containers/octavia-amphorae/ ) on a set of containers that is defined by RHOSP director. OctaviaAdminLogFacility 1 A number between 0 and 7 that is the syslog "LOG_LOCAL" facility to use for the administrative log messages. Table 5.3. Heat parameters for tenant flow logging Parameter Default Description OctaviaConnectionLogging true When true , tenant connection flows are logged. When OctaviaConnectionLogging is false, the amphorae stop logging tenant connections regardless of the OctaviaLogOffload setting. OctaviaConnectionLogging disables local tenant flow log storage and, if log offloading is enabled, it does not forward tenant flow logs. OctaviaTenantLogTargets No value. A comma-delimited list of syslog endpoints (<host>:<port>) to receive tenant traffic flow log messages. These endpoints can be a container, VM, or physical host that is running a process that is listening for the log messages on the specified port. When OctaviaTenantLogTargets is not present, the instance offloads its log to the same location as the other RHOSP logs ( /var/log/containers/octavia-amphorae/ ) on a set of containers that is defined by RHOSP director. OctaviaTenantLogFacility 0 A number between 0 and 7 that is the syslog "LOG_LOCAL" facility to use for the tenant traffic flow log messages. 
OctaviaUserLogFormat "{{ '{{' }} project_id {{ '}}' }} {{ '{{' }} lb_id {{ '}}' }} %f %ci %cp %t %{+Q}r %ST %B %U %[ssl_c_verify] %{+Q}[ssl_c_s_dn] %b %s %Tt %tsc" The format for the tenant traffic flow log. The alphanumerics represent specific octavia fields, and the curly braces ({}) are substitution variables. Additional resources Including environment files in overcloud creation in the Customizing your Red Hat OpenStack Platform deployment guide 5.6. Load-balancing service instance tenant flow log format The log format that the tenant flow logs for Load-balancing service instances (amphorae) follow is the HAProxy log format. The two exceptions are the project_id and lb_id variables whose values are provided by the amphora provider driver. Sample Here is a sample log entry with rsyslog as the syslog receiver: Notes A hyphen (-) indicates any field that is unknown or not applicable to the connection. The prefix in the earlier sample log entry originates from the rsyslog receiver, and is not part of the syslog message from the amphora: Default The default amphora tenant flow log format is: Refer to the table that follows for a description of the format. Table 5.4. Data variables for tenant flow logs format variable definitions. Variable Type Field name {{project_id}} UUID Project ID (substitution variable from the amphora provider driver) {{lb_id}} UUID Load balancer ID (substitution variable from the amphora provider driver) %f string frontend_name %ci IP address client_ip %cp numeric client_port %t date date_time %ST numeric status_code %B numeric bytes_read %U numeric bytes_uploaded %ssl_c_verify Boolean client_certificate_verify (0 or 1) %ssl_c_s_dn string client_certificate_distinguished_name %b string pool_id %s string member_id %Tt numeric processing_time (milliseconds) %tsc string termination_state (with cookie status) Additional resources Custom log format in HAProxy Documentation | [
"source ~/stackrc",
"vi /home/stack/templates/my-octavia-environment.yaml",
"parameter_defaults: OctaviaLogOffload: true",
"parameter_defaults: OctaviaLogOffload: true OctaviaAdminLogFacility: 2",
"parameter_defaults: OctaviaLogOffload: true OctaviaForwardAllLogs: true",
"parameter_defaults: OctaviaLogOffload: true OctaviaConnectionLogging: false OctaviaLogOffloadProtocol: tcp",
"OctaviaAdminLogTargets: <ip_address>:<port>[, <ip_address>:<port>]",
"parameter_defaults: OctaviaLogOffload: true OctaviaLogOffloadProtocol: tcp OctaviaAdminLogTargets: 192.0.2.1:10514, 2001:db8:1::10:10514",
"parameter_defaults: OctaviaLogOffload: true OctaviaConnectionLogging: false",
"openstack overcloud deploy --templates -e [your-environment-files] -e /usr/share/openstack-tripleo-heat-templates/environments/services/octavia.yaml -e /home/stack/templates/my-octavia-environment.yaml",
"source ~/stackrc",
"grep -rl OctaviaConnectionLogging /home/stack/templates/",
"vi /home/stack/templates/my-octavia-environment.yaml",
"parameter_defaults: OctaviaLogOffload: true OctaviaConnectionLogging: true",
"OctaviaAdminLogTargets: <ip-address>:<port>[, <ip-address>:<port>] OctaviaTenantLogTargets: <ip-address>:<port>[, <ip-address>:<port>]",
"openstack overcloud deploy --templates -e <your_environment_files> -e /usr/share/openstack-tripleo-heat-templates/environments/services/octavia.yaml -e /home/stack/templates/my-octavia-environment.yaml",
"source ~/stackrc",
"grep -rl OctaviaLogOffload /home/stack/templates/",
"parameter_defaults: OctaviaLogOffload: true OctaviaConnectionLogging: false",
"openstack overcloud deploy --templates -e [your-environment-files] -e /usr/share/openstack-tripleo-heat-templates/environments/services/octavia.yaml -e /home/stack/templates/my-octavia-environment.yaml",
"source ~/stackrc",
"vi /home/stack/templates/my-octavia-environment.yaml",
"parameter_defaults: OctaviaDisableLocalLogStorage: true",
"openstack overcloud deploy --templates -e <your_environment_files> -e /usr/share/openstack-tripleo-heat-templates/environments/services/octavia.yaml -e /home/stack/templates/my-octavia-environment.yaml",
"Jun 12 00:44:13 amphora-3e0239c3-5496-4215-b76c-6abbe18de573 haproxy[1644]: 5408b89aa45b48c69a53dca1aaec58db fd8f23df-960b-4b12-ba62-2b1dff661ee7 261ecfc2-9e8e-4bba-9ec2-3c903459a895 172.24.4.1 41152 12/Jun/2019:00:44:13.030 \"GET / HTTP/1.1\" 200 76 73 - \"\" e37e0e04-68a3-435b-876c-cffe4f2138a4 6f2720b3-27dc-4496-9039-1aafe2fee105 4 --",
"Jun 12 00:44:13 amphora-3e0239c3-5496-4215-b76c-6abbe18de573 haproxy[1644]:\"",
"`\"{{ '{{' }} project_id {{ '}}' }} {{ '{{' }} lb_id {{ '}}' }} %f %ci %cp %t %{+Q}r %ST %B %U %[ssl_c_verify] %{+Q}[ssl_c_s_dn] %b %s %Tt %tsc\"`"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuring_load_balancing_as_a_service/manage-lb-service-instance-logs_rhosp-lbaas |
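A minimal verification sketch for the offloading behaviour described in this chapter, assuming the default endpoints are in use (no OctaviaAdminLogTargets or OctaviaTenantLogTargets override) so that logs land on a Controller node under /var/log/containers/octavia-amphorae/; the load balancer ID is taken from the sample entry in Section 5.6 and is a placeholder for your own.
# On a Controller node, check administrative entries and watch tenant flow entries arriving from the amphorae
sudo tail -n 20 /var/log/containers/octavia-amphorae/octavia-amphora.log
sudo tail -f /var/log/containers/octavia-amphorae/octavia-tenant-traffic.log
# Tenant flow entries include the lb_id field, so a single load balancer can be isolated with grep
sudo grep fd8f23df-960b-4b12-ba62-2b1dff661ee7 /var/log/containers/octavia-amphorae/octavia-tenant-traffic.log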
Chapter 3. Support | Chapter 3. Support Only the configuration options described in this documentation are supported for logging. Do not use any other configuration options, as they are unsupported. Configuration paradigms might change across OpenShift Container Platform releases, and such cases can only be handled gracefully if all configuration possibilities are controlled. If you use configurations other than those described in this documentation, your changes will be overwritten, because Operators are designed to reconcile any differences. Note If you must perform configurations not described in the OpenShift Container Platform documentation, you must set your Red Hat OpenShift Logging Operator to Unmanaged . An unmanaged logging instance is not supported and does not receive updates until you return its status to Managed . Note Logging is provided as an installable component, with a distinct release cycle from the core OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility. Logging for Red Hat OpenShift is an opinionated collector and normalizer of application, infrastructure, and audit logs. It is intended to be used for forwarding logs to various supported systems. Logging is not: A high scale log collection system Security Information and Event Monitoring (SIEM) compliant Historical or long term log retention or storage A guaranteed log sink Secure storage - audit logs are not stored by default 3.1. Supported API custom resource definitions LokiStack development is ongoing. Not all APIs are currently supported. Table 3.1. Loki API support states CustomResourceDefinition (CRD) ApiVersion Support state LokiStack lokistack.loki.grafana.com/v1 Supported in 5.5 RulerConfig rulerconfig.loki.grafana/v1 Supported in 5.7 AlertingRule alertingrule.loki.grafana/v1 Supported in 5.7 RecordingRule recordingrule.loki.grafana/v1 Supported in 5.7 3.2. Unsupported configurations You must set the Red Hat OpenShift Logging Operator to the Unmanaged state to modify the following components: The Elasticsearch custom resource (CR) The Kibana deployment The fluent.conf file The Fluentd daemon set You must set the OpenShift Elasticsearch Operator to the Unmanaged state to modify the Elasticsearch deployment files. Explicitly unsupported cases include: Configuring default log rotation . You cannot modify the default log rotation configuration. Configuring the collected log location . You cannot change the location of the log collector output file, which by default is /var/log/fluentd/fluentd.log . Throttling log collection . You cannot throttle down the rate at which the logs are read in by the log collector. Configuring the logging collector using environment variables . You cannot use environment variables to modify the log collector. Configuring how the log collector normalizes logs . You cannot modify default log normalization. 3.3. Support policy for unmanaged Operators The management state of an Operator determines whether an Operator is actively managing the resources for its related component in the cluster as designed. If an Operator is set to an unmanaged state, it does not respond to changes in configuration nor does it receive updates. While this can be helpful in non-production clusters or during debugging, Operators in an unmanaged state are unsupported and the cluster administrator assumes full control of the individual component configurations and upgrades. 
An Operator can be set to an unmanaged state using the following methods: Individual Operator configuration Individual Operators have a managementState parameter in their configuration. This can be accessed in different ways, depending on the Operator. For example, the Red Hat OpenShift Logging Operator accomplishes this by modifying a custom resource (CR) that it manages, while the Cluster Samples Operator uses a cluster-wide configuration resource. Changing the managementState parameter to Unmanaged means that the Operator is not actively managing its resources and will take no action related to the related component. Some Operators might not support this management state as it might damage the cluster and require manual recovery. Warning Changing individual Operators to the Unmanaged state renders that particular component and functionality unsupported. Reported issues must be reproduced in Managed state for support to proceed. Cluster Version Operator (CVO) overrides The spec.overrides parameter can be added to the CVO's configuration to allow administrators to provide a list of overrides to the CVO's behavior for a component. Setting the spec.overrides[].unmanaged parameter to true for a component blocks cluster upgrades and alerts the administrator after a CVO override has been set: Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing. Warning Setting a CVO override puts the entire cluster in an unsupported state. Reported issues must be reproduced after removing any overrides for support to proceed. 3.4. Support exception for the Logging UI Plugin Until the approaching General Availability (GA) release of the Cluster Observability Operator (COO), which is currently in Technology Preview (TP), Red Hat provides support to customers who are using Logging 6.0 or later with the COO for its Logging UI Plugin on OpenShift Container Platform 4.14 or later. This support exception is temporary as the COO includes several independent features, some of which are still TP features, but the Logging UI Plugin is ready for GA. 3.5. Collecting logging data for Red Hat Support When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support. You can use the must-gather tool to collect diagnostic information for project-level resources, cluster-level resources, and each of the logging components. For prompt support, supply diagnostic information for both OpenShift Container Platform and logging. Note Do not use the hack/logging-dump.sh script. The script is no longer supported and does not collect data. 3.5.1. About the must-gather tool The oc adm must-gather CLI command collects the information from your cluster that is most likely needed for debugging issues. For your logging, must-gather collects the following information: Project-level resources, including pods, configuration maps, service accounts, roles, role bindings, and events at the project level Cluster-level resources, including nodes, roles, and role bindings at the cluster level OpenShift Logging resources in the openshift-logging and openshift-operators-redhat namespaces, including health status for the log collector, the log store, and the log visualizer When you run oc adm must-gather , a new pod is created on the cluster. The data is collected on that pod and saved in a new directory that starts with must-gather.local . This directory is created in the current working directory. 3.5.2. 
Collecting logging data You can use the oc adm must-gather CLI command to collect information about logging. Procedure To collect logging information with must-gather : Navigate to the directory where you want to store the must-gather information. Run the oc adm must-gather command against the logging image: USD oc adm must-gather --image=USD(oc -n openshift-logging get deployment.apps/cluster-logging-operator -o jsonpath='{.spec.template.spec.containers[?(@.name == "cluster-logging-operator")].image}') The must-gather tool creates a new directory that starts with must-gather.local within the current directory. For example: must-gather.local.4157245944708210408 . Create a compressed file from the must-gather directory that was just created. For example, on a computer that uses a Linux operating system, run the following command: USD tar -cvaf must-gather.tar.gz must-gather.local.4157245944708210408 Attach the compressed file to your support case on the Red Hat Customer Portal . | [
"Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing.",
"oc adm must-gather --image=USD(oc -n openshift-logging get deployment.apps/cluster-logging-operator -o jsonpath='{.spec.template.spec.containers[?(@.name == \"cluster-logging-operator\")].image}')",
"tar -cvaf must-gather.tar.gz must-gather.local.4157245944708210408"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/logging/support |
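For the individual Operator configuration method described above, a minimal sketch of toggling the management state, assuming the ClusterLogging custom resource follows the usual convention of being named instance in the openshift-logging namespace.
# Place the logging deployment into the unsupported Unmanaged state
oc patch clusterlogging instance -n openshift-logging --type merge -p '{"spec":{"managementState":"Unmanaged"}}'
# Return it to the supported Managed state once the change has been reverted
oc patch clusterlogging instance -n openshift-logging --type merge -p '{"spec":{"managementState":"Managed"}}'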
Chapter 15. Red Hat build of Kogito microservice deployment troubleshooting | Chapter 15. Red Hat build of Kogito microservice deployment troubleshooting Use the information in this section to troubleshoot issues that you might encounter when using the operator to deploy Red Hat build of Kogito microservices. The following information is updated as new issues and workarounds are discovered. No builds are running If you do not see any builds running nor any resources created in the relevant namespace, enter the following commands to retrieve running pods and to view the operator log for the pod: View RHPAM Kogito Operator log for a specified pod Verify KogitoRuntime status If you create, for example, KogitoRuntime application with a non-existing image using the following YAML definition: Example YAML definition for a KogitoRuntime application apiVersion: rhpam.kiegroup.org/v1 # Red Hat build of Kogito API for this microservice kind: KogitoRuntime # Application type metadata: name: example # Application name spec: image: 'not-existing-image:latest' replicas: 1 You can verify the status of the KogitoRuntime application using the oc describe KogitoRuntime example command in the bash console. When you run the oc describe KogitoRuntime example command in the bash console, you receive the following output: Example KogitoRuntime status At the end of the output, you can see the KogitoRuntime status with a relevant message. | [
"// Retrieves running pods oc get pods NAME READY STATUS RESTARTS AGE kogito-operator-6d7b6d4466-9ng8t 1/1 Running 0 26m // Opens RHPAM Kogito Operator log for the pod oc logs -f kogito-operator-6d7b6d4466-9ng8t",
"apiVersion: rhpam.kiegroup.org/v1 # Red Hat build of Kogito API for this microservice kind: KogitoRuntime # Application type metadata: name: example # Application name spec: image: 'not-existing-image:latest' replicas: 1",
"[user@localhost ~]USD oc describe KogitoRuntime example Name: example Namespace: username-test Labels: <none> Annotations: <none> API Version: rhpam.kiegroup.org/v1 Kind: KogitoRuntime Metadata: Creation Timestamp: 2021-05-20T07:19:41Z Generation: 1 Managed Fields: API Version: rhpam.kiegroup.org/v1 Fields Type: FieldsV1 fieldsV1: f:spec: .: f:image: f:replicas: Manager: Mozilla Operation: Update Time: 2021-05-20T07:19:41Z API Version: rhpam.kiegroup.org/v1 Fields Type: FieldsV1 fieldsV1: f:spec: f:monitoring: f:probes: .: f:livenessProbe: f:readinessProbe: f:resources: f:runtime: f:status: .: f:cloudEvents: f:conditions: Manager: main Operation: Update Time: 2021-05-20T07:19:45Z Resource Version: 272185 Self Link: /apis/rhpam.kiegroup.org/v1/namespaces/ksuta-test/kogitoruntimes/example UID: edbe0bf1-554e-4523-9421-d074070df982 Spec: Image: not-existing-image:latest Replicas: 1 Status: Cloud Events: Conditions: Last Transition Time: 2021-05-20T07:19:44Z Message: Reason: NoPodAvailable Status: False Type: Deployed Last Transition Time: 2021-05-20T07:19:44Z Message: Reason: RequestedReplicasNotEqualToAvailableReplicas Status: True Type: Provisioning Last Transition Time: 2021-05-20T07:19:45Z Message: you may not have access to the container image \"quay.io/kiegroup/not-existing-image:latest\" Reason: ImageStreamNotReadyReason Status: True Type: Failed"
]
| https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/getting_started_with_red_hat_build_of_kogito_in_red_hat_decision_manager/ref-kogito-microservice-deploy-troubleshooting_deploying-kogito-microservices-on-openshift |
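Rather than reading the full describe output, the status conditions can be extracted directly. A minimal sketch, assuming the example KogitoRuntime and the username-test namespace shown in this chapter, and that jq is installed on the workstation.
# List KogitoRuntime resources in the namespace
oc get KogitoRuntime -n username-test
# Print only the status conditions; the Failed condition carries the image error message
oc get KogitoRuntime example -n username-test -o json | jq '.status.conditions'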
Installing on IBM Cloud (Classic) | Installing on IBM Cloud (Classic) OpenShift Container Platform 4.17 Installing OpenShift Container Platform IBM Cloud Bare Metal (Classic) Red Hat OpenShift Documentation Team | [
"<cluster_name>.<domain>",
"test-cluster.example.com",
"ipmi://<IP>:<port>?privilegelevel=OPERATOR",
"ibmcloud sl hardware create --hostname <SERVERNAME> --domain <DOMAIN> --size <SIZE> --os <OS-TYPE> --datacenter <DC-NAME> --port-speed <SPEED> --billing <BILLING>",
"useradd kni",
"passwd kni",
"echo \"kni ALL=(root) NOPASSWD:ALL\" | tee -a /etc/sudoers.d/kni",
"chmod 0440 /etc/sudoers.d/kni",
"su - kni -c \"ssh-keygen -f /home/kni/.ssh/id_rsa -N ''\"",
"su - kni",
"sudo subscription-manager register --username=<user> --password=<pass> --auto-attach",
"sudo subscription-manager repos --enable=rhel-8-for-x86_64-appstream-rpms --enable=rhel-8-for-x86_64-baseos-rpms",
"sudo dnf install -y libvirt qemu-kvm mkisofs python3-devel jq ipmitool",
"sudo usermod --append --groups libvirt kni",
"sudo systemctl start firewalld",
"sudo systemctl enable firewalld",
"sudo firewall-cmd --zone=public --add-service=http --permanent",
"sudo firewall-cmd --reload",
"sudo systemctl enable libvirtd --now",
"PRVN_HOST_ID=<ID>",
"ibmcloud sl hardware list",
"PUBLICSUBNETID=<ID>",
"ibmcloud sl subnet list",
"PRIVSUBNETID=<ID>",
"ibmcloud sl subnet list",
"PRVN_PUB_IP=USD(ibmcloud sl hardware detail USDPRVN_HOST_ID --output JSON | jq .primaryIpAddress -r)",
"PUBLICCIDR=USD(ibmcloud sl subnet detail USDPUBLICSUBNETID --output JSON | jq .cidr)",
"PUB_IP_CIDR=USDPRVN_PUB_IP/USDPUBLICCIDR",
"PUB_GATEWAY=USD(ibmcloud sl subnet detail USDPUBLICSUBNETID --output JSON | jq .gateway -r)",
"PRVN_PRIV_IP=USD(ibmcloud sl hardware detail USDPRVN_HOST_ID --output JSON | jq .primaryBackendIpAddress -r)",
"PRIVCIDR=USD(ibmcloud sl subnet detail USDPRIVSUBNETID --output JSON | jq .cidr)",
"PRIV_IP_CIDR=USDPRVN_PRIV_IP/USDPRIVCIDR",
"PRIV_GATEWAY=USD(ibmcloud sl subnet detail USDPRIVSUBNETID --output JSON | jq .gateway -r)",
"sudo nohup bash -c \" nmcli --get-values UUID con show | xargs -n 1 nmcli con delete nmcli connection add ifname provisioning type bridge con-name provisioning nmcli con add type bridge-slave ifname eth1 master provisioning nmcli connection add ifname baremetal type bridge con-name baremetal nmcli con add type bridge-slave ifname eth2 master baremetal nmcli connection modify baremetal ipv4.addresses USDPUB_IP_CIDR ipv4.method manual ipv4.gateway USDPUB_GATEWAY nmcli connection modify provisioning ipv4.addresses 172.22.0.1/24,USDPRIV_IP_CIDR ipv4.method manual nmcli connection modify provisioning +ipv4.routes \\\"10.0.0.0/8 USDPRIV_GATEWAY\\\" nmcli con down baremetal nmcli con up baremetal nmcli con down provisioning nmcli con up provisioning init 6 \"",
"ssh kni@provisioner.<cluster-name>.<domain>",
"sudo nmcli con show",
"NAME UUID TYPE DEVICE baremetal 4d5133a5-8351-4bb9-bfd4-3af264801530 bridge baremetal provisioning 43942805-017f-4d7d-a2c2-7cb3324482ed bridge provisioning virbr0 d9bca40f-eee1-410b-8879-a2d4bb0465e7 bridge virbr0 bridge-slave-eth1 76a8ed50-c7e5-4999-b4f6-6d9014dd0812 ethernet eth1 bridge-slave-eth2 f31c3353-54b7-48de-893a-02d2b34c4736 ethernet eth2",
"vim pull-secret.txt",
"sudo dnf install dnsmasq",
"sudo vi /etc/dnsmasq.conf",
"interface=baremetal except-interface=lo bind-dynamic log-dhcp dhcp-range=<ip_addr>,<ip_addr>,<pub_cidr> 1 dhcp-option=baremetal,121,0.0.0.0/0,<pub_gateway>,<prvn_priv_ip>,<prvn_pub_ip> 2 dhcp-hostsfile=/var/lib/dnsmasq/dnsmasq.hostsfile",
"ibmcloud sl subnet detail <publicsubnetid> --output JSON | jq .cidr",
"ibmcloud sl subnet detail <publicsubnetid> --output JSON | jq .gateway -r",
"ibmcloud sl hardware detail <id> --output JSON | jq .primaryBackendIpAddress -r",
"ibmcloud sl hardware detail <id> --output JSON | jq .primaryIpAddress -r",
"ibmcloud sl hardware list",
"ibmcloud sl hardware detail <id> --output JSON | jq '.networkComponents[] | \"\\(.primaryIpAddress) \\(.macAddress)\"' | grep -v null",
"\"10.196.130.144 00:e0:ed:6a:ca:b4\" \"141.125.65.215 00:e0:ed:6a:ca:b5\"",
"sudo vim /var/lib/dnsmasq/dnsmasq.hostsfile",
"00:e0:ed:6a:ca:b5,141.125.65.215,master-0 <mac>,<ip>,master-1 <mac>,<ip>,master-2 <mac>,<ip>,worker-0 <mac>,<ip>,worker-1",
"sudo systemctl start dnsmasq",
"sudo systemctl enable dnsmasq",
"sudo systemctl status dnsmasq",
"● dnsmasq.service - DNS caching server. Loaded: loaded (/usr/lib/systemd/system/dnsmasq.service; enabled; vendor preset: disabled) Active: active (running) since Tue 2021-10-05 05:04:14 CDT; 49s ago Main PID: 3101 (dnsmasq) Tasks: 1 (limit: 204038) Memory: 732.0K CGroup: /system.slice/dnsmasq.service └─3101 /usr/sbin/dnsmasq -k",
"sudo firewall-cmd --add-port 53/udp --permanent",
"sudo firewall-cmd --add-port 67/udp --permanent",
"sudo firewall-cmd --change-zone=provisioning --zone=external --permanent",
"sudo firewall-cmd --reload",
"export VERSION=stable-4.17",
"export RELEASE_ARCH=<architecture>",
"export RELEASE_IMAGE=USD(curl -s https://mirror.openshift.com/pub/openshift-v4/USDRELEASE_ARCH/clients/ocp/USDVERSION/release.txt | grep 'Pull From: quay.io' | awk -F ' ' '{print USD3}')",
"export cmd=openshift-baremetal-install",
"export pullsecret_file=~/pull-secret.txt",
"export extract_dir=USD(pwd)",
"curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDVERSION/openshift-client-linux.tar.gz | tar zxvf - oc",
"sudo cp oc /usr/local/bin",
"oc adm release extract --registry-config \"USD{pullsecret_file}\" --command=USDcmd --to \"USD{extract_dir}\" USD{RELEASE_IMAGE}",
"sudo cp openshift-baremetal-install /usr/local/bin",
"apiVersion: v1 baseDomain: <domain> metadata: name: <cluster_name> networking: machineNetwork: - cidr: <public-cidr> networkType: OVNKubernetes compute: - name: worker replicas: 2 controlPlane: name: master replicas: 3 platform: baremetal: {} platform: baremetal: apiVIP: <api_ip> ingressVIP: <wildcard_ip> provisioningNetworkInterface: <NIC1> provisioningNetworkCIDR: <CIDR> hosts: - name: openshift-master-0 role: master bmc: address: ipmi://10.196.130.145?privilegelevel=OPERATOR 1 username: root password: <password> bootMACAddress: 00:e0:ed:6a:ca:b4 2 rootDeviceHints: deviceName: \"/dev/sda\" - name: openshift-worker-0 role: worker bmc: address: ipmi://<out-of-band-ip>?privilegelevel=OPERATOR 3 username: <user> password: <password> bootMACAddress: <NIC1_mac_address> 4 rootDeviceHints: deviceName: \"/dev/sda\" pullSecret: '<pull_secret>' sshKey: '<ssh_pub_key>'",
"ibmcloud sl hardware detail <id> --output JSON | jq '\"(.networkManagementIpAddress) (.remoteManagementAccounts[0].password)\"'",
"mkdir ~/clusterconfigs",
"cp install-config.yaml ~/clusterconfigs",
"ipmitool -I lanplus -U <user> -P <password> -H <management_server_ip> power off",
"for i in USD(sudo virsh list | tail -n +3 | grep bootstrap | awk {'print USD2'}); do sudo virsh destroy USDi; sudo virsh undefine USDi; sudo virsh vol-delete USDi --pool USDi; sudo virsh vol-delete USDi.ign --pool USDi; sudo virsh pool-destroy USDi; sudo virsh pool-undefine USDi; done",
"metadata: name:",
"networking: machineNetwork: - cidr:",
"compute: - name: worker",
"compute: replicas: 2",
"controlPlane: name: master",
"controlPlane: replicas: 3",
"- name: master-0 role: master bmc: address: ipmi://10.10.0.3:6203 username: admin password: redhat bootMACAddress: de:ad:be:ef:00:40 rootDeviceHints: deviceName: \"/dev/sda\"",
"./openshift-baremetal-install --dir ~/clusterconfigs create manifests",
"INFO Consuming Install Config from target directory WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings WARNING Discarding the OpenShift Manifest that was provided in the target directory because its dependencies are dirty and it needs to be regenerated",
"./openshift-baremetal-install --dir ~/clusterconfigs --log-level debug create cluster",
"tail -f /path/to/install-dir/.openshift_install.log"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html-single/installing_on_ibm_cloud_classic/index |
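Once create cluster completes, a minimal verification sketch, assuming the ~/clusterconfigs assets directory used throughout this procedure; the installer writes the cluster kubeconfig under auth/ in that directory.
# Point oc at the newly installed cluster and confirm nodes, version, and Operators are healthy
export KUBECONFIG=~/clusterconfigs/auth/kubeconfig
oc get nodes
oc get clusterversion
oc get clusteroperators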
SystemTap Tapset Reference | SystemTap Tapset Reference Red Hat Enterprise Linux 7 Most common tapset definitions for SystemTap scripts Red Hat Enterprise Linux Documentation Vladimir Slavik Red Hat Customer Content Services [email protected] Robert Kratky Red Hat Customer Content Services William Cohen Red Hat Software Engineering Don Domingo Red Hat Customer Content Services Jacquelynn East Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/index |
Chapter 3. Creating system images with Image Builder command-line interface | Chapter 3. Creating system images with Image Builder command-line interface Image Builder is a tool for creating custom system images. To control Image Builder and create your custom system images, use the command-line interface which is currently the preferred method to use Image Builder. 3.1. Image Builder command-line interface Image Builder command-line interface is currently the preferred method to use Image Builder. It offers more functionality than the Chapter 4, Creating system images with Image Builder web console interface . To use this interface, run the composer-cli tool with suitable options and sub-commands. The workflow for the command-line interface can be summarized as follows: 1. Export (save) the blueprint definition to a plain text file 2. Edit this file in a text editor 3. Import (push) the blueprint text file back into Image Builder 4. Run a compose to build an image from the blueprint 5. Export the image file to download it Apart from the basic sub-commands to achieve this procedure, the composer-cli tool offers many sub-commands to examine the state of configured blueprints and composes. To run the composer-cli command as a non-root user, the user must be in the weldr or root groups . | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/image_builder_guide/chap-Documentation-Image_Builder-Test_Chapter3 |
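The five workflow steps map directly onto composer-cli sub-commands. A minimal sketch, assuming a blueprint named example-http already exists and that a qcow2 image is wanted; the blueprint name, image type, and <UUID> are placeholders.
# 1. Export (save) the blueprint to example-http.toml in the current directory
composer-cli blueprints save example-http
# 2./3. Edit example-http.toml in a text editor, then import (push) it back into Image Builder
composer-cli blueprints push example-http.toml
# 4. Start a compose of a qcow2 image from the blueprint and check its state
composer-cli compose start example-http qcow2
composer-cli compose status
# 5. Download the finished image, using the compose UUID reported by the status output
composer-cli compose image <UUID>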
Chapter 8. Managing users and roles | Chapter 8. Managing users and roles A User defines a set of details for individuals who use the system. Users can be associated with organizations and environments, so that when they create new entities, the default settings are automatically used. Users can also have one or more roles attached, which grants them rights to view and manage organizations and environments. See Section 8.1, "Managing users" for more information on working with users. You can manage permissions of several users at once by organizing them into user groups. User groups themselves can be further grouped to create a hierarchy of permissions. For more information on creating user groups, see Section 8.4, "Creating and managing user groups" . Roles define a set of permissions and access levels. Each role contains one on more permission filters that specify the actions allowed for the role. Actions are grouped according to the Resource type . Once a role has been created, users and user groups can be associated with that role. This way, you can assign the same set of permissions to large groups of users. Satellite provides a set of predefined roles and also enables creating custom roles and permission filters as described in Section 8.5, "Creating and managing roles" . 8.1. Managing users As an administrator, you can create, modify and remove Satellite users. You can also configure access permissions for a user or a group of users by assigning them different roles . 8.1.1. Creating a user Use this procedure to create a user. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Administer > Users . Click Create User . Enter the account details for the new user. Click Submit to create the user. The user account details that you can specify include the following: On the User tab, select an authentication source from the Authorized by list: INTERNAL : to manage the user inside Satellite Server. EXTERNAL : to manage the user with external authentication. For more information, see Configuring authentication for Red Hat Satellite users . On the Organizations tab, select an organization for the user. Specify the default organization Satellite selects for the user after login from the Default on login list. Important If a user is not assigned to an organization, their access is limited. CLI procedure Create a user: The --auth-source-id 1 setting means that the user is authenticated internally, you can specify an external authentication source as an alternative. Add the --admin option to grant administrator privileges to the user. Specifying organization IDs is not required. You can modify the user details later by using the hammer user update command. Additional resources For more information about creating user accounts by using Hammer, enter hammer user create --help . 8.1.2. Assigning roles to a user Use this procedure to assign roles to a user. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Administer > Users . Click the username of the user to be assigned one or more roles. Note If a user account is not listed, check that you are currently viewing the correct organization. To list all the users in Satellite, click Default Organization and then Any Organization . Click the Locations tab, and select a location if none is assigned. Click the Organizations tab, and check that an organization is assigned. 
Click the Roles tab to display the list of available roles. Select the roles to assign from the Roles list. To grant all the available permissions, select the Administrator checkbox. Click Submit . To view the roles assigned to a user, click the Roles tab; the assigned roles are listed under Selected items . To remove an assigned role, click the role name in Selected items . CLI procedure To assign roles to a user, enter the following command: 8.1.3. Impersonating a different user account Administrators can impersonate other authenticated users for testing and troubleshooting purposes by temporarily logging on to the Satellite web UI as a different user. When impersonating another user, the administrator has permissions to access exactly what the impersonated user can access in the system, including the same menus. Audits are created to record the actions that the administrator performs while impersonating another user. However, all actions that an administrator performs while impersonating another user are recorded as having been performed by the impersonated user. Prerequisites Ensure that you are logged on to the Satellite web UI as a user with administrator privileges for Satellite. Procedure In the Satellite web UI, navigate to Administer > Users . To the right of the user that you want to impersonate, from the list in the Actions column, select Impersonate . When you want to stop the impersonation session, in the upper right of the main menu, click the impersonation icon. 8.1.4. Creating an API-only user You can create users that can interact only with the Satellite API. Prerequisites You have created a user and assigned roles to them. Note that this user must be authorized internally. For more information, see the following sections: Section 8.1.1, "Creating a user" Section 8.1.2, "Assigning roles to a user" Procedure Log in to your Satellite as admin. Navigate to Administer > Users and select a user. On the User tab, set a password. Do not save or communicate this password with others. You can create pseudo-random strings on your console: Create a Personal Access Token for the user. For more information, see Section 8.3.1, "Creating a Personal Access Token" . 8.2. Managing SSH keys Adding SSH keys to a user allows deployment of SSH keys during provisioning. For information on deploying SSH keys during provisioning, see Deploying SSH Keys during Provisioning in Provisioning hosts . For information on SSH keys and SSH key creation, see Using SSH-based Authentication in Red Hat Enterprise Linux 8 Configuring basic system settings . 8.2.1. Managing SSH keys for a user Use this procedure to add or remove SSH keys for a user. To use the CLI instead of the Satellite web UI, see the CLI procedure . Prerequisites Ensure that you are logged in to the Satellite web UI as an Admin user of Red Hat Satellite or a user with the create_ssh_key permission enabled for adding SSH key and destroy_ssh_key permission for removing a key. Procedure In the Satellite web UI, navigate to Administer > Users . From the Username column, click on the username of the required user. Click on the SSH Keys tab. To Add SSH key Prepare the content of the public SSH key in a clipboard. Click Add SSH Key . In the Key field, paste the public SSH key content from the clipboard. In the Name field, enter a name for the SSH key. Click Submit . To Remove SSH key Click Delete on the row of the SSH key to be deleted. Click OK in the confirmation prompt. 
CLI procedure To add an SSH key to a user, you must specify either the path to the public SSH key file, or the content of the public SSH key copied to the clipboard. If you have the public SSH key file, enter the following command: If you have the content of the public SSH key, enter the following command: To delete an SSH key from a user, enter the following command: To view an SSH key attached to a user, enter the following command: To list SSH keys attached to a user, enter the following command: 8.3. Managing Personal Access Tokens Personal Access Tokens allow you to authenticate API requests without using your password. You can set an expiration date for your Personal Access Token and you can revoke it if you decide it should expire before the expiration date. 8.3.1. Creating a Personal Access Token Use this procedure to create a Personal Access Token. Procedure In the Satellite web UI, navigate to Administer > Users . Select a user for which you want to create a Personal Access Token. On the Personal Access Tokens tab, click Add Personal Access Token . Enter a Name for you Personal Access Token. Optional: Select the Expires date to set an expiration date. If you do not set an expiration date, your Personal Access Token will never expire unless revoked. Click Submit. You now have the Personal Access Token available to you on the Personal Access Tokens tab. Important Ensure to store your Personal Access Token as you will not be able to access it again after you leave the page or create a new Personal Access Token. You can click Copy to clipboard to copy your Personal Access Token. Verification Make an API request to your Satellite Server and authenticate with your Personal Access Token: You should receive a response with status 200 , for example: If you go back to Personal Access Tokens tab, you can see the updated Last Used time to your Personal Access Token. 8.3.2. Revoking a Personal Access Token Use this procedure to revoke a Personal Access Token before its expiration date. Procedure In the Satellite web UI, navigate to Administer > Users . Select a user for which you want to revoke the Personal Access Token. On the Personal Access Tokens tab, locate the Personal Access Token you want to revoke. Click Revoke in the Actions column to the Personal Access Token you want to revoke. Verification Make an API request to your Satellite Server and try to authenticate with the revoked Personal Access Token: You receive the following error message: 8.4. Creating and managing user groups 8.4.1. User groups With Satellite, you can assign permissions to groups of users. You can also create user groups as collections of other user groups. If you use an external authentication source, you can map Satellite user groups to external user groups as described in Configuring External User Groups in Installing Satellite Server in a connected network environment . User groups are defined in an organizational context, meaning that you must select an organization before you can access user groups. 8.4.2. Creating a user group Use this procedure to create a user group. Procedure In the Satellite web UI, navigate to Administer > User Groups . Click Create User group . On the User Group tab, specify the name of the new user group and select group members: Select the previously created user groups from the User Groups list. Select users from the Users list. On the Roles tab, select the roles you want to assign to the user group. Alternatively, select the Admin checkbox to assign all available permissions. 
Click Submit . CLI procedure To create a user group, enter the following command: 8.4.3. Removing a user group Use the following procedure to remove a user group from Satellite. Procedure In the Satellite web UI, navigate to Administer > User Groups . Click Delete to the right of the user group you want to delete. Click Confirm to delete the user group. 8.5. Creating and managing roles Satellite provides a set of predefined roles with permissions sufficient for standard tasks, as listed in Section 8.6, "Predefined roles available in Satellite" . It is also possible to configure custom roles, and assign one or more permission filters to them. Permission filters define the actions allowed for a certain resource type. Certain Satellite plugins create roles automatically. 8.5.1. Creating a role Use this procedure to create a role. Procedure In the Satellite web UI, navigate to Administer > Roles . Click Create Role . Provide a Name for the role. Click Submit to save your new role. CLI procedure To create a role, enter the following command: To serve its purpose, a role must contain permissions. After creating a role, proceed to Section 8.5.3, "Adding permissions to a role" . 8.5.2. Cloning a role Use the Satellite web UI to clone a role. Procedure In the Satellite web UI, navigate to Administer > Roles and select Clone from the drop-down menu to the right of the required role. Provide a Name for the role. Click Submit to clone the role. Click the name of the cloned role and navigate to Filters . Edit the permissions as required. Click Submit to save your new role. 8.5.3. Adding permissions to a role Use this procedure to add permissions to a role. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Administer > Roles . Select Add Filter from the drop-down list to the right of the required role. Select the Resource type from the drop-down list. The (Miscellaneous) group gathers permissions that are not associated with any resource group. Click the permissions you want to select from the Permission list. Depending on the Resource type selected, you can select or deselect the Unlimited and Override checkbox. The Unlimited checkbox is selected by default, which means that the permission is applied on all resources of the selected type. When you disable the Unlimited checkbox, the Search field activates. In this field you can specify further filtering with use of the Satellite search syntax. For more information, see Section 8.7, "Granular permission filtering" . When you enable the Override checkbox, you can add additional locations and organizations to allow the role to access the resource type in the additional locations and organizations; you can also remove an already associated location and organization from the resource type to restrict access. Click . Click Submit to save changes. CLI procedure List all available permissions: Add permissions to a role: For more information about roles and permissions parameters, enter the hammer role --help and hammer filter --help commands. 8.5.4. Viewing permissions of a role Use the Satellite web UI to view the permissions of a role. Procedure In the Satellite web UI, navigate to Administer > Roles . Click Filters to the right of the required role to get to the Filters page. The Filters page contains a table of permissions assigned to a role grouped by the resource type. It is also possible to generate a complete table of permissions and actions that you can use on your Satellite system. 
For more information, see Section 8.5.5, "Creating a complete permission table" . 8.5.5. Creating a complete permission table Use the Satellite CLI to create a permission table. Procedure Start the Satellite console with the following command: Insert the following code into the console: The above syntax creates a table of permissions and saves it to the /tmp/table.html file. Press Ctrl + D to exit the Satellite console. Insert the following text at the first line of /tmp/table.html : Append the following text at the end of /tmp/table.html : Open /tmp/table.html in a web browser to view the table. 8.5.6. Removing a role Use the following procedure to remove a role from Satellite. Procedure In the Satellite web UI, navigate to Administer > Roles . Select Delete from the drop-down list to the right of the role to be deleted. Click Confirm to delete the role. 8.6. Predefined roles available in Satellite The following table provides an overview of permissions that predefined roles in Satellite grant to a user. For a complete set of predefined roles and the permissions they grant, log in to Satellite web UI as the privileged user and navigate to Administer > Roles . For more information, see Section 8.5.4, "Viewing permissions of a role" . Predefined role Permissions the role provides Additional information Auditor View the Audit log. Default role View tasks and jobs invocations. Satellite automatically assigns this role to every user in the system. Manager View and edit global settings. Organization admin All permissions except permissions for managing organizations. An administrator role defined per organization. The role has no visibility into resources in other organizations. By cloning this role and assigning an organization, you can delegate administration of that organization to a user. Site manager View permissions for various items. Permissions to manage hosts in the infrastructure. A restrained version of the Manager role. System admin Edit global settings in Administer > Settings . View, create, edit, and destroy users, user groups, and roles. View, create, edit, destroy, and assign organizations and locations but not view resources within them. Users with this role can create users and assign all roles to them. Give this role only to trusted users. Viewer View the configuration of every element of the Satellite structure, logs, reports, and statistics. 8.7. Granular permission filtering As mentioned in Section 8.5.3, "Adding permissions to a role" , Red Hat Satellite provides the ability to limit the configured user permissions to selected instances of a resource type. These granular filters are queries to the Satellite database and are supported by the majority of resource types. 8.7.1. Creating a granular permission filter Use this procedure to create a granular filter. To use the CLI instead of the Satellite web UI, see the CLI procedure . Satellite does not apply search conditions to create actions. For example, limiting the create_locations action with name = "Default Location" expression in the search field does not prevent the user from assigning a custom name to the newly created location. Procedure Specify a query in the Search field on the Edit Filter page. Deselect the Unlimited checkbox for the field to be active. Queries have the following form: field_name marks the field to be queried. The range of available field names depends on the resource type. For example, the Partition Table resource type offers family , layout , and name as query parameters. 
operator specifies the type of comparison between field_name and value . See Section 8.7.3, "Supported operators for granular search" for an overview of applicable operators. value is the value used for filtering. This can be, for example, the name of an organization. Two types of wildcard characters are supported: underscore (_) provides single character replacement, while percent sign (%) replaces zero or more characters. For most resource types, the Search field provides a drop-down list suggesting the available parameters. This list appears after placing the cursor in the search field. For many resource types, you can combine queries using logical operators such as and , not , and has . CLI procedure To create a granular filter, enter the hammer filter create command with the --search option to limit permission filters, for example: This command adds to the qa-user role a permission to view, create, edit, and destroy content views that applies only to content views whose names start with ccv . 8.7.2. Examples of using granular permission filters As an administrator, you can allow selected users to make changes in a certain part of the environment path. The following filter allows you to work with content while it is in the development stage of the application lifecycle, but the content becomes inaccessible once it is pushed to production. 8.7.2.1. Applying permissions for the host resource type The following query applies any permissions specified for the Host resource type only to hosts in the group named host-editors. The following query returns records where the name matches XXXX , Yyyy , or zzzz example strings: You can also limit permissions to a selected environment. To do so, specify the environment name in the Search field, for example: You can limit user permissions to a certain organization or location with the use of the granular permission filter in the Search field. However, some resource types provide a GUI alternative, an Override checkbox that provides the Locations and Organizations tabs. On these tabs, you can select from the list of available organizations and locations. For more information, see Section 8.7.2.2, "Creating an organization-specific manager role" . 8.7.2.2. Creating an organization-specific manager role Use the Satellite web UI to create an administrative role restricted to a single organization named org-1 . Procedure In the Satellite web UI, navigate to Administer > Roles . Clone the existing Organization admin role. Select Clone from the drop-down list next to the Filters button. You are then prompted to enter a name for the cloned role, for example org-1 admin . Click the desired locations and organizations to associate them with the role. Click Submit to create the role. Click org-1 admin , and click Filters to view all associated filters. The default filters work for most use cases. However, you can optionally click Edit to change the properties for each filter. For some filters, you can enable the Override option if you want the role to be able to access resources in additional locations and organizations. For example, by selecting the Domain resource type, the Override option, and then additional locations and organizations using the Locations and Organizations tabs, you allow this role to access domains in the additional locations and organizations that are not associated with this role. You can also click New filter to associate new filters with this role. 8.7.3. Supported operators for granular search Table 8.1.
Logical operators Operator Description and Combines search criteria. not Negates an expression. has Object must have a specified property. Table 8.2. Symbolic operators Operator Description = Is equal to . An equality comparison that is case-sensitive for text fields. != Is not equal to . An inversion of the = operator. ~ Like . A case-insensitive occurrence search for text fields. !~ Not like . An inversion of the ~ operator. ^ In . An equality comparison that is a case-sensitive search for text fields. This generates a different SQL query to the Is equal to comparison, and is more efficient for multiple value comparison. !^ Not in . An inversion of the ^ operator. >, >= Greater than , greater than or equal to . Supported for numerical fields only. <, <= Less than , less than or equal to . Supported for numerical fields only. | [
"hammer user create --auth-source-id My_Authentication_Source --login My_User_Name --mail My_User_Mail --organization-ids My_Organization_ID_1 , My_Organization_ID_2 --password My_User_Password",
"hammer user add-role --id user_id --role role_name",
"openssl rand -hex 32",
"hammer user ssh-keys add --user-id user_id --name key_name --key-file ~/.ssh/id_rsa.pub",
"hammer user ssh-keys add --user-id user_id --name key_name --key ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNtYAAABBBHHS2KmNyIYa27Qaa7EHp+2l99ucGStx4P77e03ZvE3yVRJEFikpoP3MJtYYfIe8k 1/46MTIZo9CPTX4CYUHeN8= host@user",
"hammer user ssh-keys delete --id key_id --user-id user_id",
"hammer user ssh-keys info --id key_id --user-id user_id",
"hammer user ssh-keys list --user-id user_id",
"curl https:// satellite.example.com /api/status --user My_Username : My_Personal_Access_Token",
"{\"satellite_version\":\"6.16.0\",\"result\":\"ok\",\"status\":200,\"version\":\"3.5.1.10\",\"api_version\":2}",
"curl https:// satellite.example.com /api/status --user My_Username : My_Personal_Access_Token",
"{ \"error\": {\"message\":\"Unable to authenticate user My_Username \"} }",
"hammer user-group create --name My_User_Group_Name --role-ids My_Role_ID_1 , My_Role_ID_2 --user-ids My_User_ID_1 , My_User_ID_2",
"hammer role create --name My_Role_Name",
"hammer filter available-permissions",
"hammer filter create --permission-ids My_Permission_ID_1 , My_Permission_ID_2 --role My_Role_Name",
"foreman-rake console",
"f = File.open('/tmp/table.html', 'w') result = Foreman::AccessControl.permissions {|a,b| a.security_block <=> b.security_block}.collect do |p| actions = p.actions.collect { |a| \"<li>#{a}</li>\" } \"<tr><td>#{p.name}</td><td><ul>#{actions.join('')}</ul></td><td>#{p.resource_type}</td></tr>\" end.join(\"\\n\") f.write(result)",
"<table border=\"1\"><tr><td>Permission name</td><td>Actions</td><td>Resource type</td></tr>",
"</table>",
"field_name operator value",
"hammer filter create --permission-ids 91 --search \"name ~ ccv*\" --role qa-user",
"hostgroup = host-editors",
"name ^ (XXXX, Yyyy, zzzz)",
"Dev"
]
| https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/administering_red_hat_satellite/managing_users_and_roles_admin |
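The role and filter procedures in the preceding Satellite chapter can be chained together on the command line. The following shell sketch is illustrative only: the role name, the permission IDs, and the host group are placeholder values, so look up the real permission IDs with hammer filter available-permissions before running the filter command.

# Create an empty custom role; it grants nothing until filters are attached.
hammer role create --name "host-editors-role"

# Find the IDs of the host-related permissions on this Satellite (IDs vary between installations).
hammer filter available-permissions | grep -i host

# Attach a filter that limits those permissions to hosts in one host group,
# using the granular search syntax from Section 8.7 (hypothetical IDs 101,102).
hammer filter create \
  --role "host-editors-role" \
  --permission-ids 101,102 \
  --search "hostgroup = host-editors"

# Finally, assign the role to a user, mirroring the earlier add-role example.
hammer user add-role --id <user_id> --role "host-editors-role"

Because the filter carries the search string, users who hold this role can act only on hosts in the host-editors group, the same restriction shown in Section 8.7.2.1.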
Chapter 10. Managing custom software repositories | Chapter 10. Managing custom software repositories You can configure a repository in the /etc/yum.conf file or in a .repo file in the /etc/yum.repos.d/ directory. Note It is recommended to define your repositories in a new or existing .repo file in /etc/yum.repos.d/ because all files with the .repo file extension are read by YUM . The /etc/yum.conf file contains the [main] section and can contain one or more repository sections ( [ <repository_ID> ] ) that you can use to set repository-specific options. The values you define in individual repository sections of the /etc/yum.conf file override values set in the [main] section. 10.1. YUM repository options The /etc/yum.conf configuration file contains the repository sections with a repository ID in brackets ( [ <repository_ID> ] ). You can use these sections to define individual YUM repositories. Important Repository IDs must be unique. For a complete list of available repository ID options, see the [ <repository_ID> ] OPTIONS section of the dnf.conf(5) man page on your system. 10.2. Adding a YUM repository You can add a YUM repository to your system by defining it in a .repo file in the /etc/yum.repos.d/ directory. Procedure Add a repository to your system: Note that repositories added by this command are enabled by default. Review and, optionally, update the repository settings that the command has created in the /etc/yum.repos.d/ <repository_URL> .repo file: Warning Obtaining and installing software packages from unverified or untrusted sources other than the Red Hat certificate-based Content Delivery Network ( CDN ) is a potential security risk, and can lead to security, stability, compatibility, and maintainability issues. 10.3. Enabling a YUM repository After you add a YUM repository to your system, enable it so that its packages are available for installation and updates. Procedure Enable a repository: 10.4. Disabling a YUM repository To prevent particular packages from being installed or updated, you can disable a YUM repository that contains these packages. Procedure Disable a repository: | [
"yum-config-manager --add-repo <repository_URL>",
"cat /etc/yum.repos.d/ <repository_URL> .repo",
"yum-config-manager --enable <repository_id>",
"yum-config-manager --disable <repository_id>"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/installing_managing_and_removing_user-space_components/managing-software-repositories_using-appstream |
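As a worked example of the repository chapter above, the following shell sketch adds, defines, enables, and disables a custom repository. The repository ID and baseurl are placeholders for illustration; point them at a source you trust, because the chapter's warning about unverified repositories applies here.

# Add a repository from a URL; yum-config-manager writes a .repo file under /etc/yum.repos.d/.
yum-config-manager --add-repo https://repo.example.com/custom/rhel8/

# Or define the repository by hand in its own .repo file.
cat > /etc/yum.repos.d/custom-tools.repo <<'EOF'
[custom-tools]
name=Custom Tools for RHEL 8
baseurl=https://repo.example.com/custom/rhel8/
enabled=1
gpgcheck=1
EOF

# Disable and re-enable the repository without removing its definition.
yum-config-manager --disable custom-tools
yum-config-manager --enable custom-tools

# Confirm the repository state.
yum repolist --all | grep custom-tools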
4.118. kdeutils | 4.118. kdeutils 4.118.1. RHBA-2011:1206 - kdeutils bug fix update Updated kdeutils packages that fix one bug are now available for Red Hat Enterprise Linux 6. KDE is a graphical desktop environment for the X Window System. The kdeutils packages include several utilities for the KDE desktop environment. Bug Fix BZ# 625116 Prior to this update, the icon for the Sweeper utility did not appear correctly in GNOME Application's Accessories menu. This bug has been fixed in this update so that the icon is now displayed as expected. All users of kdeutils are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/kdeutils |
Chapter 9. Preinstallation validations | Chapter 9. Preinstallation validations 9.1. Definition of preinstallation validations The Assisted Installer aims to make cluster installation as simple, efficient, and error-free as possible. The Assisted Installer performs validation checks on the configuration and the gathered telemetry before starting an installation. The Assisted Installer uses the information provided before installation, such as control plane topology, network configuration and hostnames. It will also use real time telemetry from the hosts you are attempting to install. When a host boots the discovery ISO, an agent will start on the host. The agent will send information about the state of the host to the Assisted Installer. The Assisted Installer uses all of this information to compute real time preinstallation validations. All validations are either blocking or non-blocking to the installation. 9.2. Blocking and non-blocking validations A blocking validation will prevent progress of the installation, meaning that you will need to resolve the issue and pass the blocking validation before you can proceed. A non-blocking validation is a warning and will tell you of things that might cause you a problem. 9.3. Validation types The Assisted Installer performs two types of validation: Host Host validations ensure that the configuration of a given host is valid for installation. Cluster Cluster validations ensure that the configuration of the whole cluster is valid for installation. 9.4. Host validations 9.4.1. Getting host validations by using the REST API Note If you use the web console, many of these validations will not show up by name. To get a list of validations consistent with the labels, use the following procedure. Prerequisites You have installed the jq utility. You have created an Infrastructure Environment by using the API or have created a cluster by using the web console. You have hosts booted with the discovery ISO You have your Cluster ID exported in your shell as CLUSTER_ID . You have credentials to use when accessing the API and have exported a token as API_TOKEN in your shell. Procedures Refresh the API token: USD source refresh-token Get all validations for all hosts: USD curl \ --silent \ --header "Authorization: Bearer USDAPI_TOKEN" \ https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID/hosts \ | jq -r .[].validations_info \ | jq 'map(.[])' Get non-passing validations for all hosts: USD curl \ --silent \ --header "Authorization: Bearer USDAPI_TOKEN" \ https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID/hosts \ | jq -r .[].validations_info \ | jq 'map(.[]) | map(select(.status=="failure" or .status=="pending")) | select(length>0)' 9.4.2. Host validations in detail Parameter Validation type Description connected non-blocking Checks that the host has recently communicated with the Assisted Installer. has-inventory non-blocking Checks that the Assisted Installer received the inventory from the host. has-min-cpu-cores non-blocking Checks that the number of CPU cores meets the minimum requirements. has-min-memory non-blocking Checks that the amount of memory meets the minimum requirements. has-min-valid-disks non-blocking Checks that at least one available disk meets the eligibility criteria. has-cpu-cores-for-role blocking Checks that the number of cores meets the minimum requirements for the host role. has-memory-for-role blocking Checks that the amount of memory meets the minimum requirements for the host role. 
ignition-downloadable blocking For Day 2 hosts, checks that the host can download ignition configuration from the Day 1 cluster. belongs-to-majority-group blocking The majority group is the largest full-mesh connectivity group on the cluster, where all members can communicate with all other members. This validation checks that hosts in a multi-node, Day 1 cluster are in the majority group. valid-platform-network-settings blocking Checks that the platform is valid for the network settings. ntp-synced non-blocking Checks if an NTP server has been successfully used to synchronize time on the host. container-images-available non-blocking Checks if container images have been successfully pulled from the image registry. sufficient-installation-disk-speed blocking Checks that disk speed metrics from an earlier installation meet requirements, if they exist. sufficient-network-latency-requirement-for-role blocking Checks that the average network latency between hosts in the cluster meets the requirements. sufficient-packet-loss-requirement-for-role blocking Checks that the network packet loss between hosts in the cluster meets the requirements. has-default-route blocking Checks that the host has a default route configured. api-domain-name-resolved-correctly blocking For a multi node cluster with user managed networking. Checks that the host is able to resolve the API domain name for the cluster. api-int-domain-name-resolved-correctly blocking For a multi node cluster with user managed networking. Checks that the host is able to resolve the internal API domain name for the cluster. apps-domain-name-resolved-correctly blocking For a multi node cluster with user managed networking. Checks that the host is able to resolve the internal apps domain name for the cluster. compatible-with-cluster-platform non-blocking Checks that the host is compatible with the cluster platform dns-wildcard-not-configured blocking Checks that the wildcard DNS *.<cluster_name>.<base_domain> is not configured, because this causes known problems for OpenShift disk-encryption-requirements-satisfied non-blocking Checks that the type of host and disk encryption configured meet the requirements. non-overlapping-subnets blocking Checks that this host does not have any overlapping subnets. hostname-unique blocking Checks that the hostname is unique in the cluster. hostname-valid blocking Checks the validity of the hostname, meaning that it matches the general form of hostnames and is not forbidden. The hostname must have 63 characters or less. The hostname must start and end with a lowercase alphanumeric character. The hostname must have only lowercase alphanumeric characters, dashes, and periods. belongs-to-machine-cidr blocking Checks that the host IP is in the address range of the machine CIDR. lso-requirements-satisfied blocking Validates that the host meets the requirements of the Local Storage Operator. odf-requirements-satisfied blocking Validates that the host meets the requirements of the OpenShift Data Foundation Operator. Each host running ODF workloads (control plane nodes in compact mode, compute nodes in standard mode) requires an eligible disk. This is a disk with at least 25GB that is not the installation disk and is of type SSD or HDD . All hosts must have manually assigned roles. cnv-requirements-satisfied blocking Validates that the host meets the requirements of Container Native Virtualization. The BIOS of the host must have CPU virtualization enabled. 
Host must have enough CPU cores and RAM available for Container Native Virtualization. Will validate the Host Path Provisioner if necessary. lvm-requirements-satisfied blocking Validates that the host meets the requirements of the Logical Volume Manager Storage Operator. Host has at least one additional empty disk, not partitioned and not formatted. vsphere-disk-uuid-enabled non-blocking Verifies that each valid disk sets disk.EnableUUID to TRUE . In vSphere this will result in each disk having a UUID. compatible-agent blocking Checks that the discovery agent version is compatible with the agent docker image version. no-skip-installation-disk blocking Checks that installation disk is not skipping disk formatting. no-skip-missing-disk blocking Checks that all disks marked to skip formatting are in the inventory. A disk ID can change on reboot, and this validation prevents issues caused by that. media-connected blocking Checks the connection of the installation media to the host. machine-cidr-defined non-blocking Checks that the machine network definition exists for the cluster. id-platform-network-settings blocking Checks that the platform is compatible with the network settings. Some platforms are only permitted when installing Single Node Openshift or when using User Managed Networking. mtu-valid non-blocking Checks the maximum transmission unit (MTU) of hosts and networking devices in the cluster environment to identify compatibility issues. For more information, see Additional resources . Additional resources Changing the MTU for the cluster network 9.5. Cluster validations 9.5.1. Getting cluster validations by using the REST API If you use the web console, many of these validations will not show up by name. To obtain a list of validations consistent with the labels, use the following procedure. Prerequisites You have installed the jq utility. You have created an Infrastructure Environment by using the API or have created a cluster by using the web console. You have your Cluster ID exported in your shell as CLUSTER_ID . You have credentials to use when accessing the API and have exported a token as API_TOKEN in your shell. Procedures Refresh the API token: USD source refresh-token Get all cluster validations: USD curl \ --silent \ --header "Authorization: Bearer USDAPI_TOKEN" \ https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID \ | jq -r .validations_info \ | jq 'map(.[])' Get non-passing cluster validations: USD curl \ --silent \ --header "Authorization: Bearer USDAPI_TOKEN" \ https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID \ | jq -r .validations_info \ | jq '. | map(.[] | select(.status=="failure" or .status=="pending")) | select(length>0)' 9.5.2. Cluster validations in detail Parameter Validation type Description machine-cidr-defined non-blocking Checks that the machine network definition exists for the cluster. cluster-cidr-defined non-blocking Checks that the cluster network definition exists for the cluster. service-cidr-defined non-blocking Checks that the service network definition exists for the cluster. no-cidrs-overlapping blocking Checks that the defined networks do not overlap. networks-same-address-families blocking Checks that the defined networks share the same address families (valid address families are IPv4, IPv6) network-prefix-valid blocking Checks the cluster network prefix to ensure that it is valid and allows enough address space for all hosts. 
machine-cidr-equals-to-calculated-cidr blocking For a non user managed networking cluster. Checks that apiVIPs or ingressVIPs are members of the machine CIDR if they exist. api-vips-defined non-blocking For a non user managed networking cluster. Checks that apiVIPs exist. api-vips-valid blocking For a non user managed networking cluster. Checks if the apiVIPs belong to the machine CIDR and are not in use. ingress-vips-defined blocking For a non user managed networking cluster. Checks that ingressVIPs exist. ingress-vips-valid non-blocking For a non user managed networking cluster. Checks if the ingressVIPs belong to the machine CIDR and are not in use. all-hosts-are-ready-to-install blocking Checks that all hosts in the cluster are in the "ready to install" status. sufficient-masters-count blocking For a multi-node OpenShift Container Platform installation, checks that the current number of hosts in the cluster designated either manually or automatically to be control plane (master) nodes equals the number that the user defined for the cluster as the control_plane_count value. For a single-node OpenShift installation, checks that there is exactly one control plane (master) node and no compute (worker) nodes. dns-domain-defined non-blocking Checks that the base DNS domain exists for the cluster. pull-secret-set non-blocking Checks that the pull secret exists. Does not check that the pull secret is valid or authorized. ntp-server-configured blocking Checks that each of the host clocks is no more than 4 minutes out of sync with the others. lso-requirements-satisfied blocking Validates that the cluster meets the requirements of the Local Storage Operator. odf-requirements-satisfied blocking Validates that the cluster meets the requirements of the OpenShift Data Foundation Operator. The cluster has either at least three control plane (master) nodes and no compute (worker) nodes at all ( compact mode), or at least three control plane (master) nodes and at least three compute (worker) nodes ( standard mode). Each host running ODF workloads (control plane nodes in compact mode, compute nodes in standard mode) requires a non-installation disk of type SSD or HDD with at least 25GB of storage. All hosts must have manually assigned roles. cnv-requirements-satisfied blocking Validates that the cluster meets the requirements of Container Native Virtualization. The CPU architecture for the cluster is x86. lvm-requirements-satisfied blocking Validates that the cluster meets the requirements of the Logical Volume Manager Storage Operator. The cluster must be single node. The cluster must be running Openshift >= 4.11.0. network-type-valid blocking Checks the validity of the network type if it exists. The network type must be OpenshiftSDN (OpenShift Container Platform 4.14 or earlier) or OVNKubernetes. OpenshiftSDN does not support IPv6 or Single Node Openshift. OpenshiftSDN is not supported for OpenShift Container Platform 4.15 and later releases. OVNKubernetes does not support VIP DHCP allocation. | [
"source refresh-token",
"curl --silent --header \"Authorization: Bearer USDAPI_TOKEN\" https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID/hosts | jq -r .[].validations_info | jq 'map(.[])'",
"curl --silent --header \"Authorization: Bearer USDAPI_TOKEN\" https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID/hosts | jq -r .[].validations_info | jq 'map(.[]) | map(select(.status==\"failure\" or .status==\"pending\")) | select(length>0)'",
"source refresh-token",
"curl --silent --header \"Authorization: Bearer USDAPI_TOKEN\" https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID | jq -r .validations_info | jq 'map(.[])'",
"curl --silent --header \"Authorization: Bearer USDAPI_TOKEN\" https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID | jq -r .validations_info | jq '. | map(.[] | select(.status==\"failure\" or .status==\"pending\")) | select(length>0)'"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_openshift_container_platform_with_the_assisted_installer/assembly_preinstallation-validations |
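Building on the REST calls in the validations chapter above, the following shell sketch condenses the per-host output into a short failure report. It reuses the CLUSTER_ID and API_TOKEN variables and the refresh-token helper from the chapter; the id, status, and message field names are assumptions based on the validation objects the API returns, so adjust the jq expression if your API version differs.

source refresh-token

# Print one line per failing or pending host validation: host ID, validation ID, message.
curl --silent \
  --header "Authorization: Bearer $API_TOKEN" \
  "https://api.openshift.com/api/assisted-install/v2/clusters/$CLUSTER_ID/hosts" \
  | jq -r '.[] | .id as $host
           | (.validations_info | fromjson | map(.[]) | .[]
              | select(.status=="failure" or .status=="pending")
              | "\($host)  \(.id)  \(.message)")'

An empty result means every host validation is passing; blocking cluster-level validations still need the separate clusters endpoint shown in Section 9.5.1.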
Chapter 2. Installing Debezium connectors on RHEL | Chapter 2. Installing Debezium connectors on RHEL Install Debezium connectors through AMQ Streams by extending Kafka Connect with connector plugins. Following a deployment of AMQ Streams, you can deploy Debezium as a connector configuration through Kafka Connect. 2.1. Kafka topic creation recommendations Debezium stores data in multiple Apache Kafka topics. The topics must either be created in advance by an administrator, or you can configure Kafka Connect to configure topics automatically . The following list describes limitations and recommendations to consider when creating topics: Database schema history topics for the Debezium Db2, MySQL, Oracle, and SQL Server connectors For each of the preceding connectors a database schema history topic is required. Whether you manually create the database schema history topic, use the Kafka broker to create the topic automatically, or use Kafka Connect to create the topic , ensure that the topic is configured with the following settings: Infinite or very long retention. Replication factor of at least three in production environments. Single partition. Other topics When you enable Kafka log compaction so that only the last change event for a given record is saved, set the following topic properties in Apache Kafka: min.compaction.lag.ms delete.retention.ms To ensure that topic consumers have enough time to receive all events and delete markers, specify values for the preceding properties that are larger than the maximum downtime that you expect for your sink connectors. For example, consider the downtime that might occur when you apply updates to sink connectors. Replicated in production. Single partition. You can relax the single partition rule, but your application must handle out-of-order events for different rows in the database. Events for a single row are still totally ordered. If you use multiple partitions, the default behavior is that Kafka determines the partition by hashing the key. Other partition strategies require the use of single message transformations (SMTs) to set the partition number for each record. 2.2. Planning the Debezium connector configuration Before you deploy a Debezium connector, determine how you want to configure the connector. The configuration provides information that specifies the behavior of the connector and enables Debezium to connect to the source database. You specify the connector configuration as JSON, and when you are ready to register the connector, you use curl to submit the configuration to the Kafka Connect API endpoint. Prerequisites A source database is deployed and the Debezium connector can access the database. You know the following information, which the connector requires to access the source database: Name or IP address of the database host. Port number for connecting to the database. Name of the account that the connector can use to sign in to the database. Password of the database user account. Name of the database. The names of the tables from which you want the connector to capture information. The name of the Kafka broker to which you want the connector to emit change events. The name of the Kafka topic to which you want the connector to send database history information. Procedure Specify the configuration that you want to apply to the Debezium connector in JSON format. 
The following example shows a simple configuration for a Debezium MySQL connector: { "name": "inventory-connector", 1 "config": { "connector.class": "io.debezium.connector.mysql.MySqlConnector", 2 "tasks.max": "1", 3 "database.hostname": "mysql", 4 "database.port": "3306", 5 "database.user": "debezium", 6 "database.password": "dbz", 7 "database.server.id": "184054", 8 "topic.prefix": "dbserver1", 9 "table.include.list": "public.inventory", 10 "schema.history.internal.kafka.bootstrap.servers": "kafka:9092", 11 "schema.history.internal.kafka.topic": "dbhistory.inventory" 12 } } 1 The name of the connector to register with the Kafka Connect cluster. 2 The name of the connector class. 3 The number of tasks that can operate concurrently. Only one task should operate at a time. 4 The host name or IP address of the host database instance. 5 The port number of the database instance. 6 The name of the user account through which Debezium connects to the database. 7 The password for the database user account. 8 A unique numeric ID for the connector. This property is used for the MySQL connector only. 9 A string that serves as the logical identifier for the database server or cluster of servers from which the connector captures changes. The specified string designates a namespace. Debezium prefixes this name to each Kafka topic that the connector writes to, as well as to the names of Kafka Connect schemas, and the namespaces of the corresponding Avro schema, when the Avro converter is used. 10 The list of tables from which the connector captures change events. 11 The name of the Kafka broker where the connector sends the database schema history. The specified broker also receives the change events that the connector emits. 12 The name of the Kafka topic that stores the schema history. After a connector restart, the connector resumes reading the database log from the point at which it stopped, emitting events for any transactions that occurred while it was offline. Before the connector writes change events for an unread transaction to Kafka, it checks the schema history and then applies the schema that was in effect when the original transaction occurred. Additional information For information about the configuration properties that you can set for each type of connector, see the deployment documentation for the connector in the Debezium User Guide . steps Section 2.3, "Deploying Debezium with AMQ Streams on Red Hat Enterprise Linux" . 2.3. Deploying Debezium with AMQ Streams on Red Hat Enterprise Linux This procedure describes how to set up connectors for Debezium on Red Hat Enterprise Linux. Connectors are deployed to an AMQ Streams cluster using Apache Kafka Connect, a framework for streaming data between Apache Kafka and external systems. Kafka Connect must be run in distributed mode rather than standalone mode. Prerequisites The host environment to which you want to deploy Debezium runs Red Hat Enterprise Linux, AMQ Streams, and Java in a supported configuration . For information about how to install AMQ Streams, see Installing AMQ Streams . For information about how to install a basic, non-production AMQ Streams cluster that contains a single ZooKeeper node, and a single Kafka node, see Running a single node AMQ Streams cluster . Note If you are running an earlier version of AMQ Streams, you must first upgrade to AMQ Streams 2.5. For information about the upgrade process, see AMQ Streams and Kafka upgrades . You have administrative privileges ( sudo access) on the host. 
Apache ZooKeeper and the Apache Kafka broker are running. Kafka Connect is running in distributed mode , and not in standalone mode. You know the credentials of the kafka user that was created when AMQ Streams was installed. A source database is deployed and the host where you deploy Debezium has access to the database. You know how you want to configure the connector . Procedure Download the Debezium connector or connectors that you want to use from the Red Hat Integration download site . For example, to use Debezium with a MySQL database, download the Debezium 2.3.4 MySQL Connector . On the Red Hat Enterprise Linux host where you deployed AMQ Streams, open a terminal window and create a connector-plugins directory in /opt/kafka , if it does not already exist: USD sudo mkdir /opt/kafka/connector-plugins Enter the following command to extract the contents of the Debezium connector archive that you downloaded to the /opt/kafka/connector-plugins directory. USD sudo unzip debezium-connector-mysql-2.3.4.Final.zip -d /opt/kafka/connector-plugins Repeat Steps 1 -3 for each connector that you want to install. From a terminal window, sign in as the kafka user: USD su - kafka USD Password: Stop the Kafka Connect process if it is running. Check whether Kafka Connect is running in distributed mode by entering the following command: USD jcmd | grep ConnectDistributed If the process is running, the command returns the process ID, for example: Stop the process by entering the kill command with the process ID, for example, USD kill 18514 Edit the connect-distributed.properties file in /opt/kafka/config/ and set the value of plugin.path to the location of the parent directory for the Debezium connector plug-ins: plugin.path=/opt/kafka/connector-plugins Start Kafka Connect in distributed mode. USD /opt/kafka/bin/connect-distributed.sh /opt/kafka/config/connect-distributed.properties After Kafka Connect is running, use the Kafka Connect API to register the connector. Enter a curl command to submit a POST request that sends the connector configuration JSON that you specified in Section 2.2, "Planning the Debezium connector configuration" to the Kafka Connect REST API endpoint at localhost:8083/connectors . For example: curl -i -X POST -H "Accept:application/json" -H "Content-Type:application/json" localhost:8083/connectors/ \ -d '{"name": "inventory-connector", "config": \ { "connector.class": "io.debezium.connector.mysql.MySqlConnector", \ "tasks.max": "1", \ "database.hostname": "mysql", \ "database.port": "3306", \ "database.user": "debezium", \ "database.password": "dbz", \ "database.server.id": "184054", \ "topic.prefix": "dbserver1", \ "table.include.list": "public.inventory", \ "schema.history.internal.kafka.bootstrap.servers": "kafka:9092", \ "schema.history.internal.kafka.topic": "dbhistory.inventory" } }' To register multiple connectors, submit a separate request for each one. Restart Kafka Connect to implement your changes. As Kafka Connect starts, it loads the configured Debezium connectors from the connector-plugins directory. After you complete the configuration, the deployed connector connects to the source database and produces events for each inserted, updated, or deleted row or document. Repeat Steps 5-10 for each Kafka Connect worker node. steps Verify the deployment . Additional resources Adding connector plugins 2.4. Verifying the deployment After the connector starts, it performs a snapshot of the configured database, and creates topics for each table that you specify. 
Prerequisites You deployed a connector on Red Hat Enterprise Linux, based on the instructions in Section 2.3, "Deploying Debezium with AMQ Streams on Red Hat Enterprise Linux" . Procedure From a terminal window on the host, enter the following command to request the list of connectors from the Kafka Connect API: USD curl -H "Accept:application/json" localhost:8083/connectors/ The query returns the name of the deployed connector, for example: From a terminal window on the host, enter the following command to view the tasks that the connector is running: USD curl -i -X GET -H "Accept:application/json" localhost:8083/connectors/inventory-connector The command returns output that is similar to the following example: HTTP/1.1 200 OK Date: Thu, 06 Feb 2020 22:12:03 GMT Content-Type: application/json Content-Length: 531 Server: Jetty(9.4.20.v20190813) { "name": "inventory-connector", ... "tasks": [ { "connector": "inventory-connector", "task": 0 } ] } Display a list of topics in the Kafka cluster. From a terminal window, navigate to /opt/kafka/bin/ and run the following shell script: ./kafka-topics.sh --bootstrap-server=localhost:9092 --list The Kafka broker returns a list of topics that the connector creates. The available topics depend on the settings of the connector's snapshot.mode , snapshot.include.collection.list , and table.include.list configuration properties. By default, the connector creates a topic for each non-system table in the database. View the contents of a topic. From a terminal window, navigate to /opt/kafka/bin/ , and run the kafka-console-consumer.sh shell script to display the contents of one of the topics returned by the preceding command: For example: ./kafka-console-consumer.sh \ > --bootstrap-server localhost:9092 \ > --from-beginning \ > --property print.key=true \ > --topic=dbserver1.inventory.products_on_hand For each event in the topic, the command returns information that is similar to the following output: Example 2.1. Content of an Integration change event In the preceding example, the payload value shows that the connector snapshot generated a read ( "op" ="r" ) event from the table inventory.products_on_hand . The "before" state of the product_id record is null , indicating that no value exists for the record. The "after" state shows a quantity of 3 for the item with product_id 101 . Steps For information about the configuration settings that are available for each connector, and to learn how to configure source databases to enable change data capture, see the Debezium User Guide . 2.5. Updating Debezium connector plug-ins in the Kafka Connect cluster To replace the version of a Debezium connector that is deployed on Red Hat Enterprise Linux, you update the connector plug-in. Procedure Download a copy of the Debezium connector plug-in that you want to replace from the Red Hat Integration download site . Extract the contents of the Debezium connector archive to the /opt/kafka/connector-plugins directory. USD sudo unzip debezium-connector-mysql-2.3.4.Final.zip -d /opt/kafka/connector-plugins Restart Kafka Connect. | [
"{ \"name\": \"inventory-connector\", 1 \"config\": { \"connector.class\": \"io.debezium.connector.mysql.MySqlConnector\", 2 \"tasks.max\": \"1\", 3 \"database.hostname\": \"mysql\", 4 \"database.port\": \"3306\", 5 \"database.user\": \"debezium\", 6 \"database.password\": \"dbz\", 7 \"database.server.id\": \"184054\", 8 \"topic.prefix\": \"dbserver1\", 9 \"table.include.list\": \"public.inventory\", 10 \"schema.history.internal.kafka.bootstrap.servers\": \"kafka:9092\", 11 \"schema.history.internal.kafka.topic\": \"dbhistory.inventory\" 12 } }",
"sudo mkdir /opt/kafka/connector-plugins",
"sudo unzip debezium-connector-mysql-2.3.4.Final.zip -d /opt/kafka/connector-plugins",
"su - kafka Password:",
"jcmd | grep ConnectDistributed",
"18514 org.apache.kafka.connect.cli.ConnectDistributed /opt/kafka/config/connect-distributed.properties",
"kill 18514",
"plugin.path=/opt/kafka/connector-plugins",
"/opt/kafka/bin/connect-distributed.sh /opt/kafka/config/connect-distributed.properties",
"curl -i -X POST -H \"Accept:application/json\" -H \"Content-Type:application/json\" localhost:8083/connectors/ -d '{\"name\": \"inventory-connector\", \"config\": { \"connector.class\": \"io.debezium.connector.mysql.MySqlConnector\", \"tasks.max\": \"1\", \"database.hostname\": \"mysql\", \"database.port\": \"3306\", \"database.user\": \"debezium\", \"database.password\": \"dbz\", \"database.server.id\": \"184054\", \"topic.prefix\": \"dbserver1\", \"table.include.list\": \"public.inventory\", \"schema.history.internal.kafka.bootstrap.servers\": \"kafka:9092\", \"schema.history.internal.kafka.topic\": \"dbhistory.inventory\" } }'",
"curl -H \"Accept:application/json\" localhost:8083/connectors/",
"[\"inventory-connector\"]",
"curl -i -X GET -H \"Accept:application/json\" localhost:8083/connectors/inventory-connector",
"HTTP/1.1 200 OK Date: Thu, 06 Feb 2020 22:12:03 GMT Content-Type: application/json Content-Length: 531 Server: Jetty(9.4.20.v20190813) { \"name\": \"inventory-connector\", \"tasks\": [ { \"connector\": \"inventory-connector\", \"task\": 0 } ] }",
"./kafka-topics.sh --bootstrap-server=localhost:9092 --list",
"./kafka-console-consumer.sh > --bootstrap-server localhost:9092 > --from-beginning > --property print.key=true > --topic=dbserver1.inventory.products_on_hand",
"{\"schema\":{\"type\":\"struct\",\"fields\":[{\"type\":\"int32\",\"optional\":false,\"field\":\"product_id\"}],\"optional\":false,\"name\":\"dbserver1.inventory.products_on_hand.Key\"},\"payload\":{\"product_id\":101}} {\"schema\":{\"type\":\"struct\",\"fields\":[{\"type\":\"struct\",\"fields\":[{\"type\":\"int32\",\"optional\":false,\"field\":\"product_id\"},{\"type\":\"int32\",\"optional\":false,\"field\":\"quantity\"}],\"optional\":true,\"name\":\"dbserver1.inventory.products_on_hand.Value\",\"field\":\"before\"},{\"type\":\"struct\",\"fields\":[{\"type\":\"int32\",\"optional\":false,\"field\":\"product_id\"},{\"type\":\"int32\",\"optional\":false,\"field\":\"quantity\"}],\"optional\":true,\"name\":\"dbserver1.inventory.products_on_hand.Value\",\"field\":\"after\"},{\"type\":\"struct\",\"fields\":[{\"type\":\"string\",\"optional\":false,\"field\":\"version\"},{\"type\":\"string\",\"optional\":false,\"field\":\"connector\"},{\"type\":\"string\",\"optional\":false,\"field\":\"name\"},{\"type\":\"int64\",\"optional\":false,\"field\":\"ts_ms\"},{\"type\":\"string\",\"optional\":true,\"name\":\"io.debezium.data.Enum\",\"version\":1,\"parameters\":{\"allowed\":\"true,last,false\"},\"default\":\"false\",\"field\":\"snapshot\"},{\"type\":\"string\",\"optional\":false,\"field\":\"db\"},{\"type\":\"string\",\"optional\":true,\"field\":\"sequence\"},{\"type\":\"string\",\"optional\":true,\"field\":\"table\"},{\"type\":\"int64\",\"optional\":false,\"field\":\"server_id\"},{\"type\":\"string\",\"optional\":true,\"field\":\"gtid\"},{\"type\":\"string\",\"optional\":false,\"field\":\"file\"},{\"type\":\"int64\",\"optional\":false,\"field\":\"pos\"},{\"type\":\"int32\",\"optional\":false,\"field\":\"row\"},{\"type\":\"int64\",\"optional\":true,\"field\":\"thread\"},{\"type\":\"string\",\"optional\":true,\"field\":\"query\"}],\"optional\":false,\"name\":\"io.debezium.connector.mysql.Source\",\"field\":\"source\"},{\"type\":\"string\",\"optional\":false,\"field\":\"op\"},{\"type\":\"int64\",\"optional\":true,\"field\":\"ts_ms\"},{\"type\":\"struct\",\"fields\":[{\"type\":\"string\",\"optional\":false,\"field\":\"id\"},{\"type\":\"int64\",\"optional\":false,\"field\":\"total_order\"},{\"type\":\"int64\",\"optional\":false,\"field\":\"data_collection_order\"}],\"optional\":true,\"field\":\"transaction\"}],\"optional\":false,\"name\": \"dbserver1.inventory.products_on_hand.Envelope\" }, \"payload\" :{ \"before\" : null , \"after\" :{ \"product_id\":101,\"quantity\":3 },\"source\":{\"version\":\"2.3.4.Final-redhat-00001\",\"connector\":\"mysql\",\"name\":\"inventory_connector_mysql\",\"ts_ms\":1638985247805,\"snapshot\":\"true\",\"db\":\"inventory\",\"sequence\":null,\"table\":\"products_on_hand\",\"server_id\":0,\"gtid\":null,\"file\":\"mysql-bin.000003\",\"pos\":156,\"row\":0,\"thread\":null,\"query\":null}, \"op\" : \"r\" ,\"ts_ms\":1638985247805,\"transaction\":null}}",
"sudo unzip debezium-connector-mysql-2.3.4.Final.zip -d /opt/kafka/connector-plugins"
]
| https://docs.redhat.com/en/documentation/red_hat_integration/2023.q4/html/installing_debezium_on_rhel/installing-debezium-connectors-on-rhel-debezium |
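The Debezium chapter above registers and verifies connectors with curl; the following hedged sketch collects the follow-up calls you typically need against the same Kafka Connect REST API on localhost:8083. The connector name matches the inventory-connector example, so substitute your own. The status, restart, and delete endpoints are part of the standard Kafka Connect REST API rather than anything Debezium-specific.

CONNECT=http://localhost:8083
NAME=inventory-connector

# Connector and task state (RUNNING, FAILED, ...), including any task error traces.
curl -s -H "Accept:application/json" "$CONNECT/connectors/$NAME/status"

# Restart the connector, for example after fixing a database or configuration problem.
curl -s -X POST "$CONNECT/connectors/$NAME/restart"

# Remove the connector registration; the plug-in files under /opt/kafka/connector-plugins stay in place.
curl -s -X DELETE "$CONNECT/connectors/$NAME"

These calls complement the registration POST request shown in Section 2.3 and the verification steps in Section 2.4.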
14.4. Steal Time Accounting | 14.4. Steal Time Accounting Steal time is the amount of CPU time desired by a guest virtual machine that is not provided by the host. Steal time occurs when the host allocates this CPU time elsewhere: for example, to another guest. Steal time is reported in the CPU time fields in /proc/stat as st . It is automatically reported by utilities such as top and vmstat , and cannot be switched off. Large amounts of steal time indicate CPU contention, which can reduce guest performance. To relieve CPU contention, increase the guest's CPU priority or CPU quota, or run fewer guests on the host. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_host_configuration_and_guest_installation_guide/form-virtualization_host_configuration_and_guest_installation_guide-kvm_guest_timing_management-steal_time_accounting |
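As a quick illustration of the section above, the following commands read the steal time value from the tools it mentions. The awk field position assumes the usual /proc/stat column order (user, nice, system, idle, iowait, irq, softirq, steal); verify it against the proc(5) man page on your kernel before relying on it, and note that top's CPU summary line is labeled Cpu(s) or %Cpu(s) depending on the procps version.

# Cumulative steal time for all CPUs, in clock ticks, from /proc/stat.
grep '^cpu ' /proc/stat | awk '{print "steal ticks:", $9}'

# Live views: the st column in vmstat and the st field in top's CPU summary.
vmstat 1 5
top -b -n 1 | grep -i 'cpu(s)'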
Chapter 89. MapStruct | Chapter 89. MapStruct Since Camel 3.19 Only producer is supported . The camel-mapstruct component is used for converting POJOs using MapStruct . 89.1. Dependencies When using mapstruct with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-mapstruct-starter</artifactId> </dependency> 89.2. URI format Where className is the fully qualified class name of the POJO to convert to. 89.3. Configuring Options Camel components are configured on two levels: Component level Endpoint level 89.3.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 89.3.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allows you to externalize the configuration from your code, giving you more flexible and reusable code. 89.4. Component Options The MapStruct component supports 4 options, which are listed below. Name Description Default Type lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean mapperPackageName (producer) Required Package name(s) where Camel should discover Mapstruct mapping classes. Multiple package names can be separated by comma. String autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean mapStructConverter (advanced) Autowired To use a custom MapStructConverter such as adapting to a special runtime. MapStructMapperFinder 89.5. Endpoint Options The MapStruct endpoint is configured using URI syntax: with the following path and query parameters: 89.5.1. 
Path Parameters (1 parameters) Name Description Default Type className (producer) Required The fully qualified class name of the POJO that mapstruct should convert to (target). String 89.5.2. Query Parameters (2 parameters) Name Description Default Type mandatory (producer) Whether there must exist a mapstruct converter to convert to the POJO. true boolean lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean 89.6. Setting up MapStruct The camel-mapstruct component must be configured with one or more package names for classpath scanning of MapStruct Mapper classes. This is needed because the Mapper classes are used for converting POJOs with MapStruct. For example, to set up two packages you can do the following: MapstructComponent mc = context.getComponent("mapstruct", MapstructComponent.class); mc.setMapperPackageName("com.foo.mapper,com.bar.mapper"); This can also be configured in application.properties : camel.component.mapstruct.mapper-package-name = com.foo.mapper,com.bar.mapper On startup, Camel scans these packages for classes whose names end with Mapper . These classes are then introspected to discover the mapping methods. These mapping methods are then registered into the Camel registry. This means that you can also use the type converter to convert the POJOs with MapStruct, such as: from("direct:foo") .convertBodyTo(MyFooDto.class); Where MyFooDto is a POJO that MapStruct is able to convert to/from. 89.7. Spring Boot Auto-Configuration The component supports 5 options, which are listed below. Name Description Default Type camel.component.mapstruct.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.mapstruct.enabled Whether to enable auto configuration of the mapstruct component. This is enabled by default. Boolean camel.component.mapstruct.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.mapstruct.map-struct-converter To use a custom MapStructConverter such as adapting to a special runtime. The option is an org.apache.camel.component.mapstruct.MapStructMapperFinder type.
MapStructMapperFinder camel.component.mapstruct.mapper-package-name Package name(s) where Camel should discover Mapstruct mapping classes. Multiple package names can be separated by comma. String | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-mapstruct-starter</artifactId> </dependency>",
"mapstruct:className[?options]",
"mapstruct:className",
"MapstructComponent mc = context.getComponent(\"mapstruct\", MapstructComponent.class); mc.setMapperPackageName(\"com.foo.mapper,com.bar.mapper\");",
"camel.component.mapstruct.mapper-package-name = com.foo.mapper,com.bar.mapper",
"from(\"direct:foo\") .convertBodyTo(MyFooDto.class);"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-mapstruct-component-starter |
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your input on our documentation. Let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of the documentation. Click Submit Bug . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/replacing_devices/providing-feedback-on-red-hat-documentation_rhodf |
18.2. Getting Help for ID View Commands | 18.2. Getting Help for ID View Commands To display all commands used to manage ID views and overrides: To display detailed help for a particular command, add the --help option to the command: | [
"ipa help idviews",
"ipa idview-add --help"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/id-views-help |
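Because the section above covers only the built-in help, here is a hedged sketch of the workflow that help typically leads to: creating an ID view, overriding one user's attributes in it, and applying it to a host. The view name, user, UID, shell, and host are illustrative values; confirm the exact option names with the --help output described above, as they can differ between IdM versions.

# Create an ID view and add a user override that changes the UID and login shell.
ipa idview-add example_view --desc "Overrides for legacy hosts"
ipa idoverrideuser-add example_view jsmith --uid=60001 --shell=/bin/sh

# Apply the view to the hosts that should see the overridden values.
ipa idview-apply example_view --hosts=client.example.com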