title
stringlengths 4
168
| content
stringlengths 7
1.74M
| commands
listlengths 1
5.62k
⌀ | url
stringlengths 79
342
|
---|---|---|---|
5.4. Specifying Transaction Batching | 5.4. Specifying Transaction Batching To improve the update performance when a full transaction durability is not required, use the following command: The --txn-batch-val specifies how many transactions be batched before Directory Server commits them to the transaction log. Setting this value to a value greater than 0 causes the server to delay committing transactions until the number of queued transactions is equal to this value. | [
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com backend config set --txn-batch-val= value"
] | https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/performance_tuning_guide/tuning_database_performance-specifying_transaction_batching |
Installation Guide | Installation Guide Red Hat Directory Server 11 Instructions for installing Red Hat Directory Server | null | https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/installation_guide/index |
9.8. Active Directory Authentication (Non-Kerberos) | 9.8. Active Directory Authentication (Non-Kerberos) See Example 9.2, "Example of JBoss EAP LDAP login module configuration" for a non-Kerberos Active Directory Authentication configuration example. Report a bug | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/developer_guide/active_directory_authentication_non-kerberos |
Chapter 2. Your first steps | Chapter 2. Your first steps 2.1. Obtain the latest JDV for OpenShift image To download the JDV for OpenShift image and application templates, click here: Red Hat Registry . 2.2. Preparing JDV project artifacts 2.2.1. How to add your configuration artifacts to the OpenShift image A simple way to add your artifacts, such as the virtual databases files, modules, drivers, translators, and additional generic deployments, to the image is to include them in the application source deployment directory. The artifacts are downloaded during the build process and injected into the image. This configuration is built into the image when these artifacts are uploaded so that only the data sources and associated resource adapters need to be added at runtime. To deploy a virtual database, create an empty marker file in the same directory and with the same name as the VDB but with the additional extension .dodeploy . For example, if the VDB file is called database.vdb , the marker file must be called database.vdb.dodeploy . 2.2.2. How to add your runtime artifacts to the OpenShift image Runtime artifacts from environment files are provided through the OpenShift Secret mechanism. They are referenced in environment files that are, in turn, referenced in your JDV template. JDV application template Description datavirt64-basic This is an application template for JBoss Data Virtualization 6.4 services built using S2I. datavirt64-secure This template allows you to configure certificates for serving secure content. datavirt64-extensions-support This template allows you to install extensions (such as third-party database drivers) and configure certificates to serve secure content. 2.2.2.1. Data sources artifacts There are three types of data sources: Default internal data sources. These are PostgreSQL, MySQL, and MongoDB databases. These data sources are available on OpenShift by default through the Red Hat Registry so you do not need to configure any additional environment files. Set the environment variable to the name of the OpenShift service for the database to be discovered and used as a data source. For more information, click here: DB_SERVICE_PREFIX_MAPPING Other internal data sources. These are not available by default through the Red Hat Registry but do run on OpenShift. To add these data sources, you must supply environment files to OpenShift Secrets . External data sources that are not run on OpenShift. To add these data sources you must supply environment files to OpenShift Secrets . Here is an example data source environment file: The DATASOURCES property is a comma-separated list of data source property prefixes. These prefixes are appended to every property belonging that data source. Multiple data sources can then be included in a single environment file. (Alternatively, each data source can be provided in a separate environment file.) Data sources contain two property types: connection pool-specific properties and data driver-specific properties. In the above example, ACCOUNTS is the data source prefix, XA_CONNECTION_PROPERTY is the generic driver property, and DatabaseName is the property specific to the driver. After you add the environment files to OpenShift Secrets , they are called within the JDV template using the ENV_FILES environment property, the value of which is a comma-separated list of fully qualified environment files: 2.2.2.2. 
Resource adapter artifacts To add a resource adapter, you must supply an environment file to OpenShift Secrets : The RESOURCE_ADAPTERS property is a comma-separated list of resource adapter property prefixes. These prefixes are appended to all properties for that resource adapter. Multiple resource adapter can then be included in a single environment file. The resource adapter environment files are added to the OpenShift Secret for the project namespace. These environment files are then called within the JDV template using the ENV_FILES environment property, the value of which is a comma-separated list of fully-qualified environment files: 2.3. How to configure secrets Before you begin, you must have configured two keystores: A secure socket layer (SSL) keystore to provide private and public keys for https traffic encryption. A JGroups keystore to provide private and public keys for network traffic encryption between nodes in the cluster. Warning Self-signed certificates do not provide secure communication and are intended for internal testing purposes. For production environments Red Hat recommends that you use your own SSL certificate purchased from a verified Certificate Authority (CA) for SSL-encrypted connections (HTTPS). Create a secret to hold the two keystores that provide authorization to applications in the project: Send your runtime artifacts to the JDV for OpenShift image using the OpenShift Secrets mechanism. (These files need to be present locally so that the secrets can be created for them.) Important If the project does not require any runtime artifacts, the secret must still be present in the OpenShift project or the deployment will fail. You can create an empty secret: Create a service account: Add the view role to the service account: Add the project's secrets to the account: To use Red Hat Single Sign-On (SSO) for authentication, use the datavirt64-secure-s2i application template. For more information on configuring SSO, click here: Automatic and Manual SSO Client Registration Methods 2.4. Using Data Grid for OpenShift with JDV for OpenShift Important Before you start spinning up clusters, make sure your JDV and RHDG instance versions are aligned for Hot Rod client compatibility. See Version Compatibility with Red Hat Data Grid . There are two use cases for integration: You want to use JDG as a data source for JDV. You want to use JDG as an external materialization target for JDV. When deployed as a materialization target, JDG for OpenShift uses in-memory caching to store results from common queries to other remote databases, increasing performance. In each of these use cases, both the JDG for OpenShift and JDV for OpenShift deployments need to be configured. The environment variable to specify these cache names is different depending on whether JDG for OpenShift is to be used as a data source or a materialization: Using JDG for OpenShift as a data source : CACHE_NAMES Comma-separated list of the cache names to be used for the JDG for OpenShift data source. Using JDG for OpenShift as a materialization target : DATAVIRT_CACHE_NAMES This is a comma-separated list of the cache names to be used for the JDG for OpenShift materialization target. When the image is built, two caches will be created per cache name provided: {cachename} and ST_{cachename}. The required cache of teiid-alias-naming-cache, will also be created. These three caches enable JBoss Data Grid to simultaneously maintain and refresh materialization caches. 
Important Red Hat JBoss Data Grid, and the JDG for OpenShift image, support multiple protocols; however, when deployed with JDV for OpenShift, only the Hot Rod protocol is supported. This protocol is not encrypted. For more information on the Hot Rod protocol, click here: Remote Querying chapter . 2.4.1. JDG for OpenShift authentication environment variables To use JDG for OpenShift as an authenticated data source, additional environment variables must be provided in the JDG for OpenShift application template. These environment variables provide authentication details for the Hot Rod protocol and authorization details for the caches in the JDG for OpenShift deployment: Environment Variable Description Example value USERNAME This is the username for the JDG user. jdg-user PASSWORD This is the password for the JDG user. JBoss.123 HOTROD_AUTHENTICATION This enablea Hot Rod authentication. true CONTAINER_SECURITY_ROLE_MAPPER This is the role mapper for the Hot Rod protocol. identity-role-mapper CONTAINER_SECURITY_ROLES This provides security roles and permissions for the role mapper. admin=ALL These resource adapter properties can also be configured: Resource Adapter Property Description Example value <cache-name>_CACHE_SECURITY_AUTHORIZATION_ENABLED This enables authorization checks for the cache. true <cache-name>_CACHE_SECURITY_AUTHORIZATION_ROLES This sets the valid roles required to access the cache. admin 2.4.2. JDG for OpenShift resource adapter properties To use JDG for OpenShift with JDV for OpenShift, properties specific to JDG are required within a resource adapter. As with all resource adapters, these can be included as a separate resource adapter environment file or along with other resource adapters in a larger environment file and supplied to the build as an OpenShift secret. Here are the standard properties required by JDV for OpenShift to configure a resource adapter: RA_NAME1 is the user-defined name of the resource adapter, which will be used as the prefix to defining the properties associated with that resource adapter. Additional properties for the JDG resource adapter Resource adapter property Description Required RemoteServerList Server List (host:port[;host:port... ]) with which to connect. Yes. Additional resource adapter for using JDG for OpenShift as a data source Resource adapter property Description Required UserName SASL mechanisms defined for the JDG Hot Rod connector. This is true if you are using JDG as an authenticated data source. Password SASL mechanisms defined for the JDG Hot Rod connector. This is true if you are using JDG as an authenticated data source. AuthenticationRealm Security realm defined for the Hot Rod connector. This is true if you are using JDG as an authenticated data source. AuthenticationServerName SASL server name defined for the Hot Rod connector. This is true if you are using JDG as an authenticated data source. SASLMechanism SASL mechanisms defined for the JDG Hot Rod connector. This is true if you are using JDG as an authenticated data source. Here is a resource adapter you can use to integrate JDG with OpenShift: Here is a resource adapter you can use to make JDG for OpenShift a data source: Line breaks separate the standard JDV for OpenShift resource adapter configuration, the additional properties required for JDG for OpenShift, and the authentication properties for the JDG data source. The PROPERTY_UserName and its associated password correspond to the values provided in the JDG for OpenShift application template. | [
"derby datasource ACCOUNTS_DERBY_DATABASE=accounts ACCOUNTS_DERBY_JNDI=java:/accounts-ds ACCOUNTS_DERBY_DRIVER=derby ACCOUNTS_DERBY_USERNAME=derby ACCOUNTS_DERBY_PASSWORD=derby ACCOUNTS_DERBY_TX_ISOLATION=TRANSACTION_READ_UNCOMMITTED ACCOUNTS_DERBY_JTA=true Connection info for an xa datasource ACCOUNTS_DERBY_XA_CONNECTION_PROPERTY_DatabaseName=/opt/eap/standalone/data/databases/derby/accounts _HOST and _PORT are required, but not used ACCOUNTS_DERBY_SERVICE_HOST=dummy ACCOUNTS_DERBY_SERVICE_PORT=1527",
"{ \"Name\": \"ENV_FILES\", \"Value\": \"/etc/jdv-extensions/datasources1.env,/etc/jdv-extensions/datasources2.env\" }",
"#RESOURCE_ADAPTER RESOURCE_ADAPTERS=QSFILE QSFILE_ID=fileQS QSFILE_MODULE_SLOT=main QSFILE_MODULE_ID=org.jboss.teiid.resource-adapter.file QSFILE_CONNECTION_CLASS=org.teiid.resource.adapter.file.FileManagedConnectionFactory QSFILE_CONNECTION_JNDI=java:/marketdata-file QSFILE_PROPERTY_ParentDirectory=/home/jboss/source/injected/injected-files/data QSFILE_PROPERTY_AllowParentPaths=true",
"{ \"Name\": \"ENV_FILES\", \"Value\": \"/etc/jdv-extensions/resourceadapter1.env,/etc/jdv-extensions/resourceadapter2.env\" }",
"oc secret new <jdv-secret-name> <ssl.jks> <jgroups.jceks>",
"oc secrets new <datavirt-app-config> <datasource.env> <resourceadapter.env> <additional/data/files/>",
"touch <empty.env> oc secrets new <datavirt-app-config> <empty.env>",
"oc create serviceaccount <service-account-name>",
"oc policy add-role-to-user view system:serviceaccount:<project-name>:<service-account-name> -n <project-name>",
"oc secret link <service-account-name> <jdv-secret-name> <jdv-datasource-secret> <jdv-resourceadapter-secret> <jdv-datafiles-secret>",
"RESOURCE_ADAPTERS={RA_NAME1},{RA_NAME2},.. {RA_NAME1}_ID {RA_NAME1}_MODULE_SLOT {RA_NAME1}_MODULE_ID {RA_NAME1}_CONNECTION_CLASS {RA_NAME1}_CONNECTION_JNDI",
"RESOURCE_ADAPTERS=MAT_CACHE MAT_CACHE_ID=infinispanDS MAT_CACHE_MODULE_SLOT=main MAT_CACHE_MODULE_ID=org.jboss.teiid.resource-adapter.infinispan.hotrod MAT_CACHE_CONNECTION_CLASS=org.teiid.resource.adapter.infinispan.hotrod.InfinispanManagedConnectionFactory MAT_CACHE_CONNECTION_JNDI=java:/infinispanRemoteDSL MAT_CACHE_PROPERTY_RemoteServerList=USD{DATAGRID_APP_HOTROD_SERVICE_HOST}:USD{DATAGRID_APP_HOTROD_SERVICE_PORT}",
"RESOURCE_ADAPTERS=MAT_CACHE MAT_CACHE_ID=infinispanDS MAT_CACHE_MODULE_SLOT=main MAT_CACHE_MODULE_ID=org.jboss.teiid.resource-adapter.infinispan.hotrod MAT_CACHE_CONNECTION_CLASS=org.teiid.resource.adapter.infinispan.hotrod.InfinispanManagedConnectionFactory MAT_CACHE_CONNECTION_JNDI=java:/infinispanRemoteDSL MAT_CACHE_PROPERTY_RemoteServerList=USD{DATAGRID_APP_HOTROD_SERVICE_HOST}:USD{DATAGRID_APP_HOTROD_SERVICE_PORT} MAT_CACHE_PROPERTY_UserName=jdg MAT_CACHE_PROPERTY_Password=JBoss.123 MAT_CACHE_PROPERTY_AuthenticationRealm=ApplicationRealm MAT_CACHE_PROPERTY_AuthenticationServerName=jdg-server MAT_CACHE_PROPERTY_SaslMechanism=DIGEST-MD5"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/red_hat_jboss_data_virtualization_for_openshift/your_first_steps |
Chapter 23. Limiting storage space usage on ext4 with quotas | Chapter 23. Limiting storage space usage on ext4 with quotas You have to enable disk quotas on your system before you can assign them. You can assign disk quotas per user, per group or per project. However, if there is a soft limit set, you can exceed these quotas for a configurable period of time, known as the grace period. 23.1. Installing the quota tool You must install the quota RPM package to implement disk quotas. Procedure Install the quota package: 23.2. Enabling quota feature on file system creation Enable quotas on file system creation. Procedure Enable quotas on file system creation: Note Only user and group quotas are enabled and initialized by default. Change the defaults on file system creation: Mount the file system: Additional resources ext4(5) man page on your system. 23.3. Enabling quota feature on existing file systems Enable the quota feature on existing file system by using the tune2fs command. Procedure Unmount the file system: Enable quotas on existing file system: Note Only user and group quotas are initialized by default. Change the defaults: Mount the file system: Additional resources ext4(5) man page on your system. 23.4. Enabling quota enforcement The quota accounting is enabled by default after mounting the file system without any additional options, but quota enforcement is not. Prerequisites Quota feature is enabled and the default quotas are initialized. Procedure Enable quota enforcement by quotaon for the user quota: Note The quota enforcement can be enabled at mount time using usrquota , grpquota , or prjquota mount options. Enable user, group, and project quotas for all file systems: If neither of the -u , -g , or -P options are specified, only the user quotas are enabled. If only -g option is specified, only group quotas are enabled. If only -P option is specified, only project quotas are enabled. Enable quotas for a specific file system, such as /home : Additional resources quotaon(8) man page on your system 23.5. Assigning quotas per user The disk quotas are assigned to users with the edquota command. Note The text editor defined by the EDITOR environment variable is used by edquota . To change the editor, set the EDITOR environment variable in your ~/.bash_profile file to the full path of the editor of your choice. Prerequisites User must exist prior to setting the user quota. Procedure Assign the quota for a user: Replace username with the user to which you want to assign the quotas. For example, if you enable a quota for the /dev/sda partition and execute the command edquota testuser , the following is displayed in the default editor configured on the system: Change the desired limits. If any of the values are set to 0, limit is not set. Change them in the text editor. For example, the following shows the soft and hard block limits for the testuser have been set to 50000 and 55000 respectively. The first column is the name of the file system that has a quota enabled for it. The second column shows how many blocks the user is currently using. The two columns are used to set soft and hard block limits for the user on the file system. The inodes column shows how many inodes the user is currently using. The last two columns are used to set the soft and hard inode limits for the user on the file system. The hard block limit is the absolute maximum amount of disk space that a user or group can use. Once this limit is reached, no further disk space can be used. 
The soft block limit defines the maximum amount of disk space that can be used. However, unlike the hard limit, the soft limit can be exceeded for a certain amount of time. That time is known as the grace period . The grace period can be expressed in seconds, minutes, hours, days, weeks, or months. Verification Verify that the quota for the user has been set: 23.6. Assigning quotas per group You can assign quotas on a per-group basis. Prerequisites Group must exist prior to setting the group quota. Procedure Set a group quota: For example, to set a group quota for the devel group: This command displays the existing quota for the group in the text editor: Modify the limits and save the file. Verification Verify that the group quota is set: 23.7. Assigning quotas per project You can assign quotas per project. Prerequisites Project quota is enabled on your file system. Procedure Add the project-controlled directories to /etc/projects . For example, the following adds the /var/log path with a unique ID of 11 to /etc/projects . Your project ID can be any numerical value mapped to your project. Add project names to /etc/projid to map project IDs to project names. For example, the following associates a project called Logs with the project ID of 11 as defined in the step. Set the desired limits: Note You can choose the project either by its project ID ( 11 in this case), or by its name ( Logs in this case). Using quotaon , enable quota enforcement: See Enabling quota enforcement . Verification Verify that the project quota is set: Note You can verify either by the project ID, or by the project name. Additional resources edquota(8) , projid(5) , and projects(5) man pages on your system 23.8. Setting the grace period for soft limits If a given quota has soft limits, you can edit the grace period, which is the amount of time for which a soft limit can be exceeded. You can set the grace period for users, groups, or projects. Procedure Edit the grace period: Important While other edquota commands operate on quotas for a particular user, group, or project, the -t option operates on every file system with quotas enabled. Additional resources edquota(8) man page on your system 23.9. Turning file system quotas off Use quotaoff to turn disk quota enforcement off on the specified file systems. Quota accounting stays enabled after executing this command. Procedure To turn all user and group quotas off: If neither of the -u , -g , or -P options are specified, only the user quotas are disabled. If only -g option is specified, only group quotas are disabled. If only -P option is specified, only project quotas are disabled. The -v switch causes verbose status information to display as the command executes. Additional resources quotaoff(8) man page on your system 23.10. Reporting on disk quotas Create a disk quota report by using the repquota utility. Procedure Run the repquota command: For example, the command repquota /dev/sda produces this output: View the disk usage report for all quota-enabled file systems: The -- symbol displayed after each user determines whether the block or inode limits have been exceeded. If either soft limit is exceeded, a + character appears in place of the corresponding - character. The first - character represents the block limit, and the second represents the inode limit. The grace columns are normally blank. If a soft limit has been exceeded, the column contains a time specification equal to the amount of time remaining on the grace period. 
If the grace period has expired, none appears in its place. Additional resources The repquota(8) man page for more information. | [
"yum install quota",
"mkfs.ext4 -O quota /dev/sda",
"mkfs.ext4 -O quota -E quotatype=usrquota:grpquota:prjquota /dev/sda",
"mount /dev/sda",
"umount /dev/sda",
"tune2fs -O quota /dev/sda",
"tune2fs -Q usrquota,grpquota,prjquota /dev/sda",
"mount /dev/sda",
"mount /dev/sda /mnt",
"quotaon /mnt",
"mount -o usrquota,grpquota,prjquota /dev/sda /mnt",
"quotaon -vaugP",
"quotaon -vugP /home",
"edquota username",
"Disk quotas for user testuser (uid 501): Filesystem blocks soft hard inodes soft hard /dev/sda 44043 0 0 37418 0 0",
"Disk quotas for user testuser (uid 501): Filesystem blocks soft hard inodes soft hard /dev/sda 44043 50000 55000 37418 0 0",
"quota -v testuser Disk quotas for user testuser: Filesystem blocks quota limit grace files quota limit grace /dev/sda 1000* 1000 1000 0 0 0",
"edquota -g groupname",
"edquota -g devel",
"Disk quotas for group devel (gid 505): Filesystem blocks soft hard inodes soft hard /dev/sda 440400 0 0 37418 0 0",
"quota -vg groupname",
"echo 11:/var/log >> /etc/projects",
"echo Logs:11 >> /etc/projid",
"edquota -P 11",
"quota -vP 11",
"edquota -t",
"quotaoff -vaugP",
"repquota",
"*** Report for user quotas on device /dev/sda Block grace time: 7days; Inode grace time: 7days Block limits File limits User used soft hard grace used soft hard grace ---------------------------------------------------------------------- root -- 36 0 0 4 0 0 kristin -- 540 0 0 125 0 0 testuser -- 440400 500000 550000 37418 0 0",
"repquota -augP"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_file_systems/limiting-storage-space-usage-on-ext4-with-quotas_managing-file-systems |
Chapter 32. Networking | Chapter 32. Networking Bad offload warnings are no longer displayed using virtio_net Previously, using the virtio_net network adapter in bridge connections, user space programs sometimes generated Generic Segmentation Offload (GSO) packets with no checksum offload and passed them to the kernel. As a consequence, the kernel checksum offloading code displayed bad offload warnings unnecessarily. With this update, a patch has been applied, and the kernel does not warn anymore about bad checksum offload messages for such packets. (BZ#1544920) The L2TP sequence number handling now works correctly Previously, the kernel did not handle Layer 2 Tunneling Protocol (L2TP) sequence numbers properly and it was not compliant with RFC 3931. As a consequence, L2TP sessions stopped working unexpectedly. With this update, a patch has been applied to correctly handle sequence numbers in case of a packet loss. As a result, when users enable sequence numbers, L2TP sessions work as expected in the described scenario. (BZ#1527799) The kernel no longer crashes when a tunnel_key mode is not specified Previously, parsing configuration data in the tunnel_key action rules was incorrect if neither set nor unset mode was specified in the configuration. As a consequence, the kernel dereferenced an incorrect pointer and terminated unexpectedly. With this update, the kernel does not install tunnel_key if set or unset was not specified. As a result, the kernel no longer crashes in the described scenario. (BZ#1554907) The sysctl net.ipv4.route.min_pmtu setting no longer set invalid values Previously, the value provided by administrators for the sysctl net.ipv4.route.min_pmtu setting was not restricted. As a consequence, administrators were able to set a negative value for net.ipv4.route.min_pmtu . This sometimes resulted in setting the path Maximum Transmission Unit (MTU) of some routes to very large values because of an integer overflow. This update restricts values for net.ipv4.route.min_pmtu set to >= 68 , the minimum valid MTU for IPv4. As a result, net.ipv4.route.min_pmtu can no longer be set to invalid values (negative value or < 68 ). (BZ#1541250) wpa_supplicant no longer responds to packets whose destination address does not match the interface address Previously, when wpa_supplicant was running on a Linux interface that was configured in promiscuous mode, incoming Extensible Authentication Protocol over LAN (EAPOL) packets were processed regardless of the destination address in the frame. However, wpa_supplicant checked the destination address only if the interface was enslaved to a bridge. As a consequence, in certain cases, wpa_supplicant was responding to EAPOL packets when the destination address was not the interface address. With this update, a socket filter has been added that allows the kernel to discard unicast EAPOL packets whose destination address does not match the interface address, and the described problem no longer occurs. (BZ# 1434434 ) NetworkManager no longer fails to detect duplicate IPv4 addresses Previously, NetworkManager used to spawn an instance of the arping process to detect duplicate IPv4 addresses on the network. As a consequence, if the timeout configured for IPv4 Duplicate Address Detection (DAD) was short and the system was overloaded, NetworkManager sometimes failed to detect a duplicate address in time. 
With this update, the detection of duplicate IPv4 addresses is now performed internally to NetworkManager without spawning external binaries, and the described problem no longer occurs. (BZ# 1507864 ) firewalld now prevents partially applied rules Previously, if a direct rule failed to be inserted for any reason, then all following direct rules with a higher priority also failed to insert. As a consequence, direct rules were not applied completely. The processing has been changed to either apply all direct rules successfully or revert them all. As a result, if a rule failure occurs at startup, firewalld enters the failed status and allows the user to remedy the situation. This prevents unexpected results by having partially applied rules. (BZ# 1498923 ) The wpa_supplicant upgrade no longer causes disconnections Previously, the upgrade of the wpa_supplicant package caused a restart of the wpa_supplicant service. As a consequence, the network disconnected temporarily. With this update, the systemd unit is not restarted during the upgrade. As a result, the network connectivity no longer fails during the wpa_supplicant upgrade. (BZ#1505404) | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.6_release_notes/bug_fixes_networking |
Chapter 10. MachineSet [machine.openshift.io/v1beta1] | Chapter 10. MachineSet [machine.openshift.io/v1beta1] Description MachineSet ensures that a specified number of machines replicas are running at any given time. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object 10.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object MachineSetSpec defines the desired state of MachineSet status object MachineSetStatus defines the observed state of MachineSet 10.1.1. .spec Description MachineSetSpec defines the desired state of MachineSet Type object Property Type Description deletePolicy string DeletePolicy defines the policy used to identify nodes to delete when downscaling. Defaults to "Random". Valid values are "Random, "Newest", "Oldest" minReadySeconds integer MinReadySeconds is the minimum number of seconds for which a newly created machine should be ready. Defaults to 0 (machine will be considered available as soon as it is ready) replicas integer Replicas is the number of desired replicas. This is a pointer to distinguish between explicit zero and unspecified. Defaults to 1. selector object Selector is a label query over machines that should match the replica count. Label keys and values that must match in order to be controlled by this MachineSet. It must match the machine template's labels. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors template object Template is the object that describes the machine that will be created if insufficient replicas are detected. 10.1.2. .spec.selector Description Selector is a label query over machines that should match the replica count. Label keys and values that must match in order to be controlled by this MachineSet. It must match the machine template's labels. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 10.1.3. .spec.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 10.1.4. 
.spec.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 10.1.5. .spec.template Description Template is the object that describes the machine that will be created if insufficient replicas are detected. Type object Property Type Description metadata object Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Specification of the desired behavior of the machine. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status 10.1.6. .spec.template.metadata Description Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata Type object Property Type Description annotations object (string) Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations generateName string GenerateName is an optional prefix, used by the server, to generate a unique name ONLY IF the Name field has not been provided. If this field is used, the name returned to the client will be different than the name passed. This value will also be combined with a unique suffix. The provided value has the same validation rules as the Name field, and may be truncated by the length of the suffix required to make the value unique on the server. If this field is specified and the generated name exists, the server will NOT return a 409 - instead, it will either return 201 Created or 500 with Reason ServerTimeout indicating a unique name could not be found in the time allotted, and the client should retry (optionally after the time indicated in the Retry-After header). Applied only if Name is not specified. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency labels object (string) Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels name string Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names namespace string Namespace defines the space within each name must be unique. An empty namespace is equivalent to the "default" namespace, but "default" is the canonical representation. Not all objects are required to be scoped to a namespace - the value of this field for those objects will be empty. 
Must be a DNS_LABEL. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/namespaces ownerReferences array List of objects depended by this object. If ALL objects in the list have been deleted, this object will be garbage collected. If this object is managed by a controller, then an entry in this list will point to this controller, with the controller field set to true. There cannot be more than one managing controller. ownerReferences[] object OwnerReference contains enough information to let you identify an owning object. An owning object must be in the same namespace as the dependent, or be cluster-scoped, so there is no namespace field. 10.1.7. .spec.template.metadata.ownerReferences Description List of objects depended by this object. If ALL objects in the list have been deleted, this object will be garbage collected. If this object is managed by a controller, then an entry in this list will point to this controller, with the controller field set to true. There cannot be more than one managing controller. Type array 10.1.8. .spec.template.metadata.ownerReferences[] Description OwnerReference contains enough information to let you identify an owning object. An owning object must be in the same namespace as the dependent, or be cluster-scoped, so there is no namespace field. Type object Required apiVersion kind name uid Property Type Description apiVersion string API version of the referent. blockOwnerDeletion boolean If true, AND if the owner has the "foregroundDeletion" finalizer, then the owner cannot be deleted from the key-value store until this reference is removed. See https://kubernetes.io/docs/concepts/architecture/garbage-collection/#foreground-deletion for how the garbage collector interacts with this field and enforces the foreground deletion. Defaults to false. To set this field, a user needs "delete" permission of the owner, otherwise 422 (Unprocessable Entity) will be returned. controller boolean If true, this reference points to the managing controller. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names#names uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names#uids 10.1.9. .spec.template.spec Description Specification of the desired behavior of the machine. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status Type object Property Type Description lifecycleHooks object LifecycleHooks allow users to pause operations on the machine at certain predefined points within the machine lifecycle. metadata object ObjectMeta will autopopulate the Node created. Use this to indicate what labels, annotations, name prefix, etc., should be used when creating the Node. providerID string ProviderID is the identification ID of the machine provided by the provider. This field must match the provider ID as seen on the node object corresponding to this machine. This field is required by higher level consumers of cluster-api. Example use case is cluster autoscaler with cluster-api as provider. Clean-up logic in the autoscaler compares machines to nodes to find out machines at provider which could not get registered as Kubernetes nodes. 
With cluster-api as a generic out-of-tree provider for autoscaler, this field is required by autoscaler to be able to have a provider view of the list of machines. Another list of nodes is queried from the k8s apiserver and then a comparison is done to find out unregistered machines and are marked for delete. This field will be set by the actuators and consumed by higher level entities like autoscaler that will be interfacing with cluster-api as generic provider. providerSpec object ProviderSpec details Provider-specific configuration to use during node creation. taints array The list of the taints to be applied to the corresponding Node in additive manner. This list will not overwrite any other taints added to the Node on an ongoing basis by other entities. These taints should be actively reconciled e.g. if you ask the machine controller to apply a taint and then manually remove the taint the machine controller will put it back) but not have the machine controller remove any taints taints[] object The node this Taint is attached to has the "effect" on any pod that does not tolerate the Taint. 10.1.10. .spec.template.spec.lifecycleHooks Description LifecycleHooks allow users to pause operations on the machine at certain predefined points within the machine lifecycle. Type object Property Type Description preDrain array PreDrain hooks prevent the machine from being drained. This also blocks further lifecycle events, such as termination. preDrain[] object LifecycleHook represents a single instance of a lifecycle hook preTerminate array PreTerminate hooks prevent the machine from being terminated. PreTerminate hooks be actioned after the Machine has been drained. preTerminate[] object LifecycleHook represents a single instance of a lifecycle hook 10.1.11. .spec.template.spec.lifecycleHooks.preDrain Description PreDrain hooks prevent the machine from being drained. This also blocks further lifecycle events, such as termination. Type array 10.1.12. .spec.template.spec.lifecycleHooks.preDrain[] Description LifecycleHook represents a single instance of a lifecycle hook Type object Required name owner Property Type Description name string Name defines a unique name for the lifcycle hook. The name should be unique and descriptive, ideally 1-3 words, in CamelCase or it may be namespaced, eg. foo.example.com/CamelCase. Names must be unique and should only be managed by a single entity. owner string Owner defines the owner of the lifecycle hook. This should be descriptive enough so that users can identify who/what is responsible for blocking the lifecycle. This could be the name of a controller (e.g. clusteroperator/etcd) or an administrator managing the hook. 10.1.13. .spec.template.spec.lifecycleHooks.preTerminate Description PreTerminate hooks prevent the machine from being terminated. PreTerminate hooks be actioned after the Machine has been drained. Type array 10.1.14. .spec.template.spec.lifecycleHooks.preTerminate[] Description LifecycleHook represents a single instance of a lifecycle hook Type object Required name owner Property Type Description name string Name defines a unique name for the lifcycle hook. The name should be unique and descriptive, ideally 1-3 words, in CamelCase or it may be namespaced, eg. foo.example.com/CamelCase. Names must be unique and should only be managed by a single entity. owner string Owner defines the owner of the lifecycle hook. This should be descriptive enough so that users can identify who/what is responsible for blocking the lifecycle. 
This could be the name of a controller (e.g. clusteroperator/etcd) or an administrator managing the hook. 10.1.15. .spec.template.spec.metadata Description ObjectMeta will autopopulate the Node created. Use this to indicate what labels, annotations, name prefix, etc., should be used when creating the Node. Type object Property Type Description annotations object (string) Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations generateName string GenerateName is an optional prefix, used by the server, to generate a unique name ONLY IF the Name field has not been provided. If this field is used, the name returned to the client will be different than the name passed. This value will also be combined with a unique suffix. The provided value has the same validation rules as the Name field, and may be truncated by the length of the suffix required to make the value unique on the server. If this field is specified and the generated name exists, the server will NOT return a 409 - instead, it will either return 201 Created or 500 with Reason ServerTimeout indicating a unique name could not be found in the time allotted, and the client should retry (optionally after the time indicated in the Retry-After header). Applied only if Name is not specified. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency labels object (string) Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels name string Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names namespace string Namespace defines the space within each name must be unique. An empty namespace is equivalent to the "default" namespace, but "default" is the canonical representation. Not all objects are required to be scoped to a namespace - the value of this field for those objects will be empty. Must be a DNS_LABEL. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/namespaces ownerReferences array List of objects depended by this object. If ALL objects in the list have been deleted, this object will be garbage collected. If this object is managed by a controller, then an entry in this list will point to this controller, with the controller field set to true. There cannot be more than one managing controller. ownerReferences[] object OwnerReference contains enough information to let you identify an owning object. An owning object must be in the same namespace as the dependent, or be cluster-scoped, so there is no namespace field. 10.1.16. .spec.template.spec.metadata.ownerReferences Description List of objects depended by this object. If ALL objects in the list have been deleted, this object will be garbage collected. If this object is managed by a controller, then an entry in this list will point to this controller, with the controller field set to true. There cannot be more than one managing controller. 
Type array 10.1.17. .spec.template.spec.metadata.ownerReferences[] Description OwnerReference contains enough information to let you identify an owning object. An owning object must be in the same namespace as the dependent, or be cluster-scoped, so there is no namespace field. Type object Required apiVersion kind name uid Property Type Description apiVersion string API version of the referent. blockOwnerDeletion boolean If true, AND if the owner has the "foregroundDeletion" finalizer, then the owner cannot be deleted from the key-value store until this reference is removed. See https://kubernetes.io/docs/concepts/architecture/garbage-collection/#foreground-deletion for how the garbage collector interacts with this field and enforces the foreground deletion. Defaults to false. To set this field, a user needs "delete" permission of the owner, otherwise 422 (Unprocessable Entity) will be returned. controller boolean If true, this reference points to the managing controller. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names#names uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names#uids 10.1.18. .spec.template.spec.providerSpec Description ProviderSpec details Provider-specific configuration to use during node creation. Type object Property Type Description value `` Value is an inlined, serialized representation of the resource configuration. It is recommended that providers maintain their own versioned API types that should be serialized/deserialized from this field, akin to component config. 10.1.19. .spec.template.spec.taints Description The list of the taints to be applied to the corresponding Node in additive manner. This list will not overwrite any other taints added to the Node on an ongoing basis by other entities. These taints should be actively reconciled e.g. if you ask the machine controller to apply a taint and then manually remove the taint the machine controller will put it back) but not have the machine controller remove any taints Type array 10.1.20. .spec.template.spec.taints[] Description The node this Taint is attached to has the "effect" on any pod that does not tolerate the Taint. Type object Required effect key Property Type Description effect string Required. The effect of the taint on pods that do not tolerate the taint. Valid effects are NoSchedule, PreferNoSchedule and NoExecute. key string Required. The taint key to be applied to a node. timeAdded string TimeAdded represents the time at which the taint was added. It is only written for NoExecute taints. value string The taint value corresponding to the taint key. 10.1.21. .status Description MachineSetStatus defines the observed state of MachineSet Type object Property Type Description availableReplicas integer The number of available replicas (ready for at least minReadySeconds) for this MachineSet. errorMessage string errorReason string In the event that there is a terminal problem reconciling the replicas, both ErrorReason and ErrorMessage will be set. ErrorReason will be populated with a succinct value suitable for machine interpretation, while ErrorMessage will contain a more verbose string suitable for logging and human consumption. 
These fields should not be set for transitive errors that a controller faces that are expected to be fixed automatically over time (like service outages), but instead indicate that something is fundamentally wrong with the MachineTemplate's spec or the configuration of the machine controller, and that manual intervention is required. Examples of terminal errors would be invalid combinations of settings in the spec, values that are unsupported by the machine controller, or the responsible machine controller itself being critically misconfigured. Any transient errors that occur during the reconciliation of Machines can be added as events to the MachineSet object and/or logged in the controller's output. fullyLabeledReplicas integer The number of replicas that have labels matching the labels of the machine template of the MachineSet. observedGeneration integer ObservedGeneration reflects the generation of the most recently observed MachineSet. readyReplicas integer The number of ready replicas for this MachineSet. A machine is considered ready when the node has been created and is "Ready". replicas integer Replicas is the most recently observed number of replicas. 10.2. API endpoints The following API endpoints are available: /apis/machine.openshift.io/v1beta1/machinesets GET : list objects of kind MachineSet /apis/machine.openshift.io/v1beta1/namespaces/{namespace}/machinesets DELETE : delete collection of MachineSet GET : list objects of kind MachineSet POST : create a MachineSet /apis/machine.openshift.io/v1beta1/namespaces/{namespace}/machinesets/{name} DELETE : delete a MachineSet GET : read the specified MachineSet PATCH : partially update the specified MachineSet PUT : replace the specified MachineSet /apis/machine.openshift.io/v1beta1/namespaces/{namespace}/machinesets/{name}/scale GET : read scale of the specified MachineSet PATCH : partially update scale of the specified MachineSet PUT : replace scale of the specified MachineSet /apis/machine.openshift.io/v1beta1/namespaces/{namespace}/machinesets/{name}/status GET : read status of the specified MachineSet PATCH : partially update status of the specified MachineSet PUT : replace status of the specified MachineSet 10.2.1. /apis/machine.openshift.io/v1beta1/machinesets Table 10.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. 
The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind MachineSet Table 10.2. HTTP responses HTTP code Reponse body 200 - OK MachineSetList schema 401 - Unauthorized Empty 10.2.2. /apis/machine.openshift.io/v1beta1/namespaces/{namespace}/machinesets Table 10.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 10.4. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of MachineSet Table 10.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. 
If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 10.6. 
HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind MachineSet Table 10.7. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. 
See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 10.8. HTTP responses HTTP code Reponse body 200 - OK MachineSetList schema 401 - Unauthorized Empty HTTP method POST Description create a MachineSet Table 10.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.10. Body parameters Parameter Type Description body MachineSet schema Table 10.11. HTTP responses HTTP code Reponse body 200 - OK MachineSet schema 201 - Created MachineSet schema 202 - Accepted MachineSet schema 401 - Unauthorized Empty 10.2.3. /apis/machine.openshift.io/v1beta1/namespaces/{namespace}/machinesets/{name} Table 10.12. Global path parameters Parameter Type Description name string name of the MachineSet namespace string object name and auth scope, such as for teams and projects Table 10.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a MachineSet Table 10.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 10.15. Body parameters Parameter Type Description body DeleteOptions schema Table 10.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified MachineSet Table 10.17. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 10.18. HTTP responses HTTP code Reponse body 200 - OK MachineSet schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified MachineSet Table 10.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 10.20. Body parameters Parameter Type Description body Patch schema Table 10.21. HTTP responses HTTP code Reponse body 200 - OK MachineSet schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified MachineSet Table 10.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.23. Body parameters Parameter Type Description body MachineSet schema Table 10.24. 
HTTP responses HTTP code Reponse body 200 - OK MachineSet schema 201 - Created MachineSet schema 401 - Unauthorized Empty 10.2.4. /apis/machine.openshift.io/v1beta1/namespaces/{namespace}/machinesets/{name}/scale Table 10.25. Global path parameters Parameter Type Description name string name of the MachineSet namespace string object name and auth scope, such as for teams and projects Table 10.26. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read scale of the specified MachineSet Table 10.27. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 10.28. HTTP responses HTTP code Reponse body 200 - OK Scale schema 401 - Unauthorized Empty HTTP method PATCH Description partially update scale of the specified MachineSet Table 10.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 10.30. Body parameters Parameter Type Description body Patch schema Table 10.31. HTTP responses HTTP code Reponse body 200 - OK Scale schema 401 - Unauthorized Empty HTTP method PUT Description replace scale of the specified MachineSet Table 10.32. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.33. Body parameters Parameter Type Description body Scale schema Table 10.34. HTTP responses HTTP code Reponse body 200 - OK Scale schema 201 - Created Scale schema 401 - Unauthorized Empty 10.2.5. /apis/machine.openshift.io/v1beta1/namespaces/{namespace}/machinesets/{name}/status Table 10.35. Global path parameters Parameter Type Description name string name of the MachineSet namespace string object name and auth scope, such as for teams and projects Table 10.36. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified MachineSet Table 10.37. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 10.38. HTTP responses HTTP code Reponse body 200 - OK MachineSet schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified MachineSet Table 10.39. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 10.40. Body parameters Parameter Type Description body Patch schema Table 10.41. HTTP responses HTTP code Reponse body 200 - OK MachineSet schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified MachineSet Table 10.42. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.43. Body parameters Parameter Type Description body MachineSet schema Table 10.44. HTTP responses HTTP code Reponse body 200 - OK MachineSet schema 201 - Created MachineSet schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/machine_apis/machineset-machine-openshift-io-v1beta1 |
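The MachineSet endpoints documented above can be exercised directly with any HTTP client. The following is a minimal sketch rather than an excerpt from the reference: the openshift-machine-api namespace and the worker-us-east-1a MachineSet name are illustrative assumptions, and the commands presume you are already logged in with oc.
# Obtain a bearer token and the API server URL from the current oc session
TOKEN=$(oc whoami -t)
API=$(oc whoami --show-server)
# List MachineSets through the documented list endpoint, using the limit query parameter for server-side paging
curl -sk -H "Authorization: Bearer ${TOKEN}" \
  "${API}/apis/machine.openshift.io/v1beta1/namespaces/openshift-machine-api/machinesets?limit=50"
# Read, then patch, the scale subresource of a single MachineSet (the name is hypothetical)
curl -sk -H "Authorization: Bearer ${TOKEN}" \
  "${API}/apis/machine.openshift.io/v1beta1/namespaces/openshift-machine-api/machinesets/worker-us-east-1a/scale"
curl -sk -X PATCH -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/merge-patch+json" \
  -d '{"spec":{"replicas":3}}' \
  "${API}/apis/machine.openshift.io/v1beta1/namespaces/openshift-machine-api/machinesets/worker-us-east-1a/scale"
The same operations are available through higher-level tooling, for example oc scale machineset worker-us-east-1a -n openshift-machine-api --replicas=3, which calls these endpoints on your behalf.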
Migrating to Red Hat build of Apache Camel for Spring Boot | Migrating to Red Hat build of Apache Camel for Spring Boot Red Hat build of Apache Camel 4.0 | null | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/migrating_to_red_hat_build_of_apache_camel_for_spring_boot/index
Chapter 18. The awx-manage Utility | Chapter 18. The awx-manage Utility Use the awx-manage utility to access detailed internal information of automation controller. Commands for awx-manage must run as the awx user only. 18.1. Inventory Import awx-manage is a mechanism by which an automation controller administrator can import inventory directly into automation controller, for those who cannot use Custom Inventory Scripts. To use awx-manage properly, you must first create an inventory in automation controller to use as the destination for the import. For help with awx-manage , run the following command: awx-manage inventory_import [--help] The inventory_import command synchronizes an automation controller inventory object with a text-based inventory file, dynamic inventory script, or a directory of one or more, as supported by core Ansible. When running this command, specify either an --inventory-id or --inventory-name , and the path to the Ansible inventory source ( --source ). awx-manage inventory_import --source=/ansible/inventory/ --inventory-id=1 By default, inventory data already stored in automation controller blends with data from the external source. To use only the external data, specify --overwrite . To specify that any existing hosts get variable data exclusively from the --source , specify --overwrite_vars . The default behavior adds any new variables from the external source, overwriting keys that already exist, but preserving any variables that were not sourced from the external data source. awx-manage inventory_import --source=/ansible/inventory/ --inventory-id=1 --overwrite Note Edits and additions to Inventory host variables persist beyond an inventory synchronization as long as --overwrite_vars is not set. 18.2. Cleanup of old data awx-manage has a variety of commands used to clean old data from automation controller. Automation controller administrators can use the automation controller Management Jobs interface for access or use the command line. awx-manage cleanup_jobs [--help] This permanently deletes the job details and job output for jobs older than a specified number of days. awx-manage cleanup_activitystream [--help] This permanently deletes any Activity stream data older than a specific number of days. 18.3. Cluster management For more information on the awx-manage provision_instance and awx-manage deprovision_instance commands, see Clustering . Note Do not run other awx-manage commands unless instructed by Ansible Support. 18.4. Token and session management Automation controller supports the following commands for OAuth2 token management: create_oauth2_token revoke_oauth2_tokens cleartokens expire_sessions clearsessions 18.4.1. create_oauth2_token Use the following command to create OAuth2 tokens (specify the username for example_user ): USD awx-manage create_oauth2_token --user example_user New OAuth2 token for example_user: j89ia8OO79te6IAZ97L7E8bMgXCON2 Ensure that you provide a valid user when creating tokens. Otherwise, an error message that you attempted to issue the command without specifying a user, or supplied a username that does not exist, is displayed. 18.4.2. revoke_oauth2_tokens Use this command to revoke OAuth2 tokens, both application tokens and personal access tokens (PAT). It revokes all application tokens (but not their associated refresh tokens), and revokes all personal access tokens. However, you can also specify a user for whom to revoke all tokens. 
To revoke all existing OAuth2 tokens, use the following command: USD awx-manage revoke_oauth2_tokens To revoke all OAuth2 tokens and their refresh tokens, use the following command: USD awx-manage revoke_oauth2_tokens --revoke_refresh To revoke all OAuth2 tokens for the user with id=example_user (specify the username for example_user ): USD awx-manage revoke_oauth2_tokens --user example_user To revoke all OAuth2 tokens and refresh tokens for the user with id=example_user : USD awx-manage revoke_oauth2_tokens --user example_user --revoke_refresh 18.4.3. cleartokens Use this command to clear tokens which have already been revoked. For more information, see cleartokens in Django's OAuth Toolkit documentation. 18.4.4. expire_sessions Use this command to terminate all sessions or all sessions for a specific user. Consider using this command when a user changes roles in an organization, is removed from assorted groups in LDAP/AD, or the administrator wants to ensure the user can no longer execute jobs due to membership in these groups. USD awx-manage expire_sessions This command terminates all sessions by default. The users associated with those sessions are logged out. To expire only the sessions of a specific user, you can pass their username using the --user flag (replace example_user with the username in the following example): USD awx-manage expire_sessions --user example_user 18.4.5. clearsessions Use this command to delete all sessions that have expired. For more information, see Clearing the session store in Django's OAuth Toolkit documentation. 18.5. Analytics gathering Use this command to gather analytics on demand outside of the predefined window (the default is 4 hours): USD awx-manage gather_analytics --ship For customers with disconnected environments who want to collect usage information about unique hosts automated across a time period, use this command: awx-manage host_metric --since YYYY-MM-DD --until YYYY-MM-DD --json The parameters --since and --until specify date ranges and are optional, but at least one of them must be provided. The --json flag specifies the output format and is optional. | [
"awx-manage inventory_import [--help]",
"awx-manage inventory_import --source=/ansible/inventory/ --inventory-id=1",
"awx-manage inventory_import --source=/ansible/inventory/ --inventory-id=1 --overwrite",
"awx-manage cleanup_jobs [--help]",
"awx-manage cleanup_activitystream [--help]",
"awx-manage create_oauth2_token --user example_user New OAuth2 token for example_user: j89ia8OO79te6IAZ97L7E8bMgXCON2",
"awx-manage revoke_oauth2_tokens",
"awx-manage revoke_oauth2_tokens --revoke_refresh",
"awx-manage revoke_oauth2_tokens --user example_user",
"awx-manage revoke_oauth2_tokens --user example_user --revoke_refresh",
"awx-manage expire_sessions",
"awx-manage expire_sessions --user example_user",
"awx-manage gather_analytics --ship",
"awx-manage host_metric --since YYYY-MM-DD --until YYYY-MM-DD --json"
] | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/automation_controller_administration_guide/assembly-controller-awx-manage-utility |
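A brief sketch of how the reporting commands from this chapter might be combined on a controller node follows. It is illustrative only: the dates, the output path, the use of sudo to switch to the awx user, and the optional jq pretty-printing are assumptions rather than requirements stated above.
# Run as the awx user, as required for all awx-manage commands
sudo -u awx awx-manage gather_analytics --ship
# Export unique-host automation data for one quarter and keep a local JSON copy
sudo -u awx awx-manage host_metric --since 2024-01-01 --until 2024-03-31 --json > /tmp/host_metric_q1.json
# Optionally pretty-print the report if jq is installed
jq . /tmp/host_metric_q1.json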
Chapter 5. Removing the OpenShift Serverless Logic Operator | Chapter 5. Removing the OpenShift Serverless Logic Operator If you need to remove OpenShift Serverless Logic from your cluster, you can do so by manually removing the OpenShift Serverless Logic Operator and other OpenShift Serverless Logic components. You can delete the OpenShift Serverless Logic Operator by using the web console. For more information, see Deleting Operators from a cluster using the web console and Refreshing failing subscriptions . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.35/html/removing_openshift_serverless/removing-openshift-serverless-logic-operator
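The chapter above removes the Operator through the web console. As a hedged alternative, the following sketch shows the generic OLM uninstall pattern from the CLI; the openshift-serverless-logic namespace is an assumption, and the Subscription and ClusterServiceVersion names are placeholders you must replace with the values from your installation.
# Locate the Subscription for the Operator and the ClusterServiceVersion it installed (namespace is assumed)
oc get subscriptions.operators.coreos.com -n openshift-serverless-logic
oc get subscription.operators.coreos.com <subscription-name> -n openshift-serverless-logic -o jsonpath='{.status.installedCSV}{"\n"}'
# Delete the Subscription first, then the recorded ClusterServiceVersion, to remove the Operator
oc delete subscription.operators.coreos.com <subscription-name> -n openshift-serverless-logic
oc delete clusterserviceversion <installed-csv-name> -n openshift-serverless-logic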
14.4. Adding Caching to Application Code | 14.4. Adding Caching to Application Code Caching can be added to each application by utilizing the Spring annotations found in the Spring Cache Abstraction . Adding a Cache Entry To add entries to the cache, add the @Cacheable annotation to the selected methods. This annotation will add any returned values to the indicated cache. For instance, consider a method that returns a Book based on a particular key. By annotating this method with @Cacheable : Any Book instances returned from findBook(Integer bookId) will be placed in a named cache books , using the bookId as the value's key. Important If the key attribute is not specified, then Spring will generate a hash from the supplied arguments and use this generated value as the cache key. If your application needs to reference the entries directly, it is recommended to include the key attribute so that entries can be easily obtained. Deleting a Cache Entry To remove entries from the cache, annotate the desired methods with @CacheEvict . This annotation can be configured to evict all entries in a cache, or to affect only entries with the indicated key. Consider the following examples: | [
"@Cacheable(value = \"books\", key = \"#bookId\") public Book findBook(Integer bookId) {...}",
"// Evict all entries in the \"books\" cache @CacheEvict (value=\"books\", key = \"#bookId\", allEntries = true) public void deleteBookAllEntries() {...} // Evict any entries in the \"books\" cache that match the passed in bookId @CacheEvict (value=\"books\", key = \"#bookId\") public void deleteBook(Integer bookId) {...]}"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/developer_guide/adding_caching_to_application_code |
A.4. kvm_stat | A.4. kvm_stat The kvm_stat command is a Python script which retrieves runtime statistics from the kvm kernel module. The kvm_stat command can be used to diagnose guest behavior visible to kvm , in particular performance-related issues with guests. Currently, the reported statistics are for the entire system; the behavior of all running guests is reported. To run this script, you need to install the qemu-kvm-tools package. For more information, see Section 2.2, "Installing Virtualization Packages on an Existing Red Hat Enterprise Linux System" . The kvm_stat command requires that the kvm kernel module is loaded and debugfs is mounted. If either of these features is not enabled, the command will output the required steps to enable debugfs or the kvm module. For example: Mount debugfs if required: kvm_stat Output The kvm_stat command outputs statistics for all guests and the host. The output is updated until the command is terminated (using Ctrl + c or the q key). Note that the output you see on your screen may differ. For an explanation of the output elements, see the definitions below. Explanation of variables: kvm_ack_irq - Number of interrupt controller (PIC/IOAPIC) interrupt acknowledgements. kvm_age_page - Number of page age iterations by memory management unit (MMU) notifiers. kvm_apic - Number of APIC register accesses. kvm_apic_accept_irq - Number of interrupts accepted into the local APIC. kvm_apic_ipi - Number of inter-processor interrupts. kvm_async_pf_completed - Number of completions of asynchronous page faults. kvm_async_pf_doublefault - Number of asynchronous page fault halts. kvm_async_pf_not_present - Number of initializations of asynchronous page faults. kvm_async_pf_ready - Number of completions of asynchronous page faults. kvm_cpuid - Number of CPUID instructions executed. kvm_cr - Number of trapped and emulated control register (CR) accesses (CR0, CR3, CR4, CR8). kvm_emulate_insn - Number of emulated instructions. kvm_entry - Number of VM entries. kvm_eoi - Number of Advanced Programmable Interrupt Controller (APIC) end of interrupt (EOI) notifications. kvm_exit - Number of VM-exits. kvm_exit (NAME) - Individual exits that are processor-specific. See your processor's documentation for more information. kvm_fpu - Number of KVM floating-point unit (FPU) reloads. kvm_hv_hypercall - Number of Hyper-V hypercalls. kvm_hypercall - Number of non-Hyper-V hypercalls. kvm_inj_exception - Number of exceptions injected into the guest. kvm_inj_virq - Number of interrupts injected into the guest. kvm_invlpga - Number of INVLPGA instructions intercepted. kvm_ioapic_set_irq - Number of interrupt level changes to the virtual IOAPIC controller. kvm_mmio - Number of emulated memory-mapped I/O (MMIO) operations. kvm_msi_set_irq - Number of message-signaled interrupts (MSI). kvm_msr - Number of model-specific register (MSR) accesses. kvm_nested_intercepts - Number of L1 ⇒ L2 nested SVM switches. kvm_nested_vmrun - Number of L1 ⇒ L2 nested SVM switches. kvm_nested_intr_vmexit - Number of nested VM-exit injections due to interrupt window. kvm_nested_vmexit - Exits to hypervisor while executing nested (L2) guest. kvm_nested_vmexit_inject - Number of L2 ⇒ L1 nested switches. kvm_page_fault - Number of page faults handled by the hypervisor. kvm_pic_set_irq - Number of interrupt level changes to the virtual programmable interrupt controller (PIC). kvm_pio - Number of emulated programmed I/O (PIO) operations.
kvm_pv_eoi - Number of paravirtual end of interrupt (EOI) events. kvm_set_irq - Number of interrupt level changes at the generic IRQ controller level (counts PIC, IOAPIC and MSI). kvm_skinit - Number of SVM SKINIT exits. kvm_track_tsc - Number of time stamp counter (TSC) writes. kvm_try_async_get_page - Number of asynchronous page fault attempts. kvm_update_master_clock - Number of pvclock masterclock updates. kvm_userspace_exit - Number of exits to user space. kvm_write_tsc_offset - Number of TSC offset writes. vcpu_match_mmio - Number of SPTE cached memory-mapped I/O (MMIO) hits. The output information from the kvm_stat command is exported by the KVM hypervisor as pseudo files which are located in the /sys/kernel/debug/tracing/events/kvm/ directory. | [
"kvm_stat Please mount debugfs ('mount -t debugfs debugfs /sys/kernel/debug') and ensure the kvm modules are loaded",
"mount -t debugfs debugfs /sys/kernel/debug",
"kvm_stat kvm statistics kvm_exit 17724 66 Individual exit reasons follow, see kvm_exit (NAME) for more information. kvm_exit(CLGI) 0 0 kvm_exit(CPUID) 0 0 kvm_exit(CR0_SEL_WRITE) 0 0 kvm_exit(EXCP_BASE) 0 0 kvm_exit(FERR_FREEZE) 0 0 kvm_exit(GDTR_READ) 0 0 kvm_exit(GDTR_WRITE) 0 0 kvm_exit(HLT) 11 11 kvm_exit(ICEBP) 0 0 kvm_exit(IDTR_READ) 0 0 kvm_exit(IDTR_WRITE) 0 0 kvm_exit(INIT) 0 0 kvm_exit(INTR) 0 0 kvm_exit(INVD) 0 0 kvm_exit(INVLPG) 0 0 kvm_exit(INVLPGA) 0 0 kvm_exit(IOIO) 0 0 kvm_exit(IRET) 0 0 kvm_exit(LDTR_READ) 0 0 kvm_exit(LDTR_WRITE) 0 0 kvm_exit(MONITOR) 0 0 kvm_exit(MSR) 40 40 kvm_exit(MWAIT) 0 0 kvm_exit(MWAIT_COND) 0 0 kvm_exit(NMI) 0 0 kvm_exit(NPF) 0 0 kvm_exit(PAUSE) 0 0 kvm_exit(POPF) 0 0 kvm_exit(PUSHF) 0 0 kvm_exit(RDPMC) 0 0 kvm_exit(RDTSC) 0 0 kvm_exit(RDTSCP) 0 0 kvm_exit(READ_CR0) 0 0 kvm_exit(READ_CR3) 0 0 kvm_exit(READ_CR4) 0 0 kvm_exit(READ_CR8) 0 0 kvm_exit(READ_DR0) 0 0 kvm_exit(READ_DR1) 0 0 kvm_exit(READ_DR2) 0 0 kvm_exit(READ_DR3) 0 0 kvm_exit(READ_DR4) 0 0 kvm_exit(READ_DR5) 0 0 kvm_exit(READ_DR6) 0 0 kvm_exit(READ_DR7) 0 0 kvm_exit(RSM) 0 0 kvm_exit(SHUTDOWN) 0 0 kvm_exit(SKINIT) 0 0 kvm_exit(SMI) 0 0 kvm_exit(STGI) 0 0 kvm_exit(SWINT) 0 0 kvm_exit(TASK_SWITCH) 0 0 kvm_exit(TR_READ) 0 0 kvm_exit(TR_WRITE) 0 0 kvm_exit(VINTR) 1 1 kvm_exit(VMLOAD) 0 0 kvm_exit(VMMCALL) 0 0 kvm_exit(VMRUN) 0 0 kvm_exit(VMSAVE) 0 0 kvm_exit(WBINVD) 0 0 kvm_exit(WRITE_CR0) 2 2 kvm_exit(WRITE_CR3) 0 0 kvm_exit(WRITE_CR4) 0 0 kvm_exit(WRITE_CR8) 0 0 kvm_exit(WRITE_DR0) 0 0 kvm_exit(WRITE_DR1) 0 0 kvm_exit(WRITE_DR2) 0 0 kvm_exit(WRITE_DR3) 0 0 kvm_exit(WRITE_DR4) 0 0 kvm_exit(WRITE_DR5) 0 0 kvm_exit(WRITE_DR6) 0 0 kvm_exit(WRITE_DR7) 0 0 kvm_entry 17724 66 kvm_apic 13935 51 kvm_emulate_insn 13924 51 kvm_mmio 13897 50 varl-kvm_eoi 3222 12 kvm_inj_virq 3222 12 kvm_apic_accept_irq 3222 12 kvm_pv_eoi 3184 12 kvm_fpu 376 2 kvm_cr 177 1 kvm_apic_ipi 278 1 kvm_msi_set_irq 295 0 kvm_pio 79 0 kvm_userspace_exit 52 0 kvm_set_irq 50 0 kvm_pic_set_irq 50 0 kvm_ioapic_set_irq 50 0 kvm_ack_irq 25 0 kvm_cpuid 90 0 kvm_msr 12 0"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-Troubleshooting-kvm_stat |
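Because the statistics are exported as trace event pseudo files, you can check the prerequisites and inspect the raw event definitions directly. The sketch below uses only paths named in this section; the kvm_exit event chosen for the last command is just one example of the counters listed above.
# Confirm the kvm module is loaded and that debugfs is mounted, mounting it if necessary
lsmod | grep kvm
mountpoint -q /sys/kernel/debug || mount -t debugfs debugfs /sys/kernel/debug
# Inspect the pseudo files that back the kvm_stat counters
ls /sys/kernel/debug/tracing/events/kvm/
cat /sys/kernel/debug/tracing/events/kvm/kvm_exit/format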
Chapter 1. OpenShift Container Platform 4.15 release notes | Chapter 1. OpenShift Container Platform 4.15 release notes Red Hat OpenShift Container Platform provides developers and IT organizations with a hybrid cloud application platform for deploying both new and existing applications on secure, scalable resources with minimal configuration and management. OpenShift Container Platform supports a wide selection of programming languages and frameworks, such as Java, JavaScript, Python, Ruby, and PHP. Built on Red Hat Enterprise Linux (RHEL) and Kubernetes, OpenShift Container Platform provides a more secure and scalable multitenant operating system for today's enterprise-class applications, while delivering integrated application runtimes and libraries. OpenShift Container Platform enables organizations to meet security, privacy, compliance, and governance requirements. 1.1. About this release OpenShift Container Platform ( RHSA-2023:7198 ) is now available. This release uses Kubernetes 1.28 with CRI-O runtime. New features, changes, and known issues that pertain to OpenShift Container Platform 4.15 are included in this topic. OpenShift Container Platform 4.15 clusters are available at https://console.redhat.com/openshift . With the Red Hat OpenShift Cluster Manager application for OpenShift Container Platform, you can deploy OpenShift Container Platform clusters to either on-premises or cloud environments. OpenShift Container Platform 4.15 is supported on Red Hat Enterprise Linux (RHEL) 8.8 and a later version of RHEL 8 that is released before End of Life of OpenShift Container Platform 4.15. OpenShift Container Platform 4.15 is also supported on Red Hat Enterprise Linux CoreOS (RHCOS) 4.15. To understand RHEL versions used by RHCOS, see RHEL Versions Utilized by Red Hat Enterprise Linux CoreOS (RHCOS) and OpenShift Container Platform (Knowledgebase article). You must use RHCOS machines for the control plane, and you can use either RHCOS or RHEL for compute machines. For OpenShift Container Platform 4.12 on x86_64 architecture, Red Hat has added a 6-month Extended Update Support (EUS) phase that extends the total available lifecycle from 18 months to 24 months. For OpenShift Container Platform 4.12 running on 64-bit ARM ( aarch64 ), IBM Power(R) ( ppc64le ), and IBM Z(R) ( s390x ) architectures, the EUS lifecycle remains at 18 months. Starting with OpenShift Container Platform 4.14, each EUS phase for even numbered releases on all supported architectures, including x86_64 , 64-bit ARM ( aarch64 ), IBM Power(R) ( ppc64le ), and IBM Z(R) ( s390x ) architectures, has a total available lifecycle of 24 months. Starting with OpenShift Container Platform 4.14, Red Hat offers a 12-month additional EUS add-on, denoted as Additional EUS Term 2 , that extends the total available lifecycle from 24 months to 36 months. The Additional EUS Term 2 is available on all architecture variants of OpenShift Container Platform. For more information about this support, see the Red Hat OpenShift Container Platform Life Cycle Policy . Maintenance support ends for version 4.12 on 17 July 2024 and goes to extended update support phase. For more information, see the Red Hat OpenShift Container Platform Life Cycle Policy . Commencing with the 4.15 release, Red Hat is simplifying the administration and management of Red Hat shipped cluster Operators with the introduction of three new life cycle classifications; Platform Aligned, Platform Agnostic, and Rolling Stream. 
These life cycle classifications provide additional ease and transparency for cluster administrators to understand the life cycle policies of each Operator and form cluster maintenance and upgrade plans with predictable support boundaries. For more information, see OpenShift Operator Life Cycles . OpenShift Container Platform is designed for FIPS. When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures. For more information about the NIST validation program, see Cryptographic Module Validation Program . For the latest NIST status for the individual versions of RHEL cryptographic libraries that have been submitted for validation, see Compliance Activities and Government Standards . 1.2. OpenShift Container Platform layered and dependent component support and compatibility The scope of support for layered and dependent components of OpenShift Container Platform changes independently of the OpenShift Container Platform version. To determine the current support status and compatibility for an add-on, refer to its release notes. For more information, see the Red Hat OpenShift Container Platform Life Cycle Policy . 1.3. New features and enhancements This release adds improvements related to the following components and concepts. 1.3.1. Red Hat Enterprise Linux CoreOS (RHCOS) 1.3.1.1. RHCOS now uses RHEL 9.2 RHCOS now uses Red Hat Enterprise Linux (RHEL) 9.2 packages in OpenShift Container Platform 4.15. These packages ensure that your OpenShift Container Platform instance receives the latest fixes, features, enhancements, hardware support, and driver updates. 1.3.1.2. Support for iSCSI devices (Technology Preview) RHCOS now supports the iscsi_bft driver, letting you boot directly from iSCSI devices that work with the iSCSI Boot Firmware Table (iBFT), as a Technology Preview. This lets you target iSCSI devices as the root disk for installation. For more information, see the RHEL documentation . 1.3.2. Installation and update 1.3.2.1. Encrypting Azure storage account during installation You can now encrypt Azure storage accounts during installation by providing the installation program with a customer managed encryption key. See Installation configuration parameters for descriptions of the parameters required to encrypt Azure storage accounts. 1.3.2.2. RHOSP integration into the Cluster CAPI Operator (Tech Preview) If you enable the TechPreviewNoUpgrade feature flag, the Cluster CAPI Operator deploys the Cluster API Provider OpenStack and manages its lifecycle. The Cluster CAPI Operator automatically creates Cluster and OpenStackCluster resources for the current OpenShift Container Platform cluster. It is now possible to configure the Cluster API Machine and OpenStackMachine resources similarly to how Machine API resources are configured. It is important to note that while Cluster API resources are functionally equivalent to Machine API resources, structurally they are not identical. 1.3.2.3. IBM Cloud and user-managed encryption You can now specify your own IBM(R) Key Protect for IBM Cloud(R) root key as part of the installation process. This root key is used to encrypt the root (boot) volume of control plane and compute machines, and the persistent volumes (data volumes) that are provisioned after the cluster is deployed. 
For more information, see User-managed encryption for IBM Cloud . 1.3.2.4. Installing a cluster on IBM Cloud with limited internet access You can now install a cluster on IBM Cloud(R) in an environment with limited internet access, such as a disconnected or restricted network cluster. With this type of installation, you create a registry that mirrors the contents of the OpenShift Container Platform installation images. You can create this registry on a mirror host, which can access both the internet and your restricted network. For more information, see Installing a cluster on IBM Cloud in a restricted network . 1.3.2.5. Installing a cluster on AWS to extend nodes to Wavelength Zones You can quickly install an OpenShift Container Platform cluster in Amazon Web Services (AWS) Wavelength Zones by setting the zone names in the edge compute pool of the install-config.yaml file, or install a cluster in an existing VPC with Wavelength Zone subnets. You can also perform postinstallation tasks to extend an existing OpenShift Container Platform cluster on AWS to use AWS Wavelength Zones. For more information, see Installing a cluster on AWS with compute nodes on AWS Wavelength Zones and Extend existing clusters to use AWS Local Zones or Wavelength Zones . 1.3.2.6. Customizing the cluster network MTU on AWS deployments Before you deploy a cluster on AWS Local Zones infrastructure, you can customize the cluster network maximum transmission unit (MTU) for your cluster network to meet the needs of your infrastructure. You can customize the MTU for a cluster by specifying the networking.clusterNetworkMTU parameter in the install-config.yaml configuration file. For more information, see Customizing the cluster network MTU . 1.3.2.7. Installing a cluster on AWS with compute nodes on AWS Outposts In OpenShift Container Platform version 4.14, you could install a cluster on AWS with compute nodes running in AWS Outposts as a Technology Preview. In OpenShift Container Platform 4.15, you can install a cluster on AWS into an existing VPC and provision compute nodes on AWS Outposts as a postinstallation configuration task. For more information, see Installing a cluster on AWS into an existing VPC and Extending an AWS VPC cluster into an AWS Outpost . 1.3.2.8. Nutanix and fault tolerant deployments By default, the installation program installs control plane and compute machines into a single Nutanix Prism Element (cluster). To improve the fault tolerance of your OpenShift Container Platform cluster, you can now specify that these machines be distributed across multiple Nutanix clusters by configuring failure domains. For more information, see Fault tolerant deployments using multiple Prism Elements . 1.3.2.9. OpenShift Container Platform on 64-bit ARM OpenShift Container Platform 4.15 now supports the ability to enable 64k page sizes in the RHCOS kernel using the Machine Config Operator (MCO). This setting is exclusive to machines with 64-bit ARM architectures. For more information, see the Machine configuration tasks documentation. 1.3.2.10. Optional OLM cluster capability In OpenShift Container Platform 4.15, you can disable the Operator Lifecycle Manager (OLM) capability during installation. For further information, see Operator Lifecycle Manager capability . 1.3.2.11. Deploying Red Hat OpenStack Platform (RHOSP) with root volume and etcd on local disk (Technology Preview) You can now move etcd from a root volume (Cinder) to a dedicated ephemeral local disk as a Day 2 deployment. 
With this Technology Preview feature, you can resolve and prevent performance issues of your RHOSP installation. For more information, see Deploying on OpenStack with rootVolume and etcd on local disk . 1.3.2.12. Configure vSphere integration with the Agent-based Installer You can now configure your cluster to use vSphere while creating the install-config.yaml file for an Agent-based Installation. For more information, see Additional VMware vSphere configuration parameters . 1.3.2.13. Additional bare metal configurations during Agent-based installation You can now make additional configurations for the bare metal platform while creating the install-config.yaml file for an Agent-based Installation. These new options include host configuration, network configuration, and baseboard management controller (BMC) details. These fields are not used during the initial provisioning of the cluster, but they eliminate the need to set the fields after installation. For more information, see Additional bare metal configuration parameters for the Agent-based Installer . 1.3.2.14. Use the Dell iDRAC BMC to configure a RAID during installer-provisioned installation You can now use the Dell iDRAC baseboard management controller (BMC) with the Redfish protocol to configure a redundant array of independent disks (RAID) for the bare metal platform during an installer-provisioned installation. For more information, see Optional: Configuring the RAID . 1.3.3. Postinstallation configuration 1.3.3.1. OpenShift Container Platform clusters with multi-architecture compute machines On OpenShift Container Platform 4.15 clusters with multi-architecture compute machines, you can now enable 64k page sizes in the Red Hat Enterprise Linux CoreOS (RHCOS) kernel on the 64-bit ARM compute machines in your cluster. For more information on setting this parameter, see Enabling 64k pages on the Red Hat Enterprise Linux CoreOS (RHCOS) kernel . 1.3.4. Web console 1.3.4.1. Administrator perspective This release introduces the following updates to the Administrator perspective of the web console: Enable and disable the tailing to Pod log viewer to minimize load time. View recommended values for VerticalPodAutoscaler on the Deployment page. 1.3.4.1.1. Node uptime information With this update, you can enable the ability to view additional node uptime information to track node restarts or failures. Navigate to the Compute Nodes page, click Manage columns , and then select Uptime . 1.3.4.1.2. Dynamic plugin enhancements With this update, you can add a new details item to the default resource summary on the Details page by using console.resource/details-item . The OpenShift Container Platform release also adds example implementation for annotation, label and the delete modal to the CronTab dynamic plugin. For more information, see Dynamic plugin reference For more information about console.resource/details-item , see OpenShift Container Platform console API . 1.3.4.1.3. OperatorHub support for Microsoft Entra Workload ID With this release, OperatorHub detects when a OpenShift Container Platform cluster running on Azure is configured for Microsoft Entra Workload ID. When detected, a "Cluster in Workload Identity / Federated Identity Mode" notification is displayed with additional instructions before installing an Operator to ensure it runs correctly. The Operator Installation page is also modified to add fields for the required Azure credentials information. 
For the updated step for the Install Operator page, see Installing from OperatorHub using the web console . 1.3.4.2. Developer Perspective This release introduces the following updates to the Developer perspective of the web console: Pipeline history and logs based on the data from Tekton Results are available in the dashboard without requiring PipelineRun CRs on the cluster. 1.3.4.2.1. Software Supply Chain Enhancements The PipelineRun Details page in the Developer or Administrator perspective of the web console provides an enhanced visual representation of PipelineRuns within a Project. For more information, see Red Hat OpenShift Pipelines . 1.3.4.2.2. Red Hat Developer Hub in the web console With this update, a quick start is now available for you to learn more about how to install and use the developer hub. For more information, see Product Documentation for Red Hat Developer Hub . 1.3.4.2.3. builds for OpenShift Container Platform is supported in the web console With this update, builds for OpenShift Container Platform 1.0 is supported in the web console. Builds is an extensible build framework based on the Shipwright project . You can use builds for OpenShift Container Platform to build container images on an OpenShift Container Platform cluster. For more information, see builds for OpenShift Container Platform . 1.3.5. IBM Z and IBM LinuxONE With this release, IBM Z(R) and IBM(R) LinuxONE are now compatible with OpenShift Container Platform 4.15. You can perform the installation with z/VM, LPAR, or Red Hat Enterprise Linux (RHEL) Kernel-based Virtual Machine (KVM). For installation instructions, see the following documentation: Installing a cluster on IBM Z and IBM LinuxONE Important Compute nodes must run Red Hat Enterprise Linux CoreOS (RHCOS). 1.3.5.1. IBM Z and IBM LinuxONE notable enhancements The IBM Z(R) and IBM(R) LinuxONE release on OpenShift Container Platform 4.15 adds improvements and new capabilities to OpenShift Container Platform components and concepts. This release introduces support for the following features on IBM Z(R) and IBM(R) LinuxONE: Agent-based Installer cert-manager Operator for Red Hat OpenShift s390x control plane with x86_64 multi-architecture compute nodes 1.3.5.2. Installing a cluster in an LPAR on IBM Z and IBM LinuxONE OpenShift Container Platform now supports user-provisioned installation of OpenShift Container Platform 4.15 in a logical partition (LPAR) on IBM Z and IBM LinuxONE. For installation instructions, see the following documentation: Installing a cluster in an LPAR on IBM Z(R) and IBM(R) LinuxONE Installing a cluster in an LPAR on IBM Z(R) and IBM(R) LinuxONE in a restricted network 1.3.6. IBM Power IBM Power(R) is now compatible with OpenShift Container Platform 4.15. For installation instructions, see the following documentation: Installing a cluster on IBM Power(R) Installing a cluster on IBM Power(R) in a restricted network Important Compute nodes must run Red Hat Enterprise Linux CoreOS (RHCOS). 1.3.6.1. IBM Power notable enhancements The IBM Power(R) release on OpenShift Container Platform 4.15 adds improvements and new capabilities to OpenShift Container Platform components.
This release introduces support for the following features on IBM Power(R): Agent-based Installer cert-manager Operator for Red Hat OpenShift IBM Power(R) Virtual Server Block CSI Driver Operator Installer-provisioned Infrastructure Enablement for IBM Power(R) Virtual Server Multi-architecture IBM Power(R) control plane with support of Intel and IBM Power(R) workers nx-gzip for Power10 (Hardware Acceleration) The openshift-install utility to support various SMT levels on IBM Power(R) (Hardware Acceleration) 1.3.7. IBM Power, IBM Z, and IBM LinuxONE support matrix Starting in OpenShift Container Platform 4.14, Extended Update Support (EUS) is extended to the IBM Power(R) and the IBM Z(R) platform. For more information, see the OpenShift EUS Overview . Table 1.1. OpenShift Container Platform features Feature IBM Power(R) IBM Z(R) and IBM(R) LinuxONE Alternate authentication providers Supported Supported Agent-based Installer Supported Supported Assisted Installer Supported Supported Automatic Device Discovery with Local Storage Operator Unsupported Supported Automatic repair of damaged machines with machine health checking Unsupported Unsupported Cloud controller manager for IBM Cloud(R) Supported Unsupported Controlling overcommit and managing container density on nodes Unsupported Unsupported Cron jobs Supported Supported Descheduler Supported Supported Egress IP Supported Supported Encrypting data stored in etcd Supported Supported FIPS cryptography Supported Supported Helm Supported Supported Horizontal pod autoscaling Supported Supported Hosted control planes (Technology Preview) Supported Supported IBM Secure Execution Unsupported Supported IBM Power(R) Virtual Server Block CSI Driver Operator Supported Unsupported Installer-provisioned Infrastructure Enablement for IBM Power(R) Virtual Server Supported Unsupported Installing on a single node Supported Supported IPv6 Supported Supported Monitoring for user-defined projects Supported Supported Multi-architecture compute nodes Supported Supported Multipathing Supported Supported Network-Bound Disk Encryption - External Tang Server Supported Supported Non- volatile memory express drives (NVMe) Supported Unsupported oc-mirror plugin Supported Supported OpenShift CLI ( oc ) plugins Supported Supported Operator API Supported Supported OpenShift Virtualization Unsupported Unsupported OVN-Kubernetes, including IPsec encryption Supported Supported PodDisruptionBudget Supported Supported Precision Time Protocol (PTP) hardware Unsupported Unsupported Red Hat OpenShift Local Unsupported Unsupported Scheduler profiles Supported Supported Stream Control Transmission Protocol (SCTP) Supported Supported Support for multiple network interfaces Supported Supported Three-node cluster support Supported Supported Topology Manager Supported Unsupported z/VM Emulated FBA devices on SCSI disks Unsupported Supported 4K FCP block device Supported Supported Table 1.2. 
Persistent storage options Feature IBM Power(R) IBM Z(R) and IBM(R) LinuxONE Persistent storage using iSCSI Supported [1] Supported [1] , [2] Persistent storage using local volumes (LSO) Supported [1] Supported [1] , [2] Persistent storage using hostPath Supported [1] Supported [1] , [2] Persistent storage using Fibre Channel Supported [1] Supported [1] , [2] Persistent storage using Raw Block Supported [1] Supported [1] , [2] Persistent storage using EDEV/FBA Supported [1] Supported [1] , [2] Persistent shared storage must be provisioned by using either Red Hat OpenShift Data Foundation or other supported storage protocols. Persistent non-shared storage must be provisioned by using local storage, such as iSCSI, FC, or by using LSO with DASD, FCP, or EDEV/FBA. Table 1.3. Operators Feature IBM Power(R) IBM Z(R) and IBM(R) LinuxONE cert-manager Operator for Red Hat OpenShift Supported Supported Cluster Logging Operator Supported Supported Cluster Resource Override Operator Supported Supported Compliance Operator Supported Supported Cost Management Metrics Operator Supported Supported File Integrity Operator Supported Supported HyperShift Operator Technology Preview Technology Preview Local Storage Operator Supported Supported MetalLB Operator Supported Supported Network Observability Operator Supported Supported NFD Operator Supported Supported NMState Operator Supported Supported OpenShift Elasticsearch Operator Supported Supported Vertical Pod Autoscaler Operator Supported Supported Table 1.4. Multus CNI plugins Feature IBM Power(R) IBM Z(R) and IBM(R) LinuxONE Bridge Supported Supported Host-device Supported Supported IPAM Supported Supported IPVLAN Supported Supported Table 1.5. CSI Volumes Feature IBM Power(R) IBM Z(R) and IBM(R) LinuxONE Cloning Supported Supported Expansion Supported Supported Snapshot Supported Supported 1.3.8. Authentication and authorization 1.3.8.1. OLM-based Operator support for Microsoft Entra Workload ID With this release, some Operators managed by Operator Lifecycle Manager (OLM) on Azure clusters can use the Cloud Credential Operator (CCO) in manual mode with Microsoft Entra Workload ID. These Operators authenticate with short-term credentials that are managed outside the cluster. For more information, see CCO-based workflow for OLM-managed Operators with Azure AD Workload Identity . 1.3.9. Networking 1.3.9.1. OVN-Kubernetes network plugin support for IPsec encryption of external traffic general availability (GA) OpenShift Container Platform now supports encryption of external traffic, also known as north-south traffic . IPsec already supports encryption of network traffic between pods, known as east-west traffic . You can use both features together to provide full in-transit encryption for OpenShift Container Platform clusters. This feature is supported on the following platforms: Bare metal Google Cloud Platform (GCP) Red Hat OpenStack Platform (RHOSP) VMware vSphere For more information, see Enabling IPsec encryption for external IPsec endpoints . 1.3.9.2. IPv6 unsolicited neighbor advertisements now default on macvlan CNI plugin Previously, if one pod ( Pod X ) was deleted, and a second pod ( Pod Y ) was created with a similar configuration, Pod Y might have had the same IPv6 address as Pod X , but it would have a different MAC address. In this scenario, the router was unaware of the MAC address change, and it would continue sending traffic to the MAC address for Pod X . 
With this update, pods created using the macvlan CNI plugin, where the IP address management CNI plugin has assigned IPs, now send IPv6 unsolicited neighbor advertisements by default onto the network. This enhancement notifies the network fabric of the new pod's MAC address for a particular IP to refresh IPv6 neighbor caches. 1.3.9.3. Configuring the Whereabouts IP reconciler schedule The Whereabouts reconciliation schedule was hard-coded to run once per day and could not be reconfigured. With this release, a ConfigMap object has enabled the configuration of the Whereabouts cron schedule. For more information, see Configuring the Whereabouts IP reconciler schedule . 1.3.9.4. Status management updates for EgressFirewall and AdminPolicyBasedExternalRoute CR The following updates have been made to the status management of the EgressFirewall and AdminPolicyBasedExternalRoute custom resources: The status.status field is set to failure if at least one message reports failure . The status.status field is empty if no failures are reported and not all nodes have reported their status. The status.status field is set to success if all nodes report success . The status.messages field lists messages. The messages are listed by the node name by default and are prefixed with the node name. 1.3.9.5. Additional BGP metrics for MetalLB With this update, MetalLB exposes additional metrics relating to communication between MetalLB and Border Gateway Protocol (BGP) peers. For more information, see MetalLB metrics for BGP and BFD . 1.3.9.6. Supporting all-multicast mode OpenShift Container Platform now supports configuring the all-multicast mode by using the tuning CNI plugin. This update eliminates the need to grant the NET_ADMIN capability to the pod's Security Context Constraints (SCC), enhancing security by minimizing potential vulnerabilities for your pods. For more information about all-multicast mode, see About all-multicast mode . 1.3.9.7. Multi-network policy support for IPv6 networks With this update, you can now create multi-network policies for IPv6 networks. For more information, see Supporting multi-network policies in IPv6 networks . 1.3.9.8. Ingress Operator metrics dashboard available With this release, Ingress networking metrics are now viewable from within the OpenShift Container Platform web console. See Ingress Operator dashboard for more information. 1.3.9.9. CoreDNS filtration of ExternalName service queries for subdomains As of OpenShift Container Platform 4.15, CoreDNS has been updated from 1.10.1 to 1.11.1. This update to CoreDNS resolved an issue where CoreDNS would incorrectly provide a response to a query for an ExternalName service that shared its name with a top-level domain, such as com or org . A query for subdomains of an external service should not resolve to that external service. See the associated CoreDNS GitHub issue for more information. 1.3.9.10. CoreDNS metrics deprecation and removal As of OpenShift Container Platform 4.15, CoreDNS has been updated from 1.10.1 to 1.11.1. This update to CoreDNS resulted in the deprecation and removal of certain metrics that have been relocated, including the metrics coredns_forward_healthcheck_failures_total , coredns_forward_requests_total , coredns_forward_responses_total , and coredns_forward_request_duration_seconds . See CoreDNS Metrics for more information. 1.3.9.11.
Supported hardware for SR-IOV (Single Root I/O Virtualization) OpenShift Container Platform 4.15 adds support for the following SR-IOV devices: Mellanox MT2910 Family [ConnectX‐7] For more information, see Supported devices . 1.3.9.12. Host network configuration policy for SR-IOV network VFs (Technology Preview) With this release, you can use the NodeNetworkConfigurationPolicy resource to manage host network settings for Single Root I/O Virtualization (SR-IOV) network virtual functions (VF) in an existing cluster. For example, you can configure a host network Quality of Service (QoS) policy to manage network access to host resources by an attached SR-IOV network VF. For more information, see Node network configuration policy for virtual functions . 1.3.9.13. Parallel node draining during SR-IOV network policy updates With this update, you can configure the SR-IOV Network Operator to drain nodes in parallel during network policy updates. The option to drain nodes in parallel enables faster rollouts of SR-IOV network configurations. You can use the SriovNetworkPoolConfig custom resource to configure parallel node draining and define the maximum number of nodes in the pool that the Operator can drain in parallel. For more information, see Configuring parallel node draining during SR-IOV network policy updates . 1.3.10. Registry 1.3.10.1. Support for private storage endpoint on Azure With this release, the Image Registry Operator can be leveraged to use private storage endpoints on Azure. You can use this feature to seamlessly configure private endpoints for storage accounts when OpenShift Container Platform is deployed on private Azure clusters, so that users can deploy the image registry without exposing public-facing storage endpoints. For more information, see the following sections: Configuring a private storage endpoint on Azure Optional: Preparing a private Microsoft Azure cluster for a private image registry 1.3.11. Storage 1.3.11.1. Recovering volume groups from the LVM Storage installation With this release, the LVMCluster custom resource (CR) provides support for recovering volume groups from the LVM Storage installation. If the deviceClasses.name field is set to the name of a volume group from the LVM Storage installation, LVM Storage recreates the resources related to that volume group in the current LVM Storage installation. This simplifies the process of using devices from the LVM Storage installation through the reinstallation of LVM Storage. For more information, see Creating a Logical Volume Manager cluster on a worker node . 1.3.11.2. Support for wiping the devices in LVM Storage This feature provides a new optional field forceWipeDevicesAndDestroyAllData in the LVMCluster custom resource (CR) to force wipe the selected devices. Before this release, wiping the devices required you to manually access the host. With this release, you can force wipe the disks without manual intervention. This simplifies the process of wiping the disks. Warning If forceWipeDevicesAndDestroyAllData is set to true , LVM Storage wipes all data on the devices. You must use this feature with caution. For more information, see Creating a Logical Volume Manager cluster on a worker node . 1.3.11.3. Support for deploying LVM Storage on multi-node clusters This feature provides support for deploying LVM Storage on multi-node clusters. Previously, LVM Storage only supported single-node configurations. With this release, LVM Storage supports all of the OpenShift Container Platform deployment topologies. 
This enables provisioning of local storage on multi-node clusters. Warning LVM Storage only supports node local storage on multi-node clusters. It does not support storage data replication mechanism across nodes. When using LVM Storage on multi-node clusters, you must ensure storage data replication through active or passive replication mechanisms to avoid a single point of failure. For more information, see Deploying LVM Storage . 1.3.11.4. Integrating RAID arrays with LVM Storage This feature provides support for integrating RAID arrays that are created using the mdadm utility with LVM Storage. The LVMCluster custom resource (CR) provides support for adding paths to the RAID arrays in the deviceSelector.paths field and the deviceSelector.optionalPaths field. For more information, see Integrating software RAID arrays with LVM Storage . 1.3.11.5. FIPS compliance support for LVM Storage With this release, LVM Storage is designed for Federal Information Processing Standards (FIPS). When LVM Storage is installed on OpenShift Container Platform in FIPS mode, LVM Storage uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-3 validation only on the x86_64 architecture. 1.3.11.6. Retroactive default StorageClass assignment is generally available Before OpenShift Container Platform 4.13, if there was no default storage class, persistent volumes claims (PVCs) that were created that requested the default storage class remained stranded in the pending state indefinitely, unless you manually delete and recreate them. Starting with OpenShift Container Platform 4.14, as a Technology Preview feature, the default storage class is assigned to these PVCs retroactively so that they do not remain in the pending state. After a default storage class is created, or one of the existing storage classes is declared the default, these previously stranded PVCs are assigned to the default storage class. This feature is now generally available. For more information, see Absent default storage class . 1.3.11.7. Local Storage Operator option to facilitate removing existing data on local volumes is generally available This feature provides an optional field, forceWipeDevicesAndDestroyAllData defining whether or not to call wipefs , which removes partition table signatures (magic strings) making the disk ready to use for Local Storage Operator (LSO) provisioning. No other data besides signatures is erased. This feature is now generally available. Note that this feature does not apply to LocalVolumeSet (LVS). For more information, see Provisioning local volumes by using the Local Storage Operator . 1.3.11.8. Detach CSI volumes after non-graceful node shutdown is generally available Starting with OpenShift Container Platform 4.13, Container Storage Interface (CSI) drivers can automatically detach volumes when a node goes down non-gracefully as a Technology Preview feature. When a non-graceful node shutdown occurs, you can then manually add an out-of-service taint on the node to allow volumes to automatically detach from the node. This feature is now generally available. For more information, see Detach CSI volumes after non-graceful node shutdown . 1.3.11.9. Shared VPC is supported for the GCP Filestore CSI Driver Operator as generally available Shared virtual private cloud (VPC) for the Google Compute Platform (GCP) Container Storage Interface (CSI) Driver Operator is now supported as a generally available feature. 
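As an illustration of how a shared VPC network is typically referenced, the following StorageClass sketch uses parameters from the upstream GCP Filestore CSI driver; the project and network names are placeholders, and the exact parameter set for your environment may differ:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: filestore-sharedvpc-example          # placeholder name
provisioner: filestore.csi.storage.gke.io
parameters:
  network: "projects/<host-project-id>/global/networks/<shared-vpc-network>"  # shared VPC network defined in the host project
  connect-mode: PRIVATE_SERVICE_ACCESS       # assumed connect mode for shared VPC access; verify against the documented procedure
allowVolumeExpansion: true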
Shared VPC simplifies network management, allows consistent network policies, and provides a centralized view of network resources. For more information, see Creating a storage class for GCP Filestore Storage . 1.3.11.10. User-Managed encryption supports IBM VPC Block storage as generally available The user-managed encryption feature allows you to provide keys during installation that encrypt OpenShift Container Platform node root volumes, and enables all managed storage classes to use the specified encryption key to encrypt provisioned storage volumes. This feature was introduced in OpenShift Container Platform 4.13 for Google Cloud Platform (GCP) persistent disk (PD) storage, Microsoft Azure Disk, and Amazon Web Services (AWS) Elastic Block storage (EBS), and is now supported on IBM Virtual Private Cloud (VPC) Block storage. 1.3.11.11. SELinux relabeling using mount options (Technology Preview) Previously, when SELinux was enabled, the persistent volume's (PV's) files were relabeled when attaching the PV to the pod, potentially causing timeouts when the PVs contained many files, as well as overloading the storage backend. In OpenShift Container Platform 4.15, for Container Storage Interface (CSI) driver that support this feature, the driver will mount the volume directly with the correct SELinux labels, eliminating the need to recursively relabel the volume, and pod startup can be significantly faster. This feature is supported with Technology Preview status. If the following conditions are true, the feature is enabled by default: The CSI driver that provides the volume has support for this feature with seLinuxMountSupported: true in its CSIDriver instance. The following CSI drivers that are shipped as part of OpenShift Container Platform announce SELinux mount support: AWS Elastic Block Storage (EBS) Azure Disk Google Compute Platform (GCP) persistent disk (PD) IBM Virtual Private Cloud (VPC) Block OpenStack Cinder VMware vSphere The pod that uses the persistent volume has full SELinux label specified in its spec.securityContext or spec.containers[*].securityContext by using restricted SCC. Access mode set to ReadWriteOncePod for the volume. 1.3.12. Oracle(R) Cloud Infrastructure 1.3.12.1. Using the Assisted Installer to install a cluster on OCI You can run cluster workloads on Oracle(R) Cloud Infrastructure (OCI) infrastructure that supports dedicated, hybrid, public, and multiple cloud environments. Both Red Hat and Oracle test, validate, and support running OCI in an OpenShift Container Platform cluster on OCI. OCI provides services that can meet your needs for regulatory compliance, performance, and cost-effectiveness. You can access OCI Resource Manager configurations to provision and configure OCI resources. For more information, see Using the Assisted Installer to install a cluster on OCI . 1.3.12.2. Using the Agent-based Installer to install a cluster on OCI You can use the Agent-based Installer to install a cluster on Oracle(R) Cloud Infrastructure (OCI), so that you can run cluster workloads on infrastructure that supports dedicated, hybrid, public, and multiple cloud environments. The Agent-based installer provides the ease of use of the Assisted Installation service, but with the capability to install a cluster in either a connected or disconnected environment. OCI provides services that can meet your regulatory compliance, performance, and cost-effectiveness needs. OCI supports 64-bit x86 instances and 64-bit ARM instances. 
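For reference, OCI is integrated through the external platform type. The following install-config.yaml fragment is a hedged sketch only; field values are illustrative and should be confirmed against the OCI installation instructions:

platform:
  external:
    platformName: oci                 # identifies the OCI external platform integration
    cloudControllerManager: External  # the OCI cloud controller manager is deployed separately from the cluster installation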
For more information, see Using the Agent-based Installer to install a cluster on OCI . 1.3.13. Operator lifecycle 1.3.13.1. Operator Lifecycle Manager (OLM) 1.0 (Technology Preview) Operator Lifecycle Manager (OLM) has been included with OpenShift Container Platform 4 since its initial release. OpenShift Container Platform 4.14 introduced components for a next-generation iteration of OLM as a Technology Preview feature, known during this phase as OLM 1.0 . This updated framework evolves many of the concepts that have been part of previous versions of OLM and adds new capabilities. During this Technology Preview phase of OLM 1.0 in OpenShift Container Platform 4.15, administrators can explore the following features added to this release: Support for version ranges You can specify a version range by using a comparison string in an Operator or extension's custom resource (CR). If you specify a version range in the CR, OLM 1.0 installs or updates to the latest version of the Operator that can be resolved within the version range. For more information, see Updating an Operator and Support for version ranges . Performance improvements in the Catalog API The Catalog API now uses an HTTP service to serve catalog content on the cluster. Previously, custom resource definitions (CRDs) were used for this purpose. The change to using an HTTP service to serve catalog content reduces the load on the Kubernetes API server. For more information, see Finding Operators to install from a catalog . Note For OpenShift Container Platform 4.15, documented procedures for OLM 1.0 are CLI-based only. Alternatively, administrators can create and view related objects in the web console by using normal methods, such as the Import YAML and Search pages. However, the existing OperatorHub and Installed Operators pages do not yet display OLM 1.0 components. For more information, see About Operator Lifecycle Manager 1.0 . Important Currently, OLM 1.0 supports the installation of Operators and extensions that meet the following criteria: The Operator or extension must use the AllNamespaces install mode. The Operator or extension must not use webhooks. Operators or extensions that use webhooks or that target a single or specified set of namespaces cannot be installed. 1.3.13.2. Deprecation schema for Operator catalogs The optional olm.deprecations schema defines deprecation information for Operator packages, bundles, and channels in a file-based catalog. Operator authors can use this schema in a deprecations.yaml file to provide relevant messages about their Operators, such as support status and recommended upgrade paths, to users running those Operators from a catalog. After the Operator is installed, any specified messages can be viewed as status conditions on the related Subscription object. For information on the olm.deprecations schema, see Operator Framework packaging format . 1.3.14. Operator development 1.3.14.1. Token authentication for Operators on cloud providers: Microsoft Entra Workload ID With this release, Operators managed by Operator Lifecycle Manager (OLM) can support token authentication when running on Azure clusters configured for Microsoft Entra Workload ID. Updates to the Cloud Credential Operator (CCO) enable semi-automated provisioning of certain short-term credentials, provided that the Operator author has enabled their Operator to support Microsoft Entra Workload ID. For more information, see CCO-based workflow for OLM-managed Operators with Azure AD Workload Identity . 1.3.15. Builds 1.3.16.
Machine Config Operator 1.3.16.1. Improved MCO state reporting by node (Technology Preview) With this release, you can monitor updates for individual nodes as a Technology Preview. For more information, see Checking machine config node status . 1.3.17. Machine API 1.3.17.1. Defining a VMware vSphere failure domain for a control plane machine set (Technology Preview) By using a vSphere failure domain resource, you can use a control plane machine set to deploy control plane machines on hardware that is separate from the primary VMware vSphere infrastructure. A control plane machine set helps balance control plane machines across defined failure domains to provide fault tolerance capabilities to your infrastructure. For more information, see Sample VMware vSphere failure domain configuration and Supported cloud providers . 1.3.18. Nodes 1.3.18.1. The /dev/fuse device enables faster builds on unprivileged pods You can configure unprivileged pods with the /dev/fuse device to access faster builds. For more information, see Accessing faster builds with /dev/fuse . 1.3.18.2. Log linking is enabled by default Beginning with OpenShift Container Platform 4.15, log linking is enabled by default. Log linking gives you access to the container logs for your pods. 1.3.18.3. ICSP, IDMS, and ITMS are now compatible ImageContentSourcePolicy (ICSP), ImageDigestMirrorSet (IDMS), and ImageTagMirrorSet (ITMS) objects now function in the same cluster at the same time. Previously, to use the newer IDMS or ITMS objects, you needed to delete any ICSP objects. Now, you can use any or all of the three types of objects to configure repository mirroring after the cluster is installed. For more information, see Understanding image registry repository mirroring . Important Using an ICSP object to configure repository mirroring is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported. However, it might be removed in a future release of this product. Because it is deprecated functionality, avoid using it for new deployments. 1.3.19. Monitoring The in-cluster monitoring stack for this release includes the following new and modified features. 1.3.19.1. Updates to monitoring stack components and dependencies This release includes the following version updates for in-cluster monitoring stack components and dependencies: Alertmanager to 0.26.0 kube-state-metrics to 2.10.1 node-exporter to 1.7.0 Prometheus to 2.48.0 Prometheus Adapter to 0.11.2 Prometheus Operator to 0.70.0 Thanos Querier to 0.32.5 1.3.19.2. Changes to alerting rules Note Red Hat does not guarantee backward compatibility for recording rules or alerting rules. The NodeClockNotSynchronising and NodeClockSkewDetected alerting rules are now disabled when the Precision Time Protocol (PTP) is in use. 1.3.19.3. New Metrics Server component to access the Metrics API (Technology Preview) This release introduces a Technology Preview option to add a Metrics Server component to the in-cluster monitoring stack. As a Technology Preview feature, Metrics Server is automatically installed instead of Prometheus Adapter if the FeatureGate custom resource is configured with the TechPreviewNoUpgrade option. If installed, Metrics Server collects resource metrics and exposes them in the metrics.k8s.io Metrics API service for use by other tools and APIs. Using Metrics Server instead of Prometheus Adapter frees the core platform Prometheus stack from handling this functionality. 
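Because Metrics Server is gated behind the TechPreviewNoUpgrade feature set, enabling it amounts to setting that feature set in the cluster FeatureGate resource. A minimal sketch follows; note that enabling TechPreviewNoUpgrade cannot be undone and prevents minor version updates:

apiVersion: config.openshift.io/v1
kind: FeatureGate
metadata:
  name: cluster                      # the FeatureGate resource is cluster-scoped and named "cluster"
spec:
  featureSet: TechPreviewNoUpgrade   # enables Technology Preview features, including Metrics Server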
For more information, see MetricsServerConfig in the config map API reference for the Cluster Monitoring Operator and Enabling features using feature gates . 1.3.19.4. New feature to send exemplar data to remote write storage for user-defined projects User-defined projects can now use remote write to send exemplar data scraped by Prometheus to remote storage. To use this feature, configure remote write using the sendExemplars option in the RemoteWriteSpec resource. For more information, see RemoteWriteSpec in the config map API reference for the Cluster Monitoring Operator. 1.3.19.5. Improved alert querying for user-defined projects Applications in user-defined projects now have API access to query alerts for application namespaces via the rules tenancy port for Thanos Querier. You can now construct queries that access the /api/v1/alerts endpoint via port 9093 for Thanos Querier, provided that the HTTP request contains a namespace parameter. In releases, the rules tenancy port for Thanos Querier did not provide API access to the /api/v1/alerts endpoint. 1.3.19.6. Prometheus updated to tolerate jitters at scrape time The default Prometheus configuration in the monitoring stack has been updated so that jitters are tolerated at scrape time. For monitoring deployments that have shown sub-optimal chunk compression for data storage, this update helps to optimize data compression, thereby reducing the disk space used by the time series database in these deployments. 1.3.19.7. Improved staleness handling for the kubelet service monitor Staleness handling for the kubelet service monitor has been improved to ensure that alerts and time aggregations are accurate. This improved functionality is active by default and makes the dedicated service monitors feature obsolete. As a result, the dedicated service monitors feature has been disabled and is now deprecated, and setting the DedicatedServiceMonitors resource to enabled has no effect. 1.3.19.8. Improved ability to troubleshoot reports of tasks failing The reasons provided when tasks fail in monitoring components are now more granular so that you can more easily pinpoint whether a reported failure originated in components deployed in the openshift-monitoring namespace or in the openshift-user-workload-monitoring namespace. If the Cluster Monitoring Operator (CMO) reports task failures, the following reasons have been added to identify where the failures originated: The PlatformTasksFailed reason indicates failures that originated in the openshift-monitoring namespace. The UserWorkloadTasksFailed reason indicates failures that originated in the openshift-user-workload-monitoring namespace. 1.3.20. Network Observability Operator The Network Observability Operator releases updates independently from the OpenShift Container Platform minor version release stream. Updates are available through a single, Rolling Stream which is supported on all currently supported versions of OpenShift Container Platform 4. Information regarding new features, enhancements, and bug fixes for the Network Observability Operator is found in the Network Observability release notes . 1.3.21. Scalability and performance You can set the control plane hardware speed to one of "Standard" , "Slower" , or the default, "" , which allows the system to decide which speed to use. This is a Technology Preview feature. For more information, see Setting tuning parameters for etcd . 1.3.21.1. 
Hub-side templating for PolicyGenTemplate CRs You can manage the configuration of fleets of clusters by using hub templates to populate the group and site values in the generated policies that get applied to managed clusters. By using hub templates in group and site PolicyGenTemplate (PGT) CRs you can significantly reduce the number of policies on the hub cluster. For more information, see Specifying group and site configuration in group PolicyGenTemplate CRs with hub templates . 1.3.21.2. Node Tuning Operator (NTO) The Cloud-native Network Functions (CNF) tests image for latency tests, cnf-tests , has been simplified. The new image has three tests for latency measurements. The tests run by default and require a performance profile configured on the cluster. If no performance profile is configured, the tests do not run. The following variables are no longer recommended for use: ROLE_WORKER_CNF NODES_SELECTOR PERF_TEST_PROFILE FEATURES LATENCY_TEST_RUN DISCOVERY_MODE To generate the junit report, the --ginkgo.junit-report flag replaces --junit . For more information, see Performing latency tests for platform verification . 1.3.21.3. Bare Metal Operator For OpenShift Container Platform 4.15, when the Bare Metal Operator removes a host from the cluster it also powers off the host. This enhancement streamlines hardware maintenance and management. 1.3.22. Hosted control planes 1.3.22.1. Configuring hosted control plane clusters by using non-bare metal agent machines (Technology Preview) With this release, you can provision a hosted control plane cluster by using non-bare metal agent machines. For more information, see Configuring hosted control plane clusters using non-bare metal agent machines (Technology Preview) . 1.3.22.2. Creating a hosted cluster by using the OpenShift Container Platform console With this release, you can now create a hosted cluster with the KubeVirt platform by using the OpenShift Container Platform console. The multicluster engine for Kubernetes Operator (MCE) enables the hosted cluster view. For more information, see Creating a hosted cluster by using the console . 1.3.22.3. Configuring additional networks, guaranteed CPUs, and VM scheduling for node pools With this release, you can now configure additional networks, request a guaranteed CPU access for Virtual Machines (VMs), and manage scheduling of KubeVirt VMs for node pools. For more information, see Configuring additional networks, guaranteed CPUs, and VM scheduling for node pools . 1.4. Notable technical changes OpenShift Container Platform 4.15 introduces the following notable technical changes. Cluster metrics ports secured With this release, the ports that serve metrics for the Cluster Machine Approver Operator and Cluster Cloud Controller Manager Operator use the Transport Layer Security (TLS) protocol for additional security. ( OCPCLOUD-2272 , OCPCLOUD-2271 ) Cloud controller manager for Google Cloud Platform The Kubernetes community plans to deprecate the use of the Kubernetes controller manager to interact with underlying cloud platforms in favor of using cloud controller managers. As a result, there is no plan to add Kubernetes controller manager support for any new cloud platforms. This release introduces the General Availability of using a cloud controller manager for Google Cloud Platform. To learn more about the cloud controller manager, see the Kubernetes Cloud Controller Manager documentation . 
To manage the cloud controller manager and cloud node manager deployments and lifecycles, use the Cluster Cloud Controller Manager Operator. For more information, see the Cluster Cloud Controller Manager Operator entry in the Cluster Operators reference . Future restricted enforcement for pod security admission Currently, pod security violations are shown as warnings in the audit logs without resulting in the rejection of the pod. Global restricted enforcement for pod security admission is currently planned for the next minor release of OpenShift Container Platform. When this restricted enforcement is enabled, pods with pod security violations will be rejected. To prepare for this upcoming change, ensure that your workloads match the pod security admission profile that applies to them. Workloads that are not configured according to the enforced security standards defined globally or at the namespace level will be rejected. The restricted-v2 SCC admits workloads according to the Restricted Kubernetes definition. If you are receiving pod security violations, see the following resources: See Identifying pod security violations for information about how to find which workloads are causing pod security violations. See About pod security admission synchronization to understand when pod security admission label synchronization is performed. Pod security admission labels are not synchronized in certain situations, such as the following: The workload is running in a system-created namespace that is prefixed with openshift- . The workload is running on a pod that was created directly without a pod controller. If necessary, you can set a custom admission profile on the namespace or pod by setting the pod-security.kubernetes.io/enforce label. Secrets are no longer automatically generated when the integrated OpenShift image registry is disabled If you disable the ImageRegistry cluster capability or if you disable the integrated OpenShift image registry in the Cluster Image Registry Operator's configuration, a service account token secret and image pull secret are no longer generated for each service account. For more information, see Automatically generated secrets . Open Virtual Network Infrastructure Controller default range With this update, the Controller uses 100.88.0.0/16 as the default IP address range for the transit switch subnet. Do not use this IP range in your production infrastructure network. ( OCPBUGS-20178 ) Introduction of HAProxy no strict-limits variable The transition to HAProxy 2.6 included enforcement for the strict-limits configuration, which resulted in unrecoverable errors when maxConnections requirements could not be met. The strict-limits setting is not configurable by end users and remains under the control of the HAProxy template. This release introduces a configuration adjustment in response to the maxConnections issues observed during the migration to HAProxy 2.6. Now, the HAProxy configuration switches to using no strict-limits . As a result, HAProxy no longer fatally exits when the maxConnection configuration cannot be satisfied. Instead, it emits warnings and continues running. When maxConnection limitations cannot be met, warnings such as the following examples might be returned: [WARNING] (50) : [/usr/sbin/haproxy.main()] Cannot raise FD limit to 4000237, limit is 1048576. [ALERT] (50) : [/usr/sbin/haproxy.main()] FD limit (1048576) too low for maxconn=2000000/maxsock=4000237. Please raise 'ulimit-n' to 4000237 or more to avoid any trouble.
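One way to avoid these warnings is to let HAProxy derive the connection limit from the resources available to the router container. The following sketch shows the relevant IngressController tuning field (assuming the documented spec.tuningOptions.maxConnections field, where -1 means auto):

apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  tuningOptions:
    maxConnections: -1   # -1 (auto) lets HAProxy compute maxconn from the available container resources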
To resolve these warnings, we recommend specifying -1 or auto for the maxConnections field when tuning an IngressController. This choice allows HAProxy to dynamically calculate the maximum value based on the available resource limitations in the running container, which eliminates these warnings. ( OCPBUGS-21803 ) The deployer service account is no longer created if the DeploymentConfig cluster capability is disabled If you disable the DeploymentConfig cluster capability, the deployer service account and its corresponding secrets are no longer created. For more information, see DeploymentConfig capability . Must-gather storage limit default A default limit of 30% of the storage capacity of the node for the container has been added for data collected by the oc adm must-gather command. If necessary, you can use the --volume-percentage flag to adjust the default storage limit. For more information, see Changing the must-gather storage limit . Agent-based Installer interactive network configuration displays on the serial console With this update, when an Agent ISO is booted on a server with no graphical console, interactive network configuration is possible on the serial console. Status displays are paused on all other consoles while the interactive network configuration is active. Previously, the displays could be shown only on a graphical console. ( OCPBUGS-19688 ) 1.5. Deprecated and removed features Some features available in releases have been deprecated or removed. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality deprecated and removed within OpenShift Container Platform 4.15, refer to the table below. Additional details for more functionality that has been deprecated and removed are listed after the table. In the following tables, features are marked with the following statuses: General Availability Deprecated Removed Operator lifecycle and development deprecated and removed features Table 1.6. Operator lifecycle and development deprecated and removed tracker Feature 4.13 4.14 4.15 SQLite database format for Operator catalogs Deprecated Deprecated Deprecated Images deprecated and removed features Table 1.7. Images deprecated and removed tracker Feature 4.13 4.14 4.15 ImageChangesInProgress condition for Cluster Samples Operator Deprecated Deprecated Deprecated MigrationInProgress condition for Cluster Samples Operator Deprecated Deprecated Deprecated Monitoring deprecated and removed features Table 1.8. Monitoring deprecated and removed tracker Feature 4.13 4.14 4.15 dedicatedServiceMonitors setting that enables dedicated service monitors for core platform monitoring General Availability General Availability Deprecated prometheus-adapter component that queries resource metrics from Prometheus and exposes them in the metrics API. General Availability General Availability Deprecated Installation deprecated and removed features Table 1.9. 
Installation deprecated and removed tracker Feature 4.13 4.14 4.15 OpenShift SDN network plugin General Availability Deprecated Removed [1] --cloud parameter for oc adm release extract General Availability Deprecated Deprecated CoreDNS wildcard queries for the cluster.local domain Removed Removed Removed compute.platform.openstack.rootVolume.type for RHOSP General Availability Deprecated Deprecated controlPlane.platform.openstack.rootVolume.type for RHOSP General Availability Deprecated Deprecated ingressVIP and apiVIP settings in the install-config.yaml file for installer-provisioned infrastructure clusters Deprecated Deprecated Deprecated platform.gcp.licenses for Google Cloud Provider Deprecated Removed Removed While the OpenShift SDN network plugin is no longer supported by the installation program in version 4.15, you can upgrade a cluster that uses the OpenShift SDN plugin from version 4.14 to version 4.15. Storage deprecated and removed features Table 1.10. Storage deprecated and removed tracker Feature 4.13 4.14 4.15 Persistent storage using FlexVolume Deprecated Deprecated Deprecated Networking deprecated and removed features Table 1.11. Networking deprecated and removed tracker Feature 4.13 4.14 4.15 Kuryr on RHOSP Deprecated Deprecated Removed OpenShift SDN network plugin General Availability Deprecated Deprecated Building applications deprecated and removed features Table 1.12. Service Binding Operator deprecated and removed tracker Feature 4.13 4.14 4.15 Service Binding Operator Deprecated Deprecated Deprecated Node deprecated and removed features Table 1.13. Node deprecated and removed tracker Feature 4.13 4.14 4.15 ImageContentSourcePolicy (ICSP) objects Deprecated Deprecated Deprecated Kubernetes topology label failure-domain.beta.kubernetes.io/zone Deprecated Deprecated Deprecated Kubernetes topology label failure-domain.beta.kubernetes.io/region Deprecated Deprecated Deprecated OpenShift CLI (oc) deprecated and removed features Feature 4.13 4.14 4.15 --include-local-oci-catalogs parameter for oc-mirror General Availability Removed Removed --use-oci-feature parameter for oc-mirror Deprecated Removed Removed Workloads deprecated and removed features Table 1.14. Workloads deprecated and removed tracker Feature 4.13 4.14 4.15 DeploymentConfig objects General Availability Deprecated Deprecated Bare metal monitoring Table 1.15. Bare Metal Event Relay Operator tracker Feature 4.13 4.14 4.15 Bare Metal Event Relay Operator Technology Preview Technology Preview Deprecated 1.5.1. Deprecated features 1.5.1.1. Deprecation of the OpenShift SDN network plugin OpenShift SDN CNI is deprecated as of OpenShift Container Platform 4.14. As of OpenShift Container Platform 4.15, the network plugin is not an option for new installations. In a subsequent future release, the OpenShift SDN network plugin is planned to be removed and no longer supported. Red Hat will provide bug fixes and support for this feature until it is removed, but this feature will no longer receive enhancements. As an alternative to OpenShift SDN CNI, you can use OVN Kubernetes CNI instead. 1.5.1.2. Bare Metal Event Relay Operator The Bare Metal Event Relay Operator is deprecated. The ability to monitor bare-metal hosts by using the Bare Metal Event Relay Operator will be removed in a future OpenShift Container Platform release. 1.5.1.3. Service Binding Operator The Service Binding Operator is deprecated and will be removed with the OpenShift Container Platform 4.16 release. 
Red Hat will provide critical bug fixes and support for this component during the current release lifecycle, but this component will no longer receive feature enhancements. 1.5.1.4. Dedicated service monitors for core platform monitoring With this release, the dedicated service monitors feature for core platform monitoring is deprecated. The ability to enable dedicated service monitors by configuring the dedicatedServiceMonitors setting in the cluster-monitoring-config config map object in the openshift-monitoring namespace will be removed in a future OpenShift Container Platform release. To replace this feature, Prometheus functionality has been improved to ensure that alerts and time aggregations are accurate. This improved functionality is active by default and makes the dedicated service monitors feature obsolete. 1.5.1.5. Prometheus Adapter for core platform monitoring With this release, the Prometheus Adapter component for core platform monitoring is deprecated and is planned to be removed in a future release. Red Hat will provide bug fixes and support for this component during the current release lifecycle, but this component will no longer receive enhancements and will be removed. As a replacement, a new Metrics Server component has been added to the monitoring stack. Metrics Server is a simpler and more lightweight solution because it does not rely on Prometheus for its functionality. Metrics Server also ensures scalability and a more accurate tracking of resource metrics. With this release, the improved functionality of Metrics Server is available by default if you enable the TechPreviewNoUpgrade option in the FeatureGate custom resource. 1.5.1.6. oc registry info command is deprecated With this release, the experimental oc registry info command is deprecated. To view information about the integrated OpenShift image registry, run oc get imagestream -n openshift and check the IMAGE REPOSITORY column. 1.5.2. Removed features 1.5.2.1. Removal of the OPENSHIFT_DEFAULT_REGISTRY OpenShift Container Platform 4.15 has removed support for the OPENSHIFT_DEFAULT_REGISTRY variable. This variable was primarily used to enable backwards compatibility of the internal image registry for earlier setups. The REGISTRY_OPENSHIFT_SERVER_ADDR variable can be used in its place. 1.5.2.2. Installing clusters on Red Hat OpenStack Platform (RHOSP) with Kuryr is removed As of OpenShift Container Platform 4.15, support for installing clusters on RHOSP with Kuryr is removed. 1.5.3. Future Kubernetes API removals The next minor release of OpenShift Container Platform is expected to use Kubernetes 1.29. Kubernetes 1.29 has removed a deprecated API. See the Deprecated API Migration Guide in the upstream Kubernetes documentation for the list of Kubernetes API removals. See Navigating Kubernetes API deprecations and removals for information about how to check your cluster for Kubernetes APIs that are planned for removal. 1.6. Bug fixes API Server and Authentication Previously, the termination.log in the kube-apiserver log folder had invalid permissions due to settings in the upstream library. With this release, the upstream library was updated and the termination.log now has the expected permissions. ( OCPBUGS-11856 ) Previously, the Cluster Version Operator (CVO) enabled a capability if the existing manifest got the capability annotation after an upgrade. This caused the console to be enabled after upgrading to OpenShift Container Platform 4.14 for users who had previously disabled the console capability.
With this release, the unnecessary console capability was removed from the existing manifest and the console capability is no longer implicitly enabled. ( OCPBUGS-20331 ) Previously, when the openshift-kube-controller-manager namespace was deleted, the following error was logged repeatedly: failed to synchronize namespace . With this release, the error is no longer logged when the openshift-kube-controller-manager namespace is deleted. ( OCPBUGS-17458 ) Bare Metal Hardware Provisioning Previously, deploying IPv6-only hosts from a dual-stack GitOps ZTP hub prevented the correct callback URL from being passed to the baseboard management controller (BMC). Consequently, an IPv4 URL was passed unconditionally. This issue has been resolved, and the IP version of the URL now depends on the IP version of the BMC address. ( OCPBUGS-23759 ) Previously, the Bare Metal Operator (BMO) container had a hostPort specified as 60000 , but the hostPort was not actually in use despite the specification. As a result, other services could not use port 60000. This fix removes the hostPort specification from the container configuration. Now, port 60000 is available for use by other services. ( OCPBUGS-18788 ) Previously, the Cluster Baremetal Operator (CBO) failed when it checked the infrastructure platformStatus field and returned nil . With OpenShift Container Platform 4.15, the CBO has been updated so that it checks and returns a blank value when apiServerInternalIPs returns nil , which resolves this issue. ( OCPBUGS-17589 ) Previously, the inspector.ipxe configuration used the IRONIC_IP variable, which did not account for IPv6 addresses because they have brackets. Consequently, when the user supplied an incorrect boot_mac_address , iPXE fell back to the inspector.ipxe configuration, which supplied a malformed IPv6 host header since it did not contain brackets. With OpenShift Container Platform 4.15, the inspector.ipxe configuration has been updated to use the IRONIC_URL_HOST variable, which accounts for IPv6 addresses and resolves the issue. ( OCPBUGS-27060 ) Previously, there was a bug when attempting to deploy OpenShift Container Platform on a new bare metal host using RedFish Virtual Media with Cisco UCS hardware. This bug blocked bare metal hosts from new provisions, because Ironic was unable to find a suitable virtual media device. With this update, Ironic does more checks in all available virtual media devices. As a result, Cisco UCS hardware can now be provisioned when using RedFish Virtual Media. ( OCPBUGS-23105 ) Previously, when installing OpenShift Container Platform with the bootMode field set to UEFISecureBoot on a node where the secureBoot field was set to disabled , the installation program failed to start. With this update, Ironic has been updated so that you can install OpenShift Container Platform with secureBoot set to enabled . ( OCPBUGS-9303 ) Builds Previously, timestamps were not preserved when copying contents between containers. With this release, the -p flag is added to the cp command to allow timestamps to be preserved. ( OCPBUGS-22497 ) Cloud Compute Previously, an error in the parsing of taints from the MachineSet spec meant that the autoscaler could not account for any taint set directly on the spec. Consequently, when relying on the MachineSet taints for scaling from zero, the taints from the spec were not considered, which could cause incorrect scaling decisions. With this update, parsing issues within the scale from zero logic have been resolved. 
As a result, the autoscaler can now scale up correctly and identify taints that would prevent workloads from scheduling. ( OCPBUGS-27750 ) Previously, Amazon Web Services (AWS) code that provided image credentials was removed from the kubelet in OpenShift Container Platform 4.14. Consequently, pulling images from Amazon Elastic Container Registry (ECR) failed without a specified pull secret, because the kubelet could no longer authenticate itself and pass credentials to the container runtime. With this update, a separate credential provider has been configured, which is now responsible for providing ECR credentials for the kubelet. As a result, the kubelet can now pull private images from ECR. ( OCPBUGS-27486 ) Previously, when deploying a hosted control plane (HCP) KubeVirt cluster with the --node-selector option, the node selector was not applied to the kubevirt-cloud-controller-manager pods within the HCP namespace. Consequently, you could not pin all of the HCP pods to specific nodes. With this update, this issue has been fixed. ( OCPBUGS-27071 ) Previously, the default virtual machine (VM) type for the Microsoft Azure load balancer was changed from Standard to VMSS . Consequently, the service type load balancer could not attach standard VMs to load balancers. This update reverts these changes to the configuration to maintain compatibility with OpenShift Container Platform deployments. As a result, load balancer attachments are now more consistent. ( OCPBUGS-26210 ) Previously, deployments on RHOSP nodes with additional ports with the enable_port_security field set to false were prevented from creating LoadBalancer services. With this update, this issue is resolved. ( OCPBUGS-22246 ) Previously, worker nodes on Red Hat OpenStack Platform (RHOSP) were named with domain components if the Nova metadata service was unavailable the first time the worker nodes booted. OpenShift Container Platform expects the node names to be the same as the Nova instance. The name discrepancy caused the nodes' certificate request to be rejected and the nodes could not join the cluster. With this update, the worker nodes will wait and retry the metadata service indefinitely on first boot, ensuring that the nodes are correctly named. ( OCPBUGS-22200 ) Previously, the cluster autoscaler crashed when used with nodes that have Container Storage Interface (CSI) storage. The issue is resolved in this release. ( OCPBUGS-23096 ) Previously, in certain proxied environments, the Amazon Web Services (AWS) metadata service might not have been present on initial startup, and might have only been available shortly after startup. The kubelet hostname fetching did not account for this delay and, consequently, the node would fail to boot because it would not have a valid hostname. This update ensures that the hostname fetching script retries on failure for some time. As a result, inaccessibility of the metadata service is tolerated for a short period of time. ( OCPBUGS-20369 ) In OpenShift Container Platform version 4.14 and later, there is a known issue that causes installation of Microsoft Azure Stack Hub to fail. Microsoft Azure Stack Hub clusters that are upgraded to 4.14 or later might encounter load balancer configuration issues as nodes scale up or down. Installing or upgrading to 4.14 in Microsoft Azure Stack Hub environments is not recommended until this issue is resolved.
( OCPBUGS-20213 ) Previously, some conditions during the startup process of the Cluster Autoscaler Operator caused a lock that prevented the Operator from successfully starting and marking itself available. As a result, the cluster became degraded. The issue is resolved with this release. ( OCPBUGS-18954 ) Previously, attempting to perform a Google Cloud Platform XPN internal cluster installation failed when control nodes were added to a second internal instance group. This bug has been fixed. ( OCPBUGS-5755 ) Previously, the termination handler prematurely exited before marking a node for termination. This condition occurred based on the timing of when the termination signal was received by the controller. With this release, the possibility of early termination is accounted for by introducing an additional check for termination. ( OCPBUGS-2117 ) Previously, when the Build cluster capability was not enabled, the cluster version Operator (CVO) failed to synchronize the build informer, and did not start successfully. With this release, the CVO successfully starts when the Build capability is not enabled. ( OCPBUGS-22956 ) Cloud Credential Operator Previously, the Cloud Credential Operator utility ( ccoctl ) created custom GCP roles at the cluster level, so each cluster contributed to the quota limit on the number of allowed custom roles. Because of GCP deletion policies, deleted custom roles continue to contribute to the quota limit for many days after they are deleted. With this release, custom roles are added at the project level instead of the cluster level to reduce the total number of custom roles created. Additionally, an option to clean up custom roles is now available when deleting the GCP resources that the ccoctl utility creates during installation. These changes can help avoid reaching the quota limit on the number of allowed custom roles. ( OCPBUGS-28850 ) Previously, when the Build cluster capability was not enabled, the Cluster Version Operator (CVO) failed to synchronize the build informer and did not start successfully. With this release, the CVO successfully starts when the Build capability is not enabled. ( OCPBUGS-26510 ) Previously, buckets created by running the ccoctl azure create command were prohibited from allowing public blob access due to a change in the default behavior of Microsoft Azure buckets. With this release, buckets created by running the ccoctl azure create command are explicitly set to allow public blob access. ( OCPBUGS-22369 ) Previously, an Azure Managed Identity role was omitted from the Cloud Controller Manager service account. As a result, the Cloud Controller Manager could not manage service type load balancers in environments deployed to existing VNets with a private publishing method. With this release, the missing role was added to the Cloud Credential Operator utility ( ccoctl ) and Azure Managed Identity installations into an existing VNet with private publishing is possible. ( OCPBUGS-21745 ) Previously, the Cloud Credential Operator did not support updating the vCenter server value in the root secret vshpere-creds that is stored in the kube-system namespace. As a result, attempting to update this value caused both the old and new values to exist because the component secrets were not synchronized correctly. With this release, the Cloud Credential Operator resets the secret data during synchronization so that updating the vCenter server value is supported. 
( OCPBUGS-20478 ) Previously, the Cloud Credential Operator utility ( ccoctl ) failed to create AWS Security Token Service (STS) resources in China regions because the China region DNS suffix .amazonaws.com.cn differs from the suffix .amazonaws.com that is used in other regions. With this release, ccoctl can detect the correct DNS suffix and use it to create the required resources. ( OCPBUGS-13597 ) Cluster Version Operator The Cluster Version Operator (CVO) continually retrieves update recommendations and evaluates known conditional update risks against the current cluster state. Previously, failing risk evaluations blocked the CVO from fetching new update recommendations. When the risk evaluations were failing because the update recommendation service served a poorly-defined update risk, this issue could prevent the CVO from noticing the update recommendation service serving an improved risk declaration. With this release, the CVO continues to poll the update recommendation service regardless of whether update risks are successfully evaluated or not. ( OCPBUGS-25949 ) Developer Console Previously, BuildRun logs were not visible in the Logs Tab of the BuildRun due to the recent update in the API version of the specified resources. With this update, the Logs of the TaskRuns were added back into the Logs tab of the BuildRun for both v1alpha1 and v1beta1 versions of the builds Operator. ( OCPBUGS-29283 ) Previously, the console UI failed when a Task in the Pipeline Builder that was previously installed from the ArtifactHub was selected and an error page displayed. With this update, the console UI no longer expects optional data and the console UI no longer fails. ( OCPBUGS-24001 ) Previously, the Edit Build and BuildRun options in the Actions menu of the Shipwright Plugin did not allow you to edit in the YAML tab. With this update, you can edit in the YAML tab. ( OCPBUGS-23164 ) Previously, the console searched only for the file name Dockerfile in a repository to identify the repository suitable for the Container strategy in the Import Flows. Since other containerization tools are available, support for the Containerfile file name is now suitable for the Container strategy. ( OCPBUGS-22976 ) Previously, when an unauthorized user opened a link to the console that contains path and query parameters, and they were redirected to a login page, the query parameters did not restore after the login was successful. As a result, the user needed to restore the search or click the link to the console again. With this update, the latest version saves and restores the query parameters similar to the path. ( OCPBUGS-22199 ) Previously, when navigating to the Create Channel page from the Add or Topology view, the default name as Channel is present, but the Create button is disabled with Required showing under the name field. With this update, if the default channel name is added then the Required message will not display when clicking the Create button. ( OCPBUGS-19783 ) Previously, there were similar options to choose from when using the quick search function. With this update, the Source-to-image option is differentiated from the Samples option in the Topology quick search. ( OCPBUGS-18371 ) Previously, when {serverless-product-name} Operator was installed and the Knative (Kn) serving instance had not been created, then when navigating to the Global configuration page from Administration Cluster Settings and clicking Knative-serving a 404 page not found error was displayed. 
With this update, before adding Knative-serving to the Global configuration , a check is in place to determine if a Knative serving instance is created. ( OCPBUGS-18267 ) Previously, there was an issue with the Edit Knative Service form that prevented users from editing the Knative service they previously created. With this update, you can edit a Knative service that was previously created. ( OCPBUGS-6513 ) etcd Cluster Operator Previously, the cluster-backup.sh script cached the etcdctl binary on the local machine indefinitely, making updates impossible. With this update, the cluster-backup.sh script pulls the latest etcdctl binary each time it is run. ( OCPBUGS-19052 ) Hosted Control Plane Previously, when using a custom Container Network Interface (CNI) plugin in a hosted cluster, role-based access control (RBAC) rules were configured only when you set the hostedcluster.spec.networking.networkType field to Calico . Role-based access control (RBAC) rules were not configured when you set the hostedcluster.spec.networking.networkType field to Other . With this release, RBAC rules are configured properly when you set the hostedcluster.spec.networking.networkType field to Other . ( OCPBUGS-28235 ) Previously, a node port failed to expose properly because the ipFamilyPolicy field was set to SingleStack for the kube-apiserver resource. With this update, if the ipFamilyPolicy is set to PreferredDualStack , the node port is exposed properly. ( OCPBUGS-23350 ) Previously, after configuring the Open Virtual Network (OVN) for a hosted cluster, the cloud-network-config-controller , multus-admission-controller , and ovnkube-control-plane resources were missing the hypershift.openshift.io/hosted-control-plane:{hostedcluster resource namespace}-{cluster-name} label. With this update, after configuring the Open Virtual Network (OVN) for a hosted cluster, the cloud-network-config-controller , multus-admission-controller , and ovnkube-control-plane resources contain the hypershift.openshift.io/hosted-control-plane:{hostedcluster resource namespace}-{cluster-name} label. ( OCPBUGS-19370 ) Previously, after creating a hosted cluster, if you created a config map with a name other than user-ca-bundle , the deployment of the Control Plane Operator (CPO) failed. With this update, you can use unique names to create a config map. The CPO is deployed successfully. ( OCPBUGS-19419 ) Previously, hosted clusters with .status.controlPlaneEndpoint.port: 443 would mistakenly expose port 6443 for public and private routers. With this update, hosted clusters with .status.controlPlaneEndpoint.port: 443 only expose the port 443. ( OCPBUGS-20161 ) Previously, if the Kube API server was exposed by using IPv4 and IPv6, and the IP address was set in the HostedCluster resource, the IPv6 environment did not work properly. With this update, when the Kube API server is exposed by using IPv4 and IPv6, the IPv6 environment works properly. ( OCPBUGS-20246 ) Previously, if the console Operator and Ingress pods were located on the same node, the console Operator would fail and mark the console cluster Operator as unavailable. With this release, if the console Operator and Ingress pods are located on the same node, the console Operator no longer fails. ( OCPBUGS-23300 ) Previously, if the uninstallation of a hosted cluster was stuck, the status of the Control Plane Operator (CPO) was reported incorrectly. With this update, the status of the CPO is reported correctly.
( OCPBUGS-26412 ) Previously, if you tried to override the OpenShift Container Platform version while the initial upgrade was in progress, a hosted cluster upgrade would fail. With this update, if you override the current upgrade with a new OpenShift Container Platform version, the upgrade completes successfully. ( OCPBUGS-18122 ) Previously, if you updated the pull secret for the hosted control planes, the change was not reflected on the worker nodes immediately. With this update, when you change the pull secret, reconciliation is triggered and worker nodes are updated with a new pull secret immediately. ( OCPBUGS-19834 ) Previously, the Hypershift Operator would report time series for node pools that no longer existed. With this release, the Hypershift Operator reports time series for node pools correctly. ( OCPBUGS-20179 ) Previously, the --enable-uwm-telemetry-remote-write flag was enabled by default. This setting blocked the telemetry reconciliation. With this update, you can disable the --enable-uwm-telemetry-remote-write flag to allow telemetry reconciliation. ( OCPBUGS-26410 ) Previously, the Control Plane Operator (CPO) failed to update the VPC endpoint service when an IAM role path ARN was provided as the additional allowed principal: arn:aws:iam::${ACCOUNT_ID}:role/${PATH}/name . With this update, the CPO updates the VPC endpoint service with the arn:aws:iam::${ACCOUNT_ID}:role/${PATH}/name allowed principal successfully. ( OCPBUGS-23511 ) Previously, to customize OAuth templates, if you configured the HostedCluster.spec.configuration.oauth field, this setting was not reflected in a hosted cluster. With this update, you can configure the HostedCluster.spec.configuration.oauth field in a hosted cluster successfully. ( OCPBUGS-15215 ) Previously, when deploying a hosted cluster by using dual stack networking, by default, the clusterIP field was set to an IPv6 network instead of an IPv4 network. With this update, when deploying a hosted cluster by using dual stack networking, the clusterIP field is set to an IPv4 network by default. ( OCPBUGS-16189 ) Previously, when deploying a hosted cluster, if you configured the advertiseAddress field in the HostedCluster resource, the hosted cluster deployment would fail. With this release, you can deploy a hosted cluster successfully after configuring the advertiseAddress field in the HostedCluster resource. ( OCPBUGS-19746 ) Previously, when you set the hostedcluster.spec.networking.networkType field to Calico in a hosted cluster, the Cluster Network Operator did not have enough role-based access control (RBAC) permissions to deploy the network-node-identity resource. With this update, the network-node-identity resource is deployed successfully. ( OCPBUGS-23083 ) Previously, you could not update the default configuration for audit logs in a hosted cluster. Therefore, components of a hosted cluster could not generate audit logs. With this update, you can generate audit logs for components of a hosted cluster by updating the default configuration. ( OCPBUGS-13348 ) Image Registry Previously, the Image Registry pruner relied on a cluster role that was managed by the OpenShift API server. This could cause the pruner job to intermittently fail during an upgrade. Now, the Image Registry Operator is responsible for creating the pruner cluster role, which resolves the issue. ( OCPBUGS-18969 ) The Image Registry Operator makes API calls to the storage account list endpoint as part of obtaining access keys.
In projects with several OpenShift Container Platform clusters, this might lead to API limits being reached. As a result, 429 errors were returned when attempting to create new clusters. With this update, the time between calls has been increased from 5 minutes to 20 minutes, and API limits are no longer reached. ( OCPBUGS-18469 ) Previously, the default low settings for QPS and Burst caused the image registry to return with a gateway timeout error when API server requests were not returned in an appropriate time. To resolve this issue, users had to restart the image registry. With this update, the default settings for QPS and Burst have been increased, and this issue no longer occurs. ( OCPBUGS-18999 ) Previously, when creating the deployment resource for the Cluster Image Registry Operator, error handling used a pointer variable without checking if the value was nil first. Consequently, when the pointer value was nil , a panic was reported in the logs. With this update, a nil check was added so that the panic is no longer reported in the logs. ( OCPBUGS-18103 ) Previously, the OpenShift Container Platform 4.14 release introduced a change that gave users the perception that their images were lost when updating from OpenShift Container Platform version 4.13 to 4.14. A change to the default internal registry caused the registry to use an incorrect path when using the Microsoft Azure object storage. With this release, the correct path is used and a job has been added to the registry operator that moves any blobs pushed to the registry that used the wrong storage path into the correct storage path, which effectively merges the two distinct storage paths into a single path. Note This fix does not work on Azure Stack Hub (ASH). ASH users who used OCP versions 4.14.0 through 4.14.13 when upgrading to 4.14.14+ will need to execute manual steps to move their blobs to the correct storage path. ( OCPBUGS-29525 ) Installer Previously, installing a cluster on AWS might fail in some cases due to a validation error. With this update, the installation program produces the necessary cloud configuration object to satisfy the machine config operator. This results in the installation succeeding. ( OCPBUGS-12707 ) Previously, installing a cluster on GCP using a service account attached to a VM for authentication might fail due to an internal data validation bug. With this release, the installation program has been updated to correctly validate the authentication parameters when using a service account attached to a VM. ( OCPBUGS-19376 ) Previously, the vSphere connection configuration interface showed the network name instead of the cluster name in the "vCenter cluster" field. With this update, the "vCenter cluster" field has been updated to display the cluster name. ( OCPBUGS-23347 ) Previously, when you authenticated with the credentialsMode parameter not set to Manual and you used the gcloud cli tool, the installation program retrieved Google Cloud Platform (GCP) credentials from the osServiceAccount.json file. This operation caused the GCP cluster installation to fail. Now, a validation check scans the install-config.yaml file and prompts you with a message if you did not set credentialsMode to Manual . Note that in Manual mode, you must edit the manifests and provide the credentials. ( OCPBUGS-17757 ) Previously when you attempted to install an OpenShift Container Platform on VMware vSphere by using installer-provisioned infrastructure, a resource pool object would include a double backslash. 
This format caused the installation program to generate an incorrect path to network resources that in turn caused the installation operation to fail. After the installation program processed this resource pool object, the program displayed a "network not found" error message. Now, the installation program retrieves the cluster object for the purposes of joining the InventoryPath with the network name so that the program specifies the correct path to the resource pool object. ( OCPBUGS-23376 ) Previously, after installing an Azure Red Hat OpenShift cluster, some cluster Operators were unavailable. This was the result of one of the cluster's load balancers not being created as part of the installation process. With this update, the load balancer is correctly created. After installing a cluster, all cluster Operators are available. ( OCPBUGS-24191 ) Previously, if the VMware vSphere cluster contained an ESXi host that was offline, the installation failed with a "panic: runtime error: invalid memory address or nil pointer dereference" message. With this update, the error message states that the ESXi host is unavailable. ( OCPBUGS-20350 ) Previously, if you only used the default machine configuration to specify existing AWS security groups when installing a cluster on AWS ( platform.aws.defaultMachinePlatform.additionalSecurityGroupIDs ), the security groups were not applied to control plane machines. With this update, existing AWS security groups are correctly applied to control planes when they are specified using the default machine configuration. ( OCPBUGS-20525 ) Previously, installing a cluster on AWS failed when the specified machine instance type ( platform.aws.type ) did not support the machine architecture that was specified for control plane or compute machines ( controlPlane.architecture and compute.architecture ). With this update, the installation program now checks to determine if the machine instance type supports the specified architecture and displays an error message if it does not. ( OCPBUGS-26051 ) Previously, the installation program did not validate some configuration settings before installing the cluster. This behavior occurred when these settings were only specified in the default machine configuration ( platform.azure.defaultMachinePlatform ). As a result, the installation would succeed even if the following conditions were met: An unsupported machine instance type was specified. Additional functionality, such as accelerated networking or the use of Azure ultra disks, was not supported by the specified machine instance type. With this fix, the installation program now displays an error message that specifies the unsupported configuration. ( OCPBUGS-20364 ) Previously, when installing an AWS cluster to the Secret Commercial Cloud Services (SC2S) region and specifying existing AWS security groups, the installation failed with an error that stated that the functionality was not available in the region. With this fix, the installation succeeds. ( OCPBUGS-18830 ) Previously, when you specified Key Management Service (KMS) encryption keys in the kmsKeyARN section of the install-config.yaml configuration file for installing a cluster on Amazon Web Services (AWS), permission roles were not added during the cluster installation operation. With this update, after you specify the keys in the configuration file, an additional set of keys is added to the cluster so that the cluster successfully installs.
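For reference on the KMS entry above, the kmsKeyARN property is supplied per machine pool in install-config.yaml . The following is a minimal sketch only, assuming the installer's usual AWS machine-pool layout with the key under the rootVolume stanza; the key ARN is a placeholder:

```yaml
# install-config.yaml fragment (illustrative placeholder ARN)
controlPlane:
  name: master
  platform:
    aws:
      rootVolume:
        kmsKeyARN: arn:aws:kms:us-east-2:111122223333:key/example-key-id
compute:
- name: worker
  platform:
    aws:
      rootVolume:
        kmsKeyARN: arn:aws:kms:us-east-2:111122223333:key/example-key-id
```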
If you specify the credentialsMode parameter in the configuration file, all KMS encryption keys are ignored. ( OCPBUGS-13664 ) Previously, Agent-based installations on Oracle(R) Cloud Infrastructure (OCI) did not show a console displaying installation progress to users, making it more difficult to track installation progress. With this update, Agent-based installations on OCI now display installation progress on the console. ( OCPBUGS-19092 ) Previously, if static networking was defined in the install-config.yaml or agent-config.yaml files for the Agent-based Installer, and an interface name was over 15 characters long, the network manager did not allow the interface to come up. With this update, interface names longer than 15 characters are truncated and the installation can proceed. ( OCPBUGS-18552 ) Previously, if the user did not specify the rendezvousIP field in the agent-config.yaml file and hosts were defined in the same file with static network configuration, then the first host was designated as a rendezvous node regardless of its role. This caused the installation to fail. With this update, the Agent-based Installer prioritizes the rendezvous node search by first looking among the hosts with a master role and a static IP defined. If none is found, then a potential candidate is searched for through the hosts that do not have a role defined. Hosts with a static network configuration that are explicitly configured with a worker role are ignored. ( OCPBUGS-5471 ) Previously, the Agent console application was shown during the boot process of all Agent-based installations, enabling network customizations before proceeding with the installation. Because network configuration is rarely needed during cloud installations, this would unnecessarily slow down installations on Oracle(R) Cloud Infrastructure (OCI). With this update, Agent-based installations on OCI no longer show the Agent console application and proceed more quickly. ( OCPBUGS-19093 ) Previously, the Agent-based Installer enabled an external Cloud Controller Manager (CCM) by default when the platform was defined as external . This prevented users from disabling the external CCM when performing installations on cloud platforms that do not require one. With this update, users are required to enable an external CCM only when performing an Agent-based installation on Oracle(R) Cloud Infrastructure (OCI). ( OCPBUGS-18455 ) Previously, the agent wait-for command failed to record logs in the .openshift_install.log file. With this update, logs are recorded in the .openshift_install.log file when you use the agent wait-for command. ( OCPBUGS-5728 ) Previously, the assisted-service on the bootstrap machine became unavailable after the bootstrap node rebooted, preventing any communication from the assisted-installer-controller . This stopped the assisted-installer-controller from removing uninitialized taints from worker nodes, causing the cluster installation to hang waiting on cluster Operators. With this update, the assisted-installer-controller can remove the uninitialized taints even if assisted-service becomes unavailable, and the installation can proceed. ( OCPBUGS-20049 ) Previously, the platform type was erroneously required to be lowercase in the AgentClusterInstall cluster manifest used by the Agent-based Installer. With this update, mixed case values are required, but the original lowercase values are now accepted and correctly translated.
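As an illustration of the platform-type entry above, the value is set in the AgentClusterInstall manifest; a minimal, hypothetical fragment (the resource name is illustrative) could look like:

```yaml
apiVersion: extensions.hive.openshift.io/v1beta1
kind: AgentClusterInstall
metadata:
  name: example-agent-cluster-install   # hypothetical name
spec:
  platformType: BareMetal               # mixed-case value; lowercase input is still accepted and translated
```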
( OCPBUGS-19444 ) Previously, the manila-csi-driver-controller-metrics service had empty endpoints due to an incorrect name for the app selector. With this release the app selector name is changed to openstack-manila-csi and the issue is fixed. ( OCPBUGS-9331 ) Previously, the assisted installer removed the uninitialized taints for all vSphere nodes which prevented the vSphere CCM from initializing the nodes properly. This caused the vSphere CSI operator to degrade during the initial cluster installation because the node's provider ID was missing. With this release, the assisted installer checks if vSphere credentials were provided in the install-config.yaml . If credentials were provided, the OpenShift version is greater or equal to 4.15, and the agent installer was used, the assisted-installer and assisted-installer-controller do not remove the uninitialized taints. This means that the node's providerID and VM's UUID are properly set and the vSphere CSI operator is installed. ( OCPBUGS-29485 ) Kubernetes Controller Manager Previously, when the maxSurge field was set for a daemon set and the toleration was updated, pods failed to scale down, which resulted in a failed rollout due to a different set of nodes being used for scheduling. With this release, nodes are properly excluded if scheduling constraints are not met, and rollouts can complete successfully. ( OCPBUGS-19452 ) Machine Config Operator Previously, a misspelled environment variable prevented a script from detecting that the node.env file was present. This caused the contents of the node.env file to be overwritten after each boot, and the kubelet hostname could not be changed. With this update, the environment variable spelling is corrected and edits to the node.env file persist across reboots. ( OCPBUGS-27307 ) Previously, the Machine Config Operator allowed user-provided certificate authority updates to be made without requiring a new machine config to trigger. Because the new write method for these updates was missing a newline character, it caused validation errors for the contents of the CA file on-disk and the Machine Config Daemon became degraded. With this release, the CA file contents are fixed, and updates proceed as expected. ( OCPBUGS-25424 ) Previously, the Machine Config Operator allowed user-provided certificate authority bundle changes to be applied to the cluster without needing a machine config, to prevent disruption. Because of this, the user-ca bundle was not propagating to applications running on the cluster and required a reboot to see the changes take effect. With this update, the MCO now runs the update-ca-trust command and restarts the CRI-O service so that the new CA properly applies. ( OCPBUGS-24035 ) Previously, the initial mechanism used by the Machine Config Operator to handle image registry certs would delete and create new config maps rather than patching existing ones. This caused a significant increase in API usage from the MCO. With this update, the mechanism has been updated so that it uses a JSON patch instead, thereby resolving the issue. ( OCPBUGS-18800 ) Previously, the Machine Config Operator was pulling the baremetalRuntimeCfgImage container image multiple times: the first time to obtain node details and subsequent times to verify that the image is available. This caused issues during certificate rotation in situations where the mirror server or Quay was not available, and subsequent image pulls would fail. 
However, if the image is already on the nodes due to the first image pull then the nodes should start the kubelet regardless. With this update, the baremetalRuntimeCfgImage image is only pulled one time, thereby resolving the issue. ( OCPBUGS-18772 ) Previously, the nmstatectl command failed to retrieve the correct permanent MAC address during OpenShift Container Platform updates for some network environments. This caused the interface to be renamed and the bond connection on the node to break during the update. With this release, patches were applied to the nmstate package and MCO to prevent renaming, and updates proceed as expected. ( OCPBUGS-17877 ) Previously, the Machine Config Operator became the default provider of image registry certificates and the node-ca daemon was removed. This caused issues with the HyperShift Operator, because removing the node-ca daemon also removed the image registry path in the Machine Config Server (MCS), which HyperShift uses to get the Ignition configuration and start the bootstrap process. With this update, a flag containing the MCS image registry data is provided, which Ignition can use during the bootstrap process, thereby resolving the issue. ( OCPBUGS-17811 ) Previously, older RHCOS boot images contained a race condition between services on boot that prevented the node from running the rhcos-growpart command before it pulled images, preventing the node from starting up. This caused node scaling to sometimes fail on clusters that use old boot images because it was determined there was no room left on the disk. With this update, processes were added to the Machine Config Operator for stricter ordering of services so that nodes boot correctly. Note In these situations, updating to newer boot images prevents similar issues from occurring. ( OCPBUGS-15087 ) Previously, the Machine Config Operator (MCO) leveraged the oc image extract command to pull images during updates but the ImageContentSourcePolicy (ICSP) object was not respected when pulling those images. With this update, the MCO now uses the podman pull command internally and images are pulled from the location as configured in the ICSP. ( OCPBUGS-13044 ) Management Console Previously, the Expand PVC modal assumed the existing PVC had a spec.resources.requests.storage value that includes a unit. As a result, when the Expand PVC modal was used to expand a PVC that had a requests.storage value without a unit, the console would display an incorrect value in the modal. With this update, the console was updated to handle storage values with and without a unit. ( OCPBUGS-27909 ) Previously, the console check to determine if a file is binary was not robust enough. As a result, XML files were misidentified as binary and not displaying in the console. With this update, an additional check was added to more precisely check if a file is binary. ( OCPBUGS-26591 ) Previously, the Node Overview page failed to render when a MachineHealthCheck without spec.unhealthyConditions existed on a cluster. With this update, the Node Overview page was updated to allow for MachineHealthChecks without spec.unhealthyConditions . Now, the Node Overview page renders even if MachineHealthChecks without spec.unhealthyConditions are present on the cluster. ( OCPBUGS-25140 ) Previously, the console was not up-to-date with the newest matchers key for alert notification receivers, and the alert manager receivers created by the console utilized the older match key. 
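For the receiver entry above, the difference between the older match key and the newer matchers key in an Alertmanager route looks roughly like the following sketch; the receiver name and label values are illustrative:

```yaml
route:
  routes:
  - receiver: team-notifications      # illustrative receiver name
    # Older, deprecated form the console previously generated:
    # match:
    #   severity: critical
    # Newer form the console now generates and migrates existing routes to:
    matchers:
    - severity = "critical"
```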
With this update, the console uses matchers instead, and converts any existing match instances to matchers when modifying an existing alert manager receiver. ( OCPBUGS-23248 ) Previously, impersonation access was incorrectly applied. With this update, the console correctly applies impersonation access. ( OCPBUGS-23125 ) Previously, when the Advanced Cluster Management for Kubernetes (ACM) and multicluster engine for Kubernetes (MCE) Operators are installed and their plugins are enabled, the YAML code Monaco editor failed to load. With this update, optional resource chaining was added to prevent a failed resource call, and the YAML editor no longer fails to load when the ACM and MCE Operators are installed and their plugins enabled. ( OCPBUGS-22778 ) Monitoring Previously, the monitoring-plugin component did not start if IPv6 was disabled for a cluster. This release updates the component to support the following internet protocol configurations in a cluster: IPv4 only, IPv6 only, and both IPv4 and IPv6 simultaneously. This change resolves the issue, and the monitoring-plugin component now starts up if the cluster is configured to support only IPv6. ( OCPBUGS-21610 ) Previously, instances of Alertmanager for core platform monitoring and for user-defined projects could inadvertently become peered during an upgrade. This issue could occur when multiple Alertmanager instances were deployed in the same cluster. This release fixes the issue by adding a --cluster.label flag to Alertmanager that helps to block any traffic that is not intended for the cluster. ( OCPBUGS-18707 ) Previously, it was not possible to use text-only email templates in an Alertmanager configuration to send text-only email alerts. With this update, you can configure Alertmanager to send text-only email alerts by setting the html field of the email receiver to an empty string. ( OCPBUGS-11713 ) Previously, Thanos Querier was unable to query pod metrics because the supporting kube-rbac-proxy instance disallowed metrics.k8s.io/v1beta1/pods . With this update, the kube-rbac-proxy configuration for Thanos Querier is fixed and you can now successfully query pod metrics. ( OCPBUGS-17035 ) Networking Previously, when creating an IngressController with an empty spec, the IngressController's status showed Invalid . However, the route_controller_metrics_routes_per_shard metric would still get created. When the invalid IngressController was deleted, the route_controller_metrics_routes_per_shard metric would not clear, and it would show information for that metric. With this update, metrics are only created for IngressControllers that are admitted, which resolves this issue. ( OCPBUGS-3541 ) Previously, timeout values larger than what Go programming language could parse were not properly validated. Consequently, timeout values larger than what HAProxy could parse caused issues with HAProxy. With this update, if the timeout specifies a value larger than what can be parsed, it is capped at the maximum that HAProxy can parse. As a result, issues are no longer caused for HAProxy. ( OCPBUGS-6959 ) Previously, an external neighbor could have its MAC address changed while the cluster was shutting down or hibernating. Although a Gratuitous Address Resolution Protocol (GARP) should notify other neighbors about this change, the cluster would not process the GARP because it was not running. When the cluster was brought back up, that neighbor might not be reachable from the OVN-Kubernetes cluster network because the stale MAC address was being used. 
This update enables an aging mechanism so that a neighbor's MAC address is periodically refreshed every 300 seconds. ( OCPBUGS-11710 ) Previously, when an IngressController was configured with SSL/TLS, but did not have the clientca-configmap finalizer, the Ingress Operator would try to add the finalizer without checking whether the IngressController was marked for deletion. Consequently, if an IngressController was configured with SSL/TLS and was subsequently deleted, the Operator would correctly remove the finalizer. It would then repeatedly, and erroneously, try and fail to update the IngressController to add the finalizer back, resulting in error messages in the Operator's logs. With this update, the Ingress Operator no longer adds the clientca-configmap finalizer to an IngressController that is marked for deletion. As a result, the Ingress Operator no longer tries to perform erroneous updates, and no longer logs the associated errors. ( OCPBUGS-14994 ) Previously, a race condition occurred between the handling of pods that had been scheduled and the pods that had been completed on a node when OVN-Kubernetes started. This condition often occurred when nodes rebooted. Consequently, the same IP was erroneously assigned to multiple pods. This update fixes the race condition so that the same IP is not assigned to multiple pods in those circumstances. ( OCPBUGS-16634 ) Previously, there was an error that caused a route to be rejected due to a duplicate host claim. When this occurred, the system would mistakenly select the first route it encountered, which was not always the conflicting route. With this update, all routes for the conflicting host are first retrieved and then sorted based on their submission time. This allows the system to accurately determine and select the newest conflicting route. ( OCPBUGS-16707 ) Previously, when a new ipspec-host pod was started, it would clear or remove the existing XFRM state. Consequently, it would remove existing north-south traffic policies. This issue has been resolved. ( OCPBUGS-19817 ) Previously, the ovn-k8s-cni-overlay, topology:layer2 NetworkAttachmentDefinition did not work in a hosted pod when using the Kubevirt provider. Consequently, the pod did not start. This issue has been resolved, and pods can now start with an ovn-k8s-cni-overlay NetworkAttachmentDefinition. ( OCPBUGS-22869 ) Previously, the Azure upstream DNS did not comply with non-EDNS DNS queries because it returned a payload larger than 512 bytes. Because CoreDNS 1.10.1 no longer uses EDNS for upstream queries and only uses EDNS when the original client query uses EDNS, the combination would result in an overflow servfail error when the upstream returned a payload larger than 512 bytes for non-EDNS queries using CoreDNS 1.10.1. Consequently, upgrading from OpenShift Container Platform 4.12 to 4.13 led to some DNS queries failing that previously worked. With this release, instead of returning an overflow servfail error, the CoreDNS now truncates the response, indicating that the client can try again in TCP. As a result, clusters with a noncompliant upstream now retry with TCP when experiencing overflow errors. This prevents any disruption of functionality between OpenShift Container Platform 4.12 and 4.13. ( OCPBUGS-27904 ), ( OCPBUGS-28205 ) Previously, there was a limitation in private Microsoft Azure clusters where secondary IP addresses designated as egress IP addresses lacked outbound connectivity. 
This meant that pods associated with these IP addresses were unable to access the internet. However, they could still reach external servers within the infrastructure network, which is the intended use case for egress IP addresses. This update enables egress IP addresses for Microsoft Azure clusters, allowing outbound connectivity to be achieved through outbound rules. ( OCPBUGS-5491 ) Previously, when using multiple NICS, egress IP addresses were not correctly reassigned to the correct egress node when labeled or unlabeled. This bug has been fixed, and egress IP addresses are now reassigned to the correct egress node. ( OCPBUGS-18162 ) Previously, a new logic introduced for determining where to run the Keepalived process did not consider the ingress VIP or VIPs. As a result, the Keepalived pods might not have ran on ingress nodes, which could break the cluster. With this fix, the logic now includes the ingress VIP or VIPs, and the Keepalived pods should always be available. ( OCPBUGS-18771 ) Previously on Hypershift clusters, pods were not always being scheduled on separate zones. With this update, the multus-admission-controller deployment now uses a PodAntiAffinity spec for Hypershift to operate in the proper zone. ( OCPBUGS-15220 ) Previously, a certificate that existed for 10 minutes was used to implement Multus. With this update, a per node certificate is used for the Multus CNI plugin and the certificate's existence is increased to a 24 hour duration. ( OCPBUGS-19861 ), ( OCPBUGS-19859 ) Previously, the spec.desiredState.ovn.bridge-mappings API configuration deleted all the external IDs in Open vSwitch (OVS) local tables on each Kubernetes node. As a result, the OVN chassis configuration was deleted, breaking the default cluster network. With this fix, you can use the ovn.bridge-mappings configuration without affecting the OVS configuration. ( OCPBUGS-18869 ) Previously, if NMEA sentences were lost on their way to the E810 controller, the T-GM would not be able to synchronize the devices in the network synchronization chain. If these conditions were met, the PTP operator reported an error. With this release, a fix is implemented to report 'FREERUN' in case of a loss of the NMEA string. ( OCPBUGS-20514 ) Previously, pods assigned an IP from the pool created by the Whereabouts CNI plugin persisted in the ContainerCreating state after a node force reboot. With this release, the Whereabouts CNI plugin issue associated with the IP allocation after a node force reboot is resolved. ( OCPBUGS-18893 ) Previously, when using the assisted installer, OVN-Kubernetes took a long time to bootstrap. This issue occurred because there were three ovnkube-control-plane nodes. The first two started up normally, but the third delayed the installation time. The issue would only resolve after a timeout expiration; afterwards, installation would continue. With this update, the third ovnkube-control-plane node has been removed. As a result, the installation time has been reduced. ( OCPBUGS-29480 ) Node Due to how the Machine Config Operator (MCO) handles machine configurations for worker pools and custom pools, the MCO might apply an incorrect cgroup version argument for custom pools. As a consequence, nodes in the custom pool might feature an incorrect cgroup kernel argument that causes unpredictable behavior. 
As a workaround, specify cgroup version kernel arguments for worker and control plane pools only.( OCPBUGS-19352 ) Previously, CRI-O was not configuring the cgroup hierarchy correctly to account for the unique way that crun creates cgroups. As a consequence, disabling the CPU quota with a PerformanceProfile did not work. With this fix, using a PerformanceProfile to disable CPU quota works as expected. ( OCPBUGS-20492 ) Previously, because of a default setting ( container_use_dri_devices, true ), containers were unable to use dri devices. With this fix, containers can use dri devices as expected. ( OCPBUGS-24042 ) Previously, the kubelet was running with the unconfined_service_t SELinux type. As a consequence, all our plugins failed to deploy due to an Selinux denial. With this fix, the kubelet now runs with the kubelet_exec_t SELinux type. As a result, plugins deploy as expected. ( OCPBUGS-20022 ) Previously, the CRI-O would automatically remove container images on an upgrade. This caused issues in pre-pulling images. With this release, when OpenShift Container Platform performs a minor upgrade, the container images will not be automatically removed and instead are subject to kubelet's image garbage collection, which will trigger based on disk usage. ( OCPBUGS-25228 ) Previously, when adding RHCOS machines to an existing cluster using ansible playbooks, machines were installed with openvswitch version 2.7. With this update, RHCOS machines added to existing clusters using ansible playbooks are installed with openvswitch version 3.1. This openvswitch version increases network performance. ( OCPBUGS-18595 ) Node Tuning Operator (NTO) Previously, the Tuned profile reports Degraded condition after applying a PerformanceProfile. The generated Tuned profile was trying to set a sysctl value for the default Receive Packet Steering (RPS) mask when it already configured the same value using an /etc/sysctl.d file. Tuned warns about that and the Node Tuning Operator (NTO) treats that as a degradation with the following message The TuneD daemon issued one or more error message(s) when applying the profile profile. TuneD stderr: net.core.rps_default_mask . With this update, the duplication was solved by not setting the default RPS mask using Tuned. The sysctl.d file was left in place as it applies early during boot. ( OCPBUGS-25092 ) Previously, the Node Tuning Operator (NTO) did not set the UserAgent and used a default one. With this update, the NTO sets the UserAgent appropriately, which makes debugging the cluster easier. ( OCPBUGS-19785 ) Previously, when the Node Tuning Operator (NTO) pod restarted while there were a large number of CSVs in the cluster, the NTO pod would fail and entered into CrashBackLoop state. With this update, pagination has been added to the list CSVs requests and this avoids the api-server timeout issue that resulted in the CrashBackLoop state. ( OCPBUGS-14241 ) OpenShift CLI (oc) Previously, to filter operator packages by channel, for example, mirror.operators.catalog.packages.channels , you had to specify the default channel for the package, even if you did not intend to use the packages from that channel. Based on this information, the resulting catalog is considered invalid if the imageSetConfig does not contain the default channel for the package. This update introduces the defaultChannel field in the mirror.operators.catalog.packages section. You can now select a default channel. 
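A minimal ImageSetConfiguration sketch showing where the new defaultChannel field sits; the catalog reference and package name here are illustrative:

```yaml
kind: ImageSetConfiguration
apiVersion: mirror.openshift.io/v1alpha2
mirror:
  operators:
  - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.15
    packages:
    - name: example-operator        # hypothetical package name
      defaultChannel: stable-1.1    # channel the rebuilt catalog marks as the default
      channels:
      - name: stable-1.0
      - name: stable-1.1
```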
This action enables oc-mirror to build a new catalog that defines the selected channel in the defaultChannel field as the default for the package. ( OCPBUGS-385 ) Previously, using eus- channels for mirroring in oc-mirror resulted in failure. This was due to the restriction of eus- channels to mirror only even-numbered releases. With this update, oc-mirror can now effectively use eus- channels for mirroring releases. ( OCPBUGS-26065 ) Previously, using oc-mirror to mirror local OCI Operator catalogs from a hidden folder resulted in the following error: error: ".hidden_folder/data/publish/latest/catalog-oci/manifest-list/kubebuilder/kube-rbac-proxy@sha256:<SHASUM>" is not a valid image reference: invalid reference format . With this update, the image references are adjusted in the local OCI catalog to prevent any errors during mirroring. ( OCPBUGS-25077 ) Previously, the OpenShift Container Platform CLI ( oc ) version was not printed when running the must-gather tool. With this release, the oc version is now listed in the summary section when running must-gather . ( OCPBUGS-24199 ) Previously, if you ran a command in oc debug , such as oc debug node/worker -- sleep 5; exit 1 , without attaching to the terminal, a 0 exit code was always returned regardless of the command's exit code. With this release, the exit code is now properly returned from the command. ( OCPBUGS-20342 ) Previously, when mirroring, HTTP401 errors were observed due to expired authentication tokens. These errors occurred during the catalog introspection phase or the image mirroring phase. This issue has been fixed for catalog introspection. Additionally, fixing the Network Time Protocol (NTP) resolves the problem seen during the mirroring phase. For more information, see the article on "Access to the requested resource" error when mirroring images. ( OCPBUGS-7465 ) Operator Lifecycle Manager (OLM) After you install an Operator, if the catalog becomes unavailable, the subscription for the Operator is updated with a ResolutionFailed status condition. Before this update, when the catalog became available again, the ResolutionFailed status was not cleared. With this update, this status is now cleared from the subscription after the catalog becomes available, as expected. ( OCPBUGS-29116 ) With this update, OLM performs a best-effort verification that existing custom resources (CRs) are not invalidated when you install an updated custom resource definition (CRD). ( OCPBUGS-18948 ) Before this update, the install plan for an Operator displayed duplicate values in the clusterServiceVersionNames field. This update removes the duplicate values. ( OCPBUGS-17408 ) Before this update, if you created an Operator group with the same name as a previously existing cluster role, Operator Lifecycle Manager (OLM) overwrote the cluster role. With this fix, OLM generates a unique cluster role name for every Operator group by using the following syntax: Naming syntax olm.og.<operator_group_name>.<admin_edit_or_view>-<hash_value> For more information, see Operator groups . ( OCPBUGS-14698 ) Previously, if an Operator installation or upgrade took longer than 10 minutes, the operation could fail with the following error: Bundle unpacking failed. Reason: DeadlineExceeded, Message: Job was active longer than specified deadline This issue occurred because Operator Lifecycle Manager (OLM) had a bundle unpacking job that was configured with a timeout of 600 seconds.
Bundle unpacking jobs could fail because of network or configuration issues in the cluster that might be transient or resolved with user intervention. With this bug fix, OLM automates the re-creation of failed unpack jobs indefinitely by default. This update adds the optional operatorframework.io/bundle-unpack-min-retry-interval annotation for Operator groups. This annotation sets a minimum interval to wait before attempting to re-create the failed job. ( OCPBUGS-6771 ) In Operator Lifecycle Manager (OLM), the Catalog Operator was logging many errors regarding missing OperatorGroup objects in namespaces that had no Operators installed. With this fix, if a namespace has no Subscription objects in it, OLM no longer checks if an OperatorGroup object is present in the namespace. ( OCPBUGS-25330 ) With the security context constraint (SCC) API, users are able to configure security contexts for scheduling workloads on their cluster. Because parts of core OpenShift Container Platform components run as pods that are scheduled on control plane nodes, it is possible to create a SCC that prevents those core components from being properly scheduled in openshift-* namespaces. This bug fix reduces the role-based access control (RBAC) scope for the openshift-operator-lifecycle-manager service account used to run the package-server-manager core component. With this update, it is now significantly less likely that an SCC can be applied to the cluster that causes unexpected scheduling issues with the package-server-manager component. Warning The SCC API can globally affect scheduling on an OpenShift Container Platform cluster. When applying such constraints to workloads on the cluster, carefully read the SCC documentation . ( OCPBUGS-20347 ) Scalability and performance Previously, a race condition between udev events and the creation queues associated with physical devices led to some of the queues being configured with the wrong Receive Packet Steering (RPS) mask when they should be reset to zero. This resulted in the RPS mask being configured on the queues of the physical devices, meaning they were using RPS instead of Receive Side Scaling (RSS), which could impact the performance. With this fix, the event was changed to be triggered per queue creation instead of at device creation. This guarantees that no queue will be missing. The queues of all physical devices are now set up with the correct RPS mask which is empty. ( OCPBUGS-18662 ) Previously, due to differences in setting up a container's cgroup hierarchy, containers that use the crun OCI runtime along with a PerformanceProfile configuration encountered performance degradation. With this release, when handling a PerformanceProfile request, CRI-O accounts for the differences in crun and correctly configures the CPU quota to ensure performance. ( OCPBUGS-20492 ) Storage Previously, LVM Storage did not support disabling over-provisioning, and the minimum value for the thinPoolConfig.overprovisionRatio field in the LVMCluster CR was 2. With this release, you can disable over-provisioning by setting the value of the thinPoolConfig.overprovisionRatio field to 1. ( OCPBUGS-24396 ) Previously, if the LVMCluster CR was created with an invalid device path in the deviceSelector.optionalPaths field, the LVMCluster CR was in Progressing state. With this release, if the deviceSelector.optionalPaths field contains an invalid device path, LVM Storage updates the LVMCluster CR state to Failed . 
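Tying the LVM Storage entries above together, both fields appear in the LVMCluster CR; a minimal, hypothetical sketch (the device class name and device path are illustrative) might look like:

```yaml
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: my-lvmcluster              # hypothetical name
spec:
  storage:
    deviceClasses:
    - name: vg1
      default: true
      deviceSelector:
        optionalPaths:
        - /dev/sdc                 # an invalid path here now moves the CR to the Failed state
      thinPoolConfig:
        name: thin-pool-1
        sizePercent: 90
        overprovisionRatio: 1      # 1 disables over-provisioning, as described above
```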
( OCPBUGS-23995 ) Previously, the LVM Storage resource pods were preempted while the cluster was congested. With this release, upon updating OpenShift Container Platform, LVM Storage configures the priorityClassName parameter to ensure proper scheduling and preemption behavior while the cluster is congested. ( OCPBUGS-23375 ) Previously, upon creating the LVMCluster CR, LVM Storage skipped the counting of volume groups. This resulted in the LVMCluster CR moving to Progressing state even when the volume groups were valid. With this release, upon creating the LVMCluster CR, LVM Storage counts all the volume groups, and updates the LVMCluster CR state to Ready if the volume groups are valid. ( OCPBUGS-23191 ) Previously, if the default device class was not present on all selected nodes, LVM Storage failed to set up the LVMCluster CR. With this release, LVM Storage detects all the default device classes even if the default device class is present only on one of the selected nodes. With this update, you can define the default device class only on one of the selected nodes. ( OCPBUGS-23181 ) Previously, upon deleting the worker node in the single-node OpenShift (SNO) and worker node topology, the LVMCluster CR still included the configuration of the deleted worker node. This resulted in the LVMCluster CR remaining in Progressing state. With this release, upon deleting the worker node in the SNO and worker node topology, LVM Storage deletes the worker node configuration in the LVMCluster CR, and updates the LVMCluster CR state to Ready . ( OCPBUGS-13558 ) Previously, CPU limits for the AWS EFS CSI driver container could cause performance degradation of volumes managed by the AWS EFS CSI Driver Operator. With this release, the CPU limits from the AWS EFS CSI driver container have been removed to help prevent potential performance degradation. ( OCPBUGS-28645 ) Previously, if you used the performancePlus parameter in the Azure Disk CSI driver and provisioned volumes 512 GiB or smaller, you would receive an error from the driver that you need a disk size of at least 512 GiB. With this release, if you use the performancePlus parameter and provision volumes 512 GiB or smaller, the Azure Disk CSI driver automatically resizes volumes to be 513 GiB. ( OCPBUGS-17542 ) 1.7. Technology Preview features status Some features in this release are currently in Technology Preview. These experimental features are not intended for production use. Note the following scope of support on the Red Hat Customer Portal for these features: Technology Preview Features Support Scope In the following tables, features are marked with the following statuses: Technology Preview General Availability Not Available Deprecated Networking Technology Preview features Table 1.16. 
Networking Technology Preview tracker Feature 4.13 4.14 4.15 Ingress Node Firewall Operator Technology Preview General Availability General Availability Advertise using L2 mode the MetalLB service from a subset of nodes, using a specific pool of IP addresses Technology Preview Technology Preview Technology Preview Multi-network policies for SR-IOV networks Technology Preview Technology Preview General Availability OVN-Kubernetes network plugin as secondary network Technology Preview General Availability General Availability Updating the interface-specific safe sysctls list Technology Preview Technology Preview Technology Preview Egress service custom resource Not Available Technology Preview Technology Preview VRF specification in BGPPeer custom resource Not Available Technology Preview Technology Preview VRF specification in NodeNetworkConfigurationPolicy custom resource Not Available Technology Preview Technology Preview Admin Network Policy ( AdminNetworkPolicy ) Not Available Technology Preview Technology Preview IPsec external traffic (north-south) Not Available Technology Preview General Availability Host network settings for SR-IOV VFs Not Available Not Available Technology Preview Dual-NIC hardware as PTP boundary clock General Availability General Availability General Availability Egress IPs on additional network interfaces Not Available General Availability General Availability Intel E810 Westport Channel NIC as PTP grandmaster clock Technology Preview Technology Preview Technology Preview Dual Intel E810 Westport Channel NICs as PTP grandmaster clock Not Available Technology Preview Technology Preview Storage Technology Preview features Table 1.17. Storage Technology Preview tracker Feature 4.13 4.14 4.15 Automatic device discovery and provisioning with Local Storage Operator Technology Preview Technology Preview Technology Preview Google Filestore CSI Driver Operator Technology Preview General Availability General Availability IBM Power(R) Virtual Server Block CSI Driver Operator Technology Preview Technology Preview General Availability Read Write Once Pod access mod Not available Technology Preview Technology Preview Build CSI Volumes in OpenShift Builds Technology Preview General Availability General Availability Shared Resources CSI Driver in OpenShift Builds Technology Preview Technology Preview Technology Preview Secrets Store CSI Driver Operator Not available Technology Preview Technology Preview Installation Technology Preview features Table 1.18. 
Installation Technology Preview tracker Feature 4.13 4.14 4.15 Installing OpenShift Container Platform on Oracle(R) Cloud Infrastructure (OCI) with VMs N/A General Availability General Availability Installing OpenShift Container Platform on Oracle(R) Cloud Infrastructure (OCI) on bare metal N/A Developer Preview Developer Preview Adding kernel modules to nodes with kvc Technology Preview Technology Preview Technology Preview Azure Tagging Technology Preview General Availability General Availability Enabling NIC partitioning for SR-IOV devices Technology Preview Technology Preview Technology Preview GCP Confidential VMs Technology Preview General Availability General Availability User-defined labels and tags for Google Cloud Platform (GCP) Not Available Technology Preview Technology Preview Installing a cluster on Alibaba Cloud by using installer-provisioned infrastructure Technology Preview Technology Preview Technology Preview Mount shared entitlements in BuildConfigs in RHEL Technology Preview Technology Preview Technology Preview Selectable Cluster Inventory Technology Preview Technology Preview Technology Preview Static IP addresses with vSphere (IPI only) Not Available Technology Preview Technology Preview Support for iSCSI devices in RHCOS Not Available Not Available Technology Preview Node Technology Preview features Table 1.19. Nodes Technology Preview tracker Feature 4.13 4.14 4.15 Cron job time zones Technology Preview General Availability General Availability MaxUnavailableStatefulSet featureset Not Available Technology Preview Technology Preview Multi-Architecture Technology Preview features Table 1.20. Multi-Architecture Technology Preview tracker Feature 4.13 4.14 4.15 IBM Power(R) Virtual Server using installer-provisioned infrastructure Technology Preview Technology Preview General Availability kdump on arm64 architecture Technology Preview Technology Preview Technology Preview kdump on s390x architecture Technology Preview Technology Preview Technology Preview kdump on ppc64le architecture Technology Preview Technology Preview Technology Preview Specialized hardware and driver enablement Technology Preview features Table 1.21. Specialized hardware and driver enablement Technology Preview tracker Feature 4.13 4.14 4.15 Driver Toolkit General Availability General Availability General Availability Hub and spoke cluster support General Availability General Availability General Availability Scalability and performance Technology Preview features Table 1.22. 
Scalability and performance Technology Preview tracker Feature 4.13 4.14 4.15 factory-precaching-cli tool Technology Preview Technology Preview Technology Preview Hyperthreading-aware CPU manager policy Technology Preview Technology Preview Technology Preview HTTP transport replaces AMQP for PTP and bare-metal events Technology Preview Technology Preview Technology Preview Mount namespace encapsulation Technology Preview Technology Preview Technology Preview NUMA-aware scheduling with NUMA Resources Operator General Availability General Availability General Availability Node Observability Operator Technology Preview Technology Preview Technology Preview Single-node OpenShift cluster expansion with worker nodes General Availability General Availability General Availability Topology Aware Lifecycle Manager (TALM) General Availability General Availability General Availability Tuning etcd latency tolerances Not Available Technology Preview Technology Preview Workload partitioning for three-node clusters and standard clusters Technology Preview General Availability General Availability Operator lifecycle and development Technology Preview features Table 1.23. Operator lifecycle and development Technology Preview tracker Feature 4.13 4.14 4.15 Operator Lifecycle Manager (OLM) v1 Not Available Technology Preview Technology Preview RukPak Technology Preview Technology Preview Technology Preview Platform Operators Technology Preview Technology Preview Technology Preview Hybrid Helm Operator Technology Preview Technology Preview Technology Preview Java-based Operator Technology Preview Technology Preview Technology Preview Monitoring Technology Preview features Table 1.24. Monitoring Technology Preview tracker Feature 4.13 4.14 4.15 Alerting rules based on platform monitoring metrics Technology Preview General Availability General Availability Metrics Collection Profiles Technology Preview Technology Preview Technology Preview Metrics Server Not Available Not Available Technology Preview Red Hat OpenStack Platform (RHOSP) Technology Preview features Table 1.25. RHOSP Technology Preview tracker Feature 4.13 4.14 4.15 External load balancers with installer-provisioned infrastructure Technology Preview General Availability General Availability Dual-stack networking with installer-provisioned infrastructure Not Available Technology Preview General Availability Dual-stack networking with user-provisioned infrastructure Not Available Not Available General Availability OpenStack integration into the Cluster CAPI Operator [1] Not Available Not Available Technology Preview Control Plane with rootVolumes and etcd on local disk Not Available Not Available Technology Preview For more information, see OpenStack integration into the Cluster CAPI Operator . Architecture Technology Preview features Table 1.26. Architecture Technology Preview tracker Feature 4.13 4.14 4.15 Hosted control planes for OpenShift Container Platform on Amazon Web Services (AWS) Technology Preview Technology Preview Technology Preview Hosted control planes for OpenShift Container Platform on bare metal Technology Preview General Availability General Availability Hosted control planes for OpenShift Container Platform on OpenShift Virtualization Not Available General Availability General Availability Hosted control planes for OpenShift Container Platform using non-bare metal agent machines Not Available Not Available Technology Preview Machine management Technology Preview features Table 1.27. 
Machine management Technology Preview tracker Feature 4.13 4.14 4.15 Managing machines with the Cluster API for Amazon Web Services Technology Preview Technology Preview Technology Preview Managing machines with the Cluster API for Google Cloud Platform Technology Preview Technology Preview Technology Preview Defining a vSphere failure domain for a control plane machine set Not Available Not Available Technology Preview Cloud controller manager for Alibaba Cloud Technology Preview Technology Preview Technology Preview Cloud controller manager for Amazon Web Services Technology Preview General Availability General Availability Cloud controller manager for Google Cloud Platform Technology Preview Technology Preview General Availability Cloud controller manager for IBM Power(R) VS Technology Preview Technology Preview Technology Preview Cloud controller manager for Microsoft Azure Technology Preview General Availability General Availability Authentication and authorization Technology Preview features Table 1.28. Authentication and authorization Technology Preview tracker Feature 4.13 4.14 4.15 Pod security admission restricted enforcement Technology Preview Technology Preview Technology Preview Machine Config Operator Technology Preview features Table 1.29. Machine Config Operator Technology Preview tracker Feature 4.13 4.14 4.15 Improved MCO state reporting Not Available Not Available Technology Preview 1.8. Known issues A regression in the behaviour of libreswan caused some nodes with IPsec enabled to lose communication with pods on other nodes in the same cluster. To resolve this issue, consider disabling IPsec for your cluster. ( OCPBUGS-44670 ) The oc annotate command does not work for LDAP group names that contain an equal sign ( = ), because the command uses the equal sign as a delimiter between the annotation name and value. As a workaround, use oc patch or oc edit to add the annotation. ( BZ#1917280 ) Run Once Duration Override Operator (RODOO) cannot be installed on clusters managed by the HyperShift Operator. ( OCPBUGS-17533 ) When installing a cluster on VMware vSphere with static IP addresses (Tech Preview), the installation program can apply an incorrect configuration to the control plane machine sets (CPMS). This can result in control plane machines being recreated without static IP addresses defined. ( OCPBUGS-28236 ) Specifying a standard Ebdsv5 or Ebsv5 family machine type instance is not supported when installing an Azure cluster. This limitation is the result of the Azure terraform provider not supporting these machine types. ( OCPBUGS-18690 ) When running a cluster with FIPS enabled, you might receive the following error when running the OpenShift CLI ( oc ) on a RHEL 9 system: FIPS mode is enabled, but the required OpenSSL backend is unavailable . As a workaround, use the oc binary provided with the OpenShift Container Platform cluster. ( OCPBUGS-23386 ) In 4.15 with IPv6 networking running on Red Hat OpenStack Platform (RHOSP) environments, IngressController objects configured with the endpointPublishingStrategy.type=LoadBalancerService YAML attribute will not function correctly. ( BZ#2263550 , BZ#2263552 ) In 4.15 with IPv6 networking running on Red Hat OpenStack Platform (RHOSP) environments, health monitors created with IPv6 ovn-octavia load balancers will not function correctly. 
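( OCPBUGS-29603 )
As a companion to the LDAP group annotation workaround noted earlier in these known issues ( BZ#1917280 ), the following is a minimal sketch of applying an annotation with oc patch instead of oc annotate. The group name, annotation key, and value are hypothetical examples, not values from this release:
USD oc patch group 'cn=developers,ou=groups,dc=example,dc=com' \
  --type=merge \
  -p '{"metadata":{"annotations":{"example.com/team":"platform"}}}'
Because oc patch takes the group name as an ordinary argument and passes the annotation as a JSON merge patch, an equal sign in the group name is not parsed as an annotation name-value delimiter.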
In 4.15 with IPv6 networking running on Red Hat OpenStack Platform (RHOSP) environments, sharing an IPv6 load balancer with multiple services is not allowed because of an issue that mistakenly marks the IPv6 load balancer as internal to the cluster. ( OCPBUGS-29605 ) When installing an OpenShift Container Platform cluster with static IP addressing and Tang encryption, nodes start without network settings. This condition prevents nodes from accessing the Tang server, causing installation to fail. To address this condition, you must set the network settings for each node as ip installer arguments. For installer-provisioned infrastructure, before installation provide the network settings as ip installer arguments for each node by executing the following steps.
Create the manifests.
For each node, modify the BareMetalHost custom resource with annotations to include the network settings. For example:
USD cd ~/clusterconfigs/openshift
USD vim openshift-worker-0.yaml
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  annotations:
    bmac.agent-install.openshift.io/installer-args: '["--append-karg", "ip=<static_ip>::<gateway>:<netmask>:<hostname_1>:<interface>:none", "--save-partindex", "1", "-n"]' 1 2 3 4 5
    inspect.metal3.io: disabled
    bmac.agent-install.openshift.io/hostname: <fqdn> 6
    bmac.agent-install.openshift.io/role: <role> 7
  generation: 1
  name: openshift-worker-0
  namespace: mynamespace
spec:
  automatedCleaningMode: disabled
  bmc:
    address: idrac-virtualmedia://<bmc_ip>/redfish/v1/Systems/System.Embedded.1 8
    credentialsName: bmc-secret-openshift-worker-0
    disableCertificateVerification: true
  bootMACAddress: 94:6D:AE:AB:EE:E8
  bootMode: "UEFI"
  rootDeviceHints:
    deviceName: /dev/sda
For the ip settings, replace:
1 <static_ip> with the static IP address for the node, for example, 192.168.1.100
2 <gateway> with the IP address of your network's gateway, for example, 192.168.1.1
3 <netmask> with the network mask, for example, 255.255.255.0
4 <hostname_1> with the node's hostname, for example, node1.example.com
5 <interface> with the name of the network interface, for example, eth0
6 <fqdn> with the fully qualified domain name of the node
7 <role> with worker or master to reflect the node's role
8 <bmc_ip> with the BMC IP address and the protocol and path of the BMC, as needed.
Save the file to the clusterconfigs/openshift directory.
Create the cluster.
When installing with the Assisted Installer, before installation modify each node's installer arguments using the API to append the network settings as ip installer arguments. For example:
USD curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{infra_env_id}/hosts/USD{host_id}/installer-args \
  -X PATCH \
  -H "Authorization: Bearer USD{API_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '
    {
      "args": [
        "--append-karg",
        "ip=<static_ip>::<gateway>:<netmask>:<hostname_1>:<interface>:none", 1 2 3 4 5
        "--save-partindex",
        "1",
        "-n"
      ]
    }
  ' | jq
For the network settings, replace:
1 <static_ip> with the static IP address for the node, for example, 192.168.1.100
2 <gateway> with the IP address of your network's gateway, for example, 192.168.1.1
3 <netmask> with the network mask, for example, 255.255.255.0
4 <hostname_1> with the node's hostname, for example, node1.example.com
5 <interface> with the name of the network interface, for example, eth0.
Contact Red Hat Support for additional details and assistance.
( OCPBUGS-23119 ) In OpenShift Container Platform 4.15, all nodes use Linux control group version 2 (cgroup v2) for internal resource management in alignment with the default RHEL 9 configuration. However, if you apply a performance profile in your cluster, the low-latency tuning features associated with the performance profile do not support cgroup v2. As a result, if you apply a performance profile, all nodes in the cluster reboot to switch back to the cgroup v1 configuration. This reboot includes control plane nodes and worker nodes that were not targeted by the performance profile. To revert all nodes in the cluster to the cgroups v2 configuration, you must edit the Node resource. For more information, see Configuring Linux cgroup v2 . You cannot revert the cluster to the cgroups v2 configuration by removing the last performance profile. ( OCPBUGS-16976 ) Currently, an error might occur when deleting a pod that uses an SR-IOV network device. This error is caused by a change in RHEL 9 where the name of a network interface is added to its alternative names list when it is renamed. As a consequence, when a pod attached to an SR-IOV virtual function (VF) is deleted, the VF returns to the pool with a new unexpected name, such as dev69 , instead of its original name, such as ensf0v2 . Although this error is not severe, the Multus and SR-IOV logs might show the error while the system recovers on its own. Deleting the pod might take a few seconds longer due to this error. ( OCPBUGS-11281 , OCPBUGS-18822 , RHEL-5988 ) When you run Cloud-native Network Functions (CNF) latency tests on an OpenShift Container Platform cluster, the oslat test can sometimes return results greater than 20 microseconds. This results in an oslat test failure. ( RHEL-9279 ) When you use preempt-rt patches with the real time kernel and you update the SMP affinity of a network interrupt, the corresponding Interrupt Request (IRQ) thread does not immediately receive the update. Instead, the update takes effect when the interrupt is received, and the thread is subsequently migrated to the correct core. ( RHEL-9148 ) The global navigation satellite system (GNSS) module in an Intel Westport Channel e810 NIC that is configured as a grandmaster clock (T-GM) can report the GPS FIX state and the GNSS offset between the GNSS module and the GNSS constellation satellites. The current T-GM implementation does not use the ubxtool CLI to probe the ublox module for reading the GNSS offset and GPS FIX values. Instead, it uses the gpsd service to read the GPS FIX information. This is because the current implementation of the ubxtool CLI takes 2 seconds to receive a response, and with every call, it increases CPU usage threefold. ( OCPBUGS-17422 ) The current grandmaster clock (T-GM) implementation has a single NMEA sentence generator sourced from the GNSS without a backup NMEA sentence generator. If NMEA sentences are lost on their way to the e810 NIC, the T-GM cannot synchronize the devices in the network synchronization chain and the PTP Operator reports an error. A proposed fix is to report a FREERUN event when the NMEA string is lost. ( OCPBUGS-19838 ) Currently, the YAML tab of some pages in the web console stops unexpectedly on some browsers when the multicluster engine for Kubernetes operator (MCE) is installed. The following message is displayed: "Oh no! Something went wrong." ( OCPBUGS-29812 ) If you have IPsec enabled on your cluster, you must disable it prior to upgrading to OpenShift Container Platform 4.15. 
There is a known issue where pod-to-pod communication might be interrupted or lost when updating to 4.15 without disabling IPsec. For information on disabling IPsec, see Configuring IPsec encryption . ( OCPBUGS-43323 ) If you have IPsec enabled on the cluster and IPsec encryption is configured between the cluster and an external node, stopping the IPsec connection on the external node causes a loss of connectivity to the external node. This connectivity loss occurs because on the OpenShift Container Platform side of the connection, the IPsec tunnel shutdown is not recognized. ( RHEL-24802 ) If you have IPsec enabled on the cluster, and your cluster is a hosted control planes for OpenShift Container Platform cluster, the MTU adjustment to account for the IPsec tunnel for pod-to-pod traffic does not happen automatically. ( OCPBUGS-28757 ) If you have IPsec enabled on the cluster, you cannot modify existing IPsec tunnels to external hosts that you have created. Modifying an existing NMState Operator NodeNetworkConfigurationPolicy object to adjust an existing IPsec configuration to encrypt traffic to external hosts is not recognized by OpenShift Container Platform. ( RHEL-22720 ) If you have IPsec enabled on the cluster, on the node hosting the north-south IPsec connection, restarting the ipsec.service systemd unit or restarting the ovn-ipsec-host pod causes a loss of the IPsec connection. ( RHEL-26878 ) Currently, there is a known issue with the mirroring of operator catalogs. The oc-mirror rebuilds the catalogs and regenerates their internal cache according to the imagesetconfig catalog filtering specifications. This operation requires the use of the opm binary contained in the catalogs. In OpenShift Container Platform 4.15, the operator catalogs contain the opm RHEL 9 binary, which causes the mirroring process to fail on RHEL 8 systems. ( OCPBUGS-31536 ) There is currently a known issue where the version of the opm CLI tool released with OpenShift Container Platform 4.15 does not support RHEL 8. As a workaround, RHEL 8 users can navigate to the OpenShift mirror site and download the latest version of the tarball released with OpenShift Container Platform 4.14. There is a known issue in this release preventing the creation of web terminals when logged into the cluster as kubeadmin . The terminal will return the message: Error Loading OpenShift command line terminal: User is not a owner of the requested workspace. This issue will be fixed in a future OpenShift Container Platform release. ( WTO-262 ) Currently, defining a sysctl value for a setting with a slash in its name, such as for bond devices, in the profile field of a Tuned resource might not work. Values with a slash in the sysctl option name are not mapped correctly to the /proc filesystem. As a workaround, create a MachineConfig resource that places a configuration file with the required values in the /etc/sysctl.d node directory. ( RHEL-3707 ) Due to an issue with Kubernetes, the CPU Manager is unable to return CPU resources from the last pod admitted to a node to the pool of available CPU resources. These resources are allocatable if a subsequent pod is admitted to the node. However, this in turn becomes the last pod, and again, the CPU manager cannot return this pod's resources to the available pool. This issue affects CPU load balancing features because these features depend on the CPU Manager releasing CPUs to the available pool. Consequently, non-guaranteed pods might run with a reduced number of CPUs. 
As a workaround, schedule a pod with a best-effort CPU Manager policy on the affected node. This pod will be the last admitted pod and this ensures the resources will be correctly released to the available pool.( OCPBUGS-17792 ) When a node reboot occurs all pods are restarted in a random order. In this scenario it is possible that tuned pod started after the workload pods. This means the workload pods start with partial tuning, which can affect performance or even cause the workload to fail. ( OCPBUGS-26400 ) The installation of OpenShift Container Platform might fail when a performance profile is present in the extra manifests folder and targets the primary or worker pools. This is caused by the internal install ordering that processes the performance profile before the default primary and worker MachineConfigPools are created. It is possible to workaround this issue by including a copy of the stock primary or worker MachineConfigPools in the extra manifests folder. ( OCPBUGS-27948 ) ( OCPBUGS-18640 ) In hosted control planes for OpenShift Container Platform, the HyperShift Operator extracts the release metadata only once during Operator initialization. When you make changes in the management cluster or create a hosted cluster, the HyperShift Operator does not refresh the release metadata. As a workaround, restart the HyperShift Operator by deleting its pod deployment. ( OCPBUGS-29110 ) In hosted control planes for OpenShift Container Platform, when you create the custom resource definition (CRD) for ImageDigestMirrorSet and ImageContentSourcePolicy objects at the same time in a disconnected environment, the HyperShift Operator creates the object only for the ImageDigestMirrorSet CRD, ignoring the ImageContentSourcePolicy CRD. As a workaround, copy the ImageContentSourcePolicies object configuration in the ImageDigestMirrorSet CRD. ( OCPBUGS-29466 ) In hosted control planes for OpenShift Container Platform, when creating a hosted cluster in a disconnected environment, if you do not set the hypershift.openshift.io/control-plane-operator-image annotation explicitly in the HostedCluster resource, the hosted cluster deployment fails with an error. ( OCPBUGS-29494 ) 1.9. Asynchronous errata updates Security, bug fix, and enhancement updates for OpenShift Container Platform 4.15 are released as asynchronous errata through the Red Hat Network. All OpenShift Container Platform 4.15 errata is available on the Red Hat Customer Portal . See the OpenShift Container Platform Life Cycle for more information about asynchronous errata. Red Hat Customer Portal users can enable errata notifications in the account settings for Red Hat Subscription Management (RHSM). When errata notifications are enabled, users are notified through email whenever new errata relevant to their registered systems are released. Note Red Hat Customer Portal user accounts must have systems registered and consuming OpenShift Container Platform entitlements for OpenShift Container Platform errata notification emails to generate. This section will continue to be updated over time to provide notes on enhancements and bug fixes for future asynchronous errata releases of OpenShift Container Platform 4.15. Versioned asynchronous releases, for example with the form OpenShift Container Platform 4.15.z, will be detailed in subsections. In addition, releases in which the errata text cannot fit in the space provided by the advisory will be detailed in subsections that follow. 
Important For any OpenShift Container Platform release, always review the instructions on updating your cluster properly. 1.9.1. RHSA-2025:2454 - OpenShift Container Platform 4.15.47 bug fix and security update Issued: 12 March 2025 OpenShift Container Platform release 4.15.47, which includes security updates, is now available. The list of bug fixes that are included in this update is documented in the RHSA-2025:2454 advisory. The RPM packages that are included in this update are provided by the RHSA-2025:2456 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.15.47 --pullspecs 1.9.1.1. Bug fixes Previously, an extra name prop was passed into the resource list page extensions used to list related operands on the CSV details page. This caused the operand list to be filtered by the CSV name, which often caused it to be an empty list. With this update, the operands are listed as expected. ( OCPBUGS-51332 ) Previously, incorrect addresses were passed to the Kubernetes EndpointSlice on a cluster. This issue prevented the installation of the MetalLB Operator on an Agent-based cluster in an IPv6 disconnected environment. With this release, a fix modifies the address evaluation method. Red Hat Marketplace pods can successfully connect to the cluster API server. As a result, the installation of the MetalLB Operator and the handling of ingress traffic in IPv6 disconnected environments can occur. ( OCPBUGS-51253 ) Previously, konnectivity-https-proxy did not have the additional trust bundles that were applied in the configuration.proxy.trustCA certificate. This caused hosted clusters to fail the provisioning process. With this release, the specified certificates are added to Konnectivity and propagate the proxy environment variables, allowing hosted clusters with secure proxies and custom certificates to successfully complete their provisioning. ( OCPBUGS-52172 ) Previously, in the Red Hat OpenShift Container Platform web console Notifications section, silenced alerts were visible in the notification drawer because the alerts did not include external labels. With this release, the alerts include external labels so that silenced alerts are not visible on the notification drawer. ( OCPBUGS-49849 ) 1.9.1.2. Updating To update an OpenShift Container Platform 4.15 cluster to this latest release, see Updating a cluster by using the CLI . 1.9.2. RHSA-2025:1711 - OpenShift Container Platform 4.15.46 bug fix and security update Issued: 27 February 2025 OpenShift Container Platform release 4.15.46, which includes security updates, is now available. The list of bug fixes that are included in this update is documented in the RHSA-2025:1711 advisory. The RPM packages that are included in this update are provided by the RHSA-2025:1713 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.15.46 --pullspecs 1.9.2.1. Bug fixes Previously, if you tried to rerun a resolver-based PipelineRun from the OpenShift Container Platform console, the Invalid PipelineRun configuration, unable to start Pipeline UI message was displayed. With this release, you can rerun a resolver-based PipelineRun without this error.
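( OCPBUGS-48593 )
For context on the PipelineRun fix above, a resolver-based PipelineRun is one whose pipelineRef is fetched through a Red Hat OpenShift Pipelines resolver rather than by referencing a Pipeline object directly by name. The following is a minimal, hypothetical example that uses the cluster resolver; the pipeline name and namespace are placeholders only, not values from this release:
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  generateName: example-run-
spec:
  pipelineRef:
    resolver: cluster
    params:
    - name: kind
      value: pipeline
    - name: name
      value: example-pipeline
    - name: namespace
      value: example-namespace
Rerunning this kind of PipelineRun from the web console is the operation that the fix restores.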
Previously, a bug caused requests to update the deploymentconfigs/scale subresource to fail when a matching admission webhook was configured. With this release, these updates continue without an error. ( OCPBUGS-47766 ) Previously, the installation program did not validate the maximum transmission unit (MTU) for custom networks on Red Hat OpenStack platforms, which led to an installation failure when the MTU was too small. For IPv6, the minimum MTU is 1280, and OVN-Kubernetes requires an additional 100 bytes for encapsulation overhead. With this release, the installation program validates the MTU of Red Hat OpenStack custom networks. ( OCPBUGS-41815 ) 1.9.2.2. Updating To update an OpenShift Container Platform 4.15 cluster to this latest release, see Updating a cluster by using the CLI . 1.9.3. RHSA-2025:1128 - OpenShift Container Platform 4.15.45 bug fix and security update Issued: 12 February 2025 OpenShift Container Platform release 4.15.45, which includes security updates, is now available. The list of bug fixes that are included in this update is documented in the RHSA-2025:1128 advisory. The RPM packages that are included in this update are provided by the RHSA-2025:1130 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.15.45 --pullspecs 1.9.3.1. Bug fixes Previously, crun failed to stop a container if you opened a terminal session and then disconnected from it. With this release, the issue is resolved. ( OCPBUGS-48751 ) Previously, every time a subscription was reconciled, the OLM catalog Operator requested a full view of the catalog metadata from the catalog source pod of the subscription. These requests caused performance issues for the catalog pods. With this release, the OLM catalog Operator now uses a local cache that is refreshed periodically and reused by all subscription reconciliations, so that the performance issue for the catalog pods no longer persists. ( OCPBUGS-48697 ) Previously, when you used the Form View to edit Deployment or DeploymentConfig API objects on the OpenShift Container Platform web console, duplicate ImagePullSecrets parameters existed in the YAML configuration for either object. With this release, a fix ensures that duplicate ImagePullSecrets parameters do not get automatically added for either object. ( OCPBUGS-48592 ) 1.9.3.2. Updating To update an OpenShift Container Platform 4.15 cluster to this latest release, see Updating a cluster by using the CLI . 1.9.4. RHSA-2025:0646 - OpenShift Container Platform 4.15.44 bug fix and security update Issued: 29 January 2025 OpenShift Container Platform release 4.15.44, which includes security updates, is now available. The list of bug fixes that are included in this update is documented in the RHSA-2025:0646 advisory. The RPM packages that are included in this update are provided by the RHSA-2025:0648 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.15.44 --pullspecs 1.9.4.1. Bug fixes Previously, East to West pod traffic over the Geneve overlay could stop working between one or multiple nodes, which prevented pods from reaching pods on other nodes. With this release, the issue is resolved.
( OCPBUGS-47799 ) Previously, when installing a cluster on IBM Cloud(R) into an existing VPC, the installation program retrieved an unsupported VPC region. Attempting to install into a supported VPC region that follows the unsupported VPC region alphabetically caused the installation program to crash. With this release, the installation program is updated to ignore any VPC regions that are not fully available during resource lookups. ( OCPBUGS-44259 ) 1.9.4.2. Updating To update an OpenShift Container Platform 4.15 cluster to this latest release, see Updating a cluster by using the CLI . 1.9.5. RHSA-2025:0121 - OpenShift Container Platform 4.15.43 bug fix and security update Issued: 15 January 2025 OpenShift Container Platform release 4.15.43, which includes security updates, is now available. The list of bug fixes that are included in this update is documented in the RHSA-2025:0121 advisory. The RPM packages that are included in this update are provided by the RHBA-2025:0125 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.15.43 --pullspecs 1.9.5.1. Bug fixes Previously, a machine controller failed to save the VMware vSphere task ID of an instance template clone operation. This caused the machine to go into the Provisioning state and to power off. With this release, the VMware vSphere machine controller can detect and recover from this state. ( OCPBUGS-48105 ) Previously, installation of an AWS cluster failed in certain environments on existing subnets when the MachineSet object's parameter publicIp was explicitly set to false . With this release, a fix ensures that a configuration value set for publicIp no longer causes issues when the installation program provisions machines for your AWS cluster in certain environments. ( OCPBUGS-47680 ) Previously, the IDs used to determine the number of rows in a Dashboard table were not unique and some rows would be combined if their IDs were the same. With this release, the ID uses more information to prevent duplicate IDs and the table can display each expected row. ( OCPBUGS-47646 ) Previously, the algorithm for calculating the priority of machine removal equated Machines over a specific age to Machines annotated as preferred for removal. With this release, the priority of unmarked Machines sorted by age is reduced to avoid conflict with those explicitly marked, and the algorithm has been updated to ensure age order is guaranteed for Machines up to ten years old. ( OCPBUGS-46080 ) Previously, in managed services, audit logs are sent to a local webhook service. Control plane deployments sent traffic through konnectivity and attempted to send the audit webhook traffic through the konnectivity proxies - openshift-apiserver and oauth-openshift . With this release, the audit-webhook is in the list of no_proxy hosts for the affected pods, and the audit log traffic that is sent to the audit-webhook is successfully sent. ( OCPBUGS-46075 ) 1.9.5.2. Updating To update an OpenShift Container Platform 4.15 cluster to this latest release, see Updating a cluster by using the CLI . 1.9.6. RHSA-2024:11562 - OpenShift Container Platform 4.15.42 bug fix and security update Issued: 02 January 2025 OpenShift Container Platform release 4.15.42, which includes security updates, is now available. The list of bug fixes that are included in this update is documented in the RHSA-2024:11562 advisory. 
The RPM packages that are included in this update are provided by the RHBA-2024:11565 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.15.42 --pullspecs 1.9.6.1. Bug fixes Previously, when the webhook token authenticator was enabled and had the authorization type set to None , the OpenShift Container Platform web console would consistently crash. With this release, the issue is resolved. ( OCPBUGS-46482 ) Previously, when you attempted to use the Operator Lifecycle Manager (OLM) to upgrade an Operator, the upgrade was blocked and an error validating existing CRs against new CRD's schema message was generated. An issue existed with OLM, whereby OLM erroneously identified incompatibility issues validating existing custom resources (CRs) against the new Operator version's custom resource definitions (CRDs). With this release, the validation is corrected so that Operator upgrades are no longer blocked. ( OCPBUGS-46479 ) Previously, the images for custom OS layering were not present when the OS was on Red Hat Enterprise Linux CoreOS (RHCOS) 4.15, preventing some customers from upgrading from RHCOS 4.15 to RHCOS 4.16. This release adds Azure Container Registry (ACR) and Google Container Registry (GCR) image credential provider RPMs to RHCOS 4.15. ( OCPBUGS-46063 ) Previously, you could not configure your Amazon Web Services DHCP option set with a custom domain name containing a period ( . ) as the final character, as trailing periods were not allowed in a Kubernetes object name. With this release, trailing periods are allowed in a domain name in a DHCP option set. ( OCPBUGS-46034 ) Previously, when openshift-sdn pods were deployed during the OpenShift Container Platform upgrading process, the Open vSwitch (OVS) storage table was cleared. This issue occurred on OpenShift Container Platform 4.15.19 and later versions. Ports for existing pods had to be re-created and this disrupted numerous services. With this release, a fix ensures that the OVS tables do not get cleared and pods do not get disconnected during a cluster upgrade operation. ( OCPBUGS-45955 ) Previously, you could not remove a finally pipeline task from the edit Pipeline form if you created a pipeline with only one finally task. With this release, you can remove the finally task from the edit Pipeline form and the issue is resolved. ( OCPBUGS-45950 ) Previously, the aws-sdk-go-v2 software development kit (SDK) failed to authenticate an AssumeRoleWithWebIdentity API operation on an AWS Security Token Service (STS) cluster. With this release, the pod identity webhook now includes a default region, and this issue no longer persists. ( OCPBUGS-45940 ) Previously, the installation program populated the network.devices , template and workspace fields in the spec.template.spec.providerSpec.value section of the VMware vSphere control plane machine set custom resource (CR). These fields should be set in the vSphere failure domain, and the installation program populating them caused unintended behaviors. Updating these fields did not trigger an update to the control plane machines, and these fields were cleared when the control plane machine set was deleted. With this release, the installation program is updated to no longer populate values that are included in the failure domain configuration. 
If these values are not defined in a failure domain configuration, for instance on a cluster that is updated to OpenShift Container Platform 4.15 from an earlier version, the values defined by the installation program are used. ( OCPBUGS-45839 , OCPBUGS-37064 ) Previously, the ClusterTasks were listed on the Pipelines builder and ClusterTask list pages in the Tasks navigation menu. With this release, the ClusterTask functionality is deprecated from Pipelines 1.17 and the ClusterTask dependency is removed from the static plug-in. On the Pipelines builder page, you will see only the tasks present in the Namespace and Community tasks. ( OCPBUGS-45248 ) 1.9.6.2. Updating To update an OpenShift Container Platform 4.15 cluster to this latest release, see Updating a cluster by using the CLI . 1.9.7. RHSA-2024:10839 - OpenShift Container Platform 4.15.41 bug fix and security update Issued: 12 December 2024 OpenShift Container Platform release 4.15.41, which includes security updates, is now available. The list of bug fixes that are included in this update is documented in the RHSA-2024:10839 advisory. The RPM packages that are included in this update are provided by the RHBA-2024:10842 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.15.41 --pullspecs 1.9.7.1. Bug fixes Previously, the Single-Root I/O Virtualization (SR-IOV) Operator did not expire the acquired lease during the Operator's shutdown. This impacted a new instance of the Operator, because the new instance had to wait for the lease to expire before the new instance could work. With this release, an update to the Operator shutdown logic ensures that the Operator expires the lease when the Operator is shutting down. ( OCPBUGS-43361 ) Previously, when you used the Agent-based Installer to install a cluster on a node that had an incorrect date, the cluster installation failed. With this release, a patch is applied to the Agent-based Installer live ISO time synchronization. The patch fixes the date issue and configures the /etc/chrony.conf file with the list of additional Network Time Protocol (NTP) servers, so that you can set any of these additional NTP servers in the agent-config.yaml without experiencing a cluster installation issue. ( OCPBUGS-45207 ) Previously, hosted control planes-based clusters were unable to authenticate through the oc login command. The web browser displayed an error when it attempted to retrieve the token after selecting Display Token . With this release, cloud.ibm.com and other cloud-based endpoints are no longer proxied and authentication is successful. ( OCPBUGS-44278 ) Previously, the Machine API (MAPI) provider for IBM Cloud checked only the first group of subnets (50) when searching for subnet details by name. With this release, the search provides pagination support to search all subnets. ( OCPBUGS-43675 ) 1.9.7.2. Updating To update an OpenShift Container Platform 4.15 cluster to this latest release, see Updating a cluster by using the CLI . 1.9.8. RHSA-2024:10142 - OpenShift Container Platform 4.15.39 bug fix and security update Issued: 26 November 2024 OpenShift Container Platform release 4.15.39, which includes security updates, is now available. The list of bug fixes that are included in this update is documented in the RHSA-2024:10142 advisory.
The RPM packages that are included in this update are provided by the RHSA-2024:10145 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.15.39 --pullspecs 1.9.8.1. Bug fixes Previously, the approval process for a certificate signing request (CSR) failed because the node name and internal DNS entry for a CSR did not match the case-sensitive check. With this release, an update to the approval process for CSRs avoids the case-sensitive check and a CSR with a matching node name and internal DNS entry does not fail the matching-pair check. ( OCPBUGS-44705 ) Previously, when the Cluster Resource Override Operator was unable to completely deploy its operand controller, the Operator would restart the process. Each time the Operator attempted the deployment process, the Operator created a new set of secrets. This resulted in a large number of secrets created in the namespace where the Cluster Resource Override Operator was deployed. With this release, the fixed version correctly processes the service account annotations and only one set of secrets is created. ( OCPBUGS-44378 ) Previously, when the Cluster Version Operator (CVO) pod restarted while it was initializing the synchronization work, the Operator interrupted the guard of the blocked upgrade request. The blocked request was unexpectedly accepted. With this release, the guard of the blocked upgrade request continues after the CVO restarts. ( OCPBUGS-44328 ) Previously, enabling Encapsulated Security Payload (ESP) hardware offload by using IPSec on attached interfaces in Open vSwitch (OVS) interrupted connectivity because of a bug in OVS. With this release, OpenShift Container Platform automatically disables ESP hardware offload on the OVS-attached interfaces so that the issue is resolved. ( OCPBUGS-44240 ) 1.9.8.2. Updating To update an OpenShift Container Platform 4.15 cluster to this latest release, see Updating a cluster by using the CLI . 1.9.9. RHSA-2024:8991 - OpenShift Container Platform 4.15.38 bug fix and security update Issued: 13 November 2024 OpenShift Container Platform release 4.15.38, which includes security updates, is now available. The list of bug fixes that are included in this update is documented in the RHSA-2024:8991 advisory. The RPM packages that are included in this update are provided by the RHSA-2024:8994 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.15.38 --pullspecs 1.9.9.1. Bug fixes Previously, an invalid or unreachable identity provider (IDP) blocked updates to hosted control planes. With this release, the ValidIDPConfiguration condition in the HostedCluster object now reports any IDP errors and these errors do not block updates of hosted control planes. ( OCPBUGS-44201 ) Previously, the Machine Config Operator (MCO) vSphere resolv-prepender script used systemd directives that were not compatible with old boot image versions of OpenShift Container Platform 4. With this release, these OpenShift Container Platform nodes are compatible with old boot images with one of the following solutions: scaling with a boot image 4.13 or later, by using manual intervention, or upgrading to a release with this fix. 
( OCPBUGS-42110 ) Previously, when the Image Registry Operator was configured in Azure with networkAccess:Internal , you could not successfully set managementState to Removed in the Operator configuration. This issue was caused by an authorization error that occurred when the Operator started to delete the storage. With this release, the Operator successfully deletes the storage account, which automatically deletes the storage container. The managementState status in the Operator configuration is updated to the Removed state. ( OCPBUGS-43656 ) 1.9.9.2. Updating To update an OpenShift Container Platform 4.15 cluster to this latest release, see Updating a cluster by using the CLI . 1.9.10. RHSA-2024:8425 - OpenShift Container Platform 4.15.37 bug fix and security update Issued: 30 October 2024 OpenShift Container Platform release 4.15.37, which includes security updates, is now available. The list of bug fixes that are included in this update is documented in the RHSA-2024:8425 advisory. The RPM packages that are included in this update are provided by the RHSA-2024:8428 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.15.37 --pullspecs 1.9.10.1. Enhancements Previously, you could not control log levels for the internal component that selects IP addresses for cluster nodes. With this release, you can now enable debug log levels so that you can either increase or decrease log levels on demand. To adjust log levels, you must create a config map manifest file with a configuration analogous to the following:
apiVersion: v1
data:
  enable-nodeip-debug: "true"
kind: ConfigMap
metadata:
  name: logging
  namespace: openshift-vsphere-infra
# ...
( OCPBUGS-37704 ) 1.9.10.2. Bug fixes Previously, when you attempted to use the oc import-image command to import an image in a hosted control planes cluster, the command failed because of access issues with a private image registry. With this release, an update to openshift-apiserver pods in a hosted control planes cluster resolves names that use the data plane so that the oc import-image command now works as expected with private image registries. ( OCPBUGS-43468 ) Previously, when you used the must-gather tool, a Multus Container Network Interface (CNI) log file, multus.log , was stored in a node's file system. This situation caused the tool to generate unnecessary debug pods in a node. With this release, the Multus CNI no longer creates a multus.log file, and instead uses a CNI plugin pattern to inspect any logs for Multus DaemonSet pods in the openshift-multus namespace. ( OCPBUGS-43057 ) Previously, the Ingress and DNS operators failed to start correctly because of rotating root certificates. With this release, the Ingress and DNS operator Kubeconfigs are conditionally managed by using the annotation that defines when the PKI requires management and the issue is resolved. ( OCPBUGS-42992 ) Previously, when you configured the image registry to use a Microsoft Azure storage account that was located in a resource group other than the cluster's resource group, the Image Registry Operator would become degraded. This occurred because of a validation error. With this release, an update to the Operator allows for authentication only by using a storage account key. Validation of other authentication requirements is not required.
( OCPBUGS-42934 ) Previously for hosted control planes (HCP), a cluster that used mirroring release images might result in existing node pools to use the hosted cluster's operating system version instead of the NodePool version. With this release, a fix ensures that node pools use their own versions. ( OCPBUGS-42881 ) Previously, creating cron jobs to create pods for your cluster caused the component that fetches the pods to fail. Because of this issue, the Topology page on the OpenShift Container Platform web console failed. With this release, a 3 second delay is configured for the component that fetches pods that are generated from the cron job so that this issue no longer exists. ( OCPBUGS-42611 ) Previously, the Ingress Operator prevented upgrades from OpenShift Container Platform 4.15 to 4.16 if any certificate type in the default certificate chain used the SHA-1 hashing algorithm. With this release, the Ingress Operator now only checks default leaf certificates for SHA-1 hash values, so that intermediate and root certificates in the default chain can continue to use SHA-1 hash values without blocking cluster upgrades. ( OCPBUGS-42480 ) Previously, when installing a cluster on bare metal using installer provisioned infrastructure, the installation could time out if the network to the bootstrap virtual machine is slow. With this update, the timeout duration has been increased to cover a wider range of network performance scenarios. ( OCPBUGS-42335 ) Previously, Ironic inspection failed if special or invalid characters existed in the serial number of a block device. This occurred because the lsblk command failed to escape the characters. With this release, the command escapes the characters so this issue no longer persists. ( OCPBUGS-39018 ) 1.9.10.3. Updating To update an OpenShift Container Platform 4.15 cluster to this latest release, see Updating a cluster by using the CLI . 1.9.11. RHSA-2024:7594 - OpenShift Container Platform 4.15.36 bug fix and security update Issued: 09 October 2024 OpenShift Container Platform release 4.15.36, which includes security updates, is now available. The list of bug fixes that are included in this update is documented in the RHSA-2024:7594 advisory. There are no RPM packages for this release. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.15.36 --pullspecs 1.9.11.1. Bug fixes Previously, if a connection on port 9637 for Windows nodes was refused, the Kubelet Service Monitor threw a target down alert because CRIO doesn't run on Windows nodes. With this release, Windows nodes are excluded from the Kubelet Service Monitor. ( OCPBUGS-42586 ) Previously, a change in the ordering of the TextInput parameters for PatternFly v4 and v5 caused the until field to be improperly filled and you could not edit it. With this release, the until field is editable so you can input the correct information. ( OCPBUGS-42384 ) Previously, when the Node Tuning Operator (NTO) was configured using performance profiles, it created the ocp-tuned-one-shot systemd service. The service ran before the kubelet and blocked the service execution. The systemd service invoked Podman, which used the NTO image. If the NTO image was not present, Podman tried to fetch the image. With this release, support is added for cluster-wide proxy environment variables that are defined in the /etc/mco/proxy.env environment. 
This allows Podman to pull NTO images in environments that need the HTTP/HTTPS proxy for out-of-cluster connections. ( OCPBUGS-42284 ) Previously, a node registration issue prevented you from using Redfish Virtual Media to add an xFusion bare-metal node to your cluster. The issue occurred because the hardware was not fully compliant with Redfish. With this release, you can add xFusion bare-metal nodes to your cluster. ( OCPBUGS-38798 ) 1.9.11.2. Updating To update an OpenShift Container Platform 4.15 cluster to this latest release, see Updating a cluster by using the CLI . 1.9.12. RHSA-2024:7179 - OpenShift Container Platform 4.15.35 bug fix and security update Issued: 02 October 2024 OpenShift Container Platform release 4.15.35, which includes security updates, is now available. The list of bug fixes that are included in this update is documented in the RHSA-2024:7179 advisory. The RPM packages that are included in this update are provided by the RHSA-2024:7182 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.15.35 --pullspecs 1.9.12.1. Bug fixes Previously, when you upgraded an OpenShift Container Platform cluster from 4.14 to 4.15, the vCenterCluster parameter was not populated with a value in the use-connection-form.ts configuration file. As a result, the VMware vSphere GUI did not display VMware vSphere vCenter information. With this release, an update to the Infrastructure custom resource (CR) ensures it has that the GUI checks the cloud-provider-config ConfigMap for the vCenterCluster value. ( OCPBUGS-42144 ) Previously, deploying a self-managed private hosted cluster on Amazon Web Services (AWS) fails because the bootstrap-kubeconfig file uses an incorrect kube-apiserver port. As a result, the AWS instances are provisioned but cannot join the hosted cluster as nodes. With this release, the issue is fixed so this issue no longer occurs. ( OCPBUGS-42214 ) Previously, when the hosted cluster controllerAvailabilityPolicy was set to SingleReplica , the podAntiAffinity property on networking components blocked the availability of the components to a cluster. With this release, the issue is resolved. ( OCPBUGS-42020 ) Previously, adding IPv6 support for user-provisioned installation platforms caused an issue with naming Red Hat OpenStack Platform (RHOSP) resources, especially when you run two user-provisioned installation clusters on the same RHOSP platform. This happened because the two clusters share the same names for network, subnets, and router resources. With this release, all the resources names for a cluster remain unique for that cluster so no interfere occurs. ( OCPBUGS-42011 ) Previously, when you configured a hosted cluster to use an identity provider (IdP) that has either an http or https endpoint, the IdP hostname did not resolve when sent through the proxy. With this release, a DNS lookup operation checks the IdP before IdP traffic is sent through a proxy, so that IdPs with hostnames can only be resolved by the data plane and verified by the Control Plane Operator (CPO). ( OCPBUGS-41373 ) Previously, a group ID was not added to the /etc/group within the container when the spec.securityContext.runAsGroup attribute was set in the Pod resource. With this release, this issue is fixed. 
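( OCPBUGS-41245 )
To illustrate the runAsGroup fix above, the following is a minimal, hypothetical pod manifest that sets a group ID for the container processes; the image, user ID, and group ID are example values only, not values from this release:
apiVersion: v1
kind: Pod
metadata:
  name: example-runasgroup
spec:
  securityContext:
    runAsUser: 1000   # UID for the container processes
    runAsGroup: 3000  # primary GID; with this fix, a matching entry is added to /etc/group in the container
  containers:
  - name: app
    image: registry.access.redhat.com/ubi9/ubi-minimal
    command: ["/bin/sh", "-c", "sleep infinity"]
Running id inside such a container shows gid=3000 as the primary group of the process.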
Previously, the order of an Ansible playbook was modified to run before the metadata.json file was created, which caused issues with older versions of Ansible. With this release, the playbook is more tolerant of missing files and the issue is resolved. ( OCPBUGS-39288 ) Previously, dynamic plugins using PatternFly 4 referenced variables that are not available in OpenShift Container Platform 4.15 and later. This caused contrast issues for Red Hat Advanced Cluster Management (RHACM) in dark mode. With this update, older chart styles are now available to support PatternFly 4 charts used by dynamic plugins. ( OCPBUGS-38537 ) Previously, proxying for identity provider (IdP) communication occurred in the Konnectivity agent. By the time traffic reached Konnectivity, its protocol and hostname were no longer available. As a consequence, proxying was not done correctly for the OAUTH server pod. It did not distinguish between protocols that require proxying ( http or https ) and protocols that do not (LDAP). In addition, it did not honor the no_proxy variable that is configured in the HostedCluster.spec.configuration.proxy spec. With this release, you can configure the proxy on the Konnectivity sidecar of the OAUTH server so that traffic is routed appropriately, honoring your no_proxy settings. As a result, the OAUTH server can communicate properly with IdPs when a proxy is configured for the hosted cluster. ( OCPBUGS-38058 ) Previously, the Assisted Installer did not reload new data from the Assisted Service when the Assisted Installer checked control plane nodes for readiness and a conflict existed with a write operation from the Assisted Installer controller. This conflict prevented the Assisted Installer from detecting a node that was marked by the Assisted Installer controller as Ready because the Assisted Installer relied on older information. With this release, the Assisted Installer can receive the newest information from the Assisted Service, so that the Assisted Installer can accurately detect the status of each node. ( OCPBUGS-38003 ) 1.9.12.2. Updating To update an OpenShift Container Platform 4.15 cluster to this latest release, see Updating a cluster by using the CLI . 1.9.13. RHSA-2024:6818 - OpenShift Container Platform 4.15.34 bug fix and security update Issued: 25 September 2024 OpenShift Container Platform release 4.15.34, which includes security updates, is now available. The list of bug fixes that are included in this update is documented in the RHSA-2024:6818 advisory. The RPM packages that are included in this update are provided by the RHBA-2024:6821 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.15.34 --pullspecs 1.9.13.1. Enhancements The following enhancement is included in this z-stream release: 1.9.13.1.1. Insights Operator collects new metric The Insights Operator (IO) can now collect data from the haproxy_exporter_server_threshold metric. ( OCPBUGS-38593 ) 1.9.13.2. Bug fixes Previously, when the Operator Lifecycle Manager (OLM) evaluated a potential update, it used the dynamic client to list all custom resource (CR) instances in the cluster. For clusters with a large number of CRs, that could result in timeouts from the apiserver and stranded updates. With this release, the issue is resolved.
( OCPBUGS-41819 ) Previously, if you created a hosted cluster by using a proxy for the purposes of making the cluster reach a control plane from a compute node, the compute node would be unavailable to the cluster. With this release, the proxy settings are updated for the node so that the node can use a proxy to successfully communicate with the control plane. ( OCPBUGS-41947 ) Previously, an AdditionalTrustedCA field that was specified in the Hosted Cluster image configuration was not reconciled into the openshift-config namespace as expected and the component was not available. With this release, the issue is resolved. ( OCPBUGS-41809 ) Previously, when you created a LoadBalancer service for the Ingress Operator, a log message was generated that stated the change was not effective. This log message should only trigger for a change to an Infra custom resource. With this release, this log message is no longer generated when you create a LoadBalancer service for the Ingress Operator. ( OCPBUGS-41635 ) Previously, if an IP address was assigned to an egress node and was deleted, then pods selected by that egress IP address might have had incorrect routing information to that egress node. With this release, the issue has been resolved. ( OCPBUGS-41340 ) Previously, proxying for Operators that run in the control plane of a hosted cluster was done through proxy settings on the Konnectivity agent pod that runs in the data plane. As a consequence, it was not possible to distinguish whether proxying was needed based on application protocol. For parity with OpenShift Container Platform, IDP communication through HTTPS or HTTP should be proxied, but LDAP communication should not be proxied. This type of proxying also ignores NO_PROXY entries that rely on host names because by the time traffic reaches the Konnectivity agent, only the destination IP address is available. With this release, in hosted clusters, proxy is invoked in the control plane by konnectivity-https-proxy and konnectivity-socks5-proxy , and proxying traffic is stopped from the Konnectivity agent. As a result, traffic that is destined for LDAP servers is no longer proxied. Other HTTPS or HTTPS traffic is proxied correctly. The NO_PROXY setting is honored when you specify hostnames. ( OCPBUGS-38065 ) 1.9.13.3. Updating To update an OpenShift Container Platform 4.15 cluster to this latest release, see Updating a cluster by using the CLI . 1.9.14. RHSA-2024:6685 - OpenShift Container Platform 4.15.33 bug fix and security update Issued: 19 September 2024 OpenShift Container Platform release 4.15.33, which includes security updates, is now available. The list of bug fixes that are included in this update is documented in the RHSA-2024:6685 advisory. There are no RPM packages in this release. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.15.33 --pullspecs 1.9.14.1. Updating To update an OpenShift Container Platform 4.15 cluster to this latest release, see Updating a cluster by using the CLI . 1.9.15. RHSA-2024:6637 - OpenShift Container Platform 4.15.32 bug fix and security update Issued: 18 September 2024 OpenShift Container Platform release 4.15.32, which includes security updates, is now available. The list of bug fixes that are included in this update is documented in the RHSA-2024:6637 advisory. 
The RPM packages that are included in this update are provided by the RHBA-2024:6640 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.15.32 --pullspecs 1.9.15.1. Bug fixes Previously, the Cloud Credential Operator (CCO) would produce an error when starting or restarting if there was a large number of secrets in the cluster, because all of the secrets were fetched at once. With this release, the CCO fetches the secrets in batches of 100 and the issue is resolved. ( OCPBUGS-41235 ) Previously, the ControlPlaneMachineSet (CPMS) checked templates and the resource pool based on the full vCenter path. This caused the CPMS to start when it was not needed. With this release, the CPMS also checks the file name and the issue is resolved. ( OCPBUGS-24632 ) 1.9.15.2. Updating To update an OpenShift Container Platform 4.15 cluster to this latest release, see Updating a cluster by using the CLI . 1.9.16. RHSA-2024:6409 - OpenShift Container Platform 4.15.31 bug fix and security update Issued: 11 September 2024 OpenShift Container Platform release 4.15.31, which includes security updates, is now available. The list of bug fixes that are included in this update is documented in the RHSA-2024:6409 advisory. The RPM packages that are included in this update are provided by the RHBA-2024:6414 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.15.31 --pullspecs 1.9.16.1. Bug fixes Previously, the spec.noProxy field from the cluster-wide proxy was not considered when the Cluster Monitoring Operator (CMO) configured proxy capabilities for Prometheus remote write endpoints. With this release, the CMO no longer configures proxy capabilities for any remote write endpoints whose URL would bypass the proxy according to the noProxy field. ( OCPBUGS-39172 ) Previously, utilization cards displayed limit in a way that incorrectly implied a relationship between capacity and limits. With this release, the position of limit is changed to remove this implication. ( OCPBUGS-39085 ) Previously, the openvswitch service used older cluster configurations after a cluster upgrade and this caused the openvswitch service to stop. With this release, the openvswitch service is now restarted after a cluster upgrade so that the service uses the newer cluster configurations. ( OCPBUGS-34842 ) Previously, after you submitted the same value into the VMware vSphere configuration dialog, cluster nodes unintentionally rebooted. With this release, nodes reboot only after you enter new values into the dialog, not the same values. ( OCPBUGS-33938 ) Previously, if a virtual machine (VM) was deleted and the network interface controller (NIC) still existed for that VM, the Microsoft Azure VM verification check failed. With this release, the verification check can now handle this situation by gracefully processing the issue without failing. ( OCPBUGS-31467 ) Previously, Red Hat HyperShift periodic conformance jobs failed because of changes to the core operating system. These failed jobs caused the OpenShift API deployment to fail. With this release, an update recursively copies individual trusted certificate authority (CA) certificates instead of copying a single file, so that the periodic conformance jobs succeed and the OpenShift API runs as expected. ( OCPBUGS-38943 )
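As an aside that is not part of the original advisory, the cluster-wide noProxy value referenced in the OCPBUGS-39172 fix above can be inspected with the following illustrative commands. The cluster-monitoring-config ConfigMap is only present if you created it to configure features such as remote write.
USD oc get proxy cluster -o jsonpath='{.spec.noProxy}'
USD oc -n openshift-monitoring get configmap cluster-monitoring-config -o yaml
Comparing the noProxy entries with your remote write URLs shows which endpoints the Cluster Monitoring Operator now expects to bypass the proxy.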
Previously, the grace period for a node to become ready was not aligned with upstream behavior. This grace period sometimes caused a node to cycle between Ready and Not ready states. With this release, the issue is fixed so that the grace period does not cause a node to cycle between the two states. ( OCPBUGS-39077 ) 1.9.16.2. Known issues On Red Hat OpenShift Service on AWS, node pools might stop scaling workloads or updating configurations for a cluster that uses the 4.15.23 or later version of the Hosted control planes service. Depending on the version of components that interact with your cluster, you can resolve this issue by completing the steps in one of the following Red Hat Knowledgebase articles: ROSA HCP clusters fail to add new nodes in MachinePool version older than 4.15.23 ROSA upgrade issue mitigation for HOSTEDCP-1941 ( OCPBUGS-39463 ) 1.9.16.3. Updating To update an OpenShift Container Platform 4.15 cluster to this latest release, see Updating a cluster by using the CLI . 1.9.17. RHSA-2024:6013 - OpenShift Container Platform 4.15.30 bug fix and security update Issued: 5 September 2024 OpenShift Container Platform release 4.15.30, which includes security updates, is now available. The list of bug fixes that are included in this update is documented in the RHSA-2024:6013 advisory. The RPM packages that are included in this update are provided by the RHSA-2024:6016 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.15.30 --pullspecs 1.9.17.1. Bug fixes Previously, when running the oc logs -f <pod> command, the logs would not output anything after the log file was rotated. With this release, the kubelet continues to output logs after the log file has been rotated, and as a result the issue is resolved. ( OCPBUGS-38861 ) Previously, an internal timeout would occur when the service account had short-lived credentials. With this release, that timeout is removed, and the timeout is now controlled by the parent context. ( OCPBUGS-38198 ) Previously, setting an invalid .spec.endpoints.proxyUrl attribute in the ServiceMonitor resource would result in breaking, reloading, and restarting Prometheus. This update fixes the issue by validating the proxyUrl attribute and rejecting invalid syntax. ( OCPBUGS-36719 ) Previously, adding parameters to the Pipeline produced an error because the deprecated resource field was added to the payload. With this update, the resource fields have been removed from the payload, and you can add parameters to the Pipeline without getting an error. ( OCPBUGS-33076 ) 1.9.17.2. Updating To update an OpenShift Container Platform 4.15 cluster to this latest release, see Updating a cluster by using the CLI . 1.9.18. RHSA-2024:5439 - OpenShift Container Platform 4.15.29 bug fix and security update Issued: 28 August 2024 OpenShift Container Platform release 4.15.29, which includes security updates, is now available. The list of bug fixes that are included in this update is documented in the RHBA-2024:5751 advisory. The RPM packages that are included in this update are provided by the RHSA-2024:5754 advisory.
Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.15.29 --pullspecs 1.9.18.1. Bug fixes Previously, the ordering of Network Interface Controllers (NICs) in the cloud provider was non-deterministic, which could result in the node using the wrong NIC for communication with the cluster. With this update, the ordering is now consistent. This fix prevents the random ordering that was causing the node networking fail. ( OCPBUGS-38577 ) 1.9.18.2. Updating To update an OpenShift Container Platform 4.15 cluster to this latest release, see Updating a cluster by using the CLI . 1.9.19. RHSA-2024:5439 - OpenShift Container Platform 4.15.28 bug fix and security update Issued: 2024-08-22 OpenShift Container Platform release 4.15.28, which includes security updates, is now available. The list of bug fixes that are included in this update is documented in the RHSA-2024:5439 advisory. The RPM packages that are included in this update are provided by the RHSA-2024:5442 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.15.28 --pullspecs 1.9.19.1. Bug fixes Previously, the OpenShift Container Platform web console failed to restart a bare metal node. This release fixes this issue so that you can restart a bare metal node by using the OpenShift Container Platform web console. ( OCPBUGS-37099 ) 1.9.19.2. Known issues An error might occur when deleting a pod that uses an SR-IOV network device. This error is caused by a change in RHEL 9 where the name of a network interface is added to its alternative names list when it is renamed. As a consequence, when a pod attached to an SR-IOV virtual function (VF) is deleted, the VF returns to the pool with a new unexpected name; for example, dev69 , instead of its original name, ensf0v2 . Although this error is non-fatal, Multus and SR-IOV logs might show the error while the system reboots. Deleting the pod might take a few extra seconds due to this error. ( OCPBUGS-11281 , OCPBUGS-18822 , RHEL-5988 ) 1.9.19.3. Updating To update an OpenShift Container Platform 4.15 cluster to this latest release, see Updating a cluster by using the CLI . 1.9.20. RHSA-2024:5160 - OpenShift Container Platform 4.15.27 bug fix and security update Issued: 15 August 2024 OpenShift Container Platform release 4.15.27, which includes security updates, is now available. The list of bug fixes that are included in this update is documented in the RHSA-2024:5160 advisory. The RPM packages that are included in this update are provided by the RHBA-2024:5163 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.15.27 --pullspecs 1.9.20.1. Enhancements The following enhancement is included in this z-stream release: 1.9.20.1.1. Collecting Ingress controller certificate information The Insights Operator now collects NotBefore and NotAfter date information about all Ingress Controller certificates and aggregates the information into a JSON file in the aggregated/ingress_controllers_certs.json path. ( OCPBUGS-37672 ) 1.9.20.2. 
Bug fixes Previously, when installing a cluster using the Agent-based installation program, generating a large number of manifests prior to installation could fill the Ignition storage, causing the installation to fail. With this update, the Ignition storage has been increased to allow for a much greater number of installation manifests. ( OCPBUGS-33402 ) Previously, when using the Agent-based installation program in a disconnected environment, unnecessary certificates were added to the CA trust bundle. With this update, the CA bundle ConfigMap only contains CAs explicitly specified by the user. ( OCPBUGS-34721 ) Previously, HostedClusterConfigOperator did not delete the ImageDigestMirrorSet (IDMS) object after a user removed the ImageContentSources field from the HostedCluster object. This caused the IDMS object to remain in the HostedCluster object. With this release, HostedClusterConfigOperator removes all IDMS resources in the HostedCluster object so that this issue no longer exists. ( OCPBUGS-37174 ) Previously, when the Cloud Credential Operator checked if passthrough mode permissions were correct, the Operator sometimes received a response from the Google Cloud Platform (GCP) API about an invalid permission for a project. This bug caused the Operator to enter a degraded state that in turn impacted the installation of the cluster. With this release, the Cloud Credential Operator checks specifically for this error so that it diagnoses it separately without impacting the installation of the cluster. ( OCPBUGS-37288 ) Previously, the OVNKubernetesNorthdInactive alert did not fire as expected. With this release, the OVNKubernetesNorthdInactive alert works as expected and the issue is resolved. ( OCPBUGS-36821 ) 1.9.20.3. Updating To update an OpenShift Container Platform 4.15 cluster to this latest release, see Updating a cluster by using the CLI . 1.9.21. RHSA-2024:4955 - OpenShift Container Platform 4.15.25 bug fix and security update Issued: 7 August 2024 OpenShift Container Platform release 4.15.25, which includes security updates, is now available. The list of bug fixes that are included in this update is documented in the RHSA-2024:4955 advisory. The RPM packages that are included in this update are provided by the RHSA-2024:4958 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.15.25 --pullspecs 1.9.21.1. Enhancements The following enhancements are included in this z-stream release: 1.9.21.1.1. Configuring Capacity Reservation by using machine sets OpenShift Container Platform release 4.15.25 introduces support for on-demand Capacity Reservation with Capacity Reservation groups on Microsoft Azure clusters. For more information, see Configuring Capacity Reservation by using machine sets for compute or control plane machine sets. ( OCPCLOUD-1646 ) 1.9.21.2. Bug fixes Previously, a machine config pool (MCP) with a higher maxUnavailable value than the number of unavailable nodes would cause cordoned nodes to receive updates if they were in a certain position on the node list. With this release, a fix ensures that cordoned nodes are not added to a queue to receive updates. ( OCPBUGS-37629 ) Previously, an errant code change resulted in a duplicated oauth.config.openshift.io item on the Global Configuration page. With this release, the duplicated item is removed. ( OCPBUGS-37458 )
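For the machine config pool behavior described in the OCPBUGS-37629 fix above, the following commands are an illustrative sketch, not part of the original advisory, of one way to review a pool's maxUnavailable setting and list cordoned nodes; the worker pool name and role label are assumptions that match a default cluster.
USD oc get mcp worker -o jsonpath='{.spec.maxUnavailable}'
USD oc get nodes -l node-role.kubernetes.io/worker=
Cordoned nodes report SchedulingDisabled in the STATUS column and, with this fix, are no longer queued to receive updates.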
Previously, the Cluster Network Operator (CNO) IPsec mechanism behaved incorrectly on a cluster that had multiple worker machine config pools. With this release, the CNO IPsec mechanism works as intended for a cluster with multiple worker machine config pools. This fix does not apply to updating an IPsec-enabled cluster with multiple paused machine config pools. ( OCPBUGS-37205 ) Previously, the Open vSwitch (OVS) pinning procedure set the CPU affinity of the main thread, but other CPU threads did not pick up this affinity if they had already been created. As a consequence, some OVS threads did not run on the correct CPU set, which might interfere with the performance of pods with a Quality of Service (QoS) class of Guaranteed . With this release, the OVS pinning procedure updates the affinity of all the OVS threads, ensuring that all OVS threads run on the correct CPU set. ( OCPBUGS-37196 ) Previously, when you created or deleted large volumes of service objects simultaneously, the service controller's ability to process each service sequentially would slow down. This caused timeout issues for the service controller and backlog issues for the objects. With this release, the service controller can now process up to 10 service objects simultaneously to reduce the backlog and timeout issues. ( OCPBUGS-36821 ) 1.9.21.3. Known issues On clusters with the SR-IOV Network Operator installed and configured, pods with a secondary interface of an SR-IOV VF fail to create a pod sandbox and do not function. ( OCPBUGS-38090 ) 1.9.21.4. Updating To update an OpenShift Container Platform 4.15 cluster to this latest release, see Updating a cluster by using the CLI . 1.9.22. RHSA-2024:4850 - OpenShift Container Platform 4.15.24 bug fix and security update Issued: 31 July 2024 OpenShift Container Platform release 4.15.24, which includes security updates, is now available. The list of bug fixes that are included in this update is documented in the RHSA-2024:4850 advisory. The RPM packages that are included in this update are provided by the RHSA-2024:4853 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.15.24 --pullspecs 1.9.22.1. Bug fixes Previously, the TuneD daemon could unnecessarily reload an additional time after a Tuned custom resource (CR) update. With this release, the Tuned object has been removed and the TuneD (daemon) profiles are carried directly in the Tuned Profile Kubernetes objects. As a result, the issue has been resolved. ( OCPBUGS-36870 ) 1.9.22.2. Updating To update an OpenShift Container Platform 4.15 cluster to this latest release, see Updating a cluster by using the CLI . 1.9.23. RHSA-2024:4699 - OpenShift Container Platform 4.15.23 bug fix and security update Issued: 25 July 2024 OpenShift Container Platform release 4.15.23 is now available. The list of bug fixes that are included in this update is documented in the RHSA-2024:4699 advisory. The RPM packages that are included in this update are provided by the RHSA-2024:4702 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.15.23 --pullspecs 1.9.23.1. Enhancements The following enhancement is included in this z-stream release: 1.9.23.1.1. Adding connectTimeout tuning option to Ingress Controller API The IngressController API is updated with a new tuning option, ingresscontroller.spec.tuningOptions.connectTimeout , which defines how long the router waits for a response when establishing a connection to a backend server. ( OCPBUGS-36208 )
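The following command is an illustrative sketch, not part of the original advisory, of how the new connectTimeout tuning option might be set on the default IngressController; the 5s value is only an example.
USD oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge -p '{"spec":{"tuningOptions":{"connectTimeout":"5s"}}}'
This follows the same merge-patch pattern as the admin-ack command shown later in these notes.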
1.9.23.2. Bug fixes Previously, the OpenSSL versions for the Machine Config Operator and the hosted control plane were not the same. With this release, the FIPS cluster NodePool resource creation for OpenShift Container Platform 4.14 and OpenShift Container Platform 4.15 has been fixed and the issue is resolved. ( OCPBUGS-37266 ) Previously, the operand details displayed information for the first custom resource definition (CRD) that matched by name. With this release, the operand details page displays information for the CRD that matches by name and the version of the operand. ( OCPBUGS-36971 ) Previously, the HyperShift hosted control plane (HCP) would fail to generate ignition because of mismatched Red Hat Enterprise Linux (RHEL) OpenSSL versions used by the HyperShift Control Plane Operator and the Machine Config Operator. With this release, the Red Hat Enterprise Linux (RHEL) OpenSSL versions match correctly and the issue is resolved. ( OCPBUGS-36863 ) Previously, the Ingress Operator could not successfully update the canary route because the Operator did not have permission to update spec.host or spec.subdomain on an existing route. With this release, the required permission is added to the cluster role for the Operator's ServiceAccount and the Ingress Operator can update the canary route. ( OCPBUGS-36466 ) Previously, installing an Operator could sometimes fail if the same Operator had been previously installed and uninstalled. This was due to a caching issue. This bug fix updates Operator Lifecycle Manager (OLM) to correctly install the Operator in this scenario, and as a result this issue no longer occurs. ( OCPBUGS-36451 ) Previously, after installing the Pipelines Operator, Pipeline templates took some time to become available in the cluster, but users were still able to create the Deployment. With this update, the Create button on the Import from Git page is disabled if there is no pipeline template present for the resource selected. ( OCPBUGS-34477 ) 1.9.23.3. Updating To update an OpenShift Container Platform 4.15 cluster to this latest release, see Updating a cluster by using the CLI . 1.9.24. RHSA-2024:4474 - OpenShift Container Platform 4.15.22 bug fix and security update Issued: 18 July 2024 OpenShift Container Platform release 4.15.22 is now available. The list of bug fixes that are included in this update is documented in the RHSA-2024:4474 advisory. There are no RPM packages in this release. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.15.22 --pullspecs 1.9.24.1. Enhancements The following enhancement is included in this z-stream release: 1.9.24.1.1. Introducing TaskRun status Previously, the TaskRun status was not displayed near the TaskRun name on the TaskRun details page. With this update, the TaskRun status is located beside the name of the TaskRun in the page heading. ( OCPBUGS-32156 ) 1.9.24.2. Bug fixes Previously, the HighOverallControlPlaneCPU alert triggered warnings based on criteria for multi-node clusters with high availability.
As a result, misleading alerts were triggered in single-node OpenShift clusters because the configuration did not match the environment criteria. This update refines the alert logic to use single-node OpenShift-specific queries and thresholds and account for workload partitioning settings. As a result, CPU utilization alerts in single-node OpenShift clusters are accurate and relevant to single-node configurations. ( OCPBUGS-35832 ) In an AWS STS cluster, the Cloud Credential Operator (CCO) checks awsSTSIAMRoleARN in CredentialsRequest to create the secret. Previously, CCO logged an error if awsSTSIAMRoleARN was not present, which resulted in multiple errors per second. With this release, CCO does not log the error and the issue is resolved. ( OCPBUGS-36291 ) Previously, if a new deployment was completed at the OSTree level on a host that was identical to the current deployment but on a different stateroot, OSTree treated them as equal. With this release, the OSTree logic is modified and the issue is resolved. ( OCPBUGS-36436 ) Previously, a change of dependency targets introduced in OpenShift Container Platform 4.14 prevented Microsoft Azure OpenShift Container Platform installations from scaling up new nodes after upgrading to later versions. With this release, the issue is resolved for OpenShift Container Platform 4.15. ( OCPBUGS-36550 ) 1.9.24.3. Known issue If the ConfigMap object maximum transmission unit (MTU) is absent in the openshift-network-operator namespace, you must create the ConfigMap object manually with the machine MTU value before you start the live migration. Otherwise, the live migration fails. ( OCPBUGS-35829 ) 1.9.24.4. Updating To update an OpenShift Container Platform 4.15 cluster to this latest release, see Updating a cluster by using the CLI . 1.9.25. RHSA-2024:4321 - OpenShift Container Platform 4.15.21 bug fix and security update Issued: 10 July 2024 OpenShift Container Platform release 4.15.21 is now available. The list of bug fixes that are included in this update is documented in the RHSA-2024:4321 advisory. The RPM packages that are included in this update are provided by the RHBA-2024:4324 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.15.21 --pullspecs 1.9.25.1. Bug fixes Previously, the alertmanager-trusted-ca-bundle config map was not injected into the user-defined Alertmanager container, which prevented the verification of HTTPS web servers receiving alert notifications. With this update, the trusted CA bundle config map is mounted into the Alertmanager container at the /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem path. ( OCPBUGS-36312 ) Previously, the internal image registry did not correctly authenticate users on clusters configured for externalAWS IAM OpenID Connect (OIDC) users. This causes issues for users when pushing or pulling images to and from the internal image registry. With this release, the internal image registry starts by using the SelfSubjectReview API instead of the OpenShift-specific user API. The OpenShift-specific user API is not compatible with external OIDC users. ( OCPBUGS-36287 ) Previously, for clusters upgraded from earlier versions of OpenShift Container Platform, enabling kdump on an OVN-enabled cluster sometimes prevented the node from rejoining the cluster or returning to the Ready state. 
With this release, stale data from earlier OpenShift Container Platform versions is removed, so that nodes can now correctly start and rejoin the cluster. ( OCPBUGS-36258 ) Previously, the OpenShift Container Platform installation program included a pair of slashes ( // ) in a path to a resource pool for a cluster installed on VMware vSphere. This issue caused the ControlPlaneMachineSet (CPMS) Operator to create additional control plane machines. With this release, the pair of slashes is removed to prevent this issue from occurring. ( OCPBUGS-36225 ) Previously, the GrowPart tool locked a device. This prevented Linux Unified Key Setup-on-disk-format (LUKS) devices from being opened and caused the operating system to boot into emergency mode. With this release, the call to the GrowPart tool is removed, so that LUKS devices are not unintentionally locked and the operating system can successfully boot. ( OCPBUGS-35988 ) Previously, a bug in systemd could cause the coreos-multipath-trigger.service unit to hang indefinitely. As a result, the system would never finish booting. With this release, the systemd unit was removed and the issue is fixed. ( OCPBUGS-35749 ) Previously, a transient failure to fetch bootstrap data during machine creation, such as a transient failure to connect to the API server, caused the machine to enter a terminal failed state. With this release, failure to fetch bootstrap data during machine creation is retried until it eventually succeeds. ( OCPBUGS-34665 ) 1.9.25.2. Updating To update an OpenShift Container Platform 4.15 cluster to this latest release, see Updating a cluster by using the CLI . 1.9.26. RHSA-2024:4151 - OpenShift Container Platform 4.15.20 bug fix and security update Issued: 2 July 2024 OpenShift Container Platform release 4.15.20 is now available. The list of bug fixes that are included in this update is documented in the RHSA-2024:4151 advisory. The RPM packages that are included in this update are provided by the RHBA-2024:4154 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.15.20 --pullspecs 1.9.26.1. Enhancements The following enhancements are included in this z-stream release: 1.9.26.1.1. Enabling imagestream builds in disconnected environments This release adds additional TrustedBundles to the OpenShift API server (OAS) container and enables imagestream builds in disconnected environments. ( OCPBUGS-34579 ) 1.9.26.1.2. Collecting the Prometheus and Alert Manager instances by the Insights Operator The Insights Operator (IO) now collects the Prometheus and AlertManager resources in addition to the openshift-monitoring custom resource. ( OCPBUGS-35865 ) 1.9.26.2. Bug fixes Previously, when an optional internal function of the cluster autoscaler was not implemented, the function caused repeated log entries. With this release, the issue has been resolved. ( OCPBUGS-33885 ) Previously, default Operator Lifecycle Manager (OLM) catalog pods remained in a termination state when there was an outage of the node that was being used. With this release, the OLM catalog pods that are backed by a CatalogSource correctly recover from planned and unplanned node maintenance. ( OCPBUGS-35305 ) Previously, what the Azure API returns for a subnet caused the Installer to terminate unexpectedly.
With this release, the code has been updated to handle the old and new data for subnets, as well as to return an error in case the expected information is not found. ( OCPBUGS-35502 ). Previously, AWS HyperShift clusters leveraged their VPC's primary CIDR range to generate security group rules on the data plane. As a consequence, installing AWS HyperShift clusters into an AWS VPC with multiple CIDR ranges caused the generated security group rules to be insufficient. With this release, security group rules are generated based on the provided Machine CIDR range instead to resolve this issue. ( OCPBUGS-35714 ) Previously, in User Provisioned Infrastructure (UPI) or clusters that were upgraded from older versions, failureDomains may be missing in Infrastructure objects which caused certain checks to fail. With this release, a fallback failureDomains is synthesized from cloudConfig if none are available in infrastructures.config.openshift.io . ( OCPBUGS-35732 ) Previously, a rare timing issue could prevent all control plane nodes from being added to an Agent-based cluster during installation. With this update, all control plane nodes are successfully rebooted and added to the cluster during installation. ( OCPBUGS-35894 ) 1.9.26.3. Known issue Compact clusters with 3 masters that are configured to run customer workloads are supported with OpenShift IPI installs on GCP, but not on AWS or Azure. ( OCPBUGS-35359 ) 1.9.26.4. Updating To update an OpenShift Container Platform 4.15 cluster to this latest release, see Updating a cluster by using the CLI . 1.9.27. RHSA-2024:4041 - OpenShift Container Platform 4.15.19 bug fix and security update Issued: 27 June 2024 OpenShift Container Platform release 4.15.19 is now available. The list of bug fixes that are included in this update is documented in the RHSA-2024:4041 advisory. The RPM packages that are included in this update are provided by the RHBA-2024:4044 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.15.19 --pullspecs 1.9.27.1. Bug fixes Previously, when a new version of a Custom Resource Definition (CRD) specified a new conversion strategy, this conversion strategy was expected to successfully convert resources. This was not the case because Operator Lifecycle Manager (OLM) cannot run the new conversion strategies for CRD validation without actually performing the update operation. With this release, the OLM implementation generates a warning message during the update process when CRD validations fail with the existing conversion strategy and the new conversion strategy is specified in the new version of the CRD. ( OCPBUGS-35720 ). Previously, during node reboots, especially during update operations, the node that interacts with the rebooting machine entered a Ready=Unknown state for a short amount of time. This situation caused the Control Plane Machine Set Operator to enter an UnavailableReplicas condition and then an Available=false state. The Available=false state triggers alerts that demand urgent action, but in this case, intervention was only required for a short period of time until the node rebooted. With this release, a grace period for node unreadiness is provided where if a node enters an unready state, the Control Plane Machine Set Operator does not instantly enter an UnavailableReplicas condition or an Available=false state. ( OCPBUGS-34971 ). 
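As an illustrative check related to the OCPBUGS-34971 fix above, and not part of the original advisory, the following commands show one way to watch the Control Plane Machine Set and the Available condition of its Operator while control plane nodes reboot.
USD oc get controlplanemachineset cluster -n openshift-machine-api
USD oc get clusteroperator control-plane-machine-set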
Previously, the OpenShift Cluster Manager container did not have the correct TLS certificates. As a result, image streams could not be used in disconnected deployments. With this update, the TLS certificates are added as projected volumes. ( OCPBUGS-34580 ) Previously, when a serverless function was created in the create serverless form, the BuildConfig was not created. With this update, if the Pipelines Operator is not installed, or if the pipeline resource is not created for a particular resource, or if the pipeline is not added while creating a serverless function, the BuildConfig is created as expected. ( OCPBUGS-34350 ) Previously, the reduction of the network queue did not work as expected for inverted rules such as !ens0 . This happened because the exclamation mark symbol was duplicated in the generated tuned profile. With this release, the duplication no longer occurs so that inverted rules apply as intended. ( OCPBUGS-33929 ) Previously, registry overrides configured by a cluster administrator on the management side applied to non-relevant data-plane components. With this release, registry overrides no longer apply to these components. ( OCPBUGS-33627 ) Previously, when installing a cluster on VMware vSphere, the installation failed if an ESXi host was in maintenance mode because the installation program could not retrieve version information from the host. With this update, the installation program does not attempt to retrieve version information from ESXi hosts that are in maintenance mode, allowing the installation to proceed. ( OCPBUGS-31387 ) 1.9.27.2. Updating To update an OpenShift Container Platform 4.15 cluster to this latest release, see Updating a cluster by using the CLI . 1.9.28. RHSA-2024:3889 - OpenShift Container Platform 4.15.18 bug fix and security update Issued: 18 June 2024 OpenShift Container Platform release 4.15.18 is now available. The list of bug fixes that are included in this update is documented in the RHSA-2024:3889 advisory. The RPM packages that are included in this update are provided by the RHBA-2024:3892 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.15.18 --pullspecs 1.9.28.1. Enhancements The following enhancements are included in this z-stream release: 1.9.28.1.1. Blocking upgrades for SHA1 serving certificates on routes and Ingress Controllers OpenShift Container Platform 4.15 supports SHA1 certificates on routes, but sets Upgradeable=False . Consequently, upgrades from 4.15 to 4.16 cause routes and Ingress Controllers with SHA1 certificates to be rejected. Only serving certificates are affected. For routes, this is the certificate specified in spec.tls.certificate . For Ingress Controllers, this is the serving certificate in the secret specified in spec.defaultCertificate . CA certificates using SHA1 are not impacted. However, due to OCPBUGS-42480 , some Ingress Controllers were incorrectly blocking upgrades if the serving certificate was not using SHA1, but the CA certificate was. To resolve this, upgrade to version 4.15.37 to receive this bug fix. Additionally, this update introduces a new UnservableInFutureVersions status condition to routes that contain a SHA1 certificate. It also adds an admin-gate to block upgrades if this new status is present on any route.
As a result, if cluster administrators have routes that use SHA1 certificates in OpenShift Container Platform 4.15, they must either upgrade these certificates to a supported algorithm or provide an admin-ack for the created admin-gate . This admin-ack allows administrators to proceed with the upgrade without resolving the SHA1 certificate issues, even though the routes will be rejected. The full admin-ack command is: USD oc -n openshift-config patch cm admin-acks --patch '{"data":{"ack-4.15-route-config-not-supported-in-4.16":"true"}}' --type=merge ( OCPBUGS-28928 ). 1.9.28.1.2. Allowing pull secret passwords to contain a colon character This release introduces the ability to include a colon character in your password for the OpenShift Assisted Installer. ( OCPBUGS-34641 ) 1.9.28.1.3. Introducing an etcd defragmentation controller for Hypershift This release introduces an etcd defragmentation controller for hosted clusters on Hypershift. ( OCPBUGS-35002 ) 1.9.28.2. Bug fixes Previously, the OpenShift Container Platform web console terminated unexpectedly if authentication discovery failed on the first attempt. With this update, authentication initialization was updated to retry for up to 5 minutes before failing. ( OCPBUGS-30208 ) Previously, the metal3-ironic and metal3-ironic-inspector pods failed when upgrading to OpenShift Container Platform 4.15.11 from 4.15.8 due to an install failure related to FIPS mode enablement. With this release, the issue has been resolved. ( OCPBUGS-33736 ) Previously, the OpenShift Agent Installer reported installed SATA SSDs as removable and refused to use any of them as installation targets. With this release, removable disks are eligible for installation and the issue has been resolved. ( OCPBUGS-34732 ) Previously, the AWS EFS driver controller returned a runtime error when it provisioned a new volume on an EFS filesystem if pre-existing access points without a POSIX user were present. With this release, the driver has been fixed and the issue has been resolved. ( OCPBUGS-34843 ) Previously, the secrets-store CSI driver on Hypershift was failing to mount secrets due to an issue with the Hypershift CLI. With this release, the driver is able to mount volumes and the issue has been resolved. ( OCPBUGS-34997 ) Previously, in disconnected environments, the HyperShift Operator ignored registry overrides. As a consequence, changes to node pools were ignored, and node pools encountered errors. With this update, the metadata inspector works as expected during the HyperShift Operator reconciliation, and the override images are properly populated. ( OCPBUGS-35074 ) 1.9.28.3. Updating To update an OpenShift Container Platform 4.15 cluster to this latest release, see Updating a cluster by using the CLI . 1.9.29. RHBA-2024:3673 - OpenShift Container Platform 4.15.17 bug fix and security update Issued: 11 June 2024 OpenShift Container Platform release 4.15.17 is now available. The list of bug fixes that are included in this update is documented in the RHBA-2024:3673 advisory. The RPM packages that are included in this update are provided by the RHSA-2024:3676 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.15.17 --pullspecs 1.9.29.1. Enhancements The following enhancement is included in this z-stream release: 1.9.29.1.1.
Storage migration for cluster versions 4.8 or earlier This release introduces a storage migration that supports a safe cluster update from version 4.8 or earlier to the latest supported release. If you created a cluster at version 4.7 or earlier, your stored objects remain accessible when updating your cluster to the latest supported release. ( OCPBUGS-31445 ) 1.9.29.2. Bug fixes Previously, when multiple domains were configured for an Amazon Virtual Private Cloud (VPC) DHCP option, the hostname could return multiple values. However, the logic did not account for multiple values and crashed when it returned a node name containing a space. With this release, the logic has been updated to use the first returned hostname as the node name and the issue has been resolved. ( OCPBUGS-33847 ) Previously, when enabling virtualHostedStyle with regionEndpoint set in the Image Registry Operator configuration, the image registry ignored the virtual hosted style configuration and failed to start. With this release, the image registry uses a new upstream distribution configuration and the issue has been resolved. ( OCPBUGS-34539 ) Previously, the OperatorHub incorrectly excluded the Amazon Resource Name (ARN) role information for ROSA Hosted Control Plane (HCP) clusters. With this update, the OperatorHub correctly displays ARN information and the issue has been resolved. ( OCPBUGS-34550 ) Previously, when attempting to delete a cluster or BareMetalHost (BMH) resource before installation, the metal3-operator tried to unnecessarily generate a pre-provisioning image. With this release, an exception has been created to prevent the creation of a pre-provisioning image during a BMH deletion and the issue has been resolved. ( OCPBUGS-34682 ) Previously, some text areas were no longer resizable when editing a config map in the Form View of the web console in OpenShift Container Platform 4.15. With this release, those text areas are now resizable. ( OCPBUGS-34703 ) 1.9.29.3. Updating To update an OpenShift Container Platform 4.15 cluster to this latest release, see Updating a cluster by using the CLI . 1.9.30. RHBA-2024:3488 - OpenShift Container Platform 4.15.16 bug fix update Issued: 5 June 2024 OpenShift Container Platform release 4.15.16 is now available. The list of bug fixes that are included in this update is documented in the RHBA-2024:3488 advisory. The RPM packages that are included in this update are provided by the RHBA-2024:3491 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.15.16 --pullspecs 1.9.30.1. Bug fixes Previously, in HAProxy 2.6 deployments on OpenShift Container Platform, shutting down HAproxy could result in a race condition. The main thread (tid=0) would wait for other threads to complete, but some threads would enter an infinite loop, consuming 100% CPU. With this release, the variable controlling the loop's termination is now properly reset, preventing non-main threads from looping indefinitely. This ensures that the thread's poll loop can terminate correctly. ( OCPBUGS-33883 ) Previously, the console Operator health check controller had a missing return statement, which caused the Operator to crash unexpectedly in some cases. With this release, the issue has been fixed. 
( OCPBUGS-33720 ) Previously, the wait-for-ceo command used during the bootstrapping process to verify etcd rollout did not report errors for some failure modes. With this release, those error messages are visible on the bootkube script if the wait-for-ceo command exits in an error case. ( OCPBUGS-33564 ) 1.9.30.2. Updating To update an OpenShift Container Platform 4.15 cluster to this latest release, see Updating a cluster by using the CLI . 1.9.31. RHSA-2024:3327 - OpenShift Container Platform 4.15.15 bug fix and security update Issued: 29 May 2024 OpenShift Container Platform release 4.15.15, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHSA-2024:3327 advisory. The RPM packages that are included in the update are provided by the RHBA-2024:3332 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.15.15 --pullspecs 1.9.31.1. Bug fixes Previously, certain situations caused transfer of an Egress IP address from one node to a different node to fail, and this failure impacted the OVN-Kubernetes network. The network failed to send gratuitous Address Resolution Protocol (ARP) requests to peers to inform them of the new node's medium access control (MAC) address. As a result, peers would temporarily send reply traffic to an old node and this traffic led to failover issues. With this release, the OVN-Kubernetes network correctly sends a gratuitous ARP to peers to inform them of the new Egress IP node MAC address, so that each peer can send reply traffic to the new node without causing failover time issues. ( OCPBUGS-33960 ) Previously, when mirroring Operator catalogs, the oc-mirror CLI plugin rebuilt the catalogs and regenerated the catalog's internal cache according to the imagesetconfig catalog filtering specifications. This operation required the use of the opm binary found within the catalogs. In OpenShift Container Platform 4.15, Operator catalogs include the opm Red Hat Enterprise Linux (RHEL) 9 binary, and this caused the mirroring process to fail when running on RHEL 8 systems. With this release, oc-mirror no longer builds catalogs by default. Instead, catalogs are mirrored directly to their destination registries. ( OCPBUGS-33575 ) Previously, the image registry did not support Amazon Web Services (AWS) region ca-west-1 . With this release, the image registry can now be deployed in this region. ( OCPBUGS-33672 ) Previously, service accounts (SAs) could not be used as OAuth2 clients because there were no tokens associated with the SAs. With this release, the OAuth registry client has been modified to anticipate this case and the issue has been resolved. ( OCPBUGS-33210 ) Previously, the proxy information set in the install-config.yaml file was not applied to the bootstrap process. With this release, the proxy information is applied to the bootstrap Ignition data which is applied to the bootstrap machine and the issue has been resolved. ( OCPBUGS-33205 ) Previously, the information from the imageRegistryOverrides setting was only extracted once on the HyperShift Operator initialization and did not refresh. With this release, the Hypershift Operator retrieves the new ImageContentSourcePolicy files from the management cluster and adds them to the Hypershift Operator and Control Plane Operator in every reconciliation loop. 
( OCPBUGS-33117 ) Previously, the Hypershift Operator was not using the RegistryOverrides mechanism to inspect the image from the internal registry. With this release, the metadata inspector works as expected during the Hypershift Operator reconciliation, and the OverrideImages are properly populated. ( OCPBUGS-32220 ) Previously, attempting to update the VMware vSphere connection configuration for OpenShift Container Platform failed if the configuration included "" characters. With this release, the characters are stored correctly and the issue has been resolved. ( OCPBUGS-31863 ) 1.9.31.2. Known issue Previously, after upgrading to OpenShift Container Platform 4.15.6, attempting to use the oc-mirror CLI plugin within a cluster failed. With this release, there is now a FIPS-compliant version of oc-mirror for RHEL 9 and a version of oc-mirror for RHEL 8 that is not FIPS-compliant. ( OCPBUGS-31609 ) 1.9.31.3. Updating To update an OpenShift Container Platform 4.15 cluster to this latest release, see Updating a cluster by using the CLI . 1.9.32. RHSA-2024:2865 - OpenShift Container Platform 4.15.14 bug fix and security update Issued: 21 May 2024 OpenShift Container Platform release 4.15.14, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHSA-2024:2865 advisory. The RPM packages that are included in the update are provided by the RHBA-2024:2870 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.15.14 --pullspecs 1.9.32.1. Bug fixes Previously, if traffic was forwarded to terminating endpoints that were not functioning, communication problems occurred unless the readiness probes on these endpoints were configured to quickly flag the endpoints as not serving. This occurred because the endpoint selection for services partially implemented KEP-1669 ProxyTerminatingEndpoints for traffic to services inside the OpenShift Container Platform cluster. As a result, this traffic was forwarded to all endpoints that were either ready, such as ready=true , serving=true , terminating=false , or terminating and serving, such as ready=false , serving=true , terminating=true . This caused communication issues when traffic was forwarded to terminating endpoints and the readiness probes on these endpoints were not configured to quickly flag the endpoints as not serving, serving=false , when they were no longer functional. With this release, the endpoint selection logic now fully implements KEP-1669 ProxyTerminatingEndpoints for any given service so that all ready endpoints are selected. If no ready endpoints are found, functional terminating and serving endpoints are used. ( OCPBUGS-27852 ) Previously, if you configured an OpenShift Container Platform cluster with a high number of internal services or user-managed load balancer IP addresses, you experienced a delayed startup time for the OVN-Kubernetes service. This delay occurred when the OVN-Kubernetes service attempted to install iptables rules on a node. With this release, the OVN-Kubernetes service can process a large number of services in a few seconds. Additionally, you can access a new log to view the status of installing iptables rules on a node. ( OCPBUGS-32426 ) Previously, some container processes created by using the exec command persisted even when CRI-O stopped the container. Consequently, lingering processes led to tracking issues, causing process leaks and defunct statuses. With this release, CRI-O tracks the exec calls processed for a container and ensures that the processes created as part of the exec calls are terminated when the container is stopped. ( OCPBUGS-32481 )
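To make the endpoint selection behavior described in the OCPBUGS-27852 fix above easier to observe, the following command is an illustrative sketch, not part of the original advisory, that prints the ready, serving, and terminating conditions reported on the EndpointSlice objects of a service; my-namespace and my-service are placeholders.
USD oc get endpointslices -n my-namespace -l kubernetes.io/service-name=my-service -o jsonpath='{range .items[*].endpoints[*]}{.addresses} ready={.conditions.ready} serving={.conditions.serving} terminating={.conditions.terminating}{"\n"}{end}'
Endpoints reported with ready=false serving=true terminating=true are the terminating-but-serving endpoints that are now used only when no ready endpoints remain.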
Previously, the Topology view in the OpenShift Container Platform web console did not show the visual connector between a virtual machine (VM) node and other non-VM components. With this release, the visual connector shows interaction activity of a component. ( OCPBUGS-32505 ) Previously, a logo in the masthead element of the OpenShift Container Platform web console could grow beyond 60 pixels in height. This caused the masthead to increase in height. With this release, the masthead logo is constrained to a max-height of 60 pixels. ( OCPBUGS-33548 ) Previously, if you used the Form view in the OpenShift Container Platform web console to remove an alternate service from a Route resource, the alternate service remained in the cluster. With this release, if you delete an alternate service in this way, the alternate service is fully removed from the cluster. ( OCPBUGS-33058 ) Previously, OpenShift Container Platform cluster connections to the Microsoft Azure API were delayed because of an issue with the API's codebase. With this release, a timeout schedule is set for any calls to the Azure API, so that an API call that hangs for a period of time is terminated. ( OCPBUGS-33127 ) Previously, a kernel regression that was introduced in OpenShift Container Platform 4.15.0 caused kernel issues, such as nodes crashing and rebooting, in nodes that mounted CephFS storage. With this release, the regression is fixed and the issue no longer occurs. ( OCPBUGS-33250 ) Previously, the VMware vSphere Problem Detector Operator did not have HTTP and HTTPS proxies configured for it. This resulted in invalid cluster configuration error messages because of connection issues between the Operator and the VMware vSphere vCenter server. With this release, the vSphere Problem Detector Operator uses the same HTTP and HTTPS proxies as other OpenShift Container Platform cluster Operators so that the vSphere Problem Detector Operator can connect to the VMware vSphere vCenter. ( OCPBUGS-33466 ) Previously, Alertmanager would send notification emails that contained a backlink to the Thanos Querier web interface, which is an unreachable web service. With this release, monitoring alert notification emails contain a backlink to the Alerts page of the OpenShift Container Platform web console. ( OCPBUGS-33512 ) 1.9.32.2. Updating To update an OpenShift Container Platform 4.15 cluster to this latest release, see Updating a cluster by using the CLI . 1.9.33. RHSA-2024:2773 - OpenShift Container Platform 4.15.13 bug fix and security update Issued: 15 May 2024 OpenShift Container Platform release 4.15.13, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHSA-2024:2773 advisory. The RPM packages that are included in the update are provided by the RHSA-2024:2776 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.15.13 --pullspecs 1.9.33.1.
Bug fixes Previously, the name of the Security Context Constraint (SCC) was incorrect, so there was no functioning built-in cluster role. With this release, the name was changed to hostmount-anyuid and the SCC now has a functioning built-in cluster role. ( OCPBUGS-33277 ) Previously, the Ironic Python Agent (IPA) failed when trying to wipe disks because it expected the wrong byte sector size, which caused the node provisioning to fail. With this release, the IPA checks the disk sector size and node provisioning succeeds. ( OCPBUGS-33133 ) Previously, Static Persistent Volumes in Azure File on Workload Identity clusters could not be configured due to a bug in the driver that caused volume mounts to fail. With this release, the driver has been fixed, and Static Persistent Volumes mount correctly. ( OCPBUGS-33038 ) Previously, during OpenShift Container Platform updates in performance-tuned clusters, resuming a MachineConfigPool resource resulted in additional restarts for nodes in the pool. This was due to the performance profile controller reconciling against outdated machine configurations while the pool was paused. With this update, the controller reconciles against the latest planned machine configurations before the pool resumes, preventing additional node reboots. ( OCPBUGS-32978 ) Previously, the load balancing algorithm did not differentiate between active and inactive services when determining weights, and it employed the random algorithm excessively in environments with many inactive services or environments routing backends with weight 0. This led to increased memory usage and a higher risk of excessive memory consumption. With this release, changes are made to optimize traffic direction towards active services only and prevent unnecessary use of the random algorithm with higher weights, reducing the potential for excessive memory consumption. ( OCPBUGS-32977 ) Previously, if a user created a ContainerRuntimeConfig resource as an extra manifest for a single-node OpenShift Container Platform (SNO) cluster installation, the bootstrap process failed with the error: more than one ContainerRuntimeConfig found that matches MCP labels . With this release, the incorrect processing of ContainerRuntimeConfig resources is fixed, which resolves the issue. ( OCPBUGS-30152 ) Previously, the Helm Plugin index view did not display the same number of charts as the Helm CLI if the chart names varied. With this release, the Helm catalog now looks for charts.openshift.io/name and charts.openshift.io/provider so that all versions are grouped together in a single catalog title. ( OCPBUGS-32716 ) Previously, the description of the hosted control plane CLI flag api-server-address was unclear. With this release, the description has been updated for clarity and completeness. ( OCPBUGS-25858 ) 1.9.33.2. Updating To update an OpenShift Container Platform 4.15 cluster to this latest release, see Updating a cluster by using the CLI . 1.9.34. RHSA-2024:2664 - OpenShift Container Platform 4.15.12 bug fix and security update Issued: 9 May 2024 OpenShift Container Platform release 4.15.12, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHSA-2024:2664 advisory. The RPM packages that are included in the update are provided by the RHSA-2024:2669 advisory. Space precluded documenting all of the container images for this release in the advisory.
You can view the container images in this release by running the following command: USD oc adm release info 4.15.12 --pullspecs 1.9.34.1. Enhancements The following enhancements are included in this z-stream release: 1.9.34.1.1. API version for the ClusterTriggerBinding, TriggerTemplate, and EventListener CRDs upgraded from v1alpha1 to v1beta1 Previously, the API version for the ClusterTriggerBinding , TriggerTemplate , and EventListener CRDs was v1alpha1 . With this release, the API version is upgraded to v1beta1 so that the pipelines plugin supports the latest Pipeline Trigger API version for the ClusterTriggerBinding , TriggerTemplate , and EventListener CRDs. ( OCPBUGS-31445 ) 1.9.34.1.2. PipelineRun list view performance improvement Previously, in the PipelineRun list page, all of the TaskRun objects were fetched and separated based on their PipelineRun name. With this release, TaskRun objects are only fetched for failed and cancelled PipelineRun objects, and a caching mechanism is added to fetch the PipelineRun and TaskRun objects that are associated with the failed and cancelled PipelineRun objects. ( OCPBUGS-31799 ) 1.9.34.1.3. Installer handles the escaping of the % character Previously, if a cluster was installed using a proxy, and the proxy information contained escaped characters in the format %XX , the installation failed. With this release, the installer now handles the escaping of the % character. ( OCPBUGS-32259 ) 1.9.34.1.4. Cluster Fleet Evaluation status information added to the Machine Config Operator Previously, the Machine Config Operator (MCO) did not include the Cluster Fleet Evaluation (CFE) status. With this release, the CFE status information is added to the MCO and available to customers. ( OCPBUGS-32922 ) 1.9.34.1.5. OperatorHub filter renamed from FIPS Mode to Designed for FIPS Previously, OperatorHub included a filter named FIPS Mode . With this release, that filter is named Designed for FIPS . ( OCPBUGS-32933 ) 1.9.34.2. Bug fixes Previously, containers had an incorrect view of the pids limit in their cgroup hierarchy and reported as a random number instead of max . The containers do not have max PIDs and are limited by the pod PID limit, which is set outside of the container's cgroup hierarchy and not visible from within the container. With this release, the issue has been resolved. ( OCPBUGS-28926 ) Previously, for OpenShift Container Platform deployments on Red Hat OpenStack Platform (RHOSP), the MachineSet object did not correctly apply the value for the Port Security parameter. With this release, the MachineSet object applies the port_security_enabled flag as expected. ( OCPBUGS-30857 ) Previously, the installation program erroneously attempted to verify the libvirt network interfaces when an agent-based installation was configured with the openshift-baremetal-install binary. With this release, the agent installation method does not require libvirt and this validation is disabled. ( OCPBUGS-30944 ) Previously, the cpuset-configure.sh script could run before all of the system processes were created. With this release, the script is only triggered to run when CRI-O is initialized and the issue is resolved. ( OCPBUGS-31692 ) Previously, an incorrect dnsPolicy was used for the konnectivity-agent daemon set in the data plane. As a result, when CoreDNS was down, konnectivity-agent pods on the data plane could not resolve the proxy-server-address and could fail the konnectivity-server in the control plane. 
With this release, konnectivity-agent uses the host system DNS service to lookup the proxy-server-address and no longer depends on CoreDNS. ( OCPBUGS-31826 ) Previously, if gathering logs from the bootstrap node failed during the gather bootstrap execution, the virtual machine (VM) serial console logs were not included in the gather output even if they were collected. With this release, serial logs are always included if they are collected. ( OCPBUGS-32264 ) Previously, port 22 was missing from the compute node's security group in AWS SDK installations, therefore connecting to the compute nodes with SSH failed when users used AWS SDK provisioning. With this release, port 22 is added to the compute node's security group and the issue is resolved. ( OCPBUGS-32383 ) Previously, the installation program required the s3:HeadBucket permission for AWS, even though it does not exist. The correct permission for the HeadBucket action is s3:ListBucket . With this release, s3:HeadBucket is removed from the list of required permissions and only s3:ListBucket is required, as expected. ( OCPBUGS-32690 ) Previously, there was an issue with OpenShift Container Platform Ansible upgrades because the IPsec configuration was not idempotent. With this release, changes are made to the OpenShift Container Platform Ansible playbooks, ensuring that all IPsec configurations are idempotent. ( OCPBUGS-33102 ) 1.9.34.3. Updating To update an OpenShift Container Platform 4.15 cluster to this latest release, see Updating a cluster by using the CLI . 1.9.35. RHSA-2024:2068 - OpenShift Container Platform 4.15.11 bug fix and security update Issued: 2 May 2024 OpenShift Container Platform release 4.15.11, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHSA-2024:2068 advisory. The RPM packages that are included in the update are provided by the RHSA-2024:2071 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.15.11 --pullspecs 1.9.35.1. Enhancements The following enhancements are included in this z-stream release: 1.9.35.1.1. Increased the number of supported nodes on the Topology view Previously, the OpenShift Container Platform web console Topology view could only display a maximum of 100 nodes. If you attempted to view more than 100 nodes, the web console would output a Loading is taking longer than expected. error message. With this release, the MAX_NODES_LIMIT parameter for the web console is set to 200 , so that the web console can display a maximum of 200 nodes. ( OCPBUGS-32340 ) 1.9.35.1.2. Added gcr and acr RHEL credential providers OpenShift Container Platform 4.15 includes gcr and acr Red Hat Enterprise Linux (RHEL) credential providers so that future upgrades to later versions of OpenShift Container Platform that require RHEL compute nodes deployed on a cluster do not result in a failed installation. ( OCPBUGS-30970 ) 1.9.35.1.3. Added permission for reading the featureGates resource to RBAC rule OpenShift Container Platform 4.15 adds a permission to the role-based access control (RBAC) rule so that the DNS Operator can read the featureGates resource. Without this permission, an upgrade operation to a later version of OpenShift Container Platform could fail. ( OCPBUGS-32093 ) 1.9.35.2. 
Bug fixes The installation of OpenShift Container Platform failed when a performance profile was located in the extra manifests folder and targeted master or worker node roles. This was caused by the internal installation that processes the performance profile before the default master or worker node roles were created. With this release, the internal installation processes the performance profile after the node roles are created so that this issue no longer exists. ( OCPBUGS-27948 ) Previously, the image registry did not support Amazon Web Services (AWS) region ca-west-1 . With this release, the image registry can now be deployed in this region. ( OCPBUGS-31641 ) Previously, a cluster upgraded to OpenShift Container Platform 4.14 or later experienced router pods unexpectedly closing keep-alive connections that caused traffic degradation issues for Apache HTTP clients. This issue was caused by router pods using a version of an HAProxy router that closed idle connections after the HAProxy router was restarted. With this release, the pods use a version of an HAProxy router that includes an idle-close-on-response option. The HAProxy router now waits for the last request and response transaction before the idle connection is closed. ( OCPBUGS-32435 ) Previously, a Redfish virtual media Hewlett Packard Enterprise (HPE) integrated Lights Out (iLO) 5 bare-metal machine's compression was forcibly disabled to work around other unrelated issues in different hardware models. This caused the FirmwareSchema resource to be missing from each iLO 5 bare-metal machine. Each machine needs compression to fetch message registries from their Redfish Baseboard Management Controller (BMC) endpoints. With this release, each iLO 5 bare-metal machine that needs the FirmwareSchema resource does not have compression forcibly disabled. ( OCPBUGS-31686 ) Previously, nodes of paused MachineConfigPools might have their pause status dropped when performing a cluster update. With this release, nodes of paused MachineConfigPools correctly stay paused when performing a cluster update. ( OCPBUGS-31839 ) Previously, newer versions of Redfish used Manager resources to deprecate the Uniform Resource Identifier (URI) for the RedFish Virtual Media API. This caused any hardware that used the newer Redfish URI for Virtual Media to not be provisioned. With this release, the Ironic API identifies the correct Redfish URI to deploy for the RedFish Virtual Media API so that hardware relying on either the deprecated or the newer URI can be provisioned. ( OCPBUGS-31830 ) Previously, the Cloud Credential Operator (CCO) checked for a non-existent s3:HeadBucket permission during the validation checks in mint mode, which resulted in a failed cluster installation. With this release, CCO removes the validation check for this non-existing permission so that validation checks pass in mint mode and the cluster installation does not fail. ( OCPBUGS-31924 ) Previously, a new Operator Lifecycle Manager (OLM) Operator that upgraded to OpenShift Container Platform 4.15.3 resulted in failure because important resources were not injected into the upgrade operation. With this release, these resources are now cached so that newer OLM Operator upgrades can succeed. ( OCPBUGS-32311 ) Previously, the Red Hat OpenShift Container Platform web console did not require the Creator field as a mandatory field. API changes specified an empty value for this field, but a user profile could still create silent alerts. 
With this release, the API marks the Creator field as a mandatory field for a user profile that needs to create silent alerts. ( OCPBUGS-32097 ) Previously, in hosted control planes for OpenShift Container Platform, when you created the custom resource definition (CRD) for ImageDigestMirrorSet and ImageContentSourcePolicy objects at the same time in a disconnected environment, the HyperShift Operator created the object only for the ImageDigestMirrorSet CRD, ignoring the ImageContentSourcePolicy CRD. With this release, the HyperShift Operator can create objects at the same time for the ImageDigestMirrorSet and ImageContentSourcePolicy CRDs. ( OCPBUGS-32164 ) Previously, IPv6 networking services that operated in Red Hat OpenStack Platform (RHOSP) environments could not share an IPv6 load balancer that was configured with multiple services because of an issue that mistakenly marks an IPv6 load balancer as Internal to the cluster. With this release, IPv6 load balancers are no longer marked as Internal so that an IPv6 load balancer with multiple services can be shared among IPv6 networking services. ( OCPBUGS-32246 ) Previously, the control plane machine sets (CPMS) did not allow template names for vSphere in a CPMS definition. With this release, a CPMS Operator fix allows template names for vSphere in the CPMS definition so that this issue no longer persists. ( OCPBUGS-32357 ) Previously, the control plane machine sets (CPMS) Operator was not correctly handling older OpenShift Container Platform version configurations that had a vSphere definition in the infrastructure custom resource. This would cause cluster upgrade operations to fail and the CPMS Operator to remain in a CrashLoopBackOff state. With this release, the cluster upgrade operations do not fail because of this issue. ( OCPBUGS-32414 ) Previously, the image registry's Azure path fix job incorrectly required the presence of AZURE_CLIENT_ID and TENANT_CLIENT_ID parameters to function. This caused a valid configuration to throw an error message. With this release, a check is added to the Identity and Access Management (IAM) service account key to validate whether these parameters are needed, so that a cluster upgrade operation no longer fails. ( OCPBUGS-32396 ) Previously, a build pod that failed because of a memory limitation would have its pod status changed to Error instead of OOMKilled . This caused these pods to not be reported correctly. The issue would only occur on cgroup v2 nodes. With this release, a pod with a status of OOMKilled is correctly detected and reported. ( OCPBUGS-32498 ) 1.9.35.3. Updating To update an OpenShift Container Platform 4.15 cluster to this latest release, see Updating a cluster by using the CLI . 1.9.36. RHSA-2024:1887 - OpenShift Container Platform 4.15.10 bug fix and security update Issued: 26 April 2024 OpenShift Container Platform release 4.15.10, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHSA-2024:1887 advisory. The RPM packages that are included in the update are provided by the RHSA-2024:1892 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.15.10 --pullspecs 1.9.36.1. 
Bug fixes Previously, clusters created before OpenShift Container Platform 4.7 had the signer keys for the api-int endpoint updated unexpectedly when users upgraded to OpenShift Container Platform 4.15, because the installer deleted the SecretTypeTLS secret and then recreated the secret with the kubernetes.io/tls type. With this release, the issue is resolved by the installer changing the secret type without deleting the secret. ( OCPBUGS-31807 ) Previously, when users imported image stream tags, ImageContentSourcePolicy (ICSP) was not allowed to co-exist with ImageDigestMirrorSet (IDMS) and ImageTagMirrorSet (ITMS). OpenShift Container Platform ignored any IDMS/ITMS created by the user and favored ICSP. With this release, they are allowed to co-exist because importing image stream tags now respects IDMS/ITMS when ICSP is also present. ( OCPBUGS-31469 ) Previously, Terraform would create the compute server group with the policy set for the control plane. As a consequence, the 'serverGroupPolicy' property of the install-config.yaml file was ignored for the compute server group. With this release, the server group policy set in the install-config.yaml file for the compute MachinePool is correctly applied at install-time in the Terraform flow. ( OCPBUGS-31335 ) Previously, projects that specified a non-intersecting openshift.io/node-selector project selector with pods .spec.nodeName could cause runaway Pod creation in Deployments. With this release, pods with non-intersecting .spec.nodeName are not admitted by the API server, which resolves the issue. ( OCPBUGS-29922 ) Previously, a remote attacker with basic login credentials could check the pod manifest to discover a repository pull secret. With this release, the vulnerability has been fixed. ( OCPBUGS-28769 ) 1.9.36.2. Updating To update an OpenShift Container Platform 4.15 cluster to this latest release, see Updating a cluster by using the CLI . 1.9.37. RHSA-2024:1770 - OpenShift Container Platform 4.15.9 bug fix and security update Issued: 16 April 2024 OpenShift Container Platform release 4.15.9, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHSA-2024:1770 advisory. The RPM packages that are included in the update are provided by the RHBA-2024:1773 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.15.9 --pullspecs 1.9.37.1. Enhancements The following enhancements are included in this z-stream release: 1.9.37.1.1. Number of configured control plane replicas validated Previously, the number of control plane replicas could be set to an invalid value, such as 2. With this release, a validation is added to prevent any misconfiguration of the control plane replicas at the ISO generation time. ( OCPBUGS-30822 ) 1.9.37.2. Bug fixes Previously, saving kdump logs to an SSH target was failing in Open Virtual Network (OVN) deployments. The kdump crash logs were not written to the SSH remote when OVN was configured. With this release, OVS configurations are no longer run before kdump. ( OCPBUGS-30884 ) Previously, the coreos-installer CLI tool did not correctly modify, reset, or show the kernel arguments for an ISO generated by the openshift-install agent create image command. 
With this release, the coreos-installer iso kargs modify <iso> , coreos-installer iso kargs reset <iso> , and coreos-installer iso kargs show <iso> commands all work as expected. ( OCPBUGS-30922 ) Previously, the services secondary IP family test was failing with dual-stack clusters. With this release, the 30000:32767 traffic range is enabled and the issue has been resolved. ( OCPBUGS-31284 ) 1.9.37.3. Updating To update an OpenShift Container Platform 4.15 cluster to this latest release, see Updating a cluster by using the CLI . 1.9.38. RHSA-2024:1668 - OpenShift Container Platform 4.15.8 bug fix and security update Issued: 8 April 2024 OpenShift Container Platform release 4.15.8, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHSA-2024:1668 advisory. There are no RPM packages for this update. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.15.8 --pullspecs 1.9.38.1. Updating To update an OpenShift Container Platform 4.15 cluster to this latest release, see Updating a cluster by using the CLI . 1.9.39. RHSA-2024:1559 - OpenShift Container Platform 4.15.6 bug fix and security update Issued: 2 April 2024 OpenShift Container Platform release 4.15.6, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHSA-2024:1559 advisory. The RPM packages that are included in the update are provided by the RHSA-2024:1563 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.15.6 --pullspecs 1.9.39.1. Known issue There is a known issue in this release which causes the oc-mirror binary to fail on Red Hat Enterprise Linux (RHEL) 8 systems. Workaround: Use the Red Hat OpenShift Container Platform 4.15.5 oc-mirror binary or extract oc-mirror.rhel8 . ( OCPBUGS-31609 ) 1.9.39.2. Updating To update an OpenShift Container Platform 4.15 cluster to this latest release, see Updating a cluster by using the CLI . 1.9.40. RHSA-2024:1449 - OpenShift Container Platform 4.15.5 bug fix and security update Issued: 27 March 2024 OpenShift Container Platform release 4.15.5, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHSA-2024:1449 advisory. The RPM packages that are included in the update are provided by the RHBA-2024:1452 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.15.5 --pullspecs 1.9.40.1. Bug fixes Previously, the OpenShift Installer could fail to retrieve instance type information from Microsoft Azure in the allotted time, even when the type existed when verified with the Azure CLI. With this release, the timeout duration has increased to wait for an Azure response, and the error message includes the correct reason for the failure. ( OCPBUGS-29964 ) Previously, when creating clusters through OpenShift Cluster Manager (OCM) using the Hive provisioner, which uses OpenShift Installer, the installer failed to delete AWS IAM instance profiles after deleting the cluster. This issue led to an accumulation of instance profiles. 
With this release, the installer tags the instance profiles and deletes the appropriately tagged profiles. ( OCPBUGS-18986 ) 1.9.40.2. Updating To update an OpenShift Container Platform 4.15 cluster to this latest release, see Updating a cluster by using the CLI . 1.9.41. RHSA-2024:1255 - OpenShift Container Platform 4.15.3 bug fix and security update Issued: 19 March 2024 OpenShift Container Platform release 4.15.3, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHSA-2024:1255 advisory. The RPM packages that are included in the update are provided by the RHBA-2024:1258 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.15.3 --pullspecs 1.9.41.1. Bug fixes Previously, if the root credentials were removed from a Google Cloud Platform (GCP) cluster that was in mint mode, the Cloud Credential Operator (CCO) would go into a degraded state after approximately 1 hour. This issue means that CCO could not manage the credentials root secret for a component. With this update, mint mode supports custom roles, so that removing root credentials from a GCP cluster does not cause the CCO to go into a degraded state. ( OCPBUGS-30412 ) 1.9.41.2. Updating To update an OpenShift Container Platform 4.15 cluster to this latest release, see Updating a cluster by using the CLI . 1.9.42. RHSA-2024:1210 - OpenShift Container Platform 4.15.2 bug fix and security update Issued: 13 March 2024 OpenShift Container Platform release 4.15.2, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHSA-2024:1210 advisory. The RPM packages that are included in the update are provided by the RHBA-2024:1213 advisory. Space precluded documenting all of the container images for this release in the advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.15.2 --pullspecs 1.9.42.1. Known issues Providing a performance profile as an extra manifest at Day 0 did not work in OpenShift Container Platform 4.15.0, but it is now possible in 4.15.2 with the following limitation: The installation of OpenShift Container Platform might fail when a performance profile is present in the extra manifests folder and targets the primary or worker pools. This is caused by the internal installation ordering that processes the performance profile before the default primary and worker MachineConfigPools are created. It is possible to workaround this issue by including a copy of the stock primary or worker MachineConfigPools in the extra manifests folder. ( OCPBUGS-27948 , OCPBUGS-29752 ) 1.9.42.2. Bug fixes Previously, when updating to OpenShift Container Platform 4.15, CatalogSource objects never refreshed, which caused the optional Operator catalogs to fail to update. With this release, the image pull policy is changed to Always , which enables the optional Operator catalogs to update correctly. ( OCPBUGS-30193 ) Previously, the nodeStatusReportFrequency setting was linked to the nodeStatusUpdateFrequency setting. With this release, the nodeStatusReportFrequency setting is set to 5 minutes. ( OCPBUGS-29797 ) Previously, under certain conditions, the installer would fail with the error message unexpected end of JSON input . 
With this release, the error message is clarified and suggests users set the serviceAccount field in the install-config.yaml file to fix the issue. ( OCPBUGS-29495 ) Previously, the oauthMetadata property provided in the HostedCluster object was not honored. With this release, the oauthMetadata property is honored by the HostedCluster object. ( OCPBUGS-29025 ) 1.9.42.3. Updating To update an OpenShift Container Platform 4.15 cluster to this latest release, see Updating a cluster by using the CLI . | [
"olm.og.<operator_group_name>.<admin_edit_or_view>-<hash_value>",
"Bundle unpacking failed. Reason: DeadlineExceeded, Message: Job was active longer than specified deadline",
"cd ~/clusterconfigs/openshift vim openshift-worker-0.yaml",
"apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: annotations: bmac.agent-install.openshift.io/installer-args: '[\"--append-karg\", \"ip=<static_ip>::<gateway>:<netmask>:<hostname_1>:<interface>:none\", \"--save-partindex\", \"1\", \"-n\"]' 1 2 3 4 5 inspect.metal3.io: disabled bmac.agent-install.openshift.io/hostname: <fqdn> 6 bmac.agent-install.openshift.io/role: <role> 7 generation: 1 name: openshift-worker-0 namespace: mynamespace spec: automatedCleaningMode: disabled bmc: address: idrac-virtualmedia://<bmc_ip>/redfish/v1/Systems/System.Embedded.1 8 credentialsName: bmc-secret-openshift-worker-0 disableCertificateVerification: true bootMACAddress: 94:6D:AE:AB:EE:E8 bootMode: \"UEFI\" rootDeviceHints: deviceName: /dev/sda",
"curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{infra_env_id}/hosts/USD{host_id}/installer-args -X PATCH -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d ' { \"args\": [ \"--append-karg\", \"ip=<static_ip>::<gateway>:<netmask>:<hostname_1>:<interface>:none\", 1 2 3 4 5 \"--save-partindex\", \"1\", \"-n\" ] } ' | jq",
"oc adm release info 4.15.47 --pullspecs",
"oc adm release info 4.15.46 --pullspecs",
"oc adm release info 4.15.45 --pullspecs",
"oc adm release info 4.15.44 --pullspecs",
"oc adm release info 4.15.43 --pullspecs",
"oc adm release info 4.15.42 --pullspecs",
"oc adm release info 4.15.41 --pullspecs",
"oc adm release info 4.15.39 --pullspecs",
"oc adm release info 4.15.38 --pullspecs",
"oc adm release info 4.15.37 --pullspecs",
"apiVersion: v1 data: enable-nodeip-debug: \"true\" kind: ConfigMap metadata: name: logging namespace: openshift-vsphere-infra",
"oc adm release info 4.15.36 --pullspecs",
"oc adm release info 4.15.35 --pullspecs",
"oc adm release info 4.15.34 --pullspecs",
"oc adm release info 4.15.33 --pullspecs",
"oc adm release info 4.15.32 --pullspecs",
"oc adm release info 4.15.31 --pullspecs",
"oc adm release info 4.15.30 --pullspecs",
"oc adm release info 4.15.29 --pullspecs",
"oc adm release info 4.15.28 --pullspecs",
"oc adm release info 4.15.27 --pullspecs",
"oc adm release info 4.15.25 --pullspecs",
"oc adm release info 4.15.24 --pullspecs",
"oc adm release info 4.15.23 --pullspecs",
"oc adm release info 4.15.22 --pullspecs",
"oc adm release info 4.15.21 --pullspecs",
"oc adm release info 4.15.20 --pullspecs",
"oc adm release info 4.15.19 --pullspecs",
"oc adm release info 4.15.18 --pullspecs",
"oc -n openshift-config patch cm admin-acks --patch '{\"data\":{\"ack-4.15-route-config-not-supported-in-4.16\":\"true\"}}' --type=merge",
"oc adm release info 4.15.17 --pullspecs",
"oc adm release info 4.15.16 --pullspecs",
"oc adm release info 4.15.15 --pullspecs",
"oc adm release info 4.15.14 --pullspecs",
"oc adm release info 4.15.13 --pullspecs",
"oc adm release info 4.15.12 --pullspecs",
"oc adm release info 4.15.11 --pullspecs",
"oc adm release info 4.15.10 --pullspecs",
"oc adm release info 4.15.9 --pullspecs",
"oc adm release info 4.15.8 --pullspecs",
"oc adm release info 4.15.6 --pullspecs",
"oc adm release info 4.15.5 --pullspecs",
"oc adm release info 4.15.3 --pullspecs",
"oc adm release info 4.15.2 --pullspecs"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/release_notes/ocp-4-15-release-notes |
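Each Updating subsection above points to the same CLI-based update flow. As a rough sketch only, assuming the cluster is already subscribed to a 4.15 update channel and using 4.15.13 purely as an example target version, the z-stream update typically looks like this:
# Check the current version and the updates the cluster can see
oc get clusterversion
oc adm upgrade
# Request the update to a specific z-stream release (example target)
oc adm upgrade --to=4.15.13
# Watch the rollout until the new version reports as completed
oc get clusterversion -w
See the linked procedure, Updating a cluster by using the CLI, for the full prerequisites and verification steps.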
probe::stap.pass2.end | probe::stap.pass2.end Name probe::stap.pass2.end - Finished stap pass2 (elaboration) Synopsis stap.pass2.end Values session the systemtap_session variable s Description pass2.end fires just before the jump to cleanup if s.last_pass = 2 | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-stap-pass2-end |
15.6. Additional Resources | 15.6. Additional Resources For more information about vsftpd , refer to the following resources. 15.6.1. Installed Documentation The /usr/share/doc/vsftpd- <version-number> / directory - Replace <version-number> with the installed version of the vsftpd package. This directory contains a README with basic information about the software. The TUNING file contains basic performance tuning tips and the SECURITY/ directory contains information about the security model employed by vsftpd . vsftpd related man pages - There are a number of man pages for the daemon and configuration files. The following lists some of the more important man pages. Server Applications man vsftpd - Describes available command line options for vsftpd . Configuration Files man vsftpd.conf - Contains a detailed list of options available within the configuration file for vsftpd . man 5 hosts_access - Describes the format and options available within the TCP wrappers configuration files: hosts.allow and hosts.deny . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s1-ftp-resources |
Preface | Preface | null | https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide_common_criteria_edition/pr01 |
28.4. Modifying Password Policy Attributes | 28.4. Modifying Password Policy Attributes Important When you modify a password policy, the new rules apply to new passwords only. The changes are not applied retroactively to existing passwords. For the change to take effect, users must change their existing passwords, or the administrator must reset the passwords of other users. See Section 22.1.1, "Changing and Resetting User Passwords" . Note For recommendations on secure user passwords, see Password Security in the Security Guide . To modify a password policy using: the web UI, see the section called "Web UI: Modifying a Password Policy" the command line, see the section called "Command Line: Modifying a Password Policy" Note that setting a password policy attribute to 0 means no attribute restriction. For example, if you set maximum lifetime to 0 , user passwords never expire. Web UI: Modifying a Password Policy Select Policy Password Policies . Click the policy you want to change. Update the required attributes. For details on the available attributes, see Section 28.2.1, "Supported Password Policy Attributes" . Click Save to confirm the changes. Command Line: Modifying a Password Policy Use the ipa pwpolicy-mod command to change the policy's attributes. For example, to update the global password policy and set the minimum password length to 10 : To update a group policy, add the group name to ipa pwpolicy-mod . For example: Optional. Use the ipa pwpolicy-show command to display the new policy settings. To display the global policy: To display a group policy, add the group name to ipa pwpolicy-show : | [
"ipa pwpolicy-mod --minlength=10",
"ipa pwpolicy-mod group_name --minlength=10",
"ipa pwpolicy-show",
"ipa pwpolicy-show group_name"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/pwd-policies-mod |
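Following the same pattern as the minimum-length examples above, a short sketch of the "value 0 means no restriction" behavior mentioned in the section; the group name is the same placeholder used there:
# Let passwords governed by the global policy never expire (0 = no restriction)
ipa pwpolicy-mod --maxlife=0
# Apply the same change to a single group policy instead
ipa pwpolicy-mod group_name --maxlife=0
# Confirm the resulting settings
ipa pwpolicy-show
As stated above, the new rules apply only to passwords changed or reset after the policy update.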
Appendix A. Component Versions | Appendix A. Component Versions This appendix provides a list of key components and their versions in the Red Hat Enterprise Linux 7.5 release. Table A.1. Component Versions Component Version kernel 3.10.0-862 kernel-alt 4.14.0-49 QLogic qla2xxx driver 9.00.00.00.07.5-k1 QLogic qla4xxx driver 5.04.00.00.07.02-k0 Emulex lpfc driver 0:11.4.0.4 iSCSI initiator utils ( iscsi-initiator-utils ) 6.2.0.874-7 DM-Multipath ( device-mapper-multipath ) 0.4.9-119 LVM ( lvm2 ) 2.02.177-4 qemu-kvm [a] 1.5.3-156 qemu-kvm-ma [b] 2.10.0-21 [a] The qemu-kvm packages provide KVM virtualization on AMD64 and Intel 64 systems. [b] The qemu-kvm-ma packages provide KVM virtualization on IBM POWER8, IBM POWER9, and IBM Z. Note that KVM virtualization on IBM POWER9 and IBM Z also requires using the kernel-alt packages. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.5_release_notes/component_versions |
Chapter 4. Flavors for instances | Chapter 4. Flavors for instances An instance flavor is a resource template that specifies the virtual hardware profile for the instance. You select a flavor when you launch instances to specify the virtual resources to allocate to the instance. Flavors define the number of virtual CPUs, the amount of RAM, the size of the root disk, and the size of the virtual storage, including secondary ephemeral storage and swap disk, to create the instance with. You select the flavor from the set of available flavors defined for your project within the cloud. | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/creating_and_managing_instances/con_flavors-for-instances_osp |
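A brief CLI sketch of the flavor selection described above; the flavor, image, network, and instance names here are illustrative placeholders rather than values defined by this guide:
# List the flavors available to the project
openstack flavor list
# Launch an instance with a chosen flavor
openstack server create --flavor m1.small --image rhel-9 --network private my-instance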
Chapter 3. Creating and building an application using the web console | Chapter 3. Creating and building an application using the web console 3.1. Before you begin Review Accessing the web console . You must be able to access a running instance of OpenShift Container Platform. If you do not have access, contact your cluster administrator. 3.2. Logging in to the web console You can log in to the OpenShift Container Platform web console to access and manage your cluster. Prerequisites You must have access to an OpenShift Container Platform cluster. Procedure Log in to the OpenShift Container Platform web console using your login credentials. You are redirected to the Projects page. For non-administrative users, the default view is the Developer perspective. For cluster administrators, the default view is the Administrator perspective. If you do not have cluster-admin privileges, you will not see the Administrator perspective in your web console. The web console provides two perspectives: the Administrator perspective and Developer perspective. The Developer perspective provides workflows specific to the developer use cases. Figure 3.1. Perspective switcher Use the perspective switcher to switch to the Developer perspective. The Topology view with options to create an application is displayed. 3.3. Creating a new project A project enables a community of users to organize and manage their content in isolation. Projects are OpenShift Container Platform extensions to Kubernetes namespaces. Projects have additional features that enable user self-provisioning. Users must receive access to projects from administrators. Cluster administrators can allow developers to create their own projects. In most cases, users automatically have access to their own projects. Each project has its own set of objects, policies, constraints, and service accounts. Prerequisites You are logged in to the OpenShift Container Platform web console. You are in the Developer perspective. You have the appropriate roles and permissions in a project to create applications and other workloads in OpenShift Container Platform. Procedure In the +Add view, select Project Create Project . In the Name field, enter user-getting-started . Optional: In the Display name field, enter Getting Started with OpenShift . Note Display name and Description fields are optional. Click Create . You have created your first project on OpenShift Container Platform. Additional resources Default cluster roles Viewing a project using the web console Providing access permissions to your project using the Developer perspective Deleting a project using the web console 3.4. Granting view permissions OpenShift Container Platform automatically creates a few special service accounts in every project. The default service account takes responsibility for running the pods. OpenShift Container Platform uses and injects this service account into every pod that launches. The following procedure creates a RoleBinding object for the default ServiceAccount object. The service account communicates with the OpenShift Container Platform API to learn about pods, services, and resources within the project. Prerequisites You are logged in to the OpenShift Container Platform web console. You have a deployed image. You are in the Administrator perspective. Procedure Navigate to User Management and then click RoleBindings . Click Create binding . Select Namespace role binding (RoleBinding) . In the Name field, enter sa-user-account . 
In the Namespace field, search for and select user-getting-started . In the Role name field, search for view and select view . In the Subject field, select ServiceAccount . In the Subject namespace field, search for and select user-getting-started . In the Subject name field, enter default . Click Create . Additional resources Understanding authentication RBAC overview 3.5. Deploying your first image The simplest way to deploy an application in OpenShift Container Platform is to run an existing container image. The following procedure deploys a front end component of an application called national-parks-app . The web application displays an interactive map. The map displays the location of major national parks across the world. Prerequisites You are logged in to the OpenShift Container Platform web console. You are in the Developer perspective. You have the appropriate roles and permissions in a project to create applications and other workloads in OpenShift Container Platform. Procedure From the +Add view in the Developer perspective, click Container images to open a dialog. In the Image Name field, enter the following: quay.io/openshiftroadshow/parksmap:latest Ensure that you have the current values for the following: Application: national-parks-app Name: parksmap Select Deployment as the Resource . Select Create route to the application . In the Advanced Options section, click Labels and add labels to better identify this deployment later. Labels help identify and filter components in the web console and in the command line. Add the following labels: app=national-parks-app component=parksmap role=frontend Click Create . You are redirected to the Topology page where you can see the parksmap deployment in the national-parks-app application. Additional resources Creating applications using the Developer perspective Viewing a project using the web console Viewing the topology of your application Deleting a project using the web console 3.5.1. Examining the pod OpenShift Container Platform leverages the Kubernetes concept of a pod, which is one or more containers deployed together on one host, and the smallest compute unit that can be defined, deployed, and managed. Pods are the rough equivalent of a machine instance, physical or virtual, to a container. The Overview panel enables you to access many features of the parksmap deployment. The Details and Resources tabs enable you to scale application pods, check build status, services, and routes. Prerequisites You are logged in to the OpenShift Container Platform web console. You are in the Developer perspective. You have a deployed image. Procedure Click D parksmap in the Topology view to open the Overview panel. Figure 3.2. Parksmap deployment The Overview panel includes tabs for Details , Resources , and Observe . The Details tab might be displayed by default. Table 3.1. Overview panel tab definitions Tab Defintion Details Enables you to scale your application and view pod configuration such as labels, annotations, and the status of the application. Resources Displays the resources that are associated with the deployment. Pods are the basic units of OpenShift Container Platform applications. You can see how many pods are being used, what their status is, and you can view the logs. Services that are created for your pod and assigned ports are listed under the Services heading. Routes enable external access to the pods and a URL is used to access them. Observe View various Events and Metrics information as it relates to your pod. 
Additional resources Interacting with applications and components Scaling application pods and checking builds and routes Labels and annotations used for the Topology view 3.5.2. Scaling the application In Kubernetes, a Deployment object defines how an application deploys. In most cases, users use Pod , Service , ReplicaSets , and Deployment resources together. In most cases, OpenShift Container Platform creates the resources for you. When you deploy the national-parks-app image, a deployment resource is created. In this example, only one Pod is deployed. The following procedure scales the national-parks-image to use two instances. Prerequisites You are logged in to the OpenShift Container Platform web console. You are in the Developer perspective. You have a deployed image. Procedure In the Topology view, click the national-parks-app application. Click the Details tab. Use the up arrow to scale the pod to two instances. Figure 3.3. Scaling application Note Application scaling can happen quickly because OpenShift Container Platform is launching a new instance of an existing image. Use the down arrow to scale the pod down to one instance. Additional resources Recommended practices for scaling the cluster Understanding horizontal pod autoscalers About the Vertical Pod Autoscaler Operator 3.6. Deploying a Python application The following procedure deploys a back-end service for the parksmap application. The Python application performs 2D geo-spatial queries against a MongoDB database to locate and return map coordinates of all national parks in the world. The deployed back-end service is named nationalparks . Prerequisites You are logged in to the OpenShift Container Platform web console. You are in the Developer perspective. You have a deployed image. Procedure From the +Add view in the Developer perspective, click Import from Git to open a dialog. Enter the following URL in the Git Repo URL field: https://github.com/openshift-roadshow/nationalparks-py.git A builder image is automatically detected. Note If the detected builder image is Dockerfile, select Edit Import Strategy . Select Builder Image and then click Python . Scroll to the General section. Ensure that you have the current values for the following: Application: national-parks-app Name: nationalparks Select Deployment as the Resource . Select Create route to the application . In the Advanced Options section, click Labels and add labels to better identify this deployment later. Labels help identify and filter components in the web console and in the command line. Add the following labels: app=national-parks-app component=nationalparks role=backend type=parksmap-backend Click Create . From the Topology view, select the nationalparks application. Note Click the Resources tab. In the Builds section, you can see your build running. Additional resources Adding services to your application Importing a codebase from Git to create an application Viewing the topology of your application Providing access permissions to your project using the Developer perspective Deleting a project using the web console 3.7. Connecting to a database Deploy and connect a MongoDB database where the national-parks-app application stores location information. Once you mark the national-parks-app application as a backend for the map visualization tool, the parksmap deployment uses the OpenShift Container Platform discovery mechanism to display the map automatically. Prerequisites You are logged in to the OpenShift Container Platform web console. 
You are in the Developer perspective. You have a deployed image. Procedure From the +Add view in the Developer perspective, click Container images to open a dialog. In the Image Name field, enter quay.io/centos7/mongodb-36-centos7 . In the Runtime icon field, search for mongodb . Scroll down to the General section. Ensure that you have the current values for the following: Application: national-parks-app Name: mongodb-nationalparks Select Deployment as the Resource . Unselect the checkbox to Create route to the application . In the Advanced Options section, click Deployment to add environment variables to add the following environment variables: Table 3.2. Environment variable names and values Name Value MONGODB_USER mongodb MONGODB_PASSWORD mongodb MONGODB_DATABASE mongodb MONGODB_ADMIN_PASSWORD mongodb Click Create . Additional resources Adding services to your application Viewing a project using the web console Viewing the topology of your application Providing access permissions to your project using the Developer perspective Deleting a project using the web console 3.7.1. Creating a secret The Secret object provides a mechanism to hold sensitive information such as passwords, OpenShift Container Platform client configuration files, private source repository credentials, and so on. Secrets decouple sensitive content from the pods. You can mount secrets into containers using a volume plugin or the system can use secrets to perform actions on behalf of a pod. The following procedure adds the secret nationalparks-mongodb-parameters and mounts it to the nationalparks workload. Prerequisites You are logged in to the OpenShift Container Platform web console. You are in the Developer perspective. You have a deployed image. Procedure From the Developer perspective, navigate to Secrets on the left hand navigation and click Secrets . Click Create Key/value secret . In the Secret name field, enter nationalparks-mongodb-parameters . Enter the following values for Key and Value : Table 3.3. Secret keys and values Key Value MONGODB_USER mongodb DATABASE_SERVICE_NAME mongodb-nationalparks MONGODB_PASSWORD mongodb MONGODB_DATABASE mongodb MONGODB_ADMIN_PASSWORD mongodb Click Create . Click Add Secret to workload . From the drop down menu, select nationalparks as the workload to add. Click Save . This change in configuration triggers a new rollout of the nationalparks deployment with the environment variables properly injected. Additional resources Understanding secrets 3.7.2. Loading data and displaying the national parks map You deployed the parksmap and nationalparks applications and then deployed the mongodb-nationalparks database. However, no data has been loaded into the database. Before loading the data, add the proper labels to the mongodb-nationalparks and nationalparks deployment. Prerequisites You are logged in to the OpenShift Container Platform web console. You are in the Developer perspective. You have a deployed image. Procedure From the Topology view, navigate to nationalparks deployment and click Resources and retrieve your route information. Copy and paste the URL into your web browser and add the following at the end of the URL: /ws/data/load Example output Items inserted in database: 2893 From the Topology view, navigate to parksmap deployment and click Resources and retrieve your route information. Copy and paste the URL into your web browser to view your national parks across the world map. Figure 3.4. 
National parks across the world Additional resources Providing access permissions to your project using the Developer perspective Labels and annotations used for the Topology view | [
"/ws/data/load",
"Items inserted in database: 2893"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/getting_started/openshift-web-console |
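For readers who prefer the CLI, a rough equivalent of the "Creating a secret" steps above is sketched here; it assumes the user-getting-started project and the nationalparks deployment created earlier in this walkthrough:
# Create the same key/value secret from the command line
oc create secret generic nationalparks-mongodb-parameters -n user-getting-started \
  --from-literal=MONGODB_USER=mongodb \
  --from-literal=DATABASE_SERVICE_NAME=mongodb-nationalparks \
  --from-literal=MONGODB_PASSWORD=mongodb \
  --from-literal=MONGODB_DATABASE=mongodb \
  --from-literal=MONGODB_ADMIN_PASSWORD=mongodb
# Inject the secret into the nationalparks workload, which triggers a new rollout
oc set env deployment/nationalparks -n user-getting-started \
  --from=secret/nationalparks-mongodb-parameters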
Chapter 3. Red Hat build of OpenJDK 8.0.352 release notes | Chapter 3. Red Hat build of OpenJDK 8.0.352 release notes The latest Red Hat build of OpenJDK 8 release might include new features. Additionally, the latest release might enhance, deprecate, or remove features that originated from Red Hat build of OpenJDK 8 releases. Note For all the other changes and security fixes, see OpenJDK 8u352 Released . Red Hat build of OpenJDK new features and enhancements Review the following release notes to understand new features and feature enhancements that the Red Hat build of OpenJDK 8.0.352 release provides: Reference object changes and configurations From Red Hat build of OpenJDK 8.0.352 onward, you can no longer clone Reference objects. If you attempt to clone a reference object, the java.lang.ref.Reference::clone method throws a CloneNotSupportedException message. If you want to copy an existing Reference object, you must use the constructor of the appropriate Reference subclass to create a Reference object. This ensures the new Reference object contains referent and reference queues that are identical to the target Reference object. For the Red Hat build of OpenJDK 8.0.352 release, the java.lang.ref.Reference.enqueue method changes behavior. When application code calls the java.lang.ref.Reference.enqueue method, this method clears the Referent before it adds the object to the registered queue. After the Reference object is enqueued, code that expects the return value of java.lang.ref.Reference.get() to be non-null might throw a NullPointerException . The Red Hat build of OpenJDK 8.0.352 release changes the behavior of PhantomReference objects, so that they are cleared before being enqueued in any associated queues. This is the same as the existing behaviour for SoftReference and WeakReference objects. Links See JDK-8201793 (JDK Bug System). See JDK-8175797 (JDK Bug System). See JDK-8071507 (JDK Bug System). For more information about these Reference object configurations, see Red Hat build of OpenJDK 8 Maintenance Release 4 (Red Hat Customer Portal) Enablement of TLSv1.3 for client roles Red Hat build of OpenJDK 8.0.352 enables TLSv1.3 protocol support for client roles, by default. From the Red Hat build of OpenJDK 8.0.272 release, TLSv1.3 protocol support for server roles was already enabled. If you create a TLS client role in Red Hat build of OpenJDK 8.0.352 while keeping the default protocol setting, and TLSv1.3 is used in the connection established with the TLS server, compatibility issues might affect your application. The following list details common compatibility issues: TLSv1.3 uses a half-duplex-close policy whereas TLSv1.2 uses a full-duplex-close policy. You can use the jdk.tls.acknowledgeCloseNotify system property to configure TLSv1.3 to use a full-duplex-close policy. For more information about this configuration, see JDK-8208526 . TLSv1.3 does not support certain algorithms in the signature_algorithms_cert extension. For example, if you only allow Digital Signature Algorithm (DSA) for signature verification in your configurations, you will experience incompatibility issues when using the TLSv1.3 protocol. A client that uses DSA certificates for client authentication causes compatibility issues with TLSv1.3. TLSv1.3 contains different cipher suites than earlier TLS protocol versions. For an application with hard-coded unsupported cipher suites, compatibility issues might exist. TLSV1.3 session resumption and key update behaviors differ from earlier TLS protocol versions. 
An application that relies on handshake details from these protocols might experience compatibility issues. If you need to disable TLSv1.3 protocol support for your client role, complete one of the following actions: Obtain a TLSv1.2 context with SSLContext.getInstance("TLSv1.2") . Set the jdk.tls.client.protocols system property to TLSv1.2. For example, -Djdk.tls.client.protocols="TLSv1.2" . Set an earlier TLS protocol for the Red Hat build of OpenJDK javax.net.ssl API, as demonstrated with the following examples: Links See JDK-8208526 (JDK Bug System). jdk.httpserver.maxConnections system property Red Hat build of OpenJDK 8.0.352 adds a new system property, jdk.httpserver.maxConnections , that fixes a security issue where no connection limits were specified for the HttpServer service, which can cause accepted connections and established connections to remain open indefinitely. You can use the jdk.httpserver.maxConnections system property to change the HttpServer service's behavior in the following ways: Set a value of 0 or a negative value, such as -1 , to specify no connection limit for the service. Set a positive value, such as 1 , to cause the service to check any accepted connection against the current count of established connections. If the maximum number of established connections for the service is reached, the service immediately closes the accepted connection. Support for Microsoft Visual Studio 2017 From the Red Hat build of OpenJDK 8.0.352 release onward, the Windows JDK and JRE 1.8.0 releases are compiled with the Visual Studio 2017 toolchain, because this toolchain is currently supported by Microsoft. Note The Red Hat Customer Portal no longer uses the Alternative toolchain label to mark binaries that were compiled with the Visual Studio 2017 toolchain. For customers that rely on the Microsoft Visual Studio 2010 toolchain for compiling binaries, which Red Hat labels as the legacy toolchain , Red Hat continues to support these binaries. On the Software Details page, on the Red Hat Customer Portal, a file compiled with this toolchain contains a vs10 entry in its file name. For example, openjdk-1.8.0.345/java-1.8.0-openjdk-1.8.0.352-2.b08.redhat.windows.vs10.x86_64.zip . Important Microsoft no longer supports the Visual Studio 2010 toolchain, so Red Hat can only provide limited support for any products related to this toolchain. | [
"sslSocket.setEnabledProtocols(new String[] {\"TLSv1.2\"});",
"sslEngine.setEnabledProtocols(new String[] {\"TLSv1.2\"});",
"SSLParameters params = sslSocket.getSSLParameters(); params.setProtocols(new String[] {\"TLSv1.2\"}); slsSocket.setSSLParameters(params);"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/release_notes_for_red_hat_build_of_openjdk_8.0.352/openjdk-80352-release-notes_openjdk |
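The TLSv1.3 and jdk.httpserver.maxConnections notes above both describe launcher-level system properties; a minimal sketch of setting them on the java command line follows, where the JAR names and the connection limit of 500 are placeholders:
# Pin client-side connections to TLSv1.2 instead of the TLSv1.3 default
java -Djdk.tls.client.protocols="TLSv1.2" -jar client-app.jar
# Cap established connections for the built-in HttpServer (0 or a negative value means no limit)
java -Djdk.httpserver.maxConnections=500 -jar httpserver-app.jar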
2.9. Runtime Metadata | 2.9. Runtime Metadata Once you have adequately modeled your enterprise information systems, including the necessary technical metadata that describes the physical structure of your sources, you can use the metadata for data access. To prepare the metadata for use in the JBoss Data Virtualization Server, you take a snapshot of a metadata model for the JBoss Data Virtualization Server to use when resolving queries from your client applications. This runtime metadata represents a static version of design-time metadata you created or imported. This snapshot is in the form of a Virtual Database definition, or VDB. As you create this runtime metadata, the Teiid Designer : derives the runtime metadata from a consistent set of metadata models. creates a subset of design-time metadata, focusing on the technical metadata that describes the access to underlying enterprise information systems. optimizes runtime metadata for data access performance. You can continue to work with the design-time metadata, but once you have created a runtime metadata model, it remains static. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/runtime_metadata1 |
Chapter 9. Preparing a system with UEFI Secure Boot enabled to install and boot RHEL beta releases | Chapter 9. Preparing a system with UEFI Secure Boot enabled to install and boot RHEL beta releases To enhance the security of your operating system, use the UEFI Secure Boot feature for signature verification when booting a Red Hat Enterprise Linux Beta release on systems having UEFI Secure Boot enabled. 9.1. UEFI Secure Boot and RHEL Beta releases UEFI Secure Boot requires that the operating system kernel is signed with a recognized private key. UEFI Secure Boot then verifies the signature using the corresponding public key. For Red Hat Enterprise Linux Beta releases, the kernel is signed with a Red Hat Beta-specific private key. UEFI Secure Boot attempts to verify the signature using the corresponding public key, but because the hardware does not recognize the Beta private key, Red Hat Enterprise Linux Beta release system fails to boot. Therefore, to use UEFI Secure Boot with a Beta release, add the Red Hat Beta public key to your system using the Machine Owner Key (MOK) facility. 9.2. Adding a Beta public key for UEFI Secure Boot This section contains information about how to add a Red Hat Enterprise Linux Beta public key for UEFI Secure Boot. Prerequisites The UEFI Secure Boot is disabled on the system. The Red Hat Enterprise Linux Beta release is installed, and Secure Boot is disabled even after system reboot. You are logged in to the system, and the tasks in the Initial Setup window are complete. Procedure Begin to enroll the Red Hat Beta public key in the system's Machine Owner Key (MOK) list: USD(uname -r) is replaced by the kernel version - for example, 4.18.0-80.el8.x86_64 . Enter a password when prompted. Reboot the system and press any key to continue the startup. The Shim UEFI key management utility starts during the system startup. Select Enroll MOK . Select Continue . Select Yes and enter the password. The key is imported into the system's firmware. Select Reboot . Enable Secure Boot on the system. 9.3. Removing a Beta public key If you plan to remove the Red Hat Enterprise Linux Beta release, and install a Red Hat Enterprise Linux General Availability (GA) release, or a different operating system, then remove the Beta public key. The procedure describes how to remove a Beta public key. Procedure Begin to remove the Red Hat Beta public key from the system's Machine Owner Key (MOK) list: Enter a password when prompted. Reboot the system and press any key to continue the startup. The Shim UEFI key management utility starts during the system startup. Select Reset MOK . Select Continue . Select Yes and enter the password that you had specified in step 2. The key is removed from the system's firmware. Select Reboot . | [
"mokutil --import /usr/share/doc/kernel-keys/USD(uname -r)/kernel-signing-ca.cer",
"mokutil --reset"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/interactively_installing_rhel_over_the_network/booting-a-beta-system-with-uefi-secure-boot_rhel-installer |
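The MOK enrollment described above can also be checked from the command line before and after the reboot. A minimal sketch, assuming mokutil is installed and using the same kernel-keys path as the import command; the grep pattern is only an illustration:

    mokutil --sb-state                              # report whether Secure Boot is currently enabled
    mokutil --list-new                              # keys queued for enrollment before the reboot
    mokutil --list-enrolled | grep -i "Red Hat"     # confirm the Beta CA after enrollment and reboot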
Chapter 5. Setting up the test environment | Chapter 5. Setting up the test environment The first step towards certifying your product is setting up the environment where you can run the tests. The test environment consists of three systems: Test host : A workstation, referred to as the test host, is used as a medium for accessing the Controller and Compute nodes. The tests are only initiated on this system but are run on the two nodes. Controller : The tests designed for the specific plugin undergoing certification are run on the Controller node. Compute : Remaining certification-related tests are run on the Compute node. In multi-host, information is provided to the Compute node for test execution. 5.1. Setting up the test host The test host is used only to initiate a test run on the Controller and Compute node, display the progress of the tests, and present the final result file after gathering results from both nodes. Prerequisites You have installed RHEL 8 or 9 on the system. You have enabled access to the Controller and Compute nodes from the test host. You have installed Cockpit on the system. Procedure Use your RHN credentials to register your system using Red Hat Subscription Management: Display the list of available subscriptions for your system: Search for the subscription that provides the Red Hat Certification (for RHEL Server) repository and make a note of the subscription and its Pool ID. Attach the subscription to your system: Replace the pool_ID with the Pool ID of the subscription. Note You don't have to attach the subscription to your system, if you enable the option Simple content access for Red Hat Subscription Management . For more details, see How do I enable Simple Content Access for Red Hat Subscription Management? Subscribe to the Red Hat Certification channel: On RHEL 8: Replace HOSTTYPE with the system architecture. To find out the system architecture, run Example: On RHEL 9: Replace HOSTTYPE with the system architecture. To find out the system architecture, run Example: Install the certification and Cockpit RPMs. Only on RHEL 9 Generate a new SSH key pair on the test host, if it is not already present. View and copy the public key to enter it later during the set up of Controller and Compute node to allow secure and passwordless communication between the test host and each node. Replace <user> with your user name. Example: # cat /root/.ssh/id_rsa.pub 5.2. Setting up the Controller and Compute nodes Separate tests are run on the two nodes based on the defined role of each node in the test plan. Note Repeat the following process for setting up each node. Prerequisites You have installed RHOSP on the system based on the supported RHEL version, as applicable. The corresponding supported versions are as follows: RHOSP version Supported RHEL version 17.0 9.0 17.1 9.2 You have installed and enabled Cockpit on both nodes. Note You have installed the plugin that needs certification. This is applicable only to the Controller node. Procedure Use your RHN credentials to register your system using Red Hat Subscription Management: Display the list of available subscriptions for your system: Search for the subscription that provides the Red Hat Certification (for RHEL Server) repository and make a note of the subscription and its Pool ID. Attach the subscription to your system: Replace the pool_ID with the Pool ID of the subscription. Subscribe to the Red Hat Certification channel: On RHEL 8: Replace HOSTTYPE with the system architecture. 
To find out the system architecture, run Example: On RHEL 9: Replace HOSTTYPE with the system architecture. To find out the system architecture, run Example: Install the certification RPM. Install OpenStack test suite package: Open the authorized keys file in the Controller and Compute node and paste the public key of the test host that you copied earlier, and then save the file. This will allow secure and passwordless communication between the test host and each node. Replace <user> with your user name. Example: Additional resources Setting up passwordless SSH | [
"subscription-manager register",
"subscription-manager list --available*",
"subscription-manager attach --pool= <pool_ID >",
"subscription-manager repos --enable=cert-1-for-rhel-8- <HOSTTYPE> -rpms",
"uname -m",
"subscription-manager repos --enable=cert-1-for-rhel-8-x86_64-rpms",
"subscription-manager repos --enable=cert-1-for-rhel-9- <HOSTTYPE> -rpms",
"uname -m",
"subscription-manager repos --enable=cert-1-for-rhel-9-x86_64-rpms",
"yum install redhat-certification-cockpit",
"yum install redhat-certification",
"ssh-keygen",
"cat /<user>/.ssh/id_rsa.pub",
"subscription-manager register",
"subscription-manager list --available*",
"subscription-manager attach --pool= <pool_ID >",
"subscription-manager repos --enable=cert-1-for-rhel-8- <HOSTTYPE> -rpms",
"uname -m",
"subscription-manager repos --enable=cert-1-for-rhel-8-x86_64-rpms",
"subscription-manager repos --enable=cert-1-for-rhel-9- <HOSTTYPE> -rpms",
"uname -m",
"subscription-manager repos --enable=cert-1-for-rhel-9-x86_64-rpms",
"yum install redhat-certification",
"yum install redhat-certification-openstack",
"vi /<user>/.ssh/authorized_keys",
"vi /root/.ssh/authorized_keys"
] | https://docs.redhat.com/en/documentation/red_hat_software_certification/2025/html/red_hat_openstack_certification_workflow_guide/assembly_rhosp-wf-setting-up-the-test-environment_onboarding-certification-partners |
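As an alternative to pasting the test host public key into each node's authorized_keys file by hand, ssh-copy-id can install the key and a non-interactive ssh call can confirm passwordless access. This is a sketch only; the Controller and Compute hostnames are placeholders for your own addresses:

    # Hypothetical node addresses; substitute your Controller and Compute hosts.
    for node in controller.example.com compute.example.com; do
        ssh-copy-id -i /root/.ssh/id_rsa.pub root@"$node"    # appends the key to ~/.ssh/authorized_keys
        ssh -o BatchMode=yes root@"$node" hostname           # must succeed without a password prompt
    done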
Chapter 5. Kafka producer configuration tuning | Chapter 5. Kafka producer configuration tuning Use configuration properties to optimize the performance of Kafka producers. You can use standard Kafka producer configuration options. Adjusting your configuration to maximize throughput might increase latency or vice versa. You will need to experiment and tune your producer configuration to get the balance you need. When configuring a producer, consider the following aspects carefully, as they significantly impact its performance and behavior: Compression By compressing messages before they are sent over the network, you can conserve network bandwidth and reduce disk storage requirements, but with the additional cost of increased CPU utilization due to the compression and decompression processes. Batching Adjusting the batch size and time intervals when the producer sends messages can affect throughput and latency. Partitioning Partitioning strategies in the Kafka cluster can support producers through parallelism and load balancing, whereby producers can write to multiple partitions concurrently and each partition receives an equal share of messages. Other strategies might include topic replication for fault tolerance. Securing access Implement security measures for authentication, encryption, and authorization by setting up user accounts to manage secure access to Kafka . 5.1. Basic producer configuration Connection and serializer properties are required for every producer. Generally, it is good practice to add a client id for tracking, and use compression on the producer to reduce batch sizes in requests. In a basic producer configuration: The order of messages in a partition is not guaranteed. The acknowledgment of messages reaching the broker does not guarantee durability. Basic producer configuration properties # ... bootstrap.servers=localhost:9092 1 key.serializer=org.apache.kafka.common.serialization.StringSerializer 2 value.serializer=org.apache.kafka.common.serialization.StringSerializer 3 client.id=my-client 4 compression.type=gzip 5 # ... 1 (Required) Tells the producer to connect to a Kafka cluster using a host:port bootstrap server address for a Kafka broker. The producer uses the address to discover and connect to all brokers in the cluster. Use a comma-separated list to specify two or three addresses in case a server is down, but it's not necessary to provide a list of all the brokers in the cluster. 2 (Required) Serializer to transform the key of each message to bytes prior to them being sent to a broker. 3 (Required) Serializer to transform the value of each message to bytes prior to them being sent to a broker. 4 (Optional) The logical name for the client, which is used in logs and metrics to identify the source of a request. 5 (Optional) The codec for compressing messages, which are sent and might be stored in compressed format and then decompressed when reaching a consumer. Compression is useful for improving throughput and reducing the load on storage, but might not be suitable for low latency applications where the cost of compression or decompression could be prohibitive. 5.2. Data durability Message delivery acknowledgments minimize the likelihood that messages are lost. By default, acknowledgments are enabled with the acks property set to acks=all . To control the maximum time the producer waits for acknowledgments from the broker and handle potential delays in sending messages, you can use the delivery.timeout.ms property. Acknowledging message delivery # ... 
acks=all 1 delivery.timeout.ms=120000 2 # ... 1 acks=all forces a leader replica to replicate messages to a certain number of followers before acknowledging that the message request was successfully received. 2 The maximum time in milliseconds to wait for a complete send request. You can set the value to MAX_LONG to delegate to Kafka an indefinite number of retries. The default is 120000 or 2 minutes. The acks=all setting offers the strongest guarantee of delivery, but it will increase the latency between the producer sending a message and receiving acknowledgment. If you don't require such strong guarantees, a setting of acks=0 or acks=1 provides either no delivery guarantees or only acknowledgment that the leader replica has written the record to its log. With acks=all , the leader waits for all in-sync replicas to acknowledge message delivery. A topic's min.insync.replicas configuration sets the minimum required number of in-sync replica acknowledgements. The number of acknowledgements include that of the leader and followers. A typical starting point is to use the following configuration: Producer configuration: acks=all (default) Broker configuration for topic replication: default.replication.factor=3 (default = 1 ) min.insync.replicas=2 (default = 1 ) When you create a topic, you can override the default replication factor. You can also override min.insync.replicas at the topic level in the topic configuration. Streams for Apache Kafka uses this configuration in the example configuration files for multi-node deployment of Kafka. The following table describes how this configuration operates depending on the availability of followers that replicate the leader replica. Table 5.1. Follower availability Number of followers available and in-sync Acknowledgements Producer can send messages? 2 The leader waits for 2 follower acknowledgements Yes 1 The leader waits for 1 follower acknowledgement Yes 0 The leader raises an exception No A topic replication factor of 3 creates one leader replica and two followers. In this configuration, the producer can continue if a single follower is unavailable. Some delay can occur whilst removing a failed broker from the in-sync replicas or a creating a new leader. If the second follower is also unavailable, message delivery will not be successful. Instead of acknowledging successful message delivery, the leader sends an error ( not enough replicas ) to the producer. The producer raises an equivalent exception. With retries configuration, the producer can resend the failed message request. Note If the system fails, there is a risk of unsent data in the buffer being lost. 5.3. Ordered delivery Idempotent producers avoid duplicates as messages are delivered exactly once. IDs and sequence numbers are assigned to messages to ensure the order of delivery, even in the event of failure. If you are using acks=all for data consistency, using idempotency makes sense for ordered delivery. Idempotency is enabled for producers by default. With idempotency enabled, you can set the number of concurrent in-flight requests to a maximum of 5 for message ordering to be preserved. Ordered delivery with idempotency # ... enable.idempotence=true 1 max.in.flight.requests.per.connection=5 2 acks=all 3 retries=2147483647 4 # ... 1 Set to true to enable the idempotent producer. 2 With idempotent delivery the number of in-flight requests may be greater than 1 while still providing the message ordering guarantee. The default is 5 in-flight requests. 3 Set acks to all . 
4 Set the number of attempts to resend a failed message request. If you choose not to use acks=all and disable idempotency because of the performance cost, set the number of in-flight (unacknowledged) requests to 1 to preserve ordering. Otherwise, a situation is possible where Message-A fails only to succeed after Message-B was already written to the broker. Ordered delivery without idempotency # ... enable.idempotence=false 1 max.in.flight.requests.per.connection=1 2 retries=2147483647 # ... 1 Set to false to disable the idempotent producer. 2 Set the number of in-flight requests to exactly 1 . 5.4. Reliability guarantees Idempotence is useful for exactly once writes to a single partition. Transactions, when used with idempotence, allow exactly once writes across multiple partitions. Transactions guarantee that messages using the same transactional ID are produced once, and either all are successfully written to the respective logs or none of them are. # ... enable.idempotence=true max.in.flight.requests.per.connection=5 acks=all retries=2147483647 transactional.id= UNIQUE-ID 1 transaction.timeout.ms=900000 2 # ... 1 Specify a unique transactional ID. 2 Set the maximum allowed time for transactions in milliseconds before a timeout error is returned. The default is 900000 or 15 minutes. The choice of transactional.id is important in order that the transactional guarantee is maintained. Each transactional id should be used for a unique set of topic partitions. For example, this can be achieved using an external mapping of topic partition names to transactional ids, or by computing the transactional id from the topic partition names using a function that avoids collisions. 5.5. Optimizing producers for throughput and latency Usually, the requirement of a system is to satisfy a particular throughput target for a proportion of messages within a given latency. For example, targeting 500,000 messages per second with 95% of messages being acknowledged within 2 seconds. It's likely that the messaging semantics (message ordering and durability) of your producer are defined by the requirements for your application. For instance, it's possible that you don't have the option of using acks=0 or acks=1 without breaking some important property or guarantee provided by your application. Broker restarts have a significant impact on high percentile statistics. For example, over a long period the 99th percentile latency is dominated by behavior around broker restarts. This is worth considering when designing benchmarks or comparing performance numbers from benchmarking with performance numbers seen in production. Depending on your objective, Kafka offers a number of configuration parameters and techniques for tuning producer performance for throughput and latency. Message batching ( linger.ms and batch.size ) Message batching delays sending messages in the hope that more messages destined for the same broker will be sent, allowing them to be batched into a single produce request. Batching is a compromise between higher latency in return for higher throughput. Time-based batching is configured using linger.ms , and size-based batching is configured using batch.size . Compression ( compression.type ) Message compression adds latency in the producer (CPU time spent compressing the messages), but makes requests (and potentially disk writes) smaller, which can increase throughput. Whether compression is worthwhile, and the best compression to use, will depend on the messages being sent. 
Compression happens on the thread which calls KafkaProducer.send() , so if the latency of this method matters for your application you should consider using more threads. Pipelining ( max.in.flight.requests.per.connection ) Pipelining means sending more requests before the response to a request has been received. In general more pipelining means better throughput, up to a threshold at which other effects, such as worse batching, start to counteract the effect on throughput. Lowering latency When your application calls the KafkaProducer.send() method, messages undergo a series of operations before being sent: Interception: Messages are processed by any configured interceptors. Serialization: Messages are serialized into the appropriate format. Partition assignment: Each message is assigned to a specific partition. Compression: Messages are compressed to conserve network bandwidth. Batching: Compressed messages are added to a batch in a partition-specific queue. During these operations, the send() method is momentarily blocked. It also remains blocked if the buffer.memory is full or if metadata is unavailable. Batches will remain in the queue until one of the following occurs: The batch is full (according to batch.size ). The delay introduced by linger.ms has passed. The sender is ready to dispatch batches for other partitions to the same broker and can include this batch. The producer is being flushed or closed. To minimize the impact of send() blocking on latency, optimize batching and buffering configurations. Use the linger.ms and batch.size properties to batch more messages into a single produce request for higher throughput. # ... linger.ms=100 1 batch.size=16384 2 buffer.memory=33554432 3 # ... 1 The linger.ms property adds a delay in milliseconds so that larger batches of messages are accumulated and sent in a request. The default is 0 . 2 If a maximum batch.size in bytes is used, a request is sent when the maximum is reached, or messages have been queued for longer than linger.ms (whichever comes sooner). Adding the delay allows batches to accumulate messages up to the batch size. 3 The buffer size must be at least as big as the batch size, and be able to accommodate buffering, compression, and in-flight requests. Increasing throughput You can improve throughput of your message requests by directing messages to a specified partition using a custom partitioner to replace the default. # ... partitioner.class=my-custom-partitioner 1 # ... 1 Specify the class name of your custom partitioner. | [
"bootstrap.servers=localhost:9092 1 key.serializer=org.apache.kafka.common.serialization.StringSerializer 2 value.serializer=org.apache.kafka.common.serialization.StringSerializer 3 client.id=my-client 4 compression.type=gzip 5",
"acks=all 1 delivery.timeout.ms=120000 2",
"enable.idempotence=true 1 max.in.flight.requests.per.connection=5 2 acks=all 3 retries=2147483647 4",
"enable.idempotence=false 1 max.in.flight.requests.per.connection=1 2 retries=2147483647",
"enable.idempotence=true max.in.flight.requests.per.connection=5 acks=all retries=2147483647 transactional.id= UNIQUE-ID 1 transaction.timeout.ms=900000 2",
"linger.ms=100 1 batch.size=16384 2 buffer.memory=33554432 3",
"partitioner.class=my-custom-partitioner 1"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/kafka_configuration_tuning/con-producer-config-properties-str |
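To experiment with the batching, compression, and acknowledgement settings described above, the producer performance tool bundled with Kafka can compare throughput and latency across configurations. This is a sketch only; the script path, broker address, and topic name are assumptions that depend on your installation:

    ./bin/kafka-producer-perf-test.sh \
      --topic my-topic \
      --num-records 1000000 \
      --record-size 1024 \
      --throughput -1 \
      --producer-props bootstrap.servers=localhost:9092 \
          acks=all enable.idempotence=true compression.type=gzip \
          linger.ms=100 batch.size=16384
    # The tool reports records/sec plus average and percentile latencies; rerun it
    # while varying linger.ms, batch.size, and compression.type to see the trade-off.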
Chapter 47. ProcessBaselineService | Chapter 47. ProcessBaselineService 47.1. DeleteProcessBaselines DELETE /v1/processbaselines DeleteProcessBaselines deletes baselines. 47.1.1. Description 47.1.2. Parameters 47.1.2.1. Query Parameters Name Description Required Default Pattern query - null confirm - null 47.1.3. Return Type V1DeleteProcessBaselinesResponse 47.1.4. Content Type application/json 47.1.5. Responses Table 47.1. HTTP Response Codes Code Message Datatype 200 A successful response. V1DeleteProcessBaselinesResponse 0 An unexpected error response. GooglerpcStatus 47.1.6. Samples 47.1.7. Common object reference 47.1.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 47.1.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 47.1.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 47.1.7.3. V1DeleteProcessBaselinesResponse Field Name Required Nullable Type Description Format numDeleted Integer int32 dryRun Boolean 47.2. 
GetProcessBaseline GET /v1/processbaselines/key GetProcessBaselineById returns the single process baseline referenced by the given ID. 47.2.1. Description 47.2.2. Parameters 47.2.2.1. Query Parameters Name Description Required Default Pattern key.deploymentId The idea is for the keys to be flexible. Only certain combinations of these will be supported. - null key.containerName - null key.clusterId - null key.namespace - null 47.2.3. Return Type StorageProcessBaseline 47.2.4. Content Type application/json 47.2.5. Responses Table 47.2. HTTP Response Codes Code Message Datatype 200 A successful response. StorageProcessBaseline 0 An unexpected error response. GooglerpcStatus 47.2.6. Samples 47.2.7. Common object reference 47.2.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 47.2.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 47.2.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 47.2.7.3. 
StorageBaselineElement Field Name Required Nullable Type Description Format element StorageBaselineItem auto Boolean 47.2.7.4. StorageBaselineItem Field Name Required Nullable Type Description Format processName String 47.2.7.5. StorageProcessBaseline Field Name Required Nullable Type Description Format id String key StorageProcessBaselineKey elements List of StorageBaselineElement elementGraveyard List of StorageBaselineElement created Date date-time userLockedTimestamp Date date-time stackRoxLockedTimestamp Date date-time lastUpdate Date date-time 47.2.7.6. StorageProcessBaselineKey Field Name Required Nullable Type Description Format deploymentId String The idea is for the keys to be flexible. Only certain combinations of these will be supported. containerName String clusterId String namespace String 47.3. LockProcessBaselines PUT /v1/processbaselines/lock LockProcessBaselines accepts a list of baseline IDs, locks those baselines, and returns the updated baseline objects. 47.3.1. Description 47.3.2. Parameters 47.3.2.1. Body Parameter Name Description Required Default Pattern body V1LockProcessBaselinesRequest X 47.3.3. Return Type V1UpdateProcessBaselinesResponse 47.3.4. Content Type application/json 47.3.5. Responses Table 47.3. HTTP Response Codes Code Message Datatype 200 A successful response. V1UpdateProcessBaselinesResponse 0 An unexpected error response. GooglerpcStatus 47.3.6. Samples 47.3.7. Common object reference 47.3.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 47.3.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 47.3.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. 
* An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 47.3.7.3. StorageBaselineElement Field Name Required Nullable Type Description Format element StorageBaselineItem auto Boolean 47.3.7.4. StorageBaselineItem Field Name Required Nullable Type Description Format processName String 47.3.7.5. StorageProcessBaseline Field Name Required Nullable Type Description Format id String key StorageProcessBaselineKey elements List of StorageBaselineElement elementGraveyard List of StorageBaselineElement created Date date-time userLockedTimestamp Date date-time stackRoxLockedTimestamp Date date-time lastUpdate Date date-time 47.3.7.6. StorageProcessBaselineKey Field Name Required Nullable Type Description Format deploymentId String The idea is for the keys to be flexible. Only certain combinations of these will be supported. containerName String clusterId String namespace String 47.3.7.7. V1LockProcessBaselinesRequest Field Name Required Nullable Type Description Format keys List of StorageProcessBaselineKey locked Boolean 47.3.7.8. V1ProcessBaselineUpdateError Field Name Required Nullable Type Description Format error String key StorageProcessBaselineKey 47.3.7.9. V1UpdateProcessBaselinesResponse Field Name Required Nullable Type Description Format baselines List of StorageProcessBaseline errors List of V1ProcessBaselineUpdateError 47.4. UpdateProcessBaselines PUT /v1/processbaselines AddToProcessBaselines adds a list of process names to each of a list of process baselines. 47.4.1. Description 47.4.2. Parameters 47.4.2.1. Body Parameter Name Description Required Default Pattern body V1UpdateProcessBaselinesRequest X 47.4.3. Return Type V1UpdateProcessBaselinesResponse 47.4.4. Content Type application/json 47.4.5. Responses Table 47.4. HTTP Response Codes Code Message Datatype 200 A successful response. V1UpdateProcessBaselinesResponse 0 An unexpected error response. GooglerpcStatus 47.4.6. Samples 47.4.7. Common object reference 47.4.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 47.4.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 47.4.7.2.1. 
JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 47.4.7.3. StorageBaselineElement Field Name Required Nullable Type Description Format element StorageBaselineItem auto Boolean 47.4.7.4. StorageBaselineItem Field Name Required Nullable Type Description Format processName String 47.4.7.5. StorageProcessBaseline Field Name Required Nullable Type Description Format id String key StorageProcessBaselineKey elements List of StorageBaselineElement elementGraveyard List of StorageBaselineElement created Date date-time userLockedTimestamp Date date-time stackRoxLockedTimestamp Date date-time lastUpdate Date date-time 47.4.7.6. StorageProcessBaselineKey Field Name Required Nullable Type Description Format deploymentId String The idea is for the keys to be flexible. Only certain combinations of these will be supported. containerName String clusterId String namespace String 47.4.7.7. V1ProcessBaselineUpdateError Field Name Required Nullable Type Description Format error String key StorageProcessBaselineKey 47.4.7.8. V1UpdateProcessBaselinesRequest Field Name Required Nullable Type Description Format keys List of StorageProcessBaselineKey addElements List of StorageBaselineItem removeElements List of StorageBaselineItem 47.4.7.9. V1UpdateProcessBaselinesResponse Field Name Required Nullable Type Description Format baselines List of StorageProcessBaseline errors List of V1ProcessBaselineUpdateError | [
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }"
] | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/api_reference/processbaselineservice |
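The endpoints documented above can be exercised with curl. This is a sketch only: ROX_ENDPOINT and ROX_API_TOKEN are assumed to hold your Central address and a valid API token, and the key values are placeholders:

    # Fetch a single baseline by key (GET /v1/processbaselines/key).
    curl -sk -H "Authorization: Bearer $ROX_API_TOKEN" \
      "https://$ROX_ENDPOINT/v1/processbaselines/key?key.deploymentId=<deployment_id>&key.containerName=<container_name>&key.clusterId=<cluster_id>&key.namespace=<namespace>"

    # Lock the same baseline (PUT /v1/processbaselines/lock) using the
    # V1LockProcessBaselinesRequest fields listed above.
    curl -sk -X PUT -H "Authorization: Bearer $ROX_API_TOKEN" \
      -d '{"keys":[{"deploymentId":"<deployment_id>","containerName":"<container_name>","clusterId":"<cluster_id>","namespace":"<namespace>"}],"locked":true}' \
      "https://$ROX_ENDPOINT/v1/processbaselines/lock"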
Chapter 38. Storage | Chapter 38. Storage No support for thin provisioning on top of RAID in a cluster While RAID logical volumes and thinly provisioned logical volumes can be used in a cluster when activated exclusively, there is currently no support for thin provisioning on top of RAID in a cluster. This is the case even if the combination is activated exclusively. Currently this combination is only supported in LVM's single machine non-clustered mode. When using thin-provisioning, it is possible to lose buffered writes to the thin-pool if it reaches capacity If a thin-pool is filled to capacity, it may be possible to lose some writes even if the pool is being grown at that time. This is because a resize operation (even an automated one) will attempt to flush outstanding I/O to the storage device prior to the resize being performed. Since there is no room in the thin-pool, the I/O operations must be errored first to allow the grow to succeed. Once the thin pool has grown, the logical volumes associated with the thin-pool will return to normal operation. As a workaround to this problem, set 'thin_pool_autoextend_threshold' and 'thin_pool_autoextend_percent' appropriately for your needs in the lvm.conf file. Do not set the threshold so high or the percent so low that your thin-pool will reach full capacity so quickly that it does not allow enough time for it to be auto-extended (or manually extended if you prefer). If you are not using over-provisioning (creating logical volumes in excess of the size of the backing thin-pool), then be prepared to remove snapshots as necessary if the thin-pool begins to near capacity. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.2_release_notes/known-issues-storage |
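The workaround above amounts to two settings in lvm.conf plus monitoring of thin-pool usage. A sketch with example values only; tune the threshold and percent for your own workload and replace vg00/pool0 with your volume group and thin-pool names:

    # In /etc/lvm/lvm.conf (activation section), for example:
    #     thin_pool_autoextend_threshold = 70
    #     thin_pool_autoextend_percent   = 20
    grep -E 'thin_pool_autoextend_(threshold|percent)' /etc/lvm/lvm.conf
    lvs -o lv_name,data_percent,metadata_percent vg00/pool0    # watch how full the thin pool is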
Getting Started with RHEL System Registration | Getting Started with RHEL System Registration Subscription Central 1-latest Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/subscription_central/1-latest/html/getting_started_with_rhel_system_registration/index |
Chapter 4. Installing a cluster on Nutanix in a restricted network | Chapter 4. Installing a cluster on Nutanix in a restricted network In OpenShift Container Platform 4.15, you can install a cluster on Nutanix infrastructure in a restricted network by creating an internal mirror of the installation release content. 4.1. Prerequisites You have reviewed details about the OpenShift Container Platform installation and update processes. The installation program requires access to port 9440 on Prism Central and Prism Element. You verified that port 9440 is accessible. If you use a firewall, you have met these prerequisites: You confirmed that port 9440 is accessible. Control plane nodes must be able to reach Prism Central and Prism Element on port 9440 for the installation to succeed. You configured the firewall to grant access to the sites that OpenShift Container Platform requires. This includes the use of Telemetry. If your Nutanix environment is using the default self-signed SSL/TLS certificate, replace it with a certificate that is signed by a CA. The installation program requires a valid CA-signed certificate to access to the Prism Central API. For more information about replacing the self-signed certificate, see the Nutanix AOS Security Guide . If your Nutanix environment uses an internal CA to issue certificates, you must configure a cluster-wide proxy as part of the installation process. For more information, see Configuring a custom PKI . Important Use 2048-bit certificates. The installation fails if you use 4096-bit certificates with Prism Central 2022.x. You have a container image registry, such as Red Hat Quay. If you do not already have a registry, you can create a mirror registry using mirror registry for Red Hat OpenShift . You have used the oc-mirror OpenShift CLI (oc) plugin to mirror all of the required OpenShift Container Platform content and other images, including the Nutanix CSI Operator, to your mirror registry. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. 4.2. About installations in restricted networks In OpenShift Container Platform 4.15, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. 4.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 4.3. 
Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 4.4. Adding Nutanix root CA certificates to your system trust Because the installation program requires access to the Prism Central API, you must add your Nutanix trusted root CA certificates to your system trust before you install an OpenShift Container Platform cluster. Procedure From the Prism Central web console, download the Nutanix root CA certificates. 
Extract the compressed file that contains the Nutanix root CA certificates. Add the files for your operating system to the system trust. For example, on a Fedora operating system, run the following command: # cp certs/lin/* /etc/pki/ca-trust/source/anchors Update your system trust. For example, on a Fedora operating system, run the following command: # update-ca-trust extract 4.5. Downloading the RHCOS cluster image Prism Central requires access to the Red Hat Enterprise Linux CoreOS (RHCOS) image to install the cluster. You can use the installation program to locate and download the RHCOS image and make it available through an internal HTTP server or Nutanix Objects. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. Procedure Change to the directory that contains the installation program and run the following command: USD ./openshift-install coreos print-stream-json Use the output of the command to find the location of the Nutanix image, and click the link to download it. Example output "nutanix": { "release": "411.86.202210041459-0", "formats": { "qcow2": { "disk": { "location": "https://rhcos.mirror.openshift.com/art/storage/releases/rhcos-4.11/411.86.202210041459-0/x86_64/rhcos-411.86.202210041459-0-nutanix.x86_64.qcow2", "sha256": "42e227cac6f11ac37ee8a2f9528bb3665146566890577fd55f9b950949e5a54b" Make the image available through an internal HTTP server or Nutanix Objects. Note the location of the downloaded image. You update the platform section in the installation configuration file ( install-config.yaml ) with the image's location before deploying the cluster. Snippet of an install-config.yaml file that specifies the RHCOS image platform: nutanix: clusterOSImage: http://example.com/images/rhcos-411.86.202210041459-0-nutanix.x86_64.qcow2 4.6. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Nutanix. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. You have the imageContentSourcePolicy.yaml file that was created when you mirrored your registry. You have the location of the Red Hat Enterprise Linux CoreOS (RHCOS) image you download. You have obtained the contents of the certificate for your mirror registry. You have retrieved a Red Hat Enterprise Linux CoreOS (RHCOS) image and uploaded it to an accessible location. You have verified that you have met the Nutanix networking requirements. For more information, see "Preparing to install on Nutanix". Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. 
However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select nutanix as the platform to target. Enter the Prism Central domain name or IP address. Enter the port that is used to log into Prism Central. Enter the credentials that are used to log into Prism Central. The installation program connects to Prism Central. Select the Prism Element that will manage the OpenShift Container Platform cluster. Select the network subnet to use. Enter the virtual IP address that you configured for control plane API access. Enter the virtual IP address that you configured for cluster ingress. Enter the base domain. This base domain must be the same one that you configured in the DNS records. Enter a descriptive name for your cluster. The cluster name you enter must match the cluster name you specified when configuring the DNS records. In the install-config.yaml file, set the value of platform.nutanix.clusterOSImage to the image location or name. For example: platform: nutanix: clusterOSImage: http://mirror.example.com/images/rhcos-47.83.202103221318-0-nutanix.x86_64.qcow2 Edit the install-config.yaml file to give the additional information that is required for an installation in a restricted network. Update the pullSecret value to contain the authentication information for your registry: pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "[email protected]"}}}' For <mirror_host_name> , specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials> , specify the base64-encoded user name and password for your mirror registry. Add the additionalTrustBundle parameter and value. additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry. Add the image content resources, which resemble the following YAML excerpt: imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release For these values, use the imageContentSourcePolicy.yaml file that was created when you mirrored the registry. Optional: Set the publishing strategy to Internal : publish: Internal By setting this option, you create an internal Ingress Controller and a private load balancer. Optional: Update one or more of the default configuration parameters in the install.config.yaml file to customize the installation. For more information about the parameters, see "Installation configuration parameters". Note If you are installing a three-node cluster, be sure to set the compute.replicas parameter to 0 . This ensures that cluster's control planes are schedulable. For more information, see "Installing a three-node cluster on {platform}". 
Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for Nutanix 4.6.1. Sample customized install-config.yaml file for Nutanix You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 3 platform: nutanix: 4 cpus: 2 coresPerSocket: 2 memoryMiB: 8196 osDisk: diskSizeGiB: 120 categories: 5 - key: <category_key_name> value: <category_value> controlPlane: 6 hyperthreading: Enabled 7 name: master replicas: 3 platform: nutanix: 8 cpus: 4 coresPerSocket: 2 memoryMiB: 16384 osDisk: diskSizeGiB: 120 categories: 9 - key: <category_key_name> value: <category_value> metadata: creationTimestamp: null name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: nutanix: apiVIP: 10.40.142.7 12 ingressVIP: 10.40.142.8 13 defaultMachinePlatform: bootType: Legacy categories: 14 - key: <category_key_name> value: <category_value> project: 15 type: name name: <project_name> prismCentral: endpoint: address: your.prismcentral.domainname 16 port: 9440 17 password: <password> 18 username: <username> 19 prismElements: - endpoint: address: your.prismelement.domainname port: 9440 uuid: 0005b0f1-8f43-a0f2-02b7-3cecef193712 subnetUUIDs: - c7938dc6-7659-453e-a688-e26020c68e43 clusterOSImage: http://example.com/images/rhcos-47.83.202103221318-0-nutanix.x86_64.qcow2 20 credentialsMode: Manual publish: External pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 23 additionalTrustBundle: | 24 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 25 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 1 10 12 13 16 17 18 19 Required. The installation program prompts you for this value. 2 6 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used. 3 7 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . 
If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 8 Optional: Provide additional configuration for the machine pool parameters for the compute and control plane machines. 5 9 14 Optional: Provide one or more pairs of a prism category key and a prism category value. These category key-value pairs must exist in Prism Central. You can provide separate categories to compute machines, control plane machines, or all machines. 11 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 15 Optional: Specify a project with which VMs are associated. Specify either name or uuid for the project type, and then provide the corresponding UUID or project name. You can associate projects to compute machines, control plane machines, or all machines. 20 Optional: By default, the installation program downloads and installs the Red Hat Enterprise Linux CoreOS (RHCOS) image. If Prism Central does not have internet access, you can override the default behavior by hosting the RHCOS image on any HTTP server or Nutanix Objects and pointing the installation program to the image. 21 For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example registry.example.com or registry.example.com:5000 . For <credentials> , specify the base64-encoded user name and password for your mirror registry. 22 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 23 Optional: You can provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 24 Provide the contents of the certificate file that you used for your mirror registry. 25 Provide these values from the metadata.name: release-0 section of the imageContentSourcePolicy.yaml file that was created when you mirrored the registry. 4.6.2. Configuring failure domains Failure domains improve the fault tolerance of an OpenShift Container Platform cluster by distributing control plane and compute machines across multiple Nutanix Prism Elements (clusters). Tip It is recommended that you configure three failure domains to ensure high-availability. Prerequisites You have an installation configuration file ( install-config.yaml ). Procedure Edit the install-config.yaml file and add the following stanza to configure the first failure domain: apiVersion: v1 baseDomain: example.com compute: # ... platform: nutanix: failureDomains: - name: <failure_domain_name> prismElement: name: <prism_element_name> uuid: <prism_element_uuid> subnetUUIDs: - <network_uuid> # ... 
where: <failure_domain_name> Specifies a unique name for the failure domain. The name is limited to 64 or fewer characters, which can include lower-case letters, digits, and a dash ( - ). The dash cannot be in the leading or ending position of the name. <prism_element_name> Optional. Specifies the name of the Prism Element. <prism_element_uuid > Specifies the UUID of the Prism Element. <network_uuid > Specifies the UUID of the Prism Element subnet object. The subnet's IP address prefix (CIDR) should contain the virtual IP addresses that the OpenShift Container Platform cluster uses. Only one subnet per failure domain (Prism Element) in an OpenShift Container Platform cluster is supported. As required, configure additional failure domains. To distribute control plane and compute machines across the failure domains, do one of the following: If compute and control plane machines can share the same set of failure domains, add the failure domain names under the cluster's default machine configuration. Example of control plane and compute machines sharing a set of failure domains apiVersion: v1 baseDomain: example.com compute: # ... platform: nutanix: defaultMachinePlatform: failureDomains: - failure-domain-1 - failure-domain-2 - failure-domain-3 # ... If compute and control plane machines must use different failure domains, add the failure domain names under the respective machine pools. Example of control plane and compute machines using different failure domains apiVersion: v1 baseDomain: example.com compute: # ... controlPlane: platform: nutanix: failureDomains: - failure-domain-1 - failure-domain-2 - failure-domain-3 # ... compute: platform: nutanix: failureDomains: - failure-domain-1 - failure-domain-2 # ... Save the file. 4.6.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 
3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 4.7. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.15. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 Windows Client entry and save the file. Unzip the archive with a ZIP program. 
Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.15 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 4.8. Configuring IAM for Nutanix Installing the cluster requires that the Cloud Credential Operator (CCO) operate in manual mode. While the installation program configures the CCO for manual mode, you must specify the identity and access management secrets. Prerequisites You have configured the ccoctl binary. You have an install-config.yaml file. Procedure Create a YAML file that contains the credentials data in the following format: Credentials data format credentials: - type: basic_auth 1 data: prismCentral: 2 username: <username_for_prism_central> password: <password_for_prism_central> prismElements: 3 - name: <name_of_prism_element> username: <username_for_prism_element> password: <password_for_prism_element> 1 Specify the authentication type. Only basic authentication is supported. 2 Specify the Prism Central credentials. 3 Optional: Specify the Prism Element credentials. Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. 
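As an optional check that is not part of the documented procedure, you can list the target directory to confirm that the extract command wrote the component CredentialsRequest manifests; the exact file names vary by release: USD ls <path_to_directory_for_credentials_requests> Each extracted file contains a CredentialsRequest object similar to the following sample.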
Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: annotations: include.release.openshift.io/self-managed-high-availability: "true" labels: controller-tools.k8s.io: "1.0" name: openshift-machine-api-nutanix namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: NutanixProviderSpec secretRef: name: nutanix-credentials namespace: openshift-machine-api Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl nutanix create-shared-secrets \ --credentials-requests-dir=<path_to_credentials_requests_directory> \ 1 --output-dir=<ccoctl_output_dir> \ 2 --credentials-source-filepath=<path_to_credentials_file> 3 1 Specify the path to the directory that contains the files for the component CredentialsRequests objects. 2 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 3 Optional: Specify the directory that contains the credentials data YAML file. By default, ccoctl expects this file to be in <home_directory>/.nutanix/credentials . Edit the install-config.yaml configuration file so that the credentialsMode parameter is set to Manual . Example install-config.yaml configuration file apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 ... 1 Add this line to set the credentialsMode parameter to Manual . Create the installation manifests by running the following command: USD openshift-install create manifests --dir <installation_directory> 1 1 Specify the path to the directory that contains the install-config.yaml file for your cluster. Copy the generated credential files to the target manifests directory by running the following command: USD cp <ccoctl_output_dir>/manifests/*credentials.yaml ./<installation_directory>/manifests Verification Ensure that the appropriate secrets exist in the manifests directory. USD ls ./<installation_directory>/manifests Example output cluster-config.yaml cluster-dns-02-config.yml cluster-infrastructure-02-config.yml cluster-ingress-02-config.yml cluster-network-01-crd.yml cluster-network-02-config.yml cluster-proxy-01-config.yaml cluster-scheduler-02-config.yml cvo-overrides.yaml kube-cloud-config.yaml kube-system-configmap-root-ca.yaml machine-config-server-tls-secret.yaml openshift-config-secret-pull-secret.yaml openshift-cloud-controller-manager-nutanix-credentials-credentials.yaml openshift-machine-api-nutanix-credentials-credentials.yaml 4.9. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 
2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 4.10. Post installation Complete the following steps to complete the configuration of your cluster. 4.10.1. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 4.10.2. Installing the policy resources into the cluster Mirroring the OpenShift Container Platform content using the oc-mirror OpenShift CLI (oc) plugin creates resources, which include catalogSource-certified-operator-index.yaml and imageContentSourcePolicy.yaml . The ImageContentSourcePolicy resource associates the mirror registry with the source registry and redirects image pull requests from the online registries to the mirror registry. The CatalogSource resource is used by Operator Lifecycle Manager (OLM) to retrieve information about the available Operators in the mirror registry, which lets users discover and install Operators. After you install the cluster, you must install these resources into the cluster. 
Prerequisites You have mirrored the image set to the registry mirror in the disconnected environment. You have access to the cluster as a user with the cluster-admin role. Procedure Log in to the OpenShift CLI as a user with the cluster-admin role. Apply the YAML files from the results directory to the cluster: USD oc apply -f ./oc-mirror-workspace/results-<id>/ Verification Verify that the ImageContentSourcePolicy resources were successfully installed: USD oc get imagecontentsourcepolicy Verify that the CatalogSource resources were successfully installed: USD oc get catalogsource --all-namespaces 4.10.3. Configuring the default storage container After you install the cluster, you must install the Nutanix CSI Operator and configure the default storage container for the cluster. For more information, see the Nutanix documentation for installing the CSI Operator and configuring registry storage . 4.11. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.15, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. 4.12. Additional resources About remote health monitoring 4.13. Next steps If necessary, see Opt out of remote health reporting If necessary, see Registering your disconnected cluster Customize your cluster | [
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"cp certs/lin/* /etc/pki/ca-trust/source/anchors",
"update-ca-trust extract",
"./openshift-install coreos print-stream-json",
"\"nutanix\": { \"release\": \"411.86.202210041459-0\", \"formats\": { \"qcow2\": { \"disk\": { \"location\": \"https://rhcos.mirror.openshift.com/art/storage/releases/rhcos-4.11/411.86.202210041459-0/x86_64/rhcos-411.86.202210041459-0-nutanix.x86_64.qcow2\", \"sha256\": \"42e227cac6f11ac37ee8a2f9528bb3665146566890577fd55f9b950949e5a54b\"",
"platform: nutanix: clusterOSImage: http://example.com/images/rhcos-411.86.202210041459-0-nutanix.x86_64.qcow2",
"./openshift-install create install-config --dir <installation_directory> 1",
"platform: nutanix: clusterOSImage: http://mirror.example.com/images/rhcos-47.83.202103221318-0-nutanix.x86_64.qcow2",
"pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'",
"additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----",
"imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release",
"publish: Internal",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 3 platform: nutanix: 4 cpus: 2 coresPerSocket: 2 memoryMiB: 8196 osDisk: diskSizeGiB: 120 categories: 5 - key: <category_key_name> value: <category_value> controlPlane: 6 hyperthreading: Enabled 7 name: master replicas: 3 platform: nutanix: 8 cpus: 4 coresPerSocket: 2 memoryMiB: 16384 osDisk: diskSizeGiB: 120 categories: 9 - key: <category_key_name> value: <category_value> metadata: creationTimestamp: null name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: nutanix: apiVIP: 10.40.142.7 12 ingressVIP: 10.40.142.8 13 defaultMachinePlatform: bootType: Legacy categories: 14 - key: <category_key_name> value: <category_value> project: 15 type: name name: <project_name> prismCentral: endpoint: address: your.prismcentral.domainname 16 port: 9440 17 password: <password> 18 username: <username> 19 prismElements: - endpoint: address: your.prismelement.domainname port: 9440 uuid: 0005b0f1-8f43-a0f2-02b7-3cecef193712 subnetUUIDs: - c7938dc6-7659-453e-a688-e26020c68e43 clusterOSImage: http://example.com/images/rhcos-47.83.202103221318-0-nutanix.x86_64.qcow2 20 credentialsMode: Manual publish: External pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 23 additionalTrustBundle: | 24 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 25 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"apiVersion: v1 baseDomain: example.com compute: platform: nutanix: failureDomains: - name: <failure_domain_name> prismElement: name: <prism_element_name> uuid: <prism_element_uuid> subnetUUIDs: - <network_uuid>",
"apiVersion: v1 baseDomain: example.com compute: platform: nutanix: defaultMachinePlatform: failureDomains: - failure-domain-1 - failure-domain-2 - failure-domain-3",
"apiVersion: v1 baseDomain: example.com compute: controlPlane: platform: nutanix: failureDomains: - failure-domain-1 - failure-domain-2 - failure-domain-3 compute: platform: nutanix: failureDomains: - failure-domain-1 - failure-domain-2",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"credentials: - type: basic_auth 1 data: prismCentral: 2 username: <username_for_prism_central> password: <password_for_prism_central> prismElements: 3 - name: <name_of_prism_element> username: <username_for_prism_element> password: <password_for_prism_element>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: annotations: include.release.openshift.io/self-managed-high-availability: \"true\" labels: controller-tools.k8s.io: \"1.0\" name: openshift-machine-api-nutanix namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: NutanixProviderSpec secretRef: name: nutanix-credentials namespace: openshift-machine-api",
"ccoctl nutanix create-shared-secrets --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 1 --output-dir=<ccoctl_output_dir> \\ 2 --credentials-source-filepath=<path_to_credentials_file> 3",
"apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1",
"openshift-install create manifests --dir <installation_directory> 1",
"cp <ccoctl_output_dir>/manifests/*credentials.yaml ./<installation_directory>/manifests",
"ls ./<installation_directory>/manifests",
"cluster-config.yaml cluster-dns-02-config.yml cluster-infrastructure-02-config.yml cluster-ingress-02-config.yml cluster-network-01-crd.yml cluster-network-02-config.yml cluster-proxy-01-config.yaml cluster-scheduler-02-config.yml cvo-overrides.yaml kube-cloud-config.yaml kube-system-configmap-root-ca.yaml machine-config-server-tls-secret.yaml openshift-config-secret-pull-secret.yaml openshift-cloud-controller-manager-nutanix-credentials-credentials.yaml openshift-machine-api-nutanix-credentials-credentials.yaml",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc apply -f ./oc-mirror-workspace/results-<id>/",
"oc get imagecontentsourcepolicy",
"oc get catalogsource --all-namespaces"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_nutanix/installing-restricted-networks-nutanix-installer-provisioned |
3.3. Providing syspaths Subpackages | 3.3. Providing syspaths Subpackages In order to use a Software Collection's packages, users need to perform certain tasks that differ from when using conventional RPM packages. For example, they need to use an scl enable call, which changes environment variables, such as PATH or LD_LIBRARY_PATH , so that binaries installed in alternative locations can be found. Users also need to use alternative names for systemd services. Some scripts may also call binaries using full paths, for example /usr/bin/mysql , and as a result, those scripts may not work with the Software Collection. A recommended solution to address the problems described above is to use syspaths subpackages. The basic idea is to allow users to consume different versions of the same package without affecting the base system installation, but with the option to use the Software Collection packages as if they were conventional RPM packages, making the Software Collection easier to use. The optional syspaths subpackages (such as rh-mariadb102-syspaths ) provide shell wrappers and symbolic links that are installed into the standard path (typically, /usr/bin/ ). This means that by choosing to install the syspaths subpackages, users deliberately alter the base system installation, making the syspaths subpackages typically suitable for users who do not require installing and running multiple versions of the same package at a time. This is especially the case when using databases. Using syspaths subpackages avoids the need to adjust scripts in the Software Collection packages to make those scripts easier to use. Keep in mind that syspaths subpackages do conflict with the packages from the base system installation, so the conventional packages cannot be installed together with the syspaths subpackages. If that is a concern, consider employing a container-based technology to isolate the syspaths subpackages from the base system installation. 3.3.1. Naming syspaths Subpackages For each Software Collection that utilizes the concept of a syspaths subpackage, there are typically multiple syspaths subpackages provided. A syspaths subpackage is made available for each package that contains a file which can be provided with a wrapper or a symbolic link. On top of that, there is a Software Collection metapackage's subpackage named software_collection_1-syspaths , where software_collection_1 is the name of the Software Collection. The software_collection_1-syspaths subpackage requires the other syspaths subpackages included in the Software Collection. Installing the software_collection_1-syspaths subpackage thus results in installing all the other syspaths packages. For example, if you want to include wrappers for a binary file binary_1 included in the software_collection_1-package_1 package and a binary file binary_2 included in the software_collection_1-package_2 package, then create the following three syspaths subpackages in the software_collection_1 Software Collection: 3.3.2. Files Included in syspaths Subpackages The files suitable for inclusion in syspaths subpackages are executable shell wrappers for binaries that users interact with.
The following is an example of a wrapper for a binary file binary_1 included in the software_collection_1 Software Collection and located in /opt/rh/software_collection_1/root/usr/bin/binary_1 : #!/bin/bash source scl_source enable software_collection_1 exec "/opt/rh/software_collection_1/root/usr/bin/binary_1" "USD@" When you install this wrapper in /usr/bin/binary_1 and make it executable, users can then simply run the binary_1 command without the need to prefix it with scl enable software_collection_1 . The wrapper installed in /usr/bin/ sets up the correct environment and executes the target binary located within the /opt/provider/%{scl} file system hierarchy. 3.3.3. Limitations of syspaths Wrappers The fact that syspaths wrappers are shell scripts means that users cannot perform every possible task with the wrappers as with the target binary. For example, when debugging binaries using gdb , the full path pointing to the /opt/provider/%{scl} file system hierarchy needs to be used, because gdb does not work with wrapper shell scripts. 3.3.4. Symbolic Links in syspaths Subpackages In addition to wrappers for binary files, other files are suitable for installation outside of the /opt , /etc/opt/ , or /var/opt/ directories, and thus can be provided by syspaths subpackages. For example, you can make the path to database files (normally located under /var/opt/provider/%{scl} ) easier to discover with a symbolic link located in /var/lib/ . However, for some symbolic links, it is better not to install them in /var/lib/ under their original name as they may conflict with the name of the conventional RPM package from the base system installation. A good idea is to name the symbolic link /var/lib/software_collection_1-original_name or similar. For log files, a recommended name is /var/log/software_collection_1-original_name or similar. Keep in mind that the name itself is not important; the design goal here is to make those files easy to find under the /var/lib/ or /var/log/ directories. The same applies to configuration files; the goal is to make the symbolic links easy to discover under the /etc directory. 3.3.5. Services Without a Prefix systemd and SysV init services are popular examples of user interaction with daemon services. In general, users do not need to include scl enable in the command when starting services, because services are by design started in a clean environment. But still, users are required to use the correct service name, usually prefixed with the Software Collection name (for example, rh-mariadb102-mariadb ). syspaths subpackages allow users to use the conventional names of the services, such as mariadb , mongod , or postgresql , if the appropriate syspaths subpackage is installed. To achieve this, create a symbolic link that uses the conventional service name, does not include the Software Collection prefix, and points to the Software Collection's prefixed service file. For example, a service service_1 in the software_collection_1 Software Collection that is normally provided by the file /etc/rc.d/init.d/software_collection_1-service_1 can be accessed as service_1 by creating the following symbolic link: Or, in the case of a systemd unit file: | [
"software_collection_1-syspaths software_collection_1-package_1-syspaths software_collection_1-package_2-syspaths",
"#!/bin/bash source scl_source enable software_collection_1 exec \"/opt/rh/software_collection_1/root/usr/bin/binary_1\" \"USD@\"",
"/etc/rc.d/init.d/service_1 -> /etc/rc.d/init.d/software_collection_1-service_1",
"/usr/lib/systemd/system/service_1 -> /usr/lib/systemd/system/software_collection_1-service_1"
] | https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/packaging_guide/sect-providing_syspaths_subpackages |
Chapter 6. Viewing cluster logs by using Kibana | Chapter 6. Viewing cluster logs by using Kibana The logging subsystem includes a web console for visualizing collected log data. Currently, OpenShift Container Platform deploys the Kibana console for visualization. Using the log visualizer, you can do the following with your data: search and browse the data using the Discover tab. chart and map the data using the Visualize tab. create and view custom dashboards using the Dashboard tab. Use and configuration of the Kibana interface is beyond the scope of this documentation. For more information on using the interface, see the Kibana documentation . Note The audit logs are not stored in the internal OpenShift Container Platform Elasticsearch instance by default. To view the audit logs in Kibana, you must use the Log Forwarding API to configure a pipeline that uses the default output for audit logs. 6.1. Defining Kibana index patterns An index pattern defines the Elasticsearch indices that you want to visualize. To explore and visualize data in Kibana, you must create an index pattern. Prerequisites A user must have the cluster-admin role, the cluster-reader role, or both roles to view the infra and audit indices in Kibana. The default kubeadmin user has proper permissions to view these indices. If you can view the pods and logs in the default , kube- and openshift- projects, you should be able to access these indices. You can use the following command to check if the current user has appropriate permissions: USD oc auth can-i get pods/log -n <project> Example output yes Note The audit logs are not stored in the internal OpenShift Container Platform Elasticsearch instance by default. To view the audit logs in Kibana, you must use the Log Forwarding API to configure a pipeline that uses the default output for audit logs. Elasticsearch documents must be indexed before you can create index patterns. This is done automatically, but it might take a few minutes in a new or updated cluster. Procedure To define index patterns and create visualizations in Kibana: In the OpenShift Container Platform console, click the Application Launcher and select Logging . Create your Kibana index patterns by clicking Management Index Patterns Create index pattern : Each user must manually create index patterns when logging into Kibana the first time to see logs for their projects. Users must create an index pattern named app and use the @timestamp time field to view their container logs. Each admin user must create index patterns when logged into Kibana the first time for the app , infra , and audit indices using the @timestamp time field. Create Kibana Visualizations from the new index patterns. 6.2. Viewing cluster logs in Kibana You view cluster logs in the Kibana web console. The methods for viewing and visualizing your data in Kibana are beyond the scope of this documentation. For more information, refer to the Kibana documentation . Prerequisites The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. Kibana index patterns must exist. A user must have the cluster-admin role, the cluster-reader role, or both roles to view the infra and audit indices in Kibana. The default kubeadmin user has proper permissions to view these indices. If you can view the pods and logs in the default , kube- and openshift- projects, you should be able to access these indices.
You can use the following command to check if the current user has appropriate permissions: USD oc auth can-i get pods/log -n <project> Example output yes Note The audit logs are not stored in the internal OpenShift Container Platform Elasticsearch instance by default. To view the audit logs in Kibana, you must use the Log Forwarding API to configure a pipeline that uses the default output for audit logs. Procedure To view logs in Kibana: In the OpenShift Container Platform console, click the Application Launcher and select Logging . Log in using the same credentials you use to log in to the OpenShift Container Platform console. The Kibana interface launches. In Kibana, click Discover . Select the index pattern you created from the drop-down menu in the top-left corner: app , audit , or infra . The log data displays as time-stamped documents. Expand one of the time-stamped documents. Click the JSON tab to display the log entry for that document. Example 6.1. Sample infrastructure log entry in Kibana { "_index": "infra-000001", "_type": "_doc", "_id": "YmJmYTBlNDkZTRmLTliMGQtMjE3NmFiOGUyOWM3", "_version": 1, "_score": null, "_source": { "docker": { "container_id": "f85fa55bbef7bb783f041066be1e7c267a6b88c4603dfce213e32c1" }, "kubernetes": { "container_name": "registry-server", "namespace_name": "openshift-marketplace", "pod_name": "redhat-marketplace-n64gc", "container_image": "registry.redhat.io/redhat/redhat-marketplace-index:v4.7", "container_image_id": "registry.redhat.io/redhat/redhat-marketplace-index@sha256:65fc0c45aabb95809e376feb065771ecda9e5e59cc8b3024c4545c168f", "pod_id": "8f594ea2-c866-4b5c-a1c8-a50756704b2a", "host": "ip-10-0-182-28.us-east-2.compute.internal", "master_url": "https://kubernetes.default.svc", "namespace_id": "3abab127-7669-4eb3-b9ef-44c04ad68d38", "namespace_labels": { "openshift_io/cluster-monitoring": "true" }, "flat_labels": [ "catalogsource_operators_coreos_com/update=redhat-marketplace" ] }, "message": "time=\"2020-09-23T20:47:03Z\" level=info msg=\"serving registry\" database=/database/index.db port=50051", "level": "unknown", "hostname": "ip-10-0-182-28.internal", "pipeline_metadata": { "collector": { "ipaddr4": "10.0.182.28", "inputname": "fluent-plugin-systemd", "name": "fluentd", "received_at": "2020-09-23T20:47:15.007583+00:00", "version": "1.7.4 1.6.0" } }, "@timestamp": "2020-09-23T20:47:03.422465+00:00", "viaq_msg_id": "YmJmYTBlNDktMDMGQtMjE3NmFiOGUyOWM3", "openshift": { "labels": { "logging": "infra" } } }, "fields": { "@timestamp": [ "2020-09-23T20:47:03.422Z" ], "pipeline_metadata.collector.received_at": [ "2020-09-23T20:47:15.007Z" ] }, "sort": [ 1600894023422 ] } | [
"oc auth can-i get pods/log -n <project>",
"yes",
"oc auth can-i get pods/log -n <project>",
"yes",
"{ \"_index\": \"infra-000001\", \"_type\": \"_doc\", \"_id\": \"YmJmYTBlNDkZTRmLTliMGQtMjE3NmFiOGUyOWM3\", \"_version\": 1, \"_score\": null, \"_source\": { \"docker\": { \"container_id\": \"f85fa55bbef7bb783f041066be1e7c267a6b88c4603dfce213e32c1\" }, \"kubernetes\": { \"container_name\": \"registry-server\", \"namespace_name\": \"openshift-marketplace\", \"pod_name\": \"redhat-marketplace-n64gc\", \"container_image\": \"registry.redhat.io/redhat/redhat-marketplace-index:v4.7\", \"container_image_id\": \"registry.redhat.io/redhat/redhat-marketplace-index@sha256:65fc0c45aabb95809e376feb065771ecda9e5e59cc8b3024c4545c168f\", \"pod_id\": \"8f594ea2-c866-4b5c-a1c8-a50756704b2a\", \"host\": \"ip-10-0-182-28.us-east-2.compute.internal\", \"master_url\": \"https://kubernetes.default.svc\", \"namespace_id\": \"3abab127-7669-4eb3-b9ef-44c04ad68d38\", \"namespace_labels\": { \"openshift_io/cluster-monitoring\": \"true\" }, \"flat_labels\": [ \"catalogsource_operators_coreos_com/update=redhat-marketplace\" ] }, \"message\": \"time=\\\"2020-09-23T20:47:03Z\\\" level=info msg=\\\"serving registry\\\" database=/database/index.db port=50051\", \"level\": \"unknown\", \"hostname\": \"ip-10-0-182-28.internal\", \"pipeline_metadata\": { \"collector\": { \"ipaddr4\": \"10.0.182.28\", \"inputname\": \"fluent-plugin-systemd\", \"name\": \"fluentd\", \"received_at\": \"2020-09-23T20:47:15.007583+00:00\", \"version\": \"1.7.4 1.6.0\" } }, \"@timestamp\": \"2020-09-23T20:47:03.422465+00:00\", \"viaq_msg_id\": \"YmJmYTBlNDktMDMGQtMjE3NmFiOGUyOWM3\", \"openshift\": { \"labels\": { \"logging\": \"infra\" } } }, \"fields\": { \"@timestamp\": [ \"2020-09-23T20:47:03.422Z\" ], \"pipeline_metadata.collector.received_at\": [ \"2020-09-23T20:47:15.007Z\" ] }, \"sort\": [ 1600894023422 ] }"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/logging/cluster-logging-visualizer-using |
Appendix A. Using your subscription | Appendix A. Using your subscription AMQ is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal. A.1. Accessing your account Procedure Go to access.redhat.com . If you do not already have an account, create one. Log in to your account. A.2. Activating a subscription Procedure Go to access.redhat.com . Navigate to My Subscriptions . Navigate to Activate a subscription and enter your 16-digit activation number. A.3. Downloading release files To access .zip, .tar.gz, and other release files, use the customer portal to find the relevant files for download. If you are using RPM packages or the Red Hat Maven repository, this step is not required. Procedure Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads . Locate the Red Hat AMQ entries in the INTEGRATION AND AUTOMATION category. Select the desired AMQ product. The Software Downloads page opens. Click the Download link for your component. A.4. Registering your system for packages To install RPM packages for this product on Red Hat Enterprise Linux, your system must be registered. If you are using downloaded release files, this step is not required. Procedure Go to access.redhat.com . Navigate to Registration Assistant . Select your OS version and continue to the next page. Use the listed command in your system terminal to complete the registration. For more information about registering your system, see one of the following resources: Red Hat Enterprise Linux 7 - Registering the system and managing subscriptions Red Hat Enterprise Linux 8 - Registering the system and managing subscriptions | null | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_the_amq_core_protocol_jms_client/using_your_subscription
7.11. RHEA-2014:1596 - new packages: ksm_preload | 7.11. RHEA-2014:1596 - new packages: ksm_preload New ksm_preload packages are now available for Red Hat Enterprise Linux 6. The ksm_preload packages provide the ksm_preload library that allows applications to share memory pages. It also enables "legacy" applications to leverage Linux's memory deduplication. This enhancement update adds the ksm_preload packages to Red Hat Enterprise Linux 6. (BZ# 1034763 ) All users who require ksm_preload are advised to install these new packages. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/rhea-2014-1596 |
Chapter 93. Tokenize | Chapter 93. Tokenize The tokenizer language is a built-in language in camel-core , which is most often used with the Split EIP to split a message using a token-based strategy. The tokenizer language is intended to tokenize text documents using a specified delimiter pattern. It can also be used to tokenize XML documents with some limited capability. For a truly XML-aware tokenization, the use of the XML Tokenize language is recommended as it offers a faster, more efficient tokenization specifically for XML documents. 93.1. Tokenize Options The Tokenize language supports 11 options, which are listed below. Name Default Java Type Description token String Required The (start) token to use as tokenizer, for example you can use the new line token. You can use simple language as the token to support dynamic tokens. endToken String The end token to use as tokenizer if using start/end token pairs. You can use simple language as the token to support dynamic tokens. inheritNamespaceTagName String To inherit namespaces from a root/parent tag name when using XML You can use simple language as the tag name to support dynamic names. headerName String Name of header to tokenize instead of using the message body. regex Boolean If the token is a regular expression pattern. The default value is false. xml Boolean Whether the input is XML messages. This option must be set to true if working with XML payloads. includeTokens Boolean Whether to include the tokens in the parts when using pairs The default value is false. group String To group N parts together, for example to split big files into chunks of 1000 lines. You can use simple language as the group to support dynamic group sizes. groupDelimiter String Sets the delimiter to use when grouping. If this has not been set then token will be used as the delimiter. skipFirst Boolean To skip the very first element. trim Boolean Whether to trim the value to remove leading and trailing whitespaces and line breaks. 93.2. Example The following example shows how to take a request from the direct:a endpoint then split it into pieces using an Expression , then forward each piece to direct:b: <route> <from uri="direct:a"/> <split> <tokenize token="\n"/> <to uri="direct:b"/> </split> </route> And in Java DSL: from("direct:a") .split(body().tokenize("\n")) .to("direct:b"); 93.3. See Also For more examples see Split EIP. 93.4. Spring Boot Auto-Configuration When using tokenize with Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-core-starter</artifactId> </dependency> The component supports 147 options, which are listed below. Name Description Default Type camel.cloud.consul.service-discovery.acl-token Sets the ACL token to be used with Consul. String camel.cloud.consul.service-discovery.block-seconds The seconds to wait for a watch event, default 10 seconds. 10 Integer camel.cloud.consul.service-discovery.configurations Define additional configuration definitions. Map camel.cloud.consul.service-discovery.connect-timeout-millis Connect timeout for OkHttpClient. Long camel.cloud.consul.service-discovery.datacenter The data center. String camel.cloud.consul.service-discovery.enabled Enable the component. true Boolean camel.cloud.consul.service-discovery.password Sets the password to be used for basic authentication. String camel.cloud.consul.service-discovery.properties Set client properties to use. 
These properties are specific to what service call implementation are in use. For example if using ribbon, then the client properties are define in com.netflix.client.config.CommonClientConfigKey. Map camel.cloud.consul.service-discovery.read-timeout-millis Read timeout for OkHttpClient. Long camel.cloud.consul.service-discovery.url The Consul agent URL. String camel.cloud.consul.service-discovery.user-name Sets the username to be used for basic authentication. String camel.cloud.consul.service-discovery.write-timeout-millis Write timeout for OkHttpClient. Long camel.cloud.dns.service-discovery.configurations Define additional configuration definitions. Map camel.cloud.dns.service-discovery.domain The domain name;. String camel.cloud.dns.service-discovery.enabled Enable the component. true Boolean camel.cloud.dns.service-discovery.properties Set client properties to use. These properties are specific to what service call implementation are in use. For example if using ribbon, then the client properties are define in com.netflix.client.config.CommonClientConfigKey. Map camel.cloud.dns.service-discovery.proto The transport protocol of the desired service. _tcp String camel.cloud.etcd.service-discovery.configurations Define additional configuration definitions. Map camel.cloud.etcd.service-discovery.enabled Enable the component. true Boolean camel.cloud.etcd.service-discovery.password The password to use for basic authentication. String camel.cloud.etcd.service-discovery.properties Set client properties to use. These properties are specific to what service call implementation are in use. For example if using ribbon, then the client properties are define in com.netflix.client.config.CommonClientConfigKey. Map camel.cloud.etcd.service-discovery.service-path The path to look for for service discovery. /services/ String camel.cloud.etcd.service-discovery.timeout To set the maximum time an action could take to complete. Long camel.cloud.etcd.service-discovery.type To set the discovery type, valid values are on-demand and watch. on-demand String camel.cloud.etcd.service-discovery.uris The URIs the client can connect to. String camel.cloud.etcd.service-discovery.user-name The user name to use for basic authentication. String camel.cloud.kubernetes.service-discovery.api-version Sets the API version when using client lookup. String camel.cloud.kubernetes.service-discovery.ca-cert-data Sets the Certificate Authority data when using client lookup. String camel.cloud.kubernetes.service-discovery.ca-cert-file Sets the Certificate Authority data that are loaded from the file when using client lookup. String camel.cloud.kubernetes.service-discovery.client-cert-data Sets the Client Certificate data when using client lookup. String camel.cloud.kubernetes.service-discovery.client-cert-file Sets the Client Certificate data that are loaded from the file when using client lookup. String camel.cloud.kubernetes.service-discovery.client-key-algo Sets the Client Keystore algorithm, such as RSA when using client lookup. String camel.cloud.kubernetes.service-discovery.client-key-data Sets the Client Keystore data when using client lookup. String camel.cloud.kubernetes.service-discovery.client-key-file Sets the Client Keystore data that are loaded from the file when using client lookup. String camel.cloud.kubernetes.service-discovery.client-key-passphrase Sets the Client Keystore passphrase when using client lookup. 
String camel.cloud.kubernetes.service-discovery.configurations Define additional configuration definitions. Map camel.cloud.kubernetes.service-discovery.dns-domain Sets the DNS domain to use for DNS lookup. String camel.cloud.kubernetes.service-discovery.enabled Enable the component. true Boolean camel.cloud.kubernetes.service-discovery.lookup How to perform service lookup. Possible values: client, dns, environment. When using client, then the client queries the kubernetes master to obtain a list of active pods that provides the service, and then random (or round robin) select a pod. When using dns the service name is resolved as name.namespace.svc.dnsDomain. When using dnssrv the service name is resolved with SRV query for . ... svc... When using environment then environment variables are used to lookup the service. By default environment is used. environment String camel.cloud.kubernetes.service-discovery.master-url Sets the URL to the master when using client lookup. String camel.cloud.kubernetes.service-discovery.namespace Sets the namespace to use. Will by default use namespace from the ENV variable KUBERNETES_MASTER. String camel.cloud.kubernetes.service-discovery.oauth-token Sets the OAUTH token for authentication (instead of username/password) when using client lookup. String camel.cloud.kubernetes.service-discovery.password Sets the password for authentication when using client lookup. String camel.cloud.kubernetes.service-discovery.port-name Sets the Port Name to use for DNS/DNSSRV lookup. String camel.cloud.kubernetes.service-discovery.port-protocol Sets the Port Protocol to use for DNS/DNSSRV lookup. String camel.cloud.kubernetes.service-discovery.properties Set client properties to use. These properties are specific to what service call implementation are in use. For example if using ribbon, then the client properties are define in com.netflix.client.config.CommonClientConfigKey. Map camel.cloud.kubernetes.service-discovery.trust-certs Sets whether to turn on trust certificate check when using client lookup. false Boolean camel.cloud.kubernetes.service-discovery.username Sets the username for authentication when using client lookup. String camel.cloud.ribbon.load-balancer.client-name Sets the Ribbon client name. String camel.cloud.ribbon.load-balancer.configurations Define additional configuration definitions. Map camel.cloud.ribbon.load-balancer.enabled Enable the component. true Boolean camel.cloud.ribbon.load-balancer.namespace The namespace. String camel.cloud.ribbon.load-balancer.password The password. String camel.cloud.ribbon.load-balancer.properties Set client properties to use. These properties are specific to what service call implementation are in use. For example if using ribbon, then the client properties are define in com.netflix.client.config.CommonClientConfigKey. Map camel.cloud.ribbon.load-balancer.username The username. String camel.hystrix.allow-maximum-size-to-diverge-from-core-size Allows the configuration for maximumSize to take effect. That value can then be equal to, or higher, than coreSize. false Boolean camel.hystrix.circuit-breaker-enabled Whether to use a HystrixCircuitBreaker or not. If false no circuit-breaker logic will be used and all requests permitted. This is similar in effect to circuitBreakerForceClosed() except that continues tracking metrics and knowing whether it should be open/closed, this property results in not even instantiating a circuit-breaker. 
true Boolean camel.hystrix.circuit-breaker-error-threshold-percentage Error percentage threshold (as whole number such as 50) at which point the circuit breaker will trip open and reject requests. It will stay tripped for the duration defined in circuitBreakerSleepWindowInMilliseconds; The error percentage this is compared against comes from HystrixCommandMetrics.getHealthCounts(). 50 Integer camel.hystrix.circuit-breaker-force-closed If true the HystrixCircuitBreaker#allowRequest() will always return true to allow requests regardless of the error percentage from HystrixCommandMetrics.getHealthCounts(). The circuitBreakerForceOpen() property takes precedence so if it set to true this property does nothing. false Boolean camel.hystrix.circuit-breaker-force-open If true the HystrixCircuitBreaker.allowRequest() will always return false, causing the circuit to be open (tripped) and reject all requests. This property takes precedence over circuitBreakerForceClosed();. false Boolean camel.hystrix.circuit-breaker-request-volume-threshold Minimum number of requests in the metricsRollingStatisticalWindowInMilliseconds() that must exist before the HystrixCircuitBreaker will trip. If below this number the circuit will not trip regardless of error percentage. 20 Integer camel.hystrix.circuit-breaker-sleep-window-in-milliseconds The time in milliseconds after a HystrixCircuitBreaker trips open that it should wait before trying requests again. 5000 Integer camel.hystrix.configurations Define additional configuration definitions. Map camel.hystrix.core-pool-size Core thread-pool size that gets passed to java.util.concurrent.ThreadPoolExecutor#setCorePoolSize(int). 10 Integer camel.hystrix.enabled Enable the component. true Boolean camel.hystrix.execution-isolation-semaphore-max-concurrent-requests Number of concurrent requests permitted to HystrixCommand.run(). Requests beyond the concurrent limit will be rejected. Applicable only when executionIsolationStrategy == SEMAPHORE. 20 Integer camel.hystrix.execution-isolation-strategy What isolation strategy HystrixCommand.run() will be executed with. If THREAD then it will be executed on a separate thread and concurrent requests limited by the number of threads in the thread-pool. If SEMAPHORE then it will be executed on the calling thread and concurrent requests limited by the semaphore count. THREAD String camel.hystrix.execution-isolation-thread-interrupt-on-timeout Whether the execution thread should attempt an interrupt (using Future#cancel ) when a thread times out. Applicable only when executionIsolationStrategy() == THREAD. true Boolean camel.hystrix.execution-timeout-enabled Whether the timeout mechanism is enabled for this command. true Boolean camel.hystrix.execution-timeout-in-milliseconds Time in milliseconds at which point the command will timeout and halt execution. If executionIsolationThreadInterruptOnTimeout == true and the command is thread-isolated, the executing thread will be interrupted. If the command is semaphore-isolated and a HystrixObservableCommand, that command will get unsubscribed. 1000 Integer camel.hystrix.fallback-enabled Whether HystrixCommand.getFallback() should be attempted when failure occurs. true Boolean camel.hystrix.fallback-isolation-semaphore-max-concurrent-requests Number of concurrent requests permitted to HystrixCommand.getFallback(). Requests beyond the concurrent limit will fail-fast and not attempt retrieving a fallback. 10 Integer camel.hystrix.group-key Sets the group key to use. 
The default value is CamelHystrix. CamelHystrix String camel.hystrix.keep-alive-time Keep-alive time in minutes that gets passed to ThreadPoolExecutor#setKeepAliveTime(long,TimeUnit). 1 Integer camel.hystrix.max-queue-size Max queue size that gets passed to BlockingQueue in HystrixConcurrencyStrategy.getBlockingQueue(int) This should only affect the instantiation of a threadpool - it is not eliglible to change a queue size on the fly. For that, use queueSizeRejectionThreshold(). -1 Integer camel.hystrix.maximum-size Maximum thread-pool size that gets passed to ThreadPoolExecutor#setMaximumPoolSize(int) . This is the maximum amount of concurrency that can be supported without starting to reject HystrixCommands. Please note that this setting only takes effect if you also set allowMaximumSizeToDivergeFromCoreSize. 10 Integer camel.hystrix.metrics-health-snapshot-interval-in-milliseconds Time in milliseconds to wait between allowing health snapshots to be taken that calculate success and error percentages and affect HystrixCircuitBreaker.isOpen() status. On high-volume circuits the continual calculation of error percentage can become CPU intensive thus this controls how often it is calculated. 500 Integer camel.hystrix.metrics-rolling-percentile-bucket-size Maximum number of values stored in each bucket of the rolling percentile. This is passed into HystrixRollingPercentile inside HystrixCommandMetrics. 10 Integer camel.hystrix.metrics-rolling-percentile-enabled Whether percentile metrics should be captured using HystrixRollingPercentile inside HystrixCommandMetrics. true Boolean camel.hystrix.metrics-rolling-percentile-window-buckets Number of buckets the rolling percentile window is broken into. This is passed into HystrixRollingPercentile inside HystrixCommandMetrics. 6 Integer camel.hystrix.metrics-rolling-percentile-window-in-milliseconds Duration of percentile rolling window in milliseconds. This is passed into HystrixRollingPercentile inside HystrixCommandMetrics. 10000 Integer camel.hystrix.metrics-rolling-statistical-window-buckets Number of buckets the rolling statistical window is broken into. This is passed into HystrixRollingNumber inside HystrixCommandMetrics. 10 Integer camel.hystrix.metrics-rolling-statistical-window-in-milliseconds This property sets the duration of the statistical rolling window, in milliseconds. This is how long metrics are kept for the thread pool. The window is divided into buckets and rolls by those increments. 10000 Integer camel.hystrix.queue-size-rejection-threshold Queue size rejection threshold is an artificial max size at which rejections will occur even if maxQueueSize has not been reached. This is done because the maxQueueSize of a BlockingQueue can not be dynamically changed and we want to support dynamically changing the queue size that affects rejections. This is used by HystrixCommand when queuing a thread for execution. 5 Integer camel.hystrix.request-log-enabled Whether HystrixCommand execution and events should be logged to HystrixRequestLog. true Boolean camel.hystrix.thread-pool-key Sets the thread pool key to use. Will by default use the same value as groupKey has been configured to use. CamelHystrix String camel.hystrix.thread-pool-rolling-number-statistical-window-buckets Number of buckets the rolling statistical window is broken into. This is passed into HystrixRollingNumber inside each HystrixThreadPoolMetrics instance. 
10 Integer camel.hystrix.thread-pool-rolling-number-statistical-window-in-milliseconds Duration of statistical rolling window in milliseconds. This is passed into HystrixRollingNumber inside each HystrixThreadPoolMetrics instance. 10000 Integer camel.language.constant.enabled Whether to enable auto configuration of the constant language. This is enabled by default. Boolean camel.language.constant.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.csimple.enabled Whether to enable auto configuration of the csimple language. This is enabled by default. Boolean camel.language.csimple.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.exchangeproperty.enabled Whether to enable auto configuration of the exchangeProperty language. This is enabled by default. Boolean camel.language.exchangeproperty.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.file.enabled Whether to enable auto configuration of the file language. This is enabled by default. Boolean camel.language.file.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.header.enabled Whether to enable auto configuration of the header language. This is enabled by default. Boolean camel.language.header.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.ref.enabled Whether to enable auto configuration of the ref language. This is enabled by default. Boolean camel.language.ref.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.simple.enabled Whether to enable auto configuration of the simple language. This is enabled by default. Boolean camel.language.simple.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.tokenize.enabled Whether to enable auto configuration of the tokenize language. This is enabled by default. Boolean camel.language.tokenize.group-delimiter Sets the delimiter to use when grouping. If this has not been set then token will be used as the delimiter. String camel.language.tokenize.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.resilience4j.automatic-transition-from-open-to-half-open-enabled Enables automatic transition from OPEN to HALF_OPEN state once the waitDurationInOpenState has passed. false Boolean camel.resilience4j.circuit-breaker-ref Refers to an existing io.github.resilience4j.circuitbreaker.CircuitBreaker instance to lookup and use from the registry. When using this, then any other circuit breaker options are not in use. String camel.resilience4j.config-ref Refers to an existing io.github.resilience4j.circuitbreaker.CircuitBreakerConfig instance to lookup and use from the registry. String camel.resilience4j.configurations Define additional configuration definitions. Map camel.resilience4j.enabled Enable the component. true Boolean camel.resilience4j.failure-rate-threshold Configures the failure rate threshold in percentage. If the failure rate is equal or greater than the threshold the CircuitBreaker transitions to open and starts short-circuiting calls. The threshold must be greater than 0 and not greater than 100. Default value is 50 percentage. 
Float camel.resilience4j.minimum-number-of-calls Configures the minimum number of calls which are required (per sliding window period) before the CircuitBreaker can calculate the error rate. For example, if minimumNumberOfCalls is 10, then at least 10 calls must be recorded, before the failure rate can be calculated. If only 9 calls have been recorded the CircuitBreaker will not transition to open even if all 9 calls have failed. Default minimumNumberOfCalls is 100. 100 Integer camel.resilience4j.permitted-number-of-calls-in-half-open-state Configures the number of permitted calls when the CircuitBreaker is half open. The size must be greater than 0. Default size is 10. 10 Integer camel.resilience4j.sliding-window-size Configures the size of the sliding window which is used to record the outcome of calls when the CircuitBreaker is closed. slidingWindowSize configures the size of the sliding window. Sliding window can either be count-based or time-based. If slidingWindowType is COUNT_BASED, the last slidingWindowSize calls are recorded and aggregated. If slidingWindowType is TIME_BASED, the calls of the last slidingWindowSize seconds are recorded and aggregated. The slidingWindowSize must be greater than 0. The minimumNumberOfCalls must be greater than 0. If the slidingWindowType is COUNT_BASED, the minimumNumberOfCalls cannot be greater than slidingWindowSize . If the slidingWindowType is TIME_BASED, you can pick whatever you want. Default slidingWindowSize is 100. 100 Integer camel.resilience4j.sliding-window-type Configures the type of the sliding window which is used to record the outcome of calls when the CircuitBreaker is closed. Sliding window can either be count-based or time-based. If slidingWindowType is COUNT_BASED, the last slidingWindowSize calls are recorded and aggregated. If slidingWindowType is TIME_BASED, the calls of the last slidingWindowSize seconds are recorded and aggregated. Default slidingWindowType is COUNT_BASED. COUNT_BASED String camel.resilience4j.slow-call-duration-threshold Configures the duration threshold (seconds) above which calls are considered as slow and increase the slow calls percentage. Default value is 60 seconds. 60 Integer camel.resilience4j.slow-call-rate-threshold Configures a threshold in percentage. The CircuitBreaker considers a call as slow when the call duration is greater than slowCallDurationThreshold Duration. When the percentage of slow calls is equal or greater the threshold, the CircuitBreaker transitions to open and starts short-circuiting calls. The threshold must be greater than 0 and not greater than 100. Default value is 100 percentage which means that all recorded calls must be slower than slowCallDurationThreshold. Float camel.resilience4j.wait-duration-in-open-state Configures the wait duration (in seconds) which specifies how long the CircuitBreaker should stay open, before it switches to half open. Default value is 60 seconds. 60 Integer camel.resilience4j.writable-stack-trace-enabled Enables writable stack traces. When set to false, Exception.getStackTrace returns a zero length array. This may be used to reduce log spam when the circuit breaker is open as the cause of the exceptions is already known (the circuit breaker is short-circuiting calls). 
true Boolean camel.rest.api-component The name of the Camel component to use as the REST API (such as swagger) If no API Component has been explicit configured, then Camel will lookup if there is a Camel component responsible for servicing and generating the REST API documentation, or if a org.apache.camel.spi.RestApiProcessorFactory is registered in the registry. If either one is found, then that is being used. String camel.rest.api-context-path Sets a leading API context-path the REST API services will be using. This can be used when using components such as camel-servlet where the deployed web application is deployed using a context-path. String camel.rest.api-context-route-id Sets the route id to use for the route that services the REST API. The route will by default use an auto assigned route id. String camel.rest.api-host To use an specific hostname for the API documentation (eg swagger) This can be used to override the generated host with this configured hostname. String camel.rest.api-property Allows to configure as many additional properties for the api documentation (swagger). For example set property api.title to my cool stuff. Map camel.rest.api-vendor-extension Whether vendor extension is enabled in the Rest APIs. If enabled then Camel will include additional information as vendor extension (eg keys starting with x-) such as route ids, class names etc. Not all 3rd party API gateways and tools supports vendor-extensions when importing your API docs. false Boolean camel.rest.binding-mode Sets the binding mode to use. The default value is off. RestBindingMode camel.rest.client-request-validation Whether to enable validation of the client request to check whether the Content-Type and Accept headers from the client is supported by the Rest-DSL configuration of its consumes/produces settings. This can be turned on, to enable this check. In case of validation error, then HTTP Status codes 415 or 406 is returned. The default value is false. false Boolean camel.rest.component The Camel Rest component to use for the REST transport (consumer), such as netty-http, jetty, servlet, undertow. If no component has been explicit configured, then Camel will lookup if there is a Camel component that integrates with the Rest DSL, or if a org.apache.camel.spi.RestConsumerFactory is registered in the registry. If either one is found, then that is being used. String camel.rest.component-property Allows to configure as many additional properties for the rest component in use. Map camel.rest.consumer-property Allows to configure as many additional properties for the rest consumer in use. Map camel.rest.context-path Sets a leading context-path the REST services will be using. This can be used when using components such as camel-servlet where the deployed web application is deployed using a context-path. Or for components such as camel-jetty or camel-netty-http that includes a HTTP server. String camel.rest.cors-headers Allows to configure custom CORS headers. Map camel.rest.data-format-property Allows to configure as many additional properties for the data formats in use. For example set property prettyPrint to true to have json outputted in pretty mode. The properties can be prefixed to denote the option is only for either JSON or XML and for either the IN or the OUT. The prefixes are: json.in. json.out. xml.in. xml.out. For example a key with value xml.out.mustBeJAXBElement is only for the XML data format for the outgoing. A key without a prefix is a common key for all situations. 
Map camel.rest.enable-cors Whether to enable CORS headers in the HTTP response. The default value is false. false Boolean camel.rest.endpoint-property Allows to configure as many additional properties for the rest endpoint in use. Map camel.rest.host The hostname to use for exposing the REST service. String camel.rest.host-name-resolver If no hostname has been explicit configured, then this resolver is used to compute the hostname the REST service will be using. RestHostNameResolver camel.rest.json-data-format Name of specific json data format to use. By default json-jackson will be used. Important: This option is only for setting a custom name of the data format, not to refer to an existing data format instance. String camel.rest.port The port number to use for exposing the REST service. Notice if you use servlet component then the port number configured here does not apply, as the port number in use is the actual port number the servlet component is using. eg if using Apache Tomcat its the tomcat http port, if using Apache Karaf its the HTTP service in Karaf that uses port 8181 by default etc. Though in those situations setting the port number here, allows tooling and JMX to know the port number, so its recommended to set the port number to the number that the servlet engine uses. String camel.rest.producer-api-doc Sets the location of the api document (swagger api) the REST producer will use to validate the REST uri and query parameters are valid accordingly to the api document. This requires adding camel-swagger-java to the classpath, and any miss configuration will let Camel fail on startup and report the error(s). The location of the api document is loaded from classpath by default, but you can use file: or http: to refer to resources to load from file or http url. String camel.rest.producer-component Sets the name of the Camel component to use as the REST producer. String camel.rest.scheme The scheme to use for exposing the REST service. Usually http or https is supported. The default value is http. String camel.rest.skip-binding-on-error-code Whether to skip binding on output if there is a custom HTTP error code header. This allows to build custom error messages that do not bind to json / xml etc, as success messages otherwise will do. false Boolean camel.rest.use-x-forward-headers Whether to use X-Forward headers for Host and related setting. The default value is true. true Boolean camel.rest.xml-data-format Name of specific XML data format to use. By default jaxb will be used. Important: This option is only for setting a custom name of the data format, not to refer to an existing data format instance. String camel.rest.api-context-id-pattern Deprecated Sets an CamelContext id pattern to only allow Rest APIs from rest services within CamelContext's which name matches the pattern. The pattern name refers to the CamelContext name, to match on the current CamelContext only. For any other value, the pattern uses the rules from PatternHelper#matchPattern(String,String). String camel.rest.api-context-listing Deprecated Sets whether listing of all available CamelContext's with REST services in the JVM is enabled. If enabled it allows to discover these contexts, if false then only the current CamelContext is in use. false Boolean | [
"<route> <from uri=\"direct:a\"/> <split> <tokenize token=\"\\n\"/> <to uri=\"direct:b\"/> </split> </route>",
"from(\"direct:a\") .split(body().tokenize(\"\\n\")) .to(\"direct:b\");",
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-core-starter</artifactId> </dependency>"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_for_spring_boot/3.20/html/camel_spring_boot_reference/csb-camel-tokenize-language-starter |
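The camel.resilience4j.* keys in the reference above only change the global defaults. To show how the same settings look when applied to a single route, here is a minimal Java DSL sketch; it assumes a Camel 3.x runtime with camel-resilience4j on the classpath, and the endpoint URIs, class name, and threshold values are illustrative placeholders rather than anything defined in this reference.

import org.apache.camel.builder.RouteBuilder;

public class OrderRouteSketch extends RouteBuilder {
    @Override
    public void configure() {
        // Per-route settings given on the EIP override the camel.resilience4j.* defaults above.
        from("direct:placeOrder")                          // placeholder input endpoint
            .circuitBreaker()
                .resilience4jConfiguration()
                    .failureRateThreshold(50)              // trip when half the recorded calls fail
                    .slidingWindowSize(100)                // evaluate over the last 100 calls
                    .minimumNumberOfCalls(10)              // need at least 10 calls before tripping
                .end()
                .to("http://backend.example.com/orders")   // placeholder downstream service
            .onFallback()
                .setBody(constant("order service unavailable"))
            .end();
    }
}

Configuring the EIP directly like this is useful when one downstream service needs a tighter failure threshold than the auto-configured defaults give the rest of the application.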
Chapter 28. Retrieving diagnostic and troubleshooting data | Chapter 28. Retrieving diagnostic and troubleshooting data The report.sh diagnostics tool is a script provided by Red Hat to gather essential data for troubleshooting Streams for Apache Kafka deployments on OpenShift. It collects relevant logs, configuration files, and other diagnostic data to assist in identifying and resolving issues. When you run the script, you can specify additional parameters to retrieve specific data. Prerequisites Bash 4 or newer to run the script. The OpenShift oc command-line tool is installed and configured to connect to the running cluster. This establishes the necessary authentication for the oc command-line tool to interact with your cluster and retrieve the required diagnostic data. Procedure Download and extract the tool. The diagnostics tool is available from Streams for Apache Kafka software downloads page . From the directory where you extracted the tool, open a terminal and run the reporting tool: ./report.sh --namespace=<cluster_namespace> --cluster=<cluster_name> --out-dir=<local_output_directory> Replace <cluster_namespace> with the actual OpenShift namespace of your Streams for Apache Kafka deployment, <cluster_name> with the name of your Kafka cluster, and <local_output_directory> with the path to the local directory where you want to save the generated report. If you don't specify a directory, a temporary directory is created. Include other optional reporting options, as necessary: --bridge=<string> Specify the name of the Kafka Bridge cluster to get data on its pods and logs. --connect=<string> Specify the name of the Kafka Connect cluster to get data on its pods and logs. --mm2=<string> Specify the name of the Mirror Maker 2 cluster to get data on its pods and logs. --secrets=(off|hidden|all) Specify the secret verbosity level. The default is hidden . The available options are as follows: all : Secret keys and data values are reported. hidden : Secrets with only keys are reported. Data values, such as passwords, are removed. off : Secrets are not reported at all. Example request with data collection options ./report.sh --namespace=my-amq-streams-namespace --cluster=my-kafka-cluster --bridge=my-bridge-component --secrets=all --out-dir=~/reports Note If required, assign execute permissions on the script to your user with the chmod command. For example, chmod +x report.sh . After the script has finished executing, the output directory contains files and directories of logs, configurations, and other diagnostic data collected for each component of your Streams for Apache Kafka deployment. 
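There is no API for the collector beyond the script itself, but if you drive diagnostics from JVM-based tooling you can shell out to it. The following is a small convenience sketch, not part of the tool: the script location, namespace, cluster name, and output directory are assumptions you would replace with your own values, and the flags are the ones documented above.

import java.io.IOException;
import java.nio.file.Path;
import java.util.List;

public class ReportRunner {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Assumed locations; point these at your extracted tool and desired output directory.
        Path script = Path.of("report.sh");
        Path outDir = Path.of(System.getProperty("user.home"), "reports");

        List<String> command = List.of(
                "bash", script.toString(),
                "--namespace=my-amq-streams-namespace",  // namespace of the deployment
                "--cluster=my-kafka-cluster",            // name of the Kafka cluster
                "--secrets=hidden",                      // report secret keys only, never values
                "--out-dir=" + outDir);

        // Stream the script's output straight to this console and wait for it to finish.
        Process process = new ProcessBuilder(command).inheritIO().start();
        int exitCode = process.waitFor();
        System.out.println("report.sh finished with exit code " + exitCode);
    }
}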
Data collected by the reporting diagnostics tool
Data on the following components is returned if present:
Cluster Operator
  Deployment YAML and logs
  All related pods and their logs
  YAML files for resources related to the cluster operator (ClusterRoles, ClusterRoleBindings)
Drain Cleaner (if present)
  Deployment YAML and logs
  Pod logs
Custom Resources
  Custom Resource Definitions (CRD) YAML
  YAML files for all related Custom Resources (CR)
Events
  Events related to the specified namespace
Configurations
  Kafka pod logs and configuration file ( strimzi.properties )
  Zookeeper pod logs and configuration file ( zookeeper.properties )
  Entity Operator (Topic Operator, User Operator) pod logs
  Cruise Control pod logs
  Kafka Exporter pod logs
  Bridge pod logs if specified in the options
  Connect pod logs if specified in the options
  MirrorMaker 2 pod logs if specified in the options
Secrets (if requested in the options)
  YAML files for all secrets related to the specified Kafka cluster | [
"./report.sh --namespace=<cluster_namespace> --cluster=<cluster_name> --out-dir=<local_output_directory>",
"./report.sh --namespace=my-amq-streams-namespace --cluster=my-kafka-cluster --bridge=my-bridge-component --secrets=all --out-dir=~/reports"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/deploying_and_managing_streams_for_apache_kafka_on_openshift/proc-reporting-tool-str |
Chapter 98. Quartz | Chapter 98. Quartz Only consumer is supported The Quartz component provides a scheduled delivery of messages using the Quartz Scheduler 2.x . Each endpoint represents a different timer (in Quartz terms, a Trigger and JobDetail). 98.1. Dependencies When using quartz with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-quartz-starter</artifactId> </dependency> 98.2. URI format The component uses either a CronTrigger or a SimpleTrigger . If no cron expression is provided, the component uses a simple trigger. If no groupName is provided, the quartz component uses the Camel group name. 98.3. Configuring Options Camel components are configured on two levels: Component level Endpoint level 98.3.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 98.3.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allows you to externalize the configuration from your code, giving you more flexible and reusable code. 98.4. Component Options The Quartz component supports 13 options, which are listed below. Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean enableJmx (consumer) Whether to enable Quartz JMX which allows to manage the Quartz scheduler from JMX. This options is default true. true boolean prefixInstanceName (consumer) Whether to prefix the Quartz Scheduler instance name with the CamelContext name. This is enabled by default, to let each CamelContext use its own Quartz scheduler instance by default. You can set this option to false to reuse Quartz scheduler instances between multiple CamelContext's. true boolean prefixJobNameWithEndpointId (consumer) Whether to prefix the quartz job with the endpoint id. This option is default false. false boolean properties (consumer) Properties to configure the Quartz scheduler. Map propertiesFile (consumer) File name of the properties to load from the classpath. 
String propertiesRef (consumer) References to an existing Properties or Map to lookup in the registry to use for configuring quartz. String autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean scheduler (advanced) To use the custom configured Quartz scheduler, instead of creating a new Scheduler. Scheduler schedulerFactory (advanced) To use the custom SchedulerFactory which is used to create the Scheduler. SchedulerFactory autoStartScheduler (scheduler) Whether or not the scheduler should be auto started. This options is default true. true boolean interruptJobsOnShutdown (scheduler) Whether to interrupt jobs on shutdown which forces the scheduler to shutdown quicker and attempt to interrupt any running jobs. If this is enabled then any running jobs can fail due to being interrupted. When a job is interrupted then Camel will mark the exchange to stop continue routing and set java.util.concurrent.RejectedExecutionException as caused exception. Therefore use this with care, as its often better to allow Camel jobs to complete and shutdown gracefully. false boolean startDelayedSeconds (scheduler) Seconds to wait before starting the quartz scheduler. int 98.5. Endpoint Options The Quartz endpoint is configured using URI syntax: with the following path and query parameters: 98.5.1. Path Parameters (2 parameters) Name Description Default Type groupName (consumer) The quartz group name to use. The combination of group name and trigger name should be unique. Camel String triggerName (consumer) Required The quartz trigger name to use. The combination of group name and trigger name should be unique. String 98.5.2. Query Parameters (17 parameters) Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean cron (consumer) Specifies a cron expression to define when to trigger. String deleteJob (consumer) If set to true, then the trigger automatically delete when route stop. Else if set to false, it will remain in scheduler. When set to false, it will also mean user may reuse pre-configured trigger with camel Uri. Just ensure the names match. Notice you cannot have both deleteJob and pauseJob set to true. true boolean durableJob (consumer) Whether or not the job should remain stored after it is orphaned (no triggers point to it). false boolean pauseJob (consumer) If set to true, then the trigger automatically pauses when route stop. Else if set to false, it will remain in scheduler. When set to false, it will also mean user may reuse pre-configured trigger with camel Uri. Just ensure the names match. Notice you cannot have both deleteJob and pauseJob set to true. false boolean recoverableJob (consumer) Instructs the scheduler whether or not the job should be re-executed if a 'recovery' or 'fail-over' situation is encountered. 
false boolean stateful (consumer) Uses a Quartz PersistJobDataAfterExecution and DisallowConcurrentExecution instead of the default job. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern customCalendar (advanced) Specifies a custom calendar to avoid specific range of date. Calendar jobParameters (advanced) To configure additional options on the job. Map prefixJobNameWithEndpointId (advanced) Whether the job name should be prefixed with endpoint id. false boolean triggerParameters (advanced) To configure additional options on the trigger. Map usingFixedCamelContextName (advanced) If it is true, JobDataMap uses the CamelContext name directly to reference the CamelContext, if it is false, JobDataMap uses use the CamelContext management name which could be changed during the deploy time. false boolean autoStartScheduler (scheduler) Whether or not the scheduler should be auto started. true boolean startDelayedSeconds (scheduler) Seconds to wait before starting the quartz scheduler. int triggerStartDelay (scheduler) In case of scheduler has already started, we want the trigger start slightly after current time to ensure endpoint is fully started before the job kicks in. Negative value shifts trigger start time in the past. 500 long 98.5.3. Configuring quartz.properties file By default Quartz will look for a quartz.properties file in the org/quartz directory of the classpath. If you are using WAR deployments this means just drop the quartz.properties in WEB-INF/classes/org/quartz . However the Camel Quartz component also allows you to configure properties: Parameter Default Type Description properties null Properties You can configure a java.util.Properties instance. propertiesFile null String File name of the properties to load from the classpath To do this you can configure this in Spring XML as follows <bean id="quartz" class="org.apache.camel.component.quartz.QuartzComponent"> <property name="propertiesFile" value="com/mycompany/myquartz.properties"/> </bean> 98.6. Enabling Quartz scheduler in JMX You need to configure the quartz scheduler properties to enable JMX. That is typically setting the option "org.quartz.scheduler.jmx.export" to a true value in the configuration file. This option is set to true by default, unless explicitly disabled. 98.7. Starting the Quartz scheduler The Quartz component offers an option to let the Quartz scheduler be started delayed, or not auto started at all. This is an example: <bean id="quartz" class="org.apache.camel.component.quartz.QuartzComponent"> <property name="startDelayedSeconds" value="5"/> </bean> 98.8. Clustering If you use Quartz in clustered mode, e.g. the JobStore is clustered. Then the Quartz component will not pause/remove triggers when a node is being stopped/shutdown. This allows the trigger to keep running on the other nodes in the cluster. Note When running in clustered node no checking is done to ensure unique job name/group for endpoints. 98.9. Message Headers Camel adds the getters from the Quartz Execution Context as header values. 
The following headers are added: calendar , fireTime , jobDetail , jobInstance , jobRuntTime , mergedJobDataMap , nextFireTime , previousFireTime , refireCount , result , scheduledFireTime , scheduler , trigger , triggerName , triggerGroup . The fireTime header contains the java.util.Date of when the exchange was fired. 98.10. Using Cron Triggers Quartz supports Cron-like expressions for specifying timers in a handy format. You can use these expressions in the cron URI parameter; though to preserve valid URI encoding we allow + to be used instead of spaces. For example, the following will fire a message every five minutes starting at 12pm (noon) to 6pm on weekdays: from("quartz://myGroup/myTimerName?cron=0+0/5+12-18+?+*+MON-FRI") .to("activemq:Totally.Rocks"); which is equivalent to using the cron expression The following table shows the URI character encodings we use to preserve valid URI syntax: URI Character Cron character + Space 98.11. Specifying time zone The Quartz Scheduler allows you to configure time zone per trigger. For example to use a timezone of your country, then you can do as follows: The timeZone value is the values accepted by java.util.TimeZone . 98.12. Configuring misfire instructions The quartz scheduler can be configured with a misfire instruction to handle misfire situations for the trigger. The concrete trigger type that you are using will have defined a set of additional MISFIRE_INSTRUCTION_XXX constants that may be set as this property's value. For example to configure the simple trigger to use misfire instruction 4: And likewise you can configure the cron trigger with one of its misfire instructions as well: The simple and cron triggers has the following misfire instructions representative: 98.12.1. SimpleTrigger.MISFIRE_INSTRUCTION_FIRE_NOW = 1 (default) Instructs the Scheduler that upon a mis-fire situation, the SimpleTrigger wants to be fired now by Scheduler. This instruction should typically only be used for 'one-shot' (non-repeating) Triggers. If it is used on a trigger with a repeat count > 0 then it is equivalent to the instruction MISFIRE_INSTRUCTION_RESCHEDULE_NOW_WITH_REMAINING_REPEAT_COUNT. 98.12.2. SimpleTrigger.MISFIRE_INSTRUCTION_RESCHEDULE_NOW_WITH_EXISTING_REPEAT_COUNT = 2 Instructs the Scheduler that upon a mis-fire situation, the SimpleTrigger wants to be re-scheduled to 'now' (even if the associated Calendar excludes 'now') with the repeat count left as-is. This does obey the Trigger end-time however, so if 'now' is after the end-time the Trigger will not fire again. Use of this instruction causes the trigger to 'forget' the start-time and repeat-count that it was originally setup with (this is only an issue if you for some reason wanted to be able to tell what the original values were at some later time). 98.12.3. SimpleTrigger.MISFIRE_INSTRUCTION_RESCHEDULE_NOW_WITH_REMAINING_REPEAT_COUNT = 3 Instructs the Scheduler that upon a mis-fire situation, the SimpleTrigger wants to be re-scheduled to 'now' (even if the associated Calendar excludes 'now') with the repeat count set to what it would be, if it had not missed any firings. This does obey the Trigger end-time however, so if 'now' is after the end-time the Trigger will not fire again. Use of this instruction causes the trigger to 'forget' the start-time and repeat-count that it was originally setup with. 
Instead, the repeat count on the trigger will be changed to whatever the remaining repeat count is (this is only an issue if you for some reason wanted to be able to tell what the original values were at some later time). This instruction could cause the Trigger to go to the 'COMPLETE' state after firing 'now', if all the repeat-fire-times where missed. 98.12.4. SimpleTrigger.MISFIRE_INSTRUCTION_RESCHEDULE_NEXT_WITH_REMAINING_COUNT = 4 Instructs the Scheduler that upon a mis-fire situation, the SimpleTrigger wants to be re-scheduled to the scheduled time after 'now' - taking into account any associated Calendar and with the repeat count set to what it would be, if it had not missed any firings. Note This instruction could cause the Trigger to go directly to the 'COMPLETE' state if all fire-times where missed. 98.12.5. SimpleTrigger.MISFIRE_INSTRUCTION_RESCHEDULE_NEXT_WITH_EXISTING_COUNT = 5 Instructs the Scheduler that upon a mis-fire situation, the SimpleTrigger wants to be re-scheduled to the scheduled time after 'now' - taking into account any associated Calendar, and with the repeat count left unchanged. Note This instruction could cause the Trigger to go directly to the 'COMPLETE' state if the end-time of the trigger has arrived. 98.12.6. CronTrigger.MISFIRE_INSTRUCTION_FIRE_ONCE_NOW = 1 (default) Instructs the Scheduler that upon a mis-fire situation, the CronTrigger wants to be fired now by Scheduler. 98.12.7. CronTrigger.MISFIRE_INSTRUCTION_DO_NOTHING = 2 Instructs the Scheduler that upon a mis-fire situation, the CronTrigger wants to have it's -fire-time updated to the time in the schedule after the current time (taking into account any associated Calendar but it does not want to be fired now. 98.13. Using QuartzScheduledPollConsumerScheduler The Quartz component provides a Polling Consumer scheduler which allows to use cron based scheduling for Polling Consumer such as the File and FTP consumers. For example to use a cron based expression to poll for files every 2nd second, then a Camel route can be define simply as: from("file:inbox?scheduler=quartz&scheduler.cron=0/2+*+*+*+*+?") .to("bean:process"); Notice we define the scheduler=quartz to instruct Camel to use the Quartz based scheduler. Then we use scheduler.xxx options to configure the scheduler. The Quartz scheduler requires the cron option to be set. The following options is supported: Parameter Default Type Description quartzScheduler null org.quartz.Scheduler To use a custom Quartz scheduler. If none configure then the shared scheduler from the component is used. cron null String Mandatory : To define the cron expression for triggering the polls. triggerId null String To specify the trigger id. If none provided then an UUID is generated and used. triggerGroup QuartzScheduledPollConsumerScheduler String To specify the trigger group. timeZone Default TimeZone The time zone to use for the CRON trigger. Important Remember configuring these options from the endpoint URIs must be prefixed with scheduler . For example to configure the trigger id and group: from("file:inbox?scheduler=quartz&scheduler.cron=0/2+*+*+*+*+?&scheduler.triggerId=myId&scheduler.triggerGroup=myGroup") .to("bean:process"); There is also a CRON scheduler in Spring, so you can use the following as well: from("file:inbox?scheduler=spring&scheduler.cron=0/2+*+*+*+*+?") .to("bean:process"); 98.14. Cron Component Support The Quartz component can be used as implementation of the Camel Cron component. 
Maven users will need to add the following additional dependency to their pom.xml : <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-cron</artifactId> <version>{CamelSBVersion}</version> <!-- use the same version as your Camel core version --> </dependency> Users can then use the cron component instead of the quartz component, as in the following route: from("cron://name?schedule=0+0/5+12-18+?+*+MON-FRI") .to("activemq:Totally.Rocks"); 98.15. Spring Boot Auto-Configuration The component supports 14 options, which are listed below. Name Description Default Type camel.component.quartz.auto-start-scheduler Whether or not the scheduler should be auto started. This options is default true. true Boolean camel.component.quartz.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.quartz.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.quartz.enable-jmx Whether to enable Quartz JMX which allows to manage the Quartz scheduler from JMX. This options is default true. true Boolean camel.component.quartz.enabled Whether to enable auto configuration of the quartz component. This is enabled by default. Boolean camel.component.quartz.interrupt-jobs-on-shutdown Whether to interrupt jobs on shutdown which forces the scheduler to shutdown quicker and attempt to interrupt any running jobs. If this is enabled then any running jobs can fail due to being interrupted. When a job is interrupted then Camel will mark the exchange to stop continue routing and set java.util.concurrent.RejectedExecutionException as caused exception. Therefore use this with care, as its often better to allow Camel jobs to complete and shutdown gracefully. false Boolean camel.component.quartz.prefix-instance-name Whether to prefix the Quartz Scheduler instance name with the CamelContext name. This is enabled by default, to let each CamelContext use its own Quartz scheduler instance by default. You can set this option to false to reuse Quartz scheduler instances between multiple CamelContext's. true Boolean camel.component.quartz.prefix-job-name-with-endpoint-id Whether to prefix the quartz job with the endpoint id. This option is default false. false Boolean camel.component.quartz.properties Properties to configure the Quartz scheduler. Map camel.component.quartz.properties-file File name of the properties to load from the classpath. String camel.component.quartz.properties-ref References to an existing Properties or Map to lookup in the registry to use for configuring quartz. String camel.component.quartz.scheduler To use the custom configured Quartz scheduler, instead of creating a new Scheduler. The option is a org.quartz.Scheduler type. Scheduler camel.component.quartz.scheduler-factory To use the custom SchedulerFactory which is used to create the Scheduler. 
The option is a org.quartz.SchedulerFactory type. SchedulerFactory camel.component.quartz.start-delayed-seconds Seconds to wait before starting the quartz scheduler. Integer | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-quartz-starter</artifactId> </dependency>",
"quartz://timerName?options quartz://groupName/timerName?options quartz://groupName/timerName?cron=expression quartz://timerName?cron=expression",
"quartz:groupName/triggerName",
"<bean id=\"quartz\" class=\"org.apache.camel.component.quartz.QuartzComponent\"> <property name=\"propertiesFile\" value=\"com/mycompany/myquartz.properties\"/> </bean>",
"<bean id=\"quartz\" class=\"org.apache.camel.component.quartz.QuartzComponent\"> <property name=\"startDelayedSeconds\" value=\"5\"/> </bean>",
"from(\"quartz://myGroup/myTimerName?cron=0+0/5+12-18+?+*+MON-FRI\") .to(\"activemq:Totally.Rocks\");",
"0 0/5 12-18 ? * MON-FRI",
"quartz://groupName/timerName?cron=0+0/5+12-18+?+*+MON-FRI&trigger.timeZone=Europe/Stockholm",
"quartz://myGroup/myTimerName?trigger.repeatInterval=2000&trigger.misfireInstruction=4",
"quartz://myGroup/myTimerName?cron=0/2+*+*+*+*+?&trigger.misfireInstruction=2",
"from(\"file:inbox?scheduler=quartz&scheduler.cron=0/2+*+*+*+*+?\") .to(\"bean:process\");",
"from(\"file:inbox?scheduler=quartz&scheduler.cron=0/2+*+*+*+*+?&scheduler.triggerId=myId&scheduler.triggerGroup=myGroup\") .to(\"bean:process\");",
"from(\"file:inbox?scheduler=spring&scheduler.cron=0/2+*+*+*+*+?\") .to(\"bean:process\");",
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-cron</artifactId> <version>{CamelSBVersion}</version> <!-- use the same version as your Camel core version --> </dependency>",
"from(\"cron://name?schedule=0+0/5+12-18+?+*+MON-FRI\") .to(\"activemq:Totally.Rocks\");"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-quartz-component-starter |
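Putting the URI options, cron encoding, time zone setting, and message headers described above into one place, the following is a minimal route sketch. It assumes camel-quartz is on the classpath of a Camel 3.x application; the group name, trigger name, and downstream endpoints are placeholders.

import org.apache.camel.builder.RouteBuilder;

public class WeekdayPollRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Fires every 5 minutes between 12:00 and 18:00 on weekdays, Stockholm time.
        from("quartz://reports/weekdayPoll"
                + "?cron=0+0/5+12-18+?+*+MON-FRI"
                + "&trigger.timeZone=Europe/Stockholm")
            // fireTime is one of the headers the component copies from the Quartz execution context.
            .log("Triggered at ${header.fireTime}")
            .to("direct:collect");               // placeholder downstream route

        from("direct:collect")
            .log("collecting data for ${date:now:yyyy-MM-dd}");
    }
}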
5.326. system-config-users | 5.326. system-config-users 5.326.1. RHBA-2012:1387 - system-config-users bug fix update Updated system-config-users packages that fix three bugs are now available for Red Hat Enterprise Linux 6. The system-config-users packages provide a graphical utility for administering users and groups. Bug Fixes BZ# 736037 Prior to this update, expiration dates at or before January 1, 1970 were not correctly calculated. As a consequence, the system-config-users utility stored expiration dates in /etc/shadow that were off by one day. This update modifies the underlying code so that account expiration dates are calculated and stored correctly. BZ# 801652 Prior to this update, a string in the user interface was not correctly localized into Japanese. This update modifies the string so that the text is now correct. BZ# 841886 Prior to this update, the system-config-users utility incorrectly determined whether to set an account as inactive if an expired password was not reset during a specified period. This update modifies the underlying code to check for this condition by hard-coding the value that indicates it. All users of system-config-users are advised to upgrade to these updated packages, which fix these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/system-config-users
Chapter 11. Secret management system | Chapter 11. Secret management system Users and system administrators upload machine and cloud credentials so that automation can access machines and external services on their behalf. By default, sensitive credential values such as SSH passwords, SSH private keys, and API tokens for cloud services are stored in the database after being encrypted. With external credentials backed by credential plugins, you can map credential fields (such as a password or an SSH Private key) to values stored in a secret management system instead of providing them to automation controller directly. Automation controller provides a secret management system that include integrations for: AWS Secrets Manager Lookup Centrify Vault Credential Provider Lookup CyberArk Central Credential Provider Lookup (CCP) CyberArk Conjur Secrets Manager Lookup HashiCorp Vault Key-Value Store (KV) HashiCorp Vault SSH Secrets Engine Microsoft Azure Key Management System (KMS) Thycotic DevOps Secrets Vault Thycotic Secret Server GitHub app token lookup These external secret values are fetched before running a playbook that needs them. Additional resources For more information about specifying secret management system credentials in the user interface, see Managing user credentials . 11.1. Configuring and linking secret lookups When pulling a secret from a third-party system, you are linking credential fields to external systems. To link a credential field to a value stored in an external system, select the external credential corresponding to that system and provide metadata to look up the required value. The metadata input fields are part of the external credential type definition of the source credential. Automation controller provides a credential plugin interface for developers, integrators, system administrators, and power-users with the ability to add new external credential types to extend it to support other secret management systems. Use the following procedure to use automation controller to configure and use each of the supported third-party secret management systems. Procedure Create an external credential for authenticating with the secret management system. At minimum, give a name for the external credential and select one of the following for the Credential type field: AWS Secrets Manager Lookup Centrify Vault Credential Provider Lookup CyberArk Central Credential Provider (CCP) Lookup CyberArk Conjur Secrets Manager Lookup HashiCorp Vault Secret Lookup HashiCorp Vault Signed SSH Microsoft Azure Key Vault Thycotic DevOps Secrets Vault Thycotic Secret Server GitHub app token lookup In this example, the Demo Credential is the target credential. For any of the fields that follow the Type Details area that you want to link to the external credential, click the key icon in the input field to link one or more input fields to the external credential along with metadata for locating the secret in the external system. Select the input source to use to retrieve your secret information. Select the credential you want to link to, and click . This takes you to the Metadata tab of the input source. This example shows the Metadata prompt for HashiVault Secret Lookup. Metadata is specific to the input source you select. For more information, see the Metadata for credential input sources table. Click Test to verify connection to the secret management system. If the lookup is unsuccessful, an error message displays: Click OK . You return to the Details screen of your target credential. 
Repeat these steps, starting with Step 3 to complete the remaining input fields for the target credential. By linking the information in this manner, automation controller retrieves sensitive information, such as username, password, keys, certificates, and tokens from the third-party management systems and populates the remaining fields of the target credential form with that data. If necessary, supply any information manually for those fields that do not use linking as a way of retrieving sensitive information. For more information about each of the fields, see the appropriate Credential types . Click Save . Additional resources For more information, see the development documents for Credential plugins . 11.1.1. Metadata for credential input sources The information required for the Metadata tab of the input source. AWS Secrets Manager Lookup Metadata Description AWS Secrets Manager Region (required) The region where the secrets manager is located. AWS Secret Name (required) Specify the AWS secret name that was generated by the AWS access key. Centrify Vault Credential Provider Lookup Metadata Description Account name (required) Name of the system account or domain associated with Centrify Vault. System Name Specify the name used by the Centrify portal. CyberArk Central Credential Provider Lookup Metadata Description Object Query (Required) Lookup query for the object. Object Query Format Select Exact for a specific secret name, or Regexp for a secret that has a dynamically generated name. Object Property Specifies the name of the property to return. For example, UserName or Address other than the default of Content . Reason If required for the object's policy, supply a reason for checking out the secret, as CyberArk logs those. CyberArk Conjur Secrets Lookup Metadata Description Secret Identifier The identifier for the secret. Secret Version Specify a version of the secret, if necessary, otherwise, leave it empty to use the latest version. HashiVault Secret Lookup Metadata Description Name of Secret Backend Specify the name of the KV backend to use. Leave it blank to use the first path segment of the Path to Secret field instead. Path to Secret (required) Specify the path to where the secret information is stored; for example, /path/username . Key Name (required) Specify the name of the key to look up the secret information. Secret Version (V2 Only) Specify a version if necessary, otherwise, leave it empty to use the latest version. HashiCorp Signed SSH Metadata Description Unsigned Public Key (required) Specify the public key of the certificate you want to have signed. It needs to be present in the authorized keys file of the target hosts. Path to Secret (required) Specify the path to where the secret information is stored; for example, /path/username . Role Name (required) A role is a collection of SSH settings and parameters that are stored in Hashi vault. Typically, you can specify some with different privileges or timeouts, for example. So you could have a role that is permitted to get a certificate signed for root, and other less privileged ones, for example. Valid Principals Specify a user (or users) other than the default, that you are requesting vault to authorize the cert for the stored key. Hashi vault has a default user for whom it signs, for example, ec2-user. Microsoft Azure KMS Metadata Description Secret Name (required) The name of the secret as it is referenced in Microsoft Azure's Key vault app. 
Secret Version Specify a version of the secret, if necessary, otherwise, leave it empty to use the latest version. Thycotic DevOps Secrets Vault Metadata Description Secret Path (required) Specify the path to where the secret information is stored, for example, /path/username. Thycotic Secret Server Metadata Description Secret ID (required) The identifier for the secret. Secret Field Specify the field to be used from the secret. 11.1.2. AWS Secrets Manager lookup This plugin enables Amazon Web Services to be used as a credential input source to pull secrets from the Amazon Web Services Secrets Manager. The AWS Secrets Manager provides similar service to Microsoft Azure Key Vault, and the AWS collection provides a lookup plugin for it. When AWS Secrets Manager lookup is selected for Credential type , give the following metadata to configure your lookup: AWS Access Key (required): give the access key used for communicating with AWS key management system AWS Secret Key (required): give the secret as obtained by the AWS IAM console 11.1.3. Centrify Vault Credential Provider Lookup You need the Centrify Vault web service running to store secrets for this integration to work. When you select Centrify Vault Credential Provider Lookup for Credential Type , give the following metadata to configure your lookup: Centrify Tenant URL (required): give the URL used for communicating with Centrify's secret management system Centrify API User (required): give the username Centrify API Password (required): give the password OAuth2 Application ID : specify the identifier given associated with the OAuth2 client OAuth2 Scope : specify the scope of the OAuth2 client 11.1.4. CyberArk Central Credential Provider (CCP) Lookup The CyberArk Central Credential Provider web service must be running to store secrets for this integration to work. When you select CyberArk Central Credential Provider Lookup for Credential Type , give the following metadata to configure your lookup: CyberArk CCP URL (required): give the URL used for communicating with CyberArk CCP's secret management system. It must include the URL scheme such as http or https. Optional: Web Service ID : specify the identifier for the web service. Leaving this blank defaults to AIMWebService. Application ID (required): specify the identifier given by CyberArk CCP services. Client Key : paste the client key if provided by CyberArk. Client Certificate : include the BEGIN CERTIFICATE and END CERTIFICATE lines when pasting the certificate, if provided by CyberArk. Verify SSL Certificates : this option is only available when the URL uses HTTPS. Check this option to verify that the server's SSL/TLS certificate is valid and trusted. For environments that use internal or private CA's, leave this option unchecked to disable verification. 11.1.5. CyberArk Conjur Secrets Manager Lookup With a Conjur Cloud tenant available to target, configure the CyberArk Conjur Secrets Lookup external management system credential plugin. When you select CyberArk Conjur Secrets Manager Lookup for Credential Type , give the following metadata to configure your lookup: Conjur URL (required): provide the URL used for communicating with CyberArk Conjur's secret management system. This must include the URL scheme, such as http or https. 
API Key (required): provide the key given by your Conjur admin Account (required): the organization's account name Username (required): the specific authenticated user for this service Public Key Certificate : include the BEGIN CERTIFICATE and END CERTIFICATE lines when pasting the public key, if provided by CyberArk 11.1.6. HashiCorp Vault Secret Lookup When you select HashiCorp Vault Secret Lookup for Credential Type , give the following metadata to configure your lookup: Server URL (required): give the URL used for communicating with HashiCorp Vault's secret management system. Token : specify the access token used to authenticate HashiCorp's server. CA Certificate : specify the CA certificate used to verify HashiCorp's server. AppRole role_id : specify the ID if using AppRole for authentication. AppRole secret_id : specify the corresponding secret ID for AppRole authentication. Client Certificate : specify a PEM-encoded client certificate when using the TLS authentication method, including any required intermediate certificates expected by Hashicorp Vault. Client Certificate Key : specify a PEM-encoded certificate private key when using the TLS authentication method. TLS Authentication Role : specify the role or certificate name in Hashicorp Vault that corresponds to your client certificate when using the TLS authentication method. If it is not provided, Hashicorp Vault attempts to match the certificate automatically. Namespace name : specify the Namespace name (Hashicorp Vault enterprise only). Kubernetes role : specify the role name when using Kubernetes authentication. Username : enter the username of the user to be used to authenticate this service. Password : enter the password associated with the user to be used to authenticate this service. Path to Auth : specify a path if other than the default path of /approle . API Version (required): select v1 for static lookups and v2 for versioned lookups. LDAP authentication requires LDAP to be configured in HashiCorp's Vault UI and a policy added to the user. Cubbyhole is the name of the default secret mount. If you have proper permissions, you can create other mounts and write key values to those. To test the lookup, create another credential that uses Hashicorp Vault lookup. Additional resources For more detail about the LDAP authentication method and its fields, see the Vault documentation for LDAP auth method . For more information about AppRole authentication method and its fields, see the Vault documentation for AppRole auth method . For more information about the userpass authentication method and its fields, see the Vault documentation for userpass auth method . For more information about the Kubernetes auth method and its fields, see the Vault documentation for Kubernetes auth method . For more information about the TLS certificate auth method and its fields, see the Vault documentation for TLS certificates auth method . 11.1.7. HashiCorp Vault Signed SSH When you select HashiCorp Vault Signed SSH for Credential Type , give the following metadata to configure your lookup: Server URL (required): give the URL used for communicating with HashiCorp Signed SSH's secret management system. Token : specify the access token used to authenticate HashiCorp's server. CA Certificate : specify the CA certificate used to verify HashiCorp's server. AppRole role_id : specify the ID for AppRole authentication. AppRole secret_id : specify the corresponding secret ID for AppRole authentication. 
Client Certificate : specify a PEM-encoded client certificate when using the TLS authentication method, including any required intermediate certificates expected by Hashicorp Vault. Client Certificate Key : specify a PEM-encoded certificate private key when using the TLS authentication method. TLS Authentication Role : specify the role or certificate name in Hashicorp Vault that corresponds to your client certificate when using the TLS authentication method. If it is not provided, Hashicorp Vault attempts to match the certificate automatically. Namespace name : specify the Namespace name (Hashicorp Vault enterprise only). Kubernetes role : specify the role name when using Kubernetes authentication. Username : enter the username of the user to be used to authenticate this service. Password : enter the password associated with the user to be used to authenticate this service. Path to Auth : specify a path if other than the default path of /approle . Additional resources For more information about AppRole authentication method and its fields, see the Vault documentation for AppRole Auth Method . For more information about the Kubernetes authentication method and its fields, see the Vault documentation for Kubernetes auth method . For more information about the TLS certificate auth method and its fields, see the Vault documentation for TLS certificates auth method . 11.1.8. Microsoft Azure Key Vault When you select Microsoft Azure Key Vault for Credential Type , give the following metadata to configure your lookup: Vault URL (DNS Name) (required): give the URL used for communicating with Microsoft Azure's key management system Client ID (required): give the identifier as obtained by Microsoft Entra ID Client Secret (required): give the secret as obtained by Microsoft Entra ID Tenant ID (required): give the unique identifier that is associated with an Microsoft Entra ID instance within an Azure subscription Cloud Environment : select the applicable cloud environment to apply 11.1.9. Thycotic DevOps Secrets Vault When you select Thycotic DevOps Secrets Vault for Credential Type , give the following metadata to configure your lookup: Tenant (required): give the URL used for communicating with Thycotic's secret management system Top-level Domain (TLD) : give the top-level domain designation, for example .com, .edu, or .org, associated with the secret vault you want to integrate Client ID (required): give the identifier as obtained by the Thycotic secret management system Client Secret (required): give the secret as obtained by the Thycotic secret management system 11.1.10. Thycotic Secret Server When you select Thycotic Secrets Server for Credential Type , give the following metadata to configure your lookup: Secret Server URL (required): give the URL used for communicating with the Thycotic Secrets Server management system Username (required): specify the authenticated user for this service Domain : give the (application) user domain Password (required): give the password associated with the user 11.1.11. Configuring a GitHub App Installation Access Token Lookup With this plugin you can use a private GitHub App RSA key as a credential input source to pull access tokens from GitHub App installations. Platform gateway uses existing GitHub authorization from organizations' GitHub repositories. For more information, see Generating an installation access token for a GitHub App . Procedure Create a lookup credential that stores your secrets. For more information, see Creating new credentials . 
Select GitHub App Installation Access Token Lookup for Credential type , and enter the following attributes to properly configure your lookup: GitHub App ID : Enter the App ID provided by your instance of GitHub, this is what is used to authenticate. GitHub App Installation ID : Enter the ID of the application into your target organization where the access token is scoped. You must set it up to have access to your target repository. RSA Private Key : Enter the generated private key that your GitHub instance generated. You can get it from the GitHub App maintainer within GitHub. For more information, see Managing private keys for GitHub Apps . Click Create credential to confirm and save the credential. The following is an example of a configured GitHub App Installation Access Token Lookup credential: Create a target credential that searches for the lookup credential. To use your lookup in a private repository, select Source Control as your Credential type . Enter the following attributes to properly configure your target credential: Username : Enter the username x-access-token . Password : Click the icon for managing external credentials in the input field. You are prompted to set the input source to use to retrieve your secret information. This is the lookup credential that you have already created. Enter an optional description for the metadata requested and click Finish . Click Create credential to confirm and save the credential. Verify both your lookup credential and your target credential are now available on the Credentials list view. To use the target credential in a project, create a project and enter the following information: Name : Enter the name for your project. Organization : Select the name of the organization from the drop-down menu.. Execution environment : Optionally select an execution environment, if applicable. Source control type : If you are syncing with a private repository, select Git for your source control. The Type Details view opens for additional input. Enter the following information: Source control URL : Enter the URL of the private repository you want to access. The other related fields pertaining to branch/tag/commit and refspec are not relevant for use with a lookup credential. Source control credential : Select the target credential that you have already created. The following is an example of a configured target credential in a project: Click Create project and the project sync automatically starts. The project Details tab displays the progress of the job: Troubleshooting If your project sync fails, you might have to manually re-enter https://api.github.com in the GitHub API endpoint URL field from Step 2 and re-run your project sync. | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/configuring_automation_execution/assembly-controller-secret-management |
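Behind the scenes, a GitHub App installation access token of this kind is issued by the GitHub REST API. A minimal curl sketch of the equivalent call, using a hypothetical installation ID and assuming $APP_JWT already holds an RS256 JWT signed with the App's RSA private key and carrying the App ID as its issuer claim:

# Hypothetical values; only the endpoint itself comes from the GitHub REST API
INSTALLATION_ID=12345678
curl -s -X POST \
  -H "Authorization: Bearer $APP_JWT" \
  -H "Accept: application/vnd.github+json" \
  "https://api.github.com/app/installations/$INSTALLATION_ID/access_tokens"
# The JSON response contains a short-lived "token" value, which is what ends up
# as the password for the x-access-token user in the target credential.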
Chapter 20. Unlocking assets | Chapter 20. Unlocking assets By default, whenever you open and modify an asset in Business Central, that asset is automatically locked for your exclusive use in order to avoid conflicts in a multiuser setup. This lock is automatically released when your session ends or when you save or close the asset. This lock feature ensures that users do not overwrite each other's changes. However, you can force unlock an asset if you need to edit a file that is locked by another user. Procedure In Business Central, go to Menu Design Projects and click the project name. Select the asset from the list to open the asset designer. Go to Overview Metadata and view the Lock Status . Figure 20.1. Unlock metadata view If the asset is already being edited by another user, the following will be displayed in the Lock status field: Locked by <user_name> Click Force unlock asset to unlock. The following confirmation pop-up message is displayed: Are you sure you want to release the lock of this asset? This might cause <user_name> to lose unsaved changes! Click Yes to confirm. The asset returns to an unlocked state, and the lock icon option appears next to the asset. | null | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/deploying_and_managing_red_hat_decision_manager_services/assets_unlocking_proc
34.2.2. Configuring Batch Jobs | 34.2.2. Configuring Batch Jobs To execute a one-time task when the load average is below 0.8, use the batch command. After typing the batch command, the at> prompt is displayed. Type the command to execute, press Enter , and type Ctrl + D . Multiple commands can be specified by typing each command followed by the Enter key. After typing all the commands, press Enter to go to a blank line and type Ctrl + D . Alternatively, a shell script can be entered at the prompt, pressing Enter after each line in the script, and typing Ctrl + D on a blank line to exit. If a script is entered, the shell used is the shell set in the user's SHELL environment variable, the user's login shell, or /bin/sh (whichever is found first). As soon as the load average drops below 0.8, the set of commands or script is executed. If the set of commands or script writes information to standard output, the output is emailed to the user. Use the command atq to view pending jobs. Refer to Section 34.2.3, "Viewing Pending Jobs" for more information. Usage of the batch command can be restricted. For details, refer to Section 34.2.5, "Controlling Access to At and Batch" . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/at_and_batch-configuring_batch_jobs
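A hypothetical terminal session illustrating the batch workflow described above (the job number and scheduled time in the confirmation line will differ on your system):

batch
at> find /usr/share/doc -name "README*" > readme-list.txt
at> <press Ctrl+D on a blank line to finish>
job 8 at 2005-02-05 15:30
atq        # the job stays queued here until the load average drops below 0.8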
7.133. lohit-telugu-fonts | 7.133. lohit-telugu-fonts 7.133.1. RHBA-2012:1212 - lohit-telugu-fonts bug fix update An updated lohit-telugu-fonts package that fixes one bug is now available for Red Hat Enterprise Linux 6. The lohit-telugu-fonts package provides a free Telugu TrueType/OpenType font. Bug Fix BZ#640610 Due to a bug in the lohit-telugu-fonts package, four syllables were rendered incorrectly. This bug has been fixed and these syllables now render correctly. All users of lohit-telugu-fonts are advised to upgrade to this updated package, which fixes this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/lohit-telugu-fonts
Chapter 8. Assigning a Puppet Class to an Individual Host | Chapter 8. Assigning a Puppet Class to an Individual Host Procedure In the Satellite web UI, navigate to Hosts > All hosts . Click the Edit button of the host you want to add the ntp Puppet class to. Select the Puppet ENC tab and look for the ntp class. Click the + symbol next to ntp to add the ntp submodule to the list of included classes . Click the Submit button at the bottom to save your changes. Tip If the Puppet classes tab of an individual host is empty, check if it is assigned to the proper Puppet environment. Verify the Puppet configuration. Navigate to Hosts > All Hosts and select the host. From the top overflow menu, select Legacy UI . Under Details , click the Puppet YAML button. This produces output similar to the following: --- parameters: // shortened YAML output classes: ntp: servers: '["0.de.pool.ntp.org","1.de.pool.ntp.org","2.de.pool.ntp.org","3.de.pool.ntp.org"]' environment: production ... Verify the ntp configuration. Connect to your host using SSH and check the content of /etc/ntp.conf . This example assumes your host is running CentOS 7 . Other operating systems may store the ntp config file in a different path. Tip You may need to run the Puppet agent on your host by executing the following command: Running the following command on the host checks which ntp servers are used for clock synchronization: This returns output similar to the following: You now have a working ntp module which you can add to a host or group of hosts to roll out your ntp configuration automatically. | [
"--- parameters: // shortened YAML output classes: ntp: servers: '[\"0.de.pool.ntp.org\",\"1.de.pool.ntp.org\",\"2.de.pool.ntp.org\",\"3.de.pool.ntp.org\"]' environment: production",
"puppet agent -t",
"cat /etc/ntp.conf",
"ntp.conf: Managed by puppet. server 0.de.pool.ntp.org server 1.de.pool.ntp.org server 2.de.pool.ntp.org server 3.de.pool.ntp.org"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/managing_configurations_using_puppet_integration_in_red_hat_satellite/assigning-a-puppet-class-to-an-individual-host_managing-configurations-puppet |
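If you prefer to confirm the result from the command line instead of the web UI, a hypothetical verification session on the managed host could look like the following (it assumes the class manages the ntpd service on CentOS 7):

puppet agent -t                 # apply the newly assigned ntp class immediately
grep '^server' /etc/ntp.conf    # should list the pool servers shown in the YAML output
systemctl status ntpd           # assumes ntpd is the service managed by the module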
Chapter 9. Viewing and managing JMX domains and MBeans | Chapter 9. Viewing and managing JMX domains and MBeans Java Management Extensions (JMX) is a Java technology that allows you to manage resources (services, devices, and applications) dynamically at runtime. The resources are represented by objects called MBeans (for Managed Bean). You can manage and monitor resources as soon as they are created, implemented, or installed. With the JMX plugin on the Fuse Console, you can view and manage JMX domains and MBeans. You can view MBean attributes, run commands, and create charts that show statistics for the MBeans. The JMX tab provides a tree view of the active JMX domains and MBeans organized in folders. You can view details and execute commands on the MBeans. Procedure To view and edit MBean attributes: In the tree view, select an MBean. Click the Attributes tab. Click an attribute to see its details. To perform operations: In the tree view, select an MBean. Click the Operations tab and expand one of the listed operations. Click Execute to run the operation. To view charts: In the tree view, select an item. Click the Chart tab. | null | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/managing_fuse_on_jboss_eap_standalone/fuse-console-view-jmx-all_eap
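Consoles of this kind typically serve MBean data over a Jolokia REST agent, so the same reads and operations can be scripted if such an endpoint is exposed. A sketch using a hypothetical Jolokia URL and the standard java.lang:type=Memory MBean:

# Hypothetical endpoint; adjust host, port, and context path for your installation
JOLOKIA=http://localhost:8080/hawtio/jolokia
curl -s "$JOLOKIA/read/java.lang:type=Memory/HeapMemoryUsage"   # read an attribute
curl -s "$JOLOKIA/exec/java.lang:type=Memory/gc"                # invoke an operation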
Chapter 6. Troubleshooting upgrade error messages | Chapter 6. Troubleshooting upgrade error messages The following table shows some cephadm upgrade error messages. If the cephadm upgrade fails for any reason, an error message appears in the storage cluster health status. Error Message Description UPGRADE_NO_STANDBY_MGR Ceph requires both active and standby manager daemons to proceed, but there is currently no standby. UPGRADE_FAILED_PULL Ceph was unable to pull the container image for the target version. This can happen if you specify a version or container image that does not exist (e.g., 1.2.3), or if the container registry is not reachable from one or more hosts in the cluster. | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/upgrade_guide/troubleshooting-upgrade-error-messages_upgrade |
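When one of these messages appears in the health status, the following commands (run from a node with the admin keyring) are a reasonable starting point for diagnosis; a sketch:

ceph health detail               # prints the full text of any UPGRADE_* warning
ceph orch upgrade status         # shows the target image and current progress
ceph orch ps --daemon-type mgr   # confirm that both an active and a standby mgr exist
ceph orch upgrade stop           # optionally halt a stuck upgrade before retrying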
7.3. Comparison with Virtual Machines | 7.3. Comparison with Virtual Machines Virtual machines represent an entire server with all of the associated software and maintenance concerns. Docker containers provide application isolation and can be configured with minimum run-time environments. In a Docker container, the kernel and parts of the operating system infrastructure are shared. For the virtual machine, a full operating system must be included. You can create or destroy containers quickly and easily. Virtual machines require full installations and more computing resources to execute. Because containers are lightweight, more containers than virtual machines can run simultaneously on a host machine. Containers share resources efficiently, whereas virtual machines are isolated. This sharing allows multiple variations of an application to run in containers while remaining very lightweight; for example, shared binaries are not duplicated on the system. Virtual machines can be migrated while still executing; containers, however, cannot be migrated while executing and must be stopped before moving from one host machine to another. Containers do not replace virtual machines for all use cases. Careful evaluation is still required to determine what is best for your application. To quickly get up-and-running with Docker Containers, refer to Get Started with Docker Containers . The Docker FAQ contains more information about Linux Containers, Docker, subscriptions and support. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.0_release_notes/sect-red_hat_enterprise_linux-7.0_release_notes-linux_containers_with_docker_format-comparison_with_virtual_machines
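To make the lifecycle difference concrete, a throwaway container can be started and discarded in seconds; a sketch using a hypothetical image name:

docker run --rm -it rhel7 /bin/bash   # starts in seconds; --rm removes the container on exit
# A virtual machine performing the same task would need a full OS image, a boot sequence, and a teardown.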
Chapter 4. Decision Model and Notation (DMN) | Chapter 4. Decision Model and Notation (DMN) Decision Model and Notation (DMN) is a standard established by the Object Management Group (OMG) for describing and modeling operational decisions. DMN defines an XML schema that enables DMN models to be shared between DMN-compliant platforms and across organizations so that business analysts and business rules developers can collaborate in designing and implementing DMN decision services. The DMN standard is similar to and can be used together with the Business Process Model and Notation (BPMN) standard for designing and modeling business processes. For more information about the background and applications of DMN, see the OMG Decision Model and Notation specification . 4.1. DMN conformance levels The DMN specification defines three incremental levels of conformance in a software implementation. A product that claims compliance at one level must also be compliant with any preceding levels. For example, a conformance level 3 implementation must also include the supported components in conformance levels 1 and 2. For the formal definitions of each conformance level, see the OMG Decision Model and Notation specification . The following list summarizes the three DMN conformance levels: Conformance level 1 A DMN conformance level 1 implementation supports decision requirement diagrams (DRDs), decision logic, and decision tables, but decision models are not executable. Any language can be used to define the expressions, including natural, unstructured languages. Conformance level 2 A DMN conformance level 2 implementation includes the requirements in conformance level 1, and supports Simplified Friendly Enough Expression Language (S-FEEL) expressions and fully executable decision models. Conformance level 3 A DMN conformance level 3 implementation includes the requirements in conformance levels 1 and 2, and supports Friendly Enough Expression Language (FEEL) expressions, the full set of boxed expressions, and fully executable decision models. Red Hat Process Automation Manager provides runtime support for DMN 1.1, 1.2, 1.3, and 1.4 models at conformance level 3, and design support for DMN 1.2 models at conformance level 3. You can design your DMN models directly in Business Central or with the Red Hat Process Automation Manager DMN modeler in VS Code, or import existing DMN models into your Red Hat Process Automation Manager projects for deployment and execution. Any DMN 1.1 and 1.3 models (do not contain DMN 1.3 features) that you import into Business Central, open in the DMN designer, and save are converted to DMN 1.2 models. 4.2. DMN decision requirements diagram (DRD) components A decision requirements diagram (DRD) is a visual representation of your DMN model. A DRD can represent part or all of the overall decision requirements graph (DRG) for the DMN model. DRDs trace business decisions using decision nodes, business knowledge models, sources of business knowledge, input data, and decision services. The following table summarizes the components in a DRD: Table 4.1. DRD components Component Description Notation Elements Decision Node where one or more input elements determine an output based on defined decision logic. Business knowledge model Reusable function with one or more decision elements. Decisions that have the same logic but depend on different sub-input data or sub-decisions use business knowledge models to determine which procedure to follow. 
Knowledge source External authorities, documents, committees, or policies that regulate a decision or business knowledge model. Knowledge sources are references to real-world factors rather than executable business rules. Input data Information used in a decision node or a business knowledge model. Input data usually includes business-level concepts or objects relevant to the business, such as loan applicant data used in a lending strategy. Decision service Top-level decision containing a set of reusable decisions published as a service for invocation. A decision service can be invoked from an external application or a BPMN business process. Requirement connectors Information requirement Connection from an input data node or decision node to another decision node that requires the information. Knowledge requirement Connection from a business knowledge model to a decision node or to another business knowledge model that invokes the decision logic. Authority requirement Connection from an input data node or a decision node to a dependent knowledge source or from a knowledge source to a decision node, business knowledge model, or another knowledge source. Artifacts Text annotation Explanatory note associated with an input data node, decision node, business knowledge model, or knowledge source. Association Connection from an input data node, decision node, business knowledge model, or knowledge source to a text annotation. The following table summarizes the permitted connectors between DRD elements: Table 4.2. DRD connector rules Starts from Connects to Connection type Example Decision Decision Information requirement Business knowledge model Decision Knowledge requirement Business knowledge model Decision service Decision Knowledge requirement Business knowledge model Input data Decision Information requirement Knowledge source Authority requirement Knowledge source Decision Authority requirement Business knowledge model Knowledge source Decision Text annotation Association Business knowledge model Knowledge source Input data The following example DRD illustrates some of these DMN components in practice: Figure 4.1. Example DRD: Loan prequalification The following example DRD illustrates DMN components that are part of a reusable decision service: Figure 4.2. Example DRD: Phone call handling as a decision service In a DMN decision service node, the decision nodes in the bottom segment incorporate input data from outside of the decision service to arrive at a final decision in the top segment of the decision service node. The resulting top-level decisions from the decision service are then implemented in any subsequent decisions or business knowledge requirements of the DMN model. You can reuse DMN decision services in other DMN models to apply the same decision logic with different input data and different outgoing connections. 4.3. Rule expressions in FEEL Friendly Enough Expression Language (FEEL) is an expression language defined by the Object Management Group (OMG) DMN specification. FEEL expressions define the logic of a decision in a DMN model. FEEL is designed to facilitate both decision modeling and execution by assigning semantics to the decision model constructs. FEEL expressions in decision requirements diagrams (DRDs) occupy table cells in boxed expressions for decision nodes and business knowledge models. For more information about FEEL in DMN, see the OMG Decision Model and Notation specification . 4.3.1. 
Data types in FEEL Friendly Enough Expression Language (FEEL) supports the following data types: Numbers Strings Boolean values Dates Time Date and time Days and time duration Years and months duration Functions Contexts Ranges (or intervals) Lists Note The DMN specification currently does not provide an explicit way of declaring a variable as a function , context , range , or list , but Red Hat Process Automation Manager extends the DMN built-in types to support variables of these types. The following list describes each data type: Numbers Numbers in FEEL are based on the IEEE 754-2008 Decimal 128 format, with 34 digits of precision. Internally, numbers are represented in Java as BigDecimals with MathContext DECIMAL128 . FEEL supports only one number data type, so the same type is used to represent both integers and floating point numbers. FEEL numbers use a dot ( . ) as a decimal separator. FEEL does not support -INF , +INF , or NaN . FEEL uses null to represent invalid numbers. Red Hat Process Automation Manager extends the DMN specification and supports additional number notations: Scientific: You can use scientific notation with the suffix e<exp> or E<exp> . For example, 1.2e3 is the same as writing the expression 1.2*10**3 , but is a literal instead of an expression. Hexadecimal: You can use hexadecimal numbers with the prefix 0x . For example, 0xff is the same as the decimal number 255 . Both uppercase and lowercase letters are supported. For example, 0XFF is the same as 0xff . Type suffixes: You can use the type suffixes f , F , d , D , l , and L . These suffixes are ignored. Strings Strings in FEEL are any sequence of characters delimited by double quotation marks. Example Boolean values FEEL uses three-valued boolean logic, so a boolean logic expression may have values true , false , or null . Dates Date literals are not supported in FEEL, but you can use the built-in date() function to construct date values. Date strings in FEEL follow the format defined in the XML Schema Part 2: Datatypes document. The format is "YYYY-MM-DD" where YYYY is the year with four digits, MM is the number of the month with two digits, and DD is the number of the day. Example: Date objects have time equal to "00:00:00" , which is midnight. The dates are considered to be local, without a timezone. Time Time literals are not supported in FEEL, but you can use the built-in time() function to construct time values. Time strings in FEEL follow the format defined in the XML Schema Part 2: Datatypes document. The format is "hh:mm:ss[.uuu][(+-)hh:mm]" where hh is the hour of the day (from 00 to 23 ), mm is the minutes in the hour, and ss is the number of seconds in the minute. Optionally, the string may define the number of milliseconds ( uuu ) within the second and contain a positive ( + ) or negative ( - ) offset from UTC time to define its timezone. Instead of using an offset, you can use the letter z to represent the UTC time, which is the same as an offset of -00:00 . If no offset is defined, the time is considered to be local. Examples: Time values that define an offset or a timezone cannot be compared to local times that do not define an offset or a timezone. Date and time Date and time literals are not supported in FEEL, but you can use the built-in date and time() function to construct date and time values. Date and time strings in FEEL follow the format defined in the XML Schema Part 2: Datatypes document. 
The format is "<date>T<time>" , where <date> and <time> follow the prescribed XML schema formatting, conjoined by T . Examples: Date and time values that define an offset or a timezone cannot be compared to local date and time values that do not define an offset or a timezone. Important If your implementation of the DMN specification does not support spaces in the XML schema, use the keyword dateTime as a synonym of date and time . Days and time duration Days and time duration literals are not supported in FEEL, but you can use the built-in duration() function to construct days and time duration values. Days and time duration strings in FEEL follow the format defined in the XML Schema Part 2: Datatypes document, but are restricted to only days, hours, minutes and seconds. Months and years are not supported. Examples: Important If your implementation of the DMN specification does not support spaces in the XML schema, use the keyword dayTimeDuration as a synonym of days and time duration . Years and months duration Years and months duration literals are not supported in FEEL, but you can use the built-in duration() function to construct days and time duration values. Years and months duration strings in FEEL follow the format defined in the XML Schema Part 2: Datatypes document, but are restricted to only years and months. Days, hours, minutes, or seconds are not supported. Examples: Important If your implementation of the DMN specification does not support spaces in the XML schema, use the keyword yearMonthDuration as a synonym of years and months duration . Functions FEEL has function literals (or anonymous functions) that you can use to create functions. The DMN specification currently does not provide an explicit way of declaring a variable as a function , but Red Hat Process Automation Manager extends the DMN built-in types to support variables of functions. Example: In this example, the FEEL expression creates a function that adds the parameters a and b and returns the result. Contexts FEEL has context literals that you can use to create contexts. A context in FEEL is a list of key and value pairs, similar to maps in languages like Java. The DMN specification currently does not provide an explicit way of declaring a variable as a context , but Red Hat Process Automation Manager extends the DMN built-in types to support variables of contexts. Example: In this example, the expression creates a context with two entries, x and y , representing a coordinate in a chart. In DMN 1.2, another way to create contexts is to create an item definition that contains the list of keys as attributes, and then declare the variable as having that item definition type. The Red Hat Process Automation Manager DMN API supports DMN ItemDefinition structural types in a DMNContext represented in two ways: User-defined Java type: Must be a valid JavaBeans object defining properties and getters for each of the components in the DMN ItemDefinition . If necessary, you can also use the @FEELProperty annotation for those getters representing a component name which would result in an invalid Java identifier. java.util.Map interface: The map needs to define the appropriate entries, with the keys corresponding to the component name in the DMN ItemDefinition . Ranges (or intervals) FEEL has range literals that you can use to create ranges or intervals. A range in FEEL is a value that defines a lower and an upper bound, where either can be open or closed. 
The DMN specification currently does not provide an explicit way of declaring a variable as a range , but Red Hat Process Automation Manager extends the DMN built-in types to support variables of ranges. The syntax of a range is defined in the following formats: The expression for the endpoint must return a comparable value, and the lower bound endpoint must be lower than the upper bound endpoint. For example, the following literal expression defines an interval between 1 and 10 , including the boundaries (a closed interval on both endpoints): The following literal expression defines an interval between 1 hour and 12 hours, including the lower boundary (a closed interval), but excluding the upper boundary (an open interval): You can use ranges in decision tables to test for ranges of values, or use ranges in simple literal expressions. For example, the following literal expression returns true if the value of a variable x is between 0 and 100 : Lists FEEL has list literals that you can use to create lists of items. A list in FEEL is represented by a comma-separated list of values enclosed in square brackets. The DMN specification currently does not provide an explicit way of declaring a variable as a list , but Red Hat Process Automation Manager extends the DMN built-in types to support variables of lists. Example: All lists in FEEL contain elements of the same type and are immutable. Elements in a list can be accessed by index, where the first element is 1 . Negative indexes can access elements starting from the end of the list so that -1 is the last element. For example, the following expression returns the second element of a list x : The following expression returns the second-to-last element of a list x : Elements in a list can also be counted by the function count , which uses the list of elements as the parameter. For example, the following expression returns 4 : 4.3.2. Built-in functions in FEEL To promote interoperability with other platforms and systems, Friendly Enough Expression Language (FEEL) includes a library of built-in functions. The built-in FEEL functions are implemented in the Drools Decision Model and Notation (DMN) engine so that you can use the functions in your DMN decision services. The following sections describe each built-in FEEL function, listed in the format NAME ( PARAMETERS ) . For more information about FEEL functions in DMN, see the OMG Decision Model and Notation specification . 4.3.2.1. Conversion functions The following functions support conversion between values of different types. Some of these functions use specific string formats, such as the following examples: date string : Follows the format defined in the XML Schema Part 2: Datatypes document, such as 2020-06-01 time string : Follows one of the following formats: Format defined in the XML Schema Part 2: Datatypes document, such as 23:59:00z Format for a local time defined by ISO 8601 followed by @ and an IANA Timezone, such as 00:01:00@Etc/UTC date time string : Follows the format of a date string followed by T and a time string , such as 2012-12-25T11:00:00Z duration string : Follows the format of days and time duration and years and months duration defined in the XQuery 1.0 and XPath 2.0 Data Model , such as P1Y2M date( from ) - using date Converts from to a date value. Table 4.3. 
Parameters Parameter Type Format from string date string Example date( "2012-12-25" ) - date( "2012-12-24" ) = duration( "P1D" ) date( from ) - using date and time Converts from to a date value and sets time components to null. Table 4.4. Parameters Parameter Type from date and time Example date(date and time( "2012-12-25T11:00:00Z" )) = date( "2012-12-25" ) date( year, month, day ) Produces a date from the specified year, month, and day values. Table 4.5. Parameters Parameter Type year number month number day number Example date( 2012, 12, 25 ) = date( "2012-12-25" ) date and time( date, time ) Produces a date and time from the specified date and ignores any time components and the specified time. Table 4.6. Parameters Parameter Type date date or date and time time time Example date and time ( "2012-12-24T23:59:00" ) = date and time(date( "2012-12-24" ), time( "23:59:00" )) date and time( from ) Produces a date and time from the specified string. Table 4.7. Parameters Parameter Type Format from string date time string Example date and time( "2012-12-24T23:59:00" ) + duration( "PT1M" ) = date and time( "2012-12-25T00:00:00" ) time( from ) Produces a time from the specified string. Table 4.8. Parameters Parameter Type Format from string time string Example time( "23:59:00z" ) + duration( "PT2M" ) = time( "00:01:00@Etc/UTC" ) time( from ) Produces a time from the specified parameter and ignores any date components. Table 4.9. Parameters Parameter Type from time or date and time Example time(date and time( "2012-12-25T11:00:00Z" )) = time( "11:00:00Z" ) time( hour, minute, second, offset? ) Produces a time from the specified hour, minute, and second component values. Table 4.10. Parameters Parameter Type hour number minute number second number offset (Optional) days and time duration or null Example time( "23:59:00z" ) = time(23, 59, 0, duration( "PT0H" )) number( from, grouping separator, decimal separator ) Converts from to a number using the specified separators. Table 4.11. Parameters Parameter Type from string representing a valid number grouping separator Space ( ), comma ( , ), period ( . ), or null decimal separator Same types as grouping separator , but the values cannot match Example number( "1 000,0", " ", "," ) = number( "1,000.0", ",", "." ) string( from ) Provides a string representation of the specified parameter. Table 4.12. Parameters Parameter Type from Non-null value Examples string( 1.1 ) = "1.1" string( null ) = null duration( from ) Converts from to a days and time duration value or years and months duration value. Table 4.13. Parameters Parameter Type Format from string duration string Examples date and time( "2012-12-24T23:59:00" ) - date and time( "2012-12-22T03:45:00" ) = duration( "P2DT20H14M" ) duration( "P2Y2M" ) = duration( "P26M" ) years and months duration( from, to ) Calculates the years and months duration between the two specified parameters. Table 4.14. Parameters Parameter Type from date or date and time to date or date and time Example years and months duration( date( "2011-12-22" ), date( "2013-08-24" ) ) = duration( "P1Y8M" ) 4.3.2.2. Boolean functions The following functions support Boolean operations. not( negand ) Performs the logical negation of the negand operand. Table 4.15. Parameters Parameter Type negand boolean Examples not( true ) = false not( null ) = null 4.3.2.3. String functions The following functions support string operations. Note In FEEL, Unicode characters are counted based on their code points. 
substring( string, start position, length? ) Returns the substring from the start position for the specified length. The first character is at position value 1 . Table 4.16. Parameters Parameter Type string string start position number length (Optional) number Examples substring( "testing",3 ) = "sting" substring( "testing",3,3 ) = "sti" substring( "testing", -2, 1 ) = "n" substring( "\U01F40Eab", 2 ) = "ab" Note In FEEL, the string literal "\U01F40Eab" is the 🐎ab string (horse symbol followed by a and b ). string length( string ) Calculates the length of the specified string. Table 4.17. Parameters Parameter Type string string Examples string length( "tes" ) = 3 string length( "\U01F40Eab" ) = 3 upper case( string ) Produces an uppercase version of the specified string. Table 4.18. Parameters Parameter Type string string Example upper case( "aBc4" ) = "ABC4" lower case( string ) Produces a lowercase version of the specified string. Table 4.19. Parameters Parameter Type string string Example lower case( "aBc4" ) = "abc4" substring before( string, match ) Calculates the substring before the match. Table 4.20. Parameters Parameter Type string string match string Examples substring before( "testing", "ing" ) = "test" substring before( "testing", "xyz" ) = "" substring after( string, match ) Calculates the substring after the match. Table 4.21. Parameters Parameter Type string string match string Examples substring after( "testing", "test" ) = "ing" substring after( "", "a" ) = "" replace( input, pattern, replacement, flags? ) Calculates the regular expression replacement. Table 4.22. Parameters Parameter Type input string pattern string replacement string flags (Optional) string Note This function uses regular expression parameters as defined in XQuery 1.0 and XPath 2.0 Functions and Operators . Example replace( "abcd", "(ab)|(a)", "[1=USD1][2=USD2]" ) = "[1=ab][2=]cd" contains( string, match ) Returns true if the string contains the match. Table 4.23. Parameters Parameter Type string string match string Example contains( "testing", "to" ) = false starts with( string, match ) Returns true if the string starts with the match Table 4.24. Parameters Parameter Type string string match string Example starts with( "testing", "te" ) = true ends with( string, match ) Returns true if the string ends with the match. Table 4.25. Parameters Parameter Type string string match string Example ends with( "testing", "g" ) = true matches( input, pattern, flags? ) Returns true if the input matches the regular expression. Table 4.26. Parameters Parameter Type input string pattern string flags (Optional) string Note This function uses regular expression parameters as defined in XQuery 1.0 and XPath 2.0 Functions and Operators . Example matches( "teeesting", "^te*sting" ) = true split( string, delimiter ) Returns a list of the original string and splits it at the delimiter regular expression pattern. Table 4.27. Parameters Parameter Type string string delimiter string for a regular expression pattern Note This function uses regular expression parameters as defined in XQuery 1.0 and XPath 2.0 Functions and Operators . Examples split( "John Doe", "\\s" ) = ["John", "Doe"] split( "a;b;c;;", ";" ) = ["a","b","c","",""] 4.3.2.4. List functions The following functions support list operations. Note In FEEL, the index of the first element in a list is 1 . The index of the last element in a list can be identified as -1 . list contains( list, element ) Returns true if the list contains the element. Table 4.28. 
Parameters Parameter Type list list element Any type, including null Example list contains( [1,2,3], 2 ) = true count( list ) Counts the elements in the list. Table 4.29. Parameters Parameter Type list list Examples count( [1,2,3] ) = 3 count( [] ) = 0 count( [1,[2,3]] ) = 2 min( list ) Returns the minimum comparable element in the list. Table 4.30. Parameters Parameter Type list list Alternative signature Examples min( [1,2,3] ) = 1 min( 1 ) = 1 min( [1] ) = 1 max( list ) Returns the maximum comparable element in the list. Table 4.31. Parameters Parameter Type list list Alternative signature Examples max( 1,2,3 ) = 3 max( [] ) = null sum( list ) Returns the sum of the numbers in the list. Table 4.32. Parameters Parameter Type list list of number elements Alternative signature Examples sum( [1,2,3] ) = 6 sum( 1,2,3 ) = 6 sum( 1 ) = 1 sum( [] ) = null mean( list ) Calculates the average (arithmetic mean) of the elements in the list. Table 4.33. Parameters Parameter Type list list of number elements Alternative signature Examples mean( [1,2,3] ) = 2 mean( 1,2,3 ) = 2 mean( 1 ) = 1 mean( [] ) = null all( list ) Returns true if all elements in the list are true. Table 4.34. Parameters Parameter Type list list of boolean elements Alternative signature Examples all( [false,null,true] ) = false all( true ) = true all( [true] ) = true all( [] ) = true all( 0 ) = null any( list ) Returns true if any element in the list is true. Table 4.35. Parameters Parameter Type list list of boolean elements Alternative signature Examples any( [false,null,true] ) = true any( false ) = false any( [] ) = false any( 0 ) = null sublist( list, start position, length? ) Returns the sublist from the start position, limited to the length elements. Table 4.36. Parameters Parameter Type list list start position number length (Optional) number Example sublist( [4,5,6], 1, 2 ) = [4,5] append( list, item ) Creates a list that is appended to the item or items. Table 4.37. Parameters Parameter Type list list item Any type Example append( [1], 2, 3 ) = [1,2,3] concatenate( list ) Creates a list that is the result of the concatenated lists. Table 4.38. Parameters Parameter Type list list Example concatenate( [1,2],[3] ) = [1,2,3] insert before( list, position, newItem ) Creates a list with the newItem inserted at the specified position. Table 4.39. Parameters Parameter Type list list position number newItem Any type Example insert before( [1,3],1,2 ) = [2,1,3] remove( list, position ) Creates a list with the removed element excluded from the specified position. Table 4.40. Parameters Parameter Type list list position number Example remove( [1,2,3], 2 ) = [1,3] reverse( list ) Returns a reversed list. Table 4.41. Parameters Parameter Type list list Example reverse( [1,2,3] ) = [3,2,1] index of( list, match ) Returns indexes matching the element. Parameters list of type list match of any type Table 4.42. Parameters Parameter Type list list match Any type Example index of( [1,2,3,2],2 ) = [2,4] union( list ) Returns a list of all the elements from multiple lists and excludes duplicates. Table 4.43. Parameters Parameter Type list list Example union( [1,2],[2,3] ) = [1,2,3] distinct values( list ) Returns a list of elements from a single list and excludes duplicates. Table 4.44. Parameters Parameter Type list list Example distinct values( [1,2,3,2,1] ) = [1,2,3] flatten( list ) Returns a flattened list. Table 4.45. 
Parameters Parameter Type list list Example flatten( [[1,2],[[3]], 4] ) = [1,2,3,4] product( list ) Returns the product of the numbers in the list. Table 4.46. Parameters Parameter Type list list of number elements Alternative signature Examples product( [2, 3, 4] ) = 24 product( 2, 3, 4 ) = 24 median( list ) Returns the median of the numbers in the list. If the number of elements is odd, the result is the middle element. If the number of elements is even, the result is the average of the two middle elements. Table 4.47. Parameters Parameter Type list list of number elements Alternative signature Examples median( 8, 2, 5, 3, 4 ) = 4 median( [6, 1, 2, 3] ) = 2.5 median( [ ] ) = null stddev( list ) Returns the standard deviation of the numbers in the list. Table 4.48. Parameters Parameter Type list list of number elements Alternative signature Examples stddev( 2, 4, 7, 5 ) = 2.081665999466132735282297706979931 stddev( [47] ) = null stddev( 47 ) = null stddev( [ ] ) = null mode( list ) Returns the mode of the numbers in the list. If multiple elements are returned, the numbers are sorted in ascending order. Table 4.49. Parameters Parameter Type list list of number elements Alternative signature Examples mode( 6, 3, 9, 6, 6 ) = [6] mode( [6, 1, 9, 6, 1] ) = [1, 6] mode( [ ] ) = [ ] 4.3.2.4.1. Loop statements Loop statements can transform lists or verify if some elements satisfy a specific condition: for in (list) Iterates the elements of the list. Table 4.50. Parameters Parameter Type list list of Any elements Examples for i in [1, 2, 3] return i * i = [1, 4, 9] for i in [1,2,3], j in [1,2,3] return i*j = [1, 2, 3, 2, 4, 6, 3, 6, 9] some in (list) satisfies (condition) Returns to single boolean value (true or false), if any element in the list satisfies the condition. Table 4.51. Parameters Parameter Type list list of Any elements condition boolean expression evaluated to true or false Examples some i in [1, 2, 3] satisfies i > 3 = true some i in [1, 2, 3] satisfies i > 4 = false every in (list) satisfies (condition) Returns to single boolean value (true or false), if every element in the list satisfies the condition. Table 4.52. Parameters Parameter Type list list of Any elements condition boolean expression evaluated to true or false Examples every i in [1, 2, 3] satisfies i > 1 = false every i in [1, 2, 3] satisfies i > 0 = true 4.3.2.5. Numeric functions The following functions support number operations. decimal( n, scale ) Returns a number with the specified scale. Table 4.53. Parameters Parameter Type n number scale number in the range [−6111..6176] Note This function is implemented to be consistent with the FEEL:number definition for rounding decimal numbers to the nearest even decimal number. Examples decimal( 1/3, 2 ) = .33 decimal( 1.5, 0 ) = 2 decimal( 2.5, 0 ) = 2 decimal( 1.035, 2 ) = 1.04 decimal( 1.045, 2 ) = 1.04 decimal( 1.055, 2 ) = 1.06 decimal( 1.065, 2 ) = 1.06 floor( n ) Returns the greatest integer that is less than or equal to the specified number. Table 4.54. Parameters Parameter Type n number Examples floor( 1.5 ) = 1 floor( -1.5 ) = -2 ceiling( n ) Returns the smallest integer that is greater than or equal to the specified number. Table 4.55. Parameters Parameter Type n number Examples ceiling( 1.5 ) = 2 ceiling( -1.5 ) = -1 abs( n ) Returns the absolute value. Table 4.56. 
Parameters Parameter Type n number , days and time duration , or years and months duration Examples abs( 10 ) = 10 abs( -10 ) = 10 abs( @"PT5H" ) = @"PT5H" abs( @"-PT5H" ) = @"PT5H" modulo( dividend, divisor ) Returns the remainder of the division of the dividend by the divisor. If either the dividend or divisor is negative, the result is of the same sign as the divisor. Note This function is also expressed as modulo(dividend, divisor) = dividend - divisor*floor(dividen d/divisor) . Table 4.57. Parameters Parameter Type dividend number divisor number Examples modulo( 12, 5 ) = 2 modulo( -12,5 )= 3 modulo( 12,-5 )= -3 modulo( -12,-5 )= -2 modulo( 10.1, 4.5 )= 1.1 modulo( -10.1, 4.5 )= 3.4 modulo( 10.1, -4.5 )= -3.4 modulo( -10.1, -4.5 )= -1.1 sqrt( number ) Returns the square root of the specified number. Table 4.58. Parameters Parameter Type n number Example sqrt( 16 ) = 4 log( number ) Returns the logarithm of the specified number. Table 4.59. Parameters Parameter Type n number Example decimal( log( 10 ), 2 ) = 2.30 exp( number ) Returns Euler's number e raised to the power of the specified number. Table 4.60. Parameters Parameter Type n number Example decimal( exp( 5 ), 2 ) = 148.41 odd( number ) Returns true if the specified number is odd. Table 4.61. Parameters Parameter Type n number Examples odd( 5 ) = true odd( 2 ) = false even( number ) Returns true if the specified number is even. Table 4.62. Parameters Parameter Type n number Examples even( 5 ) = false even ( 2 ) = true 4.3.2.6. Date and time functions The following functions support date and time operations. is( value1, value2 ) Returns true if both values are the same element in the FEEL semantic domain. Table 4.63. Parameters Parameter Type value1 Any type value2 Any type Examples is( date( "2012-12-25" ), time( "23:00:50" ) ) = false is( date( "2012-12-25" ), date( "2012-12-25" ) ) = true is( time( "23:00:50z" ), time( "23:00:50" ) ) = false 4.3.2.7. Range functions The following functions support temporal ordering operations to establish relationships between single scalar values and ranges of such values. These functions are similar to the components in the Health Level Seven (HL7) International Clinical Quality Language (CQL) 1.4 syntax . before( ) Returns true when an element A is before an element B and when the relevant requirements for evaluating to true are also met. Signatures before( point1 point2 ) before( point range ) before( range point ) before( range1,range2 ) Requirements for evaluating to true point1 < point2 point < range.start or ( point = range.start and not(range.start included) ) range.end < point or ( range.end = point and not(range.end included) ) range1.end < range2.start or (( not(range1.end included) or not(range2.start included) ) and range1.end = range2.start ) Examples before( 1, 10 ) = true before( 10, 1 ) = false before( 1, [1..10] ) = false before( 1, (1..10] ) = true before( 1, [5..10] ) = true before( [1..10], 10 ) = false before( [1..10), 10 ) = true before( [1..10], 15 ) = true before( [1..10], [15..20] ) = true before( [1..10], [10..20] ) = false before( [1..10), [10..20] ) = true before( [1..10], (10..20] ) = true after( ) Returns true when an element A is after an element B and when the relevant requirements for evaluating to true are also met. 
Signatures after( point1 point2 ) after( point range ) after( range, point ) after( range1 range2 ) Requirements for evaluating to true point1 > point2 point > range.end or ( point = range.end and not(range.end included) ) range.start > point or ( range.start = point and not(range.start included) ) range1.start > range2.end or (( not(range1.start included) or not(range2.end included) ) and range1.start = range2.end ) Examples after( 10, 5 ) = true after( 5, 10 ) = false after( 12, [1..10] ) = true after( 10, [1..10) ) = true after( 10, [1..10] ) = false after( [11..20], 12 ) = false after( [11..20], 10 ) = true after( (11..20], 11 ) = true after( [11..20], 11 ) = false after( [11..20], [1..10] ) = true after( [1..10], [11..20] ) = false after( [11..20], [1..11) ) = true after( (11..20], [1..11] ) = true meets( ) Returns true when an element A meets an element B and when the relevant requirements for evaluating to true are also met. Signatures meets( range1, range2 ) Requirements for evaluating to true range1.end included and range2.start included and range1.end = range2.start Examples meets( [1..5], [5..10] ) = true meets( [1..5), [5..10] ) = false meets( [1..5], (5..10] ) = false meets( [1..5], [6..10] ) = false met by( ) Returns true when an element A is met by an element B and when the relevant requirements for evaluating to true are also met. Signatures met by( range1, range2 ) Requirements for evaluating to true range1.start included and range2.end included and range1.start = range2.end Examples met by( [5..10], [1..5] ) = true met by( [5..10], [1..5) ) = false met by( (5..10], [1..5] ) = false met by( [6..10], [1..5] ) = false overlaps( ) Returns true when an element A overlaps an element B and when the relevant requirements for evaluating to true are also met. Signatures overlaps( range1, range2 ) Requirements for evaluating to true ( range1.end > range2.start or (range1.end = range2.start and (range1.end included or range2.end included)) ) and ( range1.start < range2.end or (range1.start = range2.end and range1.start included and range2.end included) ) Examples overlaps( [1..5], [3..8] ) = true overlaps( [3..8], [1..5] ) = true overlaps( [1..8], [3..5] ) = true overlaps( [3..5], [1..8] ) = true overlaps( [1..5], [6..8] ) = false overlaps( [6..8], [1..5] ) = false overlaps( [1..5], [5..8] ) = true overlaps( [1..5], (5..8] ) = false overlaps( [1..5), [5..8] ) = false overlaps( [1..5), (5..8] ) = false overlaps( [5..8], [1..5] ) = true overlaps( (5..8], [1..5] ) = false overlaps( [5..8], [1..5) ) = false overlaps( (5..8], [1..5) ) = false overlaps before( ) Returns true when an element A overlaps before an element B and when the relevant requirements for evaluating to true are also met. 
Signatures overlaps before( range1 range2 ) Requirements for evaluating to true ( range1.start < range2.start or (range1.start = range2.start and range1.start included and range2.start included) ) and ( range1.end > range2.start or (range1.end = range2.start and range1.end included and range2.start included) ) and ( range1.end < range2.end or (range1.end = range2.end and (not(range1.end included) or range2.end included )) ) Examples overlaps before( [1..5], [3..8] ) = true overlaps before( [1..5], [6..8] ) = false overlaps before( [1..5], [5..8] ) = true overlaps before( [1..5], (5..8] ) = false overlaps before( [1..5), [5..8] ) = false overlaps before( [1..5), (1..5] ) = true overlaps before( [1..5], (1..5] ) = true overlaps before( [1..5), [1..5] ) = false overlaps before( [1..5], [1..5] ) = false overlaps after( ) Returns true when an element A overlaps after an element B and when the relevant requirements for evaluating to true are also met. Signatures overlaps after( range1 range2 ) Requirements for evaluating to true ( range2.start < range1.start or (range2.start = range1.start and range2.start included and not( range1.start included)) ) and ( range2.end > range1.start or (range2.end = range1.start and range2.end included and range1.start included) ) and ( range2.end < range1.end or (range2.end = range1.end and (not(range2.end included) or range1.end included)) ) Examples overlaps after( [3..8], [1..5] )= true overlaps after( [6..8], [1..5] )= false overlaps after( [5..8], [1..5] )= true overlaps after( (5..8], [1..5] )= false overlaps after( [5..8], [1..5) )= false overlaps after( (1..5], [1..5) )= true overlaps after( (1..5], [1..5] )= true overlaps after( [1..5], [1..5) )= false overlaps after( [1..5], [1..5] )= false overlaps after( (1..5), [1..5] )= false overlaps after( (1..5], [1..6] )= false overlaps after( (1..5], (1..5] )= false overlaps after( (1..5], [2..5] )= false finishes( ) Returns true when an element A finishes an element B and when the relevant requirements for evaluating to true are also met. Signatures finishes( point, range ) finishes( range1, range2 ) Requirements for evaluating to true range.end included and range.end = point range1.end included = range2.end included and range1.end = range2.end and ( range1.start > range2.start or (range1.start = range2.start and (not(range1.start included) or range2.start included)) ) Examples finishes( 10, [1..10] ) = true finishes( 10, [1..10) ) = false finishes( [5..10], [1..10] ) = true finishes( [5..10), [1..10] ) = false finishes( [5..10), [1..10) ) = true finishes( [1..10], [1..10] ) = true finishes( (1..10], [1..10] ) = true finished by( ) Returns true when an element A is finished by an element B and when the relevant requirements for evaluating to true are also met. Signatures finished by( range, point ) finished by( range1 range2 ) Requirements for evaluating to true range.end included and range.end = point range1.end included = range2.end included and range1.end = range2.end and ( range1.start < range2.start or (range1.start = range2.start and (range1.start included or not(range2.start included))) ) Examples finished by( [1..10], 10 ) = true finished by( [1..10), 10 ) = false finished by( [1..10], [5..10] ) = true finished by( [1..10], [5..10) ) = false finished by( [1..10), [5..10) ) = true finished by( [1..10], [1..10] ) = true finished by( [1..10], (1..10] ) = true includes( ) Returns true when an element A includes an element B and when the relevant requirements for evaluating to true are also met. 
Signatures includes( range, point ) includes( range1, range2 ) Requirements for evaluating to true (range.start < point and range.end > point) or (range.start = point and range.start included) or (range.end = point and range.end included) ( range1.start < range2.start or (range1.start = range2.start and (range1.start included or not(range2.start included))) ) and ( range1.end > range2.end or (range1.end = range2.end and (range1.end included or not(range2.end included))) ) Examples includes( [1..10], 5 ) = true includes( [1..10], 12 ) = false includes( [1..10], 1 ) = true includes( [1..10], 10 ) = true includes( (1..10], 1 ) = false includes( [1..10), 10 ) = false includes( [1..10], [4..6] ) = true includes( [1..10], [1..5] ) = true includes( (1..10], (1..5] ) = true includes( [1..10], (1..10) ) = true includes( [1..10), [5..10) ) = true includes( [1..10], [1..10) ) = true includes( [1..10], (1..10] ) = true includes( [1..10], [1..10] ) = true during( ) Returns true when an element A is during an element B and when the relevant requirements for evaluating to true are also met. Signatures during( point, range ) during( range1 range2 ) Requirements for evaluating to true (range.start < point and range.end > point) or (range.start = point and range.start included) or (range.end = point and range.end included) ( range2.start < range1.start or (range2.start = range1.start and (range2.start included or not(range1.start included))) ) and ( range2.end > range1.end or (range2.end = range1.end and (range2.end included or not(range1.end included))) ) Examples during( 5, [1..10] ) = true during( 12, [1..10] ) = false during( 1, [1..10] ) = true during( 10, [1..10] ) = true during( 1, (1..10] ) = false during( 10, [1..10) ) = false during( [4..6], [1..10] ) = true during( [1..5], [1..10] ) = true during( (1..5], (1..10] ) = true during( (1..10), [1..10] ) = true during( [5..10), [1..10) ) = true during( [1..10), [1..10] ) = true during( (1..10], [1..10] ) = true during( [1..10], [1..10] ) = true starts( ) Returns true when an element A starts an element B and when the relevant requirements for evaluating to true are also met. Signatures starts( point, range ) starts( range1, range2 ) Requirements for evaluating to true range.start = point and range.start included range1.start = range2.start and range1.start included = range2.start included and ( range1.end < range2.end or (range1.end = range2.end and (not(range1.end included) or range2.end included)) ) Examples starts( 1, [1..10] ) = true starts( 1, (1..10] ) = false starts( 2, [1..10] ) = false starts( [1..5], [1..10] ) = true starts( (1..5], (1..10] ) = true starts( (1..5], [1..10] ) = false starts( [1..5], (1..10] ) = false starts( [1..10], [1..10] ) = true starts( [1..10), [1..10] ) = true starts( (1..10), (1..10) ) = true started by( ) Returns true when an element A is started by an element B and when the relevant requirements for evaluating to true are also met. 
Signatures started by( range, point ) started by( range1, range2 ) Requirements for evaluating to true range.start = point and range.start included range1.start = range2.start and range1.start included = range2.start included and ( range2.end < range1.end or (range2.end = range1.end and (not(range2.end included) or range1.end included)) ) Examples started by( [1..10], 1 ) = true started by( (1..10], 1 ) = false started by( [1..10], 2 ) = false started by( [1..10], [1..5] ) = true started by( (1..10], (1..5] ) = true started by( [1..10], (1..5] ) = false started by( (1..10], [1..5] ) = false started by( [1..10], [1..10] ) = true started by( [1..10], [1..10) ) = true started by( (1..10), (1..10) ) = true coincides( ) Returns true when an element A coincides with an element B and when the relevant requirements for evaluating to true are also met. Signatures coincides( point1, point2 ) coincides( range1, range2 ) Requirements for evaluating to true point1 = point2 range1.start = range2.start and range1.start included = range2.start included and range1.end = range2.end and range1.end included = range2.end included Examples coincides( 5, 5 ) = true coincides( 3, 4 ) = false coincides( [1..5], [1..5] ) = true coincides( (1..5), [1..5] ) = false coincides( [1..5], [2..6] ) = false 4.3.2.8. Temporal functions The following functions support general temporal operations. day of year( date ) Returns the Gregorian number of the day of the year. Table 4.64. Parameters Parameter Type date date or date and time Example day of year( date(2019, 9, 17) ) = 260 day of week( date ) Returns the Gregorian day of the week: "Monday" , "Tuesday" , "Wednesday" , "Thursday" , "Friday" , "Saturday" , or "Sunday" . Table 4.65. Parameters Parameter Type date date or date and time Example day of week( date(2019, 9, 17) ) = "Tuesday" month of year( date ) Returns the Gregorian month of the year: "January" , "February" , "March" , "April" , "May" , "June" , "July" , "August" , "September" , "October" , "November" , or "December" . Table 4.66. Parameters Parameter Type date date or date and time Example month of year( date(2019, 9, 17) ) = "September" week of year( date ) Returns the Gregorian week of the year as defined by ISO 8601. Table 4.67. Parameters Parameter Type date date or date and time Examples week of year( date(2019, 9, 17) ) = 38 week of year( date(2003, 12, 29) ) = 1 week of year( date(2004, 1, 4) ) = 1 week of year( date(2005, 1, 1) ) = 53 week of year( date(2005, 1, 3) ) = 1 week of year( date(2005, 1, 9) ) = 1 4.3.2.9. Sort functions The following functions support sorting operations. sort( list, precedes ) Returns a list of the same elements but ordered according to the sorting function. Table 4.68. Parameters Parameter Type list list precedes function Example sort( list: [3,1,4,5,2], precedes: function(x,y) x < y ) = [1,2,3,4,5] 4.3.2.10. Context functions The following functions support context operations. get value( m, key ) Returns the value from the context for the specified entry key. Table 4.69. Parameters Parameter Type m context key string Examples get value( {key1 : "value1"}, "key1" ) = "value1" get value( {key1 : "value1"}, "unexistent-key" ) = null get entries( m ) Returns a list of key-value pairs for the specified context. Table 4.70. Parameters Parameter Type m context Example get entries( {key1 : "value1", key2 : "value2"} ) = [ { key : "key1", value : "value1" }, {key : "key2", value : "value2"} ] 4.3.3.
Variable and function names in FEEL Unlike many traditional expression languages, Friendly Enough Expression Language (FEEL) supports spaces and a few special characters as part of variable and function names. A FEEL name must start with a letter , ? , or _ element. The unicode letter characters are also allowed. Variable names cannot start with a language keyword, such as and , true , or every . The remaining characters in a variable name can be any of the starting characters, as well as digits , white spaces, and special characters such as + , - , / , * , ' , and . . For example, the following names are all valid FEEL names: Age Birth Date Flight 234 pre-check procedure Several limitations apply to variable and function names in FEEL: Ambiguity The use of spaces, keywords, and other special characters as part of names can make FEEL ambiguous. The ambiguities are resolved in the context of the expression, matching names from left to right. The parser resolves the variable name as the longest name matched in scope. You can use ( ) to disambiguate names if necessary. Spaces in names The DMN specification limits the use of spaces in FEEL names. According to the DMN specification, names can contain multiple spaces but not two consecutive spaces. In order to make the language easier to use and avoid common errors due to spaces, Red Hat Process Automation Manager removes the limitation on the use of consecutive spaces. Red Hat Process Automation Manager supports variable names with any number of consecutive spaces, but normalizes them into a single space. For example, the variable references First Name with one space and First Name with two spaces are both acceptable in Red Hat Process Automation Manager. Red Hat Process Automation Manager also normalizes the use of other white spaces, like the non-breakable white space that is common in web pages, tabs, and line breaks. From a Red Hat Process Automation Manager FEEL engine perspective, all of these characters are normalized into a single white space before processing. The keyword in The keyword in is the only keyword in the language that cannot be used as part of a variable name. Although the specifications allow the use of keywords in the middle of variable names, the use of in in variable names conflicts with the grammar definition of for , every and some expression constructs. 4.4. DMN decision logic in boxed expressions Boxed expressions in DMN are tables that you use to define the underlying logic of decision nodes and business knowledge models in a decision requirements diagram (DRD). Some boxed expressions can contain other boxed expressions, but the top-level boxed expression corresponds to the decision logic of a single DRD artifact. While DRDs represent the flow of a DMN decision model, boxed expressions define the actual decision logic of individual nodes. DRDs and boxed expressions together form a complete and functional DMN decision model. The following are the types of DMN boxed expressions: Decision tables Literal expressions Contexts Relations Functions Invocations Lists Note Red Hat Process Automation Manager does not provide boxed list expressions in Business Central, but supports a FEEL list data type that you can use in boxed literal expressions. For more information about the list data type and other FEEL data types in Red Hat Process Automation Manager, see Section 4.3.1, "Data types in FEEL" . 
All Friendly Enough Expression Language (FEEL) expressions that you use in your boxed expressions must conform to the FEEL syntax requirements in the OMG Decision Model and Notation specification . 4.4.1. DMN decision tables A decision table in DMN is a visual representation of one or more business rules in a tabular format. You use decision tables to define rules for a decision node that applies those rules at a given point in the decision model. Each rule consists of a single row in the table, and includes columns that define the conditions (input) and outcome (output) for that particular row. The definition of each row is precise enough to derive the outcome using the values of the conditions. Input and output values can be FEEL expressions or defined data type values. For example, the following decision table determines credit score ratings based on a defined range of a loan applicant's credit score: Figure 4.3. Decision table for credit score rating The following decision table determines the step in a lending strategy for applicants depending on applicant loan eligibility and the bureau call type: Figure 4.4. Decision table for lending strategy The following decision table determines applicant qualification for a loan as the concluding decision node in a loan prequalification decision model: Figure 4.5. Decision table for loan prequalification Decision tables are a popular way of modeling rules and decision logic, and are used in many methodologies (such as DMN) and implementation frameworks (such as Drools). Important Red Hat Process Automation Manager supports both DMN decision tables and Drools-native decision tables, but they are different types of assets with different syntax requirements and are not interchangeable. For more information about Drools-native decision tables in Red Hat Process Automation Manager, see Designing a decision service using spreadsheet decision tables . 4.4.1.1. Hit policies in DMN decision tables Hit policies determine how to reach an outcome when multiple rules in a decision table match the provided input values. For example, if one rule in a decision table applies a sales discount to military personnel and another rule applies a discount to students, then when a customer is both a student and in the military, the decision table hit policy must indicate whether to apply one discount or the other ( Unique , First ) or both discounts ( Collect Sum ). You specify the single character of the hit policy ( U , F , C+ ) in the upper-left corner of the decision table. The following decision table hit policies are supported in DMN: Unique (U): Permits only one rule to match. Any overlap raises an error. Any (A): Permits multiple rules to match, but they must all have the same output. If multiple matching rules do not have the same output, an error is raised. Priority (P): Permits multiple rules to match, with different outputs. The output that comes first in the output values list is selected. First (F): Uses the first match in rule order. Collect (C+, C>, C<, C#): Aggregates output from multiple rules based on an aggregation function. Collect ( C ): Aggregates values in an arbitrary list. Collect Sum (C+): Outputs the sum of all collected values. Values must be numeric. Collect Min (C<): Outputs the minimum value among the matches. The resulting values must be comparable, such as numbers, dates, or text (lexicographic order). Collect Max (C>): Outputs the maximum value among the matches. 
The resulting values must be comparable, such as numbers, dates or text (lexicographic order). Collect Count (C#): Outputs the number of matching rules. 4.4.2. Boxed literal expressions A boxed literal expression in DMN is a literal FEEL expression as text in a table cell, typically with a labeled column and an assigned data type. You use boxed literal expressions to define simple or complex node logic or decision data directly in FEEL for a particular node in a decision. Literal FEEL expressions must conform to FEEL syntax requirements in the OMG Decision Model and Notation specification . For example, the following boxed literal expression defines the minimum acceptable PITI calculation (principal, interest, taxes, and insurance) in a lending decision, where acceptable rate is a variable defined in the DMN model: Figure 4.6. Boxed literal expression for minimum PITI value The following boxed literal expression sorts a list of possible dating candidates (soul mates) in an online dating application based on their score on criteria such as age, location, and interests: Figure 4.7. Boxed literal expression for matching online dating candidates 4.4.3. Boxed context expressions A boxed context expression in DMN is a set of variable names and values with a result value. Each name-value pair is a context entry. You use context expressions to represent data definitions in decision logic and set a value for a desired decision element within the DMN decision model. A value in a boxed context expression can be a data type value or FEEL expression, or can contain a nested sub-expression of any type, such as a decision table, a literal expression, or another context expression. For example, the following boxed context expression defines the factors for sorting delayed passengers in a flight-rebooking decision model, based on defined data types ( tPassengerTable , tFlightNumberList ): Figure 4.8. Boxed context expression for flight passenger waiting list The following boxed context expression defines the factors that determine whether a loan applicant can meet minimum mortgage payments based on principal, interest, taxes, and insurance (PITI), represented as a front-end ratio calculation with a sub-context expression: Figure 4.9. Boxed context expression for front-end client PITI ratio 4.4.4. Boxed relation expressions A boxed relation expression in DMN is a traditional data table with information about given entities, listed as rows. You use boxed relation tables to define decision data for relevant entities in a decision at a particular node. Boxed relation expressions are similar to context expressions in that they set variable names and values, but relation expressions contain no result value and list all variable values based on a single defined variable in each column. For example, the following boxed relation expression provides information about employees in an employee rostering decision: Figure 4.10. Boxed relation expression with employee information 4.4.5. Boxed function expressions A boxed function expression in DMN is a parameterized boxed expression containing a literal FEEL expression, a nested context expression of an external JAVA or PMML function, or a nested boxed expression of any type. By default, all business knowledge models are defined as boxed function expressions. You use boxed function expressions to call functions on your decision logic and to define all business knowledge models. 
For example, the following boxed function expression determines airline flight capacity in a flight-rebooking decision model: Figure 4.11. Boxed function expression for flight capacity The following boxed function expression contains a basic Java function as a context expression for determining absolute value in a decision model calculation: Figure 4.12. Boxed function expression for absolute value The following boxed function expression determines a monthly mortgage installment as a business knowledge model in a lending decision, with the function value defined as a nested context expression: Figure 4.13. Boxed function expression for installment calculation in business knowledge model The following boxed function expression uses a PMML model included in the DMN file to define the minimum acceptable PITI calculation (principal, interest, taxes, and insurance) in a lending decision: Figure 4.14. Boxed function expression with an included PMML model in business knowledge model 4.4.6. Boxed invocation expressions A boxed invocation expression in DMN is a boxed expression that invokes a business knowledge model. A boxed invocation expression contains the name of the business knowledge model to be invoked and a list of parameter bindings. Each binding is represented by two boxed expressions on a row: The box on the left contains the name of a parameter and the box on the right contains the binding expression whose value is assigned to the parameter to evaluate the invoked business knowledge model. You use boxed invocations to invoke at a particular decision node a business knowledge model defined in the decision model. For example, the following boxed invocation expression invokes a Reassign Passenger business knowledge model as the concluding decision node in a flight-rebooking decision model: Figure 4.15. Boxed invocation expression to reassign flight passengers The following boxed invocation expression invokes an InstallmentCalculation business knowledge model to calculate a monthly installment amount for a loan before proceeding to affordability decisions: Figure 4.16. Boxed invocation expression for required monthly installment 4.4.7. Boxed list expressions A boxed list expression in DMN represents a FEEL list of items. You use boxed lists to define lists of relevant items for a particular node in a decision. You can also use literal FEEL expressions for list items in cells to create more complex lists. For example, the following boxed list expression identifies approved credit score agencies in a loan application decision service: Figure 4.17. Boxed list expression for approved credit score agencies The following boxed list expression also identifies approved credit score agencies but uses FEEL logic to define the agency status (Inc., LLC, SA, GA) based on a DMN input node: Figure 4.18. Boxed list expression using FEEL logic for approved credit score agency status 4.5. DMN model example The following is a real-world DMN model example that demonstrates how you can use decision modeling to reach a decision based on input data, circumstances, and company guidelines. In this scenario, a flight from San Diego to New York is canceled, requiring the affected airline to find alternate arrangements for its inconvenienced passengers. 
First, the airline collects the information necessary to determine how best to get the travelers to their destinations: Input data List of flights List of passengers Decisions Prioritize the passengers who will get seats on a new flight Determine which flights those passengers will be offered Business knowledge models The company process for determining passenger priority Any flights that have space available Company rules for determining how best to reassign inconvenienced passengers The airline then uses the DMN standard to model its decision process in the following decision requirements diagram (DRD) for determining the best rebooking solution: Figure 4.19. DRD for flight rebooking Similar to flowcharts, DRDs use shapes to represent the different elements in a process. Ovals contain the two necessary input data, rectangles contain the decision points in the model, and rectangles with clipped corners (business knowledge models) contain reusable logic that can be repeatedly invoked. The DRD draws logic for each element from boxed expressions that provide variable definitions using FEEL expressions or data type values. Some boxed expressions are basic, such as the following decision for establishing a prioritized waiting list: Figure 4.20. Boxed context expression example for prioritized wait list Some boxed expressions are more complex with greater detail and calculation, such as the following business knowledge model for reassigning the delayed passenger: Figure 4.21. Boxed function expression for passenger reassignment The following is the DMN source file for this decision model: <dmn:definitions xmlns="https://www.drools.org/kie-dmn/Flight-rebooking" xmlns:dmn="http://www.omg.org/spec/DMN/20151101/dmn.xsd" xmlns:feel="http://www.omg.org/spec/FEEL/20140401" id="_0019_flight_rebooking" name="0019-flight-rebooking" namespace="https://www.drools.org/kie-dmn/Flight-rebooking"> <dmn:itemDefinition id="_tFlight" name="tFlight"> <dmn:itemComponent id="_tFlight_Flight" name="Flight Number"> <dmn:typeRef>feel:string</dmn:typeRef> </dmn:itemComponent> <dmn:itemComponent id="_tFlight_From" name="From"> <dmn:typeRef>feel:string</dmn:typeRef> </dmn:itemComponent> <dmn:itemComponent id="_tFlight_To" name="To"> <dmn:typeRef>feel:string</dmn:typeRef> </dmn:itemComponent> <dmn:itemComponent id="_tFlight_Dep" name="Departure"> <dmn:typeRef>feel:dateTime</dmn:typeRef> </dmn:itemComponent> <dmn:itemComponent id="_tFlight_Arr" name="Arrival"> <dmn:typeRef>feel:dateTime</dmn:typeRef> </dmn:itemComponent> <dmn:itemComponent id="_tFlight_Capacity" name="Capacity"> <dmn:typeRef>feel:number</dmn:typeRef> </dmn:itemComponent> <dmn:itemComponent id="_tFlight_Status" name="Status"> <dmn:typeRef>feel:string</dmn:typeRef> </dmn:itemComponent> </dmn:itemDefinition> <dmn:itemDefinition id="_tFlightTable" isCollection="true" name="tFlightTable"> <dmn:typeRef>tFlight</dmn:typeRef> </dmn:itemDefinition> <dmn:itemDefinition id="_tPassenger" name="tPassenger"> <dmn:itemComponent id="_tPassenger_Name" name="Name"> <dmn:typeRef>feel:string</dmn:typeRef> </dmn:itemComponent> <dmn:itemComponent id="_tPassenger_Status" name="Status"> <dmn:typeRef>feel:string</dmn:typeRef> </dmn:itemComponent> <dmn:itemComponent id="_tPassenger_Miles" name="Miles"> <dmn:typeRef>feel:number</dmn:typeRef> </dmn:itemComponent> <dmn:itemComponent id="_tPassenger_Flight" name="Flight Number"> <dmn:typeRef>feel:string</dmn:typeRef> </dmn:itemComponent> </dmn:itemDefinition> <dmn:itemDefinition id="_tPassengerTable" isCollection="true" 
name="tPassengerTable"> <dmn:typeRef>tPassenger</dmn:typeRef> </dmn:itemDefinition> <dmn:itemDefinition id="_tFlightNumberList" isCollection="true" name="tFlightNumberList"> <dmn:typeRef>feel:string</dmn:typeRef> </dmn:itemDefinition> <dmn:inputData id="i_Flight_List" name="Flight List"> <dmn:variable name="Flight List" typeRef="tFlightTable"/> </dmn:inputData> <dmn:inputData id="i_Passenger_List" name="Passenger List"> <dmn:variable name="Passenger List" typeRef="tPassengerTable"/> </dmn:inputData> <dmn:decision name="Prioritized Waiting List" id="d_PrioritizedWaitingList"> <dmn:variable name="Prioritized Waiting List" typeRef="tPassengerTable"/> <dmn:informationRequirement> <dmn:requiredInput href="#i_Passenger_List"/> </dmn:informationRequirement> <dmn:informationRequirement> <dmn:requiredInput href="#i_Flight_List"/> </dmn:informationRequirement> <dmn:knowledgeRequirement> <dmn:requiredKnowledge href="#b_PassengerPriority"/> </dmn:knowledgeRequirement> <dmn:context> <dmn:contextEntry> <dmn:variable name="Cancelled Flights" typeRef="tFlightNumberList"/> <dmn:literalExpression> <dmn:text>Flight List[ Status = "cancelled" ].Flight Number</dmn:text> </dmn:literalExpression> </dmn:contextEntry> <dmn:contextEntry> <dmn:variable name="Waiting List" typeRef="tPassengerTable"/> <dmn:literalExpression> <dmn:text>Passenger List[ list contains( Cancelled Flights, Flight Number ) ]</dmn:text> </dmn:literalExpression> </dmn:contextEntry> <dmn:contextEntry> <dmn:literalExpression> <dmn:text>sort( Waiting List, passenger priority )</dmn:text> </dmn:literalExpression> </dmn:contextEntry> </dmn:context> </dmn:decision> <dmn:decision name="Rebooked Passengers" id="d_RebookedPassengers"> <dmn:variable name="Rebooked Passengers" typeRef="tPassengerTable"/> <dmn:informationRequirement> <dmn:requiredDecision href="#d_PrioritizedWaitingList"/> </dmn:informationRequirement> <dmn:informationRequirement> <dmn:requiredInput href="#i_Flight_List"/> </dmn:informationRequirement> <dmn:knowledgeRequirement> <dmn:requiredKnowledge href="#b_ReassignNextPassenger"/> </dmn:knowledgeRequirement> <dmn:invocation> <dmn:literalExpression> <dmn:text>reassign passenger</dmn:text> </dmn:literalExpression> <dmn:binding> <dmn:parameter name="Waiting List"/> <dmn:literalExpression> <dmn:text>Prioritized Waiting List</dmn:text> </dmn:literalExpression> </dmn:binding> <dmn:binding> <dmn:parameter name="Reassigned Passengers List"/> <dmn:literalExpression> <dmn:text>[]</dmn:text> </dmn:literalExpression> </dmn:binding> <dmn:binding> <dmn:parameter name="Flights"/> <dmn:literalExpression> <dmn:text>Flight List</dmn:text> </dmn:literalExpression> </dmn:binding> </dmn:invocation> </dmn:decision> <dmn:businessKnowledgeModel id="b_PassengerPriority" name="passenger priority"> <dmn:encapsulatedLogic> <dmn:formalParameter name="Passenger1" typeRef="tPassenger"/> <dmn:formalParameter name="Passenger2" typeRef="tPassenger"/> <dmn:decisionTable hitPolicy="UNIQUE"> <dmn:input id="b_Passenger_Priority_dt_i_P1_Status" label="Passenger1.Status"> <dmn:inputExpression typeRef="feel:string"> <dmn:text>Passenger1.Status</dmn:text> </dmn:inputExpression> <dmn:inputValues> <dmn:text>"gold", "silver", "bronze"</dmn:text> </dmn:inputValues> </dmn:input> <dmn:input id="b_Passenger_Priority_dt_i_P2_Status" label="Passenger2.Status"> <dmn:inputExpression typeRef="feel:string"> <dmn:text>Passenger2.Status</dmn:text> </dmn:inputExpression> <dmn:inputValues> <dmn:text>"gold", "silver", "bronze"</dmn:text> </dmn:inputValues> </dmn:input> <dmn:input 
id="b_Passenger_Priority_dt_i_P1_Miles" label="Passenger1.Miles"> <dmn:inputExpression typeRef="feel:string"> <dmn:text>Passenger1.Miles</dmn:text> </dmn:inputExpression> </dmn:input> <dmn:output id="b_Status_Priority_dt_o" label="Passenger1 has priority"> <dmn:outputValues> <dmn:text>true, false</dmn:text> </dmn:outputValues> <dmn:defaultOutputEntry> <dmn:text>false</dmn:text> </dmn:defaultOutputEntry> </dmn:output> <dmn:rule id="b_Passenger_Priority_dt_r1"> <dmn:inputEntry id="b_Passenger_Priority_dt_r1_i1"> <dmn:text>"gold"</dmn:text> </dmn:inputEntry> <dmn:inputEntry id="b_Passenger_Priority_dt_r1_i2"> <dmn:text>"gold"</dmn:text> </dmn:inputEntry> <dmn:inputEntry id="b_Passenger_Priority_dt_r1_i3"> <dmn:text>>= Passenger2.Miles</dmn:text> </dmn:inputEntry> <dmn:outputEntry id="b_Passenger_Priority_dt_r1_o1"> <dmn:text>true</dmn:text> </dmn:outputEntry> </dmn:rule> <dmn:rule id="b_Passenger_Priority_dt_r2"> <dmn:inputEntry id="b_Passenger_Priority_dt_r2_i1"> <dmn:text>"gold"</dmn:text> </dmn:inputEntry> <dmn:inputEntry id="b_Passenger_Priority_dt_r2_i2"> <dmn:text>"silver","bronze"</dmn:text> </dmn:inputEntry> <dmn:inputEntry id="b_Passenger_Priority_dt_r2_i3"> <dmn:text>-</dmn:text> </dmn:inputEntry> <dmn:outputEntry id="b_Passenger_Priority_dt_r2_o1"> <dmn:text>true</dmn:text> </dmn:outputEntry> </dmn:rule> <dmn:rule id="b_Passenger_Priority_dt_r3"> <dmn:inputEntry id="b_Passenger_Priority_dt_r3_i1"> <dmn:text>"silver"</dmn:text> </dmn:inputEntry> <dmn:inputEntry id="b_Passenger_Priority_dt_r3_i2"> <dmn:text>"silver"</dmn:text> </dmn:inputEntry> <dmn:inputEntry id="b_Passenger_Priority_dt_r3_i3"> <dmn:text>>= Passenger2.Miles</dmn:text> </dmn:inputEntry> <dmn:outputEntry id="b_Passenger_Priority_dt_r3_o1"> <dmn:text>true</dmn:text> </dmn:outputEntry> </dmn:rule> <dmn:rule id="b_Passenger_Priority_dt_r4"> <dmn:inputEntry id="b_Passenger_Priority_dt_r4_i1"> <dmn:text>"silver"</dmn:text> </dmn:inputEntry> <dmn:inputEntry id="b_Passenger_Priority_dt_r4_i2"> <dmn:text>"bronze"</dmn:text> </dmn:inputEntry> <dmn:inputEntry id="b_Passenger_Priority_dt_r4_i3"> <dmn:text>-</dmn:text> </dmn:inputEntry> <dmn:outputEntry id="b_Passenger_Priority_dt_r4_o1"> <dmn:text>true</dmn:text> </dmn:outputEntry> </dmn:rule> <dmn:rule id="b_Passenger_Priority_dt_r5"> <dmn:inputEntry id="b_Passenger_Priority_dt_r5_i1"> <dmn:text>"bronze"</dmn:text> </dmn:inputEntry> <dmn:inputEntry id="b_Passenger_Priority_dt_r5_i2"> <dmn:text>"bronze"</dmn:text> </dmn:inputEntry> <dmn:inputEntry id="b_Passenger_Priority_dt_r5_i3"> <dmn:text>>= Passenger2.Miles</dmn:text> </dmn:inputEntry> <dmn:outputEntry id="b_Passenger_Priority_dt_r5_o1"> <dmn:text>true</dmn:text> </dmn:outputEntry> </dmn:rule> </dmn:decisionTable> </dmn:encapsulatedLogic> <dmn:variable name="passenger priority" typeRef="feel:boolean"/> </dmn:businessKnowledgeModel> <dmn:businessKnowledgeModel id="b_ReassignNextPassenger" name="reassign passenger"> <dmn:encapsulatedLogic> <dmn:formalParameter name="Waiting List" typeRef="tPassengerTable"/> <dmn:formalParameter name="Reassigned Passengers List" typeRef="tPassengerTable"/> <dmn:formalParameter name="Flights" typeRef="tFlightTable"/> <dmn:context> <dmn:contextEntry> <dmn:variable name=" Passenger" typeRef="tPassenger"/> <dmn:literalExpression> <dmn:text>Waiting List[1]</dmn:text> </dmn:literalExpression> </dmn:contextEntry> <dmn:contextEntry> <dmn:variable name="Original Flight" typeRef="tFlight"/> <dmn:literalExpression> <dmn:text>Flights[ Flight Number = Passenger.Flight Number ][1]</dmn:text> 
</dmn:literalExpression> </dmn:contextEntry> <dmn:contextEntry> <dmn:variable name="Best Alternate Flight" typeRef="tFlight"/> <dmn:literalExpression> <dmn:text>Flights[ From = Original Flight.From and To = Original Flight.To and Departure > Original Flight.Departure and Status = "scheduled" and has capacity( item, Reassigned Passengers List ) ][1]</dmn:text> </dmn:literalExpression> </dmn:contextEntry> <dmn:contextEntry> <dmn:variable name="Reassigned Passenger" typeRef="tPassenger"/> <dmn:context> <dmn:contextEntry> <dmn:variable name="Name" typeRef="feel:string"/> <dmn:literalExpression> <dmn:text> Passenger.Name</dmn:text> </dmn:literalExpression> </dmn:contextEntry> <dmn:contextEntry> <dmn:variable name="Status" typeRef="feel:string"/> <dmn:literalExpression> <dmn:text> Passenger.Status</dmn:text> </dmn:literalExpression> </dmn:contextEntry> <dmn:contextEntry> <dmn:variable name="Miles" typeRef="feel:number"/> <dmn:literalExpression> <dmn:text> Passenger.Miles</dmn:text> </dmn:literalExpression> </dmn:contextEntry> <dmn:contextEntry> <dmn:variable name="Flight Number" typeRef="feel:string"/> <dmn:literalExpression> <dmn:text>Best Alternate Flight.Flight Number</dmn:text> </dmn:literalExpression> </dmn:contextEntry> </dmn:context> </dmn:contextEntry> <dmn:contextEntry> <dmn:variable name="Remaining Waiting List" typeRef="tPassengerTable"/> <dmn:literalExpression> <dmn:text>remove( Waiting List, 1 )</dmn:text> </dmn:literalExpression> </dmn:contextEntry> <dmn:contextEntry> <dmn:variable name="Updated Reassigned Passengers List" typeRef="tPassengerTable"/> <dmn:literalExpression> <dmn:text>append( Reassigned Passengers List, Reassigned Passenger )</dmn:text> </dmn:literalExpression> </dmn:contextEntry> <dmn:contextEntry> <dmn:literalExpression> <dmn:text>if count( Remaining Waiting List ) > 0 then reassign passenger( Remaining Waiting List, Updated Reassigned Passengers List, Flights ) else Updated Reassigned Passengers List</dmn:text> </dmn:literalExpression> </dmn:contextEntry> </dmn:context> </dmn:encapsulatedLogic> <dmn:variable name="reassign passenger" typeRef="tPassengerTable"/> <dmn:knowledgeRequirement> <dmn:requiredKnowledge href="#b_HasCapacity"/> </dmn:knowledgeRequirement> </dmn:businessKnowledgeModel> <dmn:businessKnowledgeModel id="b_HasCapacity" name="has capacity"> <dmn:encapsulatedLogic> <dmn:formalParameter name="flight" typeRef="tFlight"/> <dmn:formalParameter name="rebooked list" typeRef="tPassengerTable"/> <dmn:literalExpression> <dmn:text>flight.Capacity > count( rebooked list[ Flight Number = flight.Flight Number ] )</dmn:text> </dmn:literalExpression> </dmn:encapsulatedLogic> <dmn:variable name="has capacity" typeRef="feel:boolean"/> </dmn:businessKnowledgeModel> </dmn:definitions> | [
"\"John Doe\"",
"date( \"2017-06-23\" )",
"time( \"04:25:12\" ) time( \"14:10:00+02:00\" ) time( \"22:35:40.345-05:00\" ) time( \"15:00:30z\" )",
"date and time( \"2017-10-22T23:59:00\" ) date and time( \"2017-06-13T14:10:00+02:00\" ) date and time( \"2017-02-05T22:35:40.345-05:00\" ) date and time( \"2017-06-13T15:00:30z\" )",
"duration( \"P1DT23H12M30S\" ) duration( \"P23D\" ) duration( \"PT12H\" ) duration( \"PT35M\" )",
"duration( \"P3Y5M\" ) duration( \"P2Y\" ) duration( \"P10M\" ) duration( \"P25M\" )",
"function(a, b) a + b",
"{ x : 5, y : 3 }",
"range := interval_start endpoint '..' endpoint interval_end interval_start := open_start | closed_start open_start := '(' | ']' closed_start := '[' interval_end := open_end | closed_end open_end := ')' | '[' closed_end := ']' endpoint := expression",
"[ 1 .. 10 ]",
"[ duration(\"PT1H\") .. duration(\"PT12H\") )",
"x in [ 1 .. 100 ]",
"[ 2, 3, 4, 5 ]",
"x[2]",
"x[-2]",
"count([ 2, 3, 4, 5 ])",
"date( \"2012-12-25\" ) - date( \"2012-12-24\" ) = duration( \"P1D\" )",
"date(date and time( \"2012-12-25T11:00:00Z\" )) = date( \"2012-12-25\" )",
"date( 2012, 12, 25 ) = date( \"2012-12-25\" )",
"date and time ( \"2012-12-24T23:59:00\" ) = date and time(date( \"2012-12-24\" ), time( \"23:59:00\" ))",
"date and time( \"2012-12-24T23:59:00\" ) + duration( \"PT1M\" ) = date and time( \"2012-12-25T00:00:00\" )",
"time( \"23:59:00z\" ) + duration( \"PT2M\" ) = time( \"00:01:00@Etc/UTC\" )",
"time(date and time( \"2012-12-25T11:00:00Z\" )) = time( \"11:00:00Z\" )",
"time( \"23:59:00z\" ) = time(23, 59, 0, duration( \"PT0H\" ))",
"number( \"1 000,0\", \" \", \",\" ) = number( \"1,000.0\", \",\", \".\" )",
"string( 1.1 ) = \"1.1\" string( null ) = null",
"date and time( \"2012-12-24T23:59:00\" ) - date and time( \"2012-12-22T03:45:00\" ) = duration( \"P2DT20H14M\" ) duration( \"P2Y2M\" ) = duration( \"P26M\" )",
"years and months duration( date( \"2011-12-22\" ), date( \"2013-08-24\" ) ) = duration( \"P1Y8M\" )",
"not( true ) = false not( null ) = null",
"substring( \"testing\",3 ) = \"sting\" substring( \"testing\",3,3 ) = \"sti\" substring( \"testing\", -2, 1 ) = \"n\" substring( \"\\U01F40Eab\", 2 ) = \"ab\"",
"string length( \"tes\" ) = 3 string length( \"\\U01F40Eab\" ) = 3",
"upper case( \"aBc4\" ) = \"ABC4\"",
"lower case( \"aBc4\" ) = \"abc4\"",
"substring before( \"testing\", \"ing\" ) = \"test\" substring before( \"testing\", \"xyz\" ) = \"\"",
"substring after( \"testing\", \"test\" ) = \"ing\" substring after( \"\", \"a\" ) = \"\"",
"replace( \"abcd\", \"(ab)|(a)\", \"[1=USD1][2=USD2]\" ) = \"[1=ab][2=]cd\"",
"contains( \"testing\", \"to\" ) = false",
"starts with( \"testing\", \"te\" ) = true",
"ends with( \"testing\", \"g\" ) = true",
"matches( \"teeesting\", \"^te*sting\" ) = true",
"split( \"John Doe\", \"\\\\s\" ) = [\"John\", \"Doe\"] split( \"a;b;c;;\", \";\" ) = [\"a\",\"b\",\"c\",\"\",\"\"]",
"list contains( [1,2,3], 2 ) = true",
"count( [1,2,3] ) = 3 count( [] ) = 0 count( [1,[2,3]] ) = 2",
"min( e1, e2, ..., eN )",
"min( [1,2,3] ) = 1 min( 1 ) = 1 min( [1] ) = 1",
"max( e1, e2, ..., eN )",
"max( 1,2,3 ) = 3 max( [] ) = null",
"sum( n1, n2, ..., nN )",
"sum( [1,2,3] ) = 6 sum( 1,2,3 ) = 6 sum( 1 ) = 1 sum( [] ) = null",
"mean( n1, n2, ..., nN )",
"mean( [1,2,3] ) = 2 mean( 1,2,3 ) = 2 mean( 1 ) = 1 mean( [] ) = null",
"all( b1, b2, ..., bN )",
"all( [false,null,true] ) = false all( true ) = true all( [true] ) = true all( [] ) = true all( 0 ) = null",
"any( b1, b2, ..., bN )",
"any( [false,null,true] ) = true any( false ) = false any( [] ) = false any( 0 ) = null",
"sublist( [4,5,6], 1, 2 ) = [4,5]",
"append( [1], 2, 3 ) = [1,2,3]",
"concatenate( [1,2],[3] ) = [1,2,3]",
"insert before( [1,3],1,2 ) = [2,1,3]",
"remove( [1,2,3], 2 ) = [1,3]",
"reverse( [1,2,3] ) = [3,2,1]",
"index of( [1,2,3,2],2 ) = [2,4]",
"union( [1,2],[2,3] ) = [1,2,3]",
"distinct values( [1,2,3,2,1] ) = [1,2,3]",
"flatten( [[1,2],[[3]], 4] ) = [1,2,3,4]",
"product( n1, n2, ..., nN )",
"product( [2, 3, 4] ) = 24 product( 2, 3, 4 ) = 24",
"median( n1, n2, ..., nN )",
"median( 8, 2, 5, 3, 4 ) = 4 median( [6, 1, 2, 3] ) = 2.5 median( [ ] ) = null",
"stddev( n1, n2, ..., nN )",
"stddev( 2, 4, 7, 5 ) = 2.081665999466132735282297706979931 stddev( [47] ) = null stddev( 47 ) = null stddev( [ ] ) = null",
"mode( n1, n2, ..., nN )",
"mode( 6, 3, 9, 6, 6 ) = [6] mode( [6, 1, 9, 6, 1] ) = [1, 6] mode( [ ] ) = [ ]",
"for i in [1, 2, 3] return i * i = [1, 4, 9] for i in [1,2,3], j in [1,2,3] return i*j = [1, 2, 3, 2, 4, 6, 3, 6, 9]",
"some i in [1, 2, 3] satisfies i > 3 = true some i in [1, 2, 3] satisfies i > 4 = false",
"every i in [1, 2, 3] satisfies i > 1 = false every i in [1, 2, 3] satisfies i > 0 = true",
"decimal( 1/3, 2 ) = .33 decimal( 1.5, 0 ) = 2 decimal( 2.5, 0 ) = 2 decimal( 1.035, 2 ) = 1.04 decimal( 1.045, 2 ) = 1.04 decimal( 1.055, 2 ) = 1.06 decimal( 1.065, 2 ) = 1.06",
"floor( 1.5 ) = 1 floor( -1.5 ) = -2",
"ceiling( 1.5 ) = 2 ceiling( -1.5 ) = -1",
"abs( 10 ) = 10 abs( -10 ) = 10 abs( @\"PT5H\" ) = @\"PT5H\" abs( @\"-PT5H\" ) = @\"PT5H\"",
"modulo( 12, 5 ) = 2 modulo( -12,5 )= 3 modulo( 12,-5 )= -3 modulo( -12,-5 )= -2 modulo( 10.1, 4.5 )= 1.1 modulo( -10.1, 4.5 )= 3.4 modulo( 10.1, -4.5 )= -3.4 modulo( -10.1, -4.5 )= -1.1",
"sqrt( 16 ) = 4",
"decimal( log( 10 ), 2 ) = 2.30",
"decimal( exp( 5 ), 2 ) = 148.41",
"odd( 5 ) = true odd( 2 ) = false",
"even( 5 ) = false even ( 2 ) = true",
"is( date( \"2012-12-25\" ), time( \"23:00:50\" ) ) = false is( date( \"2012-12-25\" ), date( \"2012-12-25\" ) ) = true is( time( \"23:00:50z\" ), time( \"23:00:50\" ) ) = false",
"before( 1, 10 ) = true before( 10, 1 ) = false before( 1, [1..10] ) = false before( 1, (1..10] ) = true before( 1, [5..10] ) = true before( [1..10], 10 ) = false before( [1..10), 10 ) = true before( [1..10], 15 ) = true before( [1..10], [15..20] ) = true before( [1..10], [10..20] ) = false before( [1..10), [10..20] ) = true before( [1..10], (10..20] ) = true",
"after( 10, 5 ) = true after( 5, 10 ) = false after( 12, [1..10] ) = true after( 10, [1..10) ) = true after( 10, [1..10] ) = false after( [11..20], 12 ) = false after( [11..20], 10 ) = true after( (11..20], 11 ) = true after( [11..20], 11 ) = false after( [11..20], [1..10] ) = true after( [1..10], [11..20] ) = false after( [11..20], [1..11) ) = true after( (11..20], [1..11] ) = true",
"meets( [1..5], [5..10] ) = true meets( [1..5), [5..10] ) = false meets( [1..5], (5..10] ) = false meets( [1..5], [6..10] ) = false",
"met by( [5..10], [1..5] ) = true met by( [5..10], [1..5) ) = false met by( (5..10], [1..5] ) = false met by( [6..10], [1..5] ) = false",
"overlaps( [1..5], [3..8] ) = true overlaps( [3..8], [1..5] ) = true overlaps( [1..8], [3..5] ) = true overlaps( [3..5], [1..8] ) = true overlaps( [1..5], [6..8] ) = false overlaps( [6..8], [1..5] ) = false overlaps( [1..5], [5..8] ) = true overlaps( [1..5], (5..8] ) = false overlaps( [1..5), [5..8] ) = false overlaps( [1..5), (5..8] ) = false overlaps( [5..8], [1..5] ) = true overlaps( (5..8], [1..5] ) = false overlaps( [5..8], [1..5) ) = false overlaps( (5..8], [1..5) ) = false",
"overlaps before( [1..5], [3..8] ) = true overlaps before( [1..5], [6..8] ) = false overlaps before( [1..5], [5..8] ) = true overlaps before( [1..5], (5..8] ) = false overlaps before( [1..5), [5..8] ) = false overlaps before( [1..5), (1..5] ) = true overlaps before( [1..5], (1..5] ) = true overlaps before( [1..5), [1..5] ) = false overlaps before( [1..5], [1..5] ) = false",
"overlaps after( [3..8], [1..5] )= true overlaps after( [6..8], [1..5] )= false overlaps after( [5..8], [1..5] )= true overlaps after( (5..8], [1..5] )= false overlaps after( [5..8], [1..5) )= false overlaps after( (1..5], [1..5) )= true overlaps after( (1..5], [1..5] )= true overlaps after( [1..5], [1..5) )= false overlaps after( [1..5], [1..5] )= false overlaps after( (1..5), [1..5] )= false overlaps after( (1..5], [1..6] )= false overlaps after( (1..5], (1..5] )= false overlaps after( (1..5], [2..5] )= false",
"finishes( 10, [1..10] ) = true finishes( 10, [1..10) ) = false finishes( [5..10], [1..10] ) = true finishes( [5..10), [1..10] ) = false finishes( [5..10), [1..10) ) = true finishes( [1..10], [1..10] ) = true finishes( (1..10], [1..10] ) = true",
"finished by( [1..10], 10 ) = true finished by( [1..10), 10 ) = false finished by( [1..10], [5..10] ) = true finished by( [1..10], [5..10) ) = false finished by( [1..10), [5..10) ) = true finished by( [1..10], [1..10] ) = true finished by( [1..10], (1..10] ) = true",
"includes( [1..10], 5 ) = true includes( [1..10], 12 ) = false includes( [1..10], 1 ) = true includes( [1..10], 10 ) = true includes( (1..10], 1 ) = false includes( [1..10), 10 ) = false includes( [1..10], [4..6] ) = true includes( [1..10], [1..5] ) = true includes( (1..10], (1..5] ) = true includes( [1..10], (1..10) ) = true includes( [1..10), [5..10) ) = true includes( [1..10], [1..10) ) = true includes( [1..10], (1..10] ) = true includes( [1..10], [1..10] ) = true",
"during( 5, [1..10] ) = true during( 12, [1..10] ) = false during( 1, [1..10] ) = true during( 10, [1..10] ) = true during( 1, (1..10] ) = false during( 10, [1..10) ) = false during( [4..6], [1..10] ) = true during( [1..5], [1..10] ) = true during( (1..5], (1..10] ) = true during( (1..10), [1..10] ) = true during( [5..10), [1..10) ) = true during( [1..10), [1..10] ) = true during( (1..10], [1..10] ) = true during( [1..10], [1..10] ) = true",
"starts( 1, [1..10] ) = true starts( 1, (1..10] ) = false starts( 2, [1..10] ) = false starts( [1..5], [1..10] ) = true starts( (1..5], (1..10] ) = true starts( (1..5], [1..10] ) = false starts( [1..5], (1..10] ) = false starts( [1..10], [1..10] ) = true starts( [1..10), [1..10] ) = true starts( (1..10), (1..10) ) = true",
"started by( [1..10], 1 ) = true started by( (1..10], 1 ) = false started by( [1..10], 2 ) = false started by( [1..10], [1..5] ) = true started by( (1..10], (1..5] ) = true started by( [1..10], (1..5] ) = false started by( (1..10], [1..5] ) = false started by( [1..10], [1..10] ) = true started by( [1..10], [1..10) ) = true started by( (1..10), (1..10) ) = true",
"coincides( 5, 5 ) = true coincides( 3, 4 ) = false coincides( [1..5], [1..5] ) = true coincides( (1..5), [1..5] ) = false coincides( [1..5], [2..6] ) = false",
"day of year( date(2019, 9, 17) ) = 260",
"day of week( date(2019, 9, 17) ) = \"Tuesday\"",
"month of year( date(2019, 9, 17) ) = \"September\"",
"week of year( date(2019, 9, 17) ) = 38 week of year( date(2003, 12, 29) ) = 1 week of year( date(2004, 1, 4) ) = 1 week of year( date(2005, 1, 1) ) = 53 week of year( date(2005, 1, 3) ) = 1 week of year( date(2005, 1, 9) ) = 1",
"sort( list: [3,1,4,5,2], precedes: function(x,y) x < y ) = [1,2,3,4,5]",
"get value( {key1 : \"value1\"}, \"key1\" ) = \"value1\" get value( {key1 : \"value1\"}, \"unexistent-key\" ) = null",
"get entries( {key1 : \"value1\", key2 : \"value2\"} ) = [ { key : \"key1\", value : \"value1\" }, {key : \"key2\", value : \"value2\"} ]",
"<dmn:definitions xmlns=\"https://www.drools.org/kie-dmn/Flight-rebooking\" xmlns:dmn=\"http://www.omg.org/spec/DMN/20151101/dmn.xsd\" xmlns:feel=\"http://www.omg.org/spec/FEEL/20140401\" id=\"_0019_flight_rebooking\" name=\"0019-flight-rebooking\" namespace=\"https://www.drools.org/kie-dmn/Flight-rebooking\"> <dmn:itemDefinition id=\"_tFlight\" name=\"tFlight\"> <dmn:itemComponent id=\"_tFlight_Flight\" name=\"Flight Number\"> <dmn:typeRef>feel:string</dmn:typeRef> </dmn:itemComponent> <dmn:itemComponent id=\"_tFlight_From\" name=\"From\"> <dmn:typeRef>feel:string</dmn:typeRef> </dmn:itemComponent> <dmn:itemComponent id=\"_tFlight_To\" name=\"To\"> <dmn:typeRef>feel:string</dmn:typeRef> </dmn:itemComponent> <dmn:itemComponent id=\"_tFlight_Dep\" name=\"Departure\"> <dmn:typeRef>feel:dateTime</dmn:typeRef> </dmn:itemComponent> <dmn:itemComponent id=\"_tFlight_Arr\" name=\"Arrival\"> <dmn:typeRef>feel:dateTime</dmn:typeRef> </dmn:itemComponent> <dmn:itemComponent id=\"_tFlight_Capacity\" name=\"Capacity\"> <dmn:typeRef>feel:number</dmn:typeRef> </dmn:itemComponent> <dmn:itemComponent id=\"_tFlight_Status\" name=\"Status\"> <dmn:typeRef>feel:string</dmn:typeRef> </dmn:itemComponent> </dmn:itemDefinition> <dmn:itemDefinition id=\"_tFlightTable\" isCollection=\"true\" name=\"tFlightTable\"> <dmn:typeRef>tFlight</dmn:typeRef> </dmn:itemDefinition> <dmn:itemDefinition id=\"_tPassenger\" name=\"tPassenger\"> <dmn:itemComponent id=\"_tPassenger_Name\" name=\"Name\"> <dmn:typeRef>feel:string</dmn:typeRef> </dmn:itemComponent> <dmn:itemComponent id=\"_tPassenger_Status\" name=\"Status\"> <dmn:typeRef>feel:string</dmn:typeRef> </dmn:itemComponent> <dmn:itemComponent id=\"_tPassenger_Miles\" name=\"Miles\"> <dmn:typeRef>feel:number</dmn:typeRef> </dmn:itemComponent> <dmn:itemComponent id=\"_tPassenger_Flight\" name=\"Flight Number\"> <dmn:typeRef>feel:string</dmn:typeRef> </dmn:itemComponent> </dmn:itemDefinition> <dmn:itemDefinition id=\"_tPassengerTable\" isCollection=\"true\" name=\"tPassengerTable\"> <dmn:typeRef>tPassenger</dmn:typeRef> </dmn:itemDefinition> <dmn:itemDefinition id=\"_tFlightNumberList\" isCollection=\"true\" name=\"tFlightNumberList\"> <dmn:typeRef>feel:string</dmn:typeRef> </dmn:itemDefinition> <dmn:inputData id=\"i_Flight_List\" name=\"Flight List\"> <dmn:variable name=\"Flight List\" typeRef=\"tFlightTable\"/> </dmn:inputData> <dmn:inputData id=\"i_Passenger_List\" name=\"Passenger List\"> <dmn:variable name=\"Passenger List\" typeRef=\"tPassengerTable\"/> </dmn:inputData> <dmn:decision name=\"Prioritized Waiting List\" id=\"d_PrioritizedWaitingList\"> <dmn:variable name=\"Prioritized Waiting List\" typeRef=\"tPassengerTable\"/> <dmn:informationRequirement> <dmn:requiredInput href=\"#i_Passenger_List\"/> </dmn:informationRequirement> <dmn:informationRequirement> <dmn:requiredInput href=\"#i_Flight_List\"/> </dmn:informationRequirement> <dmn:knowledgeRequirement> <dmn:requiredKnowledge href=\"#b_PassengerPriority\"/> </dmn:knowledgeRequirement> <dmn:context> <dmn:contextEntry> <dmn:variable name=\"Cancelled Flights\" typeRef=\"tFlightNumberList\"/> <dmn:literalExpression> <dmn:text>Flight List[ Status = \"cancelled\" ].Flight Number</dmn:text> </dmn:literalExpression> </dmn:contextEntry> <dmn:contextEntry> <dmn:variable name=\"Waiting List\" typeRef=\"tPassengerTable\"/> <dmn:literalExpression> <dmn:text>Passenger List[ list contains( Cancelled Flights, Flight Number ) ]</dmn:text> </dmn:literalExpression> </dmn:contextEntry> <dmn:contextEntry> <dmn:literalExpression> 
<dmn:text>sort( Waiting List, passenger priority )</dmn:text> </dmn:literalExpression> </dmn:contextEntry> </dmn:context> </dmn:decision> <dmn:decision name=\"Rebooked Passengers\" id=\"d_RebookedPassengers\"> <dmn:variable name=\"Rebooked Passengers\" typeRef=\"tPassengerTable\"/> <dmn:informationRequirement> <dmn:requiredDecision href=\"#d_PrioritizedWaitingList\"/> </dmn:informationRequirement> <dmn:informationRequirement> <dmn:requiredInput href=\"#i_Flight_List\"/> </dmn:informationRequirement> <dmn:knowledgeRequirement> <dmn:requiredKnowledge href=\"#b_ReassignNextPassenger\"/> </dmn:knowledgeRequirement> <dmn:invocation> <dmn:literalExpression> <dmn:text>reassign next passenger</dmn:text> </dmn:literalExpression> <dmn:binding> <dmn:parameter name=\"Waiting List\"/> <dmn:literalExpression> <dmn:text>Prioritized Waiting List</dmn:text> </dmn:literalExpression> </dmn:binding> <dmn:binding> <dmn:parameter name=\"Reassigned Passengers List\"/> <dmn:literalExpression> <dmn:text>[]</dmn:text> </dmn:literalExpression> </dmn:binding> <dmn:binding> <dmn:parameter name=\"Flights\"/> <dmn:literalExpression> <dmn:text>Flight List</dmn:text> </dmn:literalExpression> </dmn:binding> </dmn:invocation> </dmn:decision> <dmn:businessKnowledgeModel id=\"b_PassengerPriority\" name=\"passenger priority\"> <dmn:encapsulatedLogic> <dmn:formalParameter name=\"Passenger1\" typeRef=\"tPassenger\"/> <dmn:formalParameter name=\"Passenger2\" typeRef=\"tPassenger\"/> <dmn:decisionTable hitPolicy=\"UNIQUE\"> <dmn:input id=\"b_Passenger_Priority_dt_i_P1_Status\" label=\"Passenger1.Status\"> <dmn:inputExpression typeRef=\"feel:string\"> <dmn:text>Passenger1.Status</dmn:text> </dmn:inputExpression> <dmn:inputValues> <dmn:text>\"gold\", \"silver\", \"bronze\"</dmn:text> </dmn:inputValues> </dmn:input> <dmn:input id=\"b_Passenger_Priority_dt_i_P2_Status\" label=\"Passenger2.Status\"> <dmn:inputExpression typeRef=\"feel:string\"> <dmn:text>Passenger2.Status</dmn:text> </dmn:inputExpression> <dmn:inputValues> <dmn:text>\"gold\", \"silver\", \"bronze\"</dmn:text> </dmn:inputValues> </dmn:input> <dmn:input id=\"b_Passenger_Priority_dt_i_P1_Miles\" label=\"Passenger1.Miles\"> <dmn:inputExpression typeRef=\"feel:string\"> <dmn:text>Passenger1.Miles</dmn:text> </dmn:inputExpression> </dmn:input> <dmn:output id=\"b_Status_Priority_dt_o\" label=\"Passenger1 has priority\"> <dmn:outputValues> <dmn:text>true, false</dmn:text> </dmn:outputValues> <dmn:defaultOutputEntry> <dmn:text>false</dmn:text> </dmn:defaultOutputEntry> </dmn:output> <dmn:rule id=\"b_Passenger_Priority_dt_r1\"> <dmn:inputEntry id=\"b_Passenger_Priority_dt_r1_i1\"> <dmn:text>\"gold\"</dmn:text> </dmn:inputEntry> <dmn:inputEntry id=\"b_Passenger_Priority_dt_r1_i2\"> <dmn:text>\"gold\"</dmn:text> </dmn:inputEntry> <dmn:inputEntry id=\"b_Passenger_Priority_dt_r1_i3\"> <dmn:text>>= Passenger2.Miles</dmn:text> </dmn:inputEntry> <dmn:outputEntry id=\"b_Passenger_Priority_dt_r1_o1\"> <dmn:text>true</dmn:text> </dmn:outputEntry> </dmn:rule> <dmn:rule id=\"b_Passenger_Priority_dt_r2\"> <dmn:inputEntry id=\"b_Passenger_Priority_dt_r2_i1\"> <dmn:text>\"gold\"</dmn:text> </dmn:inputEntry> <dmn:inputEntry id=\"b_Passenger_Priority_dt_r2_i2\"> <dmn:text>\"silver\",\"bronze\"</dmn:text> </dmn:inputEntry> <dmn:inputEntry id=\"b_Passenger_Priority_dt_r2_i3\"> <dmn:text>-</dmn:text> </dmn:inputEntry> <dmn:outputEntry id=\"b_Passenger_Priority_dt_r2_o1\"> <dmn:text>true</dmn:text> </dmn:outputEntry> </dmn:rule> <dmn:rule id=\"b_Passenger_Priority_dt_r3\"> <dmn:inputEntry 
id=\"b_Passenger_Priority_dt_r3_i1\"> <dmn:text>\"silver\"</dmn:text> </dmn:inputEntry> <dmn:inputEntry id=\"b_Passenger_Priority_dt_r3_i2\"> <dmn:text>\"silver\"</dmn:text> </dmn:inputEntry> <dmn:inputEntry id=\"b_Passenger_Priority_dt_r3_i3\"> <dmn:text>>= Passenger2.Miles</dmn:text> </dmn:inputEntry> <dmn:outputEntry id=\"b_Passenger_Priority_dt_r3_o1\"> <dmn:text>true</dmn:text> </dmn:outputEntry> </dmn:rule> <dmn:rule id=\"b_Passenger_Priority_dt_r4\"> <dmn:inputEntry id=\"b_Passenger_Priority_dt_r4_i1\"> <dmn:text>\"silver\"</dmn:text> </dmn:inputEntry> <dmn:inputEntry id=\"b_Passenger_Priority_dt_r4_i2\"> <dmn:text>\"bronze\"</dmn:text> </dmn:inputEntry> <dmn:inputEntry id=\"b_Passenger_Priority_dt_r4_i3\"> <dmn:text>-</dmn:text> </dmn:inputEntry> <dmn:outputEntry id=\"b_Passenger_Priority_dt_r4_o1\"> <dmn:text>true</dmn:text> </dmn:outputEntry> </dmn:rule> <dmn:rule id=\"b_Passenger_Priority_dt_r5\"> <dmn:inputEntry id=\"b_Passenger_Priority_dt_r5_i1\"> <dmn:text>\"bronze\"</dmn:text> </dmn:inputEntry> <dmn:inputEntry id=\"b_Passenger_Priority_dt_r5_i2\"> <dmn:text>\"bronze\"</dmn:text> </dmn:inputEntry> <dmn:inputEntry id=\"b_Passenger_Priority_dt_r5_i3\"> <dmn:text>>= Passenger2.Miles</dmn:text> </dmn:inputEntry> <dmn:outputEntry id=\"b_Passenger_Priority_dt_r5_o1\"> <dmn:text>true</dmn:text> </dmn:outputEntry> </dmn:rule> </dmn:decisionTable> </dmn:encapsulatedLogic> <dmn:variable name=\"passenger priority\" typeRef=\"feel:boolean\"/> </dmn:businessKnowledgeModel> <dmn:businessKnowledgeModel id=\"b_ReassignNextPassenger\" name=\"reassign next passenger\"> <dmn:encapsulatedLogic> <dmn:formalParameter name=\"Waiting List\" typeRef=\"tPassengerTable\"/> <dmn:formalParameter name=\"Reassigned Passengers List\" typeRef=\"tPassengerTable\"/> <dmn:formalParameter name=\"Flights\" typeRef=\"tFlightTable\"/> <dmn:context> <dmn:contextEntry> <dmn:variable name=\"Next Passenger\" typeRef=\"tPassenger\"/> <dmn:literalExpression> <dmn:text>Waiting List[1]</dmn:text> </dmn:literalExpression> </dmn:contextEntry> <dmn:contextEntry> <dmn:variable name=\"Original Flight\" typeRef=\"tFlight\"/> <dmn:literalExpression> <dmn:text>Flights[ Flight Number = Next Passenger.Flight Number ][1]</dmn:text> </dmn:literalExpression> </dmn:contextEntry> <dmn:contextEntry> <dmn:variable name=\"Best Alternate Flight\" typeRef=\"tFlight\"/> <dmn:literalExpression> <dmn:text>Flights[ From = Original Flight.From and To = Original Flight.To and Departure > Original Flight.Departure and Status = \"scheduled\" and has capacity( item, Reassigned Passengers List ) ][1]</dmn:text> </dmn:literalExpression> </dmn:contextEntry> <dmn:contextEntry> <dmn:variable name=\"Reassigned Passenger\" typeRef=\"tPassenger\"/> <dmn:context> <dmn:contextEntry> <dmn:variable name=\"Name\" typeRef=\"feel:string\"/> <dmn:literalExpression> <dmn:text>Next Passenger.Name</dmn:text> </dmn:literalExpression> </dmn:contextEntry> <dmn:contextEntry> <dmn:variable name=\"Status\" typeRef=\"feel:string\"/> <dmn:literalExpression> <dmn:text>Next Passenger.Status</dmn:text> </dmn:literalExpression> </dmn:contextEntry> <dmn:contextEntry> <dmn:variable name=\"Miles\" typeRef=\"feel:number\"/> <dmn:literalExpression> <dmn:text>Next Passenger.Miles</dmn:text> </dmn:literalExpression> </dmn:contextEntry> <dmn:contextEntry> <dmn:variable name=\"Flight Number\" typeRef=\"feel:string\"/> <dmn:literalExpression> <dmn:text>Best Alternate Flight.Flight Number</dmn:text> </dmn:literalExpression> </dmn:contextEntry> </dmn:context> </dmn:contextEntry> 
<dmn:contextEntry> <dmn:variable name=\"Remaining Waiting List\" typeRef=\"tPassengerTable\"/> <dmn:literalExpression> <dmn:text>remove( Waiting List, 1 )</dmn:text> </dmn:literalExpression> </dmn:contextEntry> <dmn:contextEntry> <dmn:variable name=\"Updated Reassigned Passengers List\" typeRef=\"tPassengerTable\"/> <dmn:literalExpression> <dmn:text>append( Reassigned Passengers List, Reassigned Passenger )</dmn:text> </dmn:literalExpression> </dmn:contextEntry> <dmn:contextEntry> <dmn:literalExpression> <dmn:text>if count( Remaining Waiting List ) > 0 then reassign next passenger( Remaining Waiting List, Updated Reassigned Passengers List, Flights ) else Updated Reassigned Passengers List</dmn:text> </dmn:literalExpression> </dmn:contextEntry> </dmn:context> </dmn:encapsulatedLogic> <dmn:variable name=\"reassign next passenger\" typeRef=\"tPassengerTable\"/> <dmn:knowledgeRequirement> <dmn:requiredKnowledge href=\"#b_HasCapacity\"/> </dmn:knowledgeRequirement> </dmn:businessKnowledgeModel> <dmn:businessKnowledgeModel id=\"b_HasCapacity\" name=\"has capacity\"> <dmn:encapsulatedLogic> <dmn:formalParameter name=\"flight\" typeRef=\"tFlight\"/> <dmn:formalParameter name=\"rebooked list\" typeRef=\"tPassengerTable\"/> <dmn:literalExpression> <dmn:text>flight.Capacity > count( rebooked list[ Flight Number = flight.Flight Number ] )</dmn:text> </dmn:literalExpression> </dmn:encapsulatedLogic> <dmn:variable name=\"has capacity\" typeRef=\"feel:boolean\"/> </dmn:businessKnowledgeModel> </dmn:definitions>"
] | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_decision_services_in_red_hat_process_automation_manager/dmn-con_dmn-models |
Chapter 2. Installing the Red Hat Quay Operator from the OperatorHub | Chapter 2. Installing the Red Hat Quay Operator from the OperatorHub Use the following procedure to install the Red Hat Quay Operator from the OpenShift Container Platform OperatorHub. Procedure Using the OpenShift Container Platform console, select Operators OperatorHub . In the search box, type Red Hat Quay and select the official Red Hat Quay Operator provided by Red Hat. This directs you to the Installation page, which outlines the features, prerequisites, and deployment information. Select Install . This directs you to the Operator Installation page. The following choices are available for customizing the installation: Update Channel: Choose the update channel, for example, stable-3 for the latest release. Installation Mode: Choose All namespaces on the cluster if you want the Red Hat Quay Operator to be available cluster-wide. It is recommended that you install the Red Hat Quay Operator cluster-wide. If you choose a single namespace, the monitoring component will not be available by default. Choose A specific namespace on the cluster if you want it deployed only within a single namespace. Approval Strategy: Choose to approve either automatic or manual updates. Automatic update strategy is recommended. Select Install . | null | https://docs.redhat.com/en/documentation/red_hat_quay/3/html/deploying_the_red_hat_quay_operator_on_openshift_container_platform/operator-install |
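The installation steps above are performed entirely in the OpenShift Container Platform web console. As a minimal, illustrative follow-up check (not part of the original procedure), the commands below confirm that the Operator's ClusterServiceVersion reached the Succeeded phase; they assume a cluster-wide installation, which places the Operator in the openshift-operators namespace, and an oc session with sufficient privileges.
# List the ClusterServiceVersions for cluster-wide Operator installs and check the PHASE column:
oc get csv -n openshift-operators
# Limit the output to the Red Hat Quay entry; the exact CSV name varies by installed version:
oc get csv -n openshift-operators | grep -i quay
If the PHASE column does not show Succeeded after a few minutes, reviewing the Operator's Subscription and InstallPlan resources in the same namespace is a reasonable next step.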
5.374. rhn-client-tools | 5.374. rhn-client-tools 5.374.1. RHBA-2013:1384 - rhn-client-tools bug fix and enhancement update Updated rhn-client-tools packages that fix one bug and add one enhancement are now available for Red Hat Enterprise Linux 6 Extended Update Support. Red Hat Network Client Tools provide programs and libraries that allow systems to receive software updates from Red Hat Network (RHN). Bug Fix BZ# 993080 The RHN Proxy did not work properly if separated from a parent by a slow enough network. Consequently, users who attempted to download larger repodata files and RPMs experienced timeouts. This update changes both RHN Proxy and Red Hat Enterprise Linux RHN Client to allow all communications to obey a configured timeout value for connections. Enhancement BZ# 993073 While Satellite 5.3.0 now has the ability to get the number of CPUs via an API call, there was no function to obtain the number of sockets from the registered systems. This update adds a function to get the number of physical CPU sockets in a managed system from Satellite via an API call. Users of rhn-client-tools are advised to upgrade to these updated packages, which fix this bug and add this enhancement. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/rhn-client-tools |
Chapter 5. Deploying a virt-who configuration | Chapter 5. Deploying a virt-who configuration After you create a virt-who configuration, Satellite provides a script to automate the deployment process. The script installs virt-who and creates the individual and global virt-who configuration files. For Red Hat products, you must deploy each configuration file on the hypervisor specified in the file. For other products, you must deploy the configuration files on Satellite Server, Capsule Server, or a separate Red Hat Enterprise Linux server that is dedicated to running virt-who. To deploy the files on a hypervisor or Capsule Server, see Section 5.1, "Deploying a virt-who configuration on a hypervisor" . To deploy the files on Satellite Server, see Section 5.2, "Deploying a virt-who configuration on Satellite Server" . To deploy the files on a separate Red Hat Enterprise Linux server, see Section 5.3, "Deploying a virt-who configuration on a separate Red Hat Enterprise Linux server" . 5.1. Deploying a virt-who configuration on a hypervisor Use this procedure to deploy a virt-who configuration on the Red Hat hypervisor that you specified in the file. Global values apply only to this hypervisor. You can also use this procedure to deploy a vCenter or Hyper-V virt-who configuration on Capsule Server. Global configuration values apply to all virt-who configurations on the same Capsule Server, and are overwritten each time a new virt-who configuration is deployed. Prerequisites Register the hypervisor to Red Hat Satellite. If you are using Red Hat Virtualization Host (RHVH), update it to the latest version so that the minimum virt-who version is available. Virt-who is available by default on RHVH, but cannot be updated individually from the rhel-7-server-rhvh-4-rpms repository. Create a read-only virt-who user on the hypervisor. Create a virt-who configuration for your virtualization platform. Procedure In the Satellite web UI, navigate to Infrastructure > Virt-who configurations . Click the name of the virt-who configuration. Click the Deploy tab. Under Configuration script , click Download the script . Copy the script to the hypervisor: Make the deployment script executable and run it: After the deployment is complete, delete the script: 5.2. Deploying a virt-who configuration on Satellite Server Use this procedure to deploy a vCenter or Hyper-V virt-who configuration on Satellite Server. Global configuration values apply to all virt-who configurations on Satellite Server, and are overwritten each time a new virt-who configuration is deployed. Prerequisites Create a read-only virt-who user on the hypervisor or virtualization manager. If you are deploying a Hyper-V virt-who configuration, enable remote management on the Hyper-V hypervisor. Create a virt-who configuration for your virtualization platform. Procedure In the Satellite web UI, navigate to Infrastructure > Virt-who configurations . Click the name of the virt-who configuration. Under Hammer command , click Copy to clipboard . On Satellite Server, paste the Hammer command into your terminal. 5.3. Deploying a virt-who configuration on a separate Red Hat Enterprise Linux server Use this procedure to deploy a vCenter or Hyper-V virt-who configuration on a dedicated Red Hat Enterprise Linux 7 server. The server can be physical or virtual. Global configuration values apply to all virt-who configurations on this server, and are overwritten each time a new virt-who configuration is deployed. 
Prerequisites Create a read-only virt-who user on the hypervisor or virtualization manager. If you are deploying a Hyper-V virt-who configuration, enable remote management on the Hyper-V hypervisor. Create a virt-who configuration for your virtualization platform. Procedure On the Red Hat Enterprise Linux server, install Satellite Server's CA certificate: Register the Red Hat Enterprise Linux server to Satellite Server: Open a network port for communication between virt-who and Satellite Server: Open a network port for communication between virt-who and each hypervisor or virtualization manager: VMware vCenter: TCP port 443 Microsoft Hyper-V: TCP port 5985 In the Satellite web UI, navigate to Infrastructure > Virt-who configurations . Click the name of the virt-who configuration file. Click the Deploy tab. Under Configuration script , click Download the script . Copy the script to the Red Hat Enterprise Linux server: Make the deployment script executable and run it: After the deployment is complete, delete the script: | [
"scp deploy_virt_who_config_1 .sh root@ hypervisor.example.com :",
"chmod +x deploy_virt_who_config_1 .sh sh deploy_virt_who_config_1 .sh",
"rm deploy_virt_who_config_1",
"rpm -ivh http:// satellite.example.com /pub/katello-ca-consumer-latest.noarch.rpm",
"subscription-manager register --org= organization_label --auto-attach",
"firewall-cmd --add-port=\"443/tcp\" firewall-cmd --add-port=\"443/tcp\" --permanent",
"scp deploy_virt_who_config_1 .sh root@ rhel.example.com :",
"chmod +x deploy_virt_who_config_1 .sh sh deploy_virt_who_config_1 .sh",
"rm deploy_virt_who_config_1"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/configuring_virtual_machine_subscriptions/deploying-a-virt-who-configuration |
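The commands above cover each step in isolation. As a rough illustration, the following sketch strings the section 5.3 steps together for a dedicated Red Hat Enterprise Linux server; the host names, organization label, and script name are placeholders taken from the examples and must be replaced with your own values.

```bash
#!/bin/bash
# Illustrative end-to-end sketch of deploying a virt-who configuration on a
# dedicated RHEL server (section 5.3). All names below are placeholders.
set -euo pipefail

# 1. Trust Satellite Server's CA certificate and register the host
rpm -ivh http://satellite.example.com/pub/katello-ca-consumer-latest.noarch.rpm
subscription-manager register --org=organization_label --auto-attach

# 2. Open the port virt-who needs to reach Satellite Server
#    (also open 443/tcp for vCenter or 5985/tcp for Hyper-V as required)
firewall-cmd --add-port="443/tcp"
firewall-cmd --add-port="443/tcp" --permanent

# 3. Copy the deployment script downloaded from the Satellite web UI
#    (run the scp from the workstation that holds the download):
#    scp deploy_virt_who_config_1.sh root@rhel.example.com:

# 4. Run the script and clean up afterwards
chmod +x deploy_virt_who_config_1.sh
sh deploy_virt_who_config_1.sh
rm deploy_virt_who_config_1.sh
```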
Chapter 7. Displaying system security classification | Chapter 7. Displaying system security classification As an administrator of deployments where the user must be aware of the security classification of the system, you can set up a notification of the security classification. This can be either a permanent banner or a temporary notification, and it can appear on login screen, in the GNOME session, and on the lock screen. 7.1. Enabling system security classification banners You can create a permanent classification banner to state the overall security classification level of the system. This is useful for deployments where the user must always be aware of the security classification level of the system that they are logged into. The permanent classification banner can appear within the running session, the lock screen, and login screen, and customize its background color, its font, and its position within the screen. This procedure creates a red banner with a white text placed on both the top and bottom of the login screen. Procedure Install the gnome-shell-extension-classification-banner package: Create the 99-class-banner file at either of the following locations: To configure a notification at the login screen, create /etc/dconf/db/gdm.d/99-class-banner . To configure a notification in the user session, create /etc/dconf/db/local.d/99-class-banner . Enter the following configuration in the created file: Warning This configuration overrides similar configuration files that also enable an extension, such as Notifying of the system security classification . To enable multiple extensions, specify all of them in the enabled-extensions list. For example: Update the dconf database: Reboot the system. Troubleshooting If the classification banners are not displayed for an existing user, log in as the user and enable the Classification banner extension using the Extensions application. 7.2. Notifying of the system security classification You can set up a notification that contains a predefined message in an overlay banner. This is useful for deployments where the user is required to read the security classification of the system before logging in. Depending on your configuration, the notification can appear at the login screen, after logging in, on the lock screen, or after a longer time with no user activity. You can always dismiss the notification when it appears. Procedure Install the gnome-shell-extension-heads-up-display package: Create the 99-hud-message file at either of the following locations: To configure a notification at the login screen, create /etc/dconf/db/gdm.d/99-hud-message . To configure a notification in the user session, create /etc/dconf/db/local.d/99-hud-message . Enter the following configuration in the created file: Replace the following values with text that describes the security classification of your system: Security classification title A short heading that identifies the security classification. Security classification description A longer message that provides additional details, such as references to various guidelines. Warning This configuration overrides similar configuration files that also enable an extension, such as Enabling system security classification banners . To enable multiple extensions, specify all of them in the enabled-extensions list. For example: Update the dconf database: Reboot the system. 
Troubleshooting If the notifications are not displayed for an existing user, log in as the user and enable the Heads-up display message extension using the Extensions application. | [
"dnf install gnome-shell-extension-classification-banner",
"[org/gnome/shell] enabled-extensions=['[email protected]'] [org/gnome/shell/extensions/classification-banner] background-color=' rgba(200,16,46,0.75) ' message=' TOP SECRET ' top-banner= true bottom-banner= true system-info= true color=' rgb(255,255,255) '",
"enabled-extensions=['[email protected]', '[email protected]']",
"dconf update",
"dnf install gnome-shell-extension-heads-up-display",
"[org/gnome/shell] enabled-extensions=['[email protected]'] [org/gnome/shell/extensions/heads-up-display] message-heading=\" Security classification title \" message-body=\" Security classification description \" The following options control when the notification appears: show-when-locked= true show-when-unlocking= true show-when-unlocked= true",
"enabled-extensions=['[email protected]', '[email protected]']",
"dconf update"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/administering_the_system_using_the_gnome_desktop_environment/assembly_displaying-the-system-security-classification_administering-the-system-using-the-gnome-desktop-environment |
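As an illustration, the login-screen part of the banner procedure above can be scripted as follows. The banner text and colors are the example values shown earlier; the extension UUID is looked up from the installed package rather than hard-coded, because the rendered document obscures it.

```bash
#!/bin/bash
# Sketch of section 7.1 as a single script: install the extension, write the
# login-screen banner configuration, and rebuild the dconf database.
set -euo pipefail

dnf install -y gnome-shell-extension-classification-banner

# The extension UUID is the directory name under /usr/share/gnome-shell/extensions/
UUID=$(basename /usr/share/gnome-shell/extensions/classification-banner@*)

mkdir -p /etc/dconf/db/gdm.d
cat > /etc/dconf/db/gdm.d/99-class-banner <<EOF
[org/gnome/shell]
enabled-extensions=['${UUID}']

[org/gnome/shell/extensions/classification-banner]
background-color='rgba(200,16,46,0.75)'
message='TOP SECRET'
top-banner=true
bottom-banner=true
system-info=true
color='rgb(255,255,255)'
EOF

dconf update
# Reboot (or restart GDM) for the banner to appear at the login screen.
```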
1.4. Logging Into Directory Server Using the Web Console | 1.4. Logging Into Directory Server Using the Web Console The web console is a browser-based graphical user interface (GUI) that enables users to perform administrative tasks. The Directory Server package automatically installs the Directory Server user interface for the web console. To open Directory Server in the web console: Use a browser and connect to the web console running on port 9090 on the Directory Server host. For example: Log in as the root user or as a user with sudo privileges. Select the Red Hat Directory Server entry. | [
"https:// server.example.com : 9090"
] | https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/logging_into_directory_server_using_the_web_console |
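If the console does not load, a few quick checks on the Directory Server host can help. This sketch assumes the web console is provided by the Cockpit service, which is the usual setup behind the port 9090 interface.

```bash
# Make sure the web console is listening and reachable (assumes Cockpit provides it)
systemctl enable --now cockpit.socket

# Open port 9090 if a firewall is active
firewall-cmd --add-service=cockpit --permanent
firewall-cmd --reload

# Verify the console answers on the host name used in the browser (expect 200)
curl -ks https://server.example.com:9090/ -o /dev/null -w '%{http_code}\n'
```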
Chapter 2. Authentication [operator.openshift.io/v1] | Chapter 2. Authentication [operator.openshift.io/v1] Description Authentication provides information to configure an operator to manage authentication. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object status object 2.1.1. .spec Description Type object Property Type Description logLevel string logLevel is an intent based logging for an overall component. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for their operands. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". managementState string managementState indicates whether and how the operator should manage the component observedConfig `` observedConfig holds a sparse config that controller has observed from the cluster state. It exists in spec because it is an input to the level for the operator operatorLogLevel string operatorLogLevel is an intent based logging for the operator itself. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for themselves. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". unsupportedConfigOverrides `` unsupportedConfigOverrides holds a sparse config that will override any previously set options. It only needs to be the fields to override it will end up overlaying in the following order: 1. hardcoded defaults 2. observedConfig 3. unsupportedConfigOverrides 2.1.2. .status Description Type object Property Type Description conditions array conditions is a list of conditions and their status conditions[] object OperatorCondition is just the standard condition fields. generations array generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. generations[] object GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. oauthAPIServer object OAuthAPIServer holds status specific only to oauth-apiserver observedGeneration integer observedGeneration is the last generation change you've dealt with readyReplicas integer readyReplicas indicates how many replicas are ready and at the desired state version string version is the level this availability applies to 2.1.3. .status.conditions Description conditions is a list of conditions and their status Type array 2.1.4. .status.conditions[] Description OperatorCondition is just the standard condition fields. 
Type object Property Type Description lastTransitionTime string message string reason string status string type string 2.1.5. .status.generations Description generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. Type array 2.1.6. .status.generations[] Description GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. Type object Property Type Description group string group is the group of the thing you're tracking hash string hash is an optional field set for resources without generation that are content sensitive like secrets and configmaps lastGeneration integer lastGeneration is the last generation of the workload controller involved name string name is the name of the thing you're tracking namespace string namespace is where the thing you're tracking is resource string resource is the resource type of the thing you're tracking 2.1.7. .status.oauthAPIServer Description OAuthAPIServer holds status specific only to oauth-apiserver Type object Property Type Description latestAvailableRevision integer LatestAvailableRevision is the latest revision used as suffix of revisioned secrets like encryption-config. A new revision causes a new deployment of pods. 2.2. API endpoints The following API endpoints are available: /apis/operator.openshift.io/v1/authentications DELETE : delete collection of Authentication GET : list objects of kind Authentication POST : create an Authentication /apis/operator.openshift.io/v1/authentications/{name} DELETE : delete an Authentication GET : read the specified Authentication PATCH : partially update the specified Authentication PUT : replace the specified Authentication /apis/operator.openshift.io/v1/authentications/{name}/status GET : read status of the specified Authentication PATCH : partially update status of the specified Authentication PUT : replace status of the specified Authentication 2.2.1. /apis/operator.openshift.io/v1/authentications Table 2.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Authentication Table 2.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 2.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Authentication Table 2.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. 
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 2.5. HTTP responses HTTP code Reponse body 200 - OK AuthenticationList schema 401 - Unauthorized Empty HTTP method POST Description create an Authentication Table 2.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.7. Body parameters Parameter Type Description body Authentication schema Table 2.8. HTTP responses HTTP code Reponse body 200 - OK Authentication schema 201 - Created Authentication schema 202 - Accepted Authentication schema 401 - Unauthorized Empty 2.2.2. /apis/operator.openshift.io/v1/authentications/{name} Table 2.9. Global path parameters Parameter Type Description name string name of the Authentication Table 2.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an Authentication Table 2.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. 
orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 2.12. Body parameters Parameter Type Description body DeleteOptions schema Table 2.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Authentication Table 2.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 2.15. HTTP responses HTTP code Reponse body 200 - OK Authentication schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Authentication Table 2.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.17. Body parameters Parameter Type Description body Patch schema Table 2.18. HTTP responses HTTP code Reponse body 200 - OK Authentication schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Authentication Table 2.19. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.20. Body parameters Parameter Type Description body Authentication schema Table 2.21. HTTP responses HTTP code Reponse body 200 - OK Authentication schema 201 - Created Authentication schema 401 - Unauthorized Empty 2.2.3. /apis/operator.openshift.io/v1/authentications/{name}/status Table 2.22. Global path parameters Parameter Type Description name string name of the Authentication Table 2.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified Authentication Table 2.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 2.25. HTTP responses HTTP code Reponse body 200 - OK Authentication schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Authentication Table 2.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . 
fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.27. Body parameters Parameter Type Description body Patch schema Table 2.28. HTTP responses HTTP code Reponse body 200 - OK Authentication schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Authentication Table 2.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.30. Body parameters Parameter Type Description body Authentication schema Table 2.31. HTTP responses HTTP code Reponse body 200 - OK Authentication schema 201 - Created Authentication schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/operator_apis/authentication-operator-openshift-io-v1 |
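In practice, these endpoints are usually exercised through the oc client rather than raw HTTP. The examples below assume the cluster-scoped configuration object is named cluster, which is the convention for this operator.

```bash
# Read the operator's current spec and status
oc get authentication.operator.openshift.io cluster -o yaml

# Raise the operator log level (spec.logLevel accepts Normal, Debug, Trace, TraceAll)
oc patch authentication.operator.openshift.io cluster \
  --type merge -p '{"spec":{"logLevel":"Debug"}}'

# Equivalent raw call against the /status subresource documented above
oc get --raw /apis/operator.openshift.io/v1/authentications/cluster/status
```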
Chapter 21. System Monitoring Tools | Chapter 21. System Monitoring Tools In order to configure the system, system administrators often need to determine the amount of free memory, how much free disk space is available, how the hard drive is partitioned, or what processes are running. 21.1. Viewing System Processes 21.1.1. Using the ps Command The ps command allows you to display information about running processes. It produces a static list, that is, a snapshot of what is running when you execute the command. If you want a constantly updated list of running processes, use the top command or the System Monitor application instead. To list all processes that are currently running on the system, including processes owned by other users, type the following at a shell prompt: For each listed process, the ps ax command displays the process ID ( PID ), the terminal that is associated with it ( TTY ), the current status ( STAT ), the cumulated CPU time ( TIME ), and the name of the executable file ( COMMAND ). For example: To display the owner alongside each process, use the following command: Apart from the information provided by the ps ax command, ps aux displays the effective user name of the process owner ( USER ), the percentage of the CPU ( %CPU ) and memory ( %MEM ) usage, the virtual memory size in kilobytes ( VSZ ), the non-swapped physical memory size in kilobytes ( RSS ), and the time or date the process was started. For example: You can also use the ps command in combination with grep to see if a particular process is running. For example, to determine if Emacs is running, type: For a complete list of available command line options, see the ps (1) manual page. 21.1.2. Using the top Command The top command displays a real-time list of processes that are running on the system. It also displays additional information about the system uptime, current CPU and memory usage, or total number of running processes, and allows you to perform actions such as sorting the list or killing a process. To run the top command, type the following at a shell prompt: For each listed process, the top command displays the process ID ( PID ), the effective user name of the process owner ( USER ), the priority ( PR ), the nice value ( NI ), the amount of virtual memory the process uses ( VIRT ), the amount of non-swapped physical memory the process uses ( RES ), the amount of shared memory the process uses ( SHR ), the process status field ( S ), the percentage of the CPU ( %CPU ) and memory ( %MEM ) usage, the cumulated CPU time ( TIME+ ), and the name of the executable file ( COMMAND ). For example: Table 21.1, "Interactive top commands" contains useful interactive commands that you can use with top . For more information, see the top (1) manual page. Table 21.1. Interactive top commands Command Description Enter , Space Immediately refreshes the display. h Displays a help screen for interactive commands. h , ? Displays a help screen for windows and field groups. k Kills a process. You are prompted for the process ID and the signal to send to it. n Changes the number of displayed processes. You are prompted to enter the number. u Sorts the list by user. M Sorts the list by memory usage. P Sorts the list by CPU usage. q Terminates the utility and returns to the shell prompt. 21.1.3. Using the System Monitor Tool The Processes tab of the System Monitor tool allows you to view, search for, change the priority of, and kill processes from the graphical user interface.
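For quick reference, typical invocations of the ps and top commands described in the two preceding subsections look like this; the Emacs check reuses the grep example from the text.

```bash
# Concrete invocations of the process-listing commands described above
ps ax                   # every process, with PID, TTY, STAT, TIME and COMMAND
ps aux                  # adds USER, %CPU, %MEM, VSZ, RSS and the start time
ps ax | grep emacs      # check whether a particular program (here Emacs) is running
top                     # real-time view; press M, P or u to re-sort, q to quit
```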
To start the System Monitor tool from the command line, type gnome-system-monitor at a shell prompt. The System Monitor tool appears. Alternatively, if using the GNOME desktop, press the Super key to enter the Activities Overview, type System Monitor and then press Enter . The System Monitor tool appears. The Super key appears in a variety of guises, depending on the keyboard and other hardware, but often as either the Windows or Command key, and typically to the left of the Spacebar . Click the Processes tab to view the list of running processes. Figure 21.1. System Monitor - Processes For each listed process, the System Monitor tool displays its name ( Process Name ), current status ( Status ), percentage of the CPU usage ( % CPU ), nice value ( Nice ), process ID ( ID ), memory usage ( Memory ), the channel the process is waiting in ( Waiting Channel ), and additional details about the session ( Session ). To sort the information by a specific column in ascending order, click the name of that column. Click the name of the column again to toggle the sort between ascending and descending order. By default, the System Monitor tool displays a list of processes that are owned by the current user. Selecting various options from the View menu allows you to: view only active processes, view all processes, view your processes, view process dependencies, Additionally, two buttons enable you to: refresh the list of processes, end a process by selecting it from the list and then clicking the End Process button. 21.2. Viewing Memory Usage 21.2.1. Using the free Command The free command allows you to display the amount of free and used memory on the system. To do so, type the following at a shell prompt: The free command provides information about both the physical memory ( Mem ) and swap space ( Swap ). It displays the total amount of memory ( total ), as well as the amount of memory that is in use ( used ), free ( free ), shared ( shared ), sum of buffers and cached ( buff/cache ), and available ( available ). For example: By default, free displays the values in kilobytes. To display the values in megabytes, supply the -m command line option: For instance: For a complete list of available command line options, see the free (1) manual page. 21.2.2. Using the System Monitor Tool The Resources tab of the System Monitor tool allows you to view the amount of free and used memory on the system. To start the System Monitor tool from the command line, type gnome-system-monitor at a shell prompt. The System Monitor tool appears. Alternatively, if using the GNOME desktop, press the Super key to enter the Activities Overview, type System Monitor and then press Enter . The System Monitor tool appears. The Super key appears in a variety of guises, depending on the keyboard and other hardware, but often as either the Windows or Command key, and typically to the left of the Spacebar . Click the Resources tab to view the system's memory usage. Figure 21.2. System Monitor - Resources In the Memory and Swap History section, the System Monitor tool displays a graphical representation of the memory and swap usage history, as well as the total amount of the physical memory ( Memory ) and swap space ( Swap ) and how much of it is in use. 21.3. Viewing CPU Usage 21.3.1. Using the System Monitor Tool The Resources tab of the System Monitor tool allows you to view the current CPU usage on the system. To start the System Monitor tool from the command line, type gnome-system-monitor at a shell prompt. 
The System Monitor tool appears. Alternatively, if using the GNOME desktop, press the Super key to enter the Activities Overview, type System Monitor and then press Enter . The System Monitor tool appears. The Super key appears in a variety of guises, depending on the keyboard and other hardware, but often as either the Windows or Command key, and typically to the left of the Spacebar . Click the Resources tab to view the system's CPU usage. In the CPU History section, the System Monitor tool displays a graphical representation of the CPU usage history and shows the percentage of how much CPU is currently in use. 21.4. Viewing Block Devices and File Systems 21.4.1. Using the lsblk Command The lsblk command allows you to display a list of available block devices. It provides more information and better control over output formatting than the blkid command. It reads information from udev , therefore it is usable by non- root users. To display a list of block devices, type the following at a shell prompt: For each listed block device, the lsblk command displays the device name ( NAME ), major and minor device number ( MAJ:MIN ), if the device is removable ( RM ), its size ( SIZE ), if the device is read-only ( RO ), what type it is ( TYPE ), and where the device is mounted ( MOUNTPOINT ). For example: By default, lsblk lists block devices in a tree-like format. To display the information as an ordinary list, add the -l command line option: For instance: For a complete list of available command line options, see the lsblk (8) manual page. 21.4.2. Using the blkid Command The blkid command allows you to display low-level information about available block devices. It requires root privileges, therefore non- root users should use the lsblk command. To do so, type the following at a shell prompt as root : For each listed block device, the blkid command displays available attributes such as its universally unique identifier ( UUID ), file system type ( TYPE ), or volume label ( LABEL ). For example: By default, the blkid command lists all available block devices. To display information about a particular device only, specify the device name on the command line: For instance, to display information about /dev/vda1 , type as root : You can also use the above command with the -p and -o udev command line options to obtain more detailed information. Note that root privileges are required to run this command: For example: For a complete list of available command line options, see the blkid (8) manual page. 21.4.3. Using the findmnt Command The findmnt command allows you to display a list of currently mounted file systems. To do so, type the following at a shell prompt: For each listed file system, the findmnt command displays the target mount point ( TARGET ), source device ( SOURCE ), file system type ( FSTYPE ), and relevant mount options ( OPTIONS ). For example: By default, findmnt lists file systems in a tree-like format. To display the information as an ordinary list, add the -l command line option: For instance: You can also choose to list only file systems of a particular type. To do so, add the -t command line option followed by a file system type: For example, to list all xfs file systems, type: For a complete list of available command line options, see the findmnt (8) manual page. 21.4.4. Using the df Command The df command allows you to display a detailed report on the system's disk space usage.
To do so, type the following at a shell prompt: For each listed file system, the df command displays its name ( Filesystem ), size ( 1K-blocks or Size ), how much space is used ( Used ), how much space is still available ( Available ), the percentage of space usage ( Use% ), and where the file system is mounted ( Mounted on ). For example: By default, the df command shows the partition size in 1 kilobyte blocks and the amount of used and available disk space in kilobytes. To view the information in megabytes and gigabytes, supply the -h command line option, which causes df to display the values in a human-readable format: For instance: For a complete list of available command line options, see the df (1) manual page. 21.4.5. Using the du Command The du command allows you to display the amount of space that is being used by files in a directory. To display the disk usage for each of the subdirectories in the current working directory, run the command with no additional command line options: For example: By default, the du command displays the disk usage in kilobytes. To view the information in megabytes and gigabytes, supply the -h command line option, which causes the utility to display the values in a human-readable format: For instance: At the end of the list, the du command always shows the grand total for the current directory. To display only this information, supply the -s command line option: For example: For a complete list of available command line options, see the du (1) manual page. 21.4.6. Using the System Monitor Tool The File Systems tab of the System Monitor tool allows you to view file systems and disk space usage in the graphical user interface. To start the System Monitor tool from the command line, type gnome-system-monitor at a shell prompt. The System Monitor tool appears. Alternatively, if using the GNOME desktop, press the Super key to enter the Activities Overview, type System Monitor and then press Enter . The System Monitor tool appears. The Super key appears in a variety of guises, depending on the keyboard and other hardware, but often as either the Windows or Command key, and typically to the left of the Spacebar . Click the File Systems tab to view a list of file systems. Figure 21.3. System Monitor - File Systems For each listed file system, the System Monitor tool displays the source device ( Device ), target mount point ( Directory ), and file system type ( Type ), as well as its size ( Total ), and how much space is available ( Available ) and used ( Used ). 21.5. Viewing Hardware Information 21.5.1. Using the lspci Command The lspci command allows you to display information about PCI buses and devices that are attached to them. To list all PCI devices that are in the system, type the following at a shell prompt: This displays a simple list of devices, for example: You can also use the -v command line option to display more verbose output, or -vv for very verbose output: For instance, to determine the manufacturer, model, and memory size of a system's video card, type: For a complete list of available command line options, see the lspci (8) manual page. 21.5.2. Using the lsusb Command The lsusb command allows you to display information about USB buses and devices that are attached to them.
To list all USB devices that are in the system, type the following at a shell prompt: This displays a simple list of devices, for example: You can also use the -v command line option to display more verbose output: For instance: For a complete list of available command line options, see the lsusb (8) manual page. 21.5.3. Using the lscpu Command The lscpu command allows you to list information about CPUs that are present in the system, including the number of CPUs, their architecture, vendor, family, model, CPU caches, etc. To do so, type the following at a shell prompt: For example: For a complete list of available command line options, see the lscpu (1) manual page. 21.6. Checking for Hardware Errors Red Hat Enterprise Linux 7 introduced the new hardware event report mechanism ( HERM .) This mechanism gathers system-reported memory errors as well as errors reported by the error detection and correction ( EDAC ) mechanism for dual in-line memory modules ( DIMM s) and reports them to user space. The user-space daemon rasdaemon , catches and handles all reliability, availability, and serviceability ( RAS ) error events that come from the kernel tracing mechanism, and logs them. The functions previously provided by edac-utils are now replaced by rasdaemon . To install rasdaemon , enter the following command as root : Start the service as follows: To make the service run at system start, enter the following command: The ras-mc-ctl utility provides a means to work with EDAC drivers. Enter the following command to see a list of command options: To view a summary of memory controller events, run as root : To view a list of errors reported by the memory controller, run as root : These commands are also described in the ras-mc-ctl(8) manual page. 21.7. Monitoring Performance with Net-SNMP Red Hat Enterprise Linux 7 includes the Net-SNMP software suite, which includes a flexible and extensible simple network management protocol ( SNMP ) agent. This agent and its associated utilities can be used to provide performance data from a large number of systems to a variety of tools which support polling over the SNMP protocol. This section provides information on configuring the Net-SNMP agent to securely provide performance data over the network, retrieving the data using the SNMP protocol, and extending the SNMP agent to provide custom performance metrics. 21.7.1. Installing Net-SNMP The Net-SNMP software suite is available as a set of RPM packages in the Red Hat Enterprise Linux software distribution. Table 21.2, "Available Net-SNMP packages" summarizes each of the packages and their contents. Table 21.2. Available Net-SNMP packages Package Provides net-snmp The SNMP Agent Daemon and documentation. This package is required for exporting performance data. net-snmp-libs The netsnmp library and the bundled management information bases ( MIB s). This package is required for exporting performance data. net-snmp-utils SNMP clients such as snmpget and snmpwalk . This package is required in order to query a system's performance data over SNMP. net-snmp-perl The mib2c utility and the NetSNMP Perl module. Note that this package is provided by the Optional channel. See Section 9.5.7, "Adding the Optional and Supplementary Repositories" for more information on Red Hat additional channels. net-snmp-python An SNMP client library for Python. Note that this package is provided by the Optional channel. 
See Section 9.5.7, "Adding the Optional and Supplementary Repositories" for more information on Red Hat additional channels. To install any of these packages, use the yum command in the following form: For example, to install the SNMP Agent Daemon and SNMP clients used in the rest of this section, type the following at a shell prompt as root : For more information on how to install new packages in Red Hat Enterprise Linux, see Section 9.2.4, "Installing Packages" . 21.7.2. Running the Net-SNMP Daemon The net-snmp package contains snmpd , the SNMP Agent Daemon. This section provides information on how to start, stop, and restart the snmpd service. For more information on managing system services in Red Hat Enterprise Linux 7, see Chapter 10, Managing Services with systemd . 21.7.2.1. Starting the Service To run the snmpd service in the current session, type the following at a shell prompt as root : To configure the service to be automatically started at boot time, use the following command: 21.7.2.2. Stopping the Service To stop the running snmpd service, type the following at a shell prompt as root : To disable starting the service at boot time, use the following command: 21.7.2.3. Restarting the Service To restart the running snmpd service, type the following at a shell prompt: This command stops the service and starts it again in quick succession. To only reload the configuration without stopping the service, run the following command instead: This causes the running snmpd service to reload its configuration. 21.7.3. Configuring Net-SNMP To change the Net-SNMP Agent Daemon configuration, edit the /etc/snmp/snmpd.conf configuration file. The default snmpd.conf file included with Red Hat Enterprise Linux 7 is heavily commented and serves as a good starting point for agent configuration. This section focuses on two common tasks: setting system information and configuring authentication. For more information about available configuration directives, see the snmpd.conf (5) manual page. Additionally, there is a utility in the net-snmp package named snmpconf which can be used to interactively generate a valid agent configuration. Note that the net-snmp-utils package must be installed in order to use the snmpwalk utility described in this section. Note For any changes to the configuration file to take effect, force the snmpd service to re-read the configuration by running the following command as root : 21.7.3.1. Setting System Information Net-SNMP provides some rudimentary system information via the system tree. For example, the following snmpwalk command shows the system tree with a default agent configuration. By default, the sysName object is set to the host name. The sysLocation and sysContact objects can be configured in the /etc/snmp/snmpd.conf file by changing the value of the syslocation and syscontact directives, for example: After making changes to the configuration file, reload the configuration and test it by running the snmpwalk command again: 21.7.3.2. Configuring Authentication The Net-SNMP Agent Daemon supports all three versions of the SNMP protocol. The first two versions (1 and 2c) provide for simple authentication using a community string . This string is a shared secret between the agent and any client utilities. The string is passed in clear text over the network however and is not considered secure. Version 3 of the SNMP protocol supports user authentication and message encryption using a variety of protocols. 
The Net-SNMP agent also supports tunneling over SSH, and TLS authentication with X.509 certificates. Configuring SNMP Version 2c Community To configure an SNMP version 2c community , use either the rocommunity or rwcommunity directive in the /etc/snmp/snmpd.conf configuration file. The format of the directives is as follows: ... where community is the community string to use, source is an IP address or subnet, and OID is the SNMP tree to provide access to. For example, the following directive provides read-only access to the system tree to a client using the community string "redhat" on the local machine: To test the configuration, use the snmpwalk command with the -v and -c options. Configuring SNMP Version 3 User To configure an SNMP version 3 user , use the net-snmp-create-v3-user command. This command adds entries to the /var/lib/net-snmp/snmpd.conf and /etc/snmp/snmpd.conf files which create the user and grant access to the user. Note that the net-snmp-create-v3-user command may only be run when the agent is not running. The following example creates the "admin" user with the password "redhatsnmp": The rwuser directive (or rouser when the -ro command line option is supplied) that net-snmp-create-v3-user adds to /etc/snmp/snmpd.conf has a similar format to the rwcommunity and rocommunity directives: ... where user is a user name and OID is the SNMP tree to provide access to. By default, the Net-SNMP Agent Daemon allows only authenticated requests (the auth option). The noauth option allows you to permit unauthenticated requests, and the priv option enforces the use of encryption. The authpriv option specifies that requests must be authenticated and replies should be encrypted. For example, the following line grants the user "admin" read-write access to the entire tree: To test the configuration, create a .snmp/ directory in your user's home directory and a configuration file named snmp.conf in that directory ( ~/.snmp/snmp.conf ) with the following lines: The snmpwalk command will now use these authentication settings when querying the agent: 21.7.4. Retrieving Performance Data over SNMP The Net-SNMP Agent in Red Hat Enterprise Linux provides a wide variety of performance information over the SNMP protocol. In addition, the agent can be queried for a listing of the installed RPM packages on the system, a listing of currently running processes on the system, or the network configuration of the system. This section provides an overview of OIDs related to performance tuning available over SNMP. It assumes that the net-snmp-utils package is installed and that the user is granted access to the SNMP tree as described in Section 21.7.3.2, "Configuring Authentication" . 21.7.4.1. Hardware Configuration The Host Resources MIB included with Net-SNMP presents information about the current hardware and software configuration of a host to a client utility. Table 21.3, "Available OIDs" summarizes the different OIDs available under that MIB. Table 21.3. Available OIDs OID Description HOST-RESOURCES-MIB::hrSystem Contains general system information such as uptime, number of users, and number of running processes. HOST-RESOURCES-MIB::hrStorage Contains data on memory and file system usage. HOST-RESOURCES-MIB::hrDevices Contains a listing of all processors, network devices, and file systems. HOST-RESOURCES-MIB::hrSWRun Contains a listing of all running processes. HOST-RESOURCES-MIB::hrSWRunPerf Contains memory and CPU statistics on the process table from HOST-RESOURCES-MIB::hrSWRun. 
HOST-RESOURCES-MIB::hrSWInstalled Contains a listing of the RPM database. There are also a number of SNMP tables available in the Host Resources MIB which can be used to retrieve a summary of the available information. The following example displays HOST-RESOURCES-MIB::hrFSTable : For more information about HOST-RESOURCES-MIB , see the /usr/share/snmp/mibs/HOST-RESOURCES-MIB.txt file. 21.7.4.2. CPU and Memory Information Most system performance data is available in the UCD SNMP MIB . The systemStats OID provides a number of counters around processor usage: In particular, the ssCpuRawUser , ssCpuRawSystem , ssCpuRawWait , and ssCpuRawIdle OIDs provide counters which are helpful when determining whether a system is spending most of its processor time in kernel space, user space, or I/O. ssRawSwapIn and ssRawSwapOut can be helpful when determining whether a system is suffering from memory exhaustion. More memory information is available under the UCD-SNMP-MIB::memory OID, which provides similar data to the free command: Load averages are also available in the UCD SNMP MIB . The SNMP table UCD-SNMP-MIB::laTable has a listing of the 1, 5, and 15 minute load averages: 21.7.4.3. File System and Disk Information The Host Resources MIB provides information on file system size and usage. Each file system (and also each memory pool) has an entry in the HOST-RESOURCES-MIB::hrStorageTable table: The OIDs under HOST-RESOURCES-MIB::hrStorageSize and HOST-RESOURCES-MIB::hrStorageUsed can be used to calculate the remaining capacity of each mounted file system. I/O data is available both in UCD-SNMP-MIB::systemStats ( ssIORawSent.0 and ssIORawRecieved.0 ) and in UCD-DISKIO-MIB::diskIOTable . The latter provides much more granular data. Under this table are OIDs for diskIONReadX and diskIONWrittenX , which provide counters for the number of bytes read from and written to the block device in question since the system boot: 21.7.4.4. Network Information The Interfaces MIB provides information on network devices. IF-MIB::ifTable provides an SNMP table with an entry for each interface on the system, the configuration of the interface, and various packet counters for the interface. The following example shows the first few columns of ifTable on a system with two physical network interfaces: Network traffic is available under the OIDs IF-MIB::ifOutOctets and IF-MIB::ifInOctets . The following SNMP queries will retrieve network traffic for each of the interfaces on this system: 21.7.5. Extending Net-SNMP The Net-SNMP Agent can be extended to provide application metrics in addition to raw system metrics. This allows for capacity planning as well as performance issue troubleshooting. For example, it may be helpful to know that an email system had a 5-minute load average of 15 while being tested, but it is more helpful to know that the email system has a load average of 15 while processing 80,000 messages a second. When application metrics are available via the same interface as the system metrics, this also allows for the visualization of the impact of different load scenarios on system performance (for example, each additional 10,000 messages increases the load average linearly until 100,000). A number of the applications included in Red Hat Enterprise Linux extend the Net-SNMP Agent to provide application metrics over SNMP. There are several ways to extend the agent for custom applications as well. This section describes extending the agent with shell scripts and the Perl plug-ins from the Optional channel. 
It assumes that the net-snmp-utils and net-snmp-perl packages are installed, and that the user is granted access to the SNMP tree as described in Section 21.7.3.2, "Configuring Authentication" . 21.7.5.1. Extending Net-SNMP with Shell Scripts The Net-SNMP Agent provides an extension MIB ( NET-SNMP-EXTEND-MIB ) that can be used to query arbitrary shell scripts. To specify the shell script to run, use the extend directive in the /etc/snmp/snmpd.conf file. Once defined, the Agent will provide the exit code and any output of the command over SNMP. The example below demonstrates this mechanism with a script which determines the number of httpd processes in the process table. Note The Net-SNMP Agent also provides a built-in mechanism for checking the process table via the proc directive. See the snmpd.conf (5) manual page for more information. The exit code of the following shell script is the number of httpd processes running on the system at a given point in time: To make this script available over SNMP, copy the script to a location on the system path, set the executable bit, and add an extend directive to the /etc/snmp/snmpd.conf file. The format of the extend directive is the following: ... where name is an identifying string for the extension, prog is the program to run, and args are the arguments to give the program. For instance, if the above shell script is copied to /usr/local/bin/check_apache.sh , the following directive will add the script to the SNMP tree: The script can then be queried at NET-SNMP-EXTEND-MIB::nsExtendObjects : Note that the exit code ("8" in this example) is provided as an INTEGER type and any output is provided as a STRING type. To expose multiple metrics as integers, supply different arguments to the script using the extend directive. For example, the following shell script can be used to determine the number of processes matching an arbitrary string, and will also output a text string giving the number of processes: The following /etc/snmp/snmpd.conf directives will give both the number of httpd PIDs as well as the number of snmpd PIDs when the above script is copied to /usr/local/bin/check_proc.sh : The following example shows the output of an snmpwalk of the nsExtendObjects OID: Warning Integer exit codes are limited to a range of 0-255. For values that are likely to exceed 256, either use the standard output of the script (which will be typed as a string) or a different method of extending the agent. This last example shows a query for the free memory of the system and the number of httpd processes. This query could be used during a performance test to determine the impact of the number of processes on memory pressure: 21.7.5.2. Extending Net-SNMP with Perl Executing shell scripts using the extend directive is a fairly limited method for exposing custom application metrics over SNMP. The Net-SNMP Agent also provides an embedded Perl interface for exposing custom objects. The net-snmp-perl package in the Optional channel provides the NetSNMP::agent Perl module that is used to write embedded Perl plug-ins on Red Hat Enterprise Linux. Note Before subscribing to the Optional and Supplementary channels see the Scope of Coverage Details . If you decide to install packages from these channels, follow the steps documented in the article called How to access Optional and Supplementary channels, and -devel packages using Red Hat Subscription Manager (RHSM)? on the Red Hat Customer Portal. 
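Once the net-snmp-perl package is installed, a quick sanity check that the embedded Perl module can be loaded is to run it directly from the command line; this is only a minimal check and does not require a running agent: perl -MNetSNMP::agent -e 'print "NetSNMP::agent loaded\n"' If the module cannot be found, Perl reports a "Can't locate NetSNMP/agent.pm" error, which usually means that the package is missing or was installed for a different Perl installation.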
The NetSNMP::agent Perl module provides an agent object which is used to handle requests for a part of the agent's OID tree. The agent object's constructor has options for running the agent as a sub-agent of snmpd or a standalone agent. No arguments are necessary to create an embedded agent: The agent object has a register method which is used to register a callback function with a particular OID. The register function takes a name, OID, and pointer to the callback function. The following example will register a callback function named hello_handler with the SNMP Agent which will handle requests under the OID .1.3.6.1.4.1.8072.9999.9999 : Note The OID .1.3.6.1.4.1.8072.9999.9999 ( NET-SNMP-MIB::netSnmpPlaypen ) is typically used for demonstration purposes only. If your organization does not already have a root OID, you can obtain one by contacting an ISO Name Registration Authority (ANSI in the United States). The handler function will be called with four parameters, HANDLER , REGISTRATION_INFO , REQUEST_INFO , and REQUESTS . The REQUESTS parameter contains a list of requests in the current call and should be iterated over and populated with data. The request objects in the list have get and set methods which allow for manipulating the OID and value of the request. For example, the following call will set the value of a request object to the string "hello world": The handler function should respond to two types of SNMP requests: the GET request and the GETNEXT request. The type of request is determined by calling the getMode method on the request_info object passed as the third parameter to the handler function. If the request is a GET request, the caller will expect the handler to set the value of the request object, depending on the OID of the request. If the request is a GETNEXT request, the caller will also expect the handler to set the OID of the request to the next available OID in the tree. This is illustrated in the following code example: When getMode returns MODE_GET , the handler analyzes the value of the getOID call on the request object. The value of the request is set to either string_value if the OID ends in ".1.0", or set to integer_value if the OID ends in ".1.1". If getMode returns MODE_GETNEXT , the handler determines whether the OID of the request is ".1.0", and then sets the OID and value for ".1.1". If the request is higher on the tree than ".1.0", the OID and value for ".1.0" is set. This in effect returns the "next" value in the tree so that a program like snmpwalk can traverse the tree without prior knowledge of the structure. The type of the variable is set using constants from NetSNMP::ASN . See the perldoc for NetSNMP::ASN for a full list of available constants. The entire code listing for this example Perl plug-in is as follows: To test the plug-in, copy the above program to /usr/share/snmp/hello_world.pl and add the following line to the /etc/snmp/snmpd.conf configuration file: The SNMP Agent Daemon will need to be restarted to load the new Perl plug-in. Once it has been restarted, an snmpwalk should return the new data: The snmpget command should also be used to exercise the other mode of the handler: 21.8. Additional Resources To learn more about gathering system information, see the following resources. 21.8.1. Installed Documentation lscpu (1) - The manual page for the lscpu command. lsusb (8) - The manual page for the lsusb command. findmnt (8) - The manual page for the findmnt command. blkid (8) - The manual page for the blkid command.
lsblk (8) - The manual page for the lsblk command. ps (1) - The manual page for the ps command. top (1) - The manual page for the top command. free (1) - The manual page for the free command. df (1) - The manual page for the df command. du (1) - The manual page for the du command. lspci (8) - The manual page for the lspci command. snmpd (8) - The manual page for the snmpd service. snmpd.conf (5) - The manual page for the /etc/snmp/snmpd.conf file containing full documentation of available configuration directives. | [
"ps ax",
"~]USD ps ax PID TTY STAT TIME COMMAND 1 ? Ss 0:01 /usr/lib/systemd/systemd --switched-root --system --deserialize 23 2 ? S 0:00 [kthreadd] 3 ? S 0:00 [ksoftirqd/0] 5 ? S> 0:00 [kworker/0:0H] [output truncated]",
"ps aux",
"~]USD ps aux USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND root 1 0.3 0.3 134776 6840 ? Ss 09:28 0:01 /usr/lib/systemd/systemd --switched-root --system --d root 2 0.0 0.0 0 0 ? S 09:28 0:00 [kthreadd] root 3 0.0 0.0 0 0 ? S 09:28 0:00 [ksoftirqd/0] root 5 0.0 0.0 0 0 ? S> 09:28 0:00 [kworker/0:0H] [output truncated]",
"~]USD ps ax | grep emacs 12056 pts/3 S+ 0:00 emacs 12060 pts/2 S+ 0:00 grep --color=auto emacs",
"top",
"~]USD top top - 16:42:12 up 13 min, 2 users, load average: 0.67, 0.31, 0.19 Tasks: 165 total, 2 running, 163 sleeping, 0 stopped, 0 zombie %Cpu(s): 37.5 us, 3.0 sy, 0.0 ni, 59.5 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st KiB Mem : 1016800 total, 77368 free, 728936 used, 210496 buff/cache KiB Swap: 839676 total, 776796 free, 62880 used. 122628 avail Mem PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 3168 sjw 20 0 1454628 143240 15016 S 20.3 14.1 0:22.53 gnome-shell 4006 sjw 20 0 1367832 298876 27856 S 13.0 29.4 0:15.58 firefox 1683 root 20 0 242204 50464 4268 S 6.0 5.0 0:07.76 Xorg 4125 sjw 20 0 555148 19820 12644 S 1.3 1.9 0:00.48 gnome-terminal- 10 root 20 0 0 0 0 S 0.3 0.0 0:00.39 rcu_sched 3091 sjw 20 0 37000 1468 904 S 0.3 0.1 0:00.31 dbus-daemon 3096 sjw 20 0 129688 2164 1492 S 0.3 0.2 0:00.14 at-spi2-registr 3925 root 20 0 0 0 0 S 0.3 0.0 0:00.05 kworker/0:0 1 root 20 0 126568 3884 1052 S 0.0 0.4 0:01.61 systemd 2 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kthreadd 3 root 20 0 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/0 6 root 20 0 0 0 0 S 0.0 0.0 0:00.07 kworker/u2:0 [output truncated]",
"free",
"~]USD free total used free shared buff/cache available Mem: 1016800 727300 84684 3500 204816 124068 Swap: 839676 66920 772756",
"free -m",
"~]USD free -m total used free shared buff/cache available Mem: 992 711 81 3 200 120 Swap: 819 65 754",
"lsblk",
"~]USD lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sr0 11:0 1 1024M 0 rom vda 252:0 0 20G 0 rom |-vda1 252:1 0 500M 0 part /boot `-vda2 252:2 0 19.5G 0 part |-vg_kvm-lv_root (dm-0) 253:0 0 18G 0 lvm / `-vg_kvm-lv_swap (dm-1) 253:1 0 1.5G 0 lvm [SWAP]",
"lsblk -l",
"~]USD lsblk -l NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sr0 11:0 1 1024M 0 rom vda 252:0 0 20G 0 rom vda1 252:1 0 500M 0 part /boot vda2 252:2 0 19.5G 0 part vg_kvm-lv_root (dm-0) 253:0 0 18G 0 lvm / vg_kvm-lv_swap (dm-1) 253:1 0 1.5G 0 lvm [SWAP]",
"blkid",
"~]# blkid /dev/vda1: UUID=\"7fa9c421-0054-4555-b0ca-b470a97a3d84\" TYPE=\"ext4\" /dev/vda2: UUID=\"7IvYzk-TnnK-oPjf-ipdD-cofz-DXaJ-gPdgBW\" TYPE=\"LVM2_member\" /dev/mapper/vg_kvm-lv_root: UUID=\"a07b967c-71a0-4925-ab02-aebcad2ae824\" TYPE=\"ext4\" /dev/mapper/vg_kvm-lv_swap: UUID=\"d7ef54ca-9c41-4de4-ac1b-4193b0c1ddb6\" TYPE=\"swap\"",
"blkid device_name",
"~]# blkid /dev/vda1 /dev/vda1: UUID=\"7fa9c421-0054-4555-b0ca-b470a97a3d84\" TYPE=\"ext4\"",
"blkid -po udev device_name",
"~]# blkid -po udev /dev/vda1 ID_FS_UUID=7fa9c421-0054-4555-b0ca-b470a97a3d84 ID_FS_UUID_ENC=7fa9c421-0054-4555-b0ca-b470a97a3d84 ID_FS_VERSION=1.0 ID_FS_TYPE=ext4 ID_FS_USAGE=filesystem",
"findmnt",
"~]USD findmnt TARGET SOURCE FSTYPE OPTIONS / /dev/mapper/rhel-root xfs rw,relatime,seclabel,attr2,inode64,noquota ├─/proc proc proc rw,nosuid,nodev,noexec,relatime │ ├─/proc/sys/fs/binfmt_misc systemd-1 autofs rw,relatime,fd=32,pgrp=1,timeout=300,minproto=5,maxproto=5,direct │ └─/proc/fs/nfsd sunrpc nfsd rw,relatime ├─/sys sysfs sysfs rw,nosuid,nodev,noexec,relatime,seclabel │ ├─/sys/kernel/security securityfs securityfs rw,nosuid,nodev,noexec,relatime │ ├─/sys/fs/cgroup tmpfs tmpfs rw,nosuid,nodev,noexec,seclabel,mode=755 [output truncated]",
"findmnt -l",
"~]USD findmnt -l TARGET SOURCE FSTYPE OPTIONS /proc proc proc rw,nosuid,nodev,noexec,relatime /sys sysfs sysfs rw,nosuid,nodev,noexec,relatime,seclabel /dev devtmpfs devtmpfs rw,nosuid,seclabel,size=933372k,nr_inodes=233343,mode=755 /sys/kernel/security securityfs securityfs rw,nosuid,nodev,noexec,relatime /dev/shm tmpfs tmpfs rw,nosuid,nodev,seclabel /dev/pts devpts devpts rw,nosuid,noexec,relatime,seclabel,gid=5,mode=620,ptmxmode=000 /run tmpfs tmpfs rw,nosuid,nodev,seclabel,mode=755 /sys/fs/cgroup tmpfs tmpfs rw,nosuid,nodev,noexec,seclabel,mode=755 [output truncated]",
"findmnt -t type",
"~]USD findmnt -t xfs TARGET SOURCE FSTYPE OPTIONS / /dev/mapper/rhel-root xfs rw,relatime,seclabel,attr2,inode64,noquota └─/boot /dev/vda1 xfs rw,relatime,seclabel,attr2,inode64,noquota",
"df",
"~]USD df Filesystem 1K-blocks Used Available Use% Mounted on /dev/mapper/vg_kvm-lv_root 18618236 4357360 13315112 25% / tmpfs 380376 288 380088 1% /dev/shm /dev/vda1 495844 77029 393215 17% /boot",
"df -h",
"~]USD df -h Filesystem Size Used Avail Use% Mounted on /dev/mapper/vg_kvm-lv_root 18G 4.2G 13G 25% / tmpfs 372M 288K 372M 1% /dev/shm /dev/vda1 485M 76M 384M 17% /boot",
"du",
"~]USD du 14972 ./Downloads 4 ./.mozilla/extensions 4 ./.mozilla/plugins 12 ./.mozilla 15004 .",
"du -h",
"~]USD du -h 15M ./Downloads 4.0K ./.mozilla/extensions 4.0K ./.mozilla/plugins 12K ./.mozilla 15M .",
"du -sh",
"~]USD du -sh 15M .",
"lspci",
"~]USD lspci 00:00.0 Host bridge: Intel Corporation 82X38/X48 Express DRAM Controller 00:01.0 PCI bridge: Intel Corporation 82X38/X48 Express Host-Primary PCI Express Bridge 00:1a.0 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #4 (rev 02) 00:1a.1 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #5 (rev 02) 00:1a.2 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #6 (rev 02) [output truncated]",
"lspci -v | -vv",
"~]USD lspci -v [output truncated] 01:00.0 VGA compatible controller: nVidia Corporation G84 [Quadro FX 370] (rev a1) (prog-if 00 [VGA controller]) Subsystem: nVidia Corporation Device 0491 Physical Slot: 2 Flags: bus master, fast devsel, latency 0, IRQ 16 Memory at f2000000 (32-bit, non-prefetchable) [size=16M] Memory at e0000000 (64-bit, prefetchable) [size=256M] Memory at f0000000 (64-bit, non-prefetchable) [size=32M] I/O ports at 1100 [size=128] Expansion ROM at <unassigned> [disabled] Capabilities: <access denied> Kernel driver in use: nouveau Kernel modules: nouveau, nvidiafb [output truncated]",
"lsusb",
"~]USD lsusb Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub [output truncated] Bus 001 Device 002: ID 0bda:0151 Realtek Semiconductor Corp. Mass Storage Device (Multicard Reader) Bus 008 Device 002: ID 03f0:2c24 Hewlett-Packard Logitech M-UAL-96 Mouse Bus 008 Device 003: ID 04b3:3025 IBM Corp.",
"lsusb -v",
"~]USD lsusb -v [output truncated] Bus 008 Device 002: ID 03f0:2c24 Hewlett-Packard Logitech M-UAL-96 Mouse Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 2.00 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 8 idVendor 0x03f0 Hewlett-Packard idProduct 0x2c24 Logitech M-UAL-96 Mouse bcdDevice 31.00 iManufacturer 1 iProduct 2 iSerial 0 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 [output truncated]",
"lscpu",
"~]USD lscpu Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 4 On-line CPU(s) list: 0-3 Thread(s) per core: 1 Core(s) per socket: 4 Socket(s): 1 NUMA node(s): 1 Vendor ID: GenuineIntel CPU family: 6 Model: 23 Stepping: 7 CPU MHz: 1998.000 BogoMIPS: 4999.98 Virtualization: VT-x L1d cache: 32K L1i cache: 32K L2 cache: 3072K NUMA node0 CPU(s): 0-3",
"~]# yum install rasdaemon",
"~]# systemctl start rasdaemon",
"~]# systemctl enable rasdaemon",
"~]USD ras-mc-ctl --help Usage: ras-mc-ctl [OPTIONS...] --quiet Quiet operation. --mainboard Print mainboard vendor and model for this hardware. --status Print status of EDAC drivers. output truncated",
"~]# ras-mc-ctl --summary Memory controller events summary: Corrected on DIMM Label(s): 'CPU_SrcID#0_Ha#0_Chan#0_DIMM#0' location: 0:0:0:-1 errors: 1 No PCIe AER errors. No Extlog errors. MCE records summary: 1 MEMORY CONTROLLER RD_CHANNEL0_ERR Transaction: Memory read error errors 2 No Error errors",
"~]# ras-mc-ctl --errors Memory controller events: 1 3172-02-17 00:47:01 -0500 1 Corrected error(s): memory read error at CPU_SrcID#0_Ha#0_Chan#0_DIMM#0 location: 0:0:0:-1, addr 65928, grain 7, syndrome 0 area:DRAM err_code:0001:0090 socket:0 ha:0 channel_mask:1 rank:0 No PCIe AER errors. No Extlog errors. MCE events: 1 3171-11-09 06:20:21 -0500 error: MEMORY CONTROLLER RD_CHANNEL0_ERR Transaction: Memory read error, mcg mcgstatus=0, mci Corrected_error, n_errors=1, mcgcap=0x01000c16, status=0x8c00004000010090, addr=0x1018893000, misc=0x15020a086, walltime=0x57e96780, cpuid=0x00050663, bank=0x00000007 2 3205-06-22 00:13:41 -0400 error: No Error, mcg mcgstatus=0, mci Corrected_error Error_enabled, mcgcap=0x01000c16, status=0x9400000000000000, addr=0x0000abcd, walltime=0x57e967ea, cpuid=0x00050663, bank=0x00000001 3 3205-06-22 00:13:41 -0400 error: No Error, mcg mcgstatus=0, mci Corrected_error Error_enabled, mcgcap=0x01000c16, status=0x9400000000000000, addr=0x00001234, walltime=0x57e967ea, cpu=0x00000001, cpuid=0x00050663, apicid=0x00000002, bank=0x00000002",
"install package …",
"~]# yum install net-snmp net-snmp-libs net-snmp-utils",
"systemctl start snmpd.service",
"systemctl enable snmpd.service",
"systemctl stop snmpd.service",
"systemctl disable snmpd.service",
"systemctl restart snmpd.service",
"systemctl reload snmpd.service",
"systemctl reload snmpd.service",
"~]# snmpwalk -v2c -c public localhost system SNMPv2-MIB::sysDescr.0 = STRING: Linux localhost.localdomain 3.10.0-123.el7.x86_64 #1 SMP Mon May 5 11:16:57 EDT 2014 x86_64 SNMPv2-MIB::sysObjectID.0 = OID: NET-SNMP-MIB::netSnmpAgentOIDs.10 DISMAN-EVENT-MIB::sysUpTimeInstance = Timeticks: (464) 0:00:04.64 SNMPv2-MIB::sysContact.0 = STRING: Root <root@localhost> (configure /etc/snmp/snmp.local.conf) [output truncated]",
"syslocation Datacenter, Row 4, Rack 3 syscontact UNIX Admin <[email protected]>",
"~]# systemctl reload snmp.service ~]# snmpwalk -v2c -c public localhost system SNMPv2-MIB::sysDescr.0 = STRING: Linux localhost.localdomain 3.10.0-123.el7.x86_64 #1 SMP Mon May 5 11:16:57 EDT 2014 x86_64 SNMPv2-MIB::sysObjectID.0 = OID: NET-SNMP-MIB::netSnmpAgentOIDs.10 DISMAN-EVENT-MIB::sysUpTimeInstance = Timeticks: (35424) 0:05:54.24 SNMPv2-MIB::sysContact.0 = STRING: UNIX Admin < [email protected] > SNMPv2-MIB::sysName.0 = STRING: localhost.localdomain SNMPv2-MIB::sysLocation.0 = STRING: Datacenter, Row 4, Rack 3 [output truncated]",
"directive community source OID",
"rocommunity redhat 127.0.0.1 .1.3.6.1.2.1.1",
"~]# snmpwalk -v2c -c redhat localhost system SNMPv2-MIB::sysDescr.0 = STRING: Linux localhost.localdomain 3.10.0-123.el7.x86_64 #1 SMP Mon May 5 11:16:57 EDT 2014 x86_64 SNMPv2-MIB::sysObjectID.0 = OID: NET-SNMP-MIB::netSnmpAgentOIDs.10 DISMAN-EVENT-MIB::sysUpTimeInstance = Timeticks: (101376) 0:16:53.76 SNMPv2-MIB::sysContact.0 = STRING: UNIX Admin <[email protected]> SNMPv2-MIB::sysName.0 = STRING: localhost.localdomain SNMPv2-MIB::sysLocation.0 = STRING: Datacenter, Row 4, Rack 3 [output truncated]",
"~]# systemctl stop snmpd.service ~]# net-snmp-create-v3-user Enter a SNMPv3 user name to create: admin Enter authentication pass-phrase: redhatsnmp Enter encryption pass-phrase: [press return to reuse the authentication pass-phrase] adding the following line to /var/lib/net-snmp/snmpd.conf: createUser admin MD5 \"redhatsnmp\" DES adding the following line to /etc/snmp/snmpd.conf: rwuser admin ~]# systemctl start snmpd.service",
"directive user noauth | auth | priv OID",
"rwuser admin authpriv .1",
"defVersion 3 defSecurityLevel authPriv defSecurityName admin defPassphrase redhatsnmp",
"~]USD snmpwalk -v3 localhost system SNMPv2-MIB::sysDescr.0 = STRING: Linux localhost.localdomain 3.10.0-123.el7.x86_64 #1 SMP Mon May 5 11:16:57 EDT 2014 x86_64 [output truncated]",
"~]USD snmptable -Cb localhost HOST-RESOURCES-MIB::hrFSTable SNMP table: HOST-RESOURCES-MIB::hrFSTable Index MountPoint RemoteMountPoint Type Access Bootable StorageIndex LastFullBackupDate LastPartialBackupDate 1 \"/\" \"\" HOST-RESOURCES-TYPES::hrFSLinuxExt2 readWrite true 31 0-1-1,0:0:0.0 0-1-1,0:0:0.0 5 \"/dev/shm\" \"\" HOST-RESOURCES-TYPES::hrFSOther readWrite false 35 0-1-1,0:0:0.0 0-1-1,0:0:0.0 6 \"/boot\" \"\" HOST-RESOURCES-TYPES::hrFSLinuxExt2 readWrite false 36 0-1-1,0:0:0.0 0-1-1,0:0:0.0",
"~]USD snmpwalk localhost UCD-SNMP-MIB::systemStats UCD-SNMP-MIB::ssIndex.0 = INTEGER: 1 UCD-SNMP-MIB::ssErrorName.0 = STRING: systemStats UCD-SNMP-MIB::ssSwapIn.0 = INTEGER: 0 kB UCD-SNMP-MIB::ssSwapOut.0 = INTEGER: 0 kB UCD-SNMP-MIB::ssIOSent.0 = INTEGER: 0 blocks/s UCD-SNMP-MIB::ssIOReceive.0 = INTEGER: 0 blocks/s UCD-SNMP-MIB::ssSysInterrupts.0 = INTEGER: 29 interrupts/s UCD-SNMP-MIB::ssSysContext.0 = INTEGER: 18 switches/s UCD-SNMP-MIB::ssCpuUser.0 = INTEGER: 0 UCD-SNMP-MIB::ssCpuSystem.0 = INTEGER: 0 UCD-SNMP-MIB::ssCpuIdle.0 = INTEGER: 99 UCD-SNMP-MIB::ssCpuRawUser.0 = Counter32: 2278 UCD-SNMP-MIB::ssCpuRawNice.0 = Counter32: 1395 UCD-SNMP-MIB::ssCpuRawSystem.0 = Counter32: 6826 UCD-SNMP-MIB::ssCpuRawIdle.0 = Counter32: 3383736 UCD-SNMP-MIB::ssCpuRawWait.0 = Counter32: 7629 UCD-SNMP-MIB::ssCpuRawKernel.0 = Counter32: 0 UCD-SNMP-MIB::ssCpuRawInterrupt.0 = Counter32: 434 UCD-SNMP-MIB::ssIORawSent.0 = Counter32: 266770 UCD-SNMP-MIB::ssIORawReceived.0 = Counter32: 427302 UCD-SNMP-MIB::ssRawInterrupts.0 = Counter32: 743442 UCD-SNMP-MIB::ssRawContexts.0 = Counter32: 718557 UCD-SNMP-MIB::ssCpuRawSoftIRQ.0 = Counter32: 128 UCD-SNMP-MIB::ssRawSwapIn.0 = Counter32: 0 UCD-SNMP-MIB::ssRawSwapOut.0 = Counter32: 0",
"~]USD snmpwalk localhost UCD-SNMP-MIB::memory UCD-SNMP-MIB::memIndex.0 = INTEGER: 0 UCD-SNMP-MIB::memErrorName.0 = STRING: swap UCD-SNMP-MIB::memTotalSwap.0 = INTEGER: 1023992 kB UCD-SNMP-MIB::memAvailSwap.0 = INTEGER: 1023992 kB UCD-SNMP-MIB::memTotalReal.0 = INTEGER: 1021588 kB UCD-SNMP-MIB::memAvailReal.0 = INTEGER: 634260 kB UCD-SNMP-MIB::memTotalFree.0 = INTEGER: 1658252 kB UCD-SNMP-MIB::memMinimumSwap.0 = INTEGER: 16000 kB UCD-SNMP-MIB::memBuffer.0 = INTEGER: 30760 kB UCD-SNMP-MIB::memCached.0 = INTEGER: 216200 kB UCD-SNMP-MIB::memSwapError.0 = INTEGER: noError(0) UCD-SNMP-MIB::memSwapErrorMsg.0 = STRING:",
"~]USD snmptable localhost UCD-SNMP-MIB::laTable SNMP table: UCD-SNMP-MIB::laTable laIndex laNames laLoad laConfig laLoadInt laLoadFloat laErrorFlag laErrMessage 1 Load-1 0.00 12.00 0 0.000000 noError 2 Load-5 0.00 12.00 0 0.000000 noError 3 Load-15 0.00 12.00 0 0.000000 noError",
"~]USD snmptable -Cb localhost HOST-RESOURCES-MIB::hrStorageTable SNMP table: HOST-RESOURCES-MIB::hrStorageTable Index Type Descr AllocationUnits Size Used AllocationFailures 1 HOST-RESOURCES-TYPES::hrStorageRam Physical memory 1024 Bytes 1021588 388064 ? 3 HOST-RESOURCES-TYPES::hrStorageVirtualMemory Virtual memory 1024 Bytes 2045580 388064 ? 6 HOST-RESOURCES-TYPES::hrStorageOther Memory buffers 1024 Bytes 1021588 31048 ? 7 HOST-RESOURCES-TYPES::hrStorageOther Cached memory 1024 Bytes 216604 216604 ? 10 HOST-RESOURCES-TYPES::hrStorageVirtualMemory Swap space 1024 Bytes 1023992 0 ? 31 HOST-RESOURCES-TYPES::hrStorageFixedDisk / 4096 Bytes 2277614 250391 ? 35 HOST-RESOURCES-TYPES::hrStorageFixedDisk /dev/shm 4096 Bytes 127698 0 ? 36 HOST-RESOURCES-TYPES::hrStorageFixedDisk /boot 1024 Bytes 198337 26694 ?",
"~]USD snmptable -Cb localhost UCD-DISKIO-MIB::diskIOTable SNMP table: UCD-DISKIO-MIB::diskIOTable Index Device NRead NWritten Reads Writes LA1 LA5 LA15 NReadX NWrittenX 25 sda 216886272 139109376 16409 4894 ? ? ? 216886272 139109376 26 sda1 2455552 5120 613 2 ? ? ? 2455552 5120 27 sda2 1486848 0 332 0 ? ? ? 1486848 0 28 sda3 212321280 139104256 15312 4871 ? ? ? 212321280 139104256",
"~]USD snmptable -Cb localhost IF-MIB::ifTable SNMP table: IF-MIB::ifTable Index Descr Type Mtu Speed PhysAddress AdminStatus 1 lo softwareLoopback 16436 10000000 up 2 eth0 ethernetCsmacd 1500 0 52:54:0:c7:69:58 up 3 eth1 ethernetCsmacd 1500 0 52:54:0:a7:a3:24 down",
"~]USD snmpwalk localhost IF-MIB::ifDescr IF-MIB::ifDescr.1 = STRING: lo IF-MIB::ifDescr.2 = STRING: eth0 IF-MIB::ifDescr.3 = STRING: eth1 ~]USD snmpwalk localhost IF-MIB::ifOutOctets IF-MIB::ifOutOctets.1 = Counter32: 10060699 IF-MIB::ifOutOctets.2 = Counter32: 650 IF-MIB::ifOutOctets.3 = Counter32: 0 ~]USD snmpwalk localhost IF-MIB::ifInOctets IF-MIB::ifInOctets.1 = Counter32: 10060699 IF-MIB::ifInOctets.2 = Counter32: 78650 IF-MIB::ifInOctets.3 = Counter32: 0",
"#!/bin/sh NUMPIDS= pgrep httpd | wc -l exit USDNUMPIDS",
"extend name prog args",
"extend httpd_pids /bin/sh /usr/local/bin/check_apache.sh",
"~]USD snmpwalk localhost NET-SNMP-EXTEND-MIB::nsExtendObjects NET-SNMP-EXTEND-MIB::nsExtendNumEntries.0 = INTEGER: 1 NET-SNMP-EXTEND-MIB::nsExtendCommand.\"httpd_pids\" = STRING: /bin/sh NET-SNMP-EXTEND-MIB::nsExtendArgs.\"httpd_pids\" = STRING: /usr/local/bin/check_apache.sh NET-SNMP-EXTEND-MIB::nsExtendInput.\"httpd_pids\" = STRING: NET-SNMP-EXTEND-MIB::nsExtendCacheTime.\"httpd_pids\" = INTEGER: 5 NET-SNMP-EXTEND-MIB::nsExtendExecType.\"httpd_pids\" = INTEGER: exec(1) NET-SNMP-EXTEND-MIB::nsExtendRunType.\"httpd_pids\" = INTEGER: run-on-read(1) NET-SNMP-EXTEND-MIB::nsExtendStorage.\"httpd_pids\" = INTEGER: permanent(4) NET-SNMP-EXTEND-MIB::nsExtendStatus.\"httpd_pids\" = INTEGER: active(1) NET-SNMP-EXTEND-MIB::nsExtendOutput1Line.\"httpd_pids\" = STRING: NET-SNMP-EXTEND-MIB::nsExtendOutputFull.\"httpd_pids\" = STRING: NET-SNMP-EXTEND-MIB::nsExtendOutNumLines.\"httpd_pids\" = INTEGER: 1 NET-SNMP-EXTEND-MIB::nsExtendResult.\"httpd_pids\" = INTEGER: 8 NET-SNMP-EXTEND-MIB::nsExtendOutLine.\"httpd_pids\".1 = STRING:",
"#!/bin/sh PATTERN=USD1 NUMPIDS= pgrep USDPATTERN | wc -l echo \"There are USDNUMPIDS USDPATTERN processes.\" exit USDNUMPIDS",
"extend httpd_pids /bin/sh /usr/local/bin/check_proc.sh httpd extend snmpd_pids /bin/sh /usr/local/bin/check_proc.sh snmpd",
"~]USD snmpwalk localhost NET-SNMP-EXTEND-MIB::nsExtendObjects NET-SNMP-EXTEND-MIB::nsExtendNumEntries.0 = INTEGER: 2 NET-SNMP-EXTEND-MIB::nsExtendCommand.\"httpd_pids\" = STRING: /bin/sh NET-SNMP-EXTEND-MIB::nsExtendCommand.\"snmpd_pids\" = STRING: /bin/sh NET-SNMP-EXTEND-MIB::nsExtendArgs.\"httpd_pids\" = STRING: /usr/local/bin/check_proc.sh httpd NET-SNMP-EXTEND-MIB::nsExtendArgs.\"snmpd_pids\" = STRING: /usr/local/bin/check_proc.sh snmpd NET-SNMP-EXTEND-MIB::nsExtendInput.\"httpd_pids\" = STRING: NET-SNMP-EXTEND-MIB::nsExtendInput.\"snmpd_pids\" = STRING: NET-SNMP-EXTEND-MIB::nsExtendResult.\"httpd_pids\" = INTEGER: 8 NET-SNMP-EXTEND-MIB::nsExtendResult.\"snmpd_pids\" = INTEGER: 1 NET-SNMP-EXTEND-MIB::nsExtendOutLine.\"httpd_pids\".1 = STRING: There are 8 httpd processes. NET-SNMP-EXTEND-MIB::nsExtendOutLine.\"snmpd_pids\".1 = STRING: There are 1 snmpd processes.",
"~]USD snmpget localhost 'NET-SNMP-EXTEND-MIB::nsExtendResult.\"httpd_pids\"' UCD-SNMP-MIB::memAvailReal.0 NET-SNMP-EXTEND-MIB::nsExtendResult.\"httpd_pids\" = INTEGER: 8 UCD-SNMP-MIB::memAvailReal.0 = INTEGER: 799664 kB",
"use NetSNMP::agent (':all'); my USDagent = new NetSNMP::agent();",
"USDagent->register(\"hello_world\", \".1.3.6.1.4.1.8072.9999.9999\", \\&hello_handler);",
"USDrequest->setValue(ASN_OCTET_STR, \"hello world\");",
"my USDrequest; my USDstring_value = \"hello world\"; my USDinteger_value = \"8675309\"; for(USDrequest = USDrequests; USDrequest; USDrequest = USDrequest->next()) { my USDoid = USDrequest->getOID(); if (USDrequest_info->getMode() == MODE_GET) { if (USDoid == new NetSNMP::OID(\".1.3.6.1.4.1.8072.9999.9999.1.0\")) { USDrequest->setValue(ASN_OCTET_STR, USDstring_value); } elsif (USDoid == new NetSNMP::OID(\".1.3.6.1.4.1.8072.9999.9999.1.1\")) { USDrequest->setValue(ASN_INTEGER, USDinteger_value); } } elsif (USDrequest_info->getMode() == MODE_GETNEXT) { if (USDoid == new NetSNMP::OID(\".1.3.6.1.4.1.8072.9999.9999.1.0\")) { USDrequest->setOID(\".1.3.6.1.4.1.8072.9999.9999.1.1\"); USDrequest->setValue(ASN_INTEGER, USDinteger_value); } elsif (USDoid < new NetSNMP::OID(\".1.3.6.1.4.1.8072.9999.9999.1.0\")) { USDrequest->setOID(\".1.3.6.1.4.1.8072.9999.9999.1.0\"); USDrequest->setValue(ASN_OCTET_STR, USDstring_value); } } }",
"#!/usr/bin/perl use NetSNMP::agent (':all'); use NetSNMP::ASN qw(ASN_OCTET_STR ASN_INTEGER); sub hello_handler { my (USDhandler, USDregistration_info, USDrequest_info, USDrequests) = @_; my USDrequest; my USDstring_value = \"hello world\"; my USDinteger_value = \"8675309\"; for(USDrequest = USDrequests; USDrequest; USDrequest = USDrequest->next()) { my USDoid = USDrequest->getOID(); if (USDrequest_info->getMode() == MODE_GET) { if (USDoid == new NetSNMP::OID(\".1.3.6.1.4.1.8072.9999.9999.1.0\")) { USDrequest->setValue(ASN_OCTET_STR, USDstring_value); } elsif (USDoid == new NetSNMP::OID(\".1.3.6.1.4.1.8072.9999.9999.1.1\")) { USDrequest->setValue(ASN_INTEGER, USDinteger_value); } } elsif (USDrequest_info->getMode() == MODE_GETNEXT) { if (USDoid == new NetSNMP::OID(\".1.3.6.1.4.1.8072.9999.9999.1.0\")) { USDrequest->setOID(\".1.3.6.1.4.1.8072.9999.9999.1.1\"); USDrequest->setValue(ASN_INTEGER, USDinteger_value); } elsif (USDoid < new NetSNMP::OID(\".1.3.6.1.4.1.8072.9999.9999.1.0\")) { USDrequest->setOID(\".1.3.6.1.4.1.8072.9999.9999.1.0\"); USDrequest->setValue(ASN_OCTET_STR, USDstring_value); } } } } my USDagent = new NetSNMP::agent(); USDagent->register(\"hello_world\", \".1.3.6.1.4.1.8072.9999.9999\", \\&hello_handler);",
"perl do \"/usr/share/snmp/hello_world.pl\"",
"~]USD snmpwalk localhost NET-SNMP-MIB::netSnmpPlaypen NET-SNMP-MIB::netSnmpPlaypen.1.0 = STRING: \"hello world\" NET-SNMP-MIB::netSnmpPlaypen.1.1 = INTEGER: 8675309",
"~]USD snmpget localhost NET-SNMP-MIB::netSnmpPlaypen.1.0 NET-SNMP-MIB::netSnmpPlaypen.1.1 NET-SNMP-MIB::netSnmpPlaypen.1.0 = STRING: \"hello world\" NET-SNMP-MIB::netSnmpPlaypen.1.1 = INTEGER: 8675309"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/system_administrators_guide/ch-system_monitoring_tools |
Chapter 1. Support policy for Eclipse Temurin | Chapter 1. Support policy for Eclipse Temurin Red Hat will support select major versions of Eclipse Temurin in its products. For consistency, these are the same versions that Oracle designates as long-term support (LTS) for the Oracle JDK. A major version of Eclipse Temurin will be supported for a minimum of six years from the time that version is first introduced. For more information, see the Eclipse Temurin Life Cycle and Support Policy . Note RHEL 6 reached the end of life in November 2020. Because of this, Eclipse Temurin does not support RHEL 6 as a supported configuration. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_eclipse_temurin_11.0.24/rn-openjdk-temurin-support-policy |
Chapter 1. Overview of General Security Concepts | Chapter 1. Overview of General Security Concepts Before digging into how JBoss EAP handles security, it is important to understand a few basic security concepts. 1.1. Authentication Authentication refers to identifying a subject and verifying the authenticity of the identification. The most common authentication mechanism is a username and password combination, but other mechanisms, such as shared keys, smart cards or fingerprints, are also used for authentication. When in the context of Jakarta EE declarative security, the result of a successful authentication is called a principal. 1.2. Authorization Authorization refers to a way of specifying access rights or defining access policies. A system can then implement a mechanism to use those policies to permit or deny access to resources for the requester. In many cases, this is implemented by matching a principal with a set of actions or places they are allowed to access, sometimes referred to as a role. 1.3. Authentication and Authorization in Practice Although authentication and authorization are distinct concepts, they are often linked. Many modules written to handle authentication also handle authorization and vice versa. Example The application MyPersonalSoapbox provides the ability to post and view messages. Principals with the Talk role can post messages and view other posted messages. Users who have not logged in have the Listen role and can view posted messages. Suzy, Adam, and Bob use the application. Suzy and Bob can authenticate with their username and password, but Adam does not have a username and password yet. Suzy has the Talk role, but Bob does not have any roles, neither Talk nor Listen . When Suzy authenticates, she may post and view messages. When Adam uses MyPersonalSoapbox , he cannot log in, but he can see posted messages. When Bob logs in, he cannot post any messages, nor can he view any other posted messages. Suzy is both authenticated and authorized. Adam has not authenticated, but he is authorized, with the Listen role, to view messages. Bob is authenticated but has no authorization and no roles. 1.4. Encryption Encryption refers to encoding sensitive information by applying mathematical algorithms to it. Data is secured by converting, or encrypting, it to an encoded format. To read the data again, the encoded format must be converted, or decrypted, to the original format. Encryption can be applied to simple string data in files or databases or even on data sent across communications streams. Examples of encryption include the following scenarios. LUKS can be used to encrypt Linux file system disks. The blowfish or AES algorithms can be used to encrypt data stored in Postgres databases. The HTTPS protocol encrypts all data via Secure Sockets Layer/Transport Layer Security, SSL/TLS, before transferring it from one party to another. When a user connects from one server to another using the Secure Shell, SSH protocol, all of the communication is sent in an encrypted tunnel. 1.5. SSL/TLS and Certificates SSL/TLS encrypts network traffic between two systems by using a symmetric key that is exchanged between and only known by those two systems. To ensure a secure exchange of the symmetric key, SSL/TLS uses Public Key Infrastructure (PKI), a method of encryption that uses a key pair. A key pair consists of two separate but matching cryptographic keys: a public key and a private key. 
The public key is shared with any party and is used to encrypt data; the private key is kept secret and is used to decrypt data that has been encrypted using the public key. When a client requests a secure connection to exchange symmetric keys, a handshake phase occurs before secure communication can begin. During the SSL/TLS handshake, the server passes its public key to the client in the form of a certificate. The certificate contains the identity of the server, its URL, the public key of the server, and a digital signature that validates the certificate. The client validates the certificate and decides whether the certificate is trusted. If the certificate is trusted, the client generates the symmetric key for the SSL/TLS connection, encrypts it using the public key of the server, and sends it back to the server. The server uses its private key to decrypt the symmetric key. Further communication between the two machines over this connection is encrypted using the symmetric key. There are two kinds of certificates: self-signed certificates and authority-signed certificates. A self-signed certificate uses its private key to sign itself; that signature is unverified because it is not connected to a chain of trust. An authority-signed certificate is a certificate that is issued to a party by a certificate authority, CA, and is signed by that CA, for example, VeriSign, CAcert, or RSA. The CA verifies the authenticity of the certificate holder. Self-signed certificates can be faster and easier to generate and require less infrastructure to manage, but they can be difficult for clients to verify their authenticity because a third party has not confirmed their authenticity. This inherently makes the self-signed certificate less secure. Authority-signed certificates can take more effort to set up, but they are easier for clients to verify their authenticity. A chain of trust has been created because a third party has confirmed the authenticity of the certificate holder. Warning Red Hat recommends that SSLv2, SSLv3, and TLSv1.0 be explicitly disabled in favor of TLSv1.1 or TLSv1.2 in all affected packages. 1.6. Single Sign-On Single sign-on (SSO) allows principals authenticated to one resource to implicitly authorize access to other resources. If a set of distinct resources is secured by SSO, a user is only required to authenticate the first time they access any of the secured resources. Upon successful authentication, the roles associated with the user are stored and used for authorization of all other associated resources. This allows the user to access any additional authorized resources without reauthenticating. If the user logs out of a resource or a resource invalidates the session programmatically, all persisted authorization data is removed and the process starts over. In the case of a resource session timeout, the SSO session is not invalidated if there are other valid resource sessions associated with that user. SSO may be used for authentication and authorization on web applications and desktop applications. In some cases, an SSO implementation can integrate with both. Within SSO, there are a few common terms used to describe different concepts and parts of the system. Identity Management Identity management (IDM) refers to the idea of managing principals and their associated authentication, authorization, and privileges across one or more systems or domains. The term identity and access management (IAM) is sometimes used to describe this same concept. 
Identity Provider An identity provider (IDP) is the authoritative entity responsible for authenticating an end user and asserting the identity for that user in a trusted fashion to trusted partners. Identity Store An identity provider needs an identity store to retrieve users' information to use during the authentication and authorization process. Identity stores can be any type of repository: a database, Lightweight Directory Access Protocol (LDAP), properties file, and so on. Service Provider A service provider (SP) relies on an identity provider to assert information about a user via an electronic user credential, leaving the service provider to manage access control and dissemination based on a trusted set of user credential assertions. Clustered and Non-Clustered SSO Non-clustered SSO limits the sharing of authorization information to applications on the same virtual host. There is also no resiliency in the event of a host failure. In a clustered SSO scenario, data can be shared between applications on multiple virtual hosts, which makes it resilient to failures. In addition, a clustered SSO is able to receive requests from a load balancer. 1.6.1. Third-Party SSO Implementations Kerberos Kerberos is a network authentication protocol for client-server applications. It uses secret-key symmetric cryptography to allow secure authentication across a non-secure network. Kerberos uses security tokens called tickets. To use a secured service, users need to obtain a ticket from the ticket granting service (TGS) which is a service that runs on a server in their network. After obtaining the ticket, users request a Service Ticket (ST) from an authentication service (AS) which is another service running in the same network. Users then use the ST to authenticate to the desired service. The TGS and the AS run inside an enclosing service called the key distribution center (KDC). Kerberos is designed to be used in a client-server desktop environment and is not usually used in web applications or thin client environments. However, many organizations use a Kerberos system for desktop authentication and prefer to reuse their existing system rather than create a second one for their web applications. Kerberos is an integral part of Microsoft's Active Directory and is used in many Red Hat Enterprise Linux environments. SPNEGO Simple and protected GSS_API negotiation mechanism (SPNEGO) provides a mechanism for extending a Kerberos-based SSO environment for use in web applications. When an application on a client computer, such as a web browser, attempts to access a protected page on a web server, the server responds that authorization is required. The application then requests an ST from the KDC. The application wraps the ticket in a request formatted for SPNEGO and sends it back to the web application via the browser. The web container running the deployed web application unpacks the request and authenticates the ticket. Access is granted upon successful authentication. SPNEGO works with all types of Kerberos providers, including the Kerberos service within Red Hat Enterprise Linux and the Kerberos server, which is an integral part of Microsoft's Active Directory. Microsoft's Active Directory Active Directory (AD) is a directory service developed by Microsoft to authenticate users and computers in a Microsoft Windows domain. It comes as part of Windows Server. The computer running Windows Server controlling the domain is referred to as the domain controller. 
Red Hat Enterprise Linux can integrate with Active Directory domains as can Red Hat Identity Management, Red Hat JBoss Enterprise Application Platform, and other Red Hat products. Active Directory relies on three core technologies that work together: LDAP to store information about users, computers, passwords, and other resources Kerberos to provide secure authentication over the network Domain name service (DNS) to provide mappings between IP addresses and host names of computers and other devices in the network 1.6.2. Claims-Based Identity One way to implement SSO is to use a claims-based identity system. A claims-based identity system allows systems to pass identity information but abstracts that information into two components: a claim and an issuer or authority. A claim is statement that one subject, such as a user, group, application, or organization, makes about another. That claim or set of claims is packaged into a token or set of tokens and issued by a provider. Claims-based identity allows individual secured resources to implement SSO without having to know everything about a user. Security Token Service A security token service (STS) is an authentication service that issues security tokens to clients for use when authenticating and authorizing users for secured applications, web services or Jakarta Enterprise Beans. A client attempting to authenticate against an application secured with STS, known as a service provider, will be redirected to a centralized STS authenticator and issued a token. If successful, that client will reattempt to access the service provider, providing their token along with the original request. That service provider will validate the token from the client with the STS and proceed accordingly. This same token may be reused by the client against other web services or Jakarta Enterprise Beans that are connected to the STS. The concept of a centralized STS that can issue, cancel, renew, and validate security tokens and specifies the format of security token request and response messages is known as WS-Trust . Browser-Based SSO In browser-based SSO, one or more web applications, known as service providers, connect to a centralized identity provider in a hub and spoke architecture. The IDP acts as the central source, or hub, for identity and role information by issuing claim statements in SAML tokens to service providers, or spokes. Requests may be issued when a user attempts to access a service provider or if a user attempts to authenticate directly with the identity provider. These are known as SP-initiated and IDP-initiated flows, respectively, and will both issue the same claim statements. SAML Security Assertion Markup Language (SAML) is a data format that allows two parties, usually an identity provider and a service provider, to exchange authentication and authorization information. A SAML token is a type of token issued by an STS or IDP; it can be used to enable SSO. A resource secured by SAML, SAML service provider, redirects users to the SAML identity provider, a type of STS or IDP, to obtain a valid SAML token before authenticating and authorizing that user. Desktop-Based SSO Desktop-based SSO enables service providers and desktop domains, for example Active Directory or Kerberos, to share a principal. In practice, this allows users to log in on their computer using their domain credentials and then have service providers reuse that principal during authentication, without having to reauthenticate, thus providing SSO. 1.7. 
LDAP Lightweight Directory Access Protocol (LDAP) is a protocol for storing and distributing directory information across a network. This directory information includes information about users, hardware devices, access roles and restrictions, and other information. In LDAP, the distinguished name (DN), uniquely identifies an object in a directory. Each distinguished name must have a unique name and location from all other objects, which is achieved using a number of attribute-value pairs (AVPs). The AVPs define information such as common names and organization unit. The combination of these values results in a unique string required by the LDAP. Some common implementations of LDAP include Red Hat Directory Server, OpenLDAP, Active Directory, IBM Tivoli Directory Server, Oracle Internet Directory, and 389 Directory Server. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/security_architecture/overview_of_general_security_concepts |
Chapter 7. Troubleshooting Ceph placement groups | Chapter 7. Troubleshooting Ceph placement groups This section contains information about fixing the most common errors related to the Ceph Placement Groups (PGs). Prerequisites Verify your network connection. Ensure that Monitors are able to form a quorum. Ensure that all healthy OSDs are up and in , and the backfilling and recovery processes are finished. 7.1. Most common Ceph placement groups errors The following table lists the most common error messages that are returned by the ceph health detail command. The table provides links to corresponding sections that explain the errors and point to specific procedures to fix the problems. In addition, you can list placement groups that are stuck in a state that is not optimal. See Section 7.2, "Listing placement groups stuck in stale , inactive , or unclean state" for details. Prerequisites A running Red Hat Ceph Storage cluster. A running Ceph Object Gateway. 7.1.1. Placement group error messages A table of common placement group error messages, and a potential fix. Error message See HEALTH_ERR pgs down Placement groups are down pgs inconsistent Inconsistent placement groups scrub errors Inconsistent placement groups HEALTH_WARN pgs stale Stale placement groups unfound Unfound objects 7.1.2. Stale placement groups The ceph health command lists some Placement Groups (PGs) as stale : What This Means The Monitor marks a placement group as stale when it does not receive any status update from the primary OSD of the placement group's acting set or when other OSDs reported that the primary OSD is down . Usually, PGs enter the stale state after you start the storage cluster and until the peering process completes. However, when the PGs remain stale for longer than expected, it might indicate that the primary OSD for those PGs is down or not reporting PG statistics to the Monitor. When the primary OSD storing stale PGs is back up , Ceph starts to recover the PGs. The mon_osd_report_timeout setting determines how often OSDs report PGs statistics to Monitors. By default, this parameter is set to 0.5 , which means that OSDs report the statistics every half a second. To Troubleshoot This Problem Identify which PGs are stale and on what OSDs they are stored. The error message includes information similar to the following example: Example Troubleshoot any problems with the OSDs that are marked as down . For details, see Down OSDs . Additional Resources The Monitoring Placement Group Sets section in the Administration Guide for Red Hat Ceph Storage 8 7.1.3. Inconsistent placement groups Some placement groups are marked as active + clean + inconsistent and the ceph health detail returns an error message similar to the following one: What This Means When Ceph detects inconsistencies in one or more replicas of an object in a placement group, it marks the placement group as inconsistent . The most common inconsistencies are: Objects have an incorrect size. Objects are missing from one replica after a recovery finished. In most cases, errors during scrubbing cause inconsistency within placement groups. To Troubleshoot This Problem Log in to the Cephadm shell: Example Determine which placement group is in the inconsistent state: Determine why the placement group is inconsistent . 
Start the deep scrubbing process on the placement group: Syntax Replace ID with the ID of the inconsistent placement group, for example: Search the output of the ceph -w command for any messages related to that placement group: Syntax Replace ID with the ID of the inconsistent placement group, for example: If the output includes any error messages similar to the following ones, you can repair the inconsistent placement group. See Repairing inconsistent placement groups for details. Syntax If the output includes any error messages similar to the following ones, it is not safe to repair the inconsistent placement group because you can lose data. Open a support ticket in this situation. See Contacting Red Hat support for details. Additional Resources See the Listing placement group inconsistencies in the Red Hat Ceph Storage Troubleshooting Guide . See the Ceph data integrity section in the Red Hat Ceph Storage Architecture Guide . See the Scrubbing the OSD section in the Red Hat Ceph Storage Configuration Guide . 7.1.4. Unclean placement groups The ceph health command returns an error message similar to the following one: What This Means Ceph marks a placement group as unclean if it has not achieved the active+clean state for the number of seconds specified in the mon_pg_stuck_threshold parameter in the Ceph configuration file. The default value of mon_pg_stuck_threshold is 300 seconds. If a placement group is unclean , it contains objects that are not replicated the number of times specified in the osd_pool_default_size parameter. The default value of osd_pool_default_size is 3 , which means that Ceph creates three replicas. Usually, unclean placement groups indicate that some OSDs might be down . To Troubleshoot This Problem Determine which OSDs are down : Troubleshoot and fix any problems with the OSDs. See Down OSDs for details. Additional Resources Listing placement groups stuck in stale inactive or unclean state . 7.1.5. Inactive placement groups The ceph health command returns an error message similar to the following one: What This Means Ceph marks a placement group as inactive if it has not been active for the number of seconds specified in the mon_pg_stuck_threshold parameter in the Ceph configuration file. The default value of mon_pg_stuck_threshold is 300 seconds. Usually, inactive placement groups indicate that some OSDs might be down . To Troubleshoot This Problem Determine which OSDs are down : Troubleshoot and fix any problems with the OSDs. Additional Resources Listing placement groups stuck in stale inactive or unclean state See Down OSDs for details. 7.1.6. Placement groups are down The ceph health detail command reports that some placement groups are down : What This Means In certain cases, the peering process can be blocked, which prevents a placement group from becoming active and usable. Usually, the failure of an OSD causes these peering failures. To Troubleshoot This Problem Determine what blocks the peering process: Syntax Replace ID with the ID of the placement group that is down : Example The recovery_state section includes information on why the peering process is blocked. If the output includes the peering is blocked due to down osds error message, see Down OSDs . If you see any other error message, open a support ticket. See Contacting Red Hat Support for service for details. Additional Resources The Ceph OSD peering section in the Red Hat Ceph Storage Administration Guide . 7.1.7.
Unfound objects The ceph health command returns an error message similar to the following one, containing the unfound keyword: What This Means Ceph marks objects as unfound when it knows these objects or their newer copies exist but it is unable to find them. As a consequence, Ceph cannot recover such objects and proceed with the recovery process. An Example Situation A placement group stores data on osd.1 and osd.2 . osd.1 goes down . osd.2 handles some write operations. osd.1 comes up . A peering process between osd.1 and osd.2 starts, and the objects missing on osd.1 are queued for recovery. Before Ceph copies new objects, osd.2 goes down . As a result, osd.1 knows that these objects exist, but there is no OSD that has a copy of the objects. In this scenario, Ceph is waiting for the failed node to be accessible again, and the unfound objects block the recovery process. To Troubleshoot This Problem Log in to the Cephadm shell: Example Determine which placement group contains unfound objects: List more information about the placement group: Syntax Replace ID with the ID of the placement group containing the unfound objects: Example The might_have_unfound section includes OSDs where Ceph tried to locate the unfound objects: The already probed status indicates that Ceph cannot locate the unfound objects in that OSD. The osd is down status indicates that Ceph cannot contact that OSD. Troubleshoot the OSDs that are marked as down . See Down OSDs for details. If you are unable to fix the problem that causes the OSD to be down , open a support ticket. See Contacting Red Hat Support for service for details. 7.2. Listing placement groups stuck in stale , inactive , or unclean state After a failure, placement groups enter states like degraded or peering . These states indicate normal progression through the failure recovery process. However, if a placement group stays in one of these states for a longer time than expected, it can be an indication of a larger problem. The Monitors report when placement groups get stuck in a state that is not optimal. The mon_pg_stuck_threshold option in the Ceph configuration file determines the number of seconds after which placement groups are considered inactive , unclean , or stale . The following table lists these states together with a short explanation. State What it means Most common causes See inactive The PG has not been able to service read/write requests. Peering problems Inactive placement groups unclean The PG contains objects that are not replicated the desired number of times. Something is preventing the PG from recovering. unfound objects OSDs are down Incorrect configuration Unclean placement groups stale The status of the PG has not been updated by a ceph-osd daemon. OSDs are down Stale placement groups Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure Log into the Cephadm shell: Example List the stuck PGs: Example Additional Resources See the Placement Group States section in the Red Hat Ceph Storage Administration Guide . 7.3. Listing placement group inconsistencies Use the rados utility to list inconsistencies in various replicas of objects. Use the --format=json-pretty option to list a more detailed output. This section covers the listing of: Inconsistent placement groups in a pool Inconsistent objects in a placement group Inconsistent snapshot sets in a placement group Prerequisites A running Red Hat Ceph Storage cluster in a healthy state. Root-level access to the node.
Procedure List all the inconsistent placement groups in a pool: Syntax Example List inconsistent objects in a placement group with ID: Syntax Example The following fields are important to determine what causes the inconsistency: name : The name of the object with inconsistent replicas. nspace : The namespace that is a logical separation of a pool. It's empty by default. locator : The key that is used as the alternative of the object name for placement. snap : The snapshot ID of the object. The only writable version of the object is called head . If an object is a clone, this field includes its sequential ID. version : The version ID of the object with inconsistent replicas. Each write operation to an object increments it. errors : A list of errors that indicate inconsistencies between shards without determining which shard or shards are incorrect. See the shard array to further investigate the errors. data_digest_mismatch : The digest of the replica read from one OSD is different from the other OSDs. size_mismatch : The size of a clone or the head object does not match the expectation. read_error : This error indicates inconsistencies caused most likely by disk errors. union_shard_error : The union of all errors specific to shards. These errors are connected to a faulty shard. The errors that end with oi indicate that you have to compare the information from a faulty object to information with selected objects. See the shard array to further investigate the errors. In the above example, the object replica stored on osd.2 has different digest than the replicas stored on osd.0 and osd.1 . Specifically, the digest of the replica is not 0xffffffff as calculated from the shard read from osd.2 , but 0xe978e67f . In addition, the size of the replica read from osd.2 is 0, while the size reported by osd.0 and osd.1 is 968. List inconsistent sets of snapshots: Syntax Example The command returns the following errors: ss_attr_missing : One or more attributes are missing. Attributes are information about snapshots encoded into a snapshot set as a list of key-value pairs. ss_attr_corrupted : One or more attributes fail to decode. clone_missing : A clone is missing. snapset_mismatch : The snapshot set is inconsistent by itself. head_mismatch : The snapshot set indicates that head exists or not, but the scrub results report otherwise. headless : The head of the snapshot set is missing. size_mismatch : The size of a clone or the head object does not match the expectation. Additional Resources Inconsistent placement groups section in the Red Hat Ceph Storage Troubleshooting Guide . Repairing inconsistent placement groups section in the Red Hat Ceph Storage Troubleshooting Guide . 7.4. Repairing inconsistent placement groups Due to an error during deep scrubbing, some placement groups can include inconsistencies. Ceph reports such placement groups as inconsistent : Warning You can repair only certain inconsistencies. Do not repair the placement groups if the Ceph logs include the following errors: Open a support ticket instead. See Contacting Red Hat Support for service for details. Prerequisites Root-level access to the Ceph Monitor node. Procedure Repair the inconsistent placement groups: Syntax Replace ID with the ID of the inconsistent placement group. Additional Resources See the Inconsistent placement groups section in the Red Hat Ceph Storage Troubleshooting Guide . See the Listing placement group inconsistencies Red Hat Ceph Storage Troubleshooting Guide . 7.5. 
Increasing the placement group Insufficient Placement Group (PG) count impacts the performance of the Ceph cluster and data distribution. It is one of the main causes of the nearfull osds error messages. The recommended ratio is between 100 and 300 PGs per OSD. This ratio can decrease when you add more OSDs to the cluster. The pg_num and pgp_num parameters determine the PG count. These parameters are configured per each pool, and therefore, you must adjust each pool with low PG count separately. Important Increasing the PG count is the most intensive process that you can perform on a Ceph cluster. This process might have a serious performance impact if not done in a slow and methodical way. Once you increase pgp_num , you will not be able to stop or reverse the process and you must complete it. Consider increasing the PG count outside of business critical processing time allocation, and alert all clients about the potential performance impact. Do not change the PG count if the cluster is in the HEALTH_ERR state. Prerequisites A running Red Hat Ceph Storage cluster in a healthy state. Root-level access to the node. Procedure Reduce the impact of data redistribution and recovery on individual OSDs and OSD hosts: Lower the value of the osd max backfills , osd_recovery_max_active , and osd_recovery_op_priority parameters: Disable the shallow and deep scrubbing: Use the Ceph Placement Groups (PGs) per Pool Calculator to calculate the optimal value of the pg_num and pgp_num parameters. Increase the pg_num value in small increments until you reach the desired value. Determine the starting increment value. Use a very low value that is a power of two, and increase it when you determine the impact on the cluster. The optimal value depends on the pool size, OSD count, and client I/O load. Increment the pg_num value: Syntax Specify the pool name and the new value, for example: Example Monitor the status of the cluster: Example The PGs state will change from creating to active+clean . Wait until all PGs are in the active+clean state. Increase the pgp_num value in small increments until you reach the desired value: Determine the starting increment value. Use a very low value that is a power of two, and increase it when you determine the impact on the cluster. The optimal value depends on the pool size, OSD count, and client I/O load. Increment the pgp_num value: Syntax Specify the pool name and the new value, for example: Monitor the status of the cluster: The PGs state will change through peering , wait_backfill , backfilling , recover , and others. Wait until all PGs are in the active+clean state. Repeat the steps for all pools with insufficient PG count. Set osd max backfills , osd_recovery_max_active , and osd_recovery_op_priority to their default values: Enable the shallow and deep scrubbing: Additional Resources See the Nearfull OSDs See the Monitoring Placement Group Sets section in the Red Hat Ceph Storage Administration Guide . See Chapter 3, Troubleshooting networking issues for details. See Chapter 4, Troubleshooting Ceph Monitors for details about troubleshooting the most common errors related to Ceph Monitors. See Chapter 5, Troubleshooting Ceph OSDs for details about troubleshooting the most common errors related to Ceph OSDs. See the Auto-scaling placement groups section in the Red Hat Ceph Storage Storage Strategies Guide for more information on PG autoscaler. | [
"HEALTH_WARN 24 pgs stale; 3/300 in osds are down",
"ceph health detail HEALTH_WARN 24 pgs stale; 3/300 in osds are down pg 2.5 is stuck stale+active+remapped, last acting [2,0] osd.10 is down since epoch 23, last address 192.168.106.220:6800/11080 osd.11 is down since epoch 13, last address 192.168.106.220:6803/11539 osd.12 is down since epoch 24, last address 192.168.106.220:6806/11861",
"HEALTH_ERR 1 pgs inconsistent; 2 scrub errors pg 0.6 is active+clean+inconsistent, acting [0,1,2] 2 scrub errors",
"cephadm shell",
"ceph health detail HEALTH_ERR 1 pgs inconsistent; 2 scrub errors pg 0.6 is active+clean+inconsistent, acting [0,1,2] 2 scrub errors",
"ceph pg deep-scrub ID",
"ceph pg deep-scrub 0.6 instructing pg 0.6 on osd.0 to deep-scrub",
"ceph -w | grep ID",
"ceph -w | grep 0.6 2022-05-26 01:35:36.778215 osd.106 [ERR] 0.6 deep-scrub stat mismatch, got 636/635 objects, 0/0 clones, 0/0 dirty, 0/0 omap, 0/0 hit_set_archive, 0/0 whiteouts, 1855455/1854371 bytes. 2022-05-26 01:35:36.788334 osd.106 [ERR] 0.6 deep-scrub 1 errors",
"PG . ID shard OSD : soid OBJECT missing attr , missing attr _ATTRIBUTE_TYPE PG . ID shard OSD : soid OBJECT digest 0 != known digest DIGEST , size 0 != known size SIZE PG . ID shard OSD : soid OBJECT size 0 != known size SIZE PG . ID deep-scrub stat mismatch, got MISMATCH PG . ID shard OSD : soid OBJECT candidate had a read error, digest 0 != known digest DIGEST",
"PG . ID shard OSD : soid OBJECT digest DIGEST != known digest DIGEST PG . ID shard OSD : soid OBJECT omap_digest DIGEST != known omap_digest DIGEST",
"HEALTH_WARN 197 pgs stuck unclean",
"ceph osd tree",
"HEALTH_WARN 197 pgs stuck inactive",
"ceph osd tree",
"HEALTH_ERR 7 pgs degraded; 12 pgs down; 12 pgs peering; 1 pgs recovering; 6 pgs stuck unclean; 114/3300 degraded (3.455%); 1/3 in osds are down pg 0.5 is down+peering pg 1.4 is down+peering osd.1 is down since epoch 69, last address 192.168.106.220:6801/8651",
"ceph pg ID query",
"ceph pg 0.5 query { \"state\": \"down+peering\", \"recovery_state\": [ { \"name\": \"Started\\/Primary\\/Peering\\/GetInfo\", \"enter_time\": \"2021-08-06 14:40:16.169679\", \"requested_info_from\": []}, { \"name\": \"Started\\/Primary\\/Peering\", \"enter_time\": \"2021-08-06 14:40:16.169659\", \"probing_osds\": [ 0, 1], \"blocked\": \"peering is blocked due to down osds\", \"down_osds_we_would_probe\": [ 1], \"peering_blocked_by\": [ { \"osd\": 1, \"current_lost_at\": 0, \"comment\": \"starting or marking this osd lost may let us proceed\"}]}, { \"name\": \"Started\", \"enter_time\": \"2021-08-06 14:40:16.169513\"} ] }",
"HEALTH_WARN 1 pgs degraded; 78/3778 unfound (2.065%)",
"cephadm shell",
"ceph health detail HEALTH_WARN 1 pgs recovering; 1 pgs stuck unclean; recovery 5/937611 objects degraded (0.001%); 1/312537 unfound (0.000%) pg 3.8a5 is stuck unclean for 803946.712780, current state active+recovering, last acting [320,248,0] pg 3.8a5 is active+recovering, acting [320,248,0], 1 unfound recovery 5/937611 objects degraded (0.001%); **1/312537 unfound (0.000%)**",
"ceph pg ID query",
"ceph pg 3.8a5 query { \"state\": \"active+recovering\", \"epoch\": 10741, \"up\": [ 320, 248, 0], \"acting\": [ 320, 248, 0], <snip> \"recovery_state\": [ { \"name\": \"Started\\/Primary\\/Active\", \"enter_time\": \"2021-08-28 19:30:12.058136\", \"might_have_unfound\": [ { \"osd\": \"0\", \"status\": \"already probed\"}, { \"osd\": \"248\", \"status\": \"already probed\"}, { \"osd\": \"301\", \"status\": \"already probed\"}, { \"osd\": \"362\", \"status\": \"already probed\"}, { \"osd\": \"395\", \"status\": \"already probed\"}, { \"osd\": \"429\", \"status\": \"osd is down\"}], \"recovery_progress\": { \"backfill_targets\": [], \"waiting_on_backfill\": [], \"last_backfill_started\": \"0\\/\\/0\\/\\/-1\", \"backfill_info\": { \"begin\": \"0\\/\\/0\\/\\/-1\", \"end\": \"0\\/\\/0\\/\\/-1\", \"objects\": []}, \"peer_backfill_info\": [], \"backfills_in_flight\": [], \"recovering\": [], \"pg_backend\": { \"pull_from_peer\": [], \"pushing\": []}}, \"scrub\": { \"scrubber.epoch_start\": \"0\", \"scrubber.active\": 0, \"scrubber.block_writes\": 0, \"scrubber.finalizing\": 0, \"scrubber.waiting_on\": 0, \"scrubber.waiting_on_whom\": []}}, { \"name\": \"Started\", \"enter_time\": \"2021-08-28 19:30:11.044020\"}],",
"cephadm shell",
"ceph pg dump_stuck inactive ceph pg dump_stuck unclean ceph pg dump_stuck stale",
"rados list-inconsistent-pg POOL --format=json-pretty",
"rados list-inconsistent-pg data --format=json-pretty [0.6]",
"rados list-inconsistent-obj PLACEMENT_GROUP_ID",
"rados list-inconsistent-obj 0.6 { \"epoch\": 14, \"inconsistents\": [ { \"object\": { \"name\": \"image1\", \"nspace\": \"\", \"locator\": \"\", \"snap\": \"head\", \"version\": 1 }, \"errors\": [ \"data_digest_mismatch\", \"size_mismatch\" ], \"union_shard_errors\": [ \"data_digest_mismatch_oi\", \"size_mismatch_oi\" ], \"selected_object_info\": \"0:602f83fe:::foo:head(16'1 client.4110.0:1 dirty|data_digest|omap_digest s 968 uv 1 dd e978e67f od ffffffff alloc_hint [0 0 0])\", \"shards\": [ { \"osd\": 0, \"errors\": [], \"size\": 968, \"omap_digest\": \"0xffffffff\", \"data_digest\": \"0xe978e67f\" }, { \"osd\": 1, \"errors\": [], \"size\": 968, \"omap_digest\": \"0xffffffff\", \"data_digest\": \"0xe978e67f\" }, { \"osd\": 2, \"errors\": [ \"data_digest_mismatch_oi\", \"size_mismatch_oi\" ], \"size\": 0, \"omap_digest\": \"0xffffffff\", \"data_digest\": \"0xffffffff\" } ] } ] }",
"rados list-inconsistent-snapset PLACEMENT_GROUP_ID",
"rados list-inconsistent-snapset 0.23 --format=json-pretty { \"epoch\": 64, \"inconsistents\": [ { \"name\": \"obj5\", \"nspace\": \"\", \"locator\": \"\", \"snap\": \"0x00000001\", \"headless\": true }, { \"name\": \"obj5\", \"nspace\": \"\", \"locator\": \"\", \"snap\": \"0x00000002\", \"headless\": true }, { \"name\": \"obj5\", \"nspace\": \"\", \"locator\": \"\", \"snap\": \"head\", \"ss_attr_missing\": true, \"extra_clones\": true, \"extra clones\": [ 2, 1 ] } ]",
"HEALTH_ERR 1 pgs inconsistent; 2 scrub errors pg 0.6 is active+clean+inconsistent, acting [0,1,2] 2 scrub errors",
"_PG_._ID_ shard _OSD_: soid _OBJECT_ digest _DIGEST_ != known digest _DIGEST_ _PG_._ID_ shard _OSD_: soid _OBJECT_ omap_digest _DIGEST_ != known omap_digest _DIGEST_",
"ceph pg repair ID",
"ceph tell osd.* injectargs '--osd_max_backfills 1 --osd_recovery_max_active 1 --osd_recovery_op_priority 1'",
"ceph osd set noscrub ceph osd set nodeep-scrub",
"ceph osd pool set POOL pg_num VALUE",
"ceph osd pool set data pg_num 4",
"ceph -s",
"ceph osd pool set POOL pgp_num VALUE",
"ceph osd pool set data pgp_num 4",
"ceph -s",
"ceph tell osd.* injectargs '--osd_max_backfills 1 --osd_recovery_max_active 3 --osd_recovery_op_priority 3'",
"ceph osd unset noscrub ceph osd unset nodeep-scrub"
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/troubleshooting_guide/troubleshooting-ceph-placement-groups |
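When several placement groups in one pool are flagged as inconsistent, it can help to queue a deep scrub for each of them in one pass before deciding whether ceph pg repair is safe. The loop below is a sketch only: the pool name is an assumption, it requires jq, and it is meant to be run inside the cephadm shell as in the procedures above.

```shell
#!/bin/bash
# Queue a deep scrub for every inconsistent PG in one pool, then review the
# scrub messages with `ceph -w` before running `ceph pg repair` on any of them.
POOL=data
for pg in $(rados list-inconsistent-pg "$POOL" --format=json | jq -r '.[]'); do
    echo "Deep scrubbing inconsistent PG ${pg}"
    ceph pg deep-scrub "${pg}"
done
```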
2.2.2. Boost | 2.2.2. Boost The boost package contains a large number of free peer-reviewed portable C++ source libraries. These libraries are suitable for tasks such as portable file-system access and time or date abstraction, serialization, unit testing, thread creation and multi-process synchronization, parsing, graphing, regular expression manipulation, and many others. Installing the boost package will provide just enough libraries to satisfy link dependencies (that is, only shared library files). To make full use of all available libraries and header files for C++ development, you must install boost-devel as well. The boost package is actually a meta-package, containing many library sub-packages. These sub-packages can also be installed individually to provide finer inter-package dependency tracking. The meta-package does not include dependencies for packages for static linking or packages that depend on the underlying Message Passing Interface (MPI) support. MPI support is provided in two forms: one for the default Open MPI implementation (package boost-openmpi ) and another for the alternate MPICH2 implementation (package boost-mpich2 ). The selection of the underlying MPI library in use is up to the user and depends on specific hardware details and user preferences. Please note that these packages can be installed in parallel because installed files have unique directory locations. If static linkage cannot be avoided, the boost-static package will install the necessary static libraries. Both thread-enabled and single-threaded libraries are provided. 2.2.2.1. Additional Information The boost-doc package provides manuals and reference information in HTML form located in the following directory: /usr/share/doc/boost-doc- version /index.html . The main site for the development of Boost is http://boost.org . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/developer_guide/libraries.boost |
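As a quick orientation, the commands below show how the packages described above are typically consumed; the source file name and the choice of the regex library are assumptions for illustration.

```shell
# Install the headers and shared libraries needed for C++ development.
yum install boost-devel
# Build a program against one of the Boost libraries.
g++ -o regex_demo regex_demo.cpp -lboost_regex
# For MPI-enabled code, install one of the MPI variants instead, for example:
# yum install boost-openmpi
```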
2. More to Come | 2. More to Come The System Administrators Guide is part of Red Hat, Inc's growing commitment to provide useful and timely support to Red Hat Enterprise Linux users. As new tools and applications are released, this guide will be expanded to include them. 2.1. Send in Your Feedback If you find an error in the System Administrators Guide , or if you have thought of a way to make this manual better, we would love to hear from you! Please submit a report in Bugzilla ( http://bugzilla.redhat.com/bugzilla/ ) against the component rh-sag . Be sure to mention the manual's identifier: By mentioning this manual's identifier, we know exactly which version of the guide you have. If you have a suggestion for improving the documentation, try to be as specific as possible when describing it. If you have found an error, please include the section number and some of the surrounding text so we can find it easily. | [
"rh-sag"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/Introduction-More_to_Come |
6.3. Manually Setting the Entry Cache Size | 6.3. Manually Setting the Entry Cache Size The entry cache is used to store directory entries that are used during search and read operations. Setting the entry cache to a size that enables Directory Server to store all records provides the greatest performance benefit for search operations. If entry caching is not configured, Directory Server reads the entry from the id2entry.db database file and converts the DNs from the on-disk format to the in-memory format. Entries that are stored in the cache enable the server to skip the disk I/O and conversion steps. Note Instead of manually setting the entry cache size, Red Hat recommends the auto-sizing feature, which optimizes settings based on the hardware resources. For details, see Section 6.1.1, "Manually Re-enabling the Database and Entry Cache Auto-sizing" . 6.3.1. Manually Setting the Entry Cache Size Using the Command Line To manually set the entry cache size using the command line: Disable automatic cache tuning: Display the suffixes and their corresponding back end: This command displays the name of the back end database for each suffix. You require the suffix's database name in the next step. Set the entry cache size for the database: This command sets the entry cache to 2 gigabytes. Restart the Directory Server instance: 6.3.2. Manually Setting the Entry Cache Size Using the Web Console To manually set the entry cache size using the Web Console: Open the Directory Server user interface in the web console. For details, see the Logging Into Directory Server Using the Web Console section in the Red Hat Directory Server Administration Guide . Select the instance. On the Database tab, select Global Database Configuration . Disable Automatic Cache Tuning . Click Save Configuration . Click the Actions button, and select Restart Instance . Set the size of the entry cache in the Entry Cache Size (bytes) field. Click Save Configuration . Click the Actions button, and select Restart Instance . | [
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com backend config set --cache-autosize=0",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com suffix list dc=example,dc=com ( userroot )",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com backend suffix set --cache-memsize= 2147483648 userRoot",
"dsctl instance_name restart"
] | https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/performance_tuning_guide/tuning-entry-cache |
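After the restart, it can be useful to confirm that the new value is active. The following check is a sketch only; it assumes a default 389-ds backend named userRoot and the standard nsslapd-cachememsize attribute, which may differ in your deployment.

```shell
# Read the entry cache size back from the backend configuration entry.
ldapsearch -D "cn=Directory Manager" -W -H ldap://server.example.com \
    -b "cn=userRoot,cn=ldbm database,cn=plugins,cn=config" \
    -s base nsslapd-cachememsize
```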
23.20. Security Label | 23.20. Security Label The <seclabel> element allows control over the operation of the security drivers. There are three basic modes of operation: 'dynamic', where libvirt automatically generates a unique security label; 'static', where the application or administrator chooses the labels; and 'none', where confinement is disabled. With dynamic label generation, libvirt will always automatically relabel any resources associated with the virtual machine. With static label assignment, by default, the administrator or application must ensure labels are set correctly on any resources; however, automatic relabeling can be enabled if needed. If more than one security driver is used by libvirt, multiple seclabel tags can be used, one for each driver, and the security driver referenced by each tag can be defined using the model attribute. Valid input XML configurations for the top-level security label are: <seclabel type='dynamic' model='selinux'/> <seclabel type='dynamic' model='selinux'> <baselabel>system_u:system_r:my_svirt_t:s0</baselabel> </seclabel> <seclabel type='static' model='selinux' relabel='no'> <label>system_u:system_r:svirt_t:s0:c392,c662</label> </seclabel> <seclabel type='static' model='selinux' relabel='yes'> <label>system_u:system_r:svirt_t:s0:c392,c662</label> </seclabel> <seclabel type='none'/> Figure 23.86. Security label If no 'type' attribute is provided in the input XML, then the security driver default setting will be used, which may be either 'none' or 'dynamic' . If a <baselabel> is set but no 'type' is set, then the type is presumed to be 'dynamic' . When viewing the XML for a running guest virtual machine with automatic resource relabeling active, an additional XML element, imagelabel , will be included. This is an output-only element, so it is ignored in user-supplied XML documents. The following elements can be manipulated with the following values: type - Either static , dynamic or none to determine whether libvirt automatically generates a unique security label or not. model - A valid security model name, matching the currently activated security model. relabel - Either yes or no . This must always be yes if dynamic label assignment is used. With static label assignment it will default to no . <label> - If static labeling is used, this must specify the full security label to assign to the virtual domain. The format of the content depends on the security driver in use: SELinux : a SELinux context. AppArmor : an AppArmor profile. DAC : owner and group separated by colon. They can be defined either as user/group names or as UID/GID values. The driver will first try to parse these values as names, but a leading plus sign can be used to force the driver to parse them as UID or GID. <baselabel> - If dynamic labeling is used, this can optionally be used to specify the base security label. The format of the content depends on the security driver in use. <imagelabel> - This is an output only element, which shows the security label used on resources associated with the virtual domain. The format of the content depends on the security driver in use. When relabeling is in effect, it is also possible to fine-tune the labeling done for specific source file names, by either disabling the labeling (useful if the file exists on NFS or other file system that lacks security labeling) or requesting an alternate label (useful when a management application creates a special label to allow sharing of some, but not all, resources between domains).
When a seclabel element is attached to a specific path rather than to the top-level domain assignment, only the relabel attribute or the label sub-element is supported. | [
"<seclabel type='dynamic' model='selinux'/> <seclabel type='dynamic' model='selinux'> <baselabel>system_u:system_r:my_svirt_t:s0</baselabel> </seclabel> <seclabel type='static' model='selinux' relabel='no'> <label>system_u:system_r:svirt_t:s0:c392,c662</label> </seclabel> <seclabel type='static' model='selinux' relabel='yes'> <label>system_u:system_r:svirt_t:s0:c392,c662</label> </seclabel> <seclabel type='none'/>"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-manipulating_the_domain_xml-security_label |
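To see which labels libvirt actually applied, the running domain XML can be inspected. The sketch below is illustrative only; the domain name is an assumption.

```shell
# Show the security label elements of a running guest. With dynamic labeling,
# the output includes the generated <label> and the output-only <imagelabel>.
virsh dumpxml rhel7-guest | grep -A 3 '<seclabel'
```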
Introduction | Introduction Welcome to the Global File System Configuration and Administration document. This book provides information about installing, configuring, and maintaining Red Hat GFS (Red Hat Global File System). Red Hat GFS depends on the cluster infrastructure of Red Hat Cluster Suite. For information about Red Hat Cluster Suite refer to Red Hat Cluster Suite Overview and Configuring and Managing a Red Hat Cluster . HTML and PDF versions of all the official Red Hat Enterprise Linux manuals and release notes are available online at http://www.redhat.com/docs/ . 1. Audience This book is intended primarily for Linux system administrators who are familiar with the following activities: Linux system administration procedures, including kernel configuration Installation and configuration of shared storage networks, such as Fibre Channel SANs | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/global_file_system/ch-intro-GFS |
Chapter 4. Timestamp Functions | Chapter 4. Timestamp Functions Each timestamp function returns a value to indicate when a function is executed. These returned values can then be used to indicate when an event occurred, provide an ordering for events, or compute the amount of time elapsed between two time stamps. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/timestamp_stp |
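As a minimal illustration of computing elapsed time between two time stamps, the one-liner below uses gettimeofday_ms(); the 500 ms timer interval is an arbitrary assumption.

```shell
# Record a start timestamp, then print the elapsed time after a timer fires.
stap -e 'global start
probe begin { start = gettimeofday_ms() }
probe timer.ms(500) { printf("elapsed: %d ms\n", gettimeofday_ms() - start); exit() }'
```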
Chapter 14. Using Cruise Control to modify topic replication factor | Chapter 14. Using Cruise Control to modify topic replication factor Make requests to the /topic_configuration endpoint of the Cruise Control REST API to modify topic configurations, including the replication factor. Prerequisites You are logged in to Red Hat Enterprise Linux as the kafka user. You have configured Cruise Control . You have deployed the Cruise Control Metrics Reporter . Procedure Start the Cruise Control server. The server starts on port 9090 by default; optionally, specify a different port. cd /opt/cruise-control/ ./kafka-cruise-control-start.sh config/cruisecontrol.properties <port_number> To verify that Cruise Control is running, send a GET request to the /state endpoint of the Cruise Control server: curl -X GET 'http://<cc_host>:<cc_port>/kafkacruisecontrol/state' Run the bin/kafka-topics.sh command with the --describe option to check the current replication factor of the target topic: /opt/kafka/bin/kafka-topics.sh \ --bootstrap-server localhost:9092 \ --topic <topic_name> \ --describe Update the replication factor for the topic: curl -X POST 'http://<cc_host>:<cc_port>/kafkacruisecontrol/topic_configuration?topic=<topic_name>&replication_factor=<new_replication_factor>&dryrun=false' For example, curl -X POST 'localhost:9090/kafkacruisecontrol/topic_configuration?topic=topic1&replication_factor=3&dryrun=false' . Run the bin/kafka-topics.sh command with the --describe option, as before, to see the results of the change to the topic. | [
"cd /opt/cruise-control/ ./kafka-cruise-control-start.sh config/cruisecontrol.properties <port_number>",
"curl -X GET 'http://<cc_host>:<cc_port>/kafkacruisecontrol/state'",
"/opt/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic <topic_name> --describe",
"curl -X POST 'http://<cc_host>:<cc_port>/kafkacruisecontrol/topic_configuration?topic=<topic_name>&replication_factor=<new_replication_factor>&dryrun=false'"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/using_streams_for_apache_kafka_on_rhel_with_zookeeper/proc-cc-topic-replication-str |
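The individual commands above can be combined into a small wrapper for repeated use. This is a sketch only; the host, port, topic name, and target replication factor are assumptions.

```shell
#!/bin/bash
CC_HOST=localhost
CC_PORT=9090
TOPIC=topic1
NEW_RF=3
# Confirm the Cruise Control server is reachable.
curl -s -X GET "http://${CC_HOST}:${CC_PORT}/kafkacruisecontrol/state"
# Describe the topic before and after the change to compare replication factors.
/opt/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic "${TOPIC}" --describe
curl -s -X POST "http://${CC_HOST}:${CC_PORT}/kafkacruisecontrol/topic_configuration?topic=${TOPIC}&replication_factor=${NEW_RF}&dryrun=false"
/opt/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic "${TOPIC}" --describe
```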
4.7. Uninstalling a Replica | 4.7. Uninstalling a Replica See Section 2.4, "Uninstalling an IdM Server" . | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/replica-uninstall |
Chapter 11. Debezium logging | Chapter 11. Debezium logging Debezium has extensive logging built into its connectors, and you can change the logging configuration to control which of these log statements appear in the logs and where those logs are sent. Debezium (as well as Kafka, Kafka Connect, and Zookeeper) use the Log4j logging framework for Java. By default, the connectors produce a fair amount of useful information when they start up, but then produce very few logs when the connector is keeping up with the source databases. This is often sufficient when the connector is operating normally, but may not be enough when the connector is behaving unexpectedly. In such cases, you can change the logging level so that the connector generates much more verbose log messages describing what the connector is doing and what it is not doing. 11.1. Debezium logging concepts Before configuring logging, you should understand what Log4J loggers , log levels , and appenders are. Loggers Each log message produced by the application is sent to a specific logger (for example, io.debezium.connector.mysql ). Loggers are arranged in hierarchies. For example, the io.debezium.connector.mysql logger is the child of the io.debezium.connector logger, which is the child of the io.debezium logger. At the top of the hierarchy, the root logger defines the default logger configuration for all of the loggers beneath it. Log levels Every log message produced by the application also has a specific log level : ERROR - errors, exceptions, and other significant problems WARN - potential problems and issues INFO - status and general activity (usually low-volume) DEBUG - more detailed activity that would be useful in diagnosing unexpected behavior TRACE - very verbose and detailed activity (usually very high-volume) Appenders An appender is essentially a destination where log messages are written. Each appender controls the format of its log messages, giving you even more control over what the log messages look like. To configure logging, you specify the desired level for each logger and the appender(s) where those log messages should be written. Since loggers are hierarchical, the configuration for the root logger serves as a default for all of the loggers below it, although you can override any child (or descendant) logger. 11.2. Default Debezium logging configuration If you are running Debezium connectors in a Kafka Connect process, then Kafka Connect uses the Log4j configuration file (for example, /opt/kafka/config/connect-log4j.properties ) in the Kafka installation. By default, this file contains the following configuration: connect-log4j.properties log4j.rootLogger=INFO, stdout 1 log4j.appender.stdout=org.apache.log4j.ConsoleAppender 2 log4j.appender.stdout.layout=org.apache.log4j.PatternLayout 3 log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n 4 ... 1 1 1 1 1 1 1 1 1 1 1 1 1 1 The root logger, which defines the default logger configuration. By default, loggers include INFO , WARN , and ERROR messages. These log messages are written to the stdout appender. 2 2 2 2 2 2 2 2 1 2 2 2 2 2 2 The stdout appender writes log messages to the console (as opposed to a file). 3 3 3 3 3 3 3 2 3 3 3 3 The stdout appender uses a pattern matching algorithm to format the log messages. 4 4 4 4 4 4 4 3 4 4 4 4 The pattern for the stdout appender (see the Log4j documentation for details). Unless you configure other loggers, all of the loggers that Debezium uses inherit the rootLogger configuration. 11.3. 
Configuring Debezium logging By default, Debezium connectors write all INFO , WARN , and ERROR messages to the console. You can change the default logging configuration by using one of the following methods: Setting the logging level by configuring loggers Dynamically setting the logging level with the Kafka Connect REST API Setting the logging level by adding mapped diagnostic contexts Note There are other methods that you can use to configure Debezium logging with Log4j. For more information, search for tutorials about setting up and using appenders to send log messages to specific destinations. 11.3.1. Changing the Debezium logging level by configuring loggers The default Debezium logging level provides sufficient information to show whether a connector is healthy or not. However, if a connector is not healthy, you can change its logging level to troubleshoot the issue. In general, Debezium connectors send their log messages to loggers with names that match the fully-qualified name of the Java class that is generating the log message. Debezium uses packages to organize code with similar or related functions. This means that you can control all of the log messages for a specific class or for all of the classes within or under a specific package. Procedure Open the log4j.properties file. Configure a logger for the connector. This example configures loggers for the MySQL connector and the database schema history implementation used by the connector, and sets them to log DEBUG level messages: log4j.properties ... log4j.logger.io.debezium.connector.mysql=DEBUG, stdout 1 log4j.logger.io.debezium.relational.history=DEBUG, stdout 2 log4j.additivity.io.debezium.connector.mysql=false 3 log4j.additivity.io.debezium.storage.kafka.history=false 4 ... 1 Configures the logger named io.debezium.connector.mysql to send DEBUG , INFO , WARN , and ERROR messages to the stdout appender. 2 Configures the logger named io.debezium.relational.history to send DEBUG , INFO , WARN , and ERROR messages to the stdout appender. 3 4 Turns off additivity , which results in log messages not being sent to the appenders of parent loggers (this can prevent seeing duplicate log messages when using multiple appenders). If necessary, change the logging level for a specific subset of the classes within the connector. Increasing the logging level for the entire connector increases the log verbosity, which can make it difficult to understand what is happening. In these cases, you can change the logging level just for the subset of classes that are related to the issue that you are troubleshooting. Set the connector's logging level to either DEBUG or TRACE . Review the connector's log messages. Find the log messages that are related to the issue that you are troubleshooting. The end of each log message shows the name of the Java class that produced the message. Set the connector's logging level back to INFO . Configure a logger for each Java class that you identified. For example, consider a scenario in which you are unsure why the MySQL connector is skipping some events when it is processing the binlog. Rather than turn on DEBUG or TRACE logging for the entire connector, you can keep the connector's logging level at INFO and then configure DEBUG or TRACE on just the class that is reading the binlog: log4j.properties ... 
log4j.logger.io.debezium.connector.mysql=INFO, stdout log4j.logger.io.debezium.connector.mysql.BinlogReader=DEBUG, stdout log4j.logger.io.debezium.relational.history=INFO, stdout log4j.additivity.io.debezium.connector.mysql=false log4j.additivity.io.debezium.storage.kafka.history=false log4j.additivity.io.debezium.connector.mysql.BinlogReader=false ... 11.3.2. Dynamically changing the Debezium logging level with the Kafka Connect API You can use the Kafka Connect REST API to set logging levels for a connector dynamically at runtime. Unlike log level changes that you set in log4j.properties , changes that you make via the API take effect immediately, and do not require you to restart the worker. The log level setting that you specify in the API applies only to the worker at the endpoint that receives the request. The log levels of other workers in the cluster remain unchanged. The specified level is not persisted after the worker restarts. To make persistent changes to the logging level, set the log level in log4j.properties by configuring loggers or adding mapped diagnostic contexts . Procedure Set the log level by sending a PUT request to the admin/loggers endpoint that specifies the following information: The package for which you want to change the log level. The log level that you want to set. curl -s -X PUT -H "Content-Type:application/json" http://localhost:8083/admin/loggers/io.debezium.connector. <connector_package> -d '{"level": " <log_level> "}' For example, to log debug information for a Debezium MySQL connector, send the following request to Kafka Connect: curl -s -X PUT -H "Content-Type:application/json" http://localhost:8083/admin/loggers/io.debezium.connector.mysql -d '{"level": "DEBUG"}' 11.3.3. Changing the Debezium logging level by adding mapped diagnostic contexts Most Debezium connectors (and the Kafka Connect workers) use multiple threads to perform different activities. This can make it difficult to look at a log file and find only those log messages for a particular logical activity. To make the log messages easier to find, Debezium provides several mapped diagnostic contexts (MDC) that provide additional information for each thread. Debezium provides the following MDC properties: dbz.connectorType A short alias for the type of connector. For example, MySql , Mongo , Postgres , and so on. All threads associated with the same type of connector use the same value, so you can use this to find all log messages produced by a given type of connector. dbz.connectorName The name of the connector or database server as defined in the connector's configuration. For example, products , serverA , and so on. All threads associated with a specific connector instance use the same value, so you can find all of the log messages produced by a specific connector instance. dbz.connectorContext A short name for an activity running as a separate thread within the connector's task. For example, main , binlog , snapshot , and so on. In some cases, when a connector assigns threads to specific resources (such as a table or collection), the name of that resource could be used instead. Each thread associated with a connector would use a distinct value, so you can find all of the log messages associated with this particular activity. To enable MDC for a connector, you configure an appender in the log4j.properties file. Procedure Open the log4j.properties file. Configure an appender to use any of the supported Debezium MDC properties.
In the following example, the stdout appender is configured to use these MDC properties: log4j.properties ... log4j.appender.stdout.layout.ConversionPattern=%d{ISO8601} %-5p %X{dbz.connectorType}|%X{dbz.connectorName}|%X{dbz.connectorContext} %m [%c]%n ... The configuration in the preceding example produces log messages similar to the ones in the following output: ... 2017-02-07 20:49:37,692 INFO MySQL|dbserver1|snapshot Starting snapshot for jdbc:mysql://mysql:3306/?useInformationSchema=true&nullCatalogMeansCurrent=false&useSSL=false&useUnicode=true&characterEncoding=UTF-8&characterSetResults=UTF-8&zeroDateTimeBehavior=convertToNull with user 'debezium' [io.debezium.connector.mysql.SnapshotReader] 2017-02-07 20:49:37,696 INFO MySQL|dbserver1|snapshot Snapshot is using user 'debezium' with these MySQL grants: [io.debezium.connector.mysql.SnapshotReader] 2017-02-07 20:49:37,697 INFO MySQL|dbserver1|snapshot GRANT SELECT, RELOAD, SHOW DATABASES, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'debezium'@'%' [io.debezium.connector.mysql.SnapshotReader] ... Each line in the log includes the connector type (for example, MySQL ), the name of the connector (for example, dbserver1 ), and the activity of the thread (for example, snapshot ). 11.4. Debezium logging on OpenShift If you are using Debezium on OpenShift, you can use the Kafka Connect loggers to configure the Debezium loggers and logging levels. For more information about configuring logging properties in a Kafka Connect schema, see Using AMQ Streams on OpenShift . | [
"log4j.rootLogger=INFO, stdout 1 log4j.appender.stdout=org.apache.log4j.ConsoleAppender 2 log4j.appender.stdout.layout=org.apache.log4j.PatternLayout 3 log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n 4",
"log4j.logger.io.debezium.connector.mysql=DEBUG, stdout 1 log4j.logger.io.debezium.relational.history=DEBUG, stdout 2 log4j.additivity.io.debezium.connector.mysql=false 3 log4j.additivity.io.debezium.storage.kafka.history=false 4",
"log4j.logger.io.debezium.connector.mysql=INFO, stdout log4j.logger.io.debezium.connector.mysql.BinlogReader=DEBUG, stdout log4j.logger.io.debezium.relational.history=INFO, stdout log4j.additivity.io.debezium.connector.mysql=false log4j.additivity.io.debezium.storage.kafka.history=false log4j.additivity.io.debezium.connector.mysql.BinlogReader=false",
"curl -s -X PUT -H \"Content-Type:application/json\" http://localhost:8083/admin/loggers/io.debezium.connector. <connector_package> -d '{\"level\": \" <log_level> \"}'",
"curl -s -X PUT -H \"Content-Type:application/json\" http://localhost:8083/admin/loggers/io.debezium.connector.mysql -d '{\"level\": \"DEBUG\"}'",
"log4j.appender.stdout.layout.ConversionPattern=%d{ISO8601} %-5p %X{dbz.connectorType}|%X{dbz.connectorName}|%X{dbz.connectorContext} %m [%c]%n",
"2017-02-07 20:49:37,692 INFO MySQL|dbserver1|snapshot Starting snapshot for jdbc:mysql://mysql:3306/?useInformationSchema=true&nullCatalogMeansCurrent=false&useSSL=false&useUnicode=true&characterEncoding=UTF-8&characterSetResults=UTF-8&zeroDateTimeBehavior=convertToNull with user 'debezium' [io.debezium.connector.mysql.SnapshotReader] 2017-02-07 20:49:37,696 INFO MySQL|dbserver1|snapshot Snapshot is using user 'debezium' with these MySQL grants: [io.debezium.connector.mysql.SnapshotReader] 2017-02-07 20:49:37,697 INFO MySQL|dbserver1|snapshot GRANT SELECT, RELOAD, SHOW DATABASES, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'debezium'@'%' [io.debezium.connector.mysql.SnapshotReader]"
] | https://docs.redhat.com/en/documentation/red_hat_integration/2023.q4/html/debezium_user_guide/debezium-logging |
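When changing levels through the REST API, it can be handy to check the current levels first. The snippet below is a sketch; it assumes a Kafka Connect worker listening on localhost:8083 and a Kafka Connect version whose admin/loggers endpoint supports GET requests.

```shell
# List the currently configured loggers and their levels on this worker.
curl -s -X GET http://localhost:8083/admin/loggers
# Raise the MySQL connector package to DEBUG, then confirm the change.
curl -s -X PUT -H "Content-Type:application/json" \
    http://localhost:8083/admin/loggers/io.debezium.connector.mysql \
    -d '{"level": "DEBUG"}'
curl -s -X GET http://localhost:8083/admin/loggers/io.debezium.connector.mysql
```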
22.3. Language Selection | 22.3. Language Selection Use the arrow keys on your keyboard to select a language to use during the installation process (refer to Figure 22.3, "Language Selection" ). With your selected language highlighted, press the Tab key to move to the OK button and press the Enter key to confirm your choice. You can automate this choice in the parameter file with the parameter lang= (refer to Section 26.5, "Loader Parameters" ) or with the kickstart command lang (refer to Section 28.4, "Automating the Installation with Kickstart" ). The language you select here will become the default language for the operating system once it is installed. Selecting the appropriate language also helps target your time zone configuration later in the installation. The installation program tries to define the appropriate time zone based on what you specify on this screen. To add support for additional languages, customize the installation at the package selection stage. For more information, refer to Section 23.17.2, " Customizing the Software Selection " . Figure 22.3. Language Selection Once you select the appropriate language, click to continue. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s1-langselection-s390 |
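For unattended installations, the language choice can be scripted instead of selected interactively. The lines below are illustrative assumptions about a kickstart file and a parameter file; adjust the locale and file names to your environment.

```shell
# Append the language selection to a kickstart file used for automation.
cat >> ks.cfg <<'EOF'
lang en_US.UTF-8
EOF
# Alternatively, set the loader parameter in the parameter file:
echo 'lang=en_US.UTF-8' >> generic.prm
```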
2.7.8. Road Warrior Access VPN Using Libreswan and XAUTH with X.509 | 2.7.8. Road Warrior Access VPN Using Libreswan and XAUTH with X.509 Libreswan offers a method to natively assign IP address and DNS information to roaming VPN clients as the connection is established by using the XAUTH IPsec extension. Extended authentication (XAUTH) can be deployed using PSK or X.509 certificates. Deploying using X.509 is more secure. Client certificates can be revoked by a certificate revocation list or by Online Certificate Status Protocol ( OCSP ). With X.509 certificates, individual clients cannot impersonate the server. With a PSK, also called Group Password, this is theoretically possible. XAUTH requires the VPN client to additionally identify itself with a user name and password. For one-time passwords (OTP), such as Google Authenticator or RSA SecurID tokens, the one-time token is appended to the user password. There are three possible backends for XAUTH: xauthby=pam This uses the configuration in /etc/pam.d/pluto to authenticate the user. Pluggable Authentication Modules (PAM) can in turn be configured to use various backends. It can use the system account user-password scheme, an LDAP directory, a RADIUS server or a custom password authentication module. See the Using Pluggable Authentication Modules (PAM) chapter for more information. xauthby=file This uses the configuration file /etc/ipsec.d/passwd (not to be confused with /etc/ipsec.d/nsspassword ). The format of this file is similar to the Apache .htpasswd file and the Apache htpasswd command can be used to create entries in this file. However, after the user name and password, a third column is required with the connection name of the IPsec connection used; for example, when using a conn remoteusers to offer VPN to remote users, a password file entry should look as follows: user1:$apr1$MIwQ3DHb$1I69LzTnZhnCT2DPQmAOK.:remoteusers NOTE: when using the htpasswd command, the connection name has to be manually added after the user:password part on each line. xauthby=alwaysok The server will always pretend the XAUTH user and password combination was correct. The client still has to specify a user name and a password, although the server ignores these. This should only be used when users are already identified by X.509 certificates, or when testing the VPN without needing an XAUTH backend. An example server configuration with X.509 certificates: When xauthfail is set to soft, instead of hard, authentication failures are ignored, and the VPN is set up as if the user authenticated properly. A custom updown script can be used to check for the environment variable XAUTH_FAILED . Such users can then be redirected, for example, using iptables DNAT, to a " walled garden " where they can contact the administrator or renew a paid subscription to the service. VPN clients use the modecfgdomain value and the DNS entries to redirect queries for the specified domain to the specified nameservers. This allows roaming users to access internal-only resources using the internal DNS names. The modecfgdns options contain a comma-separated list of internal DNS servers for the client to use for DNS resolution. Optionally, to send a banner text to VPN clients, use the modecfgbanner option. If leftsubnet is not 0.0.0.0/0 , split tunneling configuration requests are sent automatically to the client. For example, when using leftsubnet=10.0.0.0/8 , the VPN client would only send traffic for 10.0.0.0/8 through the VPN.
On the client, the user has to enter a password; where that password comes from depends on the backend used. For example: xauthby=file The administrator generates the password and stores it in the /etc/ipsec.d/passwd file. xauthby=pam The password is obtained at the location specified in the PAM configuration in the /etc/pam.d/pluto file. xauthby=alwaysok The password is not checked and always accepted. Use this option for testing purposes or if you want to ensure compatibility with xauth-only clients. For more information about XAUTH, see the Extended Authentication within ISAKMP/Oakley (XAUTH) Internet-Draft document. | [
"conn xauth-rsa auto=add authby=rsasig pfs=no rekey=no left=ServerIP leftcert=vpn.example.com #leftid=%fromcert leftid=vpn.example.com leftsendcert=always leftsubnet=0.0.0.0/0 rightaddresspool=10.234.123.2-10.234.123.254 right=%any rightrsasigkey=%cert modecfgdns1=1.2.3.4 modecfgdns2=8.8.8.8 modecfgdomain=example.com modecfgbanner=\"Authorized access is allowed\" leftxauthserver=yes rightxauthclient=yes leftmodecfgserver=yes rightmodecfgclient=yes modecfgpull=yes xauthby=pam dpddelay=30 dpdtimeout=120 dpdaction=clear ike_frag=yes # for walled-garden on xauth failure # xauthfail=soft #leftupdown=/custom/_updown"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/road_warrior_application_using_libreswan_and_xauth_with_x.509 |
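For the xauthby=file backend, the password file entries can be generated rather than written by hand. The commands below are a sketch; the user name, connection name, and the use of sed to append the third column are assumptions, since htpasswd itself does not add the connection name.

```shell
# Create the XAUTH password file with an entry for user1
# (-c creates/overwrites the file, so use it only for the first entry).
htpasswd -c /etc/ipsec.d/passwd user1
# Append the IPsec connection name as the required third column.
sed -i 's|^user1:.*$|&:remoteusers|' /etc/ipsec.d/passwd
```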
Chapter 5. Using admission controller enforcement | Chapter 5. Using admission controller enforcement Red Hat Advanced Cluster Security for Kubernetes works with Kubernetes admission controllers and OpenShift Container Platform admission plugins to allow you to enforce security policies before Kubernetes or OpenShift Container Platform creates workloads, for example, deployments, daemon sets or jobs. The RHACS admission controller prevents users from creating workloads that violate policies you configure in RHACS. Beginning from the RHACS version 3.0.41, you can also configure the admission controller to prevent updates to workloads that violate policies. RHACS uses the ValidatingAdmissionWebhook controller to verify that the resource being provisioned complies with the specified security policies. To handle this, RHACS creates a ValidatingWebhookConfiguration which contains multiple webhook rules. When the Kubernetes or OpenShift Container Platform API server receives a request that matches one of the webhook rules, the API server sends an AdmissionReview request to RHACS. RHACS then accepts or rejects the request based on the configured security policies. Note To use admission controller enforcement on OpenShift Container Platform, you need the Red Hat Advanced Cluster Security for Kubernetes version 3.0.49 or newer. 5.1. Understanding admission controller enforcement If you intend to use admission controller enforcement, consider the following: API latency : Using admission controller enforcement increases Kubernetes or OpenShift Container Platform API latency because it involves additional API validation requests. Many standard Kubernetes libraries, such as fabric8, have short Kubernetes or OpenShift Container Platform API timeouts by default. Also, consider API timeouts in any custom automation you might be using. Image scanning : You can choose whether the admission controller scans images while reviewing requests by setting the Contact Image Scanners option in the cluster configuration panel. If you enable this setting, Red Hat Advanced Cluster Security for Kubernetes contacts the image scanners if the scan or image signature verification results are not already available, which adds considerable latency. If you disable this setting, the enforcement decision only considers image scan criteria if cached scan and signature verification results are available. You can use admission controller enforcement for: Options in the pod securityContext . Deployment configurations. Image components and vulnerabilities. You cannot use admission controller enforcement for: Any runtime behavior, such as processes. Any policies based on port exposure. The admission controller might fail if there are connectivity issues between the Kubernetes or OpenShift Container Platform API server and RHACS Sensor. To resolve this issue, delete the ValidatingWebhookConfiguration object as described in the disabling admission controller enforcement section. If you have deploy-time enforcement enabled for a policy and you enable the admission controller, RHACS attempts to block deployments that violate the policy. If a noncompliant deployment is not rejected by the admission controller, for example, in case of a timeout, RHACS still applies other deploy-time enforcement mechanisms, such as scaling to zero replicas. 5.2. Enabling admission controller enforcement You can enable admission controller enforcement from the Clusters view when you install Sensor or edit an existing cluster configuration. 
Procedure In the RHACS portal, go to Platform Configuration Clusters . Select an existing cluster from the list or secure a new cluster by selecting Secure a cluster Legacy installation method . If you are securing a new cluster, in the Static Configuration section of the cluster configuration panel, enter the details for your cluster. Red Hat recommends that you only turn on the Configure Admission Controller Webhook to listen on Object Creates toggle if you are planning to use the admission controller to enforce on object create events. Red Hat recommends that you only turn on the Configure Admission Controller Webhook to listen on Object Updates toggle if you are planning to use the admission controller to enforce on update events. Red Hat recommends that you only turn on the Enable Admission Controller Webhook to listen on exec and port-forward events toggle if you are planning to use the admission controller to enforce on pod execution and pod port forwards events. Configure the following options in the Dynamic Configuration section: Enforce on Object Creates : This toggle controls the behavior of the admission control service. You must have the Configure Admission Controller Webhook to listen on Object Creates toggle turned on for this to work. Enforce on Object Updates : This toggle controls the behavior of the admission control service. You must have the Configure Admission Controller Webhook to listen on Object Updates toggle turned on for this to work. Select . In the Download files section, select Download YAML files and keys . Note When enabling admission controller for an existing cluster, follow this guidance: If you make any changes in the Static Configuration section, you must download the YAML files and redeploy the Sensor. If you make any changes in the Dynamic Configuration section, you can skip downloading the files and deployment, as RHACS automatically synchronizes the Sensor and applies the changes. Select Finish . Verification After you provision a new cluster with the generated YAML, run the following command to verify if admission controller enforcement is configured correctly: USD oc get ValidatingWebhookConfiguration 1 1 If you use Kubernetes, enter kubectl instead of oc . Example output NAME CREATED AT stackrox 2019-09-24T06:07:34Z 5.3. Bypassing admission controller enforcement To bypass the admission controller, add the admission.stackrox.io/break-glass annotation to your configuration YAML. Bypassing the admission controller triggers a policy violation which includes deployment details. Red Hat recommends providing an issue-tracker link or some other reference as the value of this annotation so that others can understand why you bypassed the admission controller. 5.4. Disabling admission controller enforcement You can disable admission controller enforcement from the Clusters view on the Red Hat Advanced Cluster Security for Kubernetes (RHACS) portal. Procedure In the RHACS portal, select Platform Configuration Clusters . Select an existing cluster from the list. Turn off the Enforce on Object Creates and Enforce on Object Updates toggles in the Dynamic Configuration section. Select . Select Finish . 5.4.1. Disabling associated policies You can turn off the enforcement on relevant policies, which in turn instructs the admission controller to skip enforcements. Procedure In the RHACS portal, go to Platform Configuration Policy Management . Disable enforcement on the default policies: In the policies view, locate the Kubernetes Actions: Exec into Pod policy. 
Click the overflow menu, , and then select Disable policy . In the policies view, locate the Kubernetes Actions: Port Forward to Pod policy. Click the overflow menu, , and then select Disable policy . Disable enforcement on any other custom policies that you have created by using criteria from the default Kubernetes Actions: Port Forward to Pod and Kubernetes Actions: Exec into Pod policies. 5.4.2. Disabling the webhook You can disable admission controller enforcement from the Clusters view in the RHACS portal. Important If you disable the admission controller by turning off the webhook, you must redeploy the Sensor bundle. Procedure In the RHACS portal, go to Platform Configuration Clusters . Select an existing cluster from the list. Turn off the Enable Admission Controller Webhook to listen on exec and port-forward events toggle in the Static Configuration section. Select to continue with Sensor setup. Click Download YAML file and keys . From a system that has access to the monitored cluster, extract and run the sensor script: USD unzip -d sensor sensor-<cluster_name>.zip USD ./sensor/sensor.sh Note If you get a warning that you do not have the required permissions to deploy the sensor, follow the on-screen instructions, or contact your cluster administrator for help. After the sensor is deployed, it contacts Central and provides cluster information. Return to the RHACS portal and check if the deployment is successful. If it is successful, a green checkmark appears under section #2. If you do not see a green checkmark, use the following command to check for problems: On OpenShift Container Platform: USD oc get pod -n stackrox -w On Kubernetes: USD kubectl get pod -n stackrox -w Select Finish . Note When you disable the admission controller, RHACS does not delete the ValidatingWebhookConfiguration parameter. However, instead of checking requests for violations, it accepts all AdmissionReview requests. To remove the ValidatingWebhookConfiguration object, run the following command in the secured cluster: On OpenShift Container Platform: USD oc delete ValidatingWebhookConfiguration/stackrox On Kubernetes: USD kubectl delete ValidatingWebhookConfiguration/stackrox 5.5. ValidatingWebhookConfiguration YAML file changes With Red Hat Advanced Cluster Security for Kubernetes you can enforce security policies on: Object creation Object update Pod execution Pod port forward If Central or Sensor is unavailable The admission controller requires an initial configuration from Sensor to work. Kubernetes or OpenShift Container Platform saves this configuration, and it remains accessible even if all admission control service replicas are rescheduled onto other nodes. If this initial configuration exists, the admission controller enforces all configured deploy-time policies. If Sensor or Central becomes unavailable later: you will not be able to run image scans, or query information about cached image scans. However, admission controller enforcement still functions based on the available information gathered before the timeout expires, even if the gathered information is incomplete. you will not be able to disable the admission controller from the RHACS portal or modify enforcement for an existing policy as the changes will not get propagated to the admission control service. 
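For the bypass annotation described in section 5.3 above, the following is a minimal sketch of how it might be applied to a Deployment; the workload name, ticket reference, and image are purely illustrative:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: emergency-fix                                  # hypothetical workload name
  annotations:
    admission.stackrox.io/break-glass: "JIRA-1234"     # reference explaining why enforcement is bypassed
spec:
  replicas: 1
  selector:
    matchLabels:
      app: emergency-fix
  template:
    metadata:
      labels:
        app: emergency-fix
    spec:
      containers:
      - name: app
        image: registry.example.com/emergency-fix:1.0  # hypothetical image

Applying a manifest like this skips admission enforcement for that request and, as noted above, records a policy violation that includes the deployment details.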
Note If you need to disable admission control enforcement, you can delete the validating webhook configuration by running the following command: On OpenShift Container Platform: USD oc delete ValidatingWebhookConfiguration/stackrox On Kubernetes: USD kubectl delete ValidatingWebhookConfiguration/stackrox Make the admission controller more reliable Red Hat recommends that you schedule the admission control service on the control plane and not on worker nodes. The deployment YAML file includes a soft preference for running on the control plane, however it is not enforced. By default, the admission control service runs 3 replicas. To increase reliability, you can increase the replicas by running the following command: USD oc -n stackrox scale deploy/admission-control --replicas=<number_of_replicas> 1 1 If you use Kubernetes, enter kubectl instead of oc . Using with the roxctl CLI You can use the following options when you generate a Sensor deployment YAML file: --admission-controller-listen-on-updates : If you use this option, Red Hat Advanced Cluster Security for Kubernetes generates a Sensor bundle with a ValidatingWebhookConfiguration pre-configured to receive update events from the Kubernetes or OpenShift Container Platform API server. --admission-controller-enforce-on-updates : If you use this option, Red Hat Advanced Cluster Security for Kubernetes configures Central such that the admission controller also enforces security policies object updates. Both these options are optional, and are false by default. | [
"oc get ValidatingWebhookConfiguration 1",
"NAME CREATED AT stackrox 2019-09-24T06:07:34Z",
"unzip -d sensor sensor-<cluster_name>.zip",
"./sensor/sensor.sh",
"oc get pod -n stackrox -w",
"kubectl get pod -n stackrox -w",
"oc delete ValidatingWebhookConfiguration/stackrox",
"kubectl delete ValidatingWebhookConfiguration/stackrox",
"oc delete ValidatingWebhookConfiguration/stackrox",
"kubectl delete ValidatingWebhookConfiguration/stackrox",
"oc -n stackrox scale deploy/admission-control --replicas=<number_of_replicas> 1"
] | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.7/html/operating/use-admission-controller-enforcement |
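As a sketch of the roxctl options listed at the end of this chapter, the two admission-controller flags could be combined when generating a Sensor bundle roughly as follows. The sensor generate openshift subcommand is assumed here, and the cluster name, Central endpoint, and authentication options that a real invocation also requires are omitted:

roxctl sensor generate openshift \
  --admission-controller-listen-on-updates \
  --admission-controller-enforce-on-updates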
Chapter 6. Optional: Installing and modifying Operators | Chapter 6. Optional: Installing and modifying Operators The Assisted Installer can install select Operators for you with default configurations in either the UI or API. If you require advanced options, install the desired Operators after installing the cluster. The Assisted Installer monitors the installation of the selected operators as part of the cluster installation and reports their status. If one or more Operators encounter errors during installation, the Assisted Installer reports that the cluster installation has completed with a warning that one or more operators failed to install. See the sections below for the Operators you can set when installing or modifying a cluster definition using the Assisted Installer UI or API. For full instructions on installing an OpenShift Container Platform cluster, see Installing with the Assisted Installer UI or Installing with the Assisted Installer API respectively. 6.1. Installing Operators When installng Operators using the Assisted Installer UI, select the Operators on the Operators page of the wizard. When installing Operators using the Assisted Installer API, use the POST method in the /v2/clusters endpoint. 6.1.1. Installing OpenShift Virtualization When you configure the cluster, you can enable OpenShift Virtualization . Note Currently, OpenShift Virtualization is not supported on IBM zSystems and IBM Power. If enabled, the Assisted Installer: Validates that your environment meets the prerequisites outlined below. Configures virtual machine storage as follows: For single-node OpenShift clusters version 4.10 and newer, the Assisted Installer configures the hostpath provisioner . For single-node OpenShift clusters on earlier versions, the Assisted Installer configures the Local Storage Operator . For multi-node clusters, the Assisted Installer configures OpenShift Data Foundation. Prerequisites Supported by Red Hat Enterprise Linux (RHEL) 8 Support for Intel 64 or AMD64 CPU extensions Intel Virtualization Technology or AMD-V hardware virtualization extensions enabled NX (no execute) flag enabled Procedure If you are using the Assisted Installer UI: In the Operators step of the wizard, enable the Install OpenShift Virtualization checkbox. If you are using the Assisted Installer API: When registering a new cluster, add the "olm_operators: [{"name": "cnv"}]" statement. Note CNV stands for container-native virtualization. For example: USD curl -s -X POST https://api.openshift.com/api/assisted-install/v2/clusters \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d "USD(jq --null-input \ --slurpfile pull_secret ~/Downloads/pull-secret.txt ' { "name": "testcluster", "openshift_version": "4.11", "cpu_architecture" : "x86_64", "base_dns_domain": "example.com", "olm_operators: [{"name": "cnv"}]" "pull_secret": USDpull_secret[0] | tojson } ')" | jq '.id' Additional resources For more details about preparing your cluster for OpenShift Virtualization, see the OpenShift Documentation . 6.1.2. Installing Multicluster Engine (MCE) When you configure the cluster, you can enable the Multicluster Engine (MCE) Operator. The Multicluster Engine (MCE) Operator allows you to install additional clusters from the cluster that you are currently installing. Prerequisites OpenShift version 4.10 and above An additional 4 CPU cores and 16GB of RAM for multi-node OpenShift clusters. An additional 8 CPU cores and 32GB RAM for single-node OpenShift clusters. 
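For the OpenShift Virtualization host prerequisites above (hardware virtualization extensions and the NX flag), a quick host-side check on RHEL is sketched below; a non-zero count means the corresponding CPU flag is present:

grep -E -c '(vmx|svm)' /proc/cpuinfo    # Intel VT-x or AMD-V extensions
grep -c -w nx /proc/cpuinfo             # NX (no execute) flag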
Storage considerations Prior to installation, you must consider the storage required for managing the clusters to be deployed from the Multicluster Engine. You can choose one of the following scenarios for automating storage: Install OpenShift Data Foundation (ODF) on a multi-node cluster. ODF is the recommended storage for clusters, but requires an additional subscription. For details, see Installing OpenShift Data Foundation in this chapter. Install Logical Volume Management Storage (LVMS) on a single-node OpenShift (SNO) cluster. Install Multicluster Engine on a multi-node cluster without configuring storage. Then configure a storage of your choice and enable the Central Infrastructure Management (CIM) service following the installation. For details, see Additional Resources in this chapter. Procedure If you are using the Assisted Installer UI: In the Operators step of the wizard, enable the Install multicluster engine checkbox. If you are using the Assisted Installer API: When registering a new cluster, use the "olm_operators: [{"name": "mce"}]" statement, for example: USD curl -s -X POST https://api.openshift.com/api/assisted-install/v2/clusters \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d "USD(jq --null-input \ --slurpfile pull_secret ~/Downloads/pull-secret.txt ' { "name": "testcluster", "openshift_version": "4.11", "cpu_architecture" : "x86_64" "base_dns_domain": "example.com", "olm_operators: [{"name": "mce"}]", "pull_secret": USDpull_secret[0] | tojson } ')" | jq '.id' Post-installation steps To use the Assisted Installer technology with the Multicluster Engine, enable the Central Infrastructure Management service. For details, see Enabling the Central Infrastructure Management service . To deploy OpenShift Container Platform clusters using hosted control planes, configure the hosted control planes. For details, see Hosted Control Planes . Additional resources For Advanced Cluster Management documentation related to the Multicluster Engine (MCE) Operator, see Red Hat Advanced Cluster Mangement for Kubernetes For OpenShift Container Platform documentation related to the Multicluster Engine (MCE) Operator, see Multicluster Engine for Kubernetes Operator . 6.1.3. Installing OpenShift Data Foundation When you configure the cluster, you can enable OpenShift Data Foundation . If enabled, the Assisted Installer: Validates that your environment meets the prerequisites outlined below. It does not validate that the disk devices have been reformatted, which you must verify before starting. Configures the storage to use all available disks. When you enable OpenShift Data Foundation, the Assisted Installer creates a StorageCluster resource that specifies all available disks for use with OpenShift Data Foundation. If a different configuration is desired, modify the configuration after installing the cluster or install the Operator after the cluster is installed. Prerequisites The cluster is a three-node OpenShift cluster or has at least 3 worker nodes. Each host has at least one non-installation disk of at least 25GB. The disk devices you use must be empty. There should be no Physical Volumes (PVs), Volume Groups (VGs), or Logical Volumes (LVs) remaining on the disks. Each host has 6 CPU cores for three-node OpenShift or 8 CPU cores for standard clusters, in addition to other CPU requirements. Each host has 19 GiB RAM, in addition to other RAM requirements. 
Each host has 2 CPU cores and 5GiB RAM per storage disk in addition to other CPU and RAM requirements. You have assigned control plane or worker roles for each host (and not auto-assign). Procedure If you are using the Assisted Installer UI: In the Operators step of the wizard, enable the Install OpenShift Data Foundation checkbox. If you are using the Assisted Installer API: When registering a new cluster, add the "olm_operators: [{"name": "odf"}]" statement. For example: USD curl -s -X POST https://api.openshift.com/api/assisted-install/v2/clusters \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d "USD(jq --null-input \ --slurpfile pull_secret ~/Downloads/pull-secret.txt ' { "name": "testcluster", "openshift_version": "4.11", "cpu_architecture" : "x86_64", "base_dns_domain": "example.com", "olm_operators: [{"name": "odf"}]", "pull_secret": USDpull_secret[0] | tojson } ')" | jq '.id' Additional resources For more details about OpenShift Data Foundation, see the OpenShift Documentation . 6.1.4. Installing Logical Volume Manager Storage When you configure the cluster, you can enable the Logical Volume Manager Storage (LVMS) Operator on single-node OpenShift clusters. Installing the LVMS Operator allows you to dynamically provision local storage. Prerequisites A single-node OpenShift cluster installed with version 4.11 or later At least one non-installation disk One additional CPU core and 400 MB of RAM (1200 MB of RAM for versions earlier than 4.13) Procedure If you are using the Assisted Installer UI: In the Operators step of the wizard, enable the Install Logical Volume Manager Storage checkbox. If you are using the Assisted Installer API: When registering a new cluster, use the olm_operators: [{"name": "lvm"}] statement. For example: USD curl -s -X POST https://api.openshift.com/api/assisted-install/v2/clusters \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d "USD(jq --null-input \ --slurpfile pull_secret ~/Downloads/pull-secret.txt ' { "name": "testcluster", "openshift_version": "4.14", "cpu_architecture" : "x86_64", "base_dns_domain": "example.com", "olm_operators: [{"name": "lvm"}]" "pull_secret": USDpull_secret[0] | tojson } ')" | jq '.id' Additional resources For OpenShift Container Platform documentation related to LVMS, see Persistent storage using LVM Storage . 6.2. Modifying Operators In the Assisted Installer, you can add or remove Operators for a cluster resource that has already been registered as part of a installation step. This is only possible before you start the OpenShift Container Platform installation. To modify the defined Operators: If you are using the Assisted Installer UI, navigate to the Operators page of the wizard and modify your selection. For details, see Installing Operators in this section. If you are using the Assisted Installer API, set the required Operator definition using the PATCH method for the /v2/clusters/{cluster_id} endpoint. Prerequisites You have created a new cluster resource. Procedure Refresh the API token: USD source refresh-token Identify the CLUSTER_ID variable by listing the existing clusters, as follows: USD curl -s https://api.openshift.com/api/assisted-install/v2/clusters -H "Authorization: Bearer USD{API_TOKEN}" | jq '[ .[] | { "name": .name, "id": .id } ]' Sample output [ { "name": "lvmtest", "id": "475358f9-ed3a-442f-ab9e-48fd68bc8188" 1 }, { "name": "mcetest", "id": "b5259f97-be09-430e-b5eb-d78420ee509a" } ] Note 1 The id value is the <cluster_id> . 
Assign the returned <cluster_id> to the CLUSTER_ID variable and export it: USD export CLUSTER_ID=<cluster_id> Update the cluster with the new Operators: USD curl https://api.openshift.com/api/assisted-install/v2/clusters/USD{CLUSTER_ID} \ -X PATCH \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d ' { "olm_operators": [{"name": "mce"}, {"name": "cnv"}], 1 } ' | jq '.id' Note 1 Indicates the Operators to be installed. Valid values include mce , cnv , lvm , and odf . To remove a previously installed Operator, exclude it from the list of values. To remove all previously installed Operators, type "olm_operators": [] . Sample output { <various cluster properties>, "monitored_operators": [ { "cluster_id": "b5259f97-be09-430e-b5eb-d78420ee509a", "name": "console", "operator_type": "builtin", "status_updated_at": "0001-01-01T00:00:00.000Z", "timeout_seconds": 3600 }, { "cluster_id": "b5259f97-be09-430e-b5eb-d78420ee509a", "name": "cvo", "operator_type": "builtin", "status_updated_at": "0001-01-01T00:00:00.000Z", "timeout_seconds": 3600 }, { "cluster_id": "b5259f97-be09-430e-b5eb-d78420ee509a", "name": "mce", "namespace": "multicluster-engine", "operator_type": "olm", "status_updated_at": "0001-01-01T00:00:00.000Z", "subscription_name": "multicluster-engine", "timeout_seconds": 3600 }, { "cluster_id": "b5259f97-be09-430e-b5eb-d78420ee509a", "name": "cnv", "namespace": "openshift-cnv", "operator_type": "olm", "status_updated_at": "0001-01-01T00:00:00.000Z", "subscription_name": "hco-operatorhub", "timeout_seconds": 3600 }, { "cluster_id": "b5259f97-be09-430e-b5eb-d78420ee509a", "name": "lvm", "namespace": "openshift-local-storage", "operator_type": "olm", "status_updated_at": "0001-01-01T00:00:00.000Z", "subscription_name": "local-storage-operator", "timeout_seconds": 4200 } ], <more cluster properties> Note The output is the description of the new cluster state. The monitored_operators property in the output contains Operators of two types: "operator_type": "builtin" : Operators of this type are an integral part of OpenShift Container Platform. "operator_type": "olm" : Operators of this type are added either manually by a user or automatically due to dependencies. In the example, the lso Operator was added automatically because the cnv Operator requires it. | [
"curl -s -X POST https://api.openshift.com/api/assisted-install/v2/clusters -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d \"USD(jq --null-input --slurpfile pull_secret ~/Downloads/pull-secret.txt ' { \"name\": \"testcluster\", \"openshift_version\": \"4.11\", \"cpu_architecture\" : \"x86_64\", \"base_dns_domain\": \"example.com\", \"olm_operators: [{\"name\": \"cnv\"}]\" \"pull_secret\": USDpull_secret[0] | tojson } ')\" | jq '.id'",
"curl -s -X POST https://api.openshift.com/api/assisted-install/v2/clusters -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d \"USD(jq --null-input --slurpfile pull_secret ~/Downloads/pull-secret.txt ' { \"name\": \"testcluster\", \"openshift_version\": \"4.11\", \"cpu_architecture\" : \"x86_64\" \"base_dns_domain\": \"example.com\", \"olm_operators: [{\"name\": \"mce\"}]\", \"pull_secret\": USDpull_secret[0] | tojson } ')\" | jq '.id'",
"curl -s -X POST https://api.openshift.com/api/assisted-install/v2/clusters -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d \"USD(jq --null-input --slurpfile pull_secret ~/Downloads/pull-secret.txt ' { \"name\": \"testcluster\", \"openshift_version\": \"4.11\", \"cpu_architecture\" : \"x86_64\", \"base_dns_domain\": \"example.com\", \"olm_operators: [{\"name\": \"odf\"}]\", \"pull_secret\": USDpull_secret[0] | tojson } ')\" | jq '.id'",
"curl -s -X POST https://api.openshift.com/api/assisted-install/v2/clusters -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d \"USD(jq --null-input --slurpfile pull_secret ~/Downloads/pull-secret.txt ' { \"name\": \"testcluster\", \"openshift_version\": \"4.14\", \"cpu_architecture\" : \"x86_64\", \"base_dns_domain\": \"example.com\", \"olm_operators: [{\"name\": \"lvm\"}]\" \"pull_secret\": USDpull_secret[0] | tojson } ')\" | jq '.id'",
"source refresh-token",
"curl -s https://api.openshift.com/api/assisted-install/v2/clusters -H \"Authorization: Bearer USD{API_TOKEN}\" | jq '[ .[] | { \"name\": .name, \"id\": .id } ]'",
"[ { \"name\": \"lvmtest\", \"id\": \"475358f9-ed3a-442f-ab9e-48fd68bc8188\" 1 }, { \"name\": \"mcetest\", \"id\": \"b5259f97-be09-430e-b5eb-d78420ee509a\" } ]",
"export CLUSTER_ID=<cluster_id>",
"curl https://api.openshift.com/api/assisted-install/v2/clusters/USD{CLUSTER_ID} -X PATCH -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d ' { \"olm_operators\": [{\"name\": \"mce\"}, {\"name\": \"cnv\"}], 1 } ' | jq '.id'",
"{ <various cluster properties>, \"monitored_operators\": [ { \"cluster_id\": \"b5259f97-be09-430e-b5eb-d78420ee509a\", \"name\": \"console\", \"operator_type\": \"builtin\", \"status_updated_at\": \"0001-01-01T00:00:00.000Z\", \"timeout_seconds\": 3600 }, { \"cluster_id\": \"b5259f97-be09-430e-b5eb-d78420ee509a\", \"name\": \"cvo\", \"operator_type\": \"builtin\", \"status_updated_at\": \"0001-01-01T00:00:00.000Z\", \"timeout_seconds\": 3600 }, { \"cluster_id\": \"b5259f97-be09-430e-b5eb-d78420ee509a\", \"name\": \"mce\", \"namespace\": \"multicluster-engine\", \"operator_type\": \"olm\", \"status_updated_at\": \"0001-01-01T00:00:00.000Z\", \"subscription_name\": \"multicluster-engine\", \"timeout_seconds\": 3600 }, { \"cluster_id\": \"b5259f97-be09-430e-b5eb-d78420ee509a\", \"name\": \"cnv\", \"namespace\": \"openshift-cnv\", \"operator_type\": \"olm\", \"status_updated_at\": \"0001-01-01T00:00:00.000Z\", \"subscription_name\": \"hco-operatorhub\", \"timeout_seconds\": 3600 }, { \"cluster_id\": \"b5259f97-be09-430e-b5eb-d78420ee509a\", \"name\": \"lvm\", \"namespace\": \"openshift-local-storage\", \"operator_type\": \"olm\", \"status_updated_at\": \"0001-01-01T00:00:00.000Z\", \"subscription_name\": \"local-storage-operator\", \"timeout_seconds\": 4200 } ], <more cluster properties>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/assisted_installer_for_openshift_container_platform/assembly_installing-operators |
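As a follow-up to the Operator modification workflow above, the same cluster resource can be read back to see which Operators the Assisted Installer is monitoring. This is a sketch: it assumes CLUSTER_ID and API_TOKEN are exported as in the preceding examples and that the single-cluster GET endpoint is available in your Assisted Installer API version:

curl -s "https://api.openshift.com/api/assisted-install/v2/clusters/${CLUSTER_ID}" \
  -H "Authorization: Bearer ${API_TOKEN}" \
  | jq '.monitored_operators[] | select(.operator_type == "olm") | .name'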
5.5.4. Enabling Storage Access | 5.5.4. Enabling Storage Access Once a mass storage device has been properly partitioned and a file system written to it, the storage is ready for general use. On some operating systems this is all that is required: as soon as the operating system detects the new mass storage device, the system administrator can format it and access it immediately with no additional effort. Other operating systems require an additional step. This step -- often referred to as mounting -- tells the operating system how the storage may be accessed. Mounting storage is normally done with a special utility program or command, and requires that the mass storage device (and possibly the partition as well) be explicitly identified. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s2-storage-usable-access
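To make the mounting step described above concrete on a Linux system, a minimal illustration follows. The device name /dev/sdb1 and the mount point /mnt/data are hypothetical, and an ext4 file system is assumed to already exist on the partition:

mkdir -p /mnt/data
mount -t ext4 /dev/sdb1 /mnt/data
df -h /mnt/data    # confirm the new file system is mounted and report its capacity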
30.6. Modifying sudo Rules | 30.6. Modifying sudo Rules Modifying sudo Rules in the Web UI Under the Policy tab, click Sudo Sudo Rules . Click the name of the rule to display its configuration page. Change the settings as required. On some configuration pages, the Save button is available at the top of the page. On these pages, click the button to confirm the changes. The sudo rule configuration page includes several configuration areas: The General area In this area, you can modify the rule's description and sudo order . The sudo order field accepts integers and defines the order in which IdM evaluates the rules. The rule with the highest sudo order value is evaluated first. The Options area In this area, you can add sudoers options to the rule. Click Add above the options list. Figure 30.5. Adding a sudo Option Enter the sudoers option. For example, to specify that sudo will not prompt the user to authenticate, add the !authenticate option: Figure 30.6. Entering a sudoers Option For more information on sudoers options, see the sudoers (5) man page. Click Add . The Who area In this area, you can select the users or user groups to which the sudo rule will be applied. These users will be entitled to use sudo as defined in the rule. To specify that all system users will be able to use sudo as defined in the rule, select Anyone . To apply the rule to specific users or groups only, select Specified Users and Groups and then follow these steps: Click Add above the users or user groups list. Figure 30.7. Adding Users to a sudo Rule Select the users or user groups to add to the rule, and click the > arrow button to move them to the Prospective column. To add an external user, specify the user in the External field, and then click the > arrow button. Figure 30.8. Selecting Users for a sudo Rule Click Add . The Access This Host area In this area, you can select the hosts on which the sudo rule will be in effect. These are the hosts where the users will be granted sudo permissions. To specify that the rule will be in effect on all hosts, select Anyone . To apply the rule to specific hosts or host groups only, select Specified Hosts and Groups and then follow these steps: Click Add above the hosts list. Figure 30.9. Adding Hosts to a sudo Rule Select the hosts or host groups to include with the rule, and click the > arrow button to move them to the Prospective column. To add an external host, specify the host in the External field, and then click the > arrow button. Figure 30.10. Selecting Hosts for a sudo Rule Click Add . The Run Commands area In this area, you can select the commands to be included in the sudo rule. You can specify that users will be either allowed or denied to use specific commands. To specify that users will be allowed to use any command with sudo , select Any Command . To associate the rule with specific commands or command groups, select Specified Commands and Groups and then follow these steps: Click one of the Add buttons to add a command or a command group. To specify allowed commands or command groups, use the Allow area. To specify denied commands or command groups, use the Deny area. Figure 30.11. Adding Commands to a sudo Rule Select the commands or command groups to include with the rule, and click the > arrow button to move them to the Prospective column. Figure 30.12. Selecting Commands for a sudo Rule Click Add . The As Whom area In this area, you can configure the sudo rule to run the given commands as a specific, non-root user. 
Note that if you add a group of RunAs users, UIDs of the members of the group will be used to run the command. If you add a RunAs group, the GID of the group will be used to run the command. To specify that the rule will be run as any user on the system, select Anyone . To specify that the rule will be run as any group on the system, select Any Group . Click Add above the users list. Figure 30.13. Configuring sudo Rules to Execute Commands as a Specific User Select the required users or groups, and use the > arrow button to move them to the Prospective column. To add an external entity, specify it in the External field, and then click the > arrow button. Figure 30.14. Selecting Users for the Command Click Add . Modifying sudo Rules from the Command Line The IdM command-line utilities allow you to configure several sudo rule areas: General sudo rules management To change the general configuration for a sudo rule, use the ipa sudorule-mod command. The most common options accepted by the command are: The --desc option to change the sudo rule description. For example: The --order option to define the order of the specified rule. For example: Options to specify a category of entities: --usercat (user category), --hostcat (host category), --cmdcat (command category), --runasusercat (run-as user category), and --runasgroupcat (run-as group category). These options only accept the all value that associates the rule with all users, hosts, commands, run-as users, or run-as groups. For example, to specify that all users will be able to use sudo as defined in the sudo_rule rule: Note that if the rule is already associated with a specific entity, you must remove it before defining the corresponding all category. For example, if sudo_rule was previously associated with a specific user using the ipa sudorule-add-user command, you must first use the ipa sudorule-remove-user command to remove the user. For more details and a complete list of options accepted by ipa sudorule-mod , run the command with the --help option added. Managing sudo options To add a sudoers option, use the ipa sudorule-add-option command. For example, to specify that users using sudo based on the files-commands rule will not be required to authenticate, add the !authenticate option: For more information on sudoers options, see the sudoers (5) man page. To remove a sudoers option, use the ipa sudorule-remove-option command. For example: Managing who is granted the permission to use sudo To specify an individual user, add the --users option to the ipa sudorule-add-user command. To specify a user group, add the --groups option to ipa sudorule-add-user . For example, to add user and user_group to the files-commands rule: To remove an individual user or group, use the ipa sudorule-remove-user . For example, to remove a user: Managing where the users are granted the sudo permissions To specify a host, add the --hosts option to the ipa sudorule-add-host command. To specify a host group, add the --hostgroups option to ipa sudorule-add-host . For example, to add example.com and host_group to the files-commands rule: To remove a host or host group, use the ipa sudorule-remove-host command. For example: Managing what commands can be used with sudo You can specify that users will be either allowed or denied to use specific commands. To specify an allowed command or command group, add the --sudocmds or --sudocmdgroups option to the ipa sudorule-add-allow-command . 
To specify a denied command or command group, add the --sudocmds or --sudocmdgroups option to the ipa sudorule-add-deny-command command. For example, to add the /usr/bin/less command and the files command group as allowed to the files-commands rule: To remove a command or command group from a rule, use the ipa sudorule-remove-allow-command or ipa sudorule-remove-deny-command commands. For example: Note that the --sudocmds option only accepts commands added to IdM, as described in Section 30.4.1, "Adding sudo Commands" . Managing as whom the sudo commands are run To use the UIDs of an individual user or users in a group as the identity under which the commands are run, use the --users or --groups options with the ipa sudorule-add-runasuser command. To use the GID of a user group as the identity for the commands, use the ipa sudorule-add-runasgroup --groups command. If you specify no user or group, sudo commands will be run as root. For example, to specify that the identity of user will be used to execute the commands in the sudo rule: For more information on the ipa sudorule-* commands, see the output of the ipa help sudorule command or run a particular command with the --help option added. Example 30.1. Adding and Modifying a New sudo Rule from the Command Line To allow a specific user group to use sudo with any command on selected servers: Obtain a Kerberos ticket for the admin user or any other user allowed to manage sudo rules. Add a new sudo rule to IdM. Define the who : specify the group of users who will be entitled to use the sudo rule. Define the where : specify the group of hosts where the users will be granted the sudo permissions. Define the what : to allow the users to run any sudo command, add the all command category to the rule. To let the sudo commands be executed as root, do not specify any run-as users or groups. Add the !authenticate sudoers option to specify that the users will not be required to authenticate when using the sudo command. Display the new sudo rule configuration to verify it is correct. | [
"ipa sudorule-mod sudo_rule_name --desc=\" sudo_rule_description \"",
"ipa sudorule-mod sudo_rule_name --order= 3",
"ipa sudorule-mod sudo_rule --usercat=all",
"ipa sudorule-add-option files-commands Sudo Option: !authenticate --------------------------------------------------------- Added option \"!authenticate\" to Sudo Rule \"files-commands\" ---------------------------------------------------------",
"ipa sudorule-remove-option files-commands Sudo Option: authenticate ------------------------------------------------------------- Removed option \"authenticate\" from Sudo Rule \"files-commands\" -------------------------------------------------------------",
"ipa sudorule-add-user files-commands --users=user --groups=user_group ------------------------- Number of members added 2 -------------------------",
"ipa sudorule-remove-user files-commands [member user]: user [member group]: --------------------------- Number of members removed 1 ---------------------------",
"ipa sudorule-add-host files-commands --hosts=example.com --hostgroups=host_group ------------------------- Number of members added 2 -------------------------",
"ipa sudorule-remove-host files-commands [member host]: example.com [member host group]: --------------------------- Number of members removed 1 ---------------------------",
"ipa sudorule-add-allow-command files-commands --sudocmds=/usr/bin/less --sudocmdgroups=files ------------------------- Number of members added 2 -------------------------",
"ipa sudorule-remove-allow-command files-commands [member sudo command]: /usr/bin/less [member sudo command group]: --------------------------- Number of members removed 1 ---------------------------",
"ipa sudorule-add-runasuser files-commands --users=user RunAs Users: user",
"kinit admin Password for [email protected]:",
"ipa sudorule-add new_sudo_rule --desc=\"Rule for user_group\" --------------------------------- Added Sudo Rule \"new_sudo_rule\" --------------------------------- Rule name: new_sudo_rule Description: Rule for user_group Enabled: TRUE",
"ipa sudorule-add-user new_sudo_rule --groups=user_group Rule name: new_sudo_rule Description: Rule for user_group Enabled: TRUE User Groups: user_group ------------------------- Number of members added 1 -------------------------",
"ipa sudorule-add-host new_sudo_rule --hostgroups=host_group Rule name: new_sudo_rule Description: Rule for user_group Enabled: TRUE User Groups: user_group Host Groups: host_group ------------------------- Number of members added 1 -------------------------",
"ipa sudorule-mod new_sudo_rule --cmdcat=all ------------------------------ Modified Sudo Rule \"new_sudo_rule\" ------------------------------ Rule name: new_sudo_rule Description: Rule for user_group Enabled: TRUE Command category: all User Groups: user_group Host Groups: host_group",
"ipa sudorule-add-option new_sudo_rule Sudo Option: !authenticate ----------------------------------------------------- Added option \"!authenticate\" to Sudo Rule \"new_sudo_rule\" ----------------------------------------------------- Rule name: new_sudo_rule Description: Rule for user_group Enabled: TRUE Command category: all User Groups: user_group Host Groups: host_group Sudo Option: !authenticate",
"ipa sudorule-show new_sudo_rule Rule name: new_sudo_rule Description: Rule for user_group Enabled: TRUE Command category: all User Groups: user_group Host Groups: host_group Sudo Option: !authenticate"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/modify-sudo-rules |
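As a follow-up to the sudo rule examples above: on an enrolled IdM client, the effective permissions can be checked once SSSD has picked up the new rule. This is a sketch; sss_cache is assumed to be available (it is provided by the sssd-common package on RHEL) and is only needed to force a refresh during testing:

sss_cache -E    # expire the SSSD caches so the new sudo rule is fetched from IdM
sudo -l         # list the sudo commands the current user is allowed to run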
Storage | Storage OpenShift Container Platform 4.7 Configuring and managing storage in OpenShift Container Platform Red Hat OpenShift Documentation Team | [
"df -h /var/lib",
"Filesystem Size Used Avail Use% Mounted on /dev/sda1 69G 32G 34G 49% /",
"oc delete pv <pv-name>",
"oc get pv",
"NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE pvc-b6efd8da-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim1 manual 10s pvc-b95650f8-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim2 manual 6s pvc-bb3ca71d-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim3 manual 3s",
"oc patch pv <your-pv-name> -p '{\"spec\":{\"persistentVolumeReclaimPolicy\":\"Retain\"}}'",
"oc get pv",
"NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE pvc-b6efd8da-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim1 manual 10s pvc-b95650f8-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim2 manual 6s pvc-bb3ca71d-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Retain Bound default/claim3 manual 3s",
"apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 1 spec: capacity: storage: 5Gi 2 accessModes: - ReadWriteOnce 3 persistentVolumeReclaimPolicy: Retain 4 status:",
"oc get pv <pv-claim>",
"apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce mountOptions: 1 - nfsvers=4.1 nfs: path: /tmp server: 172.17.0.2 persistentVolumeReclaimPolicy: Retain claimRef: name: claim1 namespace: default",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: myclaim 1 spec: accessModes: - ReadWriteOnce 2 resources: requests: storage: 8Gi 3 storageClassName: gold 4 status:",
"kind: Pod apiVersion: v1 metadata: name: mypod spec: containers: - name: myfrontend image: dockerfile/nginx volumeMounts: - mountPath: \"/var/www/html\" 1 name: mypd 2 volumes: - name: mypd persistentVolumeClaim: claimName: myclaim 3",
"apiVersion: v1 kind: PersistentVolume metadata: name: block-pv spec: capacity: storage: 10Gi accessModes: - ReadWriteOnce volumeMode: Block 1 persistentVolumeReclaimPolicy: Retain fc: targetWWNs: [\"50060e801049cfd1\"] lun: 0 readOnly: false",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: block-pvc spec: accessModes: - ReadWriteOnce volumeMode: Block 1 resources: requests: storage: 10Gi",
"apiVersion: v1 kind: Pod metadata: name: pod-with-block-volume spec: containers: - name: fc-container image: fedora:26 command: [\"/bin/sh\", \"-c\"] args: [ \"tail -f /dev/null\" ] volumeDevices: 1 - name: data devicePath: /dev/xvda 2 volumes: - name: data persistentVolumeClaim: claimName: block-pvc 3",
"oc create secret generic <secret-name> --from-literal=azurestorageaccountname=<storage-account> \\ 1 --from-literal=azurestorageaccountkey=<storage-account-key> 2",
"apiVersion: \"v1\" kind: \"PersistentVolume\" metadata: name: \"pv0001\" 1 spec: capacity: storage: \"5Gi\" 2 accessModes: - \"ReadWriteOnce\" storageClassName: azure-file-sc azureFile: secretName: <secret-name> 3 shareName: share-1 4 readOnly: false",
"apiVersion: \"v1\" kind: \"PersistentVolumeClaim\" metadata: name: \"claim1\" 1 spec: accessModes: - \"ReadWriteOnce\" resources: requests: storage: \"5Gi\" 2 storageClassName: azure-file-sc 3 volumeName: \"pv0001\" 4",
"apiVersion: v1 kind: Pod metadata: name: pod-name 1 spec: containers: volumeMounts: - mountPath: \"/data\" 2 name: azure-file-share volumes: - name: azure-file-share persistentVolumeClaim: claimName: claim1 3",
"apiVersion: \"v1\" kind: \"PersistentVolume\" metadata: name: \"pv0001\" 1 spec: capacity: storage: \"5Gi\" 2 accessModes: - \"ReadWriteOnce\" cinder: 3 fsType: \"ext3\" 4 volumeID: \"f37a03aa-6212-4c62-a805-9ce139fab180\" 5",
"oc create -f cinder-persistentvolume.yaml",
"oc create serviceaccount <service_account>",
"oc adm policy add-scc-to-user <new_scc> -z <service_account> -n <project>",
"apiVersion: v1 kind: ReplicationController metadata: name: frontend-1 spec: replicas: 1 1 selector: 2 name: frontend template: 3 metadata: labels: 4 name: frontend 5 spec: containers: - image: openshift/hello-openshift name: helloworld ports: - containerPort: 8080 protocol: TCP restartPolicy: Always serviceAccountName: <service_account> 6 securityContext: fsGroup: 7777 7",
"apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce fc: wwids: [scsi-3600508b400105e210000900000490000] 1 targetWWNs: ['500a0981891b8dc5', '500a0981991b8dc5'] 2 lun: 2 3 fsType: ext4",
"{ \"fooServer\": \"192.168.0.1:1234\", 1 \"fooVolumeName\": \"bar\", \"kubernetes.io/fsType\": \"ext4\", 2 \"kubernetes.io/readwrite\": \"ro\", 3 \"kubernetes.io/secret/<key name>\": \"<key value>\", 4 \"kubernetes.io/secret/<another key name>\": \"<another key value>\", }",
"{ \"status\": \"<Success/Failure/Not supported>\", \"message\": \"<Reason for success/failure>\" }",
"apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 1 spec: capacity: storage: 1Gi 2 accessModes: - ReadWriteOnce flexVolume: driver: openshift.com/foo 3 fsType: \"ext4\" 4 secretRef: foo-secret 5 readOnly: true 6 options: 7 fooServer: 192.168.0.1:1234 fooVolumeName: bar",
"\"fsType\":\"<FS type>\", \"readwrite\":\"<rw>\", \"secret/key1\":\"<secret1>\" \"secret/keyN\":\"<secretN>\"",
"apiVersion: v1 kind: Pod metadata: name: test-host-mount spec: containers: - image: registry.access.redhat.com/ubi8/ubi name: test-container command: ['sh', '-c', 'sleep 3600'] volumeMounts: - mountPath: /host name: host-slash volumes: - name: host-slash hostPath: path: / type: ''",
"apiVersion: v1 kind: PersistentVolume metadata: name: task-pv-volume 1 labels: type: local spec: storageClassName: manual 2 capacity: storage: 5Gi accessModes: - ReadWriteOnce 3 persistentVolumeReclaimPolicy: Retain hostPath: path: \"/mnt/data\" 4",
"oc create -f pv.yaml",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: task-pvc-volume spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi storageClassName: manual",
"oc create -f pvc.yaml",
"apiVersion: v1 kind: Pod metadata: name: pod-name 1 spec: containers: securityContext: privileged: true 2 volumeMounts: - mountPath: /data 3 name: hostpath-privileged securityContext: {} volumes: - name: hostpath-privileged persistentVolumeClaim: claimName: task-pvc-volume 4",
"apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.16.154.81:3260 iqn: iqn.2014-12.example.server:storage.target00 lun: 0 fsType: 'ext4'",
"apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.0.0.1:3260 iqn: iqn.2016-04.test.com:storage.target00 lun: 0 fsType: ext4 chapAuthDiscovery: true 1 chapAuthSession: true 2 secretRef: name: chap-secret 3",
"apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.0.0.1:3260 portals: ['10.0.2.16:3260', '10.0.2.17:3260', '10.0.2.18:3260'] 1 iqn: iqn.2016-04.test.com:storage.target00 lun: 0 fsType: ext4 readOnly: false",
"apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.0.0.1:3260 portals: ['10.0.2.16:3260', '10.0.2.17:3260', '10.0.2.18:3260'] iqn: iqn.2016-04.test.com:storage.target00 lun: 0 initiatorName: iqn.2016-04.test.com:custom.iqn 1 fsType: ext4 readOnly: false",
"oc adm new-project openshift-local-storage",
"oc annotate project openshift-local-storage openshift.io/node-selector=''",
"OC_VERSION=USD(oc version -o yaml | grep openshiftVersion | grep -o '[0-9]*[.][0-9]*' | head -1)",
"apiVersion: operators.coreos.com/v1alpha2 kind: OperatorGroup metadata: name: local-operator-group namespace: openshift-local-storage spec: targetNamespaces: - openshift-local-storage --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: local-storage-operator namespace: openshift-local-storage spec: channel: \"USD{OC_VERSION}\" installPlanApproval: Automatic 1 name: local-storage-operator source: redhat-operators sourceNamespace: openshift-marketplace",
"oc apply -f openshift-local-storage.yaml",
"oc -n openshift-local-storage get pods",
"NAME READY STATUS RESTARTS AGE local-storage-operator-746bf599c9-vlt5t 1/1 Running 0 19m",
"oc get csvs -n openshift-local-storage",
"NAME DISPLAY VERSION REPLACES PHASE local-storage-operator.4.2.26-202003230335 Local Storage 4.2.26-202003230335 Succeeded",
"apiVersion: \"local.storage.openshift.io/v1\" kind: \"LocalVolume\" metadata: name: \"local-disks\" namespace: \"openshift-local-storage\" 1 spec: nodeSelector: 2 nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-140-183 - ip-10-0-158-139 - ip-10-0-164-33 storageClassDevices: - storageClassName: \"local-sc\" 3 volumeMode: Filesystem 4 fsType: xfs 5 devicePaths: 6 - /path/to/device 7",
"apiVersion: \"local.storage.openshift.io/v1\" kind: \"LocalVolume\" metadata: name: \"local-disks\" namespace: \"openshift-local-storage\" 1 spec: nodeSelector: 2 nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-136-143 - ip-10-0-140-255 - ip-10-0-144-180 storageClassDevices: - storageClassName: \"localblock-sc\" 3 volumeMode: Block 4 devicePaths: 5 - /path/to/device 6",
"oc create -f <local-volume>.yaml",
"oc get all -n openshift-local-storage",
"NAME READY STATUS RESTARTS AGE pod/local-disks-local-provisioner-h97hj 1/1 Running 0 46m pod/local-disks-local-provisioner-j4mnn 1/1 Running 0 46m pod/local-disks-local-provisioner-kbdnx 1/1 Running 0 46m pod/local-disks-local-diskmaker-ldldw 1/1 Running 0 46m pod/local-disks-local-diskmaker-lvrv4 1/1 Running 0 46m pod/local-disks-local-diskmaker-phxdq 1/1 Running 0 46m pod/local-storage-operator-54564d9988-vxvhx 1/1 Running 0 47m NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/local-storage-operator ClusterIP 172.30.49.90 <none> 60000/TCP 47m NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/local-disks-local-provisioner 3 3 3 3 3 <none> 46m daemonset.apps/local-disks-local-diskmaker 3 3 3 3 3 <none> 46m NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/local-storage-operator 1/1 1 1 47m NAME DESIRED CURRENT READY AGE replicaset.apps/local-storage-operator-54564d9988 1 1 1 47m",
"oc get pv",
"NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE local-pv-1cec77cf 100Gi RWO Delete Available local-sc 88m local-pv-2ef7cd2a 100Gi RWO Delete Available local-sc 82m local-pv-3fa1c73 100Gi RWO Delete Available local-sc 48m",
"apiVersion: v1 kind: PersistentVolume metadata: name: example-pv-filesystem spec: capacity: storage: 100Gi volumeMode: Filesystem 1 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete storageClassName: local-storage 2 local: path: /dev/xvdf 3 nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - example-node",
"apiVersion: v1 kind: PersistentVolume metadata: name: example-pv-block spec: capacity: storage: 100Gi volumeMode: Block 1 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete storageClassName: local-storage 2 local: path: /dev/xvdf 3 nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - example-node",
"oc create -f <example-pv>.yaml",
"oc get pv",
"NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE example-pv-filesystem 100Gi RWO Delete Available local-storage 3m47s example-pv1 1Gi RWO Delete Bound local-storage/pvc1 local-storage 12h example-pv2 1Gi RWO Delete Bound local-storage/pvc2 local-storage 12h example-pv3 1Gi RWO Delete Bound local-storage/pvc3 local-storage 12h",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: local-pvc-name 1 spec: accessModes: - ReadWriteOnce volumeMode: Filesystem 2 resources: requests: storage: 100Gi 3 storageClassName: local-sc 4",
"oc create -f <local-pvc>.yaml",
"apiVersion: v1 kind: Pod spec: containers: volumeMounts: - name: local-disks 1 mountPath: /data 2 volumes: - name: localpvc persistentVolumeClaim: claimName: local-pvc-name 3",
"oc create -f <local-pod>.yaml",
"apiVersion: local.storage.openshift.io/v1alpha1 kind: LocalVolumeSet metadata: name: example-autodetect spec: nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - worker-0 - worker-1 storageClassName: example-storageclass 1 volumeMode: Filesystem fsType: ext4 maxDeviceCount: 10 deviceInclusionSpec: deviceTypes: 2 - disk - part deviceMechanicalProperties: - NonRotational minSize: 10G maxSize: 100G models: - SAMSUNG - Crucial_CT525MX3 vendors: - ATA - ST2000LM",
"oc apply -f local-volume-set.yaml",
"oc get pv",
"NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE local-pv-1cec77cf 100Gi RWO Delete Available example-storageclass 88m local-pv-2ef7cd2a 100Gi RWO Delete Available example-storageclass 82m local-pv-3fa1c73 100Gi RWO Delete Available example-storageclass 48m",
"apiVersion: \"local.storage.openshift.io/v1\" kind: \"LocalVolume\" metadata: name: \"local-disks\" namespace: \"openshift-local-storage\" spec: tolerations: - key: localstorage 1 operator: Equal 2 value: \"localstorage\" 3 storageClassDevices: - storageClassName: \"localblock-sc\" volumeMode: Block 4 devicePaths: 5 - /dev/xvdg",
"spec: tolerations: - key: node-role.kubernetes.io/master operator: Exists",
"oc edit localvolume <name> -n openshift-local-storage",
"oc delete pv <pv-name>",
"oc debug node/<node-name>",
"chroot /host",
"cd /mnt/openshift-local-storage/<sc-name> 1",
"rm <symlink>",
"oc delete localvolume --all --all-namespaces oc delete localvolumeset --all --all-namespaces oc delete localvolumediscovery --all --all-namespaces",
"oc delete pv <pv-name>",
"oc delete project openshift-local-storage",
"apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 1 spec: capacity: storage: 5Gi 2 accessModes: - ReadWriteOnce 3 nfs: 4 path: /tmp 5 server: 172.17.0.2 6 persistentVolumeReclaimPolicy: Retain 7",
"oc get pv",
"NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON AGE pv0001 <none> 5Gi RWO Available 31s",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: nfs-claim1 spec: accessModes: - ReadWriteOnce 1 resources: requests: storage: 5Gi 2 volumeName: pv0001 storageClassName: \"\"",
"oc get pvc",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE nfs-claim1 Bound pv0001 5Gi RWO 2m",
"ls -lZ /opt/nfs -d",
"drwxrws---. nfsnobody 5555 unconfined_u:object_r:usr_t:s0 /opt/nfs",
"id nfsnobody",
"uid=65534(nfsnobody) gid=65534(nfsnobody) groups=65534(nfsnobody)",
"spec: containers: - name: securityContext: 1 supplementalGroups: [5555] 2",
"spec: containers: 1 - name: securityContext: runAsUser: 65534 2",
"setsebool -P virt_use_nfs 1",
"/<example_fs> *(rw,root_squash)",
"iptables -I INPUT 1 -p tcp --dport 2049 -j ACCEPT",
"iptables -I INPUT 1 -p tcp --dport 2049 -j ACCEPT",
"iptables -I INPUT 1 -p tcp --dport 20048 -j ACCEPT",
"iptables -I INPUT 1 -p tcp --dport 111 -j ACCEPT",
"apiVersion: v1 kind: PersistentVolume metadata: name: nfs1 spec: capacity: storage: 1Mi accessModes: - ReadWriteMany nfs: server: 192.168.1.1 path: \"/\"",
"apiVersion: v1 kind: PersistentVolume metadata: name: nfs2 spec: capacity: storage: 1Mi accessModes: - ReadWriteMany nfs: server: 192.168.1.1 path: \"/\"",
"echo 'Y' > /sys/module/nfsd/parameters/nfs4_disable_idmapping",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: pvc 1 spec: accessModes: - ReadWriteOnce 2 resources: requests: storage: 1Gi 3",
"oc create -f pvc.yaml",
"vmkfstools -c <size> /vmfs/volumes/<datastore-name>/volumes/<disk-name>.vmdk",
"shell vmware-vdiskmanager -c -t 0 -s <size> -a lsilogic <disk-name>.vmdk",
"apiVersion: v1 kind: PersistentVolume metadata: name: pv1 1 spec: capacity: storage: 1Gi 2 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain vsphereVolume: 3 volumePath: \"[datastore1] volumes/myDisk\" 4 fsType: ext4 5",
"oc create -f pv1.yaml",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc1 1 spec: accessModes: - ReadWriteOnce 2 resources: requests: storage: \"1Gi\" 3 volumeName: pv1 4",
"oc create -f pvc1.yaml",
"oc create -f - << EOF apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class> 1 annotations: storageclass.kubernetes.io/is-default-class: \"true\" provisioner: <provisioner-name> 2 parameters: EOF",
"oc new-app mysql-persistent",
"--> Deploying template \"openshift/mysql-persistent\" to project default",
"oc get pvc",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE mysql Bound kubernetes-dynamic-pv-3271ffcb4e1811e8 1Gi RWO cinder 3s",
"kind: Pod apiVersion: v1 metadata: name: my-csi-app spec: containers: - name: my-frontend image: busybox volumeMounts: - mountPath: \"/data\" name: my-csi-inline-vol command: [ \"sleep\", \"1000000\" ] volumes: 1 - name: my-csi-inline-vol csi: driver: inline.storage.kubernetes.io volumeAttributes: foo: bar",
"oc create -f my-csi-app.yaml",
"apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotClass metadata: name: csi-hostpath-snap driver: hostpath.csi.k8s.io 1 deletionPolicy: Delete",
"oc create -f volumesnapshotclass.yaml",
"apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshot metadata: name: mysnap spec: volumeSnapshotClassName: csi-hostpath-snap 1 source: persistentVolumeClaimName: myclaim 2",
"oc create -f volumesnapshot-dynamic.yaml",
"apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshot metadata: name: snapshot-demo spec: source: volumeSnapshotContentName: mycontent 1",
"oc create -f volumesnapshot-manual.yaml",
"oc describe volumesnapshot mysnap",
"apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshot metadata: name: mysnap spec: source: persistentVolumeClaimName: myclaim volumeSnapshotClassName: csi-hostpath-snap status: boundVolumeSnapshotContentName: snapcontent-1af4989e-a365-4286-96f8-d5dcd65d78d6 1 creationTime: \"2020-01-29T12:24:30Z\" 2 readyToUse: true 3 restoreSize: 500Mi",
"oc get volumesnapshotcontent",
"apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotClass metadata: name: csi-hostpath-snap driver: hostpath.csi.k8s.io deletionPolicy: Delete 1",
"oc delete volumesnapshot <volumesnapshot_name>",
"volumesnapshot.snapshot.storage.k8s.io \"mysnapshot\" deleted",
"oc delete volumesnapshotcontent <volumesnapshotcontent_name>",
"oc patch -n USDPROJECT volumesnapshot/USDNAME --type=merge -p '{\"metadata\": {\"finalizers\":null}}'",
"volumesnapshotclass.snapshot.storage.k8s.io \"csi-ocs-rbd-snapclass\" deleted",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: myclaim-restore spec: storageClassName: csi-hostpath-sc dataSource: name: mysnap 1 kind: VolumeSnapshot 2 apiGroup: snapshot.storage.k8s.io 3 accessModes: - ReadWriteOnce resources: requests: storage: 1Gi",
"oc create -f pvc-restore.yaml",
"oc get pvc",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc-1-clone namespace: mynamespace spec: storageClassName: csi-cloning 1 accessModes: - ReadWriteOnce resources: requests: storage: 5Gi dataSource: kind: PersistentVolumeClaim name: pvc-1",
"oc create -f pvc-clone.yaml",
"oc get pvc pvc-1-clone",
"kind: Pod apiVersion: v1 metadata: name: mypod spec: containers: - name: myfrontend image: dockerfile/nginx volumeMounts: - mountPath: \"/var/www/html\" name: mypd volumes: - name: mypd persistentVolumeClaim: claimName: pvc-1-clone 1",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: csi-gce-pd-cmek provisioner: pd.csi.storage.gke.io volumeBindingMode: \"WaitForFirstConsumer\" allowVolumeExpansion: true parameters: type: pd-standard disk-encryption-kms-key: projects/<key-project-id>/locations/<location>/keyRings/<key-ring>/cryptoKeys/<key> 1",
"oc describe storageclass csi-gce-pd-cmek",
"Name: csi-gce-pd-cmek IsDefaultClass: No Annotations: None Provisioner: pd.csi.storage.gke.io Parameters: disk-encryption-kms-key=projects/key-project-id/locations/location/keyRings/ring-name/cryptoKeys/key-name,type=pd-standard AllowVolumeExpansion: true MountOptions: none ReclaimPolicy: Delete VolumeBindingMode: WaitForFirstConsumer Events: none",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: podpvc spec: accessModes: - ReadWriteOnce storageClassName: csi-gce-pd-cmek resources: requests: storage: 6Gi",
"oc apply -f pvc.yaml",
"oc get pvc",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE podpvc Bound pvc-e36abf50-84f3-11e8-8538-42010a800002 10Gi RWO csi-gce-pd-cmek 9s",
"oc get storageclass",
"NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE standard(default) cinder.csi.openstack.org Delete WaitForFirstConsumer true 46h standard-csi kubernetes.io/cinder Delete WaitForFirstConsumer true 46h",
"oc patch storageclass standard -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"false\"}}}'",
"oc patch storageclass standard-csi -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"true\"}}}'",
"oc get storageclass",
"NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE standard kubernetes.io/cinder Delete WaitForFirstConsumer true 46h standard-csi(default) cinder.csi.openstack.org Delete WaitForFirstConsumer true 46h",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: cinder-claim spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi",
"oc create -f cinder-claim.yaml",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc-manila spec: accessModes: 1 - ReadWriteMany resources: requests: storage: 10Gi storageClassName: csi-manila-gold 2",
"oc create -f pvc-manila.yaml",
"oc get pvc pvc-manila",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class-name> 1 annotations: storageclass.kubernetes.io/is-default-class: \"false\" 2 provisioner: csi.ovirt.org parameters: storageDomainName: <rhv-storage-domain-name> 3 thinProvisioning: \"true\" 4 csi.storage.k8s.io/fstype: ext4 5",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc-ovirt spec: storageClassName: ovirt-csi-sc 1 accessModes: - ReadWriteOnce resources: requests: storage: <volume size> 2 volumeMode: <volume mode> 3",
"oc create -f pvc-ovirt.yaml",
"oc get pvc pvc-ovirt",
"apiVersion: storage.k8s.io/v1 kind: StorageClass parameters: type: gp2 reclaimPolicy: Delete allowVolumeExpansion: true 1",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: ebs spec: storageClass: \"storageClassWithFlagSet\" accessModes: - ReadWriteOnce resources: requests: storage: 8Gi 1",
"oc describe pvc <pvc_name>",
"kind: StorageClass 1 apiVersion: storage.k8s.io/v1 2 metadata: name: gp2 3 annotations: 4 storageclass.kubernetes.io/is-default-class: 'true' provisioner: kubernetes.io/aws-ebs 5 parameters: 6 type: gp2",
"storageclass.kubernetes.io/is-default-class: \"true\"",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: storageclass.kubernetes.io/is-default-class: \"true\"",
"kubernetes.io/description: My Storage Class Description",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: kubernetes.io/description: My Storage Class Description",
"kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: gold provisioner: kubernetes.io/cinder parameters: type: fast 1 availability: nova 2 fsType: ext4 3",
"kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: slow provisioner: kubernetes.io/aws-ebs parameters: type: io1 1 iopsPerGB: \"10\" 2 encrypted: \"true\" 3 kmsKeyId: keyvalue 4 fsType: ext4 5",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: managed-premium provisioner: kubernetes.io/azure-disk volumeBindingMode: WaitForFirstConsumer 1 allowVolumeExpansion: true parameters: kind: Managed 2 storageaccounttype: Premium_LRS 3 reclaimPolicy: Delete",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: system:azure-cloud-provider name: <persistent-volume-binder-role> 1 rules: - apiGroups: [''] resources: ['secrets'] verbs: ['get','create']",
"oc adm policy add-cluster-role-to-user <persistent-volume-binder-role>",
"system:serviceaccount:kube-system:persistent-volume-binder",
"kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <azure-file> 1 provisioner: kubernetes.io/azure-file parameters: location: eastus 2 skuName: Standard_LRS 3 storageAccount: <storage-account> 4 reclaimPolicy: Delete volumeBindingMode: Immediate",
"kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: azure-file mountOptions: - uid=1500 1 - gid=1500 2 - mfsymlinks 3 provisioner: kubernetes.io/azure-file parameters: location: eastus skuName: Standard_LRS reclaimPolicy: Delete volumeBindingMode: Immediate",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: standard provisioner: kubernetes.io/gce-pd parameters: type: pd-standard 1 replication-type: none volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true reclaimPolicy: Delete",
"kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: slow provisioner: kubernetes.io/vsphere-volume 1 parameters: diskformat: thin 2",
"oc get storageclass",
"NAME TYPE gp2 (default) kubernetes.io/aws-ebs 1 standard kubernetes.io/aws-ebs",
"oc patch storageclass gp2 -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"false\"}}}'",
"oc patch storageclass standard -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"true\"}}}'",
"oc get storageclass",
"NAME TYPE gp2 kubernetes.io/aws-ebs standard (default) kubernetes.io/aws-ebs"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html-single/storage/index |
Chapter 4. Debug Parameters | Chapter 4. Debug Parameters These parameters allow you to set debug mode on a per-service basis. The Debug parameter acts as a global parameter for all services and the per-service parameters can override the effects of global parameter on individual services. Parameter Description BarbicanDebug Set to True to enable debugging OpenStack Key Manager (barbican) service. The default value is false . CinderDebug Set to True to enable debugging on OpenStack Block Storage (cinder) services. The default value is false . ConfigDebug Whether to run configuration management (e.g. Puppet) in debug mode. The default value is false . Debug Set to True to enable debugging on all services. The default value is false . DesignateDebug Set to True to enable debugging Designate services. The default value is false . GlanceDebug Set to True to enable debugging OpenStack Image Storage (glance) service. The default value is false . HeatDebug Set to True to enable debugging OpenStack Orchestration (heat) services. The default value is false . HorizonDebug Set to True to enable debugging OpenStack Dashboard (horizon) service. The default value is false . IronicDebug Set to True to enable debugging OpenStack Bare Metal (ironic) services. The default value is false . KeystoneDebug Set to True to enable debugging OpenStack Identity (keystone) service. The default value is false . ManilaDebug Set to True to enable debugging OpenStack Shared File Systems (manila) services. The default value is false . MemcachedDebug Set to True to enable debugging Memcached service. The default value is false . NeutronDebug Set to True to enable debugging OpenStack Networking (neutron) services. The default value is false . NovaDebug Set to True to enable debugging OpenStack Compute (nova) services. The default value is false . OctaviaDebug Set to True to enable debugging OpenStack Load Balancing-as-a-Service (octavia) services. The default value is false . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/overcloud_parameters/ref_debug-parameters_overcloud_parameters |
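A minimal sketch of how these switches are usually applied, assuming the standard heat environment-file workflow (the file name, the services chosen, and the abbreviated deploy command are illustrative, not taken from this reference):

cat > ~/templates/debug.yaml <<'EOF'
parameter_defaults:
  # Global switch for every service
  Debug: false
  # Per-service parameters override the global value for individual services
  NovaDebug: true
  NeutronDebug: true
EOF
# pass the file with -e on your usual deploy command (abbreviated here)
openstack overcloud deploy --templates -e ~/templates/debug.yaml <other arguments>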
Preface | Preface Open Java Development Kit (OpenJDK) is a free and open source implementation of the Java Platform, Standard Edition (Java SE). The Red Hat build of OpenJDK is available in two versions, Red Hat build of OpenJDK 8u and Red Hat build of OpenJDK 11u. Packages for the Red Hat build of OpenJDK are made available on Red Hat Enterprise Linux and Microsoft Windows and shipped as a JDK and JRE in the Red Hat Ecosystem Catalog. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.12/pr01 |
Chapter 8. Performing health checks on Red Hat Quay deployments | Chapter 8. Performing health checks on Red Hat Quay deployments Health check mechanisms are designed to assess the health and functionality of a system, service, or component. Health checks help ensure that everything is working correctly, and can be used to identify potential issues before they become critical problems. By monitoring the health of a system, Red Hat Quay administrators can address abnormalities or potential failures for things like geo-replication deployments, Operator deployments, standalone Red Hat Quay deployments, object storage issues, and so on. Performing health checks can also help reduce the likelihood of encountering troubleshooting scenarios. Health check mechanisms can play a role in diagnosing issues by providing valuable information about the system's current state. By comparing health check results with expected benchmarks or predefined thresholds, deviations or anomalies can be identified quicker. 8.1. Red Hat Quay health check endpoints Important Links contained herein to any external website(s) are provided for convenience only. Red Hat has not reviewed the links and is not responsible for the content or its availability. The inclusion of any link to an external website does not imply endorsement by Red Hat of the website or its entities, products, or services. You agree that Red Hat is not responsible or liable for any loss or expenses that may result due to your use of (or reliance on) the external site or content. Red Hat Quay has several health check endpoints. The following table shows you the health check, a description, an endpoint, and an example output. Table 8.1. Health check endpoints Health check Description Endpoint Example output instance The instance endpoint acquires the entire status of the specific Red Hat Quay instance. Returns a dict with key-value pairs for the following: auth , database , disk_space , registry_gunicorn , service_key , and web_gunicorn. Returns a number indicating the health check response of either 200 , which indicates that the instance is healthy, or 503 , which indicates an issue with your deployment. https://{quay-ip-endpoint}/health/instance or https://{quay-ip-endpoint}/health {"data":{"services":{"auth":true,"database":true,"disk_space":true,"registry_gunicorn":true,"service_key":true,"web_gunicorn":true}},"status_code":200} endtoend The endtoend endpoint conducts checks on all services of your Red Hat Quay instance. Returns a dict with key-value pairs for the following: auth , database , redis , storage . Returns a number indicating the health check response of either 200 , which indicates that the instance is healthy, or 503 , which indicates an issue with your deployment. https://{quay-ip-endpoint}/health/endtoend {"data":{"services":{"auth":true,"database":true,"redis":true,"storage":true}},"status_code":200} warning The warning endpoint conducts a check on the warnings. Returns a dict with key-value pairs for the following: disk_space_warning . Returns a number indicating the health check response of either 200 , which indicates that the instance is healthy, or 503 , which indicates an issue with your deployment. https://{quay-ip-endpoint}/health/warning {"data":{"services":{"disk_space_warning":true}},"status_code":503} 8.2. Navigating to a Red Hat Quay health check endpoint Use the following procedure to navigate to the instance endpoint. This procedure can be repeated for endtoend and warning endpoints. 
Procedure On your web browser, navigate to https://{quay-ip-endpoint}/health/instance . You are taken to the health instance page, which returns information like the following: {"data":{"services":{"auth":true,"database":true,"disk_space":true,"registry_gunicorn":true,"service_key":true,"web_gunicorn":true}},"status_code":200} For Red Hat Quay, "status_code": 200 means that the instance is healthy. Conversely, if you receive "status_code": 503 , there is an issue with your deployment. Additional resources | [
"{\"data\":{\"services\":{\"auth\":true,\"database\":true,\"disk_space\":true,\"registry_gunicorn\":true,\"service_key\":true,\"web_gunicorn\":true}},\"status_code\":200}"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/deploy_red_hat_quay_-_high_availability/health-check-quay |
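A quick way to exercise the three endpoints from a shell is a sketch like the following; the hostname is a placeholder, jq is assumed to be installed, and -k is only appropriate for test deployments with self-signed certificates:

QUAY=https://quay.example.com   # placeholder endpoint
for ep in instance endtoend warning; do
  echo "== /health/${ep} =="
  # 200 in status_code means healthy, 503 indicates a problem
  curl -ks "${QUAY}/health/${ep}" | jq '.status_code'
done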
2.5. Tracking Tag History | 2.5. Tracking Tag History The ETL Service collects tag information as displayed in the Administration Portal every minute and stores this data in the tags historical tables. The ETL Service tracks five types of changes: A tag is created in the Administration Portal - the ETL Service copies the tag details, position in the tag tree and relation to other objects in the tag tree. An entity is attached to the tag tree in the Administration Portal - the ETL Service replicates the addition to the ovirt_engine_history database as a new entry. A tag is updated - the ETL Service replicates the change of tag details to the ovirt_engine_history database as a new entry. An entity or tag branch is removed from the Administration Portal - the ovirt_engine_history database flags the corresponding tag and relations as removed in new entries. Removed tags and relations are only flagged as removed or detached. A tag branch is moved - the corresponding tag and relations are updated as new entries. Moved tags and relations are only flagged as updated. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/data_warehouse_guide/tracking_tag_history
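For illustration only, the collected tag history can be inspected directly with psql once the rows are in the ovirt_engine_history database; the table name below is a placeholder and must be checked against your actual schema before use:

# run on the Data Warehouse database host; <tag_history_table> is a hypothetical name
su - postgres -c "psql ovirt_engine_history -c 'SELECT * FROM <tag_history_table> LIMIT 10;'"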
1.3. Communicate as Much as Possible | 1.3. Communicate as Much as Possible When it comes to your users, you can never communicate too much. Be aware that small system changes you might think are practically unnoticeable could very well completely confuse the administrative assistant in Human Resources. The method by which you communicate with your users can vary according to your organization. Some organizations use email; others, an internal website. Still others may rely on Usenet news or IRC. A sheet of paper tacked to a bulletin board in the breakroom may even suffice at some places. In any case, use whatever method(s) that work well at your organization. In general, it is best to follow this paraphrased approach used in writing newspaper stories: Tell your users what you are going to do Tell your users what you are doing Tell your users what you have done The following sections look at these steps in more depth. 1.3.1. Tell Your Users What You Are Going to Do Make sure you give your users sufficient warning before you do anything. The actual amount of warning necessary varies according to the type of change (upgrading an operating system demands more lead time than changing the default color of the system login screen), as well as the nature of your user community (more technically adept users may be able to handle changes more readily than users with minimal technical skills.) At a minimum, you should describe: The nature of the change When it will take place Why it is happening Approximately how long it should take The impact (if any) that the users can expect due to the change Contact information should they have any questions or concerns Here is a hypothetical situation. The Finance department has been experiencing problems with their database server being very slow at times. You are going to bring the server down, upgrade the CPU module to a faster model, and reboot. Once this is done, you will move the database itself to faster, RAID-based storage. Here is one possible announcement for this situation: System Downtime Scheduled for Friday Night Starting this Friday at 6pm (midnight for our associates in Berlin), all financial applications will be unavailable for a period of approximately four hours. During this time, changes to both the hardware and software on the Finance database server will be performed. These changes should greatly reduce the time required to run the Accounts Payable and Accounts Receivable applications, and the weekly Balance Sheet report. Other than the change in runtime, most people should notice no other change. However, those of you that have written your own SQL queries should be aware that the layout of some indices will change. This is documented on the company intranet website, on the Finance page. Should you have any questions, comments, or concerns, please contact System Administration at extension 4321. A few points are worth noting: Effectively communicate the start and duration of any downtime that might be involved in the change. Make sure you give the time of the change in such a way that it is useful to all users, no matter where they may be located. Use terms that your users understand. The people impacted by this work do not care that the new CPU module is a 2GHz unit with twice as much L2 cache, or that the database is being placed on a RAID 5 logical volume. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s1-philosophy-communicate |
8.4.3. Using Yum Variables | 8.4.3. Using Yum Variables You can use and reference the following built-in variables in yum commands and in all Yum configuration files (that is, /etc/yum.conf and all .repo files in the /etc/yum.repos.d/ directory): $releasever You can use this variable to reference the release version of Red Hat Enterprise Linux. Yum obtains the value of $releasever from the distroverpkg= value line in the /etc/yum.conf configuration file. If there is no such line in /etc/yum.conf , then yum infers the correct value by deriving the version number from the redhat-release-server package. The value of $releasever typically consists of the major release number and the variant of Red Hat Enterprise Linux, for example 6Client , or 6Server . $arch You can use this variable to refer to the system's CPU architecture as returned when calling Python's os.uname() function. Valid values for $arch include i686 and x86_64 . $basearch You can use $basearch to reference the base architecture of the system. For example, i686 machines have a base architecture of i386 , and AMD64 and Intel 64 machines have a base architecture of x86_64 . $YUM0-9 These ten variables are each replaced with the value of any shell environment variables with the same name. If one of these variables is referenced (in /etc/yum.conf for example) and a shell environment variable with the same name does not exist, then the configuration file variable is not replaced. To define a custom variable or to override the value of an existing one, create a file with the same name as the variable (without the " $ " sign) in the /etc/yum/vars/ directory, and add the desired value on its first line. For example, repository descriptions often include the operating system name. To define a new variable called $osname , create a new file with " Red Hat Enterprise Linux " on the first line and save it as /etc/yum/vars/osname : Instead of " Red Hat Enterprise Linux 6 " , you can now use the following in the .repo files: | [
"~]# echo \"Red Hat Enterprise Linux\" > /etc/yum/vars/osname",
"name=USDosname USDreleasever"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-using_yum_variables |
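Putting the pieces together, a custom variable plus a repository definition that uses the built-in variables might look like the sketch below; the repository ID, name, and baseurl are placeholders rather than a real Red Hat repository:

echo "Red Hat Enterprise Linux" > /etc/yum/vars/osname
# quote the heredoc delimiter so the shell does not expand $osname, $releasever, $basearch itself
cat > /etc/yum.repos.d/example.repo <<'EOF'
[example-repo]
name=$osname $releasever - example packages
baseurl=http://repo.example.com/$releasever/$basearch/
enabled=1
gpgcheck=1
EOF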
Chapter 30. Additional resources | Chapter 30. Additional resources Managing and monitoring KIE Server Packaging and deploying an Red Hat Decision Manager project | null | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/deploying_and_managing_red_hat_decision_manager_services/additional_resources_2 |
Connecting to Red Hat Insights through Insights proxy | Connecting to Red Hat Insights through Insights proxy Red Hat Insights 1-latest Insights proxy allows those with constraints preventing Internet access to connect to Red Hat Insights Red Hat Customer Content Services | [
"subscription-manager repos --enable=insights-proxy-for-rhel-9-x86_64-rpms",
"subscription-manager repos--enable=insights-proxy-for-rhel-9-aarch64-rpms",
"dnf install -y rhproxy",
"rpm -q rhproxy",
"useradd rhproxy",
"id rhproxy",
"[rhproxy@server ~]USD podman login registry.redhat.io",
"firewall-cmd --permanent --add-port=3128/tcp",
"firewall-cmd --permanent --add-port=8443/tcp",
"firewall-cmd --reload",
"[rhproxy@server ~] USD rhproxy install",
"[rhproxy@server ~] USD rhproxy start",
"[rhproxy@server ~] USD rhproxy status",
"[rhproxy@server ~]USD curl -L -x http://USD(hostname):3128 https://mirrors.fedoraproject.org/",
"curl -k -L https://<rhproxy-hostname>:8443/download/bin/configure-client.sh -o configure-client.sh",
"chmod +x configure-client.sh",
"./configure-client.sh --configure --proxy-host <rhproxy-hostname>",
"insights-client --test-connection",
"[rhproxy@server ~]USD rhproxy status",
"[rhproxy@server ~]USD rhproxy restart",
"curl -x http://USD(hostname):3128 https://<host name>",
"[rhpproxy@server ~]USD rhproxy restart",
"[root@client ~] ./configure-client.sh --unconfigure",
"https://cert-api.access.redhat.com:443",
"https://cert.cloud.redhat.com:443",
"https://cert.console.redhat.com:443",
"https://console.redhat.com:443",
"https://sso.redhat.com:443",
"proxy_hostname =",
"proxy_scheme = http",
"proxy_port =",
"proxy_user =",
"proxy_password =",
"no_proxy =",
"/etc/insights-client/insights-client.conf",
"insights-client --test-connection --net-debug",
"End API URL Connection Test: SUCCESS Connectivity tests completed successfully See `/var/log/insights-client/insights-client.log` for more details."
] | https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html-single/connecting_to_red_hat_insights_through_insights_proxy/index |
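For reference, a hedged sketch of what configure-client.sh effectively leaves behind on a client, followed by the connectivity test from the procedure above; the hostname is a placeholder and the exact values written on your system should be verified in /etc/insights-client/insights-client.conf:

sudo tee -a /etc/insights-client/insights-client.conf <<'EOF' >/dev/null
proxy_scheme = http
proxy_hostname = rhproxy.example.com
proxy_port = 3128
EOF
sudo insights-client --test-connection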
14.6. The (Non-Transactional) CarMart Quickstart in Remote Client-Server Mode (JBoss Enterprise Web Server) | 14.6. The (Non-Transactional) CarMart Quickstart in Remote Client-Server Mode (JBoss Enterprise Web Server) The CarMart (non-transactional) quickstart is supported for JBoss Data Grid's Remote Client-Server Mode with the JBoss Enterprise Web Server container. Report a bug 14.6.1. Build and Deploy the CarMart Quickstart in Remote Client-Server Mode This quickstart accesses Red Hat JBoss Data Grid via Hot Rod. This feature is not available for the Transactional CarMart quickstart. Important This quickstart deploys to JBoss Enterprise Web Server or Tomcat. The application cannot be deployed to JBoss Data Grid because it does not support application deployment. Prerequisites Prerequisites for this procedure are as follows: Obtain the most recent supported JBoss Data Grid Remote Client-Server Mode distribution files from Red Hat . Ensure that the JBoss Data Grid and JBoss Enterprise Application Platform Maven repositories are installed and configured. For details, see Chapter 3, Install and Use the Maven Repositories Add a server element to the Maven settings.xml file. In the id elements within server , add the appropriate tomcat credentials. Procedure 14.10. Build and Deploy the CarMart Quickstart in Remote Client-Server Mode Configure the Standalone File Add the following configuration to the standalone.xml file located in the $JDG_HOME/standalone/configuration/ directory. Add the following configuration within the infinispan subsystem tags: Note If the carcache element already exists in your configuration, replace it with the provided configuration. Start the Container Start the JBoss server instance where your application will deploy. For Linux: For Windows: Build the Application Use the following command to build your application in the relevant directory: Deploy the Application Use the following command to deploy the application in the relevant directory: Report a bug 14.6.2. View the CarMart Quickstart in Remote Client-Server Mode The following procedure outlines how to view the CarMart quickstart in Red Hat JBoss Data Grid's Remote Client-Server Mode: Prerequisite The CarMart quickstart must be built and deployed to be viewed. Procedure 14.11. View the CarMart Quickstart in Remote Client-Server Mode Visit the following link in a browser window to view the application: Report a bug 14.6.3. Remove the CarMart Quickstart in Remote Client-Server Mode The following procedure provides directions to remove an already deployed application in Red Hat JBoss Data Grid's Remote Client-Server mode. Procedure 14.12. Remove an Application in Remote Client-Server Mode To remove an application, use the following command from the root directory of this quickstart: Report a bug | [
"<server> <id>tomcat</id> <username>admin</username> <password>admin</password> </server>",
"<local-cache name=\"carcache\" start=\"EAGER\" batching=\"false\" statistics=\"true\"> <eviction strategy=\"LIRS\" max-entries=\"4\"/> </local-cache>",
"USDJBOSS_EWS_HOME/tomcat7/bin/catalina.sh run",
"USDJBOSS_EWS_HOME\\tomcat7\\bin\\catalina.bat run",
"mvn clean package -Premote-tomcat",
"mvn tomcat:deploy -Premote-tomcat",
"http://localhost:8080/jboss-carmart",
"mvn tomcat:undeploy -Premote-tomcat"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/getting_started_guide/sect-The_Non-Transactional_CarMart_Quickstart_in_Remote_Client-Server_Mode_JBoss_Enterprise_Web_Server |
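Before running the Maven build it can help to confirm that the JBoss Data Grid server and its Hot Rod endpoint are up; a rough check, assuming the default Hot Rod port (confirm the hotrod connector port in your standalone.xml) and a placeholder $JDG_HOME:

# terminal 1: start the JBoss Data Grid server that holds carcache
$JDG_HOME/bin/standalone.sh
# terminal 2: rough reachability check; 11222 is an assumption, not taken from this procedure
nc -z localhost 11222 && echo "Hot Rod endpoint is listening"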
Chapter 6. Updating Drivers During Installation on AMD64 and Intel 64 Systems | Chapter 6. Updating Drivers During Installation on AMD64 and Intel 64 Systems In most cases, Red Hat Enterprise Linux already includes drivers for the devices that make up your system. However, if your system contains hardware that has been released very recently, drivers for this hardware might not yet be included. Sometimes, a driver update that provides support for a new device might be available from Red Hat or your hardware vendor on a driver disc that contains RPM packages . Typically, the driver disc is available for download as an ISO image file . Important Driver updates should only be performed if a missing driver prevents you to complete the installation successfully. The drivers included in the kernel should always be preferred over drivers provided by other means. Often, you do not need the new hardware during the installation process. For example, if you use a DVD to install to a local hard drive, the installation will succeed even if drivers for your network card are not available. In such a situation, complete the installation and add support for the new hardware afterward - see Red Hat Enterprise Linux 7 System Administrator's Guide for details of adding this support. In other situations, you might want to add drivers for a device during the installation process to support a particular configuration. For example, you might want to install drivers for a network device or a storage adapter card to give the installation program access to the storage devices that your system uses. You can use a driver disc to add this support during installation in one of two ways: place the ISO image file of the driver disc in a location accessible to the installation program, on a local hard drive, on a USB flash drive, or on a CD or DVD. create a driver disc by extracting the image file onto a CD or a DVD, or a USB flash drive. See the instructions for making installation discs in Section 3.1, "Making an Installation CD or DVD" for more information on burning ISO image files to a CD or DVD, and Section 3.2, "Making Installation USB Media" for instructions on writing ISO images to USB drives. If Red Hat, your hardware vendor, or a trusted third party told you that you will require a driver update during the installation process, choose a method to supply the update from the methods described in this chapter and test it before beginning the installation. Conversely, do not perform a driver update during installation unless you are certain that your system requires it. The presence of a driver on a system for which it was not intended can complicate support. Warning Driver update disks sometimes disable conflicting kernel drivers, where necessary. In rare cases, unloading a kernel module in this way can cause installation errors. 6.1. Limitations of Driver Updates During Installation On UEFI systems with the Secure Boot technology enabled, all drivers being loaded must be signed with a valid certificate, otherwise the system will refuse them. All drivers provided by Red Hat are signed by one of Red Hat's private keys and authenticated by the corresponding Red Hat public key in the kernel. If you load any other drivers (ones not provided on the Red Hat Enterprise Linux installation DVD), you must make sure that they are signed as well. More information about signing custom drivers can be found in the Working with Kernel Modules chapter in the Red Hat Enterprise Linux 7 System Administrator's Guide . 
| null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/installation_guide/chap-driver-updates-x86 |
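For example, writing a downloaded driver update ISO to a USB flash drive follows the same pattern as Section 3.2; the device name below is a placeholder, and dd overwrites the target, so confirm it first:

lsblk                                                  # identify the USB device before writing
sudo dd if=driverupdate.iso of=/dev/sdX bs=4M && sync  # /dev/sdX is a placeholder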
Chapter 15. Backing up and restoring a director Operator deployed overcloud | Chapter 15. Backing up and restoring a director Operator deployed overcloud To back up a Red Hat OpenStack Platform (RHOSP) overcloud that was deployed with director Operator (OSPdO), you must backup the Red Hat OpenShift Container Platform (RHOCP) OSPdO resources, and the use the Relax-and-Recover (ReaR) tool to backup the control plane and overcloud. 15.1. Backing up and restoring director Operator resources Red Hat OpenStack Platform (RHOSP) director Operator (OSPdO) provides custom resource definitions (CRDs) for backing up and restoring a deployment. You do not have to manually export and import multiple configurations. OSPdO knows which custom resources (CRs), including the ConfigMap and Secret CRs, that it needs to create a complete backup because it is aware of the state of all resources. Therefore, OSPdO does not backup any configuration that is in an incomplete or error state. To backup and restore an OSPdO deployment, you create an OpenStackBackupRequest CR to initiate the creation or restoration of a backup. Your OpenStackBackupRequest CR creates the OpenStackBackup CR that stores the backup of the custom resources (CRs), the ConfigMap and the Secret configurations for the specified namespace. 15.1.1. Backing up director Operator resources To create a backup you must create an OpenStackBackupRequest custom resource (CR) for the namespace. The OpenStackBackup CR is created when the OpenStackBackupRequest object is created in save mode. Procedure Create a file named openstack_backup.yaml on your workstation. Add the following configuration to your openstack_backup.yaml file to create the OpenStackBackupRequest custom resource (CR): 1 Set the mode to save to request creation of an OpenStackBackup CR. 2 Optional: Include any ConfigMap resources that you created manually. 3 Optional: Include any Secret resources that you created manually. Note OSPdO attempts to include all ConfigMap and Secret objects associated with the OSPdO CRs in the namespace, such as OpenStackControlPlane and OpenStackBaremetalSet . You do not need to include those in the additional lists. Save the openstack_backup.yaml file. Create the OpenStackBackupRequest CR: Monitor the creation status of the OpenStackBackupRequest CR: The Quiescing state indicates that OSPdO is waiting for the CRs to reach their finished state. The number of CRs can affect how long it takes to finish creating the backup. If the status remains in the Quiescing state for longer than expected, you can investigate the OSPdO logs to check progress: Replace <operator_pod> with the name of the Operator pod. The Saved state indicates that the OpenStackBackup CR is created. The Error state indicates the backup has failed to create. Review the request contents to find the error: View the OpenStackBackup resource to confirm it exists: 15.1.2. Restoring director Operator resources from a backup When you request to restore a backup, Red Hat OpenStack Platform (RHOSP) director Operator (OSPdO) takes the contents of the specified OpenStackBackup resource and attempts to apply them to all existing custom resources (CRs), ConfigMap and Secret resources present within the namespace. OSPdO overwrites any existing resources in the namespace, and creates new resources for those not found within the namespace. Procedure List the available backups: Inspect the details of a specific backup: Replace <name> with the name of the backup you want to inspect. 
Create a file named openstack_restore.yaml on your workstation. Add the following configuration to your openstack_restore.yaml file to create the OpenStackBackupRequest custom resource (CR): Replace <mode> with one of the following options: restore : Requests a restore from an existing OpenStackBackup . cleanRestore : Completely wipes the existing OSPdO resources within the namespace before restoring and creating new resources from the existing OpenStackBackup . Replace <restore_source> with the ID of the OpenStackBackup to restore, for example, openstackbackupsave-1641928378 . Save the openstack_restore.yaml file. Create the OpenStackBackupRequest CR: Monitor the creation status of the OpenStackBackupRequest CR: The Loading state indicates that all resources from the OpenStackBackup are being applied against the cluster. The Reconciling state indicates that all resources are loaded and OSPdO has begun reconciling to attempt to provision all resources. The Restored state indicates that the OpenStackBackup CR has been restored. The Error state indicates the restoration has failed. Review the request contents to find the error: 15.2. Backing up and restoring a director Operator deployed overcloud with the Relax-and-Recover tool To back up a director Operator deployed overcloud with the Relax-and-Recover (ReaR) tool, you configure the backup node, install the ReaR tool on the control plane, and create the backup image. You can create backups as a part of your regular environment maintenance. In addition, you must back up the control plane before performing updates or upgrades. You can use the backups to restore the control plane to its state if an error occurs during an update or upgrade. 15.2.1. Supported backup formats and protocols The backup and restore process uses the open-source tool Relax-and-Recover (ReaR) to create and restore bootable backup images. ReaR is written in Bash and supports multiple image formats and multiple transport protocols. The following list shows the backup formats and protocols that Red Hat OpenStack Platform supports when you use ReaR to back up and restore a director Operator deployed control plane. Bootable media formats ISO File transport protocols SFTP NFS 15.2.2. Configuring the backup storage location You can install and configure an NFS server to store the backup file. Before you create a backup of the control plane, configure the backup storage location in the bar-vars.yaml environment file. This file stores the key-value parameters that you want to pass to the backup execution. Important If you previously installed and configured an NFS or SFTP server, you do not need to complete this procedure. You enter the server information when you set up ReaR on the node that you want to back up. By default, the Relax-and-Recover (ReaR) IP address parameter for the NFS server is 192.168.24.1 . You must add the parameter tripleo_backup_and_restore_server to set the IP address value that matches your environment. Procedure Create an NFS backup directory on your workstation: Create the bar-vars.yaml file on your workstation: USD touch /home/stack/bar-vars.yaml In the bar-vars.yaml file, configure the backup storage location: tripleo_backup_and_restore_server: <ip_address> tripleo_backup_and_restore_shared_storage_folder: <backup_dir> Replace <ip_address> with the IP address of your NFS server, for example, 172.22.0.1 . The default IP address is 192.168.24.1 Replace <backup_dir> with the location of the backup storage folder, for example, /home/nfs/backup . 15.2.3. 
Performing a backup of the control plane To create a backup of the control plane, you must install and configure Relax-and-Recover (ReaR) on each of the Controller virtual machines (VMs). Important Due to a known issue, the ReaR backup of overcloud nodes continues even if a Controller node is down. Ensure that all your Controller nodes are running before you run the ReaR backup. A fix is planned for a later Red Hat OpenStack Platform (RHOSP) release. For more information, see BZ#2077335 - Back up of the overcloud ctlplane keeps going even if one controller is unreachable . Procedure Extract the static Ansible inventory file from the location in which it was saved during installation: $ oc rsh openstackclient $ cd $ find . -name tripleo-ansible-inventory.yaml $ cp ~/overcloud-deploy/<stack>/tripleo-ansible-inventory.yaml . Replace <stack> with the name of your stack, for example, cloud-admin . By default, the name of the stack is overcloud . Install ReaR on each Controller virtual machine (VM): $ openstack overcloud backup --setup-rear --extra-vars /home/cloud-admin/bar-vars.yaml --inventory /home/cloud-admin/tripleo-ansible-inventory.yaml Open the /etc/rear/local.conf file on each Controller VM : In the /etc/rear/local.conf file, add the NETWORKING_PREPARATION_COMMANDS parameter to configure the Controller VM networks in the following format: Replace <command_1> , <command_2> , and all commands up to <command_n> , with commands that configure the network interface names or IP addresses. For example, you can add the ip link add br-ctlplane type bridge command to configure the control plane bridge name or add the ip link set eth0 up command to set the name of the interface. You can add more commands to the parameter based on your network configuration. Repeat the following command on each Controller VM to back up their config-drive partitions: [root@controller-0 ~]# dd if=/dev/vda1 of=/mnt/config-drive Create a backup of the Controller VMs: $ oc rsh openstackclient $ openstack overcloud backup --inventory /home/cloud-admin/tripleo-ansible-inventory.yaml The backup process runs sequentially on each Controller VM without disrupting the service to your environment. Note You cannot use cron to schedule backups because cron cannot be used on the openstackclient pod. 15.2.4. Restoring the control plane If an error occurs during an update or upgrade, you can restore the control plane to its previous state by using the backup ISO image that you created using the Relax-and-Recover (ReaR) tool. To restore the control plane, you must restore all Controller virtual machines (VMs) to ensure state consistency. You can find the backup ISO images on the backup node. Note Red Hat supports backups of Red Hat OpenStack Platform with native SDNs, such as Open vSwitch (OVS) and the default Open Virtual Network (OVN). For information about third-party SDNs, refer to the third-party SDN documentation. Prerequisites You have created a backup of the control plane nodes. You have access to the backup node. A vncviewer package is installed on the workstation. Procedure Power off each Controller VM. Ensure that all the Controller VMs are powered off completely: Upload the backup ISO images for each Controller VM into a cluster PVC: Replace <backup_image> with the name of the PVC backup image for the Controller VM. For example, backup-controller-0-202310231141 . Replace <pvc_size> with the size of PVC required for the image specified with the --image-path option. For example, 4G .
Replace <image_path> with the path to the backup ISO image for the Controller VM. For example, /home/nfs/backup/controller-0/controller-0-202310231141.iso . Disable the director Operator by changing its replicas to 0 : Replace <csv> with the CSV from the environment, for example, osp-director-operator.v1.3.1 . Verify that the osp-director-operator-controller-manager pod is stopped: Create a backup of each Controller VM resource: Update the Controller VM resource with bootOrder set to 1 and attach the uploaded PVC as a CD-ROM: Replace <backup_image> with the name of the PVC backup image uploaded for the Controller VM in step 2. For example, backup-controller-0-202310231141 . Start each Controller VM: Wait until the status of each Controller VM is RUNNING . Connect to each Controller VM by using VNC: Note If you are using SSH to access the Red Hat OpenShift Container Platform (RHOCP) CLI on a remote system, ensure the SSH X11 forwarding is correctly configured. For more information, see the Red Hat Knowledgebase solution How do I configure X11 forwarding over SSH in Red Hat Enterprise Linux? . ReaR starts automatic recovery after a timeout by default. If recovery does not start automatically, you can manually select the Recover option from the Relax-and-Recover boot menu and specify the name of the control plane node to recover. Wait until the recovery is finished. When the control plane node restoration process completes, the console displays the following message: Enter the recovery shell as root. When the command line console is available, restore the config-drive partition of each control plane node: Power off each node: Update the Controller VM resource and detach the CD-ROM. Make sure the rootDisk has bootOrder: 1 . Enable the director Operator by changing its replicas to 1 : Verify that the osp-director-operator-controller-manager pod is started. Start each Controller VM: Wait until the Controller VMs are running. SELinux is relabelled on first boot. Check the cluster status: If the Galera cluster does not restore as part of the restoration procedure, you must restore Galera manually. For more information, see Restoring the Galera cluster manually . | [
"apiVersion: osp-director.openstack.org/v1beta1 kind: OpenStackBackupRequest metadata: name: openstackbackupsave namespace: openstack spec: mode: save 1 additionalConfigMaps: [] 2 additionalSecrets: [] 3",
"oc create -f openstack_backup.yaml -n openstack",
"oc get openstackbackuprequest openstackbackupsave -n openstack",
"NAME OPERATION SOURCE STATUS COMPLETION TIMESTAMP openstackbackupsave save Quiescing",
"oc logs <operator_pod> -c manager -f 2022-01-11T18:26:15.180Z INFO controllers.OpenStackBackupRequest Quiesce for save for OpenStackBackupRequest openstackbackupsave is waiting for: [OpenStackBaremetalSet: compute, OpenStackControlPlane: overcloud, OpenStackVMSet: controller]",
"NAME OPERATION SOURCE STATUS COMPLETION TIMESTAMP openstackbackupsave save Saved 2022-01-11T19:12:58Z",
"oc get openstackbackuprequest openstackbackupsave -o yaml -n openstack",
"oc get openstackbackup -n openstack NAME AGE openstackbackupsave-1641928378 6m7s",
"oc get osbackup",
"oc get backup <name> -o yaml",
"apiVersion: osp-director.openstack.org/v1beta1 kind: OpenStackBackupRequest metadata: name: openstackbackuprestore namespace: openstack spec: mode: <mode> restoreSource: <restore_source>",
"oc create -f openstack_restore.yaml -n openstack",
"oc get openstackbackuprequest openstackbackuprestore -n openstack",
"NAME OPERATION SOURCE STATUS COMPLETION TIMESTAMP openstackbackuprestore restore openstackbackupsave-1641928378 Loading",
"NAME OPERATION SOURCE STATUS COMPLETION TIMESTAMP openstackbackuprestore restore openstackbackupsave-1641928378 Reconciling",
"NAME OPERATION SOURCE STATUS COMPLETION TIMESTAMP openstackbackuprestore restore openstackbackupsave-1641928378 Restored 2022-01-12T13:48:57Z",
"oc get openstackbackuprequest openstackbackuprestore -o yaml -n openstack",
"mkdir -p /home/nfs/backup chmod 777 /home/nfs/backup cat >/etc/exports.d/backup.exports<<EOF /home/nfs/backup *(rw,sync,no_root_squash) EOF exportfs -av",
"touch /home/stack/bar-vars.yaml",
"tripleo_backup_and_restore_server: <ip_address> tripleo_backup_and_restore_shared_storage_folder: <backup_dir>",
"oc rsh openstackclient cd find . -name tripleo-ansible-inventory.yaml cp ~/overcloud-deploy/<stack>/tripleo-ansible-inventory.yaml .",
"openstack overcloud backup --setup-rear --extra-vars /home/cloud-admin/bar-vars.yaml --inventory /home/cloud-admin/tripleo-ansible-inventory.yaml",
"ssh controller-0 [cloud-admin@controller-0 ~]USD sudo -i cat >>/etc/rear/local.conf<<EOF",
"NETWORKING_PREPARATION_COMMANDS=('<command_1>' '<command_2>' ...'<command_n>')",
"dd if=/dev/vda1 of=/mnt/config-drive",
"oc rsh openstackclient openstack overcloud backup --inventory /home/cloud-admin/tripleo-ansible-inventory.yaml",
"oc get vm",
"virtctl image-upload pvc <backup_image> --pvc-size=<pvc_size> --image-path=<image_path> --insecure",
"oc patch csv -n openstack <csv> --type json -p=\"[{\"op\": \"replace\", \"path\": \"/spec/install/spec/deployments/0/spec/replicas\", \"value\": \"0\"}]\"",
"oc pod osp-director-operator-controller-manager",
"oc get vm controller-0 -o yaml > controller-0-bk.yaml",
"oc edit vm controller-0 @@ -96,10 +96,7 @@ devices: disks: - bootOrder: 1 + cdrom: + bus: sata + name: cdromiso + - dedicatedIOThread: false - dedicatedIOThread: false disk: bus: virtio name: rootdisk @@ -177,9 +174,6 @@ name: tenant terminationGracePeriodSeconds: 0 volumes: + - name: cdromiso + persistentVolumeClaim: + claimName: <backup_image> - dataVolume: name: controller-0-36a1 name: rootdisk",
"virtctl start controller-0",
"virtctl vnc controller-0",
"Finished recovering your system Exiting rear recover Running exit tasks",
"once completed, restore the config-drive partition (which is ISO9660) RESCUE <control_plane_node>:~ USD dd if=/mnt/local/mnt/config-drive of=<config_drive_partition>",
"RESCUE <control_plane_node>:~ # poweroff",
"oc patch csv -n openstack <csv> --type json -p=\"[{\"op\": \"replace\", \"path\": \"/spec/install/spec/deployments/0/spec/replicas\", \"value\": \"1\"}]\"",
"virtctl start controller-0 virtctl start controller-1 virtctl start controller-2",
"pcs status"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/deploying_an_overcloud_in_a_red_hat_openshift_container_platform_cluster_with_director_operator/assembly_backing-up-and-restoring-a-director-operator-deployed-overcloud |
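A small wrapper along these lines can be used to wait for the save request shown above to finish before proceeding; the resource name and namespace mirror the examples in this chapter, and the 30-second polling interval is an arbitrary choice:

until oc get openstackbackuprequest openstackbackupsave -n openstack | grep -qE 'Saved|Error'; do
  echo "backup request still in progress..."
  sleep 30
done
oc get openstackbackuprequest openstackbackupsave -n openstack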
5.6. Performing a Two-Administrator Enrollment | 5.6. Performing a Two-Administrator Enrollment Enrolling machines as clients in the IdM domain is a two-part process. A host entry is created for the client (and stored in the 389 Directory Server instance), and then a keytab is created to provision the client. Both parts are performed automatically by the ipa-client-install command. It is also possible to perform those steps separately; this allows for administrators to prepare machines and the IdM server configuration in advance of actually configuring the clients. This allows more flexible setup scenarios, including bulk deployments. When performing a manual enrollment, the host entry is created separately, and then enrollment is completed when the client script is run, which creates the requisite keytab. Note There are two ways to set the password. You can either supply your own or have IdM generate a random one. There may be a situation where an administrator in one group is prohibited from creating a host entry and, therefore, from simply running the ipa-client-install command and allowing it to create the host. However, that administrator may have the right to run the command after a host entry exists. In that case, one administrator can create the host entry manually, then the second administrator can complete the enrollment by running the ipa-client-install command. An administrator creates the host entry, as described in Section 5.4.2, "Other Examples of Adding a Host Entry" . The second administrator installs the IdM client packages on the machine, as in Section 5.3, "Configuring a Linux System as an IdM Client" . When the second administrator runs the setup script, he must pass his Kerberos password and username (principal) with the ipa-client-install command. For example: The keytab is generated on the server and provisioned to the client machine, so that the client machine is able to connect to the IdM domain. The keytab is saved with root:root ownership and 0600 permissions. | [
"ipa-client-install -w secret -p admin2"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/enrollment_with_separation_of_duties |
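A sketch of the split workflow with placeholder host, realm, and credential values (the host-add invocation is the generic form; adapt it to the host entry procedure referenced in Section 5.4.2):

# administrator 1, on the IdM server: create only the host entry
kinit admin1
ipa host-add client.example.com        # add --ip-address=<addr> if the host is not in DNS
# administrator 2, on the client: complete the enrollment against the existing entry
ipa-client-install -p admin2 -w <admin2_password> --domain=example.com --server=ipaserver.example.com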
Chapter 4. Enabling and disabling encryption in-transit post deployment | Chapter 4. Enabling and disabling encryption in-transit post deployment You can enable encryption in-transit for the existing clusters after the deployment of clusters both in internal and external modes. 4.1. Enabling encryption in-transit after deployment in internal mode Prerequisites OpenShift Data Foundation is deployed and a storage cluster is created. Procedure Patch the storagecluster to add encryption enabled as true to the storage cluster spec: Check the configurations. Wait for around 10 minutes for ceph daemons to restart and then check the pods. Remount existing volumes. Depending on your best practices for application maintenance, you can choose the best approach for your environment to remount or remap volumes. One way to remount is to delete the existing application pod and bring up another application pod to use the volume. Another option is to drain the nodes where the applications are running..This ensures that the volume is unmounted from the current pod and then mounted to a new pod, allowing for remapping or remounting of the volume." 4.2. Disabling encryption in-transit after deployment in internal mode Prerequisites OpenShift Data Foundation is deployed and a storage cluster is created. Encryption in-transit is enabled. Procedure Patch the storagecluster to update encryption enabled as false in the storage cluster spec: Check the configurations. Wait for around 10 minutes for ceph daemons to restart and then check the pods. Remount existing volumes. Depending on your best practices for application maintenance, you can choose the best approach for your environment to remount or remap volumes. One way to remount is to delete the existing application pod and bring up another application pod to use the volume. Another option is to drain the nodes where the applications are running..This ensures that the volume is unmounted from the current pod and then mounted to a new pod, allowing for remapping or remounting of the volume." 4.3. Enabling encryption in-transit after deployment in external mode Prerequisites OpenShift Data Foundation is deployed and a storage cluster is created. Procedure Patch the storagecluster to add encryption enabled as true the storage cluster spec: Check the connection settings in the CR. 4.3.1. Applying encryption in-transit on Red Hat Ceph Storage cluster Procedure Apply Encryption in-transit settings. Check the settings. Restart all Ceph daemons. Wait for the restarting of all the daemons. 4.3.2. Remount existing volumes. Depending on your best practices for application maintenance, you can choose the best approach for your environment to remount or remap volumes. One way to remount is to delete the existing application pod and bring up another application pod to use the volume. Another option is to drain the nodes where the applications are running..This ensures that the volume is unmounted from the current pod and then mounted to a new pod, allowing for remapping or remounting of the volume. 4.4. Disabling encryption in-transit after deployment in external mode Prerequisites OpenShift Data Foundation is deployed and a storage cluster is created. Encryption in-transit is enabled for the external mode cluster. Procedure Removing encryption in-transit settings from Red Hat Ceph Storage cluster Remove and check encryption in-transit configurations. Restart all Ceph daemons. 
Patching the CR Patch the storagecluster to update encryption enabled as false in the storage cluster spec: Check the configurations. Remount existing volumes Depending on your best practices for application maintenance, you can choose the best approach for your environment to remount or remap volumes. One way to remount is to delete the existing application pod and bring up another application pod to use the volume. Another option is to drain the nodes where the applications are running. This ensures that the volume is unmounted from the current pod and then mounted to a new pod, allowing for remapping or remounting of the volume. | [
"oc patch storagecluster ocs-storagecluster -n openshift-storage --type json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/network\", \"value\": {\"connections\": {\"encryption\": {\"enabled\": true}}} }]' storagecluster.ocs.openshift.io/ocs-storagecluster patched",
"oc get storagecluster ocs-storagecluster -n openshift-storage -o yaml | yq '.spec.network' connections: encryption: enabled: true",
"oc get pods -n openshift-storage | grep rook-ceph rook-ceph-crashcollector-ip-10-0-2-111.ec2.internal-796ffcm9kn9 1/1 Running 0 5m11s rook-ceph-crashcollector-ip-10-0-27-61.ec2.internal-854b4d8sk5z 1/1 Running 0 5m9s rook-ceph-crashcollector-ip-10-0-33-53.ec2.internal-589d9f4f8vx 1/1 Running 0 5m7s rook-ceph-exporter-ip-10-0-2-111.ec2.internal-6d48cdc5fd-2tmsl 1/1 Running 0 5m9s rook-ceph-exporter-ip-10-0-27-61.ec2.internal-546c66c7cc-9lnpz 1/1 Running 0 5m7s rook-ceph-exporter-ip-10-0-33-53.ec2.internal-b5555994c-x8mzz 1/1 Running 0 5m5s rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-7bd754f6vwps2 2/2 Running 0 4m56s rook-ceph-mds-ocs-storagecluster-cephfilesystem-b-6cc5cc647c78m 2/2 Running 0 4m30s rook-ceph-mgr-a-6f8467578d-f8279 3/3 Running 0 3m40s rook-ceph-mgr-b-66754d99cf-9q58g 3/3 Running 0 3m27s rook-ceph-mon-a-75bc5dd655-tvdqf 2/2 Running 0 4m7s rook-ceph-mon-b-6b6d4d9b4c-tjbpz 2/2 Running 0 4m55s rook-ceph-mon-c-7456bb5f67-rtwpj 2/2 Running 0 4m32s rook-ceph-operator-7b5b9cdb9b-tvmb6 1/1 Running 0 45m rook-ceph-osd-0-b78dd99f6-n4wbm 2/2 Running 0 3m3s rook-ceph-osd-1-5887bf6d8d-2sncc 2/2 Running 0 2m39s rook-ceph-osd-2-784b59c4c8-44phh 2/2 Running 0 2m14s rook-ceph-osd-prepare-a075cf185c9b2e5d92ec3f7769565e38-ztrms 0/1 Completed 0 42m rook-ceph-osd-prepare-b4b48dc5e3bef99ab377e2a255a9142a-mvgnd 0/1 Completed 0 42m rook-ceph-osd-prepare-fae2ea2ad4aacbf62010ae5b60b87f57-6t9l5 0/1 Completed 0 42m",
"oc get storagecluster -n openshift-storage NAME AGE PHASE EXTERNAL CREATED AT VERSION ocs-storagecluster 27m Ready 2024-11-06T16:15:26Z 4.18.0",
"~ USD oc patch storagecluster ocs-storagecluster -n openshift-storage --type json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/network\", \"value\": {\"connections\": {\"encryption\": {\"enabled\": false}}} }]' storagecluster.ocs.openshift.io/ocs-storagecluster patched",
"oc get storagecluster ocs-storagecluster -n openshift-storage -o yaml | yq '.spec.network' connections: encryption: enabled: false",
"oc get pods -n openshift-storage | grep rook-ceph rook-ceph-crashcollector-ip-10-0-2-111.ec2.internal-796ffcm9kn9 1/1 Running 0 5m11s rook-ceph-crashcollector-ip-10-0-27-61.ec2.internal-854b4d8sk5z 1/1 Running 0 5m9s rook-ceph-crashcollector-ip-10-0-33-53.ec2.internal-589d9f4f8vx 1/1 Running 0 5m7s rook-ceph-exporter-ip-10-0-2-111.ec2.internal-6d48cdc5fd-2tmsl 1/1 Running 0 5m9s rook-ceph-exporter-ip-10-0-27-61.ec2.internal-546c66c7cc-9lnpz 1/1 Running 0 5m7s rook-ceph-exporter-ip-10-0-33-53.ec2.internal-b5555994c-x8mzz 1/1 Running 0 5m5s rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-7bd754f6vwps2 2/2 Running 0 4m56s rook-ceph-mds-ocs-storagecluster-cephfilesystem-b-6cc5cc647c78m 2/2 Running 0 4m30s rook-ceph-mgr-a-6f8467578d-f8279 3/3 Running 0 3m40s rook-ceph-mgr-b-66754d99cf-9q58g 3/3 Running 0 3m27s rook-ceph-mon-a-75bc5dd655-tvdqf 2/2 Running 0 4m7s rook-ceph-mon-b-6b6d4d9b4c-tjbpz 2/2 Running 0 4m55s rook-ceph-mon-c-7456bb5f67-rtwpj 2/2 Running 0 4m32s rook-ceph-operator-7b5b9cdb9b-tvmb6 1/1 Running 0 45m rook-ceph-osd-0-b78dd99f6-n4wbm 2/2 Running 0 3m3s rook-ceph-osd-1-5887bf6d8d-2sncc 2/2 Running 0 2m39s rook-ceph-osd-2-784b59c4c8-44phh 2/2 Running 0 2m14s rook-ceph-osd-prepare-a075cf185c9b2e5d92ec3f7769565e38-ztrms 0/1 Completed 0 42m rook-ceph-osd-prepare-b4b48dc5e3bef99ab377e2a255a9142a-mvgnd 0/1 Completed 0 42m rook-ceph-osd-prepare-fae2ea2ad4aacbf62010ae5b60b87f57-6t9l5 0/1 Completed 0 42m",
"oc get storagecluster -n openshift-storage NAME AGE PHASE EXTERNAL CREATED AT VERSION ocs-storagecluster 27m Ready 2024-11-06T16:15:26Z 4.18.0",
"oc patch storagecluster ocs-external-storagecluster -n openshift-storage --type json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/network\", \"value\": {\"connections\": {\"encryption\": {\"enabled\": true}}} }]' storagecluster.ocs.openshift.io/ocs-external-storagecluster patched",
"get storagecluster NAME AGE PHASE EXTERNAL CREATED AT VERSION ocs-external-storagecluster 9h Ready true 2024-11-06T20:48:03Z 4.18.0",
"oc get storagecluster ocs-external-storagecluster -o yaml | yq '.spec.network.connections' encryption: enabled: true",
"root@ceph-client ~]# ceph config set global ms_client_mode secure ceph config set global ms_cluster_mode secure ceph config set global ms_service_mode secure ceph config set global rbd_default_map_options ms_mode=secure",
"ceph config dump | grep ms_ ceph config dump | grep ms_ global basic ms_client_mode secure * global basic ms_cluster_mode secure * global basic ms_service_mode secure * global advanced rbd_default_map_options ms_mode=secure *",
"ceph orch ls --format plain | tail -n +2 | awk '{print USD1}' | xargs -I {} ceph orch restart {} Scheduled to restart alertmanager.osd-0 on host 'osd-0' Scheduled to restart ceph-exporter.osd-0 on host 'osd-0' Scheduled to restart ceph-exporter.osd-2 on host 'osd-2' Scheduled to restart ceph-exporter.osd-3 on host 'osd-3' Scheduled to restart ceph-exporter.osd-1 on host 'osd-1' Scheduled to restart crash.osd-0 on host 'osd-0' Scheduled to restart crash.osd-2 on host 'osd-2' Scheduled to restart crash.osd-3 on host 'osd-3' Scheduled to restart crash.osd-1 on host 'osd-1' Scheduled to restart grafana.osd-0 on host 'osd-0' Scheduled to restart mds.fsvol001.osd-0.lpciqk on host 'osd-0' Scheduled to restart mds.fsvol001.osd-2.wocnxz on host 'osd-2' Scheduled to restart mgr.osd-0.dtkyni on host 'osd-0' Scheduled to restart mgr.osd-2.kqcxwu on host 'osd-2' Scheduled to restart mon.osd-2 on host 'osd-2' Scheduled to restart mon.osd-3 on host 'osd-3' Scheduled to restart mon.osd-1 on host 'osd-1' Scheduled to restart node-exporter.osd-0 on host 'osd-0' Scheduled to restart node-exporter.osd-2 on host 'osd-2' Scheduled to restart node-exporter.osd-3 on host 'osd-3' Scheduled to restart node-exporter.osd-1 on host 'osd-1' Scheduled to restart osd.1 on host 'osd-0' Scheduled to restart osd.4 on host 'osd-0' Scheduled to restart osd.0 on host 'osd-2' Scheduled to restart osd.5 on host 'osd-2' Scheduled to restart osd.2 on host 'osd-3' Scheduled to restart osd.6 on host 'osd-3' Scheduled to restart osd.3 on host 'osd-1' Scheduled to restart osd.7 on host 'osd-1' Scheduled to restart prometheus.osd-0 on host 'osd-0' Scheduled to restart rgw.rgw.ssl.osd-1.smzpfj on host 'osd-1'",
"ceph config rm global ms_client_mode ceph config rm global ms_cluster_mode ceph config rm global ms_service_mode ceph config rm global rbd_default_map_options ceph config dump | grep ms_",
"ceph orch ls --format plain | tail -n +2 | awk '{print USD1}' | xargs -I {} ceph orch restart {} Scheduled to restart alertmanager.osd-0 on host 'osd-0' Scheduled to restart ceph-exporter.osd-0 on host 'osd-0' Scheduled to restart ceph-exporter.osd-2 on host 'osd-2' Scheduled to restart ceph-exporter.osd-3 on host 'osd-3' Scheduled to restart ceph-exporter.osd-1 on host 'osd-1' Scheduled to restart crash.osd-0 on host 'osd-0' Scheduled to restart crash.osd-2 on host 'osd-2' Scheduled to restart crash.osd-3 on host 'osd-3' Scheduled to restart crash.osd-1 on host 'osd-1' Scheduled to restart grafana.osd-0 on host 'osd-0' Scheduled to restart mds.fsvol001.osd-0.lpciqk on host 'osd-0' Scheduled to restart mds.fsvol001.osd-2.wocnxz on host 'osd-2' Scheduled to restart mgr.osd-0.dtkyni on host 'osd-0' Scheduled to restart mgr.osd-2.kqcxwu on host 'osd-2' Scheduled to restart mon.osd-2 on host 'osd-2' Scheduled to restart mon.osd-3 on host 'osd-3' Scheduled to restart mon.osd-1 on host 'osd-1' Scheduled to restart node-exporter.osd-0 on host 'osd-0' Scheduled to restart node-exporter.osd-2 on host 'osd-2' Scheduled to restart node-exporter.osd-3 on host 'osd-3' Scheduled to restart node-exporter.osd-1 on host 'osd-1' Scheduled to restart osd.1 on host 'osd-0' Scheduled to restart osd.4 on host 'osd-0' Scheduled to restart osd.0 on host 'osd-2' Scheduled to restart osd.5 on host 'osd-2' Scheduled to restart osd.2 on host 'osd-3' Scheduled to restart osd.6 on host 'osd-3' Scheduled to restart osd.3 on host 'osd-1' Scheduled to restart osd.7 on host 'osd-1' Scheduled to restart prometheus.osd-0 on host 'osd-0' Scheduled to restart rgw.rgw.ssl.osd-1.smzpfj on host 'osd-1'",
"ceph orch ps NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID alertmanager.osd-0 osd-0 *:9093,9094 running (116s) 9s ago 10h 19.5M - 0.26.0 7dbf12091920 4694a72d4bbd ceph-exporter.osd-0 osd-0 running (19s) 9s ago 10h 7310k - 18.2.1-229.el9cp 3fd804e38f5b 49bdc7d99471 ceph-exporter.osd-1 osd-1 running (97s) 26s ago 10h 7285k - 18.2.1-229.el9cp 3fd804e38f5b 7000d59d23b4 ceph-exporter.osd-2 osd-2 running (76s) 26s ago 10h 7306k - 18.2.1-229.el9cp 3fd804e38f5b 3907515cc352 ceph-exporter.osd-3 osd-3 running (49s) 26s ago 10h 6971k - 18.2.1-229.el9cp 3fd804e38f5b 3f3952490780 crash.osd-0 osd-0 running (17s) 9s ago 10h 6878k - 18.2.1-229.el9cp 3fd804e38f5b 38e041fb86e3 crash.osd-1 osd-1 running (96s) 26s ago 10h 6895k - 18.2.1-229.el9cp 3fd804e38f5b 21ce3ef7d896 crash.osd-2 osd-2 running (74s) 26s ago 10h 6899k - 18.2.1-229.el9cp 3fd804e38f5b 210ca9c8d928 crash.osd-3 osd-3 running (47s) 26s ago 10h 6899k - 18.2.1-229.el9cp 3fd804e38f5b 710d42d9d138 grafana.osd-0 osd-0 *:3000 running (114s) 9s ago 10h 72.9M - 10.4.0-pre f142b583a1b1 3dc5e2248e95 mds.fsvol001.osd-0.qjntcu osd-0 running (99s) 9s ago 10h 17.5M - 18.2.1-229.el9cp 3fd804e38f5b 50efa881c04b mds.fsvol001.osd-2.qneujv osd-2 running (51s) 26s ago 10h 15.3M - 18.2.1-229.el9cp 3fd804e38f5b a306f2d2d676 mgr.osd-0.zukgyq osd-0 *:9283,8765,8443 running (21s) 9s ago 10h 442M - 18.2.1-229.el9cp 3fd804e38f5b 8ef9b728675e mgr.osd-1.jqfyal osd-1 *:8443,9283,8765 running (92s) 26s ago 10h 480M - 18.2.1-229.el9cp 3fd804e38f5b 1ab52db89bfd mon.osd-1 osd-1 running (90s) 26s ago 10h 41.7M 2048M 18.2.1-229.el9cp 3fd804e38f5b 88d1fe1e10ac mon.osd-2 osd-2 running (72s) 26s ago 10h 31.1M 2048M 18.2.1-229.el9cp 3fd804e38f5b 02f57d3bb44f mon.osd-3 osd-3 running (45s) 26s ago 10h 24.0M 2048M 18.2.1-229.el9cp 3fd804e38f5b 5e3783f2b4fa node-exporter.osd-0 osd-0 *:9100 running (15s) 9s ago 10h 7843k - 1.7.0 8c904aa522d0 2dae2127349b node-exporter.osd-1 osd-1 *:9100 running (94s) 26s ago 10h 11.2M - 1.7.0 8c904aa522d0 010c3fcd55cd node-exporter.osd-2 osd-2 *:9100 running (69s) 26s ago 10h 17.2M - 1.7.0 8c904aa522d0 436f2d513f31 node-exporter.osd-3 osd-3 *:9100 running (41s) 26s ago 10h 12.4M - 1.7.0 8c904aa522d0 5579f0d494b8 osd.0 osd-0 running (109s) 9s ago 10h 126M 4096M 18.2.1-229.el9cp 3fd804e38f5b 997076cd39d4 osd.1 osd-1 running (85s) 26s ago 10h 139M 4096M 18.2.1-229.el9cp 3fd804e38f5b 08b720f0587d osd.2 osd-2 running (65s) 26s ago 10h 143M 4096M 18.2.1-229.el9cp 3fd804e38f5b 104ad4227163 osd.3 osd-3 running (36s) 26s ago 10h 94.5M 1435M 18.2.1-229.el9cp 3fd804e38f5b db8b265d9f43 osd.4 osd-0 running (104s) 9s ago 10h 164M 4096M 18.2.1-229.el9cp 3fd804e38f5b 50dcbbf7e012 osd.5 osd-1 running (80s) 26s ago 10h 131M 4096M 18.2.1-229.el9cp 3fd804e38f5b 63b21fe970b5 osd.6 osd-3 running (32s) 26s ago 10h 243M 1435M 18.2.1-229.el9cp 3fd804e38f5b 26c7ba208489 osd.7 osd-2 running (61s) 26s ago 10h 130M 4096M 18.2.1-229.el9cp 3fd804e38f5b 871a2b75e64f prometheus.osd-0 osd-0 *:9095 running (12s) 9s ago 10h 44.6M - 2.48.0 58069186198d e49a064d2478 rgw.rgw.ssl.osd-1.bsmbgd osd-1 *:80 running (78s) 26s ago 10h 75.4M - 18.2.1-229.el9cp 3fd804e38f5b d03c9f7ae4a4",
"oc patch storagecluster ocs-external-storagecluster -n openshift-storage --type json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/network\", \"value\": {\"connections\": {\"encryption\": {\"enabled\": false}}} }]' storagecluster.ocs.openshift.io/ocs-external-storagecluster patched",
"oc get storagecluster NAME AGE PHASE EXTERNAL CREATED AT VERSION ocs-external-storagecluster 12h Ready true 2024-11-06T20:48:03Z 4.18.0",
"oc get storagecluster ocs-external-storagecluster -o yaml | yq '.spec.network.connections' encryption: enabled: false"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/managing_and_allocating_storage_resources/enabling-and-disabling-encryption-in-transit-post-deployment_rhodf |
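The remount step described in the sections above can also be performed from the command line. The following lines are an illustrative sketch only: the pod, namespace, and node names are placeholders, and the drain flags shown here should be verified against your oc client version before use.

# Option 1: recreate the application pod so that its volume is remounted
oc delete pod <application_pod> -n <application_namespace>

# Option 2: drain the node running the application, then make it schedulable again
oc adm drain <node_name> --ignore-daemonsets --delete-emptydir-data
oc adm uncordon <node_name>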
Chapter 22. Write Barriers | Chapter 22. Write Barriers A write barrier is a kernel mechanism used to ensure that file system metadata is correctly written and ordered on persistent storage, even when storage devices with volatile write caches lose power. File systems with write barriers enabled ensure that data transmitted via fsync() is persistent throughout a power loss. Enabling write barriers incurs a substantial performance penalty for some applications. Specifically, applications that use fsync() heavily or create and delete many small files will likely run much slower. 22.1. Importance of Write Barriers File systems safely update metadata, ensuring consistency. Journalled file systems bundle metadata updates into transactions and send them to persistent storage in the following manner: The file system sends the body of the transaction to the storage device. The file system sends a commit block. If the transaction and its corresponding commit block are written to disk, the file system assumes that the transaction will survive any power failure. However, file system integrity during power failure becomes more complex for storage devices with extra caches. Storage target devices like local S-ATA or SAS drives may have write caches ranging from 32MB to 64MB in size (with modern drives). Hardware RAID controllers often contain internal write caches. Further, high end arrays, like those from NetApp, IBM, Hitachi and EMC (among others), also have large caches. Storage devices with write caches report I/O as "complete" when the data is in cache; if the cache loses power, it loses its data as well. Worse, as the cache de-stages to persistent storage, it may change the original metadata ordering. When this occurs, the commit block may be present on disk without having the complete, associated transaction in place. As a result, the journal may replay these uninitialized transaction blocks into the file system during post-power-loss recovery; this will cause data inconsistency and corruption. How Write Barriers Work Write barriers are implemented in the Linux kernel via storage write cache flushes before and after the I/O, which is order-critical. After the transaction is written, the storage cache is flushed, the commit block is written, and the cache is flushed again. This ensures that: The disk contains all the data. No re-ordering has occurred. With barriers enabled, an fsync() call also issues a storage cache flush. This guarantees that file data is persistent on disk even if power loss occurs shortly after fsync() returns. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/ch-writebarriers
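As a hedged illustration of the performance trade-off described above, write barriers can usually be toggled per file system with mount options. The option names below (barrier for ext4, nobarrier for XFS) reflect Red Hat Enterprise Linux 7 era defaults and are shown only as a sketch; confirm them for your kernel and file system, and disable barriers only when the write cache is battery-backed or otherwise non-volatile.

# ext4: barriers are on by default; barrier=0 would disable them
mount -o barrier=1 /dev/sdb1 /mnt/data
# XFS: nobarrier disables barriers (only safe with a protected write cache)
mount -o nobarrier /dev/sdc1 /mnt/scratch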
Chapter 10. Updating SSSD Containers | Chapter 10. Updating SSSD Containers This procedure describes how you can update System Security Services Daemon (SSSD) containers if a new version of the rhel7/sssd image is released. Procedure Stop the SSSD service: If SSSD is running as a system container: If SSSD is running as an application container: Use the docker rmi command to remove the image: Install the latest SSSD image: Start the SSSD service: If SSSD runs as a system container: If SSSD runs as an application container, start each container using the atomic start command: | [
"systemctl stop sssd",
"atomic stop <container_name>",
"docker rm rhel7/sssd",
"atomic install rhel7/sssd",
"systemctl start sssd",
"atomic start <container_name>"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/using_containerized_identity_management_services/sssd-centralized-ccache-updating-sssd-containers |
Red Hat JBoss Web Server 6.0 Release Notes | Red Hat JBoss Web Server 6.0 Release Notes Red Hat JBoss Web Server 6.0 For Use with the Red Hat JBoss Web Server 6.0 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_jboss_web_server/6.0/html/red_hat_jboss_web_server_6.0_release_notes/index |
Chapter 4. Configuration | Chapter 4. Configuration This chapter describes the process for binding the AMQ OpenWire JMS implementation to your JMS application and setting configuration options. JMS uses the Java Naming Directory Interface (JNDI) to register and look up API implementations and other resources. This enables you to write code to the JMS API without tying it to a particular implementation. Configuration options are exposed as query parameters on the connection URI. For more information about configuring AMQ OpenWire JMS, see the ActiveMQ user guide . 4.1. Configuring the JNDI initial context JMS applications use a JNDI InitialContext object obtained from an InitialContextFactory to look up JMS objects such as the connection factory. AMQ OpenWire JMS provides an implementation of the InitialContextFactory in the org.apache.activemq.jndi.ActiveMQInitialContextFactory class. The InitialContextFactory implementation is discovered when the InitialContext object is instantiated: javax.naming.Context context = new javax.naming.InitialContext(); To find an implementation, JNDI must be configured in your environment. There are three ways of achieving this: using a jndi.properties file, using a system property, or using the initial context API. Using a jndi.properties file Create a file named jndi.properties and place it on the Java classpath. Add a property with the key java.naming.factory.initial . Example: Setting the JNDI initial context factory using a jndi.properties file java.naming.factory.initial = org.apache.activemq.jndi.ActiveMQInitialContextFactory In Maven-based projects, the jndi.properties file is placed in the <project-dir> /src/main/resources directory. Using a system property Set the java.naming.factory.initial system property. Example: Setting the JNDI initial context factory using a system property USD java -Djava.naming.factory.initial=org.apache.activemq.jndi.ActiveMQInitialContextFactory ... Using the initial context API Use the JNDI initial context API to set properties programmatically. Example: Setting JNDI properties programmatically Hashtable<Object, Object> env = new Hashtable<>(); env.put("java.naming.factory.initial", "org.apache.activemq.jndi.ActiveMQInitialContextFactory"); InitialContext context = new InitialContext(env); Note that you can use the same API to set the JNDI properties for connection factories, queues, and topics. 4.2. Configuring the connection factory The JMS connection factory is the entry point for creating connections. It uses a connection URI that encodes your application-specific configuration settings. To set the factory name and connection URI, create a property in the format below. You can store this configuration in a jndi.properties file or set the corresponding system property. The JNDI property format for connection factories connectionFactory. <lookup-name> = <connection-uri> For example, this is how you might configure a factory named app1 : Example: Setting the connection factory in a jndi.properties file connectionFactory.app1 = tcp://example.net:61616?jms.clientID=backend You can then use the JNDI context to look up your configured connection factory using the name app1 : ConnectionFactory factory = (ConnectionFactory) context.lookup("app1"); 4.3. Connection URIs Connections are configured using a connection URI. The connection URI specifies the remote host, port, and a set of configuration options, which are set as query parameters. For more information about the available options, see Chapter 5, Configuration options .
The connection URI format The scheme is tcp for unencrypted connections and ssl for SSL/TLS connections. For example, the following is a connection URI that connects to host example.net at port 61616 and sets the client ID to backend : Example: A connection URI Failover URIs URIs used for reconnect and failover can contain multiple connection URIs. They take the following form: The failover URI format Transport options prefixed with nested. are applied to each connection URI in the list. 4.4. Configuring queue and topic names JMS provides the option of using JNDI to look up deployment-specific queue and topic resources. To set queue and topic names in JNDI, create properties in the following format. Either place this configuration in a jndi.properties file or set corresponding system properties. The JNDI property format for queues and topics queue. <lookup-name> = <queue-name> topic. <lookup-name> = <topic-name> For example, the following properties define the names jobs and notifications for two deployment-specific resources: Example: Setting queue and topic names in a jndi.properties file queue.jobs = app1/work-items topic.notifications = app1/updates You can then look up the resources by their JNDI names: Queue queue = (Queue) context.lookup("jobs"); Topic topic = (Topic) context.lookup("notifications"); | [
"javax.naming.Context context = new javax.naming.InitialContext();",
"java.naming.factory.initial = org.apache.activemq.jndi.ActiveMQInitialContextFactory",
"java -Djava.naming.factory.initial=org.apache.activemq.jndi.ActiveMQInitialContextFactory",
"Hashtable<Object, Object> env = new Hashtable<>(); env.put(\"java.naming.factory.initial\", \"org.apache.activemq.jndi.ActiveMQInitialContextFactory\"); InitialContext context = new InitialContext(env);",
"connectionFactory. <lookup-name> = <connection-uri>",
"connectionFactory.app1 = tcp://example.net:61616?jms.clientID=backend",
"ConnectionFactory factory = (ConnectionFactory) context.lookup(\"app1\");",
"<scheme>://<host>:<port>[?<option>=<value>[&<option>=<value>...]]",
"tcp://example.net:61616?jms.clientID=backend",
"failover:(<connection-uri>[,<connection-uri>])[?<option>=<value>[&<option>=<value>...]]",
"queue. <lookup-name> = <queue-name> topic. <lookup-name> = <topic-name>",
"queue.jobs = app1/work-items topic.notifications = app1/updates",
"Queue queue = (Queue) context.lookup(\"jobs\"); Topic topic = (Topic) context.lookup(\"notifications\");"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_the_amq_openwire_jms_client/configuration |
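To make the failover URI format more concrete, the following jndi.properties entry is an illustrative sketch only. The broker host names are placeholders, and maxReconnectAttempts and nested.connectionTimeout are examples of commonly used ActiveMQ failover and transport options; check both against the client version you are running before relying on them.

connectionFactory.app1 = failover:(tcp://broker1.example.net:61616,tcp://broker2.example.net:61616)?maxReconnectAttempts=10&nested.connectionTimeout=30000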
Chapter 11. Integrating with Amazon S3 | Chapter 11. Integrating with Amazon S3 You can integrate Red Hat Advanced Cluster Security for Kubernetes with Amazon S3 to enable data backups. You can use these backups for data restoration in the case of an infrastructure disaster or corrupt data. After you integrate with Amazon S3, you can schedule daily or weekly backups and do manual on-demand backups. The backup includes the entire Red Hat Advanced Cluster Security for Kubernetes database, which includes all configurations, resources, events, and certificates. Make sure that backups are stored securely. Important If you are using Red Hat Advanced Cluster Security for Kubernetes version 3.0.53 or older, the backup does not include certificates. If your Amazon S3 is part of an air-gapped environment, you must add your AWS root CA as a trusted certificate authority in Red Hat Advanced Cluster Security for Kubernetes. 11.1. Configuring Amazon S3 integration in Red Hat Advanced Cluster Security for Kubernetes To configure Amazon S3 backups, create a new integration in Red Hat Advanced Cluster Security for Kubernetes. Prerequisites An existing S3 Bucket. To create a new bucket with required permissions, see the Amazon documentation topic Creating a bucket . Read , write , and delete permissions for the S3 bucket, the Access key ID , and the Secret access key . If you are using KIAM , kube2iam or another proxy, then an IAM role that has the read , write , and delete permissions. Procedure In the RHACS portal, go to Platform Configuration Integrations . Scroll down to the External backups section and select Amazon S3 . Click New Integration ( add icon). Enter a name for Integration Name . Enter the number of backups to retain in the Backups To Retain box. For Schedule , select the backup frequency as daily or weekly and the time to run the backup process. Enter the Bucket name where you want to store the backup. Optionally, enter an Object Prefix if you want to save the backups in a specific folder structure. For more information, see the Amazon documentation topic Working with object metadata . Enter the Endpoint for the bucket if you are using a non-public S3 instance, otherwise leave it blank. Enter the Region for the bucket. Turn on the Use Container IAM Role toggle or enter the Access Key ID , and the Secret Access Key . Select Test to confirm that the integration with Amazon S3 is working. Select Create to generate the configuration. Once configured, Red Hat Advanced Cluster Security for Kubernetes automatically backs up all data according to the specified schedule. 11.2. Performing on-demand backups on Amazon S3 Uses the RHACS portal to trigger manual backups of Red Hat Advanced Cluster Security for Kubernetes on Amazon S3. Prerequisites You must have already integrated Red Hat Advanced Cluster Security for Kubernetes with Amazon S3. Procedure In the RHACS portal, go to Platform Configuration Integrations . Under the External backups section, click Amazon S3 . Select the integration name for the S3 bucket where you want to do a backup. Click Trigger Backup . Note Currently, when you select the Trigger Backup option, there is no notification. However, Red Hat Advanced Cluster Security for Kubernetes begins the backup task in the background. 11.3. Additional resources Backing up Red Hat Advanced Cluster Security for Kubernetes Restoring from a backup | null | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/integrating/integrate-with-amazon-s3 |
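The read, write, and delete permissions listed in the prerequisites above can be expressed as an IAM policy along the following lines. This is a sketch only, not the exact policy required by the product: the bucket name is a placeholder and the action list should be reviewed against your own security requirements.

{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow", "Action": ["s3:ListBucket", "s3:GetBucketLocation"], "Resource": "arn:aws:s3:::<backup-bucket>" },
    { "Effect": "Allow", "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"], "Resource": "arn:aws:s3:::<backup-bucket>/*" }
  ]
}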
Chapter 7. Red Hat Developer Hub integration with Amazon Web Services (AWS) | Chapter 7. Red Hat Developer Hub integration with Amazon Web Services (AWS) You can integrate your Red Hat Developer Hub application with Amazon Web Services (AWS), which can help you streamline your workflows within the AWS ecosystem. Integrating the Developer Hub resources with AWS provides access to a comprehensive suite of tools, services, and solutions. The integration with AWS requires the deployment of Developer Hub in Elastic Kubernetes Service (EKS) using one of the following methods: The Helm chart The Red Hat Developer Hub Operator 7.1. Monitoring and logging with Amazon Web Services (AWS) in Red Hat Developer Hub In the Red Hat Developer Hub, monitoring and logging are facilitated through Amazon Web Services (AWS) integration. With features like Amazon CloudWatch for real-time monitoring and Amazon Prometheus for comprehensive logging, you can ensure the reliability, scalability, and compliance of your Developer Hub application hosted on AWS infrastructure. This integration enables you to oversee, diagnose, and refine your applications in the Red Hat ecosystem, leading to an improved development and operational journey. 7.1.1. Monitoring with Amazon Prometheus Red Hat Developer Hub provides Prometheus metrics related to the running application. For more information about enabling or deploying Prometheus for EKS clusters, see Prometheus metrics in the Amazon documentation. To monitor Developer Hub using Amazon Prometheus , you need to create an Amazon managed service for the Prometheus workspace and configure the ingestion of the Developer Hub Prometheus metrics. For more information, see Create a workspace and Ingest Prometheus metrics to the workspace sections in the Amazon documentation. After ingesting Prometheus metrics into the created workspace, you can configure the metrics scraping to extract data from pods based on specific pod annotations. 7.1.1.1. Configuring annotations for monitoring You can configure the annotations for monitoring in both Helm deployment and Operator-backed deployment. Helm deployment To annotate the backstage pod for monitoring, update your values.yaml file as follows: upstream: backstage: # --- TRUNCATED --- podAnnotations: prometheus.io/scrape: 'true' prometheus.io/path: '/metrics' prometheus.io/port: '7007' prometheus.io/scheme: 'http' Operator-backed deployment Procedure As an administrator of the operator, edit the default configuration to add Prometheus annotations as follows: # Update OPERATOR_NS accordingly OPERATOR_NS=rhdh-operator kubectl edit configmap backstage-default-config -n "USD{OPERATOR_NS}" Find the deployment.yaml key in the ConfigMap and add the annotations to the spec.template.metadata.annotations field as follows: deployment.yaml: |- apiVersion: apps/v1 kind: Deployment # --- truncated --- spec: template: # --- truncated --- metadata: labels: rhdh.redhat.com/app: # placeholder for 'backstage-<cr-name>' # --- truncated --- annotations: prometheus.io/scrape: 'true' prometheus.io/path: '/metrics' prometheus.io/port: '7007' prometheus.io/scheme: 'http' # --- truncated --- Save your changes. Verification To verify if the scraping works: Use kubectl to port-forward the Prometheus console to your local machine as follows: kubectl --namespace=prometheus port-forward deploy/prometheus-server 9090 Open your web browser and navigate to http://localhost:9090 to access the Prometheus console. Monitor relevant metrics, such as process_cpu_user_seconds_total . 
7.1.2. Logging with Amazon CloudWatch logs Logging within the Red Hat Developer Hub relies on the winston library . By default, logs at the debug level are not recorded. To activate debug logs, you must set the environment variable LOG_LEVEL to debug in your Red Hat Developer Hub instance. 7.1.2.1. Configuring the application log level You can configure the application log level in both Helm deployment and Operator-backed deployment. Helm deployment To update the logging level, add the environment variable LOG_LEVEL to your Helm chart's values.yaml file: Operator-backed deployment You can modify the logging level by including the environment variable LOG_LEVEL in your custom resource as follows: 7.1.2.2. Retrieving logs from Amazon CloudWatch The CloudWatch Container Insights are used to capture logs and metrics for Amazon EKS. For more information, see Logging for Amazon EKS documentation. To capture the logs and metrics, install the Amazon CloudWatch Observability EKS add-on in your cluster. Following the setup of Container Insights, you can access container logs using Logs Insights or Live Tail views. CloudWatch names the log group where all container logs are consolidated in the following manner: /aws/containerinsights/<ClusterName>/application Following is an example query to retrieve logs from the Developer Hub instance: 7.2. Using Amazon Cognito as an authentication provider in Red Hat Developer Hub In this section, Amazon Cognito is an AWS service for adding an authentication layer to Developer Hub. You can sign in directly to the Developer Hub using a user pool or federate through a third-party identity provider. Although Amazon Cognito is not part of the core authentication providers for the Developer Hub, it can be integrated using the generic OpenID Connect (OIDC) provider. You can configure your Developer Hub in both Helm Chart and Operator-backed deployments. Prerequisites You have a User Pool or you have created a new one. For more information about user pools, see Amazon Cognito user pools documentation. Note Ensure that you have noted the AWS region where the user pool is located and the user pool ID. You have created an App Client within your user pool for integrating the hosted UI. For more information, see Setting up the hosted UI with the Amazon Cognito console . When setting up the hosted UI using the Amazon Cognito console, ensure to make the following adjustments: In the Allowed callback URL(s) section, include the URL https://<rhdh_url>/api/auth/oidc/handler/frame . Ensure to replace <rhdh_url> with your Developer Hub application's URL, such as my.rhdh.example.com . Similarly, in the Allowed sign-out URL(s) section, add https://<rhdh_url> . Replace <rhdh_url> with your Developer Hub application's URL, such as my.rhdh.example.com . Under OAuth 2.0 grant types , select Authorization code grant to return an authorization code.
Under OpenID Connect scopes , ensure to select at least the following scopes: OpenID Profile Email Helm deployment Procedure Edit or create your custom app-config-rhdh ConfigMap as follows: apiVersion: v1 kind: ConfigMap metadata: name: app-config-rhdh data: "app-config-rhdh.yaml": | # --- Truncated --- app: title: Red Hat Developer Hub signInPage: oidc auth: environment: production session: secret: USD{AUTH_SESSION_SECRET} providers: oidc: production: clientId: USD{AWS_COGNITO_APP_CLIENT_ID} clientSecret: USD{AWS_COGNITO_APP_CLIENT_SECRET} metadataUrl: USD{AWS_COGNITO_APP_METADATA_URL} callbackUrl: USD{AWS_COGNITO_APP_CALLBACK_URL} scope: 'openid profile email' prompt: auto Edit or create your custom secrets-rhdh Secret using the following template: apiVersion: v1 kind: Secret metadata: name: secrets-rhdh stringData: AUTH_SESSION_SECRET: "my super auth session secret - change me!!!" AWS_COGNITO_APP_CLIENT_ID: "my-aws-cognito-app-client-id" AWS_COGNITO_APP_CLIENT_SECRET: "my-aws-cognito-app-client-secret" AWS_COGNITO_APP_METADATA_URL: "https://cognito-idp.[region].amazonaws.com/[userPoolId]/.well-known/openid-configuration" AWS_COGNITO_APP_CALLBACK_URL: "https://[rhdh_dns]/api/auth/oidc/handler/frame" Add references of both the ConfigMap and Secret resources in your values.yaml file: upstream: backstage: image: pullSecrets: - rhdh-pull-secret podSecurityContext: fsGroup: 2000 extraAppConfig: - filename: app-config-rhdh.yaml configMapRef: app-config-rhdh extraEnvVarsSecrets: - secrets-rhdh Upgrade the Helm deployment: helm upgrade rhdh \ openshift-helm-charts/redhat-developer-hub \ [--version 1.2.6] \ --values /path/to/values.yaml Operator-backed deployment Add the following code to your app-config-rhdh ConfigMap: apiVersion: v1 kind: ConfigMap metadata: name: app-config-rhdh data: "app-config-rhdh.yaml": | # --- Truncated --- signInPage: oidc auth: # Production to disable guest user login environment: production # Providing an auth.session.secret is needed because the oidc provider requires session support. session: secret: USD{AUTH_SESSION_SECRET} providers: oidc: production: # See https://github.com/backstage/backstage/blob/master/plugins/auth-backend-module-oidc-provider/config.d.ts clientId: USD{AWS_COGNITO_APP_CLIENT_ID} clientSecret: USD{AWS_COGNITO_APP_CLIENT_SECRET} metadataUrl: USD{AWS_COGNITO_APP_METADATA_URL} callbackUrl: USD{AWS_COGNITO_APP_CALLBACK_URL} # Minimal set of scopes needed. Feel free to add more if needed. scope: 'openid profile email' # Note that by default, this provider will use the 'none' prompt which assumes that your are already logged on in the IDP. # You should set prompt to: # - auto: will let the IDP decide if you need to log on or if you can skip login when you have an active SSO session # - login: will force the IDP to always present a login form to the user prompt: auto Add the following code to your secrets-rhdh Secret: apiVersion: v1 kind: Secret metadata: name: secrets-rhdh stringData: # --- Truncated --- # TODO: Change auth session secret. AUTH_SESSION_SECRET: "my super auth session secret - change me!!!" 
# TODO: user pool app client ID AWS_COGNITO_APP_CLIENT_ID: "my-aws-cognito-app-client-id" # TODO: user pool app client Secret AWS_COGNITO_APP_CLIENT_SECRET: "my-aws-cognito-app-client-secret" # TODO: Replace region and user pool ID AWS_COGNITO_APP_METADATA_URL: "https://cognito-idp.[region].amazonaws.com/[userPoolId]/.well-known/openid-configuration" # TODO: Replace <rhdh_dns> AWS_COGNITO_APP_CALLBACK_URL: "https://[rhdh_dns]/api/auth/oidc/handler/frame" Ensure your Custom Resource contains references to both the app-config-rhdh ConfigMap and secrets-rhdh Secret: apiVersion: rhdh.redhat.com/v1alpha1 kind: Backstage metadata: # TODO: this the name of your Developer Hub instance name: my-rhdh spec: application: imagePullSecrets: - "rhdh-pull-secret" route: enabled: false appConfig: configMaps: - name: "app-config-rhdh" extraEnvs: secrets: - name: "secrets-rhdh" Optional: If you have an existing Developer Hub instance backed by the Custom Resource and you have not edited it, you can manually delete the Developer Hub deployment to recreate it using the operator. Run the following command to delete the Developer Hub deployment: kubectl delete deployment -l app.kubernetes.io/instance=<CR_NAME> Verification Navigate to your Developer Hub web URL and sign in using OIDC authentication, which prompts you to authenticate through the configured AWS Cognito user pool. Once logged in, access Settings and verify user details. | [
"upstream: backstage: # --- TRUNCATED --- podAnnotations: prometheus.io/scrape: 'true' prometheus.io/path: '/metrics' prometheus.io/port: '7007' prometheus.io/scheme: 'http'",
"Update OPERATOR_NS accordingly OPERATOR_NS=rhdh-operator edit configmap backstage-default-config -n \"USD{OPERATOR_NS}\"",
"deployment.yaml: |- apiVersion: apps/v1 kind: Deployment # --- truncated --- spec: template: # --- truncated --- metadata: labels: rhdh.redhat.com/app: # placeholder for 'backstage-<cr-name>' # --- truncated --- annotations: prometheus.io/scrape: 'true' prometheus.io/path: '/metrics' prometheus.io/port: '7007' prometheus.io/scheme: 'http' # --- truncated ---",
"--namespace=prometheus port-forward deploy/prometheus-server 9090",
"upstream: backstage: # --- Truncated --- extraEnvVars: - name: LOG_LEVEL value: debug",
"spec: # Other fields omitted application: extraEnvs: envs: - name: LOG_LEVEL value: debug",
"fields @timestamp, @message, kubernetes.container_name | filter kubernetes.container_name in [\"install-dynamic-plugins\", \"backstage-backend\"]",
"apiVersion: v1 kind: ConfigMap metadata: name: app-config-rhdh data: \"app-config-rhdh.yaml\": | # --- Truncated --- app: title: Red Hat Developer Hub signInPage: oidc auth: environment: production session: secret: USD{AUTH_SESSION_SECRET} providers: oidc: production: clientId: USD{AWS_COGNITO_APP_CLIENT_ID} clientSecret: USD{AWS_COGNITO_APP_CLIENT_SECRET} metadataUrl: USD{AWS_COGNITO_APP_METADATA_URL} callbackUrl: USD{AWS_COGNITO_APP_CALLBACK_URL} scope: 'openid profile email' prompt: auto",
"apiVersion: v1 kind: Secret metadata: name: secrets-rhdh stringData: AUTH_SESSION_SECRET: \"my super auth session secret - change me!!!\" AWS_COGNITO_APP_CLIENT_ID: \"my-aws-cognito-app-client-id\" AWS_COGNITO_APP_CLIENT_SECRET: \"my-aws-cognito-app-client-secret\" AWS_COGNITO_APP_METADATA_URL: \"https://cognito-idp.[region].amazonaws.com/[userPoolId]/.well-known/openid-configuration\" AWS_COGNITO_APP_CALLBACK_URL: \"https://[rhdh_dns]/api/auth/oidc/handler/frame\"",
"upstream: backstage: image: pullSecrets: - rhdh-pull-secret podSecurityContext: fsGroup: 2000 extraAppConfig: - filename: app-config-rhdh.yaml configMapRef: app-config-rhdh extraEnvVarsSecrets: - secrets-rhdh",
"helm upgrade rhdh openshift-helm-charts/redhat-developer-hub [--version 1.2.6] --values /path/to/values.yaml",
"apiVersion: v1 kind: ConfigMap metadata: name: app-config-rhdh data: \"app-config-rhdh.yaml\": | # --- Truncated --- signInPage: oidc auth: # Production to disable guest user login environment: production # Providing an auth.session.secret is needed because the oidc provider requires session support. session: secret: USD{AUTH_SESSION_SECRET} providers: oidc: production: # See https://github.com/backstage/backstage/blob/master/plugins/auth-backend-module-oidc-provider/config.d.ts clientId: USD{AWS_COGNITO_APP_CLIENT_ID} clientSecret: USD{AWS_COGNITO_APP_CLIENT_SECRET} metadataUrl: USD{AWS_COGNITO_APP_METADATA_URL} callbackUrl: USD{AWS_COGNITO_APP_CALLBACK_URL} # Minimal set of scopes needed. Feel free to add more if needed. scope: 'openid profile email' # Note that by default, this provider will use the 'none' prompt which assumes that your are already logged on in the IDP. # You should set prompt to: # - auto: will let the IDP decide if you need to log on or if you can skip login when you have an active SSO session # - login: will force the IDP to always present a login form to the user prompt: auto",
"apiVersion: v1 kind: Secret metadata: name: secrets-rhdh stringData: # --- Truncated --- # TODO: Change auth session secret. AUTH_SESSION_SECRET: \"my super auth session secret - change me!!!\" # TODO: user pool app client ID AWS_COGNITO_APP_CLIENT_ID: \"my-aws-cognito-app-client-id\" # TODO: user pool app client Secret AWS_COGNITO_APP_CLIENT_SECRET: \"my-aws-cognito-app-client-secret\" # TODO: Replace region and user pool ID AWS_COGNITO_APP_METADATA_URL: \"https://cognito-idp.[region].amazonaws.com/[userPoolId]/.well-known/openid-configuration\" # TODO: Replace <rhdh_dns> AWS_COGNITO_APP_CALLBACK_URL: \"https://[rhdh_dns]/api/auth/oidc/handler/frame\"",
"apiVersion: rhdh.redhat.com/v1alpha1 kind: Backstage metadata: # TODO: this the name of your Developer Hub instance name: my-rhdh spec: application: imagePullSecrets: - \"rhdh-pull-secret\" route: enabled: false appConfig: configMaps: - name: \"app-config-rhdh\" extraEnvs: secrets: - name: \"secrets-rhdh\"",
"delete deployment -l app.kubernetes.io/instance=<CR_NAME>"
] | https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.2/html/administration_guide_for_red_hat_developer_hub/assembly-rhdh-integration-aws |
6.3. Removing Browser Configuration for Ticket Delegation (For Upgrading from 6.2) | 6.3. Removing Browser Configuration for Ticket Delegation (For Upgrading from 6.2) As part of establishing Kerberos authentication, a principal is given a ticket granting ticket (TGT). Whenever that principal attempts to contact a service or application within the Kerberos domain, the service checks for an active TGT and then requests its own service-specific ticket from the TGT for that principal to access that service. As part of configuring the web browser used to access the IdM web UI (and any other Kerberos-aware web applications), earlier versions of Identity Management required that the TGT delegation be forwarded to the IdM server. This required adding the delegation-uris parameter to the about:config setup in Firefox: In Red Hat Enterprise Linux 6.3, Identity Management uses the Kerberos Services for User to Proxy (S4U2Proxy), so this additional delegation step is no longer required. Updating Existing Configured Browsers For browsers which have already been configured to use the Identity Management web UI, the delegation-uris setting can be cleared after upgrading to ipa-server-3.0.0 or ipa-client-3.0.0 . There is no need to restart the browser after changing the delegation-uris setting. Updating configure.jar for New Browser Configuration The browser configuration is defined in the configure.jar file. This JAR file is generated when the server is installed and it is not updated with other files when IdM is updated. Any browsers configured will still have the delegation-uris parameter set unnecessarily, even after the IdM server is upgraded. However, the configure.jar file can be updated. The preferences.html file in configure.jar sets the delegation-uris parameter. The updated preferences.html file can be added to configure.jar , and then configure.jar can be re-signed and re-deployed on the IdM servers. Note Only update the configure.jar file on the initial IdM server. This is the master server, and it is the only server which has a signing certificate. Then propagate the updated file to the other servers and replicas. Update the packages on the initial IdM master server (the first instance). This will bring in the 3.0 UI packages, including the configure.jar file. Back up the existing configure.jar file. Create a temporary working directory. Copy the updated preferences.html file to the working directory. Use the signtool command (one of the NSS utilities) to add the new preferences.html file and re-sign the configure.jar file. The -e option tells the tool to sign only files with a .html extension. The -Z option creates a new JAR file. Copy the regenerated configure.jar file to all other IdM servers and replicas. | [
"network.negotiate-auth.delegation-uris .example.com",
"mv /usr/share/ipa/html/configure.jar /usr/share/ipa/html/configure.jar.old",
"mkdir /tmp/sign",
"cp /usr/share/ipa/html/preferences.html /tmp/sign",
"signtool -d /etc/httpd/alias -k Signing-Cert -Z /usr/share/ipa/html/configure.jar -e \".html\" -p `cat /etc/httpd/alias/pwdfile.txt` /tmp/sign"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/ticket-delegation |
1.2. SystemTap Capabilities | 1.2. SystemTap Capabilities SystemTap was originally developed to provide functionality for Red Hat Enterprise Linux similar to Linux probing tools such as dprobes and the Linux Trace Toolkit. SystemTap aims to supplement the existing suite of Linux monitoring tools by providing users with the infrastructure to track kernel activity. In addition, SystemTap combines this capability with two attributes: Flexibility: SystemTap's framework allows users to develop simple scripts for investigating and monitoring a wide variety of kernel functions, system calls, and other events that occur in kernel space. With this, SystemTap is not so much a tool as it is a system that allows you to develop your own kernel-specific forensic and monitoring tools. Ease-of-Use: as mentioned earlier, SystemTap allows users to probe kernel-space events without having to resort to the lengthy instrument, recompile, install, and reboot the kernel process. Most of the SystemTap scripts enumerated in Chapter 4, Useful SystemTap Scripts demonstrate system forensics and monitoring capabilities not natively available with other similar tools (such as top , OProfile , or ps ). These scripts are provided to give readers extensive examples of the application of SystemTap, which in turn will educate them further on the capabilities they can employ when writing their own SystemTap scripts. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_beginners_guide/intro-systemtap-vs-others |
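As a small illustration of the flexibility described above, the following one-line script prints the name and PID of every process that calls the open system call. It is a sketch only; probe point names (for example, syscall.open versus syscall.openat) vary between kernel and SystemTap versions, so adjust it for your system.

stap -e 'probe syscall.open { printf("%s(%d) open\n", execname(), pid()) }'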
Chapter 1. Metadata APIs | Chapter 1. Metadata APIs 1.1. APIRequestCount [apiserver.openshift.io/v1] Description APIRequestCount tracks requests made to an API. The instance name must be of the form resource.version.group , matching the resource. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.2. Binding [v1] Description Binding ties one object to another; for example, a pod is bound to a node by a scheduler. Deprecated in 1.7, please use the bindings subresource of pods instead. Type object 1.3. ComponentStatus [v1] Description ComponentStatus (and ComponentStatusList) holds the cluster validation info. Deprecated: This API is deprecated in v1.19+ Type object 1.4. ConfigMap [v1] Description ConfigMap holds configuration data for pods to consume. Type object 1.5. ControllerRevision [apps/v1] Description ControllerRevision implements an immutable snapshot of state data. Clients are responsible for serializing and deserializing the objects that contain their internal state. Once a ControllerRevision has been successfully created, it can not be updated. The API Server will fail validation of all requests that attempt to mutate the Data field. ControllerRevisions may, however, be deleted. Note that, due to its use by both the DaemonSet and StatefulSet controllers for update and rollback, this object is beta. However, it may be subject to name and representation changes in future releases, and clients should not depend on its stability. It is primarily for internal use by controllers. Type object 1.6. Event [events.k8s.io/v1] Description Event is a report of an event somewhere in the cluster. It generally denotes some state change in the system. Events have a limited retention time and triggers and messages may evolve with time. Event consumers should not rely on the timing of an event with a given Reason reflecting a consistent underlying trigger, or the continued existence of events with that Reason. Events should be treated as informative, best-effort, supplemental data. Type object 1.7. Event [v1] Description Event is a report of an event somewhere in the cluster. Events have a limited retention time and triggers and messages may evolve with time. Event consumers should not rely on the timing of an event with a given Reason reflecting a consistent underlying trigger, or the continued existence of events with that Reason. Events should be treated as informative, best-effort, supplemental data. Type object 1.8. Lease [coordination.k8s.io/v1] Description Lease defines a lease concept. Type object 1.9. Namespace [v1] Description Namespace provides a scope for Names. Use of multiple namespaces is optional. Type object | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/metadata_apis/metadata-apis |
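As a brief illustration of working with one of the resources listed above, a ConfigMap can be created and inspected with standard oc commands. The name, key, and project below are placeholders used only for this sketch.

# create a ConfigMap from a literal key/value pair, then view it as YAML
oc create configmap my-config --from-literal=log.level=debug -n <project>
oc get configmap my-config -n <project> -o yaml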
Chapter 6. Deploying Streams for Apache Kafka using installation artifacts | Chapter 6. Deploying Streams for Apache Kafka using installation artifacts Having prepared your environment for a deployment of Streams for Apache Kafka , you can deploy Streams for Apache Kafka to an OpenShift cluster. Use the installation files provided with the release artifacts. Streams for Apache Kafka is based on Strimzi 0.40.x. You can deploy Streams for Apache Kafka 2.7 on OpenShift 4.12 to 4.16. The steps to deploy Streams for Apache Kafka using the installation files are as follows: Deploy the Cluster Operator Use the Cluster Operator to deploy the following: Kafka cluster Topic Operator User Operator Optionally, deploy the following Kafka components according to your requirements: Kafka Connect Kafka MirrorMaker Kafka Bridge Note To run the commands in this guide, an OpenShift user must have the rights to manage role-based access control (RBAC) and CRDs. 6.1. Basic deployment path You can set up a deployment where Streams for Apache Kafka manages a single Kafka cluster in the same namespace. You might use this configuration for development or testing. Or you can use Streams for Apache Kafka in a production environment to manage a number of Kafka clusters in different namespaces. The first step for any deployment of Streams for Apache Kafka is to install the Cluster Operator using the install/cluster-operator files. A single command applies all the installation files in the cluster-operator folder: oc apply -f ./install/cluster-operator . The command sets up everything you need to be able to create and manage a Kafka deployment, including the following: Cluster Operator ( Deployment , ConfigMap ) Streams for Apache Kafka CRDs ( CustomResourceDefinition ) RBAC resources ( ClusterRole , ClusterRoleBinding , RoleBinding ) Service account ( ServiceAccount ) The basic deployment path is as follows: Download the release artifacts Create an OpenShift namespace in which to deploy the Cluster Operator Deploy the Cluster Operator Update the install/cluster-operator files to use the namespace created for the Cluster Operator Install the Cluster Operator to watch one, multiple, or all namespaces Create a Kafka cluster After which, you can deploy other Kafka components and set up monitoring of your deployment. 6.2. Deploying the Cluster Operator The Cluster Operator is responsible for deploying and managing Kafka clusters within an OpenShift cluster. When the Cluster Operator is running, it starts to watch for updates of Kafka resources. By default, a single replica of the Cluster Operator is deployed. You can add replicas with leader election so that additional Cluster Operators are on standby in case of disruption. For more information, see Section 9.5.4, "Running multiple Cluster Operator replicas with leader election" . 6.2.1. Specifying the namespaces the Cluster Operator watches The Cluster Operator watches for updates in the namespaces where the Kafka resources are deployed. When you deploy the Cluster Operator, you specify which namespaces to watch in the OpenShift cluster. You can specify the following namespaces: A single selected namespace (the same namespace containing the Cluster Operator) Multiple selected namespaces All namespaces in the cluster Watching multiple selected namespaces has the most impact on performance due to increased processing overhead. To optimize performance for namespace monitoring, it is generally recommended to either watch a single namespace or monitor the entire cluster. 
Watching a single namespace allows for focused monitoring of namespace-specific resources, while monitoring all namespaces provides a comprehensive view of the cluster's resources across all namespaces. The Cluster Operator watches for changes to the following resources: Kafka for the Kafka cluster. KafkaConnect for the Kafka Connect cluster. KafkaConnector for creating and managing connectors in a Kafka Connect cluster. KafkaMirrorMaker for the Kafka MirrorMaker instance. KafkaMirrorMaker2 for the Kafka MirrorMaker 2 instance. KafkaBridge for the Kafka Bridge instance. KafkaRebalance for the Cruise Control optimization requests. When one of these resources is created in the OpenShift cluster, the operator gets the cluster description from the resource and starts creating a new cluster for the resource by creating the necessary OpenShift resources, such as Deployments, Pods, Services and ConfigMaps. Each time a Kafka resource is updated, the operator performs corresponding updates on the OpenShift resources that make up the cluster for the resource. Resources are either patched or deleted, and then recreated in order to make the cluster for the resource reflect the desired state of the cluster. This operation might cause a rolling update that might lead to service disruption. When a resource is deleted, the operator undeploys the cluster and deletes all related OpenShift resources. Note While the Cluster Operator can watch one, multiple, or all namespaces in an OpenShift cluster, the Topic Operator and User Operator watch for KafkaTopic and KafkaUser resources in a single namespace. For more information, see Section 1.2.1, "Watching Streams for Apache Kafka resources in OpenShift namespaces" . 6.2.2. Deploying the Cluster Operator to watch a single namespace This procedure shows how to deploy the Cluster Operator to watch Streams for Apache Kafka resources in a single namespace in your OpenShift cluster. Prerequisites You need an account with permission to create and manage CustomResourceDefinition and RBAC ( ClusterRole , and RoleBinding ) resources. Procedure Edit the Streams for Apache Kafka installation files to use the namespace the Cluster Operator is going to be installed into. For example, in this procedure the Cluster Operator is installed into the namespace my-cluster-operator-namespace . On Linux, use: On MacOS, use: Deploy the Cluster Operator: oc create -f install/cluster-operator -n my-cluster-operator-namespace Check the status of the deployment: oc get deployments -n my-cluster-operator-namespace Output shows the deployment name and readiness NAME READY UP-TO-DATE AVAILABLE strimzi-cluster-operator 1/1 1 1 READY shows the number of replicas that are ready/expected. The deployment is successful when the AVAILABLE output shows 1 . 6.2.3. Deploying the Cluster Operator to watch multiple namespaces This procedure shows how to deploy the Cluster Operator to watch Streams for Apache Kafka resources across multiple namespaces in your OpenShift cluster. Prerequisites You need an account with permission to create and manage CustomResourceDefinition and RBAC ( ClusterRole , and RoleBinding ) resources. Procedure Edit the Streams for Apache Kafka installation files to use the namespace the Cluster Operator is going to be installed into. For example, in this procedure the Cluster Operator is installed into the namespace my-cluster-operator-namespace . 
On Linux, use: On MacOS, use: Edit the install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml file to add a list of all the namespaces the Cluster Operator will watch to the STRIMZI_NAMESPACE environment variable. For example, in this procedure the Cluster Operator will watch the namespaces watched-namespace-1 , watched-namespace-2 , watched-namespace-3 . apiVersion: apps/v1 kind: Deployment spec: # ... template: spec: serviceAccountName: strimzi-cluster-operator containers: - name: strimzi-cluster-operator image: registry.redhat.io/amq-streams/strimzi-rhel9-operator:2.7.0 imagePullPolicy: IfNotPresent env: - name: STRIMZI_NAMESPACE value: watched-namespace-1,watched-namespace-2,watched-namespace-3 For each namespace listed, install the RoleBindings . In this example, we replace watched-namespace in these commands with the namespaces listed in the step, repeating them for watched-namespace-1 , watched-namespace-2 , watched-namespace-3 : oc create -f install/cluster-operator/020-RoleBinding-strimzi-cluster-operator.yaml -n <watched_namespace> oc create -f install/cluster-operator/023-RoleBinding-strimzi-cluster-operator.yaml -n <watched_namespace> oc create -f install/cluster-operator/031-RoleBinding-strimzi-cluster-operator-entity-operator-delegation.yaml -n <watched_namespace> Deploy the Cluster Operator: oc create -f install/cluster-operator -n my-cluster-operator-namespace Check the status of the deployment: oc get deployments -n my-cluster-operator-namespace Output shows the deployment name and readiness NAME READY UP-TO-DATE AVAILABLE strimzi-cluster-operator 1/1 1 1 READY shows the number of replicas that are ready/expected. The deployment is successful when the AVAILABLE output shows 1 . 6.2.4. Deploying the Cluster Operator to watch all namespaces This procedure shows how to deploy the Cluster Operator to watch Streams for Apache Kafka resources across all namespaces in your OpenShift cluster. When running in this mode, the Cluster Operator automatically manages clusters in any new namespaces that are created. Prerequisites You need an account with permission to create and manage CustomResourceDefinition and RBAC ( ClusterRole , and RoleBinding ) resources. Procedure Edit the Streams for Apache Kafka installation files to use the namespace the Cluster Operator is going to be installed into. For example, in this procedure the Cluster Operator is installed into the namespace my-cluster-operator-namespace . On Linux, use: On MacOS, use: Edit the install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml file to set the value of the STRIMZI_NAMESPACE environment variable to * . apiVersion: apps/v1 kind: Deployment spec: # ... template: spec: # ... serviceAccountName: strimzi-cluster-operator containers: - name: strimzi-cluster-operator image: registry.redhat.io/amq-streams/strimzi-rhel9-operator:2.7.0 imagePullPolicy: IfNotPresent env: - name: STRIMZI_NAMESPACE value: "*" # ... Create ClusterRoleBindings that grant cluster-wide access for all namespaces to the Cluster Operator. 
oc create clusterrolebinding strimzi-cluster-operator-namespaced --clusterrole=strimzi-cluster-operator-namespaced --serviceaccount my-cluster-operator-namespace:strimzi-cluster-operator oc create clusterrolebinding strimzi-cluster-operator-watched --clusterrole=strimzi-cluster-operator-watched --serviceaccount my-cluster-operator-namespace:strimzi-cluster-operator oc create clusterrolebinding strimzi-cluster-operator-entity-operator-delegation --clusterrole=strimzi-entity-operator --serviceaccount my-cluster-operator-namespace:strimzi-cluster-operator Deploy the Cluster Operator to your OpenShift cluster. oc create -f install/cluster-operator -n my-cluster-operator-namespace Check the status of the deployment: oc get deployments -n my-cluster-operator-namespace Output shows the deployment name and readiness NAME READY UP-TO-DATE AVAILABLE strimzi-cluster-operator 1/1 1 1 READY shows the number of replicas that are ready/expected. The deployment is successful when the AVAILABLE output shows 1 . 6.3. Deploying Kafka To be able to manage a Kafka cluster with the Cluster Operator, you must deploy it as a Kafka resource. Streams for Apache Kafka provides example deployment files to do this. You can use these files to deploy the Topic Operator and User Operator at the same time. After you have deployed the Cluster Operator, use a Kafka resource to deploy the following components: A Kafka cluster that uses KRaft or ZooKeeper: KRaft-based or ZooKeeper-based Kafka cluster with node pools ZooKeeper-based Kafka cluster without node pools Topic Operator User Operator Node pools provide configuration for a set of Kafka nodes. By using node pools, nodes can have different configuration within the same Kafka cluster. If you haven't deployed a Kafka cluster as a Kafka resource, you can't use the Cluster Operator to manage it. This applies, for example, to a Kafka cluster running outside of OpenShift. However, you can use the Topic Operator and User Operator with a Kafka cluster that is not managed by Streams for Apache Kafka, by deploying them as standalone components . You can also deploy and use other Kafka components with a Kafka cluster not managed by Streams for Apache Kafka. 6.3.1. Deploying a Kafka cluster with node pools This procedure shows how to deploy Kafka with node pools to your OpenShift cluster using the Cluster Operator. Node pools represent a distinct group of Kafka nodes within a Kafka cluster that share the same configuration. For each Kafka node in the node pool, any configuration not defined in node pool is inherited from the cluster configuration in the kafka resource. The deployment uses a YAML file to provide the specification to create a KafkaNodePool resource. You can use node pools with Kafka clusters that use KRaft (Kafka Raft metadata) mode or ZooKeeper for cluster management. To deploy a Kafka cluster in KRaft mode, you must use the KafkaNodePool resources. Streams for Apache Kafka provides the following example files that you can use to create a Kafka cluster that uses node pools: kafka-with-dual-role-kraft-nodes.yaml Deploys a Kafka cluster with one pool of KRaft nodes that share the broker and controller roles. kafka-with-kraft.yaml Deploys a persistent Kafka cluster with one pool of controller nodes and one pool of broker nodes. kafka-with-kraft-ephemeral.yaml Deploys an ephemeral Kafka cluster with one pool of controller nodes and one pool of broker nodes. kafka.yaml Deploys ZooKeeper with 3 nodes, and 2 different pools of Kafka brokers. 
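For orientation, each broker pool in that example is a separate KafkaNodePool resource tied to the Kafka resource by the strimzi.io/cluster label; a trimmed sketch of one such pool, with the pool name, replica count, and storage values assumed for illustration:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: pool-a
  labels:
    strimzi.io/cluster: my-cluster # links the pool to the Kafka resource of the same name
spec:
  replicas: 3
  roles:
    - broker
  storage:
    type: persistent-claim # a second pool could use jbod or ephemeral storage instead
    size: 100Gi
    deleteClaim: false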
Each of the pools has 3 brokers. The pools in the example use different storage configuration. Note You can perform the steps outlined here to deploy a new Kafka cluster with KafkaNodePool resources or migrate your existing Kafka cluster . Prerequisites The Cluster Operator must be deployed. Procedure Deploy a KRaft-based Kafka cluster. To deploy a Kafka cluster in KRaft mode with a single node pool that uses dual-role nodes: oc apply -f examples/kafka/kraft/kafka-with-dual-role-nodes.yaml To deploy a persistent Kafka cluster in KRaft mode with separate node pools for broker and controller nodes: oc apply -f examples/kafka/kraft/kafka.yaml To deploy an ephemeral Kafka cluster in KRaft mode with separate node pools for broker and controller nodes: oc apply -f examples/kafka/kraft/kafka-ephemeral.yaml To deploy a Kafka cluster and ZooKeeper cluster with two node pools of three brokers: oc apply -f examples/kafka/kafka-with-node-pools.yaml Check the status of the deployment: oc get pods -n <my_cluster_operator_namespace> Output shows the node pool names and readiness NAME READY STATUS RESTARTS my-cluster-entity-operator 3/3 Running 0 my-cluster-pool-a-0 1/1 Running 0 my-cluster-pool-a-1 1/1 Running 0 my-cluster-pool-a-4 1/1 Running 0 my-cluster is the name of the Kafka cluster. pool-a is the name of the node pool. A sequential index number starting with 0 identifies each Kafka pod created. If you are using ZooKeeper, you'll also see the ZooKeeper pods. READY shows the number of replicas that are ready/expected. The deployment is successful when the STATUS displays as Running . Information on the deployment is also shown in the status of the KafkaNodePool resource, including a list of IDs for nodes in the pool. Note Node IDs are assigned sequentially starting at 0 (zero) across all node pools within a cluster. This means that node IDs might not run sequentially within a specific node pool. If there are gaps in the sequence of node IDs across the cluster, the node to be added is assigned an ID that fills the gap. When scaling down, the node with the highest node ID within a pool is removed. Additional resources Node pool configuration 6.3.2. Deploying a ZooKeeper-based Kafka cluster without node pools This procedure shows how to deploy a ZooKeeper-based Kafka cluster to your OpenShift cluster using the Cluster Operator. The deployment uses a YAML file to provide the specification to create a Kafka resource. Streams for Apache Kafka provides the following example files to create a Kafka cluster that uses ZooKeeper for cluster management: kafka-persistent.yaml Deploys a persistent cluster with three ZooKeeper and three Kafka nodes. kafka-jbod.yaml Deploys a persistent cluster with three ZooKeeper and three Kafka nodes (each using multiple persistent volumes). kafka-persistent-single.yaml Deploys a persistent cluster with a single ZooKeeper node and a single Kafka node. kafka-ephemeral.yaml Deploys an ephemeral cluster with three ZooKeeper and three Kafka nodes. kafka-ephemeral-single.yaml Deploys an ephemeral cluster with three ZooKeeper nodes and a single Kafka node. In this procedure, we use the examples for an ephemeral and persistent Kafka cluster deployment. Ephemeral cluster In general, an ephemeral (or temporary) Kafka cluster is suitable for development and testing purposes, not for production. This deployment uses emptyDir volumes for storing broker information (for ZooKeeper) and topics or partitions (for Kafka). 
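In the Kafka resource, this behaviour comes from declaring storage of type ephemeral for both Kafka and ZooKeeper; a trimmed excerpt showing only the storage settings, with the rest of the specification elided:

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    storage:
      type: ephemeral # backed by emptyDir volumes
  zookeeper:
    # ...
    storage:
      type: ephemeral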
Using an emptyDir volume means that its content is strictly related to the pod life cycle and is deleted when the pod goes down. Persistent cluster A persistent Kafka cluster uses persistent volumes to store ZooKeeper and Kafka data. A PersistentVolume is acquired using a PersistentVolumeClaim to make it independent of the actual type of the PersistentVolume . The PersistentVolumeClaim can use a StorageClass to trigger automatic volume provisioning. When no StorageClass is specified, OpenShift will try to use the default StorageClass . The following examples show some common types of persistent volumes: If your OpenShift cluster runs on Amazon AWS, OpenShift can provision Amazon EBS volumes If your OpenShift cluster runs on Microsoft Azure, OpenShift can provision Azure Disk Storage volumes If your OpenShift cluster runs on Google Cloud, OpenShift can provision Persistent Disk volumes If your OpenShift cluster runs on bare metal, OpenShift can provision local persistent volumes The example YAML files specify the latest supported Kafka version, and configuration for its supported log message format version and inter-broker protocol version. The inter.broker.protocol.version property for the Kafka config must be the version supported by the specified Kafka version ( spec.kafka.version ). The property represents the version of Kafka protocol used in a Kafka cluster. From Kafka 3.0.0, when the inter.broker.protocol.version is set to 3.0 or higher, the log.message.format.version option is ignored and doesn't need to be set. The example clusters are named my-cluster by default. The cluster name is defined by the name of the resource and cannot be changed after the cluster has been deployed. To change the cluster name before you deploy the cluster, edit the Kafka.metadata.name property of the Kafka resource in the relevant YAML file. Default cluster name and specified Kafka versions apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: version: 3.7.0 #... config: #... log.message.format.version: "3.7" inter.broker.protocol.version: "3.7" # ... Prerequisites The Cluster Operator must be deployed. Procedure Deploy a ZooKeeper-based Kafka cluster. To deploy an ephemeral cluster: oc apply -f examples/kafka/kafka-ephemeral.yaml To deploy a persistent cluster: oc apply -f examples/kafka/kafka-persistent.yaml Check the status of the deployment: oc get pods -n <my_cluster_operator_namespace> Output shows the pod names and readiness NAME READY STATUS RESTARTS my-cluster-entity-operator 3/3 Running 0 my-cluster-kafka-0 1/1 Running 0 my-cluster-kafka-1 1/1 Running 0 my-cluster-kafka-2 1/1 Running 0 my-cluster-zookeeper-0 1/1 Running 0 my-cluster-zookeeper-1 1/1 Running 0 my-cluster-zookeeper-2 1/1 Running 0 my-cluster is the name of the Kafka cluster. A sequential index number starting with 0 identifies each Kafka and ZooKeeper pod created. With the default deployment, you create an Entity Operator cluster, 3 Kafka pods, and 3 ZooKeeper pods. READY shows the number of replicas that are ready/expected. The deployment is successful when the STATUS displays as Running . Additional resources Kafka cluster configuration 6.3.3. Deploying the Topic Operator using the Cluster Operator This procedure describes how to deploy the Topic Operator using the Cluster Operator. The Topic Operator can be deployed for use in either bidirectional mode or unidirectional mode. To learn more about bidirectional and unidirectional topic management, see Section 10.1, "Topic management modes" . 
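In either mode, the Topic Operator manages topics declared as KafkaTopic resources; a minimal sketch of such a resource, assuming a Kafka cluster named my-cluster:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster # identifies the Kafka cluster the topic belongs to
spec:
  partitions: 3
  replicas: 3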
You configure the entityOperator property of the Kafka resource to include the topicOperator . By default, the Topic Operator watches for KafkaTopic resources in the namespace of the Kafka cluster deployed by the Cluster Operator. You can also specify a namespace using watchedNamespace in the Topic Operator spec . A single Topic Operator can watch a single namespace. One namespace should be watched by only one Topic Operator. If you use Streams for Apache Kafka to deploy multiple Kafka clusters into the same namespace, enable the Topic Operator for only one Kafka cluster or use the watchedNamespace property to configure the Topic Operators to watch other namespaces. If you want to use the Topic Operator with a Kafka cluster that is not managed by Streams for Apache Kafka, you must deploy the Topic Operator as a standalone component . For more information about configuring the entityOperator and topicOperator properties, see Configuring the Entity Operator . Prerequisites The Cluster Operator must be deployed. Procedure Edit the entityOperator properties of the Kafka resource to include topicOperator : apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: #... entityOperator: topicOperator: {} userOperator: {} Configure the Topic Operator spec using the properties described in the EntityTopicOperatorSpec schema reference . Use an empty object ( {} ) if you want all properties to use their default values. Create or update the resource: oc apply -f <kafka_configuration_file> Check the status of the deployment: oc get pods -n <my_cluster_operator_namespace> Output shows the pod name and readiness NAME READY STATUS RESTARTS my-cluster-entity-operator 3/3 Running 0 # ... my-cluster is the name of the Kafka cluster. READY shows the number of replicas that are ready/expected. The deployment is successful when the STATUS displays as Running . 6.3.4. Deploying the User Operator using the Cluster Operator This procedure describes how to deploy the User Operator using the Cluster Operator. You configure the entityOperator property of the Kafka resource to include the userOperator . By default, the User Operator watches for KafkaUser resources in the namespace of the Kafka cluster deployment. You can also specify a namespace using watchedNamespace in the User Operator spec . A single User Operator can watch a single namespace. One namespace should be watched by only one User Operator. If you want to use the User Operator with a Kafka cluster that is not managed by Streams for Apache Kafka, you must deploy the User Operator as a standalone component . For more information about configuring the entityOperator and userOperator properties, see Configuring the Entity Operator . Prerequisites The Cluster Operator must be deployed. Procedure Edit the entityOperator properties of the Kafka resource to include userOperator : apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: #... entityOperator: topicOperator: {} userOperator: {} Configure the User Operator spec using the properties described in EntityUserOperatorSpec schema reference . Use an empty object ( {} ) if you want all properties to use their default values. Create or update the resource: oc apply -f <kafka_configuration_file> Check the status of the deployment: oc get pods -n <my_cluster_operator_namespace> Output shows the pod name and readiness NAME READY STATUS RESTARTS my-cluster-entity-operator 3/3 Running 0 # ... my-cluster is the name of the Kafka cluster. 
READY shows the number of replicas that are ready/expected. The deployment is successful when the STATUS displays as Running . 6.3.5. Connecting to ZooKeeper from a terminal ZooKeeper services are secured with encryption and authentication and are not intended to be used by external applications that are not part of Streams for Apache Kafka. However, if you want to use CLI tools that require a connection to ZooKeeper, you can use a terminal inside a ZooKeeper pod and connect to localhost:12181 as the ZooKeeper address. Prerequisites An OpenShift cluster is available. A Kafka cluster is running. The Cluster Operator is running. Procedure Open the terminal using the OpenShift console or run the exec command from your CLI. For example: oc exec -ti my-cluster-zookeeper-0 -- bin/zookeeper-shell.sh localhost:12181 ls / Be sure to use localhost:12181 . 6.3.6. List of Kafka cluster resources The following resources are created by the Cluster Operator in the OpenShift cluster. Shared resources <kafka_cluster_name>-cluster-ca Secret with the Cluster CA private key used to encrypt the cluster communication. <kafka_cluster_name>-cluster-ca-cert Secret with the Cluster CA public key. This key can be used to verify the identity of the Kafka brokers. <kafka_cluster_name>-clients-ca Secret with the Clients CA private key used to sign user certificates. <kafka_cluster_name>-clients-ca-cert Secret with the Clients CA public key. This key can be used to verify the identity of the Kafka users. <kafka_cluster_name>-cluster-operator-certs Secret with the Cluster Operator's keys for communication with Kafka and ZooKeeper. ZooKeeper nodes <kafka_cluster_name>-zookeeper Name given to the following ZooKeeper resources: StrimziPodSet for managing the ZooKeeper node pods. Service account used by the ZooKeeper nodes. PodDisruptionBudget configured for the ZooKeeper nodes. <kafka_cluster_name>-zookeeper-<pod_id> Pods created by the StrimziPodSet. <kafka_cluster_name>-zookeeper-nodes Headless Service needed to have DNS resolve the ZooKeeper pods IP addresses directly. <kafka_cluster_name>-zookeeper-client Service used by Kafka brokers to connect to ZooKeeper nodes as clients. <kafka_cluster_name>-zookeeper-config ConfigMap that contains the ZooKeeper ancillary configuration, and is mounted as a volume by the ZooKeeper node pods. <kafka_cluster_name>-zookeeper-nodes Secret with ZooKeeper node keys. <kafka_cluster_name>-network-policy-zookeeper Network policy managing access to the ZooKeeper services. data-<kafka_cluster_name>-zookeeper-<pod_id> Persistent Volume Claim for the volume used for storing data for a specific ZooKeeper node. This resource will be created only if persistent storage is selected for provisioning persistent volumes to store data. Kafka brokers <kafka_cluster_name>-kafka Name given to the following Kafka resources: StrimziPodSet for managing the Kafka broker pods. Service account used by the Kafka pods. PodDisruptionBudget configured for the Kafka brokers. <kafka_cluster_name>-kafka-<pod_id> Name given to the following Kafka resources: Pods created by the StrimziPodSet. ConfigMaps with Kafka broker configuration. <kafka_cluster_name>-kafka-brokers Service needed to have DNS resolve the Kafka broker pods IP addresses directly. <kafka_cluster_name>-kafka-bootstrap Service that can be used as bootstrap servers for Kafka clients connecting from within the OpenShift cluster. <kafka_cluster_name>-kafka-external-bootstrap Bootstrap service for clients connecting from outside the OpenShift cluster.
This resource is created only when an external listener is enabled. The old service name will be used for backwards compatibility when the listener name is external and port is 9094 . <kafka_cluster_name>-kafka-<pod_id> Service used to route traffic from outside the OpenShift cluster to individual pods. This resource is created only when an external listener is enabled. The old service name will be used for backwards compatibility when the listener name is external and port is 9094 . <kafka_cluster_name>-kafka-external-bootstrap Bootstrap route for clients connecting from outside the OpenShift cluster. This resource is created only when an external listener is enabled and set to type route . The old route name will be used for backwards compatibility when the listener name is external and port is 9094 . <kafka_cluster_name>-kafka-<pod_id> Route for traffic from outside the OpenShift cluster to individual pods. This resource is created only when an external listener is enabled and set to type route . The old route name will be used for backwards compatibility when the listener name is external and port is 9094 . <kafka_cluster_name>-kafka-<listener_name>-bootstrap Bootstrap service for clients connecting from outside the OpenShift cluster. This resource is created only when an external listener is enabled. The new service name will be used for all other external listeners. <kafka_cluster_name>-kafka-<listener_name>-<pod_id> Service used to route traffic from outside the OpenShift cluster to individual pods. This resource is created only when an external listener is enabled. The new service name will be used for all other external listeners. <kafka_cluster_name>-kafka-<listener_name>-bootstrap Bootstrap route for clients connecting from outside the OpenShift cluster. This resource is created only when an external listener is enabled and set to type route . The new route name will be used for all other external listeners. <kafka_cluster_name>-kafka-<listener_name>-<pod_id> Route for traffic from outside the OpenShift cluster to individual pods. This resource is created only when an external listener is enabled and set to type route . The new route name will be used for all other external listeners. <kafka_cluster_name>-kafka-config ConfigMap containing the Kafka ancillary configuration, which is mounted as a volume by the broker pods when the UseStrimziPodSets feature gate is disabled. <kafka_cluster_name>-kafka-brokers Secret with Kafka broker keys. <kafka_cluster_name>-network-policy-kafka Network policy managing access to the Kafka services. strimzi- namespace-name -<kafka_cluster_name>-kafka-init Cluster role binding used by the Kafka brokers. <kafka_cluster_name>-jmx Secret with JMX username and password used to secure the Kafka broker port. This resource is created only when JMX is enabled in Kafka. data-<kafka_cluster_name>-kafka-<pod_id> Persistent Volume Claim for the volume used for storing data for a specific Kafka broker. This resource is created only if persistent storage is selected for provisioning persistent volumes to store data. data-<id>-<kafka_cluster_name>-kafka-<pod_id> Persistent Volume Claim for the volume id used for storing data for a specific Kafka broker. This resource is created only if persistent storage is selected for JBOD volumes when provisioning persistent volumes to store data. Kafka node pools If you are using Kafka node pools, the resources created apply to the nodes managed in the node pools whether they are operating as brokers, controllers, or both. 
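Most of the resources listed above carry the strimzi.io/cluster label, so they can be inspected together; a hedged example, assuming the cluster is named my-cluster:

oc get strimzipodsets,services,configmaps,secrets -l strimzi.io/cluster=my-cluster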
The naming convention includes the name of the Kafka cluster and the node pool: <kafka_cluster_name>-<pool_name> . <kafka_cluster_name>-<pool_name> Name given to the StrimziPodSet for managing the Kafka node pool. <kafka_cluster_name>-<pool_name>-<pod_id> Name given to the following Kafka node pool resources: Pods created by the StrimziPodSet. ConfigMaps with Kafka node configuration. data-<kafka_cluster_name>-<pool_name>-<pod_id> Persistent Volume Claim for the volume used for storing data for a specific node. This resource is created only if persistent storage is selected for provisioning persistent volumes to store data. data-<id>-<kafka_cluster_name>-<pool_name>-<pod_id> Persistent Volume Claim for the volume id used for storing data for a specific node. This resource is created only if persistent storage is selected for JBOD volumes when provisioning persistent volumes to store data. Entity Operator These resources are only created if the Entity Operator is deployed using the Cluster Operator. <kafka_cluster_name>-entity-operator Name given to the following Entity Operator resources: Deployment with Topic and User Operators. Service account used by the Entity Operator. Network policy managing access to the Entity Operator metrics. <kafka_cluster_name>-entity-operator-<random_string> Pod created by the Entity Operator deployment. <kafka_cluster_name>-entity-topic-operator-config ConfigMap with ancillary configuration for Topic Operators. <kafka_cluster_name>-entity-user-operator-config ConfigMap with ancillary configuration for User Operators. <kafka_cluster_name>-entity-topic-operator-certs Secret with Topic Operator keys for communication with Kafka and ZooKeeper. <kafka_cluster_name>-entity-user-operator-certs Secret with User Operator keys for communication with Kafka and ZooKeeper. strimzi-<kafka_cluster_name>-entity-topic-operator Role binding used by the Entity Topic Operator. strimzi-<kafka_cluster_name>-entity-user-operator Role binding used by the Entity User Operator. Kafka Exporter These resources are only created if the Kafka Exporter is deployed using the Cluster Operator. <kafka_cluster_name>-kafka-exporter Name given to the following Kafka Exporter resources: Deployment with Kafka Exporter. Service used to collect consumer lag metrics. Service account used by the Kafka Exporter. Network policy managing access to the Kafka Exporter metrics. <kafka_cluster_name>-kafka-exporter-<random_string> Pod created by the Kafka Exporter deployment. Cruise Control These resources are only created if Cruise Control was deployed using the Cluster Operator. <kafka_cluster_name>-cruise-control Name given to the following Cruise Control resources: Deployment with Cruise Control. Service used to communicate with Cruise Control. Service account used by the Cruise Control. <kafka_cluster_name>-cruise-control-<random_string> Pod created by the Cruise Control deployment. <kafka_cluster_name>-cruise-control-config ConfigMap that contains the Cruise Control ancillary configuration, and is mounted as a volume by the Cruise Control pods. <kafka_cluster_name>-cruise-control-certs Secret with Cruise Control keys for communication with Kafka and ZooKeeper. <kafka_cluster_name>-network-policy-cruise-control Network policy managing access to the Cruise Control service. 6.4. Deploying Kafka Connect Kafka Connect is an integration toolkit for streaming data between Kafka brokers and other systems using connector plugins. 
Kafka Connect provides a framework for integrating Kafka with an external data source or target, such as a database or messaging system, for import or export of data using connectors. Connectors are plugins that provide the connection configuration needed. In Streams for Apache Kafka, Kafka Connect is deployed in distributed mode. Kafka Connect can also work in standalone mode, but this is not supported by Streams for Apache Kafka. Using the concept of connectors , Kafka Connect provides a framework for moving large amounts of data into and out of your Kafka cluster while maintaining scalability and reliability. The Cluster Operator manages Kafka Connect clusters deployed using the KafkaConnect resource and connectors created using the KafkaConnector resource. In order to use Kafka Connect, you need to do the following. Deploy a Kafka Connect cluster Add connectors to integrate with other systems Note The term connector is used interchangeably to mean a connector instance running within a Kafka Connect cluster, or a connector class. In this guide, the term connector is used when the meaning is clear from the context. 6.4.1. Deploying Kafka Connect to your OpenShift cluster This procedure shows how to deploy a Kafka Connect cluster to your OpenShift cluster using the Cluster Operator. A Kafka Connect cluster deployment is implemented with a configurable number of nodes (also called workers ) that distribute the workload of connectors as tasks so that the message flow is highly scalable and reliable. The deployment uses a YAML file to provide the specification to create a KafkaConnect resource. Streams for Apache Kafka provides example configuration files . In this procedure, we use the following example file: examples/connect/kafka-connect.yaml Important If deploying Kafka Connect clusters to run in parallel, each instance must use unique names for internal Kafka Connect topics. To do this, configure each Kafka Connect instance to replace the defaults . Prerequisites The Cluster Operator must be deployed. Running Kafka cluster. Procedure Deploy Kafka Connect to your OpenShift cluster. Use the examples/connect/kafka-connect.yaml file to deploy Kafka Connect. oc apply -f examples/connect/kafka-connect.yaml Check the status of the deployment: oc get pods -n <my_cluster_operator_namespace> Output shows the deployment name and readiness NAME READY STATUS RESTARTS my-connect-cluster-connect-<pod_id> 1/1 Running 0 my-connect-cluster is the name of the Kafka Connect cluster. A pod ID identifies each pod created. With the default deployment, you create a single Kafka Connect pod. READY shows the number of replicas that are ready/expected. The deployment is successful when the STATUS displays as Running . Additional resources Kafka Connect cluster configuration 6.4.2. List of Kafka Connect cluster resources The following resources are created by the Cluster Operator in the OpenShift cluster: <connect_cluster_name>-connect Name given to the following Kafka Connect resources: StrimziPodSet that creates the Kafka Connect worker node pods. Headless service that provides stable DNS names to the Kafka Connect pods. Service account used by the Kafka Connect pods. Pod disruption budget configured for the Kafka Connect worker nodes. Network policy managing access to the Kafka Connect REST API. <connect_cluster_name>-connect-<pod_id> Pods created by the Kafka Connect StrimziPodSet. <connect_cluster_name>-connect-api Service which exposes the REST interface for managing the Kafka Connect cluster. 
<connect_cluster_name>-connect-config ConfigMap which contains the Kafka Connect ancillary configuration and is mounted as a volume by the Kafka Connect pods. strimzi-<namespace-name>-<connect_cluster_name>-connect-init Cluster role binding used by the Kafka Connect cluster. <connect_cluster_name>-connect-build Pod used to build a new container image with additional connector plugins (only when Kafka Connect Build feature is used). <connect_cluster_name>-connect-dockerfile ConfigMap with the Dockerfile generated to build the new container image with additional connector plugins (only when the Kafka Connect build feature is used). 6.5. Adding Kafka Connect connectors Kafka Connect uses connectors to integrate with other systems to stream data. A connector is an instance of a Kafka Connector class, which can be one of the following type: Source connector A source connector is a runtime entity that fetches data from an external system and feeds it to Kafka as messages. Sink connector A sink connector is a runtime entity that fetches messages from Kafka topics and feeds them to an external system. Kafka Connect uses a plugin architecture to provide the implementation artifacts for connectors. Plugins allow connections to other systems and provide additional configuration to manipulate data. Plugins include connectors and other components, such as data converters and transforms. A connector operates with a specific type of external system. Each connector defines a schema for its configuration. You supply the configuration to Kafka Connect to create a connector instance within Kafka Connect. Connector instances then define a set of tasks for moving data between systems. Add connector plugins to Kafka Connect in one of the following ways: Configure Kafka Connect to build a new container image with plugins automatically Create a Docker image from the base Kafka Connect image (manually or using continuous integration) After plugins have been added to the container image, you can start, stop, and manage connector instances in the following ways: Using Streams for Apache Kafka's KafkaConnector custom resource Using the Kafka Connect API You can also create new connector instances using these options. 6.5.1. Building a new container image with connector plugins automatically Configure Kafka Connect so that Streams for Apache Kafka automatically builds a new container image with additional connectors. You define the connector plugins using the .spec.build.plugins property of the KafkaConnect custom resource. Streams for Apache Kafka will automatically download and add the connector plugins into a new container image. The container is pushed into the container repository specified in .spec.build.output and automatically used in the Kafka Connect deployment. Prerequisites The Cluster Operator must be deployed. A container registry. You need to provide your own container registry where images can be pushed to, stored, and pulled from. Streams for Apache Kafka supports private container registries as well as public registries such as Quay or Docker Hub . Procedure Configure the KafkaConnect custom resource by specifying the container registry in .spec.build.output , and additional connectors in .spec.build.plugins : apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: 1 #... 
build: output: 2 type: docker image: my-registry.io/my-org/my-connect-cluster:latest pushSecret: my-registry-credentials plugins: 3 - name: connector-1 artifacts: - type: tgz url: <url_to_download_connector_1_artifact> sha512sum: <SHA-512_checksum_of_connector_1_artifact> - name: connector-2 artifacts: - type: jar url: <url_to_download_connector_2_artifact> sha512sum: <SHA-512_checksum_of_connector_2_artifact> #... 1 The specification for the Kafka Connect cluster . 2 (Required) Configuration of the container registry where new images are pushed. 3 (Required) List of connector plugins and their artifacts to add to the new container image. Each plugin must be configured with at least one artifact . Create or update the resource: Wait for the new container image to build, and for the Kafka Connect cluster to be deployed. Use the Kafka Connect REST API or KafkaConnector custom resources to use the connector plugins you added. Additional resources Kafka Connect Build schema reference 6.5.2. Building a new container image with connector plugins from the Kafka Connect base image Create a custom Docker image with connector plugins from the Kafka Connect base image. Add the custom image to the /opt/kafka/plugins directory. You can use the Kafka container image on Red Hat Ecosystem Catalog as a base image for creating your own custom image with additional connector plugins. At startup, the Streams for Apache Kafka version of Kafka Connect loads any third-party connector plugins contained in the /opt/kafka/plugins directory. Prerequisites The Cluster Operator must be deployed. Procedure Create a new Dockerfile using registry.redhat.io/amq-streams/kafka-37-rhel9:2.7.0 as the base image: Example plugins file The COPY command points to the plugin files to copy to the container image. This example adds plugins for Debezium connectors (MongoDB, MySQL, and PostgreSQL), though not all files are listed for brevity. Debezium running in Kafka Connect looks the same as any other Kafka Connect task. Build the container image. Push your custom image to your container registry. Point to the new container image. You can point to the image in one of the following ways: Edit the KafkaConnect.spec.image property of the KafkaConnect custom resource. If set, this property overrides the STRIMZI_KAFKA_CONNECT_IMAGES environment variable in the Cluster Operator. apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: 1 #... image: my-new-container-image 2 config: 3 #... 1 The specification for the Kafka Connect cluster . 2 The docker image for Kafka Connect pods. 3 Configuration of the Kafka Connect workers (not connectors). Edit the STRIMZI_KAFKA_CONNECT_IMAGES environment variable in the install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml file to point to the new container image, and then reinstall the Cluster Operator. Additional resources Container image configuration and the KafkaConnect.spec.image property Cluster Operator configuration and the STRIMZI_KAFKA_CONNECT_IMAGES variable 6.5.3. Deploying KafkaConnector resources Deploy KafkaConnector resources to manage connectors. The KafkaConnector custom resource offers an OpenShift-native approach to management of connectors by the Cluster Operator. You don't need to send HTTP requests to manage connectors, as with the Kafka Connect REST API. You manage a running connector instance by updating its corresponding KafkaConnector resource, and then applying the updates. 
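For example, scaling a connector is just an edit of its resource followed by a re-apply, or an in-place patch; a hedged one-liner that raises tasksMax on a connector, using an illustrative name:

oc patch kafkaconnector my-source-connector --type merge -p '{"spec":{"tasksMax":4}}'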
The Cluster Operator updates the configurations of the running connector instances. You remove a connector by deleting its corresponding KafkaConnector . KafkaConnector resources must be deployed to the same namespace as the Kafka Connect cluster they link to. In the configuration shown in this procedure, the autoRestart feature is enabled ( enabled: true ) for automatic restarts of failed connectors and tasks. You can also annotate the KafkaConnector resource to restart a connector or restart a connector task manually. Example connectors You can use your own connectors or try the examples provided by Streams for Apache Kafka. Up until Apache Kafka 3.1.0, example file connector plugins were included with Apache Kafka. Starting from the 3.1.1 and 3.2.0 releases of Apache Kafka, the examples need to be added to the plugin path as any other connector . Streams for Apache Kafka provides an example KafkaConnector configuration file ( examples/connect/source-connector.yaml ) for the example file connector plugins, which creates the following connector instances as KafkaConnector resources: A FileStreamSourceConnector instance that reads each line from the Kafka license file (the source) and writes the data as messages to a single Kafka topic. A FileStreamSinkConnector instance that reads messages from the Kafka topic and writes the messages to a temporary file (the sink). We use the example file to create connectors in this procedure. Note The example connectors are not intended for use in a production environment. Prerequisites A Kafka Connect deployment The Cluster Operator is running Procedure Add the FileStreamSourceConnector and FileStreamSinkConnector plugins to Kafka Connect in one of the following ways: Configure Kafka Connect to build a new container image with plugins automatically Create a Docker image from the base Kafka Connect image (manually or using continuous integration) Set the strimzi.io/use-connector-resources annotation to true in the Kafka Connect configuration. apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster annotations: strimzi.io/use-connector-resources: "true" spec: # ... With the KafkaConnector resources enabled, the Cluster Operator watches for them. Edit the examples/connect/source-connector.yaml file: Example KafkaConnector source connector configuration apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-source-connector 1 labels: strimzi.io/cluster: my-connect-cluster 2 spec: class: org.apache.kafka.connect.file.FileStreamSourceConnector 3 tasksMax: 2 4 autoRestart: 5 enabled: true config: 6 file: "/opt/kafka/LICENSE" 7 topic: my-topic 8 # ... 1 Name of the KafkaConnector resource, which is used as the name of the connector. Use any name that is valid for an OpenShift resource. 2 Name of the Kafka Connect cluster to create the connector instance in. Connectors must be deployed to the same namespace as the Kafka Connect cluster they link to. 3 Full name of the connector class. This should be present in the image being used by the Kafka Connect cluster. 4 Maximum number of Kafka Connect tasks that the connector can create. 5 Enables automatic restarts of failed connectors and tasks. By default, the number of restarts is indefinite, but you can set a maximum on the number of automatic restarts using the maxRestarts property. 6 Connector configuration as key-value pairs. 7 Location of the external data file. 
In this example, we're configuring the FileStreamSourceConnector to read from the /opt/kafka/LICENSE file. 8 Kafka topic to publish the source data to. Create the source KafkaConnector in your OpenShift cluster: oc apply -f examples/connect/source-connector.yaml Create an examples/connect/sink-connector.yaml file: touch examples/connect/sink-connector.yaml Paste the following YAML into the sink-connector.yaml file: apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-sink-connector labels: strimzi.io/cluster: my-connect spec: class: org.apache.kafka.connect.file.FileStreamSinkConnector 1 tasksMax: 2 config: 2 file: "/tmp/my-file" 3 topics: my-topic 4 1 Full name or alias of the connector class. This should be present in the image being used by the Kafka Connect cluster. 2 Connector configuration as key-value pairs. 3 Temporary file that the consumed messages are written to. 4 Kafka topic to read the source data from. Create the sink KafkaConnector in your OpenShift cluster: oc apply -f examples/connect/sink-connector.yaml Check that the connector resources were created: oc get kctr --selector strimzi.io/cluster=<my_connect_cluster> -o name my-source-connector my-sink-connector Replace <my_connect_cluster> with the name of your Kafka Connect cluster. From a Kafka broker container, run kafka-console-consumer.sh to read the messages that were written to the topic by the source connector: oc exec <my_kafka_cluster>-kafka-0 -i -t -- bin/kafka-console-consumer.sh --bootstrap-server <my_kafka_cluster>-kafka-bootstrap.NAMESPACE.svc:9092 --topic my-topic --from-beginning Replace <my_kafka_cluster> with the name of your Kafka cluster. Source and sink connector configuration options The connector configuration is defined in the spec.config property of the KafkaConnector resource. The FileStreamSourceConnector and FileStreamSinkConnector classes support the same configuration options as the Kafka Connect REST API. Other connectors support different configuration options. Table 6.1. Configuration options for the FileStreamSourceConnector class Name Type Default value Description file String Null Source file to read messages from. If not specified, the standard input is used. topic List Null The Kafka topic to publish data to. Table 6.2. Configuration options for FileStreamSinkConnector class Name Type Default value Description file String Null Destination file to write messages to. If not specified, the standard output is used. topics List Null One or more Kafka topics to read data from. topics.regex String Null A regular expression matching one or more Kafka topics to read data from. 6.5.4. Exposing the Kafka Connect API Use the Kafka Connect REST API as an alternative to using KafkaConnector resources to manage connectors. The Kafka Connect REST API is available as a service running on <connect_cluster_name>-connect-api:8083 , where <connect_cluster_name> is the name of your Kafka Connect cluster. The service is created when you create a Kafka Connect instance. The operations supported by the Kafka Connect REST API are described in the Apache Kafka Connect API documentation . Note The strimzi.io/use-connector-resources annotation enables KafkaConnectors. If you applied the annotation to your KafkaConnect resource configuration, you need to remove it to use the Kafka Connect API. Otherwise, manual changes made directly using the Kafka Connect REST API are reverted by the Cluster Operator. You can add the connector configuration as a JSON object.
Example curl request to add connector configuration curl -X POST \ http://my-connect-cluster-connect-api:8083/connectors \ -H 'Content-Type: application/json' \ -d '{ "name": "my-source-connector", "config": { "connector.class":"org.apache.kafka.connect.file.FileStreamSourceConnector", "file": "/opt/kafka/LICENSE", "topic":"my-topic", "tasksMax": "4", "type": "source" } }' The API is only accessible within the OpenShift cluster. If you want to make the Kafka Connect API accessible to applications running outside of the OpenShift cluster, you can expose it manually by creating one of the following features: LoadBalancer or NodePort type services Ingress resources (Kubernetes only) OpenShift routes (OpenShift only) Note The connection is insecure, so allow external access advisedly. If you decide to create services, use the labels from the selector of the <connect_cluster_name> -connect-api service to configure the pods to which the service will route the traffic: Selector configuration for the service # ... selector: strimzi.io/cluster: my-connect-cluster 1 strimzi.io/kind: KafkaConnect strimzi.io/name: my-connect-cluster-connect 2 #... 1 Name of the Kafka Connect custom resource in your OpenShift cluster. 2 Name of the Kafka Connect deployment created by the Cluster Operator. You must also create a NetworkPolicy that allows HTTP requests from external clients. Example NetworkPolicy to allow requests to the Kafka Connect API apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: my-custom-connect-network-policy spec: ingress: - from: - podSelector: 1 matchLabels: app: my-connector-manager ports: - port: 8083 protocol: TCP podSelector: matchLabels: strimzi.io/cluster: my-connect-cluster strimzi.io/kind: KafkaConnect strimzi.io/name: my-connect-cluster-connect policyTypes: - Ingress 1 The label of the pod that is allowed to connect to the API. To add the connector configuration outside the cluster, use the URL of the resource that exposes the API in the curl command. 6.5.5. Limiting access to the Kafka Connect API It is crucial to restrict access to the Kafka Connect API only to trusted users to prevent unauthorized actions and potential security issues. The Kafka Connect API provides extensive capabilities for altering connector configurations, which makes it all the more important to take security precautions. Someone with access to the Kafka Connect API could potentially obtain sensitive information that an administrator may assume is secure. The Kafka Connect REST API can be accessed by anyone who has authenticated access to the OpenShift cluster and knows the endpoint URL, which includes the hostname/IP address and port number. For example, suppose an organization uses a Kafka Connect cluster and connectors to stream sensitive data from a customer database to a central database. The administrator uses a configuration provider plugin to store sensitive information related to connecting to the customer database and the central database, such as database connection details and authentication credentials. The configuration provider protects this sensitive information from being exposed to unauthorized users. However, someone who has access to the Kafka Connect API can still obtain access to the customer database without the consent of the administrator. They can do this by setting up a fake database and configuring a connector to connect to it. 
They then modify the connector configuration to point to the customer database, but instead of sending the data to the central database, they send it to the fake database. By configuring the connector to connect to the fake database, the login details and credentials for connecting to the customer database are intercepted, even though they are stored securely in the configuration provider. If you are using the KafkaConnector custom resources, then by default the OpenShift RBAC rules permit only OpenShift cluster administrators to make changes to connectors. You can also designate non-cluster administrators to manage Streams for Apache Kafka resources . With KafkaConnector resources enabled in your Kafka Connect configuration, changes made directly using the Kafka Connect REST API are reverted by the Cluster Operator. If you are not using the KafkaConnector resource, the default RBAC rules do not limit access to the Kafka Connect API. If you want to limit direct access to the Kafka Connect REST API using OpenShift RBAC, you need to enable and use the KafkaConnector resources. For improved security, we recommend configuring the following properties for the Kafka Connect API: org.apache.kafka.disallowed.login.modules (Kafka 3.4 or later) Set the org.apache.kafka.disallowed.login.modules Java system property to prevent the use of insecure login modules. For example, specifying com.sun.security.auth.module.JndiLoginModule prevents the use of the Kafka JndiLoginModule . Example configuration for disallowing login modules apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster annotations: strimzi.io/use-connector-resources: "true" spec: # ... jvmOptions: javaSystemProperties: - name: org.apache.kafka.disallowed.login.modules value: com.sun.security.auth.module.JndiLoginModule, org.apache.kafka.common.security.kerberos.KerberosLoginModule # ... Only allow trusted login modules and follow the latest advice from Kafka for the version you are using. As a best practice, you should explicitly disallow insecure login modules in your Kafka Connect configuration by using the org.apache.kafka.disallowed.login.modules system property. connector.client.config.override.policy Set the connector.client.config.override.policy property to None to prevent connector configurations from overriding the Kafka Connect configuration and the consumers and producers it uses. Example configuration to specify connector override policy apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster annotations: strimzi.io/use-connector-resources: "true" spec: # ... config: connector.client.config.override.policy: None # ... 6.5.6. Switching from using the Kafka Connect API to using KafkaConnector custom resources You can switch from using the Kafka Connect API to using KafkaConnector custom resources to manage your connectors. To make the switch, do the following in the order shown: Deploy KafkaConnector resources with the configuration to create your connector instances. Enable KafkaConnector resources in your Kafka Connect configuration by setting the strimzi.io/use-connector-resources annotation to true . Warning If you enable KafkaConnector resources before creating them, you delete all connectors. To switch from using KafkaConnector resources to using the Kafka Connect API, first remove the annotation that enables the KafkaConnector resources from your Kafka Connect configuration. 
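Removing the annotation can be done in place; a hedged one-liner, assuming the Kafka Connect cluster is named my-connect-cluster (the trailing dash removes the annotation):

oc annotate kafkaconnect my-connect-cluster strimzi.io/use-connector-resources-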
Otherwise, manual changes made directly using the Kafka Connect REST API are reverted by the Cluster Operator. When making the switch, check the status of the KafkaConnect resource . The value of metadata.generation (the current version of the deployment) must match status.observedGeneration (the latest reconciliation of the resource). When the Kafka Connect cluster is Ready , you can delete the KafkaConnector resources. 6.6. Deploying Kafka MirrorMaker Kafka MirrorMaker replicates data between two or more Kafka clusters, within or across data centers. This process is called mirroring to avoid confusion with the concept of Kafka partition replication. MirrorMaker consumes messages from a source cluster and republishes those messages to a target cluster. Data replication across clusters supports scenarios that require the following: Recovery of data in the event of a system failure Consolidation of data from multiple source clusters for centralized analysis Restriction of data access to a specific cluster Provision of data at a specific location to improve latency 6.6.1. Deploying Kafka MirrorMaker to your OpenShift cluster This procedure shows how to deploy a Kafka MirrorMaker cluster to your OpenShift cluster using the Cluster Operator. The deployment uses a YAML file to provide the specification to create a KafkaMirrorMaker or KafkaMirrorMaker2 resource depending on the version of MirrorMaker deployed. MirrorMaker 2 is based on Kafka Connect and uses its configuration properties. Important Kafka MirrorMaker 1 (referred to as just MirrorMaker in the documentation) has been deprecated in Apache Kafka 3.0.0 and will be removed in Apache Kafka 4.0.0. As a result, the KafkaMirrorMaker custom resource which is used to deploy Kafka MirrorMaker 1 has been deprecated in Streams for Apache Kafka as well. The KafkaMirrorMaker resource will be removed from Streams for Apache Kafka when we adopt Apache Kafka 4.0.0. As a replacement, use the KafkaMirrorMaker2 custom resource with the IdentityReplicationPolicy . Streams for Apache Kafka provides example configuration files . In this procedure, we use the following example files: examples/mirror-maker/kafka-mirror-maker.yaml examples/mirror-maker/kafka-mirror-maker-2.yaml Important If deploying MirrorMaker 2 clusters to run in parallel, using the same target Kafka cluster, each instance must use unique names for internal Kafka Connect topics. To do this, configure each MirrorMaker 2 instance to replace the defaults . Prerequisites The Cluster Operator must be deployed. Procedure Deploy Kafka MirrorMaker to your OpenShift cluster: For MirrorMaker: oc apply -f examples/mirror-maker/kafka-mirror-maker.yaml For MirrorMaker 2: oc apply -f examples/mirror-maker/kafka-mirror-maker-2.yaml Check the status of the deployment: oc get pods -n <my_cluster_operator_namespace> Output shows the deployment name and readiness NAME READY STATUS RESTARTS my-mirror-maker-mirror-maker-<pod_id> 1/1 Running 1 my-mm2-cluster-mirrormaker2-<pod_id> 1/1 Running 1 my-mirror-maker is the name of the Kafka MirrorMaker cluster. my-mm2-cluster is the name of the Kafka MirrorMaker 2 cluster. A pod ID identifies each pod created. With the default deployment, you install a single MirrorMaker or MirrorMaker 2 pod. READY shows the number of replicas that are ready/expected. The deployment is successful when the STATUS displays as Running . Additional resources Kafka MirrorMaker cluster configuration 6.6.2. 
List of Kafka MirrorMaker 2 cluster resources The following resources are created by the Cluster Operator in the OpenShift cluster: <mirrormaker2_cluster_name>-mirrormaker2 Name given to the following MirrorMaker 2 resources: StrimziPodSet that creates the MirrorMaker 2 worker node pods. Headless service that provides stable DNS names to the MirrorMaker 2 pods. Service account used by the MirrorMaker 2 pods. Pod disruption budget configured for the MirrorMaker 2 worker nodes. Network Policy managing access to the MirrorMaker 2 REST API. <mirrormaker2_cluster_name>-mirrormaker2-<pod_id> Pods created by the MirrorMaker 2 StrimziPodSet. <mirrormaker2_cluster_name>-mirrormaker2-api Service which exposes the REST interface for managing the MirrorMaker 2 cluster. <mirrormaker2_cluster_name>-mirrormaker2-config ConfigMap which contains the MirrorMaker 2 ancillary configuration and is mounted as a volume by the MirrorMaker 2 pods. strimzi-<namespace-name>-<mirrormaker2_cluster_name>-mirrormaker2-init Cluster role binding used by the MirrorMaker 2 cluster. 6.6.3. List of Kafka MirrorMaker cluster resources The following resources are created by the Cluster Operator in the OpenShift cluster: <mirrormaker_cluster_name>-mirror-maker Name given to the following MirrorMaker resources: Deployment which is responsible for creating the MirrorMaker pods. Service account used by the MirrorMaker nodes. Pod Disruption Budget configured for the MirrorMaker worker nodes. <mirrormaker_cluster_name>-mirror-maker-config ConfigMap which contains ancillary configuration for MirrorMaker, and is mounted as a volume by the MirrorMaker pods. 6.7. Deploying Kafka Bridge Kafka Bridge provides an API for integrating HTTP-based clients with a Kafka cluster. 6.7.1. Deploying Kafka Bridge to your OpenShift cluster This procedure shows how to deploy a Kafka Bridge cluster to your OpenShift cluster using the Cluster Operator. The deployment uses a YAML file to provide the specification to create a KafkaBridge resource. Streams for Apache Kafka provides example configuration files . In this procedure, we use the following example file: examples/bridge/kafka-bridge.yaml Prerequisites The Cluster Operator must be deployed. Procedure Deploy Kafka Bridge to your OpenShift cluster: oc apply -f examples/bridge/kafka-bridge.yaml Check the status of the deployment: oc get pods -n <my_cluster_operator_namespace> Output shows the deployment name and readiness NAME READY STATUS RESTARTS my-bridge-bridge-<pod_id> 1/1 Running 0 my-bridge is the name of the Kafka Bridge cluster. A pod ID identifies each pod created. With the default deployment, you install a single Kafka Bridge pod. READY shows the number of replicas that are ready/expected. The deployment is successful when the STATUS displays as Running . Additional resources Kafka Bridge cluster configuration Using the Streams for Apache Kafka Bridge 6.7.2. Exposing the Kafka Bridge service to your local machine Use port forwarding to expose the Streams for Apache Kafka Bridge service to your local machine on http://localhost:8080 . Note Port forwarding is only suitable for development and testing purposes. Procedure List the names of the pods in your OpenShift cluster: oc get pods -o name pod/kafka-consumer # ... pod/my-bridge-bridge-<pod_id> Connect to the Kafka Bridge pod on port 8080 : oc port-forward pod/my-bridge-bridge-<pod_id> 8080:8080 & Note If port 8080 on your local machine is already in use, use an alternative HTTP port, such as 8008 . 
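With the forward in place, the Bridge can be exercised from the local machine; a hedged check that lists topics through the Bridge REST API, assuming the default port:

curl -X GET http://localhost:8080/topics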
API requests are now forwarded from port 8080 on your local machine to port 8080 in the Kafka Bridge pod. 6.7.3. Accessing the Kafka Bridge outside of OpenShift After deployment, the Streams for Apache Kafka Bridge can only be accessed by applications running in the same OpenShift cluster. These applications use the <kafka_bridge_name> -bridge-service service to access the API. If you want to make the Kafka Bridge accessible to applications running outside of the OpenShift cluster, you can expose it manually by creating one of the following features: LoadBalancer or NodePort type services Ingress resources (Kubernetes only) OpenShift routes (OpenShift only) If you decide to create Services, use the labels from the selector of the <kafka_bridge_name> -bridge-service service to configure the pods to which the service will route the traffic: # ... selector: strimzi.io/cluster: kafka-bridge-name 1 strimzi.io/kind: KafkaBridge #... 1 Name of the Kafka Bridge custom resource in your OpenShift cluster. 6.7.4. List of Kafka Bridge cluster resources The following resources are created by the Cluster Operator in the OpenShift cluster: <bridge_cluster_name>-bridge Deployment which is in charge to create the Kafka Bridge worker node pods. <bridge_cluster_name>-bridge-service Service which exposes the REST interface of the Kafka Bridge cluster. <bridge_cluster_name>-bridge-config ConfigMap which contains the Kafka Bridge ancillary configuration and is mounted as a volume by the Kafka broker pods. <bridge_cluster_name>-bridge Pod Disruption Budget configured for the Kafka Bridge worker nodes. 6.8. Alternative standalone deployment options for Streams for Apache Kafka operators You can perform a standalone deployment of the Topic Operator and User Operator. Consider a standalone deployment of these operators if you are using a Kafka cluster that is not managed by the Cluster Operator. You deploy the operators to OpenShift. Kafka can be running outside of OpenShift. For example, you might be using a Kafka as a managed service. You adjust the deployment configuration for the standalone operator to match the address of your Kafka cluster. 6.8.1. Deploying the standalone Topic Operator This procedure shows how to deploy the Topic Operator in unidirectional mode as a standalone component for topic management. You can use a standalone Topic Operator with a Kafka cluster that is not managed by the Cluster Operator. Unidirectional topic management maintains topics solely through KafkaTopic resources. For more information on unidirectional topic management, see Section 10.1, "Topic management modes" . Alternate configuration is also shown for deploying the Topic Operator in bidirectional mode. Standalone deployment files are provided with Streams for Apache Kafka. Use the 05-Deployment-strimzi-topic-operator.yaml deployment file to deploy the Topic Operator. Add or set the environment variables needed to make a connection to a Kafka cluster. The Topic Operator watches for KafkaTopic resources in a single namespace. You specify the namespace to watch, and the connection to the Kafka cluster, in the Topic Operator configuration. A single Topic Operator can watch a single namespace. One namespace should be watched by only one Topic Operator. If you want to use more than one Topic Operator, configure each of them to watch different namespaces. In this way, you can use Topic Operators with multiple Kafka clusters. Prerequisites You are running a Kafka cluster for the Topic Operator to connect to. 
As long as the standalone Topic Operator is correctly configured for connection, the Kafka cluster can be running on a bare-metal environment, a virtual machine, or as a managed cloud application service. Procedure Edit the env properties in the install/topic-operator/05-Deployment-strimzi-topic-operator.yaml standalone deployment file. Example standalone Topic Operator deployment configuration apiVersion: apps/v1 kind: Deployment metadata: name: strimzi-topic-operator labels: app: strimzi spec: # ... template: # ... spec: # ... containers: - name: strimzi-topic-operator # ... env: - name: STRIMZI_NAMESPACE 1 valueFrom: fieldRef: fieldPath: metadata.namespace - name: STRIMZI_KAFKA_BOOTSTRAP_SERVERS 2 value: my-kafka-bootstrap-address:9092 - name: STRIMZI_RESOURCE_LABELS 3 value: "strimzi.io/cluster=my-cluster" - name: STRIMZI_FULL_RECONCILIATION_INTERVAL_MS 4 value: "120000" - name: STRIMZI_LOG_LEVEL 5 value: INFO - name: STRIMZI_TLS_ENABLED 6 value: "false" - name: STRIMZI_JAVA_OPTS 7 value: "-Xmx=512M -Xms=256M" - name: STRIMZI_JAVA_SYSTEM_PROPERTIES 8 value: "-Djavax.net.debug=verbose -DpropertyName=value" - name: STRIMZI_PUBLIC_CA 9 value: "false" - name: STRIMZI_TLS_AUTH_ENABLED 10 value: "false" - name: STRIMZI_SASL_ENABLED 11 value: "false" - name: STRIMZI_SASL_USERNAME 12 value: "admin" - name: STRIMZI_SASL_PASSWORD 13 value: "password" - name: STRIMZI_SASL_MECHANISM 14 value: "scram-sha-512" - name: STRIMZI_SECURITY_PROTOCOL 15 value: "SSL" - name: STRIMZI_USE_FINALIZERS value: "false" 16 1 The OpenShift namespace for the Topic Operator to watch for KafkaTopic resources. Specify the namespace of the Kafka cluster. 2 The host and port pair of the bootstrap broker address to discover and connect to all brokers in the Kafka cluster. Use a comma-separated list to specify two or three broker addresses in case a server is down. 3 The label to identify the KafkaTopic resources managed by the Topic Operator. This does not have to be the name of the Kafka cluster. It can be the label assigned to the KafkaTopic resource. If you deploy more than one Topic Operator, the labels must be unique for each. That is, the operators cannot manage the same resources. 4 The interval between periodic reconciliations, in milliseconds. The default is 120000 (2 minutes). 5 The level for printing logging messages. You can set the level to ERROR , WARNING , INFO , DEBUG , or TRACE . 6 Enables TLS support for encrypted communication with the Kafka brokers. 7 (Optional) The Java options used by the JVM running the Topic Operator. 8 (Optional) The debugging ( -D ) options set for the Topic Operator. 9 (Optional) Skips the generation of trust store certificates if TLS is enabled through STRIMZI_TLS_ENABLED . If this environment variable is enabled, the brokers must use a public trusted certificate authority for their TLS certificates. The default is false . 10 (Optional) Generates key store certificates for mTLS authentication. Setting this to false disables client authentication with mTLS to the Kafka brokers. The default is true . 11 (Optional) Enables SASL support for client authentication when connecting to Kafka brokers. The default is false . 12 (Optional) The SASL username for client authentication. Mandatory only if SASL is enabled through STRIMZI_SASL_ENABLED . 13 (Optional) The SASL password for client authentication. Mandatory only if SASL is enabled through STRIMZI_SASL_ENABLED . 14 (Optional) The SASL mechanism for client authentication. 
Mandatory only if SASL is enabled through STRIMZI_SASL_ENABLED . You can set the value to plain , scram-sha-256 , or scram-sha-512 . 15 (Optional) The security protocol used for communication with Kafka brokers. The default value is "PLAINTEXT". You can set the value to PLAINTEXT , SSL , SASL_PLAINTEXT , or SASL_SSL . 16 Set STRIMZI_USE_FINALIZERS to false if you do not want to use finalizers to control topic deletion . If you want to connect to Kafka brokers that are using certificates from a public certificate authority, set STRIMZI_PUBLIC_CA to true . Set this property to true , for example, if you are using Amazon AWS MSK service. If you enabled mTLS with the STRIMZI_TLS_ENABLED environment variable, specify the keystore and truststore used to authenticate connection to the Kafka cluster. Example mTLS configuration # .... env: - name: STRIMZI_TRUSTSTORE_LOCATION 1 value: "/path/to/truststore.p12" - name: STRIMZI_TRUSTSTORE_PASSWORD 2 value: " TRUSTSTORE-PASSWORD " - name: STRIMZI_KEYSTORE_LOCATION 3 value: "/path/to/keystore.p12" - name: STRIMZI_KEYSTORE_PASSWORD 4 value: " KEYSTORE-PASSWORD " # ... 1 The truststore contains the public keys of the Certificate Authorities used to sign the Kafka and ZooKeeper server certificates. 2 The password for accessing the truststore. 3 The keystore contains the private key for mTLS authentication. 4 The password for accessing the keystore. Apply the changes to the Deployment configuration to deploy the Topic Operator. Check the status of the deployment: oc get deployments Output shows the deployment name and readiness NAME READY UP-TO-DATE AVAILABLE strimzi-topic-operator 1/1 1 1 READY shows the number of replicas that are ready/expected. The deployment is successful when the AVAILABLE output shows 1 . 6.8.1.1. Deploying the standalone Topic Operator for bidirectional topic management Bidirectional topic management requires ZooKeeper for cluster management, and maintains topics through KafkaTopic resources and within the Kafka cluster. If you want to switch to using the Topic Operator in this mode, follow these steps to deploy the standalone Topic Operator. Note As the feature gate enabling the Topic Operator to run in unidirectional mode progresses to General Availability, bidirectional mode will be phased out. This transition is aimed at enhancing the user experience, particularly in supporting Kafka in KRaft mode. Undeploy the current standalone Topic Operator. Retain the KafkaTopic resources, which are picked up by the Topic Operator when it is deployed again. Edit the Deployment configuration for the standalone Topic Operator to include ZooKeeper-related environment variables: STRIMZI_ZOOKEEPER_CONNECT STRIMZI_ZOOKEEPER_SESSION_TIMEOUT_MS TC_ZK_CONNECTION_TIMEOUT_MS STRIMZI_USE_ZOOKEEPER_TOPIC_STORE It is the presence or absence of the ZooKeeper variables that defines whether the bidirectional Topic Operator is used. Unidirectional topic management does not use ZooKeeper. If ZooKeeper environment variables are not present, the unidirectional Topic Operator is used. Otherwise, the bidirectional Topic Operator is used. 
Other environment variables that are not used in unidirectional mode can be added if required: STRIMZI_REASSIGN_THROTTLE STRIMZI_REASSIGN_VERIFY_INTERVAL_MS STRIMZI_TOPIC_METADATA_MAX_ATTEMPTS STRIMZI_TOPICS_PATH STRIMZI_STORE_TOPIC STRIMZI_STORE_NAME STRIMZI_APPLICATION_ID STRIMZI_STALE_RESULT_TIMEOUT_MS Example standalone Topic Operator deployment configuration for bidirectional topic management apiVersion: apps/v1 kind: Deployment metadata: name: strimzi-topic-operator labels: app: strimzi spec: # ... template: # ... spec: # ... containers: - name: strimzi-topic-operator # ... env: - name: STRIMZI_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace - name: STRIMZI_KAFKA_BOOTSTRAP_SERVERS value: my-kafka-bootstrap-address:9092 - name: STRIMZI_RESOURCE_LABELS value: "strimzi.io/cluster=my-cluster" - name: STRIMZI_ZOOKEEPER_CONNECT 1 value: my-cluster-zookeeper-client:2181 - name: STRIMZI_ZOOKEEPER_SESSION_TIMEOUT_MS 2 value: "18000" - name: STRIMZI_TOPIC_METADATA_MAX_ATTEMPTS 3 value: "6" - name: STRIMZI_FULL_RECONCILIATION_INTERVAL_MS value: "120000" - name: STRIMZI_LOG_LEVEL value: INFO - name: STRIMZI_TLS_ENABLED value: "false" - name: STRIMZI_JAVA_OPTS value: "-Xmx=512M -Xms=256M" - name: STRIMZI_JAVA_SYSTEM_PROPERTIES value: "-Djavax.net.debug=verbose -DpropertyName=value" - name: STRIMZI_PUBLIC_CA value: "false" - name: STRIMZI_TLS_AUTH_ENABLED value: "false" - name: STRIMZI_SASL_ENABLED value: "false" - name: STRIMZI_SASL_USERNAME value: "admin" - name: STRIMZI_SASL_PASSWORD value: "password" - name: STRIMZI_SASL_MECHANISM value: "scram-sha-512" - name: STRIMZI_SECURITY_PROTOCOL value: "SSL" 1 (ZooKeeper) The host and port pair of the address to connect to the ZooKeeper cluster. This must be the same ZooKeeper cluster that your Kafka cluster is using. 2 (ZooKeeper) The ZooKeeper session timeout, in milliseconds. The default is 18000 (18 seconds). 3 The number of attempts at getting topic metadata from Kafka. The time between each attempt is defined as an exponential backoff. Consider increasing this value when topic creation takes more time due to the number of partitions or replicas. The default is 6 attempts. Apply the changes to the Deployment configuration to deploy the Topic Operator. 6.8.2. Deploying the standalone User Operator This procedure shows how to deploy the User Operator as a standalone component for user management. You can use a standalone User Operator with a Kafka cluster that is not managed by the Cluster Operator. A standalone deployment can operate with any Kafka cluster. Standalone deployment files are provided with Streams for Apache Kafka. Use the 05-Deployment-strimzi-user-operator.yaml deployment file to deploy the User Operator. Add or set the environment variables needed to make a connection to a Kafka cluster. The User Operator watches for KafkaUser resources in a single namespace. You specify the namespace to watch, and the connection to the Kafka cluster, in the User Operator configuration. A single User Operator can watch a single namespace. One namespace should be watched by only one User Operator. If you want to use more than one User Operator, configure each of them to watch different namespaces. In this way, you can use the User Operator with multiple Kafka clusters. Prerequisites You are running a Kafka cluster for the User Operator to connect to. 
As long as the standalone User Operator is correctly configured for connection, the Kafka cluster can be running on a bare-metal environment, a virtual machine, or as a managed cloud application service. Procedure Edit the following env properties in the install/user-operator/05-Deployment-strimzi-user-operator.yaml standalone deployment file. Example standalone User Operator deployment configuration apiVersion: apps/v1 kind: Deployment metadata: name: strimzi-user-operator labels: app: strimzi spec: # ... template: # ... spec: # ... containers: - name: strimzi-user-operator # ... env: - name: STRIMZI_NAMESPACE 1 valueFrom: fieldRef: fieldPath: metadata.namespace - name: STRIMZI_KAFKA_BOOTSTRAP_SERVERS 2 value: my-kafka-bootstrap-address:9092 - name: STRIMZI_CA_CERT_NAME 3 value: my-cluster-clients-ca-cert - name: STRIMZI_CA_KEY_NAME 4 value: my-cluster-clients-ca - name: STRIMZI_LABELS 5 value: "strimzi.io/cluster=my-cluster" - name: STRIMZI_FULL_RECONCILIATION_INTERVAL_MS 6 value: "120000" - name: STRIMZI_WORK_QUEUE_SIZE 7 value: 10000 - name: STRIMZI_CONTROLLER_THREAD_POOL_SIZE 8 value: 10 - name: STRIMZI_USER_OPERATIONS_THREAD_POOL_SIZE 9 value: 4 - name: STRIMZI_LOG_LEVEL 10 value: INFO - name: STRIMZI_GC_LOG_ENABLED 11 value: "true" - name: STRIMZI_CA_VALIDITY 12 value: "365" - name: STRIMZI_CA_RENEWAL 13 value: "30" - name: STRIMZI_JAVA_OPTS 14 value: "-Xmx=512M -Xms=256M" - name: STRIMZI_JAVA_SYSTEM_PROPERTIES 15 value: "-Djavax.net.debug=verbose -DpropertyName=value" - name: STRIMZI_SECRET_PREFIX 16 value: "kafka-" - name: STRIMZI_ACLS_ADMIN_API_SUPPORTED 17 value: "true" - name: STRIMZI_MAINTENANCE_TIME_WINDOWS 18 value: '* * 8-10 * * ?;* * 14-15 * * ?' - name: STRIMZI_KAFKA_ADMIN_CLIENT_CONFIGURATION 19 value: | default.api.timeout.ms=120000 request.timeout.ms=60000 1 The OpenShift namespace for the User Operator to watch for KafkaUser resources. Only one namespace can be specified. 2 The host and port pair of the bootstrap broker address to discover and connect to all brokers in the Kafka cluster. Use a comma-separated list to specify two or three broker addresses in case a server is down. 3 The OpenShift Secret that contains the public key ( ca.crt ) value of the CA (certificate authority) that signs new user certificates for mTLS authentication. 4 The OpenShift Secret that contains the private key ( ca.key ) value of the CA that signs new user certificates for mTLS authentication. 5 The label to identify the KafkaUser resources managed by the User Operator. This does not have to be the name of the Kafka cluster. It can be the label assigned to the KafkaUser resource. If you deploy more than one User Operator, the labels must be unique for each. That is, the operators cannot manage the same resources. 6 The interval between periodic reconciliations, in milliseconds. The default is 120000 (2 minutes). 7 The size of the controller event queue. The queue should be at least as large as the maximum number of users that you expect the User Operator to manage. The default is 1024 . 8 The size of the worker pool for reconciling the users. A bigger pool might require more resources, but it will also handle more KafkaUser resources. The default is 50 . 9 The size of the worker pool for Kafka Admin API and OpenShift operations. A bigger pool might require more resources, but it will also handle more KafkaUser resources. The default is 4 . 10 The level for printing logging messages. You can set the level to ERROR , WARNING , INFO , DEBUG , or TRACE .
11 Enables garbage collection (GC) logging. The default is true . 12 The validity period for the CA. The default is 365 days. 13 The renewal period for the CA. The renewal period is measured backwards from the expiry date of the current certificate. The default is 30 days to initiate certificate renewal before the old certificates expire. 14 (Optional) The Java options used by the JVM running the User Operator. 15 (Optional) The debugging ( -D ) options set for the User Operator. 16 (Optional) Prefix for the names of OpenShift secrets created by the User Operator. 17 (Optional) Indicates whether the Kafka cluster supports management of authorization ACL rules using the Kafka Admin API. When set to false , the User Operator will reject all resources with simple authorization ACL rules. This helps to avoid unnecessary exceptions in the Kafka cluster logs. The default is true . 18 (Optional) Semi-colon separated list of Cron Expressions defining the maintenance time windows during which the expiring user certificates will be renewed. 19 (Optional) Configuration options for the Kafka Admin client used by the User Operator, in the properties format. If you are using mTLS to connect to the Kafka cluster, specify the secrets used to authenticate the connection. Otherwise, go to the next step. Example mTLS configuration # .... env: - name: STRIMZI_CLUSTER_CA_CERT_SECRET_NAME 1 value: my-cluster-cluster-ca-cert - name: STRIMZI_EO_KEY_SECRET_NAME 2 value: my-cluster-entity-operator-certs # ..." 1 The OpenShift Secret that contains the public key ( ca.crt ) value of the CA that signs Kafka broker certificates. 2 The OpenShift Secret that contains the certificate public key ( entity-operator.crt ) and private key ( entity-operator.key ) that is used for mTLS authentication against the Kafka cluster. Deploy the User Operator. oc create -f install/user-operator Check the status of the deployment: oc get deployments Output shows the deployment name and readiness NAME READY UP-TO-DATE AVAILABLE strimzi-user-operator 1/1 1 1 READY shows the number of replicas that are ready/expected. The deployment is successful when the AVAILABLE output shows 1 .
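As an additional check that is not part of the official procedure, you can create a minimal KafkaUser resource in the watched namespace and confirm that the standalone User Operator reconciles it. This is only a sketch: the user name my-user is an assumption, and the strimzi.io/cluster label must match the STRIMZI_LABELS value configured above.

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: tls

After you apply the resource with oc apply -f <file> , the oc get kafkauser my-user command should eventually report a Ready status, and the operator should create a secret with the user credentials (prefixed with kafka- if STRIMZI_SECRET_PREFIX is set as in the example).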
"sed -i 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml",
"sed -i '' 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml",
"create -f install/cluster-operator -n my-cluster-operator-namespace",
"get deployments -n my-cluster-operator-namespace",
"NAME READY UP-TO-DATE AVAILABLE strimzi-cluster-operator 1/1 1 1",
"sed -i 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml",
"sed -i '' 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml",
"apiVersion: apps/v1 kind: Deployment spec: # template: spec: serviceAccountName: strimzi-cluster-operator containers: - name: strimzi-cluster-operator image: registry.redhat.io/amq-streams/strimzi-rhel9-operator:2.7.0 imagePullPolicy: IfNotPresent env: - name: STRIMZI_NAMESPACE value: watched-namespace-1,watched-namespace-2,watched-namespace-3",
"create -f install/cluster-operator/020-RoleBinding-strimzi-cluster-operator.yaml -n <watched_namespace> create -f install/cluster-operator/023-RoleBinding-strimzi-cluster-operator.yaml -n <watched_namespace> create -f install/cluster-operator/031-RoleBinding-strimzi-cluster-operator-entity-operator-delegation.yaml -n <watched_namespace>",
"create -f install/cluster-operator -n my-cluster-operator-namespace",
"get deployments -n my-cluster-operator-namespace",
"NAME READY UP-TO-DATE AVAILABLE strimzi-cluster-operator 1/1 1 1",
"sed -i 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml",
"sed -i '' 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml",
"apiVersion: apps/v1 kind: Deployment spec: # template: spec: # serviceAccountName: strimzi-cluster-operator containers: - name: strimzi-cluster-operator image: registry.redhat.io/amq-streams/strimzi-rhel9-operator:2.7.0 imagePullPolicy: IfNotPresent env: - name: STRIMZI_NAMESPACE value: \"*\" #",
"create clusterrolebinding strimzi-cluster-operator-namespaced --clusterrole=strimzi-cluster-operator-namespaced --serviceaccount my-cluster-operator-namespace:strimzi-cluster-operator create clusterrolebinding strimzi-cluster-operator-watched --clusterrole=strimzi-cluster-operator-watched --serviceaccount my-cluster-operator-namespace:strimzi-cluster-operator create clusterrolebinding strimzi-cluster-operator-entity-operator-delegation --clusterrole=strimzi-entity-operator --serviceaccount my-cluster-operator-namespace:strimzi-cluster-operator",
"create -f install/cluster-operator -n my-cluster-operator-namespace",
"get deployments -n my-cluster-operator-namespace",
"NAME READY UP-TO-DATE AVAILABLE strimzi-cluster-operator 1/1 1 1",
"apply -f examples/kafka/kraft/kafka-with-dual-role-nodes.yaml",
"apply -f examples/kafka/kraft/kafka.yaml",
"apply -f examples/kafka/kraft/kafka-ephemeral.yaml",
"apply -f examples/kafka/kafka-with-node-pools.yaml",
"get pods -n <my_cluster_operator_namespace>",
"NAME READY STATUS RESTARTS my-cluster-entity-operator 3/3 Running 0 my-cluster-pool-a-0 1/1 Running 0 my-cluster-pool-a-1 1/1 Running 0 my-cluster-pool-a-4 1/1 Running 0",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: version: 3.7.0 # config: # log.message.format.version: \"3.7\" inter.broker.protocol.version: \"3.7\" #",
"apply -f examples/kafka/kafka-ephemeral.yaml",
"apply -f examples/kafka/kafka-persistent.yaml",
"get pods -n <my_cluster_operator_namespace>",
"NAME READY STATUS RESTARTS my-cluster-entity-operator 3/3 Running 0 my-cluster-kafka-0 1/1 Running 0 my-cluster-kafka-1 1/1 Running 0 my-cluster-kafka-2 1/1 Running 0 my-cluster-zookeeper-0 1/1 Running 0 my-cluster-zookeeper-1 1/1 Running 0 my-cluster-zookeeper-2 1/1 Running 0",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # entityOperator: topicOperator: {} userOperator: {}",
"apply -f <kafka_configuration_file>",
"get pods -n <my_cluster_operator_namespace>",
"NAME READY STATUS RESTARTS my-cluster-entity-operator 3/3 Running 0",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # entityOperator: topicOperator: {} userOperator: {}",
"apply -f <kafka_configuration_file>",
"get pods -n <my_cluster_operator_namespace>",
"NAME READY STATUS RESTARTS my-cluster-entity-operator 3/3 Running 0",
"exec -ti my-cluster -zookeeper-0 -- bin/zookeeper-shell.sh localhost:12181 ls /",
"apply -f examples/connect/kafka-connect.yaml",
"get pods -n <my_cluster_operator_namespace>",
"NAME READY STATUS RESTARTS my-connect-cluster-connect-<pod_id> 1/1 Running 0",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: 1 # build: output: 2 type: docker image: my-registry.io/my-org/my-connect-cluster:latest pushSecret: my-registry-credentials plugins: 3 - name: connector-1 artifacts: - type: tgz url: <url_to_download_connector_1_artifact> sha512sum: <SHA-512_checksum_of_connector_1_artifact> - name: connector-2 artifacts: - type: jar url: <url_to_download_connector_2_artifact> sha512sum: <SHA-512_checksum_of_connector_2_artifact> #",
"oc apply -f <kafka_connect_configuration_file>",
"FROM registry.redhat.io/amq-streams/kafka-37-rhel9:2.7.0 USER root:root COPY ./ my-plugins / /opt/kafka/plugins/ USER 1001",
"tree ./ my-plugins / ./ my-plugins / ├── debezium-connector-mongodb │ ├── bson-<version>.jar │ ├── CHANGELOG.md │ ├── CONTRIBUTE.md │ ├── COPYRIGHT.txt │ ├── debezium-connector-mongodb-<version>.jar │ ├── debezium-core-<version>.jar │ ├── LICENSE.txt │ ├── mongodb-driver-core-<version>.jar │ ├── README.md │ └── # ├── debezium-connector-mysql │ ├── CHANGELOG.md │ ├── CONTRIBUTE.md │ ├── COPYRIGHT.txt │ ├── debezium-connector-mysql-<version>.jar │ ├── debezium-core-<version>.jar │ ├── LICENSE.txt │ ├── mysql-binlog-connector-java-<version>.jar │ ├── mysql-connector-java-<version>.jar │ ├── README.md │ └── # └── debezium-connector-postgres ├── CHANGELOG.md ├── CONTRIBUTE.md ├── COPYRIGHT.txt ├── debezium-connector-postgres-<version>.jar ├── debezium-core-<version>.jar ├── LICENSE.txt ├── postgresql-<version>.jar ├── protobuf-java-<version>.jar ├── README.md └── #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: 1 # image: my-new-container-image 2 config: 3 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster annotations: strimzi.io/use-connector-resources: \"true\" spec: #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-source-connector 1 labels: strimzi.io/cluster: my-connect-cluster 2 spec: class: org.apache.kafka.connect.file.FileStreamSourceConnector 3 tasksMax: 2 4 autoRestart: 5 enabled: true config: 6 file: \"/opt/kafka/LICENSE\" 7 topic: my-topic 8 #",
"apply -f examples/connect/source-connector.yaml",
"touch examples/connect/sink-connector.yaml",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-sink-connector labels: strimzi.io/cluster: my-connect spec: class: org.apache.kafka.connect.file.FileStreamSinkConnector 1 tasksMax: 2 config: 2 file: \"/tmp/my-file\" 3 topics: my-topic 4",
"apply -f examples/connect/sink-connector.yaml",
"get kctr --selector strimzi.io/cluster=<my_connect_cluster> -o name my-source-connector my-sink-connector",
"exec <my_kafka_cluster>-kafka-0 -i -t -- bin/kafka-console-consumer.sh --bootstrap-server <my_kafka_cluster>-kafka-bootstrap. NAMESPACE .svc:9092 --topic my-topic --from-beginning",
"curl -X POST http://my-connect-cluster-connect-api:8083/connectors -H 'Content-Type: application/json' -d '{ \"name\": \"my-source-connector\", \"config\": { \"connector.class\":\"org.apache.kafka.connect.file.FileStreamSourceConnector\", \"file\": \"/opt/kafka/LICENSE\", \"topic\":\"my-topic\", \"tasksMax\": \"4\", \"type\": \"source\" } }'",
"selector: strimzi.io/cluster: my-connect-cluster 1 strimzi.io/kind: KafkaConnect strimzi.io/name: my-connect-cluster-connect 2 #",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: my-custom-connect-network-policy spec: ingress: - from: - podSelector: 1 matchLabels: app: my-connector-manager ports: - port: 8083 protocol: TCP podSelector: matchLabels: strimzi.io/cluster: my-connect-cluster strimzi.io/kind: KafkaConnect strimzi.io/name: my-connect-cluster-connect policyTypes: - Ingress",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster annotations: strimzi.io/use-connector-resources: \"true\" spec: # jvmOptions: javaSystemProperties: - name: org.apache.kafka.disallowed.login.modules value: com.sun.security.auth.module.JndiLoginModule, org.apache.kafka.common.security.kerberos.KerberosLoginModule",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster annotations: strimzi.io/use-connector-resources: \"true\" spec: # config: connector.client.config.override.policy: None",
"apply -f examples/mirror-maker/kafka-mirror-maker.yaml",
"apply -f examples/mirror-maker/kafka-mirror-maker-2.yaml",
"get pods -n <my_cluster_operator_namespace>",
"NAME READY STATUS RESTARTS my-mirror-maker-mirror-maker-<pod_id> 1/1 Running 1 my-mm2-cluster-mirrormaker2-<pod_id> 1/1 Running 1",
"apply -f examples/bridge/kafka-bridge.yaml",
"get pods -n <my_cluster_operator_namespace>",
"NAME READY STATUS RESTARTS my-bridge-bridge-<pod_id> 1/1 Running 0",
"get pods -o name pod/kafka-consumer pod/my-bridge-bridge-<pod_id>",
"port-forward pod/my-bridge-bridge-<pod_id> 8080:8080 &",
"selector: strimzi.io/cluster: kafka-bridge-name 1 strimzi.io/kind: KafkaBridge #",
"apiVersion: apps/v1 kind: Deployment metadata: name: strimzi-topic-operator labels: app: strimzi spec: # template: # spec: # containers: - name: strimzi-topic-operator # env: - name: STRIMZI_NAMESPACE 1 valueFrom: fieldRef: fieldPath: metadata.namespace - name: STRIMZI_KAFKA_BOOTSTRAP_SERVERS 2 value: my-kafka-bootstrap-address:9092 - name: STRIMZI_RESOURCE_LABELS 3 value: \"strimzi.io/cluster=my-cluster\" - name: STRIMZI_FULL_RECONCILIATION_INTERVAL_MS 4 value: \"120000\" - name: STRIMZI_LOG_LEVEL 5 value: INFO - name: STRIMZI_TLS_ENABLED 6 value: \"false\" - name: STRIMZI_JAVA_OPTS 7 value: \"-Xmx=512M -Xms=256M\" - name: STRIMZI_JAVA_SYSTEM_PROPERTIES 8 value: \"-Djavax.net.debug=verbose -DpropertyName=value\" - name: STRIMZI_PUBLIC_CA 9 value: \"false\" - name: STRIMZI_TLS_AUTH_ENABLED 10 value: \"false\" - name: STRIMZI_SASL_ENABLED 11 value: \"false\" - name: STRIMZI_SASL_USERNAME 12 value: \"admin\" - name: STRIMZI_SASL_PASSWORD 13 value: \"password\" - name: STRIMZI_SASL_MECHANISM 14 value: \"scram-sha-512\" - name: STRIMZI_SECURITY_PROTOCOL 15 value: \"SSL\" - name: STRIMZI_USE_FINALIZERS value: \"false\" 16",
". env: - name: STRIMZI_TRUSTSTORE_LOCATION 1 value: \"/path/to/truststore.p12\" - name: STRIMZI_TRUSTSTORE_PASSWORD 2 value: \" TRUSTSTORE-PASSWORD \" - name: STRIMZI_KEYSTORE_LOCATION 3 value: \"/path/to/keystore.p12\" - name: STRIMZI_KEYSTORE_PASSWORD 4 value: \" KEYSTORE-PASSWORD \"",
"get deployments",
"NAME READY UP-TO-DATE AVAILABLE strimzi-topic-operator 1/1 1 1",
"apiVersion: apps/v1 kind: Deployment metadata: name: strimzi-topic-operator labels: app: strimzi spec: # template: # spec: # containers: - name: strimzi-topic-operator # env: - name: STRIMZI_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace - name: STRIMZI_KAFKA_BOOTSTRAP_SERVERS value: my-kafka-bootstrap-address:9092 - name: STRIMZI_RESOURCE_LABELS value: \"strimzi.io/cluster=my-cluster\" - name: STRIMZI_ZOOKEEPER_CONNECT 1 value: my-cluster-zookeeper-client:2181 - name: STRIMZI_ZOOKEEPER_SESSION_TIMEOUT_MS 2 value: \"18000\" - name: STRIMZI_TOPIC_METADATA_MAX_ATTEMPTS 3 value: \"6\" - name: STRIMZI_FULL_RECONCILIATION_INTERVAL_MS value: \"120000\" - name: STRIMZI_LOG_LEVEL value: INFO - name: STRIMZI_TLS_ENABLED value: \"false\" - name: STRIMZI_JAVA_OPTS value: \"-Xmx=512M -Xms=256M\" - name: STRIMZI_JAVA_SYSTEM_PROPERTIES value: \"-Djavax.net.debug=verbose -DpropertyName=value\" - name: STRIMZI_PUBLIC_CA value: \"false\" - name: STRIMZI_TLS_AUTH_ENABLED value: \"false\" - name: STRIMZI_SASL_ENABLED value: \"false\" - name: STRIMZI_SASL_USERNAME value: \"admin\" - name: STRIMZI_SASL_PASSWORD value: \"password\" - name: STRIMZI_SASL_MECHANISM value: \"scram-sha-512\" - name: STRIMZI_SECURITY_PROTOCOL value: \"SSL\"",
"apiVersion: apps/v1 kind: Deployment metadata: name: strimzi-user-operator labels: app: strimzi spec: # template: # spec: # containers: - name: strimzi-user-operator # env: - name: STRIMZI_NAMESPACE 1 valueFrom: fieldRef: fieldPath: metadata.namespace - name: STRIMZI_KAFKA_BOOTSTRAP_SERVERS 2 value: my-kafka-bootstrap-address:9092 - name: STRIMZI_CA_CERT_NAME 3 value: my-cluster-clients-ca-cert - name: STRIMZI_CA_KEY_NAME 4 value: my-cluster-clients-ca - name: STRIMZI_LABELS 5 value: \"strimzi.io/cluster=my-cluster\" - name: STRIMZI_FULL_RECONCILIATION_INTERVAL_MS 6 value: \"120000\" - name: STRIMZI_WORK_QUEUE_SIZE 7 value: 10000 - name: STRIMZI_CONTROLLER_THREAD_POOL_SIZE 8 value: 10 - name: STRIMZI_USER_OPERATIONS_THREAD_POOL_SIZE 9 value: 4 - name: STRIMZI_LOG_LEVEL 10 value: INFO - name: STRIMZI_GC_LOG_ENABLED 11 value: \"true\" - name: STRIMZI_CA_VALIDITY 12 value: \"365\" - name: STRIMZI_CA_RENEWAL 13 value: \"30\" - name: STRIMZI_JAVA_OPTS 14 value: \"-Xmx=512M -Xms=256M\" - name: STRIMZI_JAVA_SYSTEM_PROPERTIES 15 value: \"-Djavax.net.debug=verbose -DpropertyName=value\" - name: STRIMZI_SECRET_PREFIX 16 value: \"kafka-\" - name: STRIMZI_ACLS_ADMIN_API_SUPPORTED 17 value: \"true\" - name: STRIMZI_MAINTENANCE_TIME_WINDOWS 18 value: '* * 8-10 * * ?;* * 14-15 * * ?' - name: STRIMZI_KAFKA_ADMIN_CLIENT_CONFIGURATION 19 value: | default.api.timeout.ms=120000 request.timeout.ms=60000",
". env: - name: STRIMZI_CLUSTER_CA_CERT_SECRET_NAME 1 value: my-cluster-cluster-ca-cert - name: STRIMZI_EO_KEY_SECRET_NAME 2 value: my-cluster-entity-operator-certs ...\"",
"create -f install/user-operator",
"get deployments",
"NAME READY UP-TO-DATE AVAILABLE strimzi-user-operator 1/1 1 1"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/deploying_and_managing_streams_for_apache_kafka_on_openshift/deploy-tasks_str |
Chapter 7. Advisories related to this release | Chapter 7. Advisories related to this release The following advisories have been issued to document enhancements, bugfixes, and CVE fixes included in this release. RHBA-2023:7664 | null | https://docs.redhat.com/en/documentation/red_hat_support_for_spring_boot/2.7/html/release_notes_for_spring_boot_2.7/advisories-related-to-current-release-spring-boot |
Chapter 34. Setting up an 802.1x network authentication service for LAN clients by using hostapd with FreeRADIUS backend | Chapter 34. Setting up an 802.1x network authentication service for LAN clients by using hostapd with FreeRADIUS backend The IEEE 802.1X standard defines secure authentication and authorization methods to protect networks from unauthorized clients. By using the hostapd service and FreeRADIUS, you can provide network access control (NAC) in your network. Note Red Hat supports only FreeRADIUS with Red Hat Identity Management (IdM) as the backend source of authentication. In this documentation, the RHEL host acts as a bridge to connect different clients with an existing network. However, the RHEL host grants only authenticated clients access to the network. 34.1. Prerequisites A clean installation of the freeradius and freeradius-ldap packages. If the packages are already installed, remove the /etc/raddb/ directory, uninstall and then install the packages again. Do not reinstall the packages by using the yum reinstall command, because the permissions and symbolic links in the /etc/raddb/ directory are then different. The host on which you want to configure FreeRADIUS is a client in an IdM domain . 34.2. Setting up the bridge on the authenticator A network bridge is a link-layer device which forwards traffic between hosts and networks based on a table of MAC addresses. If you set up RHEL as an 802.1X authenticator, add both the interfaces on which to perform authentication and the LAN interface to the bridge. Prerequisites The server has multiple Ethernet interfaces. Procedure If the bridge interface does not exist, create it: Assign the Ethernet interfaces to the bridge: Enable the bridge to forward extensible authentication protocol over LAN (EAPOL) packets: Configure the connection to automatically activate the ports: Activate the connection: Verification Display the link status of Ethernet devices that are ports of a specific bridge: Verify if forwarding of EAPOL packets is enabled on the br0 device: If the command returns 0x8 , forwarding is enabled. Additional resources nm-settings(5) man page on your system 34.3. Configuring FreeRADIUS to authenticate network clients securely by using EAP FreeRADIUS supports different methods of the Extensible authentication protocol (EAP). However, for a supported and secure scenario, use EAP-TTLS (tunneled transport layer security). With EAP-TTLS, the clients use a secure TLS connection as the outer authentication protocol to set up the tunnel. The inner authentication then uses LDAP to authenticate to Identity Management. To use EAP-TTLS, you need a TLS server certificate. Note The default FreeRADIUS configuration files serve as documentation and describe all parameters and directives. If you want to disable certain features, comment them out instead of removing the corresponding parts in the configuration files. This enables you to preserve the structure of the configuration files and the included documentation. Prerequisites You installed the freeradius and freeradius-ldap packages. The configuration files in the /etc/raddb/ directory are unchanged and as provided by the freeradius packages. The host is enrolled in a Red Hat Enterprise Linux Identity Management (IdM) domain. Procedure Create a private key and request a certificate from IdM: The certmonger service stores the private key in the /etc/pki/tls/private/radius.key file and the certificate in the /etc/pki/tls/certs/radius.pem file, and it sets secure permissions. 
Additionally, certmonger will monitor the certificate, renew it before it expires, and restart the radiusd service after the certificate is renewed. Verify that the CA successfully issued the certificate: Create the /etc/raddb/certs/dh file with Diffie-Hellman (DH) parameters. For example, to create a DH file with a 2048-bit prime, enter: For security reasons, do not use a DH file with a prime of less than 2048 bits. Depending on the number of bits, the creation of the file can take several minutes. Edit the /etc/raddb/mods-available/eap file: Configure the TLS-related settings in the tls-config tls-common directive: Set the default_eap_type parameter in the eap directive to ttls : Comment out the md5 directives to disable the insecure EAP-MD5 authentication method: Note that, in the default configuration file, other insecure EAP authentication methods are commented out by default. Edit the /etc/raddb/sites-available/default file, and comment out all authentication methods other than eap : This leaves only EAP enabled for the outer authentication and disables plain-text authentication methods. Edit the /etc/raddb/sites-available/inner-tunnel file, and make the following changes: Comment out the -ldap entry and add the ldap module configuration to the authorize directive: Uncomment the LDAP authentication type in the authenticate directive: Enable the ldap module: Edit the /etc/raddb/mods-available/ldap file, and make the following changes: In the ldap directive, set the IdM LDAP server URL and the base distinguished name (DN): Specify the ldaps protocol in the server URL to use TLS-encrypted connections between the FreeRADIUS host and the IdM server. In the ldap directive, enable TLS certificate validation of the IdM LDAP server: Edit the /etc/raddb/clients.conf file: Set a secure password in the localhost and localhost_ipv6 client directives: Add a client directive for the network authenticator: Optional: If other hosts should also be able to access the FreeRADIUS service, add client directives for them as well, for example: The ipaddr parameter accepts IPv4 and IPv6 addresses, and you can use the optional classless inter-domain routing (CIDR) notation to specify ranges. However, you can set only one value in this parameter. For example, to grant access to both an IPv4 and IPv6 address, you must add two client directives. Use a descriptive name for the client directive, such as a hostname or a word that describes where the IP range is used. Verify the configuration files: Open the RADIUS ports in the firewalld service: Enable and start the radiusd service: Verification Testing EAP-TTLS authentication against a FreeRADIUS server or authenticator Troubleshooting If the radiusd service fails to start, verify that you can resolve the IdM server host name: For other problems, run radiusd in debug mode: Stop the radiusd service: Start the service in debug mode: Perform authentication tests on the FreeRADIUS host, as referenced in the Verification section. Next steps Disable authentication methods and other features that are not required. 34.4. Configuring hostapd as an authenticator in a wired network The host access point daemon ( hostapd ) service can act as an authenticator in a wired network to provide 802.1X authentication. For this, the hostapd service requires a RADIUS server that authenticates the clients. The hostapd service provides an integrated RADIUS server. However, use the integrated RADIUS server only for testing purposes.
For production environments, use a FreeRADIUS server, which supports additional features, such as different authentication methods and access control. Important The hostapd service does not interact with the traffic plane. The service acts only as an authenticator. For example, use a script or service that uses the hostapd control interface to allow or deny traffic based on the result of authentication events. Prerequisites You installed the hostapd package. The FreeRADIUS server has been configured, and it is ready to authenticate clients. Procedure Create the /etc/hostapd/hostapd.conf file with the following content: For further details about the parameters used in this configuration, see their descriptions in the /usr/share/doc/hostapd/hostapd.conf example configuration file. Enable and start the hostapd service: Verification Testing EAP-TTLS authentication against a FreeRADIUS server or authenticator Troubleshooting If the hostapd service fails to start, verify that the bridge interface you use in the /etc/hostapd/hostapd.conf file is present on the system: For other problems, run hostapd in debug mode: Stop the hostapd service: Start the service in debug mode: Perform authentication tests on the FreeRADIUS host, as referenced in the Verification section. Additional resources hostapd.conf(5) man page on your system /usr/share/doc/hostapd/hostapd.conf file 34.5. Testing EAP-TTLS authentication against a FreeRADIUS server or authenticator To test if authentication by using extensible authentication protocol (EAP) over tunneled transport layer security (EAP-TTLS) works as expected, run this procedure: After you set up the FreeRADIUS server After you set up the hostapd service as an authenticator for 802.1X network authentication. The output of the test utilities used in this procedure provides additional information about the EAP communication and helps you to debug problems. Prerequisites When you want to authenticate to: A FreeRADIUS server: The eapol_test utility, provided by the hostapd package, is installed. The client, on which you run this procedure, has been authorized in the FreeRADIUS server's client databases. An authenticator, the wpa_supplicant utility, provided by the same-named package, is installed. You stored the certificate authority (CA) certificate in the /etc/ipa/ca.crt file. Procedure Optional: Create a user in Identity Management (IdM): Create the /etc/wpa_supplicant/wpa_supplicant-TTLS.conf file with the following content: To authenticate to: A FreeRADIUS server, enter: The -a option defines the IP address of the FreeRADIUS server, and the -s option specifies the password for the host on which you run the command in the FreeRADIUS server's client configuration. An authenticator, enter: The -i option specifies the network interface name on which wpa_supplicant sends out extensible authentication protocol over LAN (EAPOL) packets. For more debugging information, pass the -d option to the command. Additional resources /usr/share/doc/wpa_supplicant/wpa_supplicant.conf file 34.6. Blocking and allowing traffic based on hostapd authentication events The hostapd service does not interact with the traffic plane. The service acts only as an authenticator. However, you can write a script to allow and deny traffic based on the result of authentication events. Important This procedure is not supported and is not an enterprise-ready solution. It only demonstrates how to block or allow traffic by evaluating events retrieved by hostapd_cli .
When the 802-1x-tr-mgmt systemd service starts, RHEL blocks all traffic on the listen port of hostapd except extensible authentication protocol over LAN (EAPOL) packets and uses the hostapd_cli utility to connect to the hostapd control interface. The /usr/local/bin/802-1x-tr-mgmt script then evaluates events. Depending on the different events received by hostapd_cli , the script allows or blocks traffic for MAC addresses. Note that, when the 802-1x-tr-mgmt service stops, all traffic is automatically allowed again. Perform this procedure on the hostapd server. Prerequisites The hostapd service has been configured, and the service is ready to authenticate clients. Procedure Create the /usr/local/bin/802-1x-tr-mgmt file with the following content: #!/bin/sh TABLE="tr-mgmt-USD{1}" read -r -d '' TABLE_DEF << EOF table bridge USD{TABLE} { set allowed_macs { type ether_addr } chain accesscontrol { ether saddr @allowed_macs accept ether daddr @allowed_macs accept drop } chain forward { type filter hook forward priority 0; policy accept; meta ibrname "br0" jump accesscontrol } } EOF case USD{2:-NOTANEVENT} in block_all) nft destroy table bridge "USDTABLE" printf "USDTABLE_DEF" | nft -f - echo "USD1: All the bridge traffic blocked. Traffic for a client with a given MAC will be allowed after 802.1x authentication" ;; AP-STA-CONNECTED | CTRL-EVENT-EAP-SUCCESS | CTRL-EVENT-EAP-SUCCESS2) nft add element bridge tr-mgmt-br0 allowed_macs { USD3 } echo "USD1: Allowed traffic from USD3" ;; AP-STA-DISCONNECTED | CTRL-EVENT-EAP-FAILURE) nft delete element bridge tr-mgmt-br0 allowed_macs { USD3 } echo "USD1: Denied traffic from USD3" ;; allow_all) nft destroy table bridge "USDTABLE" echo "USD1: Allowed all bridge traffice again" ;; NOTANEVENT) echo "USD0 was called incorrectly, usage: USD0 interface event [mac_address]" ;; esac Create the /etc/systemd/system/[email protected] systemd service file with the following content: Reload systemd: Enable and start the 802-1x-tr-mgmt service with the interface name hostapd is listening on: Verification Authenticate with a client to the network. See Testing EAP-TTLS authentication against a FreeRADIUS server or authenticator . Additional resources systemd.service(5) man page on your system | [
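As an additional verification sketch that is not part of the official procedure: after a client authenticates, you can inspect the nftables set that the example script maintains to confirm that the client's MAC address was added. The table name tr-mgmt-br0 assumes the br0 interface used above.

nft list set bridge tr-mgmt-br0 allowed_macs

The output should list one element per authenticated client. When the 802-1x-tr-mgmt service stops, the script's allow_all branch destroys the whole table, so the set disappears and all bridge traffic is allowed again.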
"nmcli connection add type bridge con-name br0 ifname br0",
"nmcli connection add type ethernet slave-type bridge con-name br0-port1 ifname enp1s0 master br0 nmcli connection add type ethernet slave-type bridge con-name br0-port2 ifname enp7s0 master br0 nmcli connection add type ethernet slave-type bridge con-name br0-port3 ifname enp8s0 master br0 nmcli connection add type ethernet slave-type bridge con-name br0-port4 ifname enp9s0 master br0",
"nmcli connection modify br0 group-forward-mask 8",
"nmcli connection modify br0 connection.autoconnect-slaves 1",
"nmcli connection up br0",
"ip link show master br0 3: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master br0 state UP mode DEFAULT group default qlen 1000 link/ether 52:54:00:62:61:0e brd ff:ff:ff:ff:ff:ff",
"cat /sys/class/net/br0/bridge/group_fwd_mask 0x8",
"ipa-getcert request -w -k /etc/pki/tls/private/radius.key -f /etc/pki/tls/certs/radius.pem -o \"root:radiusd\" -m 640 -O \"root:radiusd\" -M 640 -T caIPAserviceCert -C 'systemctl restart radiusd.service' -N freeradius.idm.example.com -D freeradius.idm.example.com -K radius/ freeradius.idm.example.com",
"ipa-getcert list -f /etc/pki/tls/certs/radius.pem Number of certificates and requests being tracked: 1. Request ID '20240918142211': status: MONITORING stuck: no key pair storage: type=FILE,location='/etc/pki/tls/private/radius.key' certificate: type=FILE,location='/etc/pki/tls/certs/radius.crt'",
"openssl dhparam -out /etc/raddb/certs/dh 2048",
"eap { tls-config tls-common { private_key_file = /etc/pki/tls/private/radius.key certificate_file = /etc/pki/tls/certs/radius.pem ca_file = /etc/ipa/ca.crt } }",
"eap { default_eap_type = ttls }",
"eap { # md5 { # } }",
"authenticate { # Auth-Type PAP { # pap # } # Auth-Type CHAP { # chap # } # Auth-Type MS-CHAP { # mschap # } # mschap # digest }",
"authorize { #-ldap ldap if ((ok || updated) && User-Password) { update { control:Auth-Type := ldap } } }",
"authenticate { Auth-Type LDAP { ldap } }",
"ln -s /etc/raddb/mods-available/ldap /etc/raddb/mods-enabled/ldap",
"ldap { server = 'ldaps:// idm_server.idm.example.com ' base_dn = 'cn=users,cn=accounts, dc=idm,dc=example,dc=com ' }",
"tls { require_cert = 'demand' }",
"client localhost { ipaddr = 127.0.0.1 secret = localhost_client_password } client localhost_ipv6 { ipv6addr = ::1 secret = localhost_client_password }",
"client hostapd.example.org { ipaddr = 192.0.2.2/32 secret = hostapd_client_password }",
"client <hostname_or_description> { ipaddr = <IP_address_or_range> secret = <client_password> }",
"radiusd -XC Configuration appears to be OK",
"firewall-cmd --permanent --add-service=radius firewall-cmd --reload",
"systemctl enable --now radiusd",
"host -v idm_server.idm.example.com",
"systemctl stop radiusd",
"radiusd -X Ready to process requests",
"General settings of hostapd =========================== Control interface settings ctrl_interface= /var/run/hostapd ctrl_interface_group= wheel Enable logging for all modules logger_syslog= -1 logger_stdout= -1 Log level logger_syslog_level= 2 logger_stdout_level= 2 Wired 802.1X authentication =========================== Driver interface type driver=wired Enable IEEE 802.1X authorization ieee8021x=1 Use port access entry (PAE) group address (01:80:c2:00:00:03) when sending EAPOL frames use_pae_group_addr=1 Network interface for authentication requests interface= br0 RADIUS client configuration =========================== Local IP address used as NAS-IP-Address own_ip_addr= 192.0.2.2 Unique NAS-Identifier within scope of RADIUS server nas_identifier= hostapd.example.org RADIUS authentication server auth_server_addr= 192.0.2.1 auth_server_port= 1812 auth_server_shared_secret= hostapd_client_password RADIUS accounting server acct_server_addr= 192.0.2.1 acct_server_port= 1813 acct_server_shared_secret= hostapd_client_password",
"systemctl enable --now hostapd",
"ip link show br0",
"systemctl stop hostapd",
"hostapd -d /etc/hostapd/hostapd.conf",
"ipa user-add --first \" Test \" --last \" User \" idm_user --password",
"ap_scan=0 network={ eap=TTLS eapol_flags=0 key_mgmt=IEEE8021X # Anonymous identity (sent in unencrypted phase 1) # Can be any string anonymous_identity=\" anonymous \" # Inner authentication (sent in TLS-encrypted phase 2) phase2=\"auth= PAP \" identity=\" idm_user \" password=\" idm_user_password \" # CA certificate to validate the RADIUS server's identity ca_cert=\" /etc/ipa/ca.crt \" }",
"eapol_test -c /etc/wpa_supplicant/wpa_supplicant-TTLS.conf -a 192.0.2.1 -s <client_password> EAP: Status notification: remote certificate verification (param=success) CTRL-EVENT-EAP-SUCCESS EAP authentication completed successfully SUCCESS",
"wpa_supplicant -c /etc/wpa_supplicant/wpa_supplicant-TTLS.conf -D wired -i enp0s31f6 enp0s31f6: CTRL-EVENT-EAP-SUCCESS EAP authentication completed successfully",
"#!/bin/sh TABLE=\"tr-mgmt-USD{1}\" read -r -d '' TABLE_DEF << EOF table bridge USD{TABLE} { set allowed_macs { type ether_addr } chain accesscontrol { ether saddr @allowed_macs accept ether daddr @allowed_macs accept drop } chain forward { type filter hook forward priority 0; policy accept; meta ibrname \"br0\" jump accesscontrol } } EOF case USD{2:-NOTANEVENT} in block_all) nft destroy table bridge \"USDTABLE\" printf \"USDTABLE_DEF\" | nft -f - echo \"USD1: All the bridge traffic blocked. Traffic for a client with a given MAC will be allowed after 802.1x authentication\" ;; AP-STA-CONNECTED | CTRL-EVENT-EAP-SUCCESS | CTRL-EVENT-EAP-SUCCESS2) nft add element bridge tr-mgmt-br0 allowed_macs { USD3 } echo \"USD1: Allowed traffic from USD3\" ;; AP-STA-DISCONNECTED | CTRL-EVENT-EAP-FAILURE) nft delete element bridge tr-mgmt-br0 allowed_macs { USD3 } echo \"USD1: Denied traffic from USD3\" ;; allow_all) nft destroy table bridge \"USDTABLE\" echo \"USD1: Allowed all bridge traffice again\" ;; NOTANEVENT) echo \"USD0 was called incorrectly, usage: USD0 interface event [mac_address]\" ;; esac",
"[Unit] Description=Example 802.1x traffic management for hostapd After=hostapd.service After=sys-devices-virtual-net-%i.device [Service] Type=simple ExecStartPre=bash -c '/usr/sbin/hostapd_cli ping | grep PONG' ExecStartPre=/usr/local/bin/802-1x-tr-mgmt %i block_all ExecStart=/usr/sbin/hostapd_cli -i %i -a /usr/local/bin/802-1x-tr-mgmt ExecStopPost=/usr/local/bin/802-1x-tr-mgmt %i allow_all [Install] WantedBy=multi-user.target",
"systemctl daemon-reload",
"systemctl enable --now [email protected]"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_networking/assembly_setting-up-an-802-1x-network-authentication-service-for-lan-clients-using-hostapd-with-freeradius-backend_configuring-and-managing-networking |
probe::signal.wakeup | probe::signal.wakeup Name probe::signal.wakeup - Sleeping process being wakened for signal Synopsis signal.wakeup Values pid_name Name of the process to wake resume Indicates whether to wake up a task in a STOPPED or TRACED state state_mask A string representation indicating the mask of task states to wake. Possible values are TASK_INTERRUPTIBLE, TASK_STOPPED, TASK_TRACED, and TASK_WAKEKILL. sig_pid The PID of the process to wake | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-signal-wakeup
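A minimal usage sketch for this probe (not part of the reference entry above; it only uses the variables listed there):

stap -e 'probe signal.wakeup { printf("%s (pid %d) woken, resume=%d, states=%s\n", pid_name, sig_pid, resume, state_mask) }'

Each printed line corresponds to one sleeping process being woken to handle a signal, showing the pid_name, sig_pid, resume, and state_mask values described in the entry.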
Chapter 18. Managing host groups using Ansible playbooks | Chapter 18. Managing host groups using Ansible playbooks To learn more about host groups in Identity Management (IdM) and using Ansible to perform operations involving host groups in Identity Management (IdM), see the following: Host groups in IdM Ensuring the presence of IdM host groups Ensuring the presence of hosts in IdM host groups Nesting IdM host groups Ensuring the presence of member managers in IdM host groups Ensuring the absence of hosts from IdM host groups Ensuring the absence of nested host groups from IdM host groups Ensuring the absence of member managers from IdM host groups 18.1. Host groups in IdM IdM host groups can be used to centralize control over important management tasks, particularly access control. Definition of host groups A host group is an entity that contains a set of IdM hosts with common access control rules and other characteristics. For example, you can define host groups based on company departments, physical locations, or access control requirements. A host group in IdM can include: IdM servers and clients Other IdM host groups Host groups created by default By default, the IdM server creates the host group ipaservers for all IdM server hosts. Direct and indirect group members Group attributes in IdM apply to both direct and indirect members: when host group B is a member of host group A, all members of host group B are considered indirect members of host group A. 18.2. Ensuring the presence of IdM host groups using Ansible playbooks Follow this procedure to ensure the presence of host groups in Identity Management (IdM) using Ansible playbooks. Note Without Ansible, host group entries are created in IdM using the ipa hostgroup-add command. The result of adding a host group to IdM is the state of the host group being present in IdM. Because of the Ansible reliance on idempotence, to add a host group to IdM using Ansible, you must create a playbook in which you define the state of the host group as present: state: present . Prerequisites You know the IdM administrator password. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it with the list of IdM servers to target: Create an Ansible playbook file with the necessary host group information. For example, to ensure the presence of a host group named databases , specify name: databases in the - ipahostgroup task. To simplify this step, you can copy and modify the example in the /usr/share/doc/ansible-freeipa/playbooks/user/ensure-hostgroup-is-present.yml file. In the playbook, state: present signifies a request to add the host group to IdM unless it already exists there. Run the playbook: Verification Log into ipaserver as admin: Request a Kerberos ticket for admin: Display information about the host group whose presence in IdM you wanted to ensure: The databases host group exists in IdM. 18.3. 
Ensuring the presence of hosts in IdM host groups using Ansible playbooks Follow this procedure to ensure the presence of hosts in host groups in Identity Management (IdM) using Ansible playbooks. Prerequisites You know the IdM administrator password. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. The hosts you want to reference in your Ansible playbook exist in IdM. For details, see Ensuring the presence of an IdM host entry using Ansible playbooks . The host groups you reference from the Ansible playbook file have been added to IdM. For details, see Ensuring the presence of IdM host groups using Ansible playbooks . Procedure Create an inventory file, for example inventory.file , and define ipaserver in it with the list of IdM servers to target: Create an Ansible playbook file with the necessary host information. Specify the name of the host group using the name parameter of the ipahostgroup variable. Specify the name of the host with the host parameter of the ipahostgroup variable. To simplify this step, you can copy and modify the examples in the /usr/share/doc/ansible-freeipa/playbooks/hostgroup/ensure-hosts-and-hostgroups-are-present-in-hostgroup.yml file: This playbook adds the db.idm.example.com host to the databases host group. The action: member line indicates that when the playbook is run, no attempt is made to add the databases group itself. Instead, only an attempt is made to add db.idm.example.com to databases . Run the playbook: Verification Log into ipaserver as admin: Request a Kerberos ticket for admin: Display information about a host group to see which hosts are present in it: The db.idm.example.com host is present as a member of the databases host group. 18.4. Nesting IdM host groups using Ansible playbooks Follow this procedure to ensure the presence of nested host groups in Identity Management (IdM) host groups using Ansible playbooks. Prerequisites You know the IdM administrator password. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. The host groups you reference from the Ansible playbook file exist in IdM. For details, see Ensuring the presence of IdM host groups using Ansible playbooks . Procedure Create an inventory file, for example inventory.file , and define ipaserver in it with the list of IdM servers to target: Create an Ansible playbook file with the necessary host group information. 
To ensure that a nested host group A exists in a host group B : in the Ansible playbook, specify, among the - ipahostgroup variables, the name of the host group B using the name variable. Specify the name of the nested hostgroup A with the hostgroup variable. To simplify this step, you can copy and modify the examples in the /usr/share/doc/ansible-freeipa/playbooks/hostgroup/ensure-hosts-and-hostgroups-are-present-in-hostgroup.yml file: This Ansible playbook ensures the presence of the mysql-server and oracle-server host groups in the databases host group. The action: member line indicates that when the playbook is run, no attempt is made to add the databases group itself to IdM. Run the playbook: Verification Log into ipaserver as admin: Request a Kerberos ticket for admin: Display information about the host group in which nested host groups are present: The mysql-server and oracle-server host groups exist in the databases host group. 18.5. Ensuring the presence of member managers in IdM host groups using Ansible playbooks The following procedure describes ensuring the presence of member managers in IdM hosts and host groups using an Ansible playbook. Prerequisites On the control node: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You must have the name of the user or user group you are adding as member managers and the name of the host group you want them to manage. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Create an Ansible playbook file with the necessary host and host group member management information: Run the playbook: Verification You can verify that the group_name host group contains example_member and project_admins as member managers by using the ipa hostgroup-show command: Log into ipaserver as administrator: Display information about the group_name host group: Additional resources See ipa hostgroup-add-member-manager --help . See the ipa man page on your system. 18.6. Ensuring the absence of hosts from IdM host groups using Ansible playbooks Follow this procedure to ensure the absence of hosts from host groups in Identity Management (IdM) using Ansible playbooks. Prerequisites You know the IdM administrator password. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. The hosts you want to reference in your Ansible playbook exist in IdM. For details, see Ensuring the presence of an IdM host entry using Ansible playbooks . The host groups you reference from the Ansible playbook file exist in IdM. For details, see Ensuring the presence of IdM host groups using Ansible playbooks .
Procedure Create an inventory file, for example inventory.file , and define ipaserver in it with the list of IdM servers to target: Create an Ansible playbook file with the necessary host and host group information. Specify the name of the host group using the name parameter of the ipahostgroup variable. Specify the name of the host whose absence from the host group you want to ensure using the host parameter of the ipahostgroup variable. To simplify this step, you can copy and modify the examples in the /usr/share/doc/ansible-freeipa/playbooks/hostgroup/ensure-hosts-and-hostgroups-are-absent-in-hostgroup.yml file: This playbook ensures the absence of the db.idm.example.com host from the databases host group. The action: member line indicates that when the playbook is run, no attempt is made to remove the databases group itself. Run the playbook: Verification Log into ipaserver as admin: Request a Kerberos ticket for admin: Display information about the host group and the hosts it contains: The db.idm.example.com host does not exist in the databases host group. 18.7. Ensuring the absence of nested host groups from IdM host groups using Ansible playbooks Follow this procedure to ensure the absence of nested host groups from outer host groups in Identity Management (IdM) using Ansible playbooks. Prerequisites You know the IdM administrator password. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. The host groups you reference from the Ansible playbook file exist in IdM. For details, see Ensuring the presence of IdM host groups using Ansible playbooks . Procedure Create an inventory file, for example inventory.file , and define ipaserver in it with the list of IdM servers to target: Create an Ansible playbook file with the necessary host group information. Specify, among the - ipahostgroup variables, the name of the outer host group using the name variable. Specify the name of the nested hostgroup with the hostgroup variable. To simplify this step, you can copy and modify the examples in the /usr/share/doc/ansible-freeipa/playbooks/hostgroup/ensure-hosts-and-hostgroups-are-absent-in-hostgroup.yml file: This playbook makes sure that the mysql-server and oracle-server host groups are absent from the databases host group. The action: member line indicates that when the playbook is run, no attempt is made to ensure the databases group itself is deleted from IdM. Run the playbook: Verification Log into ipaserver as admin: Request a Kerberos ticket for admin: Display information about the host group from which nested host groups should be absent: The output confirms that the mysql-server and oracle-server nested host groups are absent from the outer databases host group. 18.8. Ensuring the absence of IdM host groups using Ansible playbooks Follow this procedure to ensure the absence of host groups in Identity Management (IdM) using Ansible playbooks. Note Without Ansible, host group entries are removed from IdM using the ipa hostgroup-del command. 
The result of removing a host group from IdM is the state of the host group being absent from IdM. Because of the Ansible reliance on idempotence, to remove a host group from IdM using Ansible, you must create a playbook in which you define the state of the host group as absent: state: absent . Prerequisites You know the IdM administrator password. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it with the list of IdM servers to target: Create an Ansible playbook file with the necessary host group information. To simplify this step, you can copy and modify the example in the /usr/share/doc/ansible-freeipa/playbooks/user/ensure-hostgroup-is-absent.yml file. This playbook ensures the absence of the databases host group from IdM. The state: absent signifies a request to delete the host group from IdM unless it is already deleted. Run the playbook: Verification Log into ipaserver as admin: Request a Kerberos ticket for admin: Display information about the host group whose absence you ensured: The databases host group does not exist in IdM. 18.9. Ensuring the absence of member managers from IdM host groups using Ansible playbooks The following procedure describes ensuring the absence of member managers in IdM hosts and host groups using an Ansible playbook. Prerequisites On the control node: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You must have the name of the user or user group you are removing as member managers and the name of the host group they are managing. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Create an Ansible playbook file with the necessary host and host group member management information: Run the playbook: Verification You can verify that the group_name host group does not contain example_member or project_admins as member managers by using the ipa hostgroup-show command: Log into ipaserver as administrator: Display information about the group_name host group: Additional resources See ipa hostgroup-remove-member-manager --help . See the ipa man page on your system. | [
"[ipaserver] server.idm.example.com",
"--- - name: Playbook to handle hostgroups hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure host-group databases is present - ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: databases state: present",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-hostgroup-is-present.yml",
"ssh [email protected] Password: [admin@server /]USD",
"kinit admin Password for [email protected]:",
"ipa hostgroup-show databases Host-group: databases",
"[ipaserver] server.idm.example.com",
"--- - name: Playbook to handle hostgroups hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure host-group databases is present - ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: databases host: - db.idm.example.com action: member",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-hosts-or-hostgroups-are-present-in-hostgroup.yml",
"ssh [email protected] Password: [admin@server /]USD",
"kinit admin Password for [email protected]:",
"ipa hostgroup-show databases Host-group: databases Member hosts: db.idm.example.com",
"[ipaserver] server.idm.example.com",
"--- - name: Playbook to handle hostgroups hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure hosts and hostgroups are present in existing databases hostgroup - ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: databases hostgroup: - mysql-server - oracle-server action: member",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-hosts-or-hostgroups-are-present-in-hostgroup.yml",
"ssh [email protected] Password: [admin@server /]USD",
"kinit admin Password for [email protected]:",
"ipa hostgroup-show databases Host-group: databases Member hosts: db.idm.example.com Member host-groups: mysql-server, oracle-server",
"[ipaserver] server.idm.example.com",
"--- - name: Playbook to handle host group membership management hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure member manager user example_member is present for group_name ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: group_name membermanager_user: example_member - name: Ensure member manager group project_admins is present for group_name ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: group_name membermanager_group: project_admins",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/add-member-managers-host-groups.yml",
"ssh [email protected] Password: [admin@server /]USD",
"ipaserver]USD ipa hostgroup-show group_name Host-group: group_name Member hosts: server.idm.example.com Member host-groups: testhostgroup2 Membership managed by groups: project_admins Membership managed by users: example_member",
"[ipaserver] server.idm.example.com",
"--- - name: Playbook to handle hostgroups hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure host-group databases is absent - ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: databases host: - db.idm.example.com action: member state: absent",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-hosts-or-hostgroups-are-absent-in-hostgroup.yml",
"ssh [email protected] Password: [admin@server /]USD",
"kinit admin Password for [email protected]:",
"ipa hostgroup-show databases Host-group: databases Member host-groups: mysql-server, oracle-server",
"[ipaserver] server.idm.example.com",
"--- - name: Playbook to handle hostgroups hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure hosts and hostgroups are absent in existing databases hostgroup - ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: databases hostgroup: - mysql-server - oracle-server action: member state: absent",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-hosts-or-hostgroups-are-absent-in-hostgroup.yml",
"ssh [email protected] Password: [admin@server /]USD",
"kinit admin Password for [email protected]:",
"ipa hostgroup-show databases Host-group: databases",
"[ipaserver] server.idm.example.com",
"--- - name: Playbook to handle hostgroups hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - Ensure host-group databases is absent ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: databases state: absent",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-hostgroup-is-absent.yml",
"ssh [email protected] Password: [admin@server /]USD",
"kinit admin Password for [email protected]:",
"ipa hostgroup-show databases ipa: ERROR: databases: host group not found",
"[ipaserver] server.idm.example.com",
"--- - name: Playbook to handle host group membership management hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure member manager host and host group members are absent for group_name ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: group_name membermanager_user: example_member membermanager_group: project_admins action: member state: absent",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-member-managers-host-groups-are-absent.yml",
"ssh [email protected] Password: [admin@server /]USD",
"ipaserver]USD ipa hostgroup-show group_name Host-group: group_name Member hosts: server.idm.example.com Member host-groups: testhostgroup2"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/using_ansible_to_install_and_manage_identity_management/managing-host-groups-using-Ansible-playbooks_using-ansible-to-install-and-manage-idm |
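For readability, here is the section 18.2 workflow from the commands above reassembled into its original multi-line form: the inventory file, the playbook, the run command, and the verification step. Everything in this sketch is taken from the chapter's own examples (inventory.file, ensure-hostgroup-is-present.yml, password_file, and the databases host group); only the shell prompts and the file-name comments are added for orientation.

# inventory.file
[ipaserver]
server.idm.example.com

# ensure-hostgroup-is-present.yml
---
- name: Playbook to handle hostgroups
  hosts: ipaserver
  vars_files:
  - /home/user_name/MyPlaybooks/secret.yml
  tasks:
  # Ensure host-group databases is present
  - ipahostgroup:
      ipaadmin_password: "{{ ipaadmin_password }}"
      name: databases
      state: present

# Run the playbook, then verify the result on the IdM server
$ ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-hostgroup-is-present.yml
$ kinit admin
$ ipa hostgroup-show databases
  Host-group: databases

The other playbooks in this chapter follow the same layout; only the task parameters (host, hostgroup, membermanager_user, membermanager_group, action, and state) change between sections.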
Chapter 5. Installing the multi-model serving platform | Chapter 5. Installing the multi-model serving platform For deploying small and medium-sized models, OpenShift AI includes a multi-model serving platform that is based on the ModelMesh component. On the multi-model serving platform, multiple models can be deployed from the same model server and share the server resources. To install the multi-model serving platform (provided by the ModelMesh component), follow the steps described in Installing Red Hat OpenShift AI components by using the web console.
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.