title | content | commands | url
---|---|---|---|
Chapter 5. Configuration options
|
Chapter 5. Configuration options This chapter lists the available configuration options for AMQ OpenWire JMS. JMS configuration options are set as query parameters on the connection URI. For more information, see Section 4.3, "Connection URIs" . 5.1. JMS options jms.username The user name the client uses to authenticate the connection. jms.password The password the client uses to authenticate the connection. jms.clientID The client ID that the client applies to the connection. jms.closeTimeout The timeout in milliseconds for JMS close operations. The default is 15000 (15 seconds). jms.connectResponseTimeout The timeout in milliseconds for JMS connect operations. The default is 0, meaning no timeout. jms.sendTimeout The timeout in milliseconds for JMS send operations. The default is 0, meaning no timeout. jms.checkForDuplicates If enabled, ignore duplicate messages. It is enabled by default. jms.disableTimeStampsByDefault If enabled, do not timestamp messages. It is disabled by default. jms.useAsyncSend If enabled, send messages without waiting for acknowledgment. It is disabled by default. jms.alwaysSyncSend If enabled, send waits for acknowledgment in all delivery modes. It is disabled by default. jms.useCompression If enabled, compress message bodies. It is disabled by default. jms.useRetroactiveConsumer If enabled, non-durable subscribers can receive messages that were published before the subscription started. It is disabled by default. Prefetch policy options Prefetch policy determines how many messages each MessageConsumer fetches from the remote peer and holds in a local "prefetch" buffer. jms.prefetchPolicy.queuePrefetch The number of messages to prefetch for queues. The default is 1000. jms.prefetchPolicy.queueBrowserPrefetch The number of messages to prefetch for queue browsers. The default is 500. jms.prefetchPolicy.topicPrefetch The number of messages to prefetch for non-durable topics. The default is 32766. jms.prefetchPolicy.durableTopicPrefetch The number of messages to prefetch for durable topics. The default is 100. jms.prefetchPolicy.all This can be used to set all prefetch values at once. The value of prefetch can affect the distribution of messages to multiple consumers on a queue. A higher value can result in larger batches sent at once to each consumer. To achieve more even round-robin distribution when consumers operate at different rates, use a lower value. Redelivery policy options Redelivery policy controls how redelivered messages are handled on the client. jms.redeliveryPolicy.maximumRedeliveries The number of times redelivery is attempted before the message is sent to the dead letter queue. The default is 6. -1 means no limit. jms.redeliveryPolicy.redeliveryDelay The time in milliseconds between redelivery attempts. This is used if initialRedeliveryDelay is 0. The default is 1000 (1 second). jms.redeliveryPolicy.initialRedeliveryDelay The time in milliseconds before the first redelivery attempt. The default is 1000 (1 second). jms.redeliveryPolicy.maximumRedeliveryDelay The maximum time in milliseconds between redelivery attempts. This is used if useExponentialBackOff is enabled. The default is 1000 (1 second). -1 means no limit. jms.redeliveryPolicy.useExponentialBackOff If enabled, increase redelivery delay with each subsequent attempt. It is disabled by default. jms.redeliveryPolicy.backOffMultiplier The multiplier for increasing the redelivery delay. The default is 5. 
jms.redeliveryPolicy.useCollisionAvoidance If enabled, adjust the redelivery delay slightly up or down to avoid collisions. It is disabled by default. jms.redeliveryPolicy.collisionAvoidanceFactor The multiplier for adjusting the redelivery delay. The default is 0.15. nonBlockingRedelivery If enabled, allow out-of-order redelivery to avoid head-of-line blocking. It is disabled by default. 5.2. TCP options closeAsync If enabled, close the socket in a separate thread. It is enabled by default. connectionTimeout The timeout in milliseconds for TCP connect operations. The default is 30000 (30 seconds). 0 means no timeout. dynamicManagement If enabled, allow JMX management of the transport logger. It is disabled by default. ioBufferSize The I/O buffer size in bytes. The default is 8192 (8 KiB). jmxPort The port for JMX management. The default is 1099. keepAlive If enabled, use TCP keepalive. This is distinct from the keepalive mechanism based on KeepAliveInfo messages. It is disabled by default. logWriterName The name of the org.apache.activemq.transport.LogWriter implementation. Name-to-class mappings are stored in the resources/META-INF/services/org/apache/activemq/transport/logwriters directory. The default is default . soLinger The socket linger option. The default is 0. soTimeout The timeout in milliseconds for socket read operations. The default is 0, meaning no timeout. soWriteTimeout The timeout in milliseconds for socket write operations. The default is 0, meaning no timeout. startLogging If enabled, and the trace option is also enabled, log transport startup events. It is enabled by default. tcpNoDelay If enabled, do not delay and buffer TCP sends. It is disabled by default. threadName If set, the name assigned to the transport thread. The remote address is appended to the name. It is unset by default. trace If enabled, log transport events to log4j.logger.org.apache.activemq.transport.TransportLogger . It is disabled by default. useInactivityMonitor If enabled, time out connections that fail to send KeepAliveInfo messages. It is enabled by default. useKeepAlive If enabled, periodically send KeepAliveInfo messages to prevent the connection from timing out. It is enabled by default. useLocalHost If enabled, make local connections using the name localhost instead of the current hostname. It is disabled by default. 5.3. SSL/TLS options socket.keyStore The path to the SSL/TLS key store. A key store is required for mutual SSL/TLS authentication. If unset, the value of the javax.net.ssl.keyStore system property is used. socket.keyStorePassword The password for the SSL/TLS key store. If unset, the value of the javax.net.ssl.keyStorePassword system property is used. socket.keyStoreType The string name of the key store type. The default is the value of java.security.KeyStore.getDefaultType() . socket.trustStore The path to the SSL/TLS trust store. If unset, the value of the javax.net.ssl.trustStore system property is used. socket.trustStorePassword The password for the SSL/TLS trust store. If unset, the value of the javax.net.ssl.trustStorePassword system property is used. socket.trustStoreType The string name of the trust store type. The default is the value of java.security.KeyStore.getDefaultType() . socket.enabledCipherSuites A comma-separated list of cipher suites to enable. If unset, the JVM default ciphers are used. socket.enabledProtocols A comma-separated list of SSL/TLS protocols to enable. If unset, the JVM default protocols are used. 5.4. 
OpenWire options wireFormat.cacheEnabled If enabled, avoid excessive marshalling and bandwidth consumption by caching frequently used values. It is enabled by default. wireFormat.cacheSize The number of cache entries. The cache is per connection. The default is 1024. wireFormat.maxInactivityDuration The maximum time in milliseconds before a connection with no activity is considered dead. The default is 30000 (30 seconds). wireFormat.maxInactivityDurationInitalDelay The initial delay in milliseconds before inactivity checking begins. Note that Inital is misspelled. The default is 10000 (10 seconds). wireFormat.maxFrameSize The maximum frame size in bytes. The default is the value of java.lang.Long.MAX_VALUE . wireFormat.sizePrefixDisabled If set true, do not prefix packets with their size. It is false by default. wireFormat.stackTraceEnabled If enabled, send stack traces from exceptions on the server to the client. It is enabled by default. wireFormat.tcpNoDelayEnabled If enabled, tell the server to activate TCP_NODELAY . It is enabled by default. wireFormat.tightEncodingEnabled If enabled, optimize for smaller encoding on the wire. This increases CPU usage. It is enabled by default. 5.5. Failover options maxReconnectAttempts The number of reconnect attempts allowed before reporting the connection as failed. The default is -1, meaning no limit. 0 disables reconnect. maxReconnectDelay The maximum time in milliseconds between the second and subsequent reconnect attempts. The default is 30000 (30 seconds). randomize If enabled, randomly select one of the failover endpoints. It is enabled by default. reconnectDelayExponent The multiplier for increasing the reconnect delay backoff. The default is 2.0. useExponentialBackOff If enabled, increase the reconnect delay with each subsequent attempt. It is enabled by default. timeout The timeout in milliseconds for send operations waiting for reconnect. The default is -1, meaning no timeout.
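For illustration, the following sketch shows a connection URI that combines several of the options described in this chapter; the broker hosts, credentials, and values are placeholders. Transport and OpenWire options (such as wireFormat.maxInactivityDuration) are typically set on the nested tcp: URIs, while JMS and failover options are set on the outer failover: URI.
failover:(tcp://broker1.example.net:61616?wireFormat.maxInactivityDuration=30000,tcp://broker2.example.net:61616)?jms.username=alice&jms.password=secret&jms.prefetchPolicy.queuePrefetch=500&maxReconnectAttempts=10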
| null |
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_the_amq_openwire_jms_client/configuration_options
|
Providing feedback on Red Hat documentation
|
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_firewalls_and_packet_filters/proc_providing-feedback-on-red-hat-documentation_firewall-packet-filters
|
8.2. Interface Configuration Files
|
8.2. Interface Configuration Files Interface configuration files control the software interfaces for individual network devices. As the system boots, it uses these files to determine what interfaces to bring up and how to configure them. These files are usually named ifcfg- <name> , where <name> refers to the name of the device that the configuration file controls. 8.2.1. Ethernet Interfaces One of the most common interface files is ifcfg-eth0 , which controls the first Ethernet network interface card or NIC in the system. In a system with multiple NICs, there are multiple ifcfg-eth <X> files (where <X> is a unique number corresponding to a specific interface). Because each device has its own configuration file, an administrator can control how each interface functions individually. The following is a sample ifcfg-eth0 file for a system using a fixed IP address: The values required in an interface configuration file can change based on other values. For example, the ifcfg-eth0 file for an interface using DHCP looks quite a bit different because IP information is provided by the DHCP server: The Network Administration Tool ( system-config-network ) is an easy way to make changes to the various network interface configuration files (refer to the chapter titled Network Configuration in the System Administrators Guide for detailed instructions on using this tool). However, it is also possible to edit the configuration files for a given network interface manually. Below is a listing of the configurable parameters in an Ethernet interface configuration file: BOOTPROTO= <protocol> , where <protocol> is one of the following: none - No boot-time protocol should be used. bootp - The BOOTP protocol should be used. dhcp - The DHCP protocol should be used. BROADCAST= <address> , where <address> is the broadcast address. This directive is deprecated, as the value is calculated automatically with ipcalc . DEVICE= <name> , where <name> is the name of the physical device (except for dynamically-allocated PPP devices where it is the logical name ). DHCP_HOSTNAME - Only use this option if the DHCP server requires the client to specify a hostname before receiving an IP address. (The DHCP server daemon in Red Hat Enterprise Linux does not support this feature.) DNS {1,2} = <address> , where <address> is a name server address to be placed in /etc/resolv.conf if the PEERDNS directive is set to yes . ETHTOOL_OPTS= <options> , where <options> are any device-specific options supported by ethtool . For example, if you wanted to force 100Mb, full duplex: Note that changing speed or duplex settings almost always requires disabling autonegotiation with the autoneg off option. This needs to be stated first, as the option entries are order dependent. GATEWAY= <address> , where <address> is the IP address of the network router or gateway device (if any). HWADDR= <MAC-address> , where <MAC-address> is the hardware address of the Ethernet device in the form AA:BB:CC:DD:EE:FF . This directive is useful for machines with multiple NICs to ensure that the interfaces are assigned the correct device names regardless of the configured load order for each NIC's module. This directive should not be used in conjunction with MACADDR . IPADDR= <address> , where <address> is the IP address. MACADDR= <MAC-address> , where <MAC-address> is the hardware address of the Ethernet device in the form AA:BB:CC:DD:EE:FF . This directive is used to assign a MAC address to an interface, overriding the one assigned to the physical NIC. 
This directive should not be used in conjunction with HWADDR . MASTER= <bond-interface> , where <bond-interface> is the channel bonding interface to which the Ethernet interface is linked. This directive is used in conjunction with the SLAVE directive. Refer to Section 8.2.3, "Channel Bonding Interfaces" for more about channel bonding interfaces. NETMASK= <mask> , where <mask> is the netmask value. NETWORK= <address> , where <address> is the network address. This directive is deprecated, as the value is calculated automatically with ipcalc . ONBOOT= <answer> , where <answer> is one of the following: yes - This device should be activated at boot-time. no - This device should not be activated at boot-time. PEERDNS= <answer> , where <answer> is one of the following: yes - Modify /etc/resolv.conf if the DNS directive is set. If using DHCP, then yes is the default. no - Do not modify /etc/resolv.conf . SLAVE= <answer> , where <answer> is one of the following: yes - This device is controlled by the channel bonding interface specified in the MASTER directive. no - This device is not controlled by the channel bonding interface specified in the MASTER directive. This directive is used in conjunction with the MASTER directive. Refer to Section 8.2.3, "Channel Bonding Interfaces" for more about channel bonding interfaces. SRCADDR= <address> , where <address> is the specified source IP address for outgoing packets. USERCTL= <answer> , where <answer> is one of the following: yes - Non-root users are allowed to control this device. no - Non-root users are not allowed to control this device.
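As an illustrative sketch only (device name, addresses, and MAC address are placeholders), a static configuration that combines several of the directives above might look like the following:
DEVICE=eth1
HWADDR=AA:BB:CC:DD:EE:FF
BOOTPROTO=none
ONBOOT=yes
IPADDR=10.0.2.15
NETMASK=255.255.255.0
GATEWAY=10.0.2.1
DNS1=10.0.2.2
PEERDNS=yes
USERCTL=no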
|
[
"DEVICE=eth0 BOOTPROTO=none ONBOOT=yes NETWORK=10.0.1.0 NETMASK=255.255.255.0 IPADDR=10.0.1.27 USERCTL=no",
"DEVICE=eth0 BOOTPROTO=dhcp ONBOOT=yes",
"ETHTOOL_OPTS=\"autoneg off speed 100 duplex full\""
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s1-networkscripts-interfaces
|
Chapter 24. Authentication and Interoperability
|
Chapter 24. Authentication and Interoperability The IdM LDAP server no longer becomes unresponsive when resolving an AD user takes a long time When the System Security Services Daemon (SSSD) took a long time to resolve a user from a trusted Active Directory (AD) domain on the Identity Management (IdM) server, the IdM LDAP server sometimes exhausted its own worker threads. Consequently, the IdM LDAP server was unable to respond to further requests from SSSD clients or other LDAP clients. This update adds a new API to SSSD on the IdM server, which enables identity requests to time out. Also, the IdM LDAP extended identity operations plug-in and the Schema Compatibility plug-in now support this API to enable canceling requests that take too long. As a result, the IdM LDAP server can recover from the described situation and keep responding to further requests. (BZ# 1415162 , BZ# 1473571 , BZ# 1473577 ) Application configuration snippets in /etc/krb5.conf.d/ are now automatically read in existing configurations Previously, Kerberos did not automatically add support for the /etc/krb5.conf.d/ directory to existing configurations. Consequently, application configuration snippets in /etc/krb5.conf.d/ were not read unless the user added the include statement for the directory manually. This update modifies existing configurations to include the appropriate includedir line pointing to /etc/krb5.conf.d/ . As a result, applications can rely on their configuration snippets in /etc/krb5.conf.d . Note that if you manually remove the includedir line after this update, successive updates will not add it again. (BZ# 1431198 ) pam_mkhomedir can now create home directories under / Previously, the pam_mkhomedir module was unable to create subdirectories under the / directory. Consequently, when a user with a home directory in a non-existent directory under / attempted to log in, the attempt failed with this error: This update fixes the described problem, and pam_mkhomedir is now able to create home directories in this situation. Note that even after applying this update, SELinux might still prevent pam_mkhomedir from creating the home directory, which is the expected SELinux behavior. To ensure pam_mkhomedir is allowed to create the home directory, modify the SELinux policy using a custom SELinux module, which enables the required paths to be created with the correct SELinux context. (BZ#1509338) Kerberos operations depending on KVNO in the keytab file no longer fail when a RODC is used The adcli utility did not handle the key version number (KVNO) properly when updating Kerberos keys on a read-only domain controller (RODC). Consequently, some operations, such as validating a Kerberos ticket, failed because no key with a matching KVNO was found in the keytab file. With this update, adcli detects if a RODC is used and handles the KVNO accordingly. As a result, the keytab file contains the right KVNO, and all Kerberos operations depending on this behavior work as expected. (BZ# 1471021 ) krb5 properly displays errors about PKINIT misconfiguration in single-realm KDC environments Previously, when Public Key Cryptography for Initial Authentication in Kerberos (PKINIT) was misconfigured, the krb5 package did not report the incorrect configuration to the administrator. For example, this problem occurred when the deprecated pkinit_kdc_ocsp option was specified in the /etc/krb5.conf file. 
With this update, krb5 exposes PKINIT initialization failures when only one realm is specified in the Kerberos key distribution center (KDC). As a result, single-realm KDCs report PKINIT misconfiguration properly. (BZ#1460089) Certificate System no longer incorrectly logs ROLE_ASSUME audit events Previously, Certificate System incorrectly generated the ROLE_ASSUME audit event for certain operations even if no privileged access occurred for a user. Consequently, the event was incorrectly logged. The problem has been fixed and ROLE_ASSUME events are no longer logged in the mentioned scenario. (BZ# 1461524 ) Updated attributes in CERT_STATUS_CHANGE_REQUEST_PROCESSED audit log event Previously, the CERT_STATUS_CHANGE_REQUEST_PROCESSED audit event in log files contained the following attributes: ReqID - The requester ID SubjectID - The subject ID of the certificate For consistency with other audit events, the attributes have been modified and now contain the following information: ReqID - The request ID SubjectID - The requester ID (BZ# 1461217 ) Signed audit log verification now works correctly Previously, due to improper logging system initialization and incorrect signature calculation by the verification tool, signed audit log verification could fail on the first log entry and after log rotation. With this update, the logging system and the verification tool have been fixed. As a result, signed audit log verification now works correctly in the mentioned scenarios. (BZ# 1404794 ) Certificate System now validates the banner file A version of Certificate System introduced a configurable access banner - a custom message to be displayed in the PKI console at the start of every secure session. The contents of this banner were not validated, which could cause a JAXBUnmarshalException error if the message contained invalid UTF-8 characters. With this update, the contents of the banner file are validated both on server startup and on client requests. If the file is found to contain invalid UTF-8 characters on server startup, the server will not start. If invalid characters are found when a client requests the banner, the server will return an error message and not send the banner to the client. (BZ#1446579) The TPS subsystem no longer fails when performing a symmetric key changeover on an HSM Previously, attempting to perform a symmetric key changeover with the master key on a Hardware Security Module (HSM) token failed with an error reported by the Certificate System Token Processing System (TPS) subsystem. This update fixes the way the master key on an HSM is used to calculate the new key set, allowing the TPS to successfully upgrade a token key set when the master resides on an HSM. The fix is currently verified with the G&D SmartCafe 6.0 HSM. (BZ#1465142) Certificate System CAs no longer display an error when handling subject DNs without a CN component Previously, an incoming request missing the Common Name (CN) component caused a NullPointerException on the Certificate Authority (CA) because the implementation expected the CN to be present in the subject Distinguished Name (DN) of the Certificate Management over CMS (CMC). This update allows the CA to handle subject DNs without a CN component, preventing the exception from being thrown. (BZ# 1474658 ) The pki-server-upgrade utility no longer fails if target files are missing A bug in the pki-server-upgrade utility caused it to attempt to locate a non-existent file. 
As a consequence, the upgrade process failed to complete, and could possibly leave the PKI deployment in an invalid state. With this update, pki-server-upgrade has been modified to correctly handle cases where target files are missing, and PKI upgrades now work correctly. (BZ# 1479663 ) The Certificate System CA key replication now works correctly An update to one of the key unwrapping functions introduced a requirement for a key usage parameter which was not being supplied at the call site, which caused lightweight Certificate Authority (CA) key replication to fail. This bug has been fixed by modifying the call site so that it supplies the key usage parameter, and lightweight CA key replication now works as expected. (BZ# 1484359 ) Certificate System no longer fails to import PKCS #12 files An earlier change to PKCS #12 password encoding in the Network Security Services (NSS) caused Certificate System to fail to import PKCS #12 files. As a consequence, the Certificate Authority (CA) clone installation could not be completed. With this update, PKI will retry a failed PKCS #12 decryption with a different password encoding, which allows it to import PKCS #12 files produced by both old and new versions of NSS, and CA clone installation succeeds. (BZ# 1486225 ) The TPS user interface now displays the token type and origin fields Previously, the tps-cert-find and tps-cert-show Token Processing System (TPS) user interface utilities did not display the token type and origin fields which were present in the legacy TPS interface. The interface has been updated and now displays the missing information. (BZ#1491052) Certificate System issued certificates with an expiration date later than the expiration date of the CA certificate Previously, when signing a certificate for an external Certificate Authority (CA), Certificate System used the ValidityConstraint plug-in. Consequently, it was possible to issue certificates with a later expiry date than the expiry date of the issuing CA. This update adds the CAValidityConstraint plug-in to the registry so that it becomes available for the enrollment profiles. In addition, the ValidityConstraint plug-in in the caCMCcaCert profile has been replaced with the CAValidityConstraint plug-in which effectively sets the restrictions. As a result, issuing certificates with an expiry date later than the issuing CA is no longer allowed. (BZ# 1518096 ) CA certificates without SKI extension no longer cause issuance failures An update of Certificate System incorrectly removed a fallback procedure, which generated the Issuer Key Identifier. Consequently, the Certificate Authority (CA) failed to issue certificates if the CA signing certificate did not have the Subject Key Identifier (SKI) extension set. With this update, the missing procedure has been added again. As a result, issuing certificates no longer fails if the CA signing certificate does not contain the SKI extension. (BZ# 1499054 ) Certificate System correctly logs the user name in CMC request audit events Previously, when Certificate System received a Certificate Management over CMS (CMC) request, the server logged an audit event with the SubjectID field set to $NonRoleUser$ . As a result, administrators could not verify who issued the request. This update fixes the problem, and Certificate System now correctly logs the user name in the mentioned scenario. 
(BZ# 1506819 ) The Directory Server trivial word check password policy now works as expected Previously, when you set a userPassword attribute to exactly the same value as an attribute restricted by the passwordTokenMin setting with the same length, Directory Server incorrectly allowed the password update operation. With this update, the trivial word check password policy feature now correctly verifies the entire user attribute value as a whole, and the described problem no longer occurs. (BZ# 1517788 ) The pkidestroy utility now fully removes instances that are started by the pki-tomcatd-nuxwdog service Previously, the pkidestroy utility did not remove Certificate System instances that used the pki-tomcatd-nuxwdog service as a starting mechanism. As a consequence, administrators had to migrate pki-tomcatd-nuxwdog to the service without watchdog before using pkidestroy to fully remove an instance. The utility has been updated, and instances are correctly removed in the mentioned scenario. Note that if you manually removed the password file before running pkidestroy , the utility will ask for the password to update the security domain. (BZ# 1498957 ) The Certificate System deployment archive file no longer contains passwords in plain text Previously, when you created a new Certificate System instance by passing a configuration file with a password in the [DEFAULT] section to the pkispawn utility, the password was visible in the archived deployment file. Although this file has world readable permissions, it is contained within a directory that is only accessible by the Certificate Server instance user, which is pkiuser , by default. With this update, permissions on this file have been restricted to the Certificate Server instance user, and pkispawn now masks the password in the archived deployment file. To restrict access to the password on an existing installation, manually remove the password from the /etc/sysconfig/pki/tomcat/<instance_name>/<subsystem>/deployment.cfg file, and set the file's permissions to 600 . (BZ# 1532759 ) ACIs with the targetfilter keyword work correctly Previously, if an Access Control Instruction (ACI) in Directory Server used the targetfilter keyword, searches containing the geteffective rights control returned before the code was executed for template entries. Consequently, the GetEffectiveRights() function could not determine the permissions when creating entries and returned false-negative results when verifying an ACI. With this update, Directory Server creates a template entry based on the provided geteffective attribute and verifies access to this template entry. As a result, ACIs in the mentioned scenario work correctly. (BZ# 1459946 ) Directory Server searches with a scope set to one have been fixed Due to a bug in Directory Server, searches with a scope set to one returned all child entries instead of only the ones that matched the filter. This update fixes the problem. As a result, searches with scope one only return entries that match the filter. (BZ# 1511462 ) Clear error message when sending TLS data to a non-LDAPS port Previously, Directory Server decoded TLS protocol handshakes sent to a port that was configured to use plain text as an LDAPMessage data type. However, decoding failed and the server reported the misleading BER was 3 bytes, but actually was <greater> error. 
With this update, Directory Server detects if TLS data is sent to a port configured for plain text and returns the following error message to the client: As a result, the new error message indicates that an incorrect client configuration causes the problem. (BZ# 1445188 ) Directory Server no longer logs an error if not running the cleanallruv task After removing a replica server from an existing replication topology without running the cleanallruv task, Directory Server previously logged an error about not being able to replace referral entries. This update adds a check for duplicate referrals and removes them. As a result, the error is no longer logged. (BZ# 1434335 ) Using a large number of CoS templates no longer slows down the virtual attribute processing time Due to a bug, using a large number of Class of Service (CoS) templates in Directory Server increased the virtual attribute processing time. This update improves the structure of the CoS storage. As a result, using a large number of CoS templates no longer increases the virtual attribute processing time. (BZ# 1523183 ) Directory Server now handles binds during an online initialization correctly During an online initialization from one Directory Server master to another, the master receiving the changes is temporarily set into a referral mode. While in this mode, the server only returns referrals. Previously, Directory Server incorrectly generated these bind referrals. As a consequence, the server could terminate unexpectedly in the mentioned scenario. With this update, the server correctly generates bind referrals. As a result, the server now correctly handles binds during an online initialization. (BZ# 1483681 ) The dirsrv@.service meta target is now linked to multi-user.target Previously, the dirsrv@.service meta target had the Wants parameter set to dirsrv.target in its systemd file. When you enabled dirsrv@.service , this correctly enabled the service to the dirsrv.target , but dirsrv.target was not enabled. Consequently, Directory Server did not start when the system booted. With this update, the dirsrv@.service meta target is now linked to multi-user.target . As a result, when you enable dirsrv@.service , Directory Server starts automatically when the system boots. (BZ# 1476207 ) The memberOf plug-in now logs all update attempts of the memberOf attribute In certain situations, Directory Server fails to update the memberOf attribute of a user entry. In this case, the memberOf plug-in logs an error message and forces the update. In the previous Directory Server version, the second try was not logged if it was successful. Consequently, the log entries were misleading, because only the failed attempt was logged. With this update, the memberOf plug-in also logs the successful update if the first try failed. As a result, the plug-in now logs the initial failure, and the subsequent successful retry as well. (BZ# 1533571 ) The Directory Server password policies now work correctly Previously, subtree and user password policies did not use the same default values as the global password policy. As a consequence, Directory Server incorrectly skipped certain syntax checks. This bug has been fixed. As a result, the password policy features work the same for the global configuration and the subtree and user policies. 
(BZ# 1465600 ) A buffer overflow has been fixed in Directory Server Previously, if you configured an attribute to be indexed and imported an entry that contained a large binary value into this attribute, the server terminated unexpectedly due to a buffer overflow. The buffer has been fixed. As a result, the server works as expected in the mentioned scenario. (BZ# 1498980 ) Directory Server now sends the password expired control during grace logins Previously, Directory Server did not send the expired password control when an expired password had grace logins left. Consequently, clients could not tell the user that the password was expired or how many grace logins were left. The problem has been fixed. As a result, clients can now tell the user if a password is expired and how many grace logins remain. (BZ# 1464505 ) An unnecessary global lock has been removed from Directory Server Previously, when the memberOf plug-in was enabled and users and groups were stored in separate back ends, a deadlock could occur. An unnecessary global lock has been removed and, as a result, the deadlock no longer occurs in the mentioned scenario. (BZ# 1501058 ) Replication now works correctly with TLS client authentication and FIPS mode enabled Previously, if you used TLS client authentication in a Directory Server replication environment with Federal Information Processing Standard (FIPS) mode enabled, the internal Network Security Services (NSS) database token differed from a token on a system with FIPS mode disabled. As a consequence, replication failed. The problem has been fixed, and as a result, replication agreements with TLS client authentication now work correctly if FIPS mode is enabled. (BZ# 1464463 ) Directory Server now correctly sets whether virtual attributes are operational The pwdpolicysubentry subtree password policy attribute in Directory Server is flagged as operational. However, in the previous version of Directory Server, this flag was incorrectly applied to the virtual attributes that were processed subsequently. As a consequence, the search results were not visible to the client. With this update, the server now resets the attribute before processing the virtual attribute and Class of Service (CoS). As a result, the expected virtual attributes and CoS are now returned to the client. (BZ# 1453155 ) Backup now succeeds if replication was enabled and a changelog file existed Previously, if replication was enabled and a changelog file existed, performing a backup on this master server failed. This update sets the internal options for correctly copying a file. As a result, creating a backup now succeeds in the mentioned scenario. (BZ# 1476322 ) Certificate System updates the revocation reason correctly Previously, if a user temporarily lost a smart card token, the administrator of a Certificate System Token Processing System (TPS) in some cases changed the status of the certificate from on hold to permanently lost or damaged . However, the new revocation reason did not get reflected on the CA. With this update, it is possible to change a certificate status from on hold directly to revoked . As a result, the revocation reason is updated correctly. (BZ#1500474) A race condition has been fixed in the Certificate System clone installation process In certain situations, a race condition arose between the LDAP replication of security domain session objects and the execution of an authenticated operation against a clone other than the clone where the login occurred. 
As a consequence, cloning a Certificate System installation failed. With this update, the clone installation process now waits for the security domain login to finish before it enables the security domain session objects to be replicated to other clones. As a result, the clone installation no longer fails. (BZ#1402280) Certificate System now uses strong ciphers by default With this update, the list of enabled ciphers has been changed. By default, only strong ciphers, which are compliant with the Federal Information Processing Standard (FIPS), are enabled in Certificate System. RSA ciphers enabled by default: TLS_DHE_RSA_WITH_AES_128_CBC_SHA TLS_DHE_RSA_WITH_AES_256_CBC_SHA TLS_DHE_RSA_WITH_AES_128_CBC_SHA256 TLS_DHE_RSA_WITH_AES_256_CBC_SHA256 TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 TLS_RSA_WITH_AES_128_CBC_SHA256 TLS_RSA_WITH_AES_256_CBC_SHA256 TLS_RSA_WITH_AES_128_CBC_SHA TLS_RSA_WITH_AES_256_CBC_SHA Note that the TLS_RSA_WITH_AES_128_CBC_SHA and TLS_RSA_WITH_AES_256_CBC_SHA ciphers need to be enabled to enable the pkispawn utility to connect to the LDAP server during the installation and configuration. ECC ciphers enabled by default: TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA TLS_RSA_WITH_AES_256_CBC_SHA TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA TLS_RSA_WITH_AES_256_CBC_SHA256 TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256 TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 In addition, the default ranges of the sslVersionRangeStream and sslVersionRangeDatagram parameters in the /var/lib/pki/<instance_name>/conf/server.xml file now use only TLS 1.1 and TLS 1.2 ciphers. (BZ# 1539125 ) The pkispawn utility no longer displays incorrect errors Previously, during a successful installation of Certificate System, the pkispawn utility incorrectly displayed errors related to deleting temporary certificates. The problem has been fixed, and the error messages no longer display if the installation succeeds. (BZ# 1520277 ) The Certificate System profile configuration update method now correctly handles backslashes Previously, a parser in Certificate System removed backslash characters from the configuration when a user updated a profile. As a consequence, affected profile configurations could not be correctly imported, and issuing certificates failed or the system issued incorrect certificates. Certificate System now uses a parser that handles backslashes correctly. As a result, profile configuration updates import the configuration correctly. (BZ# 1541853 )
|
[
"Unable to create and initialize directory '/<directory_path>'.",
"Incoming BER Element may be misformed. This may indicate an attempt to use TLS on a plaintext port, IE ldaps://localhost:389. Check your client LDAP_URI settings."
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.5_release_notes/bug_fixes_authentication_and_interoperability
|
Installing GitOps
|
Installing GitOps Red Hat OpenShift GitOps 1.11 Installing the OpenShift GitOps Operator and logging in to the Argo CD instance Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_gitops/1.11/html/installing_gitops/index
|
25.3. XML Representation of a Virtual Machine Creation Event
|
25.3. XML Representation of a Virtual Machine Creation Event In addition to user , an event representation also contains a set of XML element relationships to resources relevant to the event. Example 25.2. An XML representation of a virtual machine creation event This example representation provides XML element relationships to a virtual machine resource and a storage domain resource.
|
[
"<event id=\"635\" href=\"/ovirt-engine/api/events/635\"> <description>VM bar was created by rhevadmin.</description> <code>34</code> <severity>normal</severity> <time>2011-07-11T16:32:03.172+02:00</time> <user id=\"4621b611-43eb-4d2b-ae5f-1180850268c4\" href=\"/ovirt-engine/api/users/4621b611-43eb-4d2b-ae5f-1180850268c4\"/> <vm id=\"9b22d423-e16b-4dd8-9c06-c8e9358fbc66\" href=\"/ovirt-engine/api/vms/9b22d423-e16b-4dd8-9c06-c8e9358fbc66\"/> <storage_domain id=\"a8a0e93d-c570-45ab-9cd6-3c68ab31221f\" href=\"/ovirt-engine/api/storagedomains/a8a0e93d-c570-45ab-9cd6-3c68ab31221f\"/> </event>"
] |
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/version_3_rest_api_guide/xml_representation_of_a_virtual_machine_creation_event
|
function::sock_state_num2str
|
function::sock_state_num2str Name function::sock_state_num2str - Given a socket state number, return a string representation Synopsis Arguments state The state number
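For illustration, a minimal usage sketch follows; the probe point and its $state argument are assumptions that depend on your kernel version and on kernel debuginfo being installed:
# Print the process name and the new TCP socket state whenever the kernel changes it.
probe kernel.function("tcp_set_state") {
    printf("%s: %s\n", execname(), sock_state_num2str($state))
}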
|
[
"sock_state_num2str:string(state:long)"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-sock-state-num2str
|
Chapter 17. Workload partitioning in single-node OpenShift
|
Chapter 17. Workload partitioning in single-node OpenShift In resource-constrained environments, such as single-node OpenShift deployments, use workload partitioning to isolate OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. The minimum number of reserved CPUs required for the cluster management in single-node OpenShift is four CPU Hyper-Threads (HTs). With workload partitioning, you annotate the set of cluster management pods and a set of typical add-on Operators for inclusion in the cluster management workload partition. These pods operate normally within the minimum size CPU configuration. Additional Operators or workloads outside of the set of minimum cluster management pods require additional CPUs to be added to the workload partition. Workload partitioning isolates user workloads from platform workloads using standard Kubernetes scheduling capabilities. The following is an overview of the configurations required for workload partitioning: Workload partitioning that uses /etc/crio/crio.conf.d/01-workload-partitioning pins the OpenShift Container Platform infrastructure pods to a defined cpuset configuration. The performance profile pins cluster services such as systemd and kubelet to the CPUs that are defined in the spec.cpu.reserved field. Note Using the Node Tuning Operator, you can configure the performance profile to also pin system-level apps for a complete workload partitioning configuration on the node. The CPUs that you specify in the performance profile spec.cpu.reserved field and the workload partitioning cpuset field must match. Workload partitioning introduces an extended <workload-type>.workload.openshift.io/cores resource for each defined CPU pool, or workload type . Kubelet advertises the resources and CPU requests by pods allocated to the pool within the corresponding resource. When workload partitioning is enabled, the <workload-type>.workload.openshift.io/cores resource allows access to the CPU capacity of the host, not just the default CPU pool. Additional resources For the recommended workload partitioning configuration for single-node OpenShift clusters, see Workload partitioning .
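For illustration, the correspondence between the two configurations might look like the following sketch; the CPU ranges and object names are assumptions and must be derived from your own hardware topology and the documented procedure:
# /etc/crio/crio.conf.d/01-workload-partitioning (illustrative cpuset)
[crio.runtime.workloads.management]
activation_annotation = "target.workload.openshift.io/management"
annotation_prefix = "resources.workload.openshift.io"
resources = { "cpushares" = 0, "cpuset" = "0-1,52-53" }

# Matching PerformanceProfile excerpt: spec.cpu.reserved must use the same CPU set.
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: workload-partitioning
spec:
  cpu:
    reserved: "0-1,52-53"
    isolated: "2-51,54-103"
  nodeSelector:
    node-role.kubernetes.io/master: ""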
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/scalability_and_performance/sno-du-enabling-workload-partitioning-on-single-node-openshift
|
Upgrading
|
Upgrading OpenShift Dedicated 4 Upgrading OpenShift Dedicated Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/upgrading/index
|
Chapter 10. availability
|
Chapter 10. availability This chapter describes the commands under the availability command. 10.1. availability zone list List availability zones and their status Usage: Table 10.1. Command arguments Value Summary -h, --help Show this help message and exit --compute List compute availability zones --network List network availability zones --volume List volume availability zones --long List additional fields in output Table 10.2. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 10.3. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 10.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 10.5. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. You can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. Implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
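For example, a typical invocation using the documented arguments and formatter options (the output depends on your deployment) might be:
openstack availability zone list --compute --long -f json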
|
[
"openstack availability zone list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--compute] [--network] [--volume] [--long]"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/command_line_interface_reference/availability
|
Chapter 2. Externalizing sessions with Spring Session
|
Chapter 2. Externalizing sessions with Spring Session Store session data for Spring applications in Data Grid caches and independently of the container. 2.1. Externalizing Sessions with Spring Session Use the Spring Session API to externalize session data to Data Grid. Procedure Add dependencies to your pom.xml . Embedded caches: infinispan-spring6-embedded Remote caches: infinispan-spring6-remote The following example is for remote caches: <dependencies> <dependency> <groupId>org.infinispan</groupId> <artifactId>infinispan-core</artifactId> </dependency> <dependency> <groupId>org.infinispan</groupId> <artifactId>infinispan-spring6-remote</artifactId> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-context</artifactId> <version>${version.spring}</version> </dependency> <dependency> <groupId>org.springframework.session</groupId> <artifactId>spring-session-core</artifactId> <version>${version.spring}</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-web</artifactId> <version>${version.spring}</version> </dependency> </dependencies> Specify the appropriate FactoryBean to expose a CacheManager instance. Embedded caches: SpringEmbeddedCacheManagerFactoryBean Remote caches: SpringRemoteCacheManagerFactoryBean Enable Spring Session with the appropriate annotation. Embedded caches: @EnableInfinispanEmbeddedHttpSession Remote caches: @EnableInfinispanRemoteHttpSession These annotations have optional parameters: maxInactiveIntervalInSeconds sets session expiration time in seconds. The default is 1800 . cacheName specifies the name of the cache that stores sessions. The default is sessions . The following example shows a complete, annotation-based configuration: @EnableInfinispanEmbeddedHttpSession @Configuration public class Config { @Bean public SpringEmbeddedCacheManagerFactoryBean springCacheManager() { return new SpringEmbeddedCacheManagerFactoryBean(); } //An optional configuration bean responsible for replacing the default //cookie that obtains configuration. //For more information refer to the Spring Session documentation. @Bean public HttpSessionIdResolver httpSessionIdResolver() { return HeaderHttpSessionIdResolver.xAuthToken(); } }
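For comparison, a minimal sketch of the remote-cache variant follows; it assumes that the Hot Rod client connection details (server list, authentication, and so on) are supplied through your client configuration, and it reuses the class and annotation names listed above:
@EnableInfinispanRemoteHttpSession(maxInactiveIntervalInSeconds = 1800, cacheName = "sessions")
@Configuration
public class RemoteConfig {

    // Exposes a cache manager backed by a remote Data Grid cluster.
    // Connection settings are assumed to come from the Hot Rod client configuration.
    @Bean
    public SpringRemoteCacheManagerFactoryBean springCacheManager() {
        return new SpringRemoteCacheManagerFactoryBean();
    }
}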
|
[
"<dependencies> <dependency> <groupId>org.infinispan</groupId> <artifactId>infinispan-core</artifactId> </dependency> <dependency> <groupId>org.infinispan</groupId> <artifactId>infinispan-spring6-remote</artifactId> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-context</artifactId> <version>USD{version.spring}</version> </dependency> <dependency> <groupId>org.springframework.session</groupId> <artifactId>spring-session-core</artifactId> <version>USD{version.spring}</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-web</artifactId> <version>USD{version.spring}</version> </dependency> </dependencies>",
"@EnableInfinispanEmbeddedHttpSession @Configuration public class Config { @Bean public SpringEmbeddedCacheManagerFactoryBean springCacheManager() { return new SpringEmbeddedCacheManagerFactoryBean(); } //An optional configuration bean responsible for replacing the default //cookie that obtains configuration. //For more information refer to the Spring Session documentation. @Bean public HttpSessionIdResolver httpSessionIdResolver() { return HeaderHttpSessionIdResolver.xAuthToken(); } }"
] |
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/using_data_grid_with_spring/externalizing-sessions-spring
|
Chapter 13. Managing container storage interface (CSI) component placements
|
Chapter 13. Managing container storage interface (CSI) component placements Each cluster consists of a number of dedicated nodes such as infra and storage nodes. However, an infra node with a custom taint will not be able to use OpenShift Data Foundation Persistent Volume Claims (PVCs) on the node. So, if you want to use such nodes, you can set tolerations to bring up csi-plugins on the nodes. Procedure Edit the configmap to add the toleration for the custom taint. Remember to save before exiting the editor. Display the configmap to check the added toleration. Example output of the added toleration for the taint, nodetype=infra:NoSchedule : Note Ensure that all non-string values in the Tolerations value field have double quotation marks. For example, the value true , which is of type boolean, and 1 , which is of type int, must be input as "true" and "1". Restart the rook-ceph-operator if the csi-cephfsplugin- * and csi-rbdplugin- * pods fail to come up on their own on the infra nodes. Example : Verification step Verify that the csi-cephfsplugin- * and csi-rbdplugin- * pods are running on the infra nodes.
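For illustration, one way to run the verification step (pod and node names will differ in your cluster) is:
oc get pods -n openshift-storage -o wide | grep -E 'csi-cephfsplugin|csi-rbdplugin'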
|
[
"oc edit configmap rook-ceph-operator-config -n openshift-storage",
"oc get configmap rook-ceph-operator-config -n openshift-storage -o yaml",
"apiVersion: v1 data: [...] CSI_PLUGIN_TOLERATIONS: | - key: nodetype operator: Equal value: infra effect: NoSchedule - key: node.ocs.openshift.io/storage operator: Equal value: \"true\" effect: NoSchedule [...] kind: ConfigMap metadata: [...]",
"oc delete -n openshift-storage pod <name of the rook_ceph_operator pod>",
"oc delete -n openshift-storage pod rook-ceph-operator-5446f9b95b-jrn2j pod \"rook-ceph-operator-5446f9b95b-jrn2j\" deleted"
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/managing_and_allocating_storage_resources/managing-container-storage-interface-component-placements_rhodf
|
2.2.10.2. Installation
|
2.2.10.2. Installation Perl's capabilities can be extended by installing additional modules. These modules come in the following forms: Official Red Hat RPM The official module packages can be installed with yum or rpm from the Red Hat Enterprise Linux repositories. They are installed to /usr/share/perl5 and either /usr/lib/perl5 for 32bit architectures or /usr/lib64/perl5 for 64bit architectures. Modules from CPAN Use the cpan tool provided by the perl-CPAN package to install modules directly from the CPAN website. They are installed to /usr/local/share/perl5 and either /usr/local/lib/perl5 for 32bit architectures or /usr/local/lib64/perl5 for 64bit architectures. Third party module package Third party modules are installed to /usr/share/perl5/vendor_perl and either /usr/lib/perl5/vendor_perl for 32bit architectures or /usr/lib64/perl5/vendor_perl for 64bit architectures. Custom module package / manually installed module These should be placed in the same directories as third-party modules. That is, /usr/share/perl5/vendor_perl and either /usr/lib/perl5/vendor_perl for 32bit architectures or /usr/lib64/perl5/vendor_perl for 64bit architectures. Warning If an official version of a module is already installed, installing its non-official version can create conflicts in the /usr/share/man directory.
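For illustration (the module name is a placeholder; substitute the module you need):
# Official Red Hat RPM from the Red Hat Enterprise Linux repositories:
yum install perl-DateTime
# The same module installed directly from CPAN instead (lands under /usr/local):
cpan DateTime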
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/developer_guide/installation
|
Chapter 2. BrokerTemplateInstance [template.openshift.io/v1]
|
Chapter 2. BrokerTemplateInstance [template.openshift.io/v1] Description BrokerTemplateInstance holds the service broker-related state associated with a TemplateInstance. BrokerTemplateInstance is part of an experimental API. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object BrokerTemplateInstanceSpec describes the state of a BrokerTemplateInstance. 2.1.1. .spec Description BrokerTemplateInstanceSpec describes the state of a BrokerTemplateInstance. Type object Required templateInstance secret Property Type Description bindingIDs array (string) bindingids is a list of 'binding_id's provided during successive bind calls to the template service broker. secret ObjectReference secret is a reference to a Secret object residing in a namespace, containing the necessary template parameters. templateInstance ObjectReference templateinstance is a reference to a TemplateInstance object residing in a namespace. 2.2. API endpoints The following API endpoints are available: /apis/template.openshift.io/v1/brokertemplateinstances DELETE : delete collection of BrokerTemplateInstance GET : list or watch objects of kind BrokerTemplateInstance POST : create a BrokerTemplateInstance /apis/template.openshift.io/v1/watch/brokertemplateinstances GET : watch individual changes to a list of BrokerTemplateInstance. deprecated: use the 'watch' parameter with a list operation instead. /apis/template.openshift.io/v1/brokertemplateinstances/{name} DELETE : delete a BrokerTemplateInstance GET : read the specified BrokerTemplateInstance PATCH : partially update the specified BrokerTemplateInstance PUT : replace the specified BrokerTemplateInstance /apis/template.openshift.io/v1/watch/brokertemplateinstances/{name} GET : watch changes to an object of kind BrokerTemplateInstance. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 2.2.1. /apis/template.openshift.io/v1/brokertemplateinstances HTTP method DELETE Description delete collection of BrokerTemplateInstance Table 2.1. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 2.2. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind BrokerTemplateInstance Table 2.3. 
HTTP responses HTTP code Response body 200 - OK BrokerTemplateInstanceList schema 401 - Unauthorized Empty HTTP method POST Description create a BrokerTemplateInstance Table 2.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.5. Body parameters Parameter Type Description body BrokerTemplateInstance schema Table 2.6. HTTP responses HTTP code Response body 200 - OK BrokerTemplateInstance schema 201 - Created BrokerTemplateInstance schema 202 - Accepted BrokerTemplateInstance schema 401 - Unauthorized Empty 2.2.2. /apis/template.openshift.io/v1/watch/brokertemplateinstances HTTP method GET Description watch individual changes to a list of BrokerTemplateInstance. deprecated: use the 'watch' parameter with a list operation instead. Table 2.7. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty 2.2.3. /apis/template.openshift.io/v1/brokertemplateinstances/{name} Table 2.8. Global path parameters Parameter Type Description name string name of the BrokerTemplateInstance HTTP method DELETE Description delete a BrokerTemplateInstance Table 2.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 2.10. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified BrokerTemplateInstance Table 2.11. HTTP responses HTTP code Response body 200 - OK BrokerTemplateInstance schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified BrokerTemplateInstance Table 2.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.13. HTTP responses HTTP code Reponse body 200 - OK BrokerTemplateInstance schema 201 - Created BrokerTemplateInstance schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified BrokerTemplateInstance Table 2.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.15. Body parameters Parameter Type Description body BrokerTemplateInstance schema Table 2.16. HTTP responses HTTP code Reponse body 200 - OK BrokerTemplateInstance schema 201 - Created BrokerTemplateInstance schema 401 - Unauthorized Empty 2.2.4. /apis/template.openshift.io/v1/watch/brokertemplateinstances/{name} Table 2.17. Global path parameters Parameter Type Description name string name of the BrokerTemplateInstance HTTP method GET Description watch changes to an object of kind BrokerTemplateInstance. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 2.18. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty
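For illustration only, these endpoints can be exercised with the oc client or called directly over HTTPS; the API server host, port, and the assumption that your token has cluster-scoped read access are not part of this reference: oc get brokertemplateinstances.template.openshift.io curl -k -H "Authorization: Bearer $(oc whoami -t)" https://<api_server>:6443/apis/template.openshift.io/v1/brokertemplateinstances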
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/template_apis/brokertemplateinstance-template-openshift-io-v1
|
Chapter 1. Update options with Red Hat build of MicroShift and Red Hat Device Edge
|
Chapter 1. Update options with Red Hat build of MicroShift and Red Hat Device Edge Updates are supported on Red Hat build of MicroShift beginning with the General Availability version 4.14. 1.1. Red Hat Device Edge updates You can update Red Hat Enterprise Linux for Edge (RHEL for Edge) images or Red Hat Enterprise Linux (RHEL) with or without updating the Red Hat build of MicroShift version if the version combination is supported. See the following table for details: Red Hat Device Edge release compatibility matrix Red Hat Enterprise Linux (RHEL) and MicroShift work together as a single solution for device-edge computing. You can update each component separately, but the product versions must be compatible. Supported configurations of Red Hat Device Edge use verified releases for each together as listed in the following table: RHEL Version(s) MicroShift Version Supported MicroShift Version -> Version Updates 9.4 4.18 4.18.0 -> 4.18.z 9.4 4.17 4.17.1 -> 4.17.z, 4.17 -> 4.18 9.4 4.16 4.16.0 -> 4.16.z, 4.16 -> 4.17, 4.16 -> 4.18 9.2, 9.3 4.15 4.15.0 -> 4.15.z, 4.15 -> 4.16 on RHEL 9.4 9.2, 9.3 4.14 4.14.0 -> 4.14.z, 4.14 -> 4.15, 4.14 -> 4.16 on RHEL 9.4 Warning Keeping component versions in a supported configuration of Red Hat Device Edge can require updating MicroShift and RHEL at the same time. Ensure that your version of RHEL is compatible with the version of MicroShift you are updating to, especially if you are updating MicroShift across two minor versions. Otherwise, you can create an unsupported configuration, break your cluster, or both. For more information, see the Red Hat Device Edge release compatibility matrix . 1.2. Standalone MicroShift updates Consider the following when planning to update MicroShift: You can potentially update MicroShift without reinstalling your applications and Operators. RHEL or RHEL for Edge updates are only required to update MicroShift if the existing operating system is not compatible with the new version of MicroShift that you want to use. MicroShift operates as an in-place update and does not require removal of the version. Data backups beyond those required for the usual functioning of your applications are also not required. Note Only rpm-ostree updates include automatic rollbacks. 1.2.1. RPM-OSTree updates Using the RHEL for Edge rpm-ostree update path allows for automated backup and system rollback in case any part of the update fails. You can update MicroShift on an rpm-ostree system such as RHEL for Edge by building a new system image containing the new version of MicroShift. The rpm-ostree image can be the same version or an updated version, but the versions of RHEL for Edge and MicroShift must be compatible. The following features are available in the RHEL for Edge update path: The system automatically rolls back to a healthy system state if the update fails. Applications do not need to be reinstalled. Operators do not need to be reinstalled. You can update an application without updating MicroShift using this update type. The image you build can contain other updates as needed. To begin a MicroShift update by embedding in a RHEL for Edge image, use the procedures in the following documentation: Applying updates on an OSTree system To understand more about Greenboot, see the following documentation: The Greenboot health check Greenboot workload health check scripts 1.2.2. Manual RPM updates You can update MicroShift manually on a non-OSTree system such as Red Hat Enterprise Linux (RHEL) by updating the RPMs. 
To complete this update type, use the subscription manager to enable the repository that contains the new RPMs. Use manual processes to ensure system health and complete additional system backups. To begin a manual RPM update, use the procedures in the following documentation: About updating MicroShift RPMs manually 1.2.2.1. Keeping MicroShift and RHEL in a supported configuration When using RPM updates, avoid creating an unsupported configuration or breaking your cluster by carefully managing your RHEL repositories. Prerequisites You understand the support status of the version of MicroShift you are using. You have root-user access to your build host. You reviewed the Red Hat Device Edge release compatibility matrix. Procedure Avoid unintended updates by locking your operating system version by running the following command: USD sudo subscription-manager release --set=<x.y> 1 1 Replace <x.y> with the major and minor version of your compatible RHEL system. For example, 9.4. Update both MicroShift and RHEL versions by running the following command: USD sudo subscription-manager release --set=<9.4> 1 1 Replace <9.4> with the major and minor version of your compatible RHEL system. If you are using an EUS MicroShift release, disable the RHEL standard-support-scope repositories by running the following command: USD sudo subscription-manager repos \ --disable=rhel-<9>-for-x86_64-appstream-rpms \ 1 --disable=rhel-<9>-for-x86_64-baseos-rpms 1 Replace <9> with the major version of your compatible RHEL system. After you disable the standard-support repositories, enable the RHEL EUS repos by running the following command: USD sudo subscription-manager repos \ --enable rhel-<9>-for-x86_64-appstream-eus-rpms \ 1 --enable rhel-<9>-for-x86_64-baseos-eus-rpms 1 Replace <9> with the major version of your compatible RHEL system. Verification List the repositories you have enabled for RHEL by running the following command: USD sudo subscription-manager repos --list-enabled Example output +----------------------------------------------------------+ Available Repositories in /etc/yum.repos.d/redhat.repo +----------------------------------------------------------+ Repo ID: rhel-9-for-x86_64-baseos-eus-rpms Repo Name: Red Hat Enterprise Linux 9 for x86_64 - BaseOS - Extended Update Support (RPMs) Repo URL: https://cdn.redhat.com/content/eus/rhel9/USDreleasever/x86_64/baseos/os Enabled: 1 Repo ID: rhel-9-for-x86_64-appstream-eus-rpms Repo Name: Red Hat Enterprise Linux 9 for x86_64 - AppStream - Extended Update Support (RPMs) Repo URL: https://cdn.redhat.com/content/eus/rhel9/USDreleasever/x86_64/appstream/os Enabled: 1 1.3. Standalone RHEL for Edge updates You can update RHEL for Edge or RHEL without updating MicroShift, on the condition that the two versions are compatible. Check compatibilities before beginning an update. Use the RHEL for Edge documentation specific to your update path. Additional resources Managing RHEL for Edge images 1.4. Simultaneous MicroShift and operating system updates You can update RHEL for Edge or RHEL and update MicroShift at the same time, on the condition that the versions are compatible. Use the following workflow: Check for compatibility before beginning an update. Use the RHEL for Edge and RHEL documentation specific to your update path to plan and update the operating system. Enable the correct MicroShift repository to ensure alignment between your RHEL and MicroShift versions. Use the MicroShift update type specific to your update path.
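Before choosing an update path, it can help to confirm the MicroShift and RHEL versions that are currently installed so that you can compare them against the compatibility matrix. The following commands are a minimal sketch for an RPM-based installation; on an rpm-ostree system the MicroShift version is determined by the deployed image instead: rpm -q microshift cat /etc/redhat-release sudo subscription-manager release --show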
Additional resources How to Access EUS Composing a customized RHEL system image Applying updates on an OSTree system Applying updates manually with RPMs The Greenboot system health check Greenboot workload scripts
|
[
"sudo subscription-manager release --set=<x.y> 1",
"sudo subscription-manager release --set=<9.4> command. 1",
"sudo subscription-manager repos --disable=rhel-<9>-for-x86_64-appstream-rpms \\ 1 --disable=rhel-<9>-for-x86_64-baseos-rpms",
"sudo subscription-manager repos --enable rhel-<9>-for-x86_64-appstream-eus-rpms \\ 1 --enable rhel-<9>-for-x86_64-baseos-eus-rpms`",
"sudo subscription-manager repos --list-enabled",
"+----------------------------------------------------------+ Available Repositories in /etc/yum.repos.d/redhat.repo +----------------------------------------------------------+ Repo ID: rhel-9-for-x86_64-baseos-eus-rpms Repo Name: Red Hat Enterprise Linux 9 for x86_64 - BaseOS - Extended Update Support (RPMs) Repo URL: https://cdn.redhat.com/content/eus/rhel9/USDreleasever/x86_64/baseos/os Enabled: 1 Repo ID: rhel-9-for-x86_64-appstream-eus-rpms Repo Name: Red Hat Enterprise Linux 9 for x86_64 - AppStream - Extended Update Support (RPMs) Repo URL: https://cdn.redhat.com/content/eus/rhel9/USDreleasever/x86_64/appstream/os Enabled: 1"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html/updating/microshift-update-options
|
Chapter 4. Setting up an authentication method for RHOSP
|
Chapter 4. Setting up an authentication method for RHOSP The high availability fence agents and resource agents support three authentication methods for communicating with RHOSP: Authentication with a clouds.yaml configuration file Authentication with an OpenRC environment script Authentication with a username and password through Pacemaker After determining the authentication method to use for the cluster, specify the appropriate authentication parameters when creating a fencing or cluster resource. 4.1. Authenticating with RHOSP by using a clouds.yaml file The procedures in this document that use a clouds.yaml file for authentication use the clouds.yaml file shown in this procedure. Those procedures specify ha-example for the cloud= parameter, as defined in this file. Procedure On each node that will be part of your cluster, create a clouds.yaml file, as in the following example. For information about creating a clouds.yaml file, see Users and Identity Management Guide. Test whether authentication is successful and you have access to the RHOSP API with the following basic RHOSP command, substituting the name of the cloud you specified in the clouds.yaml file you created for ha-example. If this command does not display a server list, contact your RHOSP administrator. Specify the cloud parameter when creating a cluster resource or a fencing resource. 4.2. Authenticating with RHOSP by using an OpenRC environment script To use an OpenRC environment script to authenticate with RHOSP, perform the following steps. Procedure On each node that will be part of your cluster, configure an OpenRC environment script. For information about creating an OpenRC environment script, see Set environment variables using the OpenStack RC file. Test whether authentication is successful and you have access to the RHOSP API with the following basic RHOSP command. If this command does not display a server list, contact your RHOSP administrator. Specify the openrc parameter when creating a cluster resource or a fencing resource. 4.3. Authenticating with RHOSP by means of a username and password To authenticate with RHOSP by means of a username and password, specify the username, password, and auth_url parameters for a cluster resource or a fencing resource when you create the resource. Additional authentication parameters may be required, depending on the RHOSP configuration. The RHOSP administrator provides the authentication parameters to use.
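As an illustration of passing an authentication parameter, a fencing resource that uses the clouds.yaml method might be created with the cloud option as in the following sketch; the resource name and the instance UUIDs in pcmk_host_map are placeholder assumptions, not values defined in this chapter: pcs stonith create fenceopenstack fence_openstack cloud="ha-example" pcmk_host_map="node01:<instance_UUID1>;node02:<instance_UUID2>"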
|
[
"cat .config/openstack/clouds.yaml clouds: ha-example: auth: auth_url: https://<ip_address>:13000/ project_name: rainbow username: unicorns password: <password> user_domain_name: Default project_domain_name: Default <. . . additional options . . .> region_name: regionOne verify: False",
"openstack --os-cloud=ha-example server list",
"openstack server list"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_a_red_hat_high_availability_cluster_on_red_hat_openstack_platform/authentication-methods-for-rhosp_configurng-a-red-hat-high-availability-cluster-on-red-hat-openstack-platform
|
1.6. Kernel
|
1.6. Kernel Support for Fibre Channel over Ethernet (FCoE) target mode Red Hat Enterprise Linux 6.2 includes support for Fibre Channel over Ethernet (FCoE) target mode as a Technology Preview. This kernel feature is configurable via targetadmin, supplied by the fcoe-target-utils package. FCoE is designed to be used on a network supporting Data Center Bridging (DCB). Further details are available in the dcbtool(8) and targetadmin(8) man pages. Important This feature uses the new SCSI target layer, which falls under this Technology Preview, and should not be used independently from the FCoE target support. This package contains the AGPL license. Kernel Media support The following features are presented as Technology Previews: The latest upstream video4linux Digital video broadcasting Primarily infrared remote control device support Various webcam support fixes and improvements Remote audit logging The audit package contains the user space utilities for storing and searching the audit records generated by the audit subsystem in the Linux 2.6 kernel. Within the audispd-plugins subpackage is a utility that allows for the transmission of audit events to a remote aggregating machine. This remote audit logging application, audisp-remote, is considered a Technology Preview in Red Hat Enterprise Linux 6. Linux (NameSpace) Container [LXC] Linux containers provide a flexible approach to application runtime containment on bare-metal systems without the need to fully virtualize the workload. Red Hat Enterprise Linux 6.2 provides application level containers to separate and control the application resource usage policies via cgroup and namespaces. This release introduces basic management of container life-cycle by allowing creation, editing and deletion of containers via the libvirt API and the virt-manager GUI. Linux Containers are a Technology Preview. Diagnostic pulse for the fence_ipmilan agent, BZ# 655764 A diagnostic pulse can now be issued on the IPMI interface using the fence_ipmilan agent. This new Technology Preview is used to force a kernel dump of a host if the host is configured to do so. Note that this feature is not a substitute for the off operation in a production cluster. EDAC driver support, BZ# 647700 Red Hat Enterprise Linux 6.2's EDAC driver support for the latest Intel chipset is available as a Technology Preview.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/kernel_tp
|
Chapter 6. Designing the Directory Topology
|
Chapter 6. Designing the Directory Topology Chapter 4, Designing the Directory Tree covers how the directory service stores entries. Because Red Hat Directory Server can store a large number of entries, it is possible to distribute directory entries across more than one server. The directory's topology describes how the directory tree is divided among multiple physical Directory Servers and how these servers link with one another. This chapter describes planning the topology of the directory service. 6.1. Topology Overview Directory Server can support a distributed directory , where the directory tree (designed in Chapter 4, Designing the Directory Tree ) is spread across multiple physical Directory Servers. The way the directory is divided across those servers helps accomplish the following: Achieve the best possible performance for directory-enabled applications. Increase the availability of the directory service. Improve the management of the directory service. The database is the basic unit for jobs such as replication, performing backups, and restoring data. A single directory can be divided into manageable pieces and assigned to separate databases. These databases can then be distributed between a number of servers, reducing the workload for each server. More than one database can be located on a single server. For example, one server might contain three different databases. When the directory tree is divided across several databases, each database contains a portion of the directory tree, called a suffix . For example, one database can be used to store only entries in the ou=people,dc=example,dc=com suffix, or branch, of the directory tree. When the directory is divided between several servers, each server is responsible for only a part of the directory tree. The distributed directory service works similarly to the Domain Name Service (DNS), which assigns each portion of the DNS namespace to a particular DNS server. Likewise, the directory namespace can be distributed across servers while maintaining a directory service that, from a client's point of view, appears to be a single directory tree. The Directory Server also provides knowledge references , mechanisms for linking directory data stored in different databases. Directory Server includes two types of knowledge references; referrals and chaining . The remainder of this chapter describes databases and knowledge references, explains the differences between the two types of knowledge references, and describes how to design indexes to improve the performance of the databases.
| null |
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/deployment_guide/Designing_the_Directory_Topology
|
Chapter 3. Using the web console to analyze applications
|
Chapter 3. Using the web console to analyze applications You can create a project in the web console to analyze your applications. Each project groups the applications for a specific analysis, which you can configure with custom rules and labels. The analysis process generates reports that describe the readiness of your applications for migration or modernization. 3.1. Creating a project You can create a project in the web console with the Create project wizard. Procedure In the web console, click Projects . Click Create project . Enter a unique name for the project, an optional description, and click . To upload applications, click the Upload tab, click Browse , select the application files you want to upload, and click Close . Uploading applications stores them directly on the MTR server. To register a server path, click the Server path tab and enter the Server-side path of the application in the field. Registering the server path of an application ensures that MTR always uses the latest version. Click . Click one or more transformation targets. Click . Select packages and click > to include them in the analysis. Click . If you want to add a custom rule, click Add rule . See the Rule Development Guide for more information. To upload a ruleset file, click the Upload tab, click Browse , select one or more files, and click Close . A ruleset file must have a .windup.xml extension. The uploaded file is stored on the MTR server. To register the server path of a ruleset file, click the Server path tab, enter the Rules path, and click Save . Registering the server path ensures that the MTR server always uses the latest version of the ruleset files. Click . If you want to add a custom label, click Add label . To upload a labelset file, click the Upload tab, click Browse , select one or more files, and click Close . A labelset file must have a .windup.label.xml extension. The uploaded file is stored on the MTR server. To register a server path, click the Server path tab, enter the Labels path of the label files in the field, and click Save . Registering the server path ensures that the MTR server always uses the latest version of the labelset files. Click . Review the following Advanced options and make any necessary changes: Target Source Exclude tags : Rules with these tags are not processed. Additional classpath : Enter a space-delimited list of additional .jar files or directories so that they are available for decompilation or other analysis. Application name Mavenize group ID Ignore path : Enter a path for files to exclude from analysis. Export CSV : Exports the report data as a CSV file. Export Summary : Generates an analysisSummary.json export file in the output directory. The file contains analysis summary information for each application analyzed, including the number of incidents and story points by category, and all of the technology tags associated with the analyzed applications. Export reports zip : Creates a reports.zip file in the output folder. The file contains the analysis output, typically, the reports. If requested, it can also contain the CSV export files. Disable Tattletale : Disables generation of a Tattletale report for each application. Class Not Found analysis : Enables analysis of Java files that are not available on the class path. Note This option should not be used if some classes are unavailable for analysis. Compatible Files report : Generating a Compatible Files report might take a long time for large applications. 
Exploded app : The input directory contains the unpackaged source files of an application. Keep work dirs : Retains temporary files, for example, the graph database or extracted archive files, for debugging purposes. Skip reports : HTML reports are not generated. Must be enabled if you enabled Export CSV . Allow network access : Validates any XML files within the analyzed applications against their schema. Note This option might reduce performance. Mavenize : Creates a Maven project directory structure based on the structure and content of the application. Source mode : Indicates that the application files are raw source files, not compiled binaries. The sourceMode argument has been deprecated. There is no longer the need to specify it. MTR can intuitively process any inputs that are presented to it. In addition, project source folders can be analyzed with binary inputs within the same analysis execution. Analyze known libraries : Analyze known software artifacts embedded within your application. By default, MTR only analyzes application code. Note This option might result in a longer execution time and a large number of migration issues being reported. Transaction analysis : [Tech Preview] Generates a Transactions report that displays the call stack, which executes operations on relational database tables. The Enable Transaction Analysis feature supports Spring Data JPA and the traditional preparedStatement() method for SQL statement execution. It does not support ORM frameworks, such as Hibernate. Note Transaction analysis is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. Skip source code reports: Adding this option skips generating a Source code report when generating the application analysis report. Use this advanced option when concerned about source code information appearing in the application analysis. Click . Review your project and click Save or Save and run . The project is displayed in the Projects screen. 3.2. Running a saved analysis You can run a saved analysis. Procedure In the web console, click Analysis results . Select a project. Click Run analysis . A progress bar displays the progress of your analysis. 3.3. Viewing analysis results The results of all analyses are grouped and listed by project on the Analysis results screen. Procedure In the web console, click Analysis results . Select a project from the list. The Analysis Reports can be accessed via the Reports bar chart icon or by the Reports action on the right-hand side of the screen. Click on the number of the analysis you want to review to see details of the analysis configuration settings and the analysis execution statistics. The results are displayed in the Results screen, which contains two tabs: Details and Logs . The Details tab displays important details of the analysis, such as status, start date, duration, and configuration settings. Figure 3.1. Analysis details screen The Logs tab displays the logs generated during the analysis. Figure 3.2. Analysis logs screen 3.4. Reviewing reports The MTR web console provides a set of detailed reports that can help you decide if you need to make any changes to your applications. 
You access these reports from the Analysis results screen. The reports are described in detail in Reviewing the reports in the CLI Guide . Procedure In the web console, click Analysis results . Click the Reports icon beside the analysis you want to investigate. The All applications screen of the reports is displayed. 3.5. Updating an analysis configuration You can update an analysis configuration, for example, with a different transformation target, advanced option, or a custom rule. Then you can run the updated analysis in your project. Procedure In the web console, click Analysis configuration . Select a Project . Click the appropriate tabs and make your changes. Click Save or Save and run . The project is displayed in the Projects screen. 3.6. Adding global custom rules MTR includes a preconfigured set of global rules, which apply to all projects. You can define your own custom global rules. For information on writing custom MTR rules, see the MTR Rule Development Guide . Procedure In the web console, click Rules configuration . Click Add rules . To upload a ruleset file, click the Upload tab, click Browse , select one or more files, and click Close . A ruleset file must have a .windup.xml extension. The uploaded file is stored on the MTR server. To register the server path of a ruleset file, click the Server path tab, enter the Rules path, and click Save . Registering the server path ensures that the MTR server always uses the latest version of the ruleset files. The Custom rules list displays the rules. 3.7. Adding global custom labels MTR includes a preconfigured set of global labels, which apply to all projects. You can define your own custom global labels. Procedure In the web console, click Labels configuration . Click Add label . To upload a labelset file, click the Upload tab, click Browse , select one or more files, and click Close . A labelset file must have a .windup.label.xml extension. The uploaded file is stored on the MTR server. To register the server path of a labelset file, click the Server path tab, enter the Labels path, and click Save . Registering the server path ensures that the MTR server always uses the latest version of the labelset files. The Custom labels list displays the labels.
| null |
https://docs.redhat.com/en/documentation/migration_toolkit_for_runtimes/1.2/html/web_console_guide/using_the_web_console_to_analyze_applications
|
Chapter 14. Key Manager (barbican) Parameters
|
Chapter 14. Key Manager (barbican) Parameters You can modify the barbican service with key manager parameters. Parameter Description ApacheCertificateKeySize Override the private key size used when creating the certificate for this service. ApacheTimeout The timeout in seconds for Apache, which defines duration Apache waits for I/O operations. The default value is 90 . ATOSVars Hash of atos-hsm role variables used to install ATOS client software. BarbicanDogtagStoreGlobalDefault Whether this plugin is the global default plugin. The default value is false . BarbicanDogtagStoreHost Hostname of the Dogtag server. BarbicanDogtagStoreNSSPassword Password for the NSS DB. BarbicanDogtagStorePEMPath Path for the PEM file used to authenticate requests. The default value is /etc/barbican/kra_admin_cert.pem . BarbicanDogtagStorePort Port for the Dogtag server. The default value is 8443 . BarbicanKmipStoreGlobalDefault Whether this plugin is the global default plugin. The default value is false . BarbicanKmipStoreHost Host for KMIP device. BarbicanKmipStorePassword Password to connect to KMIP device. BarbicanKmipStorePort Port for KMIP device. BarbicanKmipStoreUsername Username to connect to KMIP device. BarbicanPassword The password for the OpenStack Key Manager (barbican) service account. BarbicanPkcs11AlwaysSetCkaSensitive Always set CKA_SENSITIVE=CK_TRUE. The default value is true . BarbicanPkcs11CryptoAESGCMGenerateIV Generate IVs for CKM_AES_GCM encryption mechanism. The default value is true . BarbicanPkcs11CryptoATOSEnabled Enable ATOS for PKCS11. The default value is false . BarbicanPkcs11CryptoEnabled Enable PKCS11. The default value is false . BarbicanPkcs11CryptoEncryptionMechanism Cryptoki Mechanism used for encryption. The default value is CKM_AES_CBC . BarbicanPkcs11CryptoGlobalDefault Whether this plugin is the global default plugin. The default value is false . BarbicanPkcs11CryptoHMACKeygenMechanism Cryptoki Mechanism used to generate Master HMAC Key. The default value is CKM_AES_KEY_GEN . BarbicanPkcs11CryptoHMACKeyType Cryptoki Key Type for Master HMAC key. The default value is CKK_AES . BarbicanPkcs11CryptoHMACLabel Label for the HMAC key. BarbicanPkcs11CryptoLibraryPath Path to vendor PKCS11 library. BarbicanPkcs11CryptoLogin Password (PIN) to login to PKCS#11 session. BarbicanPkcs11CryptoLunasaEnabled Enable Luna SA HSM for PKCS11. The default value is false . BarbicanPkcs11CryptoMKEKLabel Label for Master KEK. BarbicanPkcs11CryptoMKEKLength Length of Master KEK in bytes. The default value is 256 . BarbicanPkcs11CryptoOsLockingOk Set CKF_OS_LOCKING_OK flag when initializing the client library. The default value is false . BarbicanPkcs11CryptoRewrapKeys Cryptoki Mechanism used to generate Master HMAC Key. The default value is false . BarbicanPkcs11CryptoSlotId Slot Id for the PKCS#11 token to be used. The default value is 0 . BarbicanPkcs11CryptoThalesEnabled Enable Thales for PKCS11. The default value is false . BarbicanPkcs11CryptoTokenLabel (DEPRECATED) Use BarbicanPkcs11CryptoTokenLabels instead. BarbicanPkcs11CryptoTokenLabels List of comma separated labels for the tokens to be used. This is typically a single label, but some devices may require more than one label for Load Balancing and High Availability configurations. BarbicanPkcs11CryptoTokenSerialNumber Serial number for PKCS#11 token to be used. BarbicanSimpleCryptoGlobalDefault Whether this plugin is the global default plugin. The default value is false . BarbicanSimpleCryptoKek KEK used to encrypt secrets. 
BarbicanWorkers Set the number of workers for barbican::wsgi::apache. The default value is %{::processorcount} . CertificateKeySize Specifies the private key size used when creating the certificate. The default value is 2048 . EnableSQLAlchemyCollectd Set to true to enable the SQLAlchemy-collectd server plugin. The default value is false . LunasaClientIPNetwork (Optional) When set OpenStack Key Manager (barbican) nodes will be registered with the HSMs using the IP from this network instead of the FQDN. LunasaVars Hash of lunasa-hsm role variables used to install Lunasa client software. MemcacheUseAdvancedPool Use the advanced (eventlet safe) memcached client pool. The default value is true . NotificationDriver Driver or drivers to handle sending notifications. The default value is noop . ThalesHSMNetworkName The network that the HSM is listening on. The default value is internal_api . ThalesVars Hash of thales_hsm role variables used to install Thales client software.
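These parameters are typically set under parameter_defaults in a custom environment file that is passed to the overcloud deployment with the -e option. The file name, the parameters chosen, and their values in the following sketch are illustrative assumptions only: parameter_defaults:   BarbicanWorkers: 4   ApacheTimeout: 120   BarbicanSimpleCryptoGlobalDefault: true The file could then be included with, for example, openstack overcloud deploy --templates -e barbican-overrides.yaml in addition to your existing environment files.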
| null |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/overcloud_parameters/ref_key-manager-barbican-parameters_overcloud_parameters
|
Chapter 6. Using Ansible to automount NFS shares for IdM users
|
Chapter 6. Using Ansible to automount NFS shares for IdM users Automount is a way to manage, organize, and access directories across multiple systems. Automount automatically mounts a directory whenever access to it is requested. This works well within an Identity Management (IdM) domain as it allows you to share directories on clients within the domain easily. You can use Ansible to configure NFS shares to be mounted automatically for IdM users logged in to IdM clients in an IdM location. The example in this chapter uses the following scenario: nfs-server.idm.example.com is the fully-qualified domain name (FQDN) of a Network File System (NFS) server. nfs-server.idm.example.com is an IdM client located in the raleigh automount location. The NFS server exports the /exports/project directory as read-write. Any IdM user belonging to the developers group can access the contents of the exported directory as /devel/project/ on any IdM client that is located in the same raleigh automount location as the NFS server. idm-client.idm.example.com is an IdM client located in the raleigh automount location. Important If you want to use a Samba server instead of an NFS server to provide the shares for IdM clients, see the Red Hat Knowledgebase solution How do I configure kerberized CIFS mounts with Autofs in an IPA environment? . The chapter contains the following sections: Autofs and automount in IdM Setting up an NFS server with Kerberos in IdM Configuring automount locations, maps, and keys in IdM by using Ansible Using Ansible to add IdM users to a group that owns NFS shares Configuring automount on an IdM client Verifying that an IdM user can access NFS shares on an IdM client 6.1. Autofs and automount in IdM The autofs service automates the mounting of directories, as needed, by directing the automount daemon to mount directories when they are accessed. In addition, after a period of inactivity, autofs directs automount to unmount auto-mounted directories. Unlike static mounting, on-demand mounting saves system resources. Automount maps On a system that utilizes autofs , the automount configuration is stored in several different files. The primary automount configuration file is /etc/auto.master , which contains the master mapping of automount mount points, and their associated resources, on a system. This mapping is known as automount maps . The /etc/auto.master configuration file contains the master map . It can contain references to other maps. These maps can either be direct or indirect. Direct maps use absolute path names for their mount points, while indirect maps use relative path names. Automount configuration in IdM While automount typically retrieves its map data from the local /etc/auto.master and associated files, it can also retrieve map data from other sources. One common source is an LDAP server. In the context of Identity Management (IdM), this is a 389 Directory Server. If a system that uses autofs is a client in an IdM domain, the automount configuration is not stored in local configuration files. Instead, the autofs configuration, such as maps, locations, and keys, is stored as LDAP entries in the IdM directory. For example, for the idm.example.com IdM domain, the default master map is stored as follows: Additional resources Mounting file systems on demand 6.2. Setting up an NFS server with Kerberos in a Red Hat Enterprise Linux Identity Management domain If you use Red Hat Enterprise Linux Identity Management (IdM), you can join your NFS server to the IdM domain. 
This enables you to centrally manage users and groups and to use Kerberos for authentication, integrity protection, and traffic encryption. Prerequisites The NFS server is enrolled in a Red Hat Enterprise Linux Identity Management (IdM) domain. The NFS server is running and configured. Procedure Obtain a kerberos ticket as an IdM administrator: Create a nfs/<FQDN> service principal: Retrieve the nfs service principal from IdM, and store it in the /etc/krb5.keytab file: Optional: Display the principals in the /etc/krb5.keytab file: By default, the IdM client adds the host principal to the /etc/krb5.keytab file when you join the host to the IdM domain. If the host principal is missing, use the ipa-getkeytab -s idm_server.idm.example.com -p host/nfs_server.idm.example.com -k /etc/krb5.keytab command to add it. Use the ipa-client-automount utility to configure mapping of IdM IDs: Update your /etc/exports file, and add the Kerberos security method to the client options. For example: If you want that your clients can select from multiple security methods, specify them separated by colons: Reload the exported file systems: 6.3. Configuring automount locations, maps, and keys in IdM by using Ansible As an Identity Management (IdM) system administrator, you can configure automount locations and maps in IdM so that IdM users in the specified locations can access shares exported by an NFS server by navigating to specific mount points on their hosts. Both the exported NFS server directory and the mount points are specified in the maps. In LDAP terms, a location is a container for such map entries. The example describes how to use Ansible to configure the raleigh location and a map that mounts the nfs-server.idm.example.com:/exports/project share on the /devel/project mount point on the IdM client as a read-write directory. Prerequisites On the control node: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure On your Ansible control node, navigate to your ~/ MyPlaybooks / directory: Copy the automount-location-present.yml Ansible playbook file located in the /usr/share/doc/ansible-freeipa/playbooks/automount/ directory: Open the automount-location-map-and-key-present.yml file for editing. Adapt the file by setting the following variables in the ipaautomountlocation task section: Set the ipaadmin_password variable to the password of the IdM admin . Set the name variable to raleigh . Ensure that the state variable is set to present . This is the modified Ansible playbook file for the current example: Continue editing the automount-location-map-and-key-present.yml file: In the tasks section, add a task to ensure the presence of an automount map: Add another task to add the mount point and NFS server information to the map: Add another task to ensure auto.devel is connected to auto.master : Save the file. Run the Ansible playbook and specify the playbook and inventory files: 6.4. 
Using Ansible to add IdM users to a group that owns NFS shares As an Identity Management (IdM) system administrator, you can use Ansible to create a group of users that is able to access NFS shares, and add IdM users to this group. This example describes how to use an Ansible playbook to ensure that the idm_user account belongs to the developers group, so that idm_user can access the /exports/project NFS share. Prerequisites You have root access to the nfs-server.idm.example.com NFS server, which is an IdM client located in the raleigh automount location. On the control node: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. In ~/ MyPlaybooks / , you have created the automount-location-map-and-key-present.yml file that already contains tasks from Configuring automount locations, maps, and keys in IdM by using Ansible . Procedure On your Ansible control node, navigate to the ~/ MyPlaybooks / directory: Open the automount-location-map-and-key-present.yml file for editing. In the tasks section, add a task to ensure that the IdM developers group exists and idm_user is added to this group: Save the file. Run the Ansible playbook and specify the playbook and inventory files: On the NFS server, change the group ownership of the /exports/project directory to developers so that every IdM user in the group can access the directory: 6.5. Configuring automount on an IdM client As an Identity Management (IdM) system administrator, you can configure automount services on an IdM client so that NFS shares configured for a location to which the client has been added are accessible to an IdM user automatically when the user logs in to the client. The example describes how to configure an IdM client to use automount services that are available in the raleigh location. Prerequisites You have root access to the IdM client. You are logged in as IdM administrator. The automount location exists. The example location is raleigh . Procedure On the IdM client, enter the ipa-client-automount command and specify the location. Use the -U option to run the script unattended: Stop the autofs service, clear the SSSD cache, and start the autofs service to load the new configuration settings: 6.6. Verifying that an IdM user can access NFS shares on an IdM client As an Identity Management (IdM) system administrator, you can test if an IdM user that is a member of a specific group can access NFS shares when logged in to a specific IdM client. In the example, the following scenario is tested: An IdM user named idm_user belonging to the developers group can read and write the contents of the files in the /devel/project directory automounted on idm-client.idm.example.com , an IdM client located in the raleigh automount location. Prerequisites You have set up an NFS server with Kerberos on an IdM host . You have configured automount locations, maps, and mount points in IdM in which you configured how IdM users can access the NFS share. You have used Ansible to add IdM users to the developers group that owns the NFS shares . You have configured automount on the IdM client . 
Procedure Verify that the IdM user can access the read-write directory: Connect to the IdM client as the IdM user: Obtain the ticket-granting ticket (TGT) for the IdM user: Optional: View the group membership of the IdM user: Navigate to the /devel/project directory: List the directory contents: Add a line to the file in the directory to test the write permission: Optional: View the updated contents of the file: The output confirms that idm_user can write into the file.
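As an additional optional check that autofs mounted the export on the expected mount point (assuming the example paths used throughout this chapter), you can inspect the mount table on the client after accessing the directory: findmnt /devel/project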
|
[
"dn: automountmapname=auto.master,cn=default,cn=automount,dc=idm,dc=example,dc=com objectClass: automountMap objectClass: top automountMapName: auto.master",
"kinit admin",
"ipa service-add nfs/nfs_server.idm.example.com",
"ipa-getkeytab -s idm_server.idm.example.com -p nfs/nfs_server.idm.example.com -k /etc/krb5.keytab",
"klist -k /etc/krb5.keytab Keytab name: FILE:/etc/krb5.keytab KVNO Principal ---- -------------------------------------------------------------------------- 1 nfs/[email protected] 1 nfs/[email protected] 1 nfs/[email protected] 1 nfs/[email protected] 7 host/[email protected] 7 host/[email protected] 7 host/[email protected] 7 host/[email protected]",
"ipa-client-automount Searching for IPA server IPA server: DNS discovery Location: default Continue to configure the system with these values? [no]: yes Configured /etc/idmapd.conf Restarting sssd, waiting for it to become available. Started autofs",
"/nfs/projects/ 192.0.2.0/24(rw, sec=krb5i )",
"/nfs/projects/ 192.0.2.0/24(rw, sec=krb5:krb5i:krb5p )",
"exportfs -r",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/automount/automount-location-present.yml automount-location-map-and-key-present.yml",
"--- - name: Automount location present example hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure automount location is present ipaautomountlocation: ipaadmin_password: \"{{ ipaadmin_password }}\" name: raleigh state: present",
"[...] vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: [...] - name: ensure map named auto.devel in location raleigh is created ipaautomountmap: ipaadmin_password: \"{{ ipaadmin_password }}\" name: auto.devel location: raleigh state: present",
"[...] vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: [...] - name: ensure automount key /devel/project is present ipaautomountkey: ipaadmin_password: \"{{ ipaadmin_password }}\" location: raleigh mapname: auto.devel key: /devel/project info: nfs-server.idm.example.com:/exports/project state: present",
"[...] vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: [...] - name: Ensure auto.devel is connected in auto.master: ipaautomountkey: ipaadmin_password: \"{{ ipaadmin_password }}\" location: raleigh mapname: auto.map key: /devel info: auto.devel state: present",
"ansible-playbook --vault-password-file=password_file -v -i inventory automount-location-map-and-key-present.yml",
"cd ~/ MyPlaybooks /",
"[...] vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: [...] - ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: developers user: - idm_user state: present",
"ansible-playbook --vault-password-file=password_file -v -i inventory automount-location-map-and-key-present.yml",
"chgrp developers /exports/project",
"ipa-client-automount --location raleigh -U",
"systemctl stop autofs ; sss_cache -E ; systemctl start autofs",
"ssh [email protected] Password:",
"kinit idm_user",
"ipa user-show idm_user User login: idm_user [...] Member of groups: developers, ipausers",
"cd /devel/project",
"ls rw_file",
"echo \"idm_user can write into the file\" > rw_file",
"cat rw_file this is a read-write file idm_user can write into the file"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/using_external_red_hat_utilities_with_identity_management/using-ansible-to-automount-nfs-shares-for-idm-users_using-external-red-hat-utilities-with-idm
|
Getting started with Red Hat Developer Hub
|
Getting started with Red Hat Developer Hub Red Hat Developer Hub 1.3 Red Hat Customer Content Services
| null |
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.3/html/getting_started_with_red_hat_developer_hub/index
|
5.146. libevent
|
5.146. libevent 5.146.1. RHBA-2012:0968 - libevent bug fix update Updated libevent packages that fix one bug are now available for Red Hat Enterprise Linux 6. The libevent API provides a mechanism to execute a callback function when a specific event occurs on a file descriptor or after a timeout has been reached. Bug Fix BZ# 658051 Prior to this update, several multilib files in the optional repositories could cause conflicts. As a consequence, these files could not use both a primary and a secondary architecture. This update modifies the underlying source code so that no more multilib file conflicts occur. All users of libevent or applications that require the libevent library are advised to upgrade to these updated packages, which fix this bug.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/libevent
|
2.4. Cluster Utilization
|
2.4. Cluster Utilization The Cluster Utilization section shows the cluster utilization for the CPU and memory in a heatmap. Figure 2.5. Cluster Utilization 2.4.1. CPU The heatmap of the CPU utilization for a specific cluster shows the average utilization of the CPU for the last 24 hours. Hovering over the heatmap displays the cluster name. Clicking on the heatmap navigates to Compute Hosts and displays the results of a search on a specific cluster sorted by CPU utilization. The formula used to calculate the usage of the CPU by the cluster is the average host CPU utilization in the cluster. This is calculated by using the average host CPU utilization for each host over the last 24 hours to find the total average usage of the CPU by the cluster. 2.4.2. Memory The heatmap of the memory utilization for a specific cluster shows the average utilization of the memory for the last 24 hours. Hovering over the heatmap displays the cluster name. Clicking on the heatmap navigates to Compute Hosts and displays the results of a search on a specific cluster sorted by memory usage. The formula used to calculate the memory usage by the cluster is the total utilization of the memory in the cluster in GB. This is calculated by using the average host memory utilization for each host over the last 24 hours to find the total average usage of memory by the cluster.
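Expressed as a simple formula (an informal restatement of the description above, not an additional definition): cluster CPU utilization = ( avg24h_cpu(host1) + ... + avg24h_cpu(hostN) ) / N, where avg24h_cpu(hosti) is the average CPU utilization of host i over the last 24 hours and N is the number of hosts in the cluster. The memory heatmap follows the same form, with per-host memory utilization expressed in GB.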
| null |
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/sect-cluster_utilization
|
Installing on IBM Cloud
|
Installing on IBM Cloud OpenShift Container Platform 4.14 Installing OpenShift Container Platform IBM Cloud Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_ibm_cloud/index
|
function::cpu_clock_ms
|
function::cpu_clock_ms Name function::cpu_clock_ms - Number of milliseconds on the given cpu's clock Synopsis Arguments cpu Which processor's clock to read Description This function returns the number of milliseconds on the given cpu's clock. This value is always monotonic when compared on the same cpu, but may have some drift between cpus (within about a jiffy).
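As a usage sketch only (the probe point, interval, and output format are arbitrary choices for illustration, not part of the tapset definition), the function can be called from a timer probe to sample the clock of the processor the probe fires on; stop the script with Ctrl+C: stap -e 'probe timer.ms(1000) { printf("cpu %d: %d ms\n", cpu(), cpu_clock_ms(cpu())) }'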
|
[
"cpu_clock_ms:long(cpu:long)"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-cpu-clock-ms
|
Chapter 6. Known issues
|
Chapter 6. Known issues This section describes the known issues in Red Hat OpenShift Data Foundation 4.17. 6.1. Disaster recovery ceph df reports an invalid MAX AVAIL value when the cluster is in stretch mode When a crush rule for a Red Hat Ceph Storage cluster has multiple "take" steps, the ceph df report shows the wrong maximum available size for the map. The issue will be fixed in an upcoming release. ( BZ#2100920 ) Both the DRPCs protect all the persistent volume claims created on the same namespace The namespaces that host multiple disaster recovery (DR) protected workloads, protect all the persistent volume claims (PVCs) within the namespace for each DRPlacementControl resource in the same namespace on the hub cluster that does not specify and isolate PVCs based on the workload using its spec.pvcSelector field. This results in PVCs that match the DRPlacementControl spec.pvcSelector across multiple workloads. Or, if the selector is missing across all workloads, replication management to potentially manage each PVC multiple times and cause data corruption or invalid operations based on individual DRPlacementControl actions. Workaround: Label PVCs that belong to a workload uniquely, and use the selected label as the DRPlacementControl spec.pvcSelector to disambiguate which DRPlacementControl protects and manages which subset of PVCs within a namespace. It is not possible to specify the spec.pvcSelector field for the DRPlacementControl using the user interface, hence the DRPlacementControl for such applications must be deleted and created using the command line. Result: PVCs are no longer managed by multiple DRPlacementControl resources and do not cause any operation and data inconsistencies. ( BZ#2128860 ) MongoDB pod is in CrashLoopBackoff because of permission errors reading data in cephrbd volume The OpenShift projects across different managed clusters have different security context constraints (SCC), which specifically differ in the specified UID range and/or FSGroups . This leads to certain workload pods and containers failing to start post failover or relocate operations within these projects, due to filesystem access errors in their logs. Workaround: Ensure workload projects are created on all managed clusters with the same project-level SCC labels, allowing them to use the same filesystem context when failed over or relocated. Pods will no longer fail post-DR actions on filesystem-related access errors. ( BZ#2081855 ) Disaster recovery workloads remain stuck when deleted When deleting a workload from a cluster, the corresponding pods might not terminate with events such as FailedKillPod . This might cause delay or failure in garbage collecting dependent DR resources such as the PVC , VolumeReplication , and VolumeReplicationGroup . It would also prevent a future deployment of the same workload to the cluster as the stale resources are not yet garbage collected. Workaround: Reboot the worker node on which the pod is currently running and stuck in a terminating state. This results in successful pod termination and subsequently related DR API resources are also garbage collected. ( BZ#2159791 ) Regional DR CephFS based application failover show warning about subscription After the application is failed over or relocated, the hub subscriptions show up errors stating, "Some resources failed to deploy. Use View status YAML link to view the details." 
This is because the application persistent volume claims (PVCs) that use CephFS as the backing storage provisioner, deployed using Red Hat Advanced Cluster Management for Kubernetes (RHACM) subscriptions, and are DR protected are owned by the respective DR controllers. Workaround: There are no workarounds to rectify the errors in the subscription status. However, the subscription resources that failed to deploy can be checked to make sure they are PVCs. This ensures that the other resources do not have problems. If the only resources in the subscription that fail to deploy are the ones that are DR protected, the error can be ignored. ( BZ-2264445 ) Disabled PeerReady flag prevents changing the action to Failover The DR controller executes full reconciliation as and when needed. When a cluster becomes inaccessible, the DR controller performs a sanity check. If the workload is already relocated, this sanity check causes the PeerReady flag associated with the workload to be disabled, and the sanity check does not complete due to the cluster being offline. As a result, the disabled PeerReady flag prevents you from changing the action to Failover. Workaround: Use the command-line interface to change the DR action to Failover despite the disabled PeerReady flag. ( BZ-2264765 ) Ceph becomes inaccessible and IO is paused when connection is lost between the two data centers in stretch cluster When two data centers lose connection with each other but are still connected to the Arbiter node, there is a flaw in the election logic that causes an infinite election between the monitors. As a result, the monitors are unable to elect a leader and the Ceph cluster becomes unavailable. Also, IO is paused during the connection loss. Workaround: Shutdown the monitors of any one of the data zone by bringing down the zone nodes. Additionally, you can reset the connection scores of surviving mon pods. As a result, monitors can form a quorum and Ceph becomes available again and IOs resume. ( Partner BZ#2265992 ) RBD applications fail to Relocate when using stale Ceph pool IDs from replacement cluster For the applications created before the new peer cluster is created, it is not possible to mount the RBD PVC because when a peer cluster is replaced, it is not possible to update the CephBlockPoolID's mapping in the CSI configmap. Workaround: Update the rook-ceph-csi-mapping-config configmap with cephBlockPoolID's mapping on the peer cluster that is not replaced. This enables mounting the RBD PVC for the application. ( BZ#2267731 ) Information about lastGroupSyncTime is lost after hub recovery for the workloads which are primary on the unavailable managed cluster Applications that are previously failed over to a managed cluster do not report a lastGroupSyncTime , thereby causing the trigger of the alert VolumeSynchronizationDelay . This is because when the ACM hub and a managed cluster that are part of the DRPolicy are unavailable, a new ACM hub cluster is reconstructed from the backup. Workaround: If the managed cluster to which the workload was failed over is unavailable, you can still failover to a surviving managed cluster. ( BZ#2275320 ) MCO operator reconciles the veleroNamespaceSecretKeyRef and CACertificates fields When the OpenShift Data Foundation operator is upgraded, the CACertificates and veleroNamespaceSecretKeyRef fields under s3StoreProfiles in the Ramen config are lost. 
Workaround: If the Ramen config has custom values for the CACertificates and veleroNamespaceSecretKeyRef fields, then set those custom values again after the upgrade is performed. ( BZ#2277941 ) Instability of the token-exchange-agent pod after upgrade The token-exchange-agent pod on the managed cluster is unstable as the old deployment resources are not cleaned up properly. This might cause the application failover action to fail. Workaround: Refer to the knowledgebase article, "token-exchange-agent" pod on managed cluster is unstable after upgrade to ODF 4.17.0 . Result: If the workaround is followed, "token-exchange-agent" pod is stabilized and failover action works as expected. ( BZ#2293611 ) virtualmachines.kubevirt.io resource fails restore due to mac allocation failure on relocate When a virtual machine is relocated to the preferred cluster, it might fail to complete relocation due to unavailability of the MAC address. This happens if the virtual machine is not fully cleaned up on the preferred cluster when it is failed over to the failover cluster. Ensure that the workload is completely removed from the preferred cluster before relocating the workload. ( BZ#2295404 ) Relocation of CephFS gets stuck in WaitForReadiness There is a scenario where the DRPC progression gets stuck in WaitForReadiness. If it remains in this state for an extended period, it is possible that a known issue has occurred, preventing Ramen from updating the PlacementDecision with the new Primary. As a result, the relocation process does not complete, leaving the workload undeployed on the new primary cluster. This can cause delays in recovery until the user intervenes. Workaround: Manually update the PlacementDecision to point to the new Primary. For workloads using PlacementRule: Edit the PlacementRule: For example: Add the following to the placementrule status: For workloads using Placement: Edit the PlacementDecision: For example: Add the following to the placementdecision status: As a result, the PlacementDecision is updated and the workload is deployed on the Primary cluster. ( BZ#2319334 ) Failover process fails when the ReplicationDestination resource has not been created yet If the user initiates a failover before the LastGroupSyncTime is updated, the failover process might fail. This failure is accompanied by an error message indicating that the ReplicationDestination does not exist. Workaround: Edit the ManifestWork for the VRG on the hub cluster. Delete the following section from the manifest: Save the changes. Applying this workaround correctly ensures that the VRG skips attempting to restore the PVC using the ReplicationDestination resource. If the PVC already exists, the application uses it as is. If the PVC does not exist, a new PVC is created. ( BZ#2283038 ) 6.2. Multicloud Object Gateway NooBaa Core cannot assume role with web identity due to a missing entry in the role's trust policy For OpenShift Data Foundation deployments on AWS using AWS Security Token Service (STS), you need to add another entry in the trust policy for the noobaa-core account. This is because with the release of OpenShift Data Foundation 4.17, the service account has changed from noobaa to noobaa-core . For instructions to add an entry in the trust policy for the noobaa-core account, see the final bullet in the prerequisites section of Updating Red Hat OpenShift Data Foundation 4.16 to 4.17.
( BZ#2322124 ) Multicloud Object Gateway instance fails to finish initialization Due to a timing race between the pod code starting and OpenShift loading the Certificate Authority (CA) bundle into the pod, the pod is unable to communicate with the cloud storage service. As a result, the default backing store cannot be created. Workaround: Restart the Multicloud Object Gateway (MCG) operator pod: With this workaround, the backing store is reconciled and works. ( BZ#2271580 ) Upgrade to OpenShift Data Foundation 4.17 results in noobaa-db pod CrashLoopBackOff state Upgrading to OpenShift Data Foundation 4.17 from OpenShift Data Foundation 4.15 fails when the PostgreSQL upgrade fails in Multicloud Object Gateway, which always starts with PostgreSQL version 15. If there is a PostgreSQL upgrade failure, the NooBaa-db-pg-0 pod fails to start. Workaround: Refer to the knowledgebase article Recover NooBaa's PostgreSQL upgrade failure in OpenShift Data Foundation 4.17 . ( BZ#2298152 ) 6.3. Ceph Poor performance of the stretch clusters on CephFS Workloads with many small metadata operations might exhibit poor performance because of the arbitrary placement of the metadata server (MDS) on multi-site Data Foundation clusters. ( BZ#1982116 ) SELinux relabelling issue with a very high number of files When attaching volumes to pods in Red Hat OpenShift Container Platform, the pods sometimes do not start or take an excessive amount of time to start. This behavior is generic and is tied to how SELinux relabelling is handled by the Kubelet. This issue is observed with any filesystem-based volumes having very high file counts. In OpenShift Data Foundation, the issue is seen when using CephFS based volumes with a very high number of files. There are different ways to work around this issue. Depending on your business needs, you can choose one of the workarounds from the knowledgebase solution https://access.redhat.com/solutions/6221251 . ( Jira#3327 ) 6.4. CSI Driver Automatic flattening of snapshots is not working When there is a single common parent RBD PVC, if volume snapshot, restore, and delete snapshot are performed in a sequence more than 450 times, it is no longer possible to take a volume snapshot or clone of the common parent RBD PVC. To work around this issue, instead of performing volume snapshot, restore, and delete snapshot in a sequence, you can use PVC to PVC clone to completely avoid this issue. If you hit this issue, contact customer support to perform manual flattening of the final restored PVCs to continue to take volume snapshots or clones of the common parent PVC again. ( BZ#2232163 ) 6.5. OpenShift Data Foundation console Optimize DRPC creation when multiple workloads are deployed in a single namespace When multiple applications refer to the same placement, enabling DR for any of the applications enables it for all the applications that refer to that placement. If the applications are created after the creation of the DRPC, the PVC label selector in the DRPC might not match the labels of the newer applications. Workaround: In such cases, disabling DR and enabling it again with the right label selector is recommended. ( BZ#2294704 ) 6.6. OCS operator Incorrect unit for the ceph_mds_mem_rss metric in the graph When you search for the ceph_mds_mem_rss metric in the OpenShift user interface (UI), the graphs show the y-axis in Megabytes (MB), whereas Ceph returns the ceph_mds_mem_rss metric in Kilobytes (KB). This can cause confusion while comparing the results for the MDSCacheUsageHigh alert.
Workaround: Use ceph_mds_mem_rss * 1000 while searching this metric in the OpenShift UI to see the y-axis of the graph in GB. This makes it easier to compare the results shown in the MDSCacheUsageHigh alert. ( BZ#2261881 ) Increasing MDS memory is erasing CPU values when pods are in CLBO state When the metadata server (MDS) memory is increased while the MDS pods are in a crash loop back off (CLBO) state, CPU request or limit for the MDS pods is removed. As a result, the CPU request or the limit that is set for the MDS changes. Workaround: Run the oc patch command to adjust the CPU limits. For example: ( BZ#2265563 )
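To make the spec.pvcSelector workaround for BZ#2128860 above more concrete, the following is a minimal, hypothetical DRPlacementControl sketch. The resource name, namespace, DRPolicy, placement reference, cluster name, and the app: busybox-sample label are illustrative placeholders only; verify the field names against the DRPlacementControl CRD installed in your cluster.
apiVersion: ramendr.openshift.io/v1alpha1
kind: DRPlacementControl
metadata:
  name: busybox-sample-drpc          # hypothetical name; one DRPC per workload
  namespace: busybox-sample          # namespace shared by several DR-protected workloads
spec:
  drPolicyRef:
    name: my-dr-policy               # hypothetical DRPolicy
  placementRef:
    kind: PlacementRule
    name: busybox-sample-placement   # hypothetical placement for this workload only
  preferredCluster: primary-managed-cluster
  pvcSelector:
    matchLabels:
      app: busybox-sample            # unique label applied to this workload's PVCs
Each workload in the namespace gets its own DRPlacementControl with a selector that matches only that workload's PVC labels, which prevents two DRPlacementControl resources from managing the same PVC.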
|
[
"oc edit placementrule --subresource=status -n [namespace] [name of the placementrule]7",
"oc edit placementrule --subresource=status -n busybox-workloads-cephfs-2 busybox-placement",
"status: decisions: - clusterName: [primary cluster name] reason: [primary cluster name]",
"oc edit placementdecision --subresource=status -n [namespace] [name of the placementdecision]",
"oc get placementdecision --subresource=status -n openshift-gitops busybox-3-placement-cephfs-decision-1",
"status: decisions: - clusterName: [primary cluster name] reason: [primary cluster name]",
"/spec/workload/manifests/0/spec/volsync",
"oc delete pod noobaa-operator-<ID>",
"oc patch -n openshift-storage storagecluster ocs-storagecluster --type merge --patch '{\"spec\": {\"resources\": {\"mds\": {\"limits\": {\"cpu\": \"3\"}, \"requests\": {\"cpu\": \"3\"}}}}}'"
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/4.17_release_notes/known-issues
|
Chapter 2. Composable services and custom roles
|
Chapter 2. Composable services and custom roles The overcloud usually consists of nodes in predefined roles such as Controller nodes, Compute nodes, and different storage node types. Each of these default roles contains a set of services defined in the core heat template collection on the director node. However, you can also create custom roles that contain specific sets of services. You can use this flexibility to create different combinations of services on different roles. This chapter explores the architecture of custom roles, composable services, and methods for using them. 2.1. Supported role architecture The following architectures are available when you use custom roles and composable services: Default architecture Uses the default roles_data files. All controller services are contained within one Controller role. Custom composable services Create your own roles and use them to generate a custom roles_data file. Note that only a limited number of composable service combinations have been tested and verified and Red Hat cannot support all composable service combinations. 2.2. Examining the roles_data file The roles_data file contains a YAML-formatted list of the roles that director deploys onto nodes. Each role contains definitions of all of the services that comprise the role. Use the following example snippet to understand the roles_data syntax: The core heat template collection contains a default roles_data file located at /usr/share/openstack-tripleo-heat-templates/roles_data.yaml . The default file contains definitions of the following role types: Controller Compute BlockStorage ObjectStorage CephStorage . The openstack overcloud deploy command includes the default roles_data.yaml file during deployment. However, you can use the -r argument to override this file with a custom roles_data file: 2.3. Creating a roles_data file Although you can create a custom roles_data file manually, you can also generate the file automatically using individual role templates. Director provides the openstack overcloud role generate command to join multiple predefined roles and automatically generate a custom roles_data file. Procedure List the default role templates: View the role definition: Generate a custom roles_data.yaml file that contains the Controller , Compute , and Networker roles: Replace <custom_role_file> with the name and location of the new role file to generate, for example, /home/stack/templates/roles_data.yaml . Note The Controller and Networker roles contain the same networking agents. This means that the networking services scale from the Controller role to the Networker role and the overcloud balances the load for networking services between the Controller and Networker nodes. To make this Networker role standalone, you can create your own custom Controller role, as well as any other role that you require. This allows you to generate a roles_data.yaml file from your own custom roles. Copy the roles directory from the core heat template collection to the home directory of the stack user: Add or modify the custom role files in this directory. Use the --roles-path option with any of the role sub-commands to use this directory as the source for your custom roles: This command generates a single my_roles_data.yaml file from the individual roles in the ~/roles directory. Note The default roles collection also contains the ControllerOpenstack role, which does not include services for Networker , Messaging , and Database roles. 
You can use the ControllerOpenstack in combination with the standalone Networker , Messaging , and Database roles. 2.4. Supported custom roles The following table contains information about the available custom roles. You can find custom role templates in the /usr/share/openstack-tripleo-heat-templates/roles directory. Role Description File BlockStorage OpenStack Block Storage (cinder) node. BlockStorage.yaml CephAll Full standalone Ceph Storage node. Includes OSD, MON, Object Gateway (RGW), Object Operations (MDS), Manager (MGR), and RBD Mirroring. CephAll.yaml CephFile Standalone scale-out Ceph Storage file role. Includes OSD and Object Operations (MDS). CephFile.yaml CephObject Standalone scale-out Ceph Storage object role. Includes OSD and Object Gateway (RGW). CephObject.yaml CephStorage Ceph Storage OSD node role. CephStorage.yaml ComputeAlt Alternate Compute node role. ComputeAlt.yaml ComputeDVR DVR enabled Compute node role. ComputeDVR.yaml ComputeHCI Compute node with hyper-converged infrastructure. Includes Compute and Ceph OSD services. ComputeHCI.yaml ComputeInstanceHA Compute Instance HA node role. Use in conjunction with the environments/compute-instanceha.yaml` environment file. ComputeInstanceHA.yaml ComputeLiquidio Compute node with Cavium Liquidio Smart NIC. ComputeLiquidio.yaml ComputeOvsDpdkRT Compute OVS DPDK RealTime role. ComputeOvsDpdkRT.yaml ComputeOvsDpdk Compute OVS DPDK role. ComputeOvsDpdk.yaml ComputeRealTime Compute role optimized for real-time behaviour. When using this role, it is mandatory that an overcloud-realtime-compute image is available and the role specific parameters IsolCpusList , NovaComputeCpuDedicatedSet and NovaComputeCpuSharedSet are set according to the hardware of the real-time compute nodes. ComputeRealTime.yaml ComputeSriovRT Compute SR-IOV RealTime role. ComputeSriovRT.yaml ComputeSriov Compute SR-IOV role. ComputeSriov.yaml Compute Standard Compute node role. Compute.yaml ControllerAllNovaStandalone Controller role that does not contain the database, messaging, networking, and OpenStack Compute (nova) control components. Use in combination with the Database , Messaging , Networker , and Novacontrol roles. ControllerAllNovaStandalone.yaml ControllerNoCeph Controller role with core Controller services loaded but no Ceph Storage (MON) components. This role handles database, messaging, and network functions but not any Ceph Storage functions. ControllerNoCeph.yaml ControllerNovaStandalone Controller role that does not contain the OpenStack Compute (nova) control component. Use in combination with the Novacontrol role. ControllerNovaStandalone.yaml ControllerOpenstack Controller role that does not contain the database, messaging, and networking components. Use in combination with the Database , Messaging , and Networker roles. ControllerOpenstack.yaml ControllerStorageNfs Controller role with all core services loaded and uses Ceph NFS. This roles handles database, messaging, and network functions. ControllerStorageNfs.yaml Controller Controller role with all core services loaded. This roles handles database, messaging, and network functions. Controller.yaml ControllerSriov (ML2/OVN) Same as the normal Controller role but with the OVN Metadata agent deployed. ControllerSriov.yaml Database Standalone database role. Database managed as a Galera cluster using Pacemaker. Database.yaml HciCephAll Compute node with hyper-converged infrastructure and all Ceph Storage services. 
Includes OSD, MON, Object Gateway (RGW), Object Operations (MDS), Manager (MGR), and RBD Mirroring. HciCephAll.yaml HciCephFile Compute node with hyper-converged infrastructure and Ceph Storage file services. Includes OSD and Object Operations (MDS). HciCephFile.yaml HciCephMon Compute node with hyper-converged infrastructure and Ceph Storage block services. Includes OSD, MON, and Manager. HciCephMon.yaml HciCephObject Compute node with hyper-converged infrastructure and Ceph Storage object services. Includes OSD and Object Gateway (RGW). HciCephObject.yaml IronicConductor Ironic Conductor node role. IronicConductor.yaml Messaging Standalone messaging role. RabbitMQ managed with Pacemaker. Messaging.yaml Networker Standalone networking role. Runs OpenStack networking (neutron) agents on their own. If your deployment uses the ML2/OVN mechanism driver, see additional steps in Deploying a Custom Role with ML2/OVN . Networker.yaml NetworkerSriov Same as the normal Networker role but with the OVN Metadata agent deployed. See additional steps in Deploying a Custom Role with ML2/OVN . NetworkerSriov.yaml Novacontrol Standalone nova-control role to run OpenStack Compute (nova) control agents on their own. Novacontrol.yaml ObjectStorage Swift Object Storage node role. ObjectStorage.yaml Telemetry Telemetry role with all the metrics and alarming services. Telemetry.yaml 2.5. Examining role parameters Each role contains the following parameters: name (Mandatory) The name of the role, which is a plain text name with no spaces or special characters. Check that the chosen name does not cause conflicts with other resources. For example, use Networker as a name instead of Network . description (Optional) A plain text description for the role. tags (Optional) A YAML list of tags that define role properties. Use this parameter to define the primary role with both the controller and primary tags together: Important If you do not tag the primary role, the first role that you define becomes the primary role. Ensure that this role is the Controller role. networks A YAML list or dictionary of networks that you want to configure on the role. If you use a YAML list, list each composable network: If you use a dictionary, map each network to a specific subnet in your composable networks. Default networks include External , InternalApi , Storage , StorageMgmt , Tenant , and Management . CountDefault (Optional) Defines the default number of nodes that you want to deploy for this role. HostnameFormatDefault (Optional) Defines the default hostname format for the role. The default naming convention uses the following format: For example, the default Controller nodes are named: disable_constraints (Optional) Defines whether to disable OpenStack Compute (nova) and OpenStack Image Storage (glance) constraints when deploying with director. Use this parameter when you deploy an overcloud with pre-provisioned nodes. For more information, see Configuring a basic overcloud with pre-provisioned nodes in the Director installation and usage guide. update_serial (Optional) Defines how many nodes to update simultaneously during the OpenStack update options. In the default roles_data.yaml file: The default is 1 for Controller, Object Storage, and Ceph Storage nodes. The default is 25 for Compute and Block Storage nodes. If you omit this parameter from a custom role, the default is 1 . ServicesDefault (Optional) Defines the default list of services to include on the node. 
For more information, see Section 2.10, "Examining composable service architecture" . You can use these parameters to create new roles and also define which services to include in your roles. The openstack overcloud deploy command integrates the parameters from the roles_data file into some of the Jinja2-based templates. For example, at certain points, the overcloud.j2.yaml heat template iterates over the list of roles from roles_data.yaml and creates parameters and resources specific to each respective role. For example, the following snippet contains the resource definition for each role in the overcloud.j2.yaml heat template: This snippet shows how the Jinja2-based template incorporates the {{role.name}} variable to define the name of each role as an OS::Heat::ResourceGroup resource. This in turn uses each name parameter from the roles_data file to name each respective OS::Heat::ResourceGroup resource. 2.6. Creating a new role You can use the composable service architecture to assign roles to bare-metal nodes according to the requirements of your deployment. For example, you might want to create a new Horizon role to host only the OpenStack Dashboard ( horizon ). Procedure Log in to the undercloud as the stack user. Source the stackrc file: Copy the roles directory from the core heat template collection to the home directory of the stack user: Create a new file named Horizon.yaml in home/stack/templates/roles . Add the following configuration to Horizon.yaml to create a new Horizon role that contains the base and core OpenStack Dashboard services: 1 Set the name parameter to the name of the custom role. Custom role names have a maximum length of 47 characters. 2 Set the CountDefault parameter to 1 so that a default overcloud always includes the Horizon node. Optional: If you want to scale the services in an existing overcloud, retain the existing services on the Controller role. If you want to create a new overcloud and you want the OpenStack Dashboard to remain on the standalone role, remove the OpenStack Dashboard components from the Controller role definition: Generate a new roles data file named roles_data_horizon.yaml that includes the Controller , Compute , and Horizon roles: Optional: Edit the overcloud-baremetal-deploy.yaml node definition file to configure the placement of the Horizon node: 2.7. Guidelines and limitations Note the following guidelines and limitations for the composable role architecture. For services not managed by Pacemaker: You can assign services to standalone custom roles. You can create additional custom roles after the initial deployment and deploy them to scale existing services. For services managed by Pacemaker: You can assign Pacemaker-managed services to standalone custom roles. Pacemaker has a 16 node limit. If you assign the Pacemaker service ( OS::TripleO::Services::Pacemaker ) to 16 nodes, subsequent nodes must use the Pacemaker Remote service ( OS::TripleO::Services::PacemakerRemote ) instead. You cannot have the Pacemaker service and Pacemaker Remote service on the same role. Do not include the Pacemaker service ( OS::TripleO::Services::Pacemaker ) on roles that do not contain Pacemaker-managed services. You cannot scale up or scale down a custom role for a controller that contains OS::TripleO::Services::Pacemaker or OS::TripleO::Services::PacemakerRemote services. General limitations: You cannot change custom roles and composable services during a major version upgrade. 
You cannot modify the list of services for any role after deploying an overcloud. Modifying the service lists after Overcloud deployment can cause deployment errors and leave orphaned services on nodes. 2.8. Containerized service architecture Director installs the core OpenStack Platform services as containers on the overcloud. The templates for the containerized services are located in the /usr/share/openstack-tripleo-heat-templates/deployment/ . You must enable the OS::TripleO::Services::Podman service in the role for all nodes that use containerized services. When you create a roles_data.yaml file for your custom roles configuration, include the OS::TripleO::Services::Podman service along with the base composable services. For example, the IronicConductor role uses the following role definition: 2.9. Containerized service parameters Each containerized service template contains an outputs section that defines a data set passed to the OpenStack Orchestration (heat) service. In addition to the standard composable service parameters, the template contains a set of parameters specific to the container configuration. puppet_config Data to pass to Puppet when configuring the service. In the initial overcloud deployment steps, director creates a set of containers used to configure the service before the actual containerized service runs. This parameter includes the following sub-parameters: config_volume - The mounted volume that stores the configuration. puppet_tags - Tags to pass to Puppet during configuration. OpenStack uses these tags to restrict the Puppet run to the configuration resource of a particular service. For example, the OpenStack Identity (keystone) containerized service uses the keystone_config tag to ensure that all require only the keystone_config Puppet resource run on the configuration container. step_config - The configuration data passed to Puppet. This is usually inherited from the referenced composable service. config_image - The container image used to configure the service. kolla_config A set of container-specific data that defines configuration file locations, directory permissions, and the command to run on the container to launch the service. docker_config Tasks to run on the configuration container for the service. All tasks are grouped into the following steps to help director perform a staged deployment: Step 1 - Load balancer configuration Step 2 - Core services (Database, Redis) Step 3 - Initial configuration of OpenStack Platform service Step 4 - General OpenStack Platform services configuration Step 5 - Service activation host_prep_tasks Preparation tasks for the bare metal node to accommodate the containerized service. 2.10. Examining composable service architecture The core heat template collection contains two sets of composable service templates: deployment contains the templates for key OpenStack services. puppet/services contains legacy templates for configuring composable services. In some cases, the composable services use templates from this directory for compatibility. In most cases, the composable services use the templates in the deployment directory. Each template contains a description that identifies its purpose. For example, the deployment/time/ntp-baremetal-puppet.yaml service template contains the following description: These service templates are registered as resources specific to a Red Hat OpenStack Platform deployment. 
This means that you can call each resource using a unique heat resource namespace defined in the overcloud-resource-registry-puppet.j2.yaml file. All services use the OS::TripleO::Services namespace for their resource type. Some resources use the base composable service templates directly: However, core services require containers and use the containerized service templates. For example, the keystone containerized service uses the following resource: These containerized templates usually reference other templates to include dependencies. For example, the deployment/keystone/keystone-container-puppet.yaml template stores the output of the base template in the ContainersCommon resource: The containerized template can then incorporate functions and data from the containers-common.yaml template. The overcloud.j2.yaml heat template includes a section of Jinja2-based code to define a service list for each custom role in the roles_data.yaml file: For the default roles, this creates the following service list parameters: ControllerServices , ComputeServices , BlockStorageServices , ObjectStorageServices , and CephStorageServices . You define the default services for each custom role in the roles_data.yaml file. For example, the default Controller role contains the following content: These services are then defined as the default list for the ControllerServices parameter. Note You can also use an environment file to override the default list for the service parameters. For example, you can define ControllerServices as a parameter_default in an environment file to override the services list from the roles_data.yaml file. 2.11. Adding and removing services from roles The basic method of adding or removing services involves creating a copy of the default service list for a node role and then adding or removing services. For example, you might want to remove OpenStack Orchestration (heat) from the Controller nodes. Procedure Create a custom copy of the default roles directory: Edit the ~/roles/Controller.yaml file and modify the service list for the ServicesDefault parameter. Scroll to the OpenStack Orchestration services and remove them: Generate the new roles_data file: Include this new roles_data file when you run the openstack overcloud deploy command: This command deploys an overcloud without OpenStack Orchestration services installed on the Controller nodes. Note You can also disable services in the roles_data file using a custom environment file. Redirect the services to disable to the OS::Heat::None resource. For example: 2.12. Enabling disabled services Some services are disabled by default. These services are registered as null operations ( OS::Heat::None ) in the overcloud-resource-registry-puppet.j2.yaml file. For example, the Block Storage backup service ( cinder-backup ) is disabled: To enable this service, include an environment file that links the resource to its respective heat templates in the puppet/services directory. Some services have predefined environment files in the environments directory. For example, the Block Storage backup service uses the environments/cinder-backup.yaml file, which contains the following entry: Procedure Add an entry in an environment file that links the CinderBackup service to the heat template that contains the cinder-backup configuration: This entry overrides the default null operation resource and enables the service. Include this environment file when you run the openstack overcloud deploy command:
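Separately from generating a custom roles_data file, Section 2.10 notes that the per-role service list parameters, such as ControllerServices , can be overridden from an environment file. The following is a minimal sketch under that assumption; the file name is hypothetical and the list is deliberately truncated. In practice, copy the full service list for the role from roles_data.yaml and remove only the entries you want to disable.
# controller-services-override.yaml (hypothetical environment file)
parameter_defaults:
  ControllerServices:
    - OS::TripleO::Services::CACerts
    - OS::TripleO::Services::Keystone
    - OS::TripleO::Services::GlanceApi
    - OS::TripleO::Services::MySQL
    - OS::TripleO::Services::HAproxy
    # ...remaining services copied from roles_data.yaml, minus the ones to disable
You would then pass this file with -e to the openstack overcloud deploy command, in the same way as the other environment files shown in this chapter.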
|
[
"- name: Controller description: | Controller role that has all the controller services loaded and handles Database, Messaging and Network functions. ServicesDefault: - OS::TripleO::Services::AuditD - OS::TripleO::Services::CACerts - OS::TripleO::Services::CephClient - name: Compute description: | Basic Compute Node role ServicesDefault: - OS::TripleO::Services::AuditD - OS::TripleO::Services::CACerts - OS::TripleO::Services::CephClient",
"openstack overcloud deploy --templates -r ~/templates/roles_data-custom.yaml",
"openstack overcloud role list BlockStorage CephStorage Compute ComputeHCI ComputeOvsDpdk Controller",
"openstack overcloud role show Compute",
"openstack overcloud roles generate -o <custom_role_file> Controller Compute Networker",
"cp -r /usr/share/openstack-tripleo-heat-templates/roles/. /home/stack/templates/roles/",
"openstack overcloud role generate -o my_roles_data.yaml --roles-path /home/stack/templates/roles Controller Compute Networker",
"- name: Controller tags: - primary - controller",
"networks: - External - InternalApi - Storage - StorageMgmt - Tenant",
"networks: External: subnet: external_subnet InternalApi: subnet: internal_api_subnet Storage: subnet: storage_subnet StorageMgmt: subnet: storage_mgmt_subnet Tenant: subnet: tenant_subnet",
"[STACK NAME]-[ROLE NAME]-[NODE ID]",
"overcloud-controller-0 overcloud-controller-1 overcloud-controller-2",
"{{role.name}}: type: OS::Heat::ResourceGroup depends_on: Networks properties: count: {get_param: {{role.name}}Count} removal_policies: {get_param: {{role.name}}RemovalPolicies} resource_def: type: OS::TripleO::{{role.name}} properties: CloudDomain: {get_param: CloudDomain} ServiceNetMap: {get_attr: [ServiceNetMap, service_net_map]} EndpointMap: {get_attr: [EndpointMap, endpoint_map]}",
"[stack@director ~]USD source ~/stackrc",
"cp -r /usr/share/openstack-tripleo-heat-templates/roles/. /home/stack/templates/roles/",
"- name: Horizon 1 CountDefault: 1 2 HostnameFormatDefault: '%stackname%-horizon-%index%' ServicesDefault: - OS::TripleO::Services::CACerts - OS::TripleO::Services::Kernel - OS::TripleO::Services::Ntp - OS::TripleO::Services::Snmp - OS::TripleO::Services::Sshd - OS::TripleO::Services::Timezone - OS::TripleO::Services::TripleoPackages - OS::TripleO::Services::TripleoFirewall - OS::TripleO::Services::SensuClient - OS::TripleO::Services::FluentdClient - OS::TripleO::Services::AuditD - OS::TripleO::Services::Collectd - OS::TripleO::Services::MySQLClient - OS::TripleO::Services::Apache - OS::TripleO::Services::Horizon",
"- name: Controller CountDefault: 1 ServicesDefault: - OS::TripleO::Services::GnocchiMetricd - OS::TripleO::Services::GnocchiStatsd - OS::TripleO::Services::HAproxy - OS::TripleO::Services::HeatApi - OS::TripleO::Services::HeatApiCfn - OS::TripleO::Services::HeatApiCloudwatch - OS::TripleO::Services::HeatEngine # - OS::TripleO::Services::Horizon # Remove this service - OS::TripleO::Services::IronicApi - OS::TripleO::Services::IronicConductor - OS::TripleO::Services::Iscsid - OS::TripleO::Services::Keepalived",
"(undercloud)USD openstack overcloud roles generate -o /home/stack/templates/roles_data_horizon.yaml --roles-path /home/stack/templates/roles Controller Compute Horizon",
"- name: Controller count: 3 instances: - hostname: overcloud-controller-0 name: node00 - name: Compute count: 3 instances: - hostname: overcloud-novacompute-0 name: node04 - name: Horizon count: 1 instances: - hostname: overcloud-horizon-0 name: node07",
"- name: IronicConductor description: | Ironic Conductor node role networks: InternalApi: subnet: internal_api_subnet Storage: subnet: storage_subnet HostnameFormatDefault: '%stackname%-ironic-%index%' ServicesDefault: - OS::TripleO::Services::Aide - OS::TripleO::Services::AuditD - OS::TripleO::Services::BootParams - OS::TripleO::Services::CACerts - OS::TripleO::Services::CertmongerUser - OS::TripleO::Services::Collectd - OS::TripleO::Services::Docker - OS::TripleO::Services::Fluentd - OS::TripleO::Services::IpaClient - OS::TripleO::Services::Ipsec - OS::TripleO::Services::IronicConductor - OS::TripleO::Services::IronicPxe - OS::TripleO::Services::Kernel - OS::TripleO::Services::LoginDefs - OS::TripleO::Services::MetricsQdr - OS::TripleO::Services::MySQLClient - OS::TripleO::Services::ContainersLogrotateCrond - OS::TripleO::Services::Podman - OS::TripleO::Services::Rhsm - OS::TripleO::Services::SensuClient - OS::TripleO::Services::Snmp - OS::TripleO::Services::Timesync - OS::TripleO::Services::Timezone - OS::TripleO::Services::TripleoFirewall - OS::TripleO::Services::TripleoPackages - OS::TripleO::Services::Tuned",
"description: > NTP service deployment using puppet, this YAML file creates the interface between the HOT template and the puppet manifest that actually installs and configure NTP.",
"resource_registry: OS::TripleO::Services::Ntp: deployment/time/ntp-baremetal-puppet.yaml",
"resource_registry: OS::TripleO::Services::Keystone: deployment/keystone/keystone-container-puppet.yaml",
"resources: ContainersCommon: type: ../containers-common.yaml",
"{{role.name}}Services: description: A list of service resources (configured in the heat resource_registry) which represent nested stacks for each service that should get installed on the {{role.name}} role. type: comma_delimited_list default: {{role.ServicesDefault|default([])}}",
"- name: Controller CountDefault: 1 ServicesDefault: - OS::TripleO::Services::CACerts - OS::TripleO::Services::CephMon - OS::TripleO::Services::CephExternal - OS::TripleO::Services::CephRgw - OS::TripleO::Services::CinderApi - OS::TripleO::Services::CinderBackup - OS::TripleO::Services::CinderScheduler - OS::TripleO::Services::CinderVolume - OS::TripleO::Services::Core - OS::TripleO::Services::Kernel - OS::TripleO::Services::Keystone - OS::TripleO::Services::GlanceApi - OS::TripleO::Services::GlanceRegistry",
"cp -r /usr/share/openstack-tripleo-heat-templates/roles ~/.",
"- OS::TripleO::Services::GlanceApi - OS::TripleO::Services::GlanceRegistry - OS::TripleO::Services::HeatApi # Remove this service - OS::TripleO::Services::HeatApiCfn # Remove this service - OS::TripleO::Services::HeatApiCloudwatch # Remove this service - OS::TripleO::Services::HeatEngine # Remove this service - OS::TripleO::Services::MySQL - OS::TripleO::Services::NeutronDhcpAgent",
"openstack overcloud roles generate -o roles_data-no_heat.yaml --roles-path ~/roles Controller Compute Networker",
"openstack overcloud deploy --templates -r ~/templates/roles_data-no_heat.yaml",
"resource_registry: OS::TripleO::Services::HeatApi: OS::Heat::None OS::TripleO::Services::HeatApiCfn: OS::Heat::None OS::TripleO::Services::HeatApiCloudwatch: OS::Heat::None OS::TripleO::Services::HeatEngine: OS::Heat::None",
"OS::TripleO::Services::CinderBackup: OS::Heat::None",
"resource_registry: OS::TripleO::Services::CinderBackup: ../podman/services/pacemaker/cinder-backup.yaml",
"openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/cinder-backup.yaml"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/customizing_your_red_hat_openstack_platform_deployment/assembly_composable-services-and-custom-roles
|
Managing AMQ Broker
|
Managing AMQ Broker Red Hat AMQ Broker 7.12 For Use with AMQ Broker 7.12
| null |
https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.12/html/managing_amq_broker/index
|
Chapter 118. Salesforce
|
Chapter 118. Salesforce Both producer and consumer are supported This component supports producer and consumer endpoints to communicate with Salesforce using Java DTOs. There is a companion maven plugin Camel Salesforce Plugin that generates these DTOs (see further below). 118.1. Dependencies When using salesforce with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-salesforce-starter</artifactId> </dependency> By default, camel-salesforce-maven-plugin uses TLSv1.3 to interact with salesforce. TLS version is configurable on the plugin. FIPS users can configure the property sslContextParameters.secureSocketProtocol. To use the maven-plugin you must add the following dependency to the pom.xml file. <plugin> <groupId>org.apache.camel.maven</groupId> <artifactId>camel-salesforce-maven-plugin</artifactId> <version>USD{camel-community.version}</version> <executions> <execution> <goals> <goal>generate</goal> </goals> <configuration> <clientId>USD{camelSalesforce.clientId}</clientId> <clientSecret>USD{camelSalesforce.clientSecret}</clientSecret> <userName>USD{camelSalesforce.userName}</userName> <password>USD{camelSalesforce.password}</password> <sslContextParameters> <secureSocketProtocol>TLSv1.2</secureSocketProtocol> </sslContextParameters> <includes> <include>Contact</include> </includes> </configuration> </execution> </executions> </plugin> Where camel-community.version refers to the corresponding Camel community version that you use when working with camel-salesforce-maven-plugin . For example, for Red Hat build of Camel Spring Boot version 4.4.0 you can use '4.4.0' version of Apache Camel. 118.2. Configuring Options Camel components are configured on two separate levels: component level endpoint level 118.2.1. Configuring Component Options At the component level, you set general and shared configurations that are, then, inherited by the endpoints. It is the highest configuration level. For example, a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre-configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. You can configure components using: the Component DSL . in a configuration file (application.properties, *.yaml files, etc). directly in the Java code. 118.2.2. Configuring Endpoint Options You usually spend more time setting up endpoints because they have many options. These options help you customize what you want the endpoint to do. The options are also categorized into whether the endpoint is used as a consumer (from), as a producer (to), or both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL and DataFormat DSL as a type safe way of configuring endpoints and data formats in Java. A good practice when configuring options is to use Property Placeholders . Property placeholders provide a few benefits: They help prevent using hardcoded urls, port numbers, sensitive information, and other settings. They allow externalizing the configuration from the code. They help the code to become more flexible and reusable. The following two sections list all the options, firstly for the component followed by the endpoint. 118.3. 
Component Options The Salesforce component supports 90 options, which are listed below. Name Description Default Type apexMethod (common) APEX method name. String apexQueryParams (common) Query params for APEX method. Map apiVersion (common) Salesforce API version. 53.0 String backoffIncrement (common) Backoff interval increment for Streaming connection restart attempts for failures beyond CometD auto-reconnect. 1000 long batchId (common) Bulk API Batch ID. String contentType (common) Bulk API content type, one of XML, CSV, ZIP_XML, ZIP_CSV. Enum values: XML CSV JSON ZIP_XML ZIP_CSV ZIP_JSON ContentType defaultReplayId (common) Default replayId setting if no value is found in initialReplayIdMap. -1 Long fallBackReplayId (common) ReplayId to fall back to after an Invalid Replay Id response. -1 Long format (common) Payload format to use for Salesforce API calls, either JSON or XML, defaults to JSON. As of Camel 3.12, this option only applies to the Raw operation. Enum values: JSON XML PayloadFormat httpClient (common) Custom Jetty Http Client to use to connect to Salesforce. SalesforceHttpClient httpClientConnectionTimeout (common) Connection timeout used by the HttpClient when connecting to the Salesforce server. 60000 long httpClientIdleTimeout (common) Timeout used by the HttpClient when waiting for response from the Salesforce server. 10000 long httpMaxContentLength (common) Max content length of an HTTP response. Integer httpRequestBufferSize (common) HTTP request buffer size. May need to be increased for large SOQL queries. 8192 Integer includeDetails (common) Include details in Salesforce1 Analytics report, defaults to false. Boolean initialReplayIdMap (common) Replay IDs to start from per channel name. Map instanceId (common) Salesforce1 Analytics report execution instance ID. String jobId (common) Bulk API Job ID. String limit (common) Limit on number of returned records. Applicable to some of the API, check the Salesforce documentation. Integer locator (common) Locator provided by salesforce Bulk 2.0 API for use in getting results for a Query job. String maxBackoff (common) Maximum backoff interval for Streaming connection restart attempts for failures beyond CometD auto-reconnect. 30000 long maxRecords (common) The maximum number of records to retrieve per set of results for a Bulk 2.0 Query. The request is still subject to the size limits. If you are working with a very large number of query results, you may experience a timeout before receiving all the data from Salesforce. To prevent a timeout, specify the maximum number of records your client is expecting to receive in the maxRecords parameter. This splits the results into smaller sets with this value as the maximum size. Integer notFoundBehaviour (common) Sets the behaviour of 404 not found status received from Salesforce API. Should the body be set to NULL NotFoundBehaviour#NULL or should a exception be signaled on the exchange NotFoundBehaviour#EXCEPTION - the default. Enum values: EXCEPTION NULL EXCEPTION NotFoundBehaviour notifyForFields (common) Notify for fields, options are ALL, REFERENCED, SELECT, WHERE. Enum values: ALL REFERENCED SELECT WHERE NotifyForFieldsEnum notifyForOperationCreate (common) Notify for create operation, defaults to false (API version = 29.0). Boolean notifyForOperationDelete (common) Notify for delete operation, defaults to false (API version = 29.0). Boolean notifyForOperations (common) Notify for operations, options are ALL, CREATE, EXTENDED, UPDATE (API version 29.0). 
Enum values: ALL CREATE EXTENDED UPDATE NotifyForOperationsEnum notifyForOperationUndelete (common) Notify for un-delete operation, defaults to false (API version = 29.0). Boolean notifyForOperationUpdate (common) Notify for update operation, defaults to false (API version = 29.0). Boolean objectMapper (common) Custom Jackson ObjectMapper to use when serializing/deserializing Salesforce objects. ObjectMapper packages (common) In what packages are the generated DTO classes. Typically the classes would be generated using camel-salesforce-maven-plugin. Set it if using the generated DTOs to gain the benefit of using short SObject names in parameters/header values. Multiple packages can be separated by comma. String pkChunking (common) Use PK Chunking. Only for use in original Bulk API. Bulk 2.0 API performs PK chunking automatically, if necessary. Boolean pkChunkingChunkSize (common) Chunk size for use with PK Chunking. If unspecified, salesforce default is 100,000. Maximum size is 250,000. Integer pkChunkingParent (common) Specifies the parent object when you're enabling PK chunking for queries on sharing objects. The chunks are based on the parent object's records rather than the sharing object's records. For example, when querying on AccountShare, specify Account as the parent object. PK chunking is supported for sharing objects as long as the parent object is supported. String pkChunkingStartRow (common) Specifies the 15-character or 18-character record ID to be used as the lower boundary for the first chunk. Use this parameter to specify a starting ID when restarting a job that failed between batches. String queryLocator (common) Query Locator provided by salesforce for use when a query results in more records than can be retrieved in a single call. Use this value in a subsequent call to retrieve additional records. String rawPayload (common) Use raw payload String for request and response (either JSON or XML depending on format), instead of DTOs, false by default. false boolean reportId (common) Salesforce1 Analytics report Id. String reportMetadata (common) Salesforce1 Analytics report metadata for filtering. ReportMetadata resultId (common) Bulk API Result ID. String sObjectBlobFieldName (common) SObject blob field name. String sObjectClass (common) Fully qualified SObject class name, usually generated using camel-salesforce-maven-plugin. String sObjectFields (common) SObject fields to retrieve. String sObjectId (common) SObject ID if required by API. String sObjectIdName (common) SObject external ID field name. String sObjectIdValue (common) SObject external ID field value. String sObjectName (common) SObject name if required or supported by API. String sObjectQuery (common) Salesforce SOQL query string. String sObjectSearch (common) Salesforce SOSL search string. String updateTopic (common) Whether to update an existing Push Topic when using the Streaming API, defaults to false. false boolean config (common (advanced)) Global endpoint configuration - use to set values that are common to all endpoints. SalesforceEndpointConfig httpClientProperties (common (advanced)) Used to set any properties that can be configured on the underlying HTTP client. Have a look at properties of SalesforceHttpClient and the Jetty HttpClient for all available options. Map longPollingTransportProperties (common (advanced)) Used to set any properties that can be configured on the LongPollingTransport used by the BayeuxClient (CometD) used by the streaming api. 
Map workerPoolMaxSize (common (advanced)) Maximum size of the thread pool used to handle HTTP responses. 20 int workerPoolSize (common (advanced)) Size of the thread pool used to handle HTTP responses. 10 int bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean allOrNone (producer) Composite API option to indicate to rollback all records if any are not successful. false boolean apexUrl (producer) APEX method URL. String compositeMethod (producer) Composite (raw) method. String lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean rawHttpHeaders (producer) Comma separated list of message headers to include as HTTP parameters for Raw operation. String rawMethod (producer) HTTP method to use for the Raw operation. String rawPath (producer) The portion of the endpoint URL after the domain name. E.g., '/services/data/v52.0/sobjects/Account/'. String rawQueryParameters (producer) Comma separated list of message headers to include as query parameters for Raw operation. Do not url-encode values as this will be done automatically. String autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean httpProxyExcludedAddresses (proxy) A list of addresses for which HTTP proxy server should not be used. Set httpProxyHost (proxy) Hostname of the HTTP proxy server to use. String httpProxyIncludedAddresses (proxy) A list of addresses for which HTTP proxy server should be used. Set httpProxyPort (proxy) Port number of the HTTP proxy server to use. Integer httpProxySocks4 (proxy) If set to true the configures the HTTP proxy to use as a SOCKS4 proxy. false boolean authenticationType (security) Explicit authentication method to be used, one of USERNAME_PASSWORD, REFRESH_TOKEN or JWT. Salesforce component can auto-determine the authentication method to use from the properties set, set this property to eliminate any ambiguity. Enum values: USERNAME_PASSWORD REFRESH_TOKEN JWT AuthenticationType clientId (security) Required OAuth Consumer Key of the connected app configured in the Salesforce instance setup. Typically a connected app needs to be configured but one can be provided by installing a package. String clientSecret (security) OAuth Consumer Secret of the connected app configured in the Salesforce instance setup. 
String httpProxyAuthUri (security) Used in authentication against the HTTP proxy server, needs to match the URI of the proxy server in order for the httpProxyUsername and httpProxyPassword to be used for authentication. String httpProxyPassword (security) Password to use to authenticate against the HTTP proxy server. String httpProxyRealm (security) Realm of the proxy server, used in preemptive Basic/Digest authentication methods against the HTTP proxy server. String httpProxySecure (security) If set to false disables the use of TLS when accessing the HTTP proxy. true boolean httpProxyUseDigestAuth (security) If set to true Digest authentication will be used when authenticating to the HTTP proxy, otherwise Basic authorization method will be used. false boolean httpProxyUsername (security) Username to use to authenticate against the HTTP proxy server. String instanceUrl (security) URL of the Salesforce instance used after authentication, by default received from Salesforce on successful authentication. String jwtAudience (security) Value to use for the Audience claim (aud) when using OAuth JWT flow. If not set, the login URL will be used, which is appropriate in most cases. String keystore (security) KeyStore parameters to use in OAuth JWT flow. The KeyStore should contain only one entry with private key and certificate. Salesforce does not verify the certificate chain, so this can easily be a selfsigned certificate. Make sure that you upload the certificate to the corresponding connected app. KeyStoreParameters lazyLogin (security) If set to true prevents the component from authenticating to Salesforce with the start of the component. You would generally set this to the (default) false and authenticate early and be immediately aware of any authentication issues. false boolean loginConfig (security) All authentication configuration in one nested bean, all properties set there can be set directly on the component as well. SalesforceLoginConfig loginUrl (security) Required URL of the Salesforce instance used for authentication, by default set to https://login.salesforce.com . https://login.salesforce.com String password (security) Password used in OAuth flow to gain access to access token. It's easy to get started with password OAuth flow, but in general one should avoid it as it is deemed less secure than other flows. Make sure that you append security token to the end of the password if using one. String refreshToken (security) Refresh token already obtained in the refresh token OAuth flow. One needs to setup a web application and configure a callback URL to receive the refresh token, or configure using the builtin callback at https://login.salesforce.com/services/oauth2/success or https://test.salesforce.com/services/oauth2/success and then retrive the refresh_token from the URL at the end of the flow. Note that in development organizations Salesforce allows hosting the callback web application at localhost. String sslContextParameters (security) SSL parameters to use, see SSLContextParameters class for all available options. SSLContextParameters useGlobalSslContextParameters (security) Enable usage of global SSL context parameters. false boolean userName (security) Username used in OAuth flow to gain access to access token. It's easy to get started with password OAuth flow, but in general one should avoid it as it is deemed less secure than other flows. String 118.4. 
Endpoint Options The Salesforce endpoint is configured using URI syntax: with the following path and query parameters: 118.4.1. Path Parameters (2 parameters) Name Description Default Type operationName (producer) The operation to use. Enum values: getVersions getResources getGlobalObjects getBasicInfo getDescription getSObject createSObject updateSObject deleteSObject getSObjectWithId upsertSObject deleteSObjectWithId getBlobField query queryMore queryAll search apexCall recent createJob getJob closeJob abortJob createBatch getBatch getAllBatches getRequest getResults createBatchQuery getQueryResultIds getQueryResult getRecentReports getReportDescription executeSyncReport executeAsyncReport getReportInstances getReportResults limits approval approvals composite-tree composite-batch composite compositeRetrieveSObjectCollections compositeCreateSObjectCollections compositeUpdateSObjectCollections compositeUpsertSObjectCollections compositeDeleteSObjectCollections bulk2GetAllJobs bulk2CreateJob bulk2GetJob bulk2CreateBatch bulk2CloseJob bulk2AbortJob bulk2DeleteJob bulk2GetSuccessfulResults bulk2GetFailedResults bulk2GetUnprocessedRecords bulk2CreateQueryJob bulk2GetQueryJob bulk2GetAllQueryJobs bulk2GetQueryJobResults bulk2AbortQueryJob bulk2DeleteQueryJob raw OperationName topicName (consumer) The name of the topic/channel to use. String 118.4.2. Query Parameters (57 parameters) Name Description Default Type apexMethod (common) APEX method name. String apexQueryParams (common) Query params for APEX method. Map apiVersion (common) Salesforce API version. 53.0 String backoffIncrement (common) Backoff interval increment for Streaming connection restart attempts for failures beyond CometD auto-reconnect. 1000 long batchId (common) Bulk API Batch ID. String contentType (common) Bulk API content type, one of XML, CSV, ZIP_XML, ZIP_CSV. Enum values: XML CSV JSON ZIP_XML ZIP_CSV ZIP_JSON ContentType defaultReplayId (common) Default replayId setting if no value is found in initialReplayIdMap. -1 Long fallBackReplayId (common) ReplayId to fall back to after an Invalid Replay Id response. -1 Long format (common) Payload format to use for Salesforce API calls, either JSON or XML, defaults to JSON. As of Camel 3.12, this option only applies to the Raw operation. Enum values: JSON XML PayloadFormat httpClient (common) Custom Jetty Http Client to use to connect to Salesforce. SalesforceHttpClient includeDetails (common) Include details in Salesforce1 Analytics report, defaults to false. Boolean initialReplayIdMap (common) Replay IDs to start from per channel name. Map instanceId (common) Salesforce1 Analytics report execution instance ID. String jobId (common) Bulk API Job ID. String limit (common) Limit on number of returned records. Applicable to some of the API, check the Salesforce documentation. Integer locator (common) Locator provided by salesforce Bulk 2.0 API for use in getting results for a Query job. String maxBackoff (common) Maximum backoff interval for Streaming connection restart attempts for failures beyond CometD auto-reconnect. 30000 long maxRecords (common) The maximum number of records to retrieve per set of results for a Bulk 2.0 Query. The request is still subject to the size limits. If you are working with a very large number of query results, you may experience a timeout before receiving all the data from Salesforce. To prevent a timeout, specify the maximum number of records your client is expecting to receive in the maxRecords parameter. 
This splits the results into smaller sets with this value as the maximum size. Integer notFoundBehaviour (common) Sets the behaviour of 404 not found status received from Salesforce API. Should the body be set to NULL NotFoundBehaviour#NULL or should a exception be signaled on the exchange NotFoundBehaviour#EXCEPTION - the default. Enum values: EXCEPTION NULL EXCEPTION NotFoundBehaviour notifyForFields (common) Notify for fields, options are ALL, REFERENCED, SELECT, WHERE. Enum values: ALL REFERENCED SELECT WHERE NotifyForFieldsEnum notifyForOperationCreate (common) Notify for create operation, defaults to false (API version = 29.0). Boolean notifyForOperationDelete (common) Notify for delete operation, defaults to false (API version = 29.0). Boolean notifyForOperations (common) Notify for operations, options are ALL, CREATE, EXTENDED, UPDATE (API version 29.0). Enum values: ALL CREATE EXTENDED UPDATE NotifyForOperationsEnum notifyForOperationUndelete (common) Notify for un-delete operation, defaults to false (API version = 29.0). Boolean notifyForOperationUpdate (common) Notify for update operation, defaults to false (API version = 29.0). Boolean objectMapper (common) Custom Jackson ObjectMapper to use when serializing/deserializing Salesforce objects. ObjectMapper pkChunking (common) Use PK Chunking. Only for use in original Bulk API. Bulk 2.0 API performs PK chunking automatically, if necessary. Boolean pkChunkingChunkSize (common) Chunk size for use with PK Chunking. If unspecified, salesforce default is 100,000. Maximum size is 250,000. Integer pkChunkingParent (common) Specifies the parent object when you're enabling PK chunking for queries on sharing objects. The chunks are based on the parent object's records rather than the sharing object's records. For example, when querying on AccountShare, specify Account as the parent object. PK chunking is supported for sharing objects as long as the parent object is supported. String pkChunkingStartRow (common) Specifies the 15-character or 18-character record ID to be used as the lower boundary for the first chunk. Use this parameter to specify a starting ID when restarting a job that failed between batches. String queryLocator (common) Query Locator provided by salesforce for use when a query results in more records than can be retrieved in a single call. Use this value in a subsequent call to retrieve additional records. String rawPayload (common) Use raw payload String for request and response (either JSON or XML depending on format), instead of DTOs, false by default. false boolean reportId (common) Salesforce1 Analytics report Id. String reportMetadata (common) Salesforce1 Analytics report metadata for filtering. ReportMetadata resultId (common) Bulk API Result ID. String sObjectBlobFieldName (common) SObject blob field name. String sObjectClass (common) Fully qualified SObject class name, usually generated using camel-salesforce-maven-plugin. String sObjectFields (common) SObject fields to retrieve. String sObjectId (common) SObject ID if required by API. String sObjectIdName (common) SObject external ID field name. String sObjectIdValue (common) SObject external ID field value. String sObjectName (common) SObject name if required or supported by API. String sObjectQuery (common) Salesforce SOQL query string. String sObjectSearch (common) Salesforce SOSL search string. String updateTopic (common) Whether to update an existing Push Topic when using the Streaming API, defaults to false. 
false boolean bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored. false boolean replayId (consumer) The replayId value to use when subscribing. Long exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, which will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern allOrNone (producer) Composite API option to indicate whether to roll back all records if any are not successful. false boolean apexUrl (producer) APEX method URL. String compositeMethod (producer) Composite (raw) method. String lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean rawHttpHeaders (producer) Comma separated list of message headers to include as HTTP parameters for Raw operation. String rawMethod (producer) HTTP method to use for the Raw operation. String rawPath (producer) The portion of the endpoint URL after the domain name. E.g., '/services/data/v52.0/sobjects/Account/'. String rawQueryParameters (producer) Comma separated list of message headers to include as query parameters for Raw operation. Do not url-encode values as this will be done automatically. String
118.5. Authenticating to Salesforce
The component supports three OAuth authentication flows: OAuth 2.0 Username-Password Flow OAuth 2.0 Refresh Token Flow OAuth 2.0 JWT Bearer Token Flow For each flow, a different set of properties needs to be set: Table 118.1. Table 1. Properties to set for each authentication flow Property Where to find it on Salesforce Flow clientId Connected App, Consumer Key All flows clientSecret Connected App, Consumer Secret Username-Password, Refresh Token userName Salesforce user username Username-Password, JWT Bearer Token password Salesforce user password Username-Password refreshToken From OAuth flow callback Refresh Token keystore Connected App, Digital Certificate JWT Bearer Token The component auto-determines which flow you are trying to configure; to remove any ambiguity, set the authenticationType property. Note Using Username-Password Flow in production is not encouraged. Note The certificate used in JWT Bearer Token Flow can be a self-signed certificate. The KeyStore holding the certificate and the private key must contain only a single certificate-private key entry.
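The following is a minimal sketch of configuring the Username-Password flow programmatically rather than via URI options; it is illustrative only, assumes the usual setters on SalesforceLoginConfig and SalesforceComponent for the properties listed above, and uses placeholder credential values:

import org.apache.camel.CamelContext;
import org.apache.camel.component.salesforce.SalesforceComponent;
import org.apache.camel.component.salesforce.SalesforceLoginConfig;
import org.apache.camel.impl.DefaultCamelContext;

public class SalesforceSetup {
    public static CamelContext createContext() throws Exception {
        // Username-Password flow: clientId, clientSecret, userName and password
        // (see Table 118.1 above); setter names are assumed to mirror the option names
        SalesforceLoginConfig loginConfig = new SalesforceLoginConfig();
        loginConfig.setClientId("<connected app consumer key>");
        loginConfig.setClientSecret("<connected app consumer secret>");
        loginConfig.setUserName("<salesforce username>");
        loginConfig.setPassword("<password plus security token>");

        SalesforceComponent salesforce = new SalesforceComponent();
        salesforce.setLoginConfig(loginConfig);

        CamelContext context = new DefaultCamelContext();
        context.addComponent("salesforce", salesforce);
        return context;
    }
}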
118.6. URI format
When used as a consumer, receiving streaming events, the URI scheme is: salesforce:topic?options When used as a producer, invoking the Salesforce REST APIs, the URI scheme is: salesforce:operationName?options
118.7. Passing in Salesforce headers and fetching Salesforce response headers
There is support for passing Salesforce headers via inbound message headers: header names that start with Sforce or x-sfdc on the Camel message will be passed on in the request, and response headers that start with Sforce will be present in the outbound message headers. For example, to fetch API limits you can specify: // in your Camel route set the header before Salesforce endpoint //... .setHeader("Sforce-Limit-Info", constant("api-usage")) .to("salesforce:getGlobalObjects") .to(myProcessor); // myProcessor will receive `Sforce-Limit-Info` header on the outbound // message class MyProcessor implements Processor { public void process(Exchange exchange) throws Exception { Message in = exchange.getIn(); String apiLimits = in.getHeader("Sforce-Limit-Info", String.class); } } In addition, the HTTP response status code and text are available as the headers Exchange.HTTP_RESPONSE_CODE and Exchange.HTTP_RESPONSE_TEXT .
118.8. Supported Salesforce APIs
The component supports the following Salesforce APIs. Producer endpoints can use the following APIs. Most of the APIs process one record at a time; the Query API can retrieve multiple records.
118.8.1. Rest API
You can use the following for operationName : getVersions - Gets supported Salesforce REST API versions getResources - Gets available Salesforce REST Resource endpoints getGlobalObjects - Gets metadata for all available SObject types getBasicInfo - Gets basic metadata for a specific SObject type getDescription - Gets comprehensive metadata for a specific SObject type getSObject - Gets an SObject using its Salesforce Id createSObject - Creates an SObject updateSObject - Updates an SObject using Id deleteSObject - Deletes an SObject using Id getSObjectWithId - Gets an SObject using an external (user defined) id field upsertSObject - Updates or inserts an SObject using an external id deleteSObjectWithId - Deletes an SObject using an external id query - Runs a Salesforce SOQL query queryMore - Retrieves more results (in case of large number of results) using result link returned from the 'query' API search - Runs a Salesforce SOSL query limits - fetching organization API usage limits recent - fetching recent items approval - submit a record or records (batch) for approval process approvals - fetch a list of all approval processes composite - submit up to 25 possibly related REST requests and receive individual responses. It's also possible to use "raw" composite without limitation. composite-tree - create up to 200 records with parent-child relationships (up to 5 levels) in one go composite-batch - submit a composition of requests in batch compositeRetrieveSObjectCollections - Retrieve one or more records of the same object type. compositeCreateSObjectCollections - Add up to 200 records, returning a list of SaveSObjectResult objects. compositeUpdateSObjectCollections - Update up to 200 records, returning a list of SaveSObjectResult objects. compositeUpsertSObjectCollections - Create or update (upsert) up to 200 records based on an external ID field. Returns a list of UpsertSObjectResult objects. compositeDeleteSObjectCollections - Delete up to 200 records, returning a list of SaveSObjectResult objects. queryAll - Runs a SOQL query.
It returns the results that are deleted because of a merge (merges up to three records into one of the records, deletes the others, and reparents any related records) or delete. Also returns the information about archived Task and Event records. getBlobField - Retrieves the specified blob field from an individual record. apexCall - Executes a user defined APEX REST API call. raw - Send requests to Salesforce and have full, raw control over endpoint, parameters, body, etc. For example, the following producer endpoint uses the upsertSObject API, with the sObjectIdName parameter specifying 'Name' as the external id field. The request message body should be an SObject DTO generated using the maven plugin. The response message will either be null if an existing record was updated, or CreateSObjectResult with an id of the new record, or a list of errors while creating the new object. ...to("salesforce:upsertSObject?sObjectIdName=Name")...
118.8.2. Bulk 2.0 API
The Bulk 2.0 API has a simplified model over the original Bulk API. Use it to quickly load a large amount of data into Salesforce, or query a large amount of data out of Salesforce. Data must be provided in CSV format. The minimum API version for Bulk 2.0 is v41.0. The minimum API version for Bulk Queries is v47.0. DTO classes mentioned below are from the org.apache.camel.component.salesforce.api.dto.bulkv2 package. The following operations are supported: bulk2CreateJob - Create a bulk job. Supply an instance of Job in the message body. bulk2GetJob - Get an existing Job. jobId parameter is required. bulk2CreateBatch - Add a Batch of CSV records to a job. Supply CSV data in the message body. The first row must contain headers. jobId parameter is required. bulk2CloseJob - Close a job. You must close the job in order for it to be processed or aborted/deleted. jobId parameter is required. bulk2AbortJob - Abort a job. jobId parameter is required. bulk2DeleteJob - Delete a job. jobId parameter is required. bulk2GetSuccessfulResults - Get successful results for a job. Returned message body will contain an InputStream of CSV data. jobId parameter is required. bulk2GetFailedResults - Get failed results for a job. Returned message body will contain an InputStream of CSV data. jobId parameter is required. bulk2GetUnprocessedRecords - Get unprocessed records for a job. Returned message body will contain an InputStream of CSV data. jobId parameter is required. bulk2GetAllJobs - Get all jobs. Response body is an instance of Jobs . If the done property is false, there are additional pages to fetch, and the nextRecordsUrl property contains the value to be set in the queryLocator parameter on subsequent calls. bulk2CreateQueryJob - Create a bulk query job. Supply an instance of QueryJob in the message body. bulk2GetQueryJob - Get a bulk query job. jobId parameter is required. bulk2GetQueryJobResults - Get bulk query job results. jobId parameter is required. Accepts maxRecords and locator parameters. Response message headers include Sforce-NumberOfRecords and Sforce-Locator headers. The value of Sforce-Locator can be passed into subsequent calls via the locator parameter. bulk2AbortQueryJob - Abort a bulk query job. jobId parameter is required. bulk2DeleteQueryJob - Delete a bulk query job. jobId parameter is required. bulk2GetAllQueryJobs - Get all query jobs. Response body is an instance of QueryJobs . If the done property is false, there are additional pages to fetch, and the nextRecordsUrl property contains the value to be set in the queryLocator parameter on subsequent calls.
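To illustrate how these operations fit together, the following is a hypothetical sketch of a CSV ingest flow (create the job, add a batch, close the job so Salesforce processes it). The Job setter names, the OperationEnum constant and the assumption that the created Job (with its id) is returned in the response body are not taken from the documentation above and should be checked against the bulkv2 DTOs:

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.salesforce.api.dto.bulkv2.Job;
import org.apache.camel.component.salesforce.api.dto.bulkv2.OperationEnum;

public class BulkV2IngestRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:bulk2Ingest")
            // 1. create the job; the body is a bulkv2 Job describing object and operation
            .process(exchange -> {
                Job job = new Job();
                job.setObject("Contact");               // assumed setter name
                job.setOperation(OperationEnum.INSERT); // assumed setter/enum constant
                exchange.getMessage().setBody(job);
            })
            .to("salesforce:bulk2CreateJob")
            // assumed: the response body is the created Job carrying its id
            .process(exchange -> exchange.setProperty("jobId",
                    exchange.getMessage().getBody(Job.class).getId()))
            // 2. add a batch of CSV records; the first row must contain headers
            .setBody(constant("FirstName,LastName\nJane,Doe\nJohn,Roe\n"))
            .toD("salesforce:bulk2CreateBatch?jobId=${exchangeProperty.jobId}")
            // 3. close the job so that Salesforce starts processing it
            .toD("salesforce:bulk2CloseJob?jobId=${exchangeProperty.jobId}");
    }
}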
118.8.3. Rest Bulk (original) API
Producer endpoints can use the following APIs. All Job data formats, i.e. xml, csv, zip/xml, and zip/csv are supported. The request and response have to be marshalled/unmarshalled by the route. Usually the request will be some stream source like a CSV file, and the response may also be saved to a file to be correlated with the request. You can use the following for operationName : createJob - Creates a Salesforce Bulk Job. Must supply a JobInfo instance in body. PK Chunking is supported via the pkChunking* options; see the Salesforce Bulk API documentation for an explanation. getJob - Gets a Job using its Salesforce Id closeJob - Closes a Job abortJob - Aborts a Job createBatch - Submits a Batch within a Bulk Job getBatch - Gets a Batch using Id getAllBatches - Gets all Batches for a Bulk Job Id getRequest - Gets Request data (XML/CSV) for a Batch getResults - Gets the results of the Batch when it is complete createBatchQuery - Creates a Batch from an SOQL query getQueryResultIds - Gets a list of Result Ids for a Batch Query getQueryResult - Gets results for a Result Id getRecentReports - Gets up to 200 of the reports you most recently viewed by sending a GET request to the Report List resource. getReportDescription - Retrieves the report, report type, and related metadata for a report, either in a tabular or summary or matrix format. executeSyncReport - Runs a report synchronously with or without changing filters and returns the latest summary data. executeAsyncReport - Runs an instance of a report asynchronously with or without filters and returns the summary data with or without details. getReportInstances - Returns a list of instances for a report that you requested to be run asynchronously. Each item in the list is treated as a separate instance of the report. getReportResults - Contains the results of running a report. For example, the following producer endpoint uses the createBatch API to create a Job Batch. The in message must contain a body that can be converted into an InputStream (usually UTF-8 CSV or XML content from a file, etc.) and header fields 'jobId' for the Job and 'contentType' for the Job content type, which can be XML, CSV, ZIP_XML or ZIP_CSV. The output message body will contain BatchInfo on success, or a SalesforceException will be thrown on error. ...to("salesforce:createBatch")..
118.8.4. Rest Streaming API
Consumer endpoints can use the following syntax for streaming endpoints to receive Salesforce notifications on create/update. To create and subscribe to a topic from("salesforce:CamelTestTopic?notifyForFields=ALL&notifyForOperations=ALL&sObjectName=Merchandise__c&updateTopic=true&sObjectQuery=SELECT Id, Name FROM Merchandise__c")... To subscribe to an existing topic from("salesforce:CamelTestTopic?sObjectName=Merchandise__c")...
118.8.5. Platform events
To emit a platform event use the createSObject operation. The message body can be either a JSON string or an InputStream with key-value data - in that case sObjectName needs to be set to the API name of the event - or a class that extends AbstractDTOBase with the appropriate class name for the event. For example using a DTO: class Order_Event__e extends AbstractDTOBase { @JsonProperty("OrderNumber") private String orderNumber; // ...
other properties and getters/setters } from("timer:tick") .process(exchange -> { final Message in = exchange.getIn(); String orderNumber = "ORD" + exchange.getProperty(Exchange.TIMER_COUNTER); Order_Event__e event = new Order_Event__e(); event.setOrderNumber(orderNumber); in.setBody(event); }) .to("salesforce:createSObject"); Or using JSON event data: from("timer:tick") .process(exchange -> { final Message in = exchange.getIn(); String orderNumber = "ORD" + exchange.getProperty(Exchange.TIMER_COUNTER); in.setBody("{\"OrderNumber\":\"" + orderNumber + "\"}"); }) .to("salesforce:createSObject?sObjectName=Order_Event__e"); To receive platform events use the consumer endpoint with the API name of the platform event prefixed with event/ (or /event/ ), e.g.: salesforce:event/Order_Event__e . A processor consuming from that endpoint will receive either an org.apache.camel.component.salesforce.api.dto.PlatformEvent object or an org.cometd.bayeux.Message in the body, depending on whether rawPayload is false or true respectively. For example, in the simplest form to consume one event: PlatformEvent event = consumer.receiveBody("salesforce:event/Order_Event__e", PlatformEvent.class);
118.8.6. Change data capture events
Salesforce can be configured to emit notifications for record changes of selected objects, and the Camel Salesforce component can react to such notifications, allowing, for instance, those changes to be synchronized into an external system. The notifications of interest are specified in the from("salesforce:XXX") clause of a Camel route via the subscription channel, e.g: from("salesforce:data/ChangeEvents?replayId=-1").log("being notified of all change events") from("salesforce:data/AccountChangeEvent?replayId=-1").log("being notified of change events for Account records") from("salesforce:data/Employee__ChangeEvent?replayId=-1").log("being notified of change events for Employee__c custom object") The received message contains either java.util.Map<String,Object> or org.cometd.bayeux.Message in the body, depending on whether rawPayload is false or true respectively. The CamelSalesforceChangeType header is set to one of CREATE , UPDATE , DELETE or UNDELETE . More details about how to use the change data capture capabilities of the Camel Salesforce component can be found in the ChangeEventsConsumerIntegrationTest . The Salesforce developer guide is a good resource for understanding the subtleties of implementing a change data capture integration application, in particular the dynamic nature of change event body fields, the high-level replication steps, and the security considerations.
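As a small illustration of the above (a sketch, not taken from the official documentation), the route below consumes Account change events and inspects the change type header; with rawPayload left at false the body is the java.util.Map described above:

import java.util.Map;
import org.apache.camel.builder.RouteBuilder;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class AccountChangeRoute extends RouteBuilder {
    private static final Logger LOG = LoggerFactory.getLogger(AccountChangeRoute.class);

    @Override
    public void configure() {
        from("salesforce:data/AccountChangeEvent?replayId=-1")
            .process(exchange -> {
                // header documented above: CREATE, UPDATE, DELETE or UNDELETE
                String changeType = exchange.getMessage()
                        .getHeader("CamelSalesforceChangeType", String.class);
                @SuppressWarnings("unchecked")
                Map<String, Object> change = exchange.getMessage().getBody(Map.class);
                LOG.info("Received {} change event with fields {}", changeType, change.keySet());
            });
    }
}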
118.9. Examples
118.9.1. Uploading a document to a ContentWorkspace
Create the ContentVersion in Java, using a Processor instance: public class ContentProcessor implements Processor { public void process(Exchange exchange) throws Exception { Message message = exchange.getIn(); ContentVersion cv = new ContentVersion(); ContentWorkspace cw = getWorkspace(exchange); cv.setFirstPublishLocationId(cw.getId()); cv.setTitle("test document"); cv.setPathOnClient("test_doc.html"); byte[] document = message.getBody(byte[].class); ObjectMapper mapper = new ObjectMapper(); String enc = mapper.convertValue(document, String.class); cv.setVersionDataUrl(enc); message.setBody(cv); } protected ContentWorkspace getWorkspace(Exchange exchange) { // Look up the content workspace somehow, maybe use enrich() to add it to a // header that can be extracted here ---- } } Give the output from the processor to the Salesforce component: from("file:///home/camel/library") .process(new ContentProcessor()) // convert bytes from the file into a ContentVersion SObject // for the salesforce component .to("salesforce:createSObject");
118.10. Using Salesforce Limits API
With the salesforce:limits operation you can fetch API limits from Salesforce and then act upon the data received. The result of the salesforce:limits operation is mapped to the org.apache.camel.component.salesforce.api.dto.Limits class and can be used in custom processors or expressions. For instance, consider that you need to limit the API usage of Salesforce so that 10% of daily API requests is left for other routes. The body of the output message contains an instance of the org.apache.camel.component.salesforce.api.dto.Limits object that can be used in conjunction with the Content Based Router and the Spring Expression Language (SpEL) to choose when to perform queries. Notice how multiplying the integer value held in body.dailyApiRequests.remaining by 1.0 makes the expression evaluate using floating point arithmetic; without it, integer division would be used, which would result in either 0 (some API limits consumed) or 1 (no API limits consumed). from("direct:querySalesforce") .to("salesforce:limits") .choice() .when(spel("#{1.0 * body.dailyApiRequests.remaining / body.dailyApiRequests.max > 0.1}")) .to("salesforce:query?...") .otherwise() .setBody(constant("Used up Salesforce API limits, leaving 10% for critical routes")) .endChoice()
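The same check can be written in plain Java instead of SpEL. The sketch below is illustrative and assumes the Limits DTO exposes getDailyApiRequests() with getRemaining() and getMax(), mirroring the body.dailyApiRequests.remaining and body.dailyApiRequests.max paths used in the expression above:

import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.apache.camel.component.salesforce.api.dto.Limits;

public class DailyApiBudgetCheck implements Processor {
    @Override
    public void process(Exchange exchange) throws Exception {
        Limits limits = exchange.getMessage().getBody(Limits.class);
        // getter names assumed from the SpEL paths shown above
        double remaining = limits.getDailyApiRequests().getRemaining();
        double max = limits.getDailyApiRequests().getMax();
        // keep at least 10% of the daily API requests for critical routes
        exchange.getMessage().setHeader("withinApiBudget", remaining / max > 0.1);
    }
}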
118.11. Working with approvals
All the properties are named exactly the same as in the Salesforce REST API, prefixed with approval. . You can set approval properties by setting approval.PropertyName on the Endpoint; these will be used as a template, meaning that any property not present in either the body or a header will be taken from the Endpoint configuration. Or you can set the approval template on the Endpoint by assigning the approval property to a reference to a bean in the Registry. You can also provide header values using the same approval.PropertyName in the incoming message headers. And finally the body can contain one ApprovalRequest or an Iterable of ApprovalRequest objects to process as a batch. The important thing to remember is the priority of the values specified in these three mechanisms: a value in the body takes precedence over any other, a value in a message header takes precedence over the template value, and the template value is used only if no other value was given in the header or body. For example, to send one record for approval using values in headers, use the following. Given a route: from("direct:example1")// .setHeader("approval.ContextId", simple("${body['contextId']}")) .setHeader("approval.NextApproverIds", simple("${body['nextApproverIds']}")) .to("salesforce:approval?"// + "approval.actionType=Submit"// + "&approval.comments=this is a test"// + "&approval.processDefinitionNameOrId=Test_Account_Process"// + "&approval.skipEntryCriteria=true"); You could send a record for approval using: final Map<String, String> body = new HashMap<>(); body.put("contextId", accountIds.iterator().next()); body.put("nextApproverIds", userId); final ApprovalResult result = template.requestBody("direct:example1", body, ApprovalResult.class);
118.12. Using Salesforce Recent Items API
To fetch the recent items use the salesforce:recent operation. This operation returns a java.util.List of org.apache.camel.component.salesforce.api.dto.RecentItem objects ( List<RecentItem> ) that in turn contain the Id , Name and Attributes (with type and url properties). You can limit the number of returned items by setting the limit parameter to the maximum number of records to return. For example: from("direct:fetchRecentItems") .to("salesforce:recent") .split().body() .log("${body.name} at ${body.attributes.url}");
118.13. Using Salesforce Composite API to submit SObject tree
To create up to 200 records including parent-child relationships use the salesforce:composite-tree operation. This requires an instance of org.apache.camel.component.salesforce.api.dto.composite.SObjectTree in the input message and returns the same tree of objects in the output message. The org.apache.camel.component.salesforce.api.dto.AbstractSObjectBase instances within the tree get updated with the identifier values ( Id property), or their corresponding org.apache.camel.component.salesforce.api.dto.composite.SObjectNode is populated with errors on failure. Note that for some records the operation can succeed and for others it can fail, so you need to manually check for errors. The easiest way to use this functionality is to use the DTOs generated by the camel-salesforce-maven-plugin , but you also have the option of customizing the references that identify each object in the tree, for instance primary keys from your database. Let's look at an example: Account account = ... Contact president = ... Contact marketing = ... Account anotherAccount = ... Contact sales = ... Asset someAsset = ... // build the tree SObjectTree request = new SObjectTree(); request.addObject(account).addChildren(president, marketing); request.addObject(anotherAccount).addChild(sales).addChild(someAsset); final SObjectTree response = template.requestBody("salesforce:composite-tree", request, SObjectTree.class); final Map<Boolean, List<SObjectNode>> result = response.allNodes() .collect(Collectors.groupingBy(SObjectNode::hasErrors)); final List<SObjectNode> withErrors = result.get(true); final List<SObjectNode> succeeded = result.get(false); final String firstId = succeeded.get(0).getId(); 118.14.
Using Salesforce Composite API to submit multiple requests in a batch
The Composite API batch operation ( composite-batch ) allows you to accumulate multiple requests in a batch and then submit them in one go, saving the round trip cost of multiple individual requests. Each response is then received in a list of responses with the order preserved, so that the n-th request's response is in the n-th place of the response. Note The results can vary from API to API so the result of the request is given as a java.lang.Object . In most cases the result will be a java.util.Map with string keys and values or another java.util.Map as a value. Requests are made in JSON format and hold some type information (i.e. it is known what values are strings and what values are numbers). Let's look at an example: final String accountId = ... final SObjectBatch batch = new SObjectBatch("38.0"); final Account updates = new Account(); updates.setName("NewName"); batch.addUpdate("Account", accountId, updates); final Account newAccount = new Account(); newAccount.setName("Account created from Composite batch API"); batch.addCreate(newAccount); batch.addGet("Account", accountId, "Name", "BillingPostalCode"); batch.addDelete("Account", accountId); final SObjectBatchResponse response = template.requestBody("salesforce:composite-batch", batch, SObjectBatchResponse.class); boolean hasErrors = response.hasErrors(); // if any of the requests has resulted in either 4xx or 5xx HTTP status final List<SObjectBatchResult> results = response.getResults(); // results of the four operations sent in the batch final SObjectBatchResult updateResult = results.get(0); // update result final int updateStatus = updateResult.getStatusCode(); // probably 204 final Object updateResultData = updateResult.getResult(); // probably null final SObjectBatchResult createResult = results.get(1); // create result @SuppressWarnings("unchecked") final Map<String, Object> createData = (Map<String, Object>) createResult.getResult(); final String newAccountId = (String) createData.get("id"); // id of the new account, this is for JSON, for XML it would be createData.get("Result").get("id") final SObjectBatchResult retrieveResult = results.get(2); // retrieve result @SuppressWarnings("unchecked") final Map<String, Object> retrieveData = (Map<String, Object>) retrieveResult.getResult(); final String accountName = (String) retrieveData.get("Name"); // Name of the retrieved account, this is for JSON, for XML it would be retrieveData.get("Account").get("Name") final String accountBillingPostalCode = (String) retrieveData.get("BillingPostalCode"); // BillingPostalCode of the retrieved account, this is for JSON, for XML it would be retrieveData.get("Account").get("BillingPostalCode") final SObjectBatchResult deleteResult = results.get(3); // delete result final int deleteStatus = deleteResult.getStatusCode(); // probably 204 final Object deleteResultData = deleteResult.getResult(); // probably null 118.15.
Requests are made in JSON format hold some type information (i.e. it is known what values are strings and what values are numbers). Lets look at an example: SObjectComposite composite = new SObjectComposite("38.0", true); // first insert operation via an external id final Account updateAccount = new TestAccount(); updateAccount.setName("Salesforce"); updateAccount.setBillingStreet("Landmark @ 1 Market Street"); updateAccount.setBillingCity("San Francisco"); updateAccount.setBillingState("California"); updateAccount.setIndustry(Account_IndustryEnum.TECHNOLOGY); composite.addUpdate("Account", "001xx000003DIpcAAG", updateAccount, "UpdatedAccount"); final Contact newContact = new TestContact(); newContact.setLastName("John Doe"); newContact.setPhone("1234567890"); composite.addCreate(newContact, "NewContact"); final AccountContactJunction__c junction = new AccountContactJunction__c(); junction.setAccount__c("001xx000003DIpcAAG"); junction.setContactId__c("@{NewContact.id}"); composite.addCreate(junction, "JunctionRecord"); final SObjectCompositeResponse response = template.requestBody("salesforce:composite", composite, SObjectCompositeResponse.class); final List<SObjectCompositeResult> results = response.getCompositeResponse(); final SObjectCompositeResult accountUpdateResult = results.stream().filter(r -> "UpdatedAccount".equals(r.getReferenceId())).findFirst().get() final int statusCode = accountUpdateResult.getHttpStatusCode(); // should be 200 final Map<String, ?> accountUpdateBody = accountUpdateResult.getBody(); final SObjectCompositeResult contactCreationResult = results.stream().filter(r -> "JunctionRecord".equals(r.getReferenceId())).findFirst().get() 118.16. Using "raw" Salesforce composite It's possible to directly call Salesforce composite by preparing the Salesforce JSON request in the route thanks to the rawPayload option. For instance, you can have the following route: The route directly creates the body as JSON and directly submit to salesforce endpoint using rawPayload=true option. With this approach, you have the complete control on the Salesforce request. POST is the default HTTP method used to send raw Composite requests to salesforce. Use the compositeMethod option to override to the other supported value, GET , which returns a list of other available composite resources. 118.17. Using Raw Operation Send HTTP requests to salesforce with full, raw control of all aspects of the call. Any serialization or deserialization of request and response bodies must be performed in the route. The Content-Type HTTP header will be automatically set based on the format option, but this can be overridden with the rawHttpHeaders option. Parameter Type Description Default Required request body String or InputStream Body of the HTTP request rawPath String The portion of the endpoint URL after the domain name, e.g., '/services/data/v51.0/sobjects/Account/' x rawMethod String The HTTP method x rawQueryParameters String Comma separated list of message headers to include as query parameters. Do not url-encode values as this will be done automatically. rawHttpHeaders String Comma separated list of message headers to include as HTTP headers 118.17.1. Query example In this example we'll send a query to the REST API. The query must be passed in a URL parameter called "q", so we'll create a message header called q and tell the raw operation to include that message header as a URL parameter: 118.17.2. SObject example In this example, we'll pass a Contact the REST API in a create operation. 
118.17.2. SObject example
In this example, we'll pass a Contact to the REST API in a create operation. Since the raw operation does not perform any serialization, we make sure to pass XML in the message body. The response is:
118.18. Using Composite SObject Collections
The SObject Collections API executes actions on multiple records in one request. Use sObject Collections to reduce the number of round-trips between the client and server. The entire request counts as a single call toward your API limits. This resource is available in API version 42.0 and later. SObject records (aka DTOs) supplied to these operations must be instances of subclasses of AbstractDescribedSObjectBase . See the Maven Plugin section for information on generating these DTO classes. These operations serialize supplied DTOs to JSON.
118.18.1. compositeRetrieveSObjectCollections
Retrieve one or more records of the same object type. Parameter Type Description Default Required ids List of String or comma-separated string A list of one or more IDs of the objects to return. All IDs must belong to the same object type. x fields List of String or comma-separated string A list of fields to include in the response. The field names you specify must be valid, and you must have read-level permissions to each field. x sObjectName String Type of SObject, e.g. Account x sObjectClass String Fully-qualified class name of DTO class to use for deserializing the response. Required if sObjectName parameter does not resolve to a class that exists in the package specified by the package option.
118.18.2. compositeCreateSObjectCollections
Add up to 200 records, returning a list of SaveSObjectResult objects. Mixed SObject types are supported. Parameter Type Description Default Required request body List of SObject A list of SObjects to create x allOrNone boolean Indicates whether to roll back the entire request when the creation of any object fails (true) or to continue with the independent creation of other objects in the request. false
118.18.3. compositeUpdateSObjectCollections
Update up to 200 records, returning a list of SaveSObjectResult objects. Mixed SObject types are supported. Parameter Type Description Default Required request body List of SObject A list of SObjects to update x allOrNone boolean Indicates whether to roll back the entire request when the update of any object fails (true) or to continue with the independent update of other objects in the request. false
118.18.4. compositeUpsertSObjectCollections
Create or update (upsert) up to 200 records based on an external ID field, returning a list of UpsertSObjectResult objects. Mixed SObject types are not supported. Parameter Type Description Default Required request body List of SObject A list of SObjects to upsert x allOrNone boolean Indicates whether to roll back the entire request when the upsert of any object fails (true) or to continue with the independent upsert of other objects in the request. false sObjectName String Type of SObject, e.g. Account x sObjectIdName String Name of External ID field x
118.18.5. compositeDeleteSObjectCollections
Delete up to 200 records, returning a list of DeleteSObjectResult objects. Mixed SObject types are supported. Parameter Type Description Default Required sObjectIds or request body List of String or comma-separated string A list of up to 200 IDs of objects to be deleted. x allOrNone boolean Indicates whether to roll back the entire request when the deletion of any object fails (true) or to continue with the independent deletion of other objects in the request. false
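As an illustration of the collections operations above, the following sketch creates two Accounts in a single request. It assumes the Account DTO generated by the camel-salesforce-maven-plugin is on the classpath and relies on the documented behaviour that the response body is a list of SaveSObjectResult objects:

import java.util.Arrays;
import java.util.List;
import org.apache.camel.ProducerTemplate;

public class CreateCollectionsExample {
    public static void createAccounts(ProducerTemplate template) {
        Account first = new Account();   // generated DTO (see the Maven Plugin section)
        first.setName("Account One");
        Account second = new Account();
        second.setName("Account Two");

        // allOrNone=true rolls the whole request back if any record fails
        List<?> results = template.requestBody(
                "salesforce:compositeCreateSObjectCollections?allOrNone=true",
                Arrays.asList(first, second), List.class);

        // each element is a SaveSObjectResult, as documented above
        results.forEach(result -> System.out.println(result));
    }
}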
118.19. Sending null values to Salesforce
By default, SObject fields with null values are not sent to Salesforce. In order to send null values to Salesforce, use the fieldsToNull property, as follows: accountSObject.getFieldsToNull().add("Site");
118.20. Generating SOQL query strings
org.apache.camel.component.salesforce.api.utils.QueryHelper contains helper methods to generate SOQL queries. For instance, to fetch all custom fields from the Account SObject you can simply generate the SOQL SELECT by invoking: String allCustomFieldsQuery = QueryHelper.queryToFetchFilteredFieldsOf(new Account(), SObjectField::isCustom);
118.21. Camel Salesforce Maven Plugin
This Maven plugin generates DTOs for the Camel Salesforce component. For obvious security reasons it is recommended that the clientId, clientSecret, userName and password fields not be set in the pom.xml. The plugin should be configured for the rest of the properties, and can be executed using the following command: The generated DTOs use Jackson annotations. All Salesforce field types are supported. Date and time fields are mapped to java.time.ZonedDateTime by default, and picklist fields are mapped to generated Java Enumerations. Please refer to README.md for details on how to generate the DTOs.
118.22. Spring Boot Auto-Configuration
The component supports 91 options, which are listed below. Name Description Default Type camel.component.salesforce.all-or-none Composite API option to indicate whether to roll back all records if any are not successful. false Boolean camel.component.salesforce.apex-method APEX method name. String camel.component.salesforce.apex-query-params Query params for APEX method. Map camel.component.salesforce.apex-url APEX method URL. String camel.component.salesforce.api-version Salesforce API version. 53.0 String camel.component.salesforce.authentication-type Explicit authentication method to be used, one of USERNAME_PASSWORD, REFRESH_TOKEN or JWT. Salesforce component can auto-determine the authentication method to use from the properties set; set this property to eliminate any ambiguity. AuthenticationType camel.component.salesforce.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.salesforce.backoff-increment Backoff interval increment for Streaming connection restart attempts for failures beyond CometD auto-reconnect. The option is a long type. 1000 Long camel.component.salesforce.batch-id Bulk API Batch ID. String camel.component.salesforce.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored. false Boolean camel.component.salesforce.client-id OAuth Consumer Key of the connected app configured in the Salesforce instance setup. Typically a connected app needs to be configured but one can be provided by installing a package.
String camel.component.salesforce.client-secret OAuth Consumer Secret of the connected app configured in the Salesforce instance setup. String camel.component.salesforce.composite-method Composite (raw) method. String camel.component.salesforce.config Global endpoint configuration - use to set values that are common to all endpoints. The option is a org.apache.camel.component.salesforce.SalesforceEndpointConfig type. SalesforceEndpointConfig camel.component.salesforce.content-type Bulk API content type, one of XML, CSV, ZIP_XML, ZIP_CSV. ContentType camel.component.salesforce.default-replay-id Default replayId setting if no value is found in initialReplayIdMap. -1 Long camel.component.salesforce.enabled Whether to enable auto configuration of the salesforce component. This is enabled by default. Boolean camel.component.salesforce.fall-back-replay-id ReplayId to fall back to after an Invalid Replay Id response. -1 Long camel.component.salesforce.format Payload format to use for Salesforce API calls, either JSON or XML, defaults to JSON. As of Camel 3.12, this option only applies to the Raw operation. PayloadFormat camel.component.salesforce.http-client Custom Jetty Http Client to use to connect to Salesforce. The option is a org.apache.camel.component.salesforce.SalesforceHttpClient type. SalesforceHttpClient camel.component.salesforce.http-client-connection-timeout Connection timeout used by the HttpClient when connecting to the Salesforce server. 60000 Long camel.component.salesforce.http-client-idle-timeout Timeout used by the HttpClient when waiting for response from the Salesforce server. 10000 Long camel.component.salesforce.http-client-properties Used to set any properties that can be configured on the underlying HTTP client. Have a look at properties of SalesforceHttpClient and the Jetty HttpClient for all available options. Map camel.component.salesforce.http-max-content-length Max content length of an HTTP response. Integer camel.component.salesforce.http-proxy-auth-uri Used in authentication against the HTTP proxy server, needs to match the URI of the proxy server in order for the httpProxyUsername and httpProxyPassword to be used for authentication. String camel.component.salesforce.http-proxy-excluded-addresses A list of addresses for which HTTP proxy server should not be used. Set camel.component.salesforce.http-proxy-host Hostname of the HTTP proxy server to use. String camel.component.salesforce.http-proxy-included-addresses A list of addresses for which HTTP proxy server should be used. Set camel.component.salesforce.http-proxy-password Password to use to authenticate against the HTTP proxy server. String camel.component.salesforce.http-proxy-port Port number of the HTTP proxy server to use. Integer camel.component.salesforce.http-proxy-realm Realm of the proxy server, used in preemptive Basic/Digest authentication methods against the HTTP proxy server. String camel.component.salesforce.http-proxy-secure If set to false disables the use of TLS when accessing the HTTP proxy. true Boolean camel.component.salesforce.http-proxy-socks4 If set to true the configures the HTTP proxy to use as a SOCKS4 proxy. false Boolean camel.component.salesforce.http-proxy-use-digest-auth If set to true Digest authentication will be used when authenticating to the HTTP proxy, otherwise Basic authorization method will be used. false Boolean camel.component.salesforce.http-proxy-username Username to use to authenticate against the HTTP proxy server. 
String camel.component.salesforce.http-request-buffer-size HTTP request buffer size. May need to be increased for large SOQL queries. 8192 Integer camel.component.salesforce.include-details Include details in Salesforce1 Analytics report, defaults to false. Boolean camel.component.salesforce.initial-replay-id-map Replay IDs to start from per channel name. Map camel.component.salesforce.instance-id Salesforce1 Analytics report execution instance ID. String camel.component.salesforce.instance-url URL of the Salesforce instance used after authentication, by default received from Salesforce on successful authentication. String camel.component.salesforce.job-id Bulk API Job ID. String camel.component.salesforce.jwt-audience Value to use for the Audience claim (aud) when using OAuth JWT flow. If not set, the login URL will be used, which is appropriate in most cases. String camel.component.salesforce.keystore KeyStore parameters to use in OAuth JWT flow. The KeyStore should contain only one entry with private key and certificate. Salesforce does not verify the certificate chain, so this can easily be a selfsigned certificate. Make sure that you upload the certificate to the corresponding connected app. The option is a org.apache.camel.support.jsse.KeyStoreParameters type. KeyStoreParameters camel.component.salesforce.lazy-login If set to true prevents the component from authenticating to Salesforce with the start of the component. You would generally set this to the (default) false and authenticate early and be immediately aware of any authentication issues. false Boolean camel.component.salesforce.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.salesforce.limit Limit on number of returned records. Applicable to some of the API, check the Salesforce documentation. Integer camel.component.salesforce.locator Locator provided by salesforce Bulk 2.0 API for use in getting results for a Query job. String camel.component.salesforce.login-config All authentication configuration in one nested bean, all properties set there can be set directly on the component as well. The option is a org.apache.camel.component.salesforce.SalesforceLoginConfig type. SalesforceLoginConfig camel.component.salesforce.login-url URL of the Salesforce instance used for authentication, by default set to . String camel.component.salesforce.long-polling-transport-properties Used to set any properties that can be configured on the LongPollingTransport used by the BayeuxClient (CometD) used by the streaming api. Map camel.component.salesforce.max-backoff Maximum backoff interval for Streaming connection restart attempts for failures beyond CometD auto-reconnect. The option is a long type. 30000 Long camel.component.salesforce.max-records The maximum number of records to retrieve per set of results for a Bulk 2.0 Query. The request is still subject to the size limits. 
If you are working with a very large number of query results, you may experience a timeout before receiving all the data from Salesforce. To prevent a timeout, specify the maximum number of records your client is expecting to receive in the maxRecords parameter. This splits the results into smaller sets with this value as the maximum size. Integer camel.component.salesforce.not-found-behaviour Sets the behaviour of 404 not found status received from Salesforce API. Should the body be set to NULL NotFoundBehaviour#NULL or should a exception be signaled on the exchange NotFoundBehaviour#EXCEPTION - the default. NotFoundBehaviour camel.component.salesforce.notify-for-fields Notify for fields, options are ALL, REFERENCED, SELECT, WHERE. NotifyForFieldsEnum camel.component.salesforce.notify-for-operation-create Notify for create operation, defaults to false (API version = 29.0). Boolean camel.component.salesforce.notify-for-operation-delete Notify for delete operation, defaults to false (API version = 29.0). Boolean camel.component.salesforce.notify-for-operation-undelete Notify for un-delete operation, defaults to false (API version = 29.0). Boolean camel.component.salesforce.notify-for-operation-update Notify for update operation, defaults to false (API version = 29.0). Boolean camel.component.salesforce.notify-for-operations Notify for operations, options are ALL, CREATE, EXTENDED, UPDATE (API version 29.0). NotifyForOperationsEnum camel.component.salesforce.object-mapper Custom Jackson ObjectMapper to use when serializing/deserializing Salesforce objects. The option is a com.fasterxml.jackson.databind.ObjectMapper type. ObjectMapper camel.component.salesforce.packages In what packages are the generated DTO classes. Typically the classes would be generated using camel-salesforce-maven-plugin. Set it if using the generated DTOs to gain the benefit of using short SObject names in parameters/header values. Multiple packages can be separated by comma. String camel.component.salesforce.password Password used in OAuth flow to gain access to access token. It's easy to get started with password OAuth flow, but in general one should avoid it as it is deemed less secure than other flows. Make sure that you append security token to the end of the password if using one. String camel.component.salesforce.pk-chunking Use PK Chunking. Only for use in original Bulk API. Bulk 2.0 API performs PK chunking automatically, if necessary. Boolean camel.component.salesforce.pk-chunking-chunk-size Chunk size for use with PK Chunking. If unspecified, salesforce default is 100,000. Maximum size is 250,000. Integer camel.component.salesforce.pk-chunking-parent Specifies the parent object when you're enabling PK chunking for queries on sharing objects. The chunks are based on the parent object's records rather than the sharing object's records. For example, when querying on AccountShare, specify Account as the parent object. PK chunking is supported for sharing objects as long as the parent object is supported. String camel.component.salesforce.pk-chunking-start-row Specifies the 15-character or 18-character record ID to be used as the lower boundary for the first chunk. Use this parameter to specify a starting ID when restarting a job that failed between batches. String camel.component.salesforce.query-locator Query Locator provided by salesforce for use when a query results in more records than can be retrieved in a single call. Use this value in a subsequent call to retrieve additional records. 
String camel.component.salesforce.raw-http-headers Comma separated list of message headers to include as HTTP parameters for Raw operation. String camel.component.salesforce.raw-method HTTP method to use for the Raw operation. String camel.component.salesforce.raw-path The portion of the endpoint URL after the domain name. E.g., '/services/data/v52.0/sobjects/Account/'. String camel.component.salesforce.raw-payload Use raw payload String for request and response (either JSON or XML depending on format), instead of DTOs, false by default. false Boolean camel.component.salesforce.raw-query-parameters Comma separated list of message headers to include as query parameters for Raw operation. Do not url-encode values as this will be done automatically. String camel.component.salesforce.refresh-token Refresh token already obtained in the refresh token OAuth flow. One needs to setup a web application and configure a callback URL to receive the refresh token, or configure using the builtin callback at and then retrive the refresh_token from the URL at the end of the flow. Note that in development organizations Salesforce allows hosting the callback web application at localhost. String camel.component.salesforce.report-id Salesforce1 Analytics report Id. String camel.component.salesforce.report-metadata Salesforce1 Analytics report metadata for filtering. The option is a org.apache.camel.component.salesforce.api.dto.analytics.reports.ReportMetadata type. ReportMetadata camel.component.salesforce.result-id Bulk API Result ID. String camel.component.salesforce.s-object-blob-field-name SObject blob field name. String camel.component.salesforce.s-object-class Fully qualified SObject class name, usually generated using camel-salesforce-maven-plugin. String camel.component.salesforce.s-object-fields SObject fields to retrieve. String camel.component.salesforce.s-object-id SObject ID if required by API. String camel.component.salesforce.s-object-id-name SObject external ID field name. String camel.component.salesforce.s-object-id-value SObject external ID field value. String camel.component.salesforce.s-object-name SObject name if required or supported by API. String camel.component.salesforce.s-object-query Salesforce SOQL query string. String camel.component.salesforce.s-object-search Salesforce SOSL search string. String camel.component.salesforce.ssl-context-parameters SSL parameters to use, see SSLContextParameters class for all available options. The option is a org.apache.camel.support.jsse.SSLContextParameters type. SSLContextParameters camel.component.salesforce.update-topic Whether to update an existing Push Topic when using the Streaming API, defaults to false. false Boolean camel.component.salesforce.use-global-ssl-context-parameters Enable usage of global SSL context parameters. false Boolean camel.component.salesforce.user-name Username used in OAuth flow to gain access to access token. It's easy to get started with password OAuth flow, but in general one should avoid it as it is deemed less secure than other flows. String camel.component.salesforce.worker-pool-max-size Maximum size of the thread pool used to handle HTTP responses. 20 Integer camel.component.salesforce.worker-pool-size Size of the thread pool used to handle HTTP responses. 10 Integer
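For reference, a minimal Spring Boot application.properties for the Username-Password flow might look like the following sketch; the property keys are taken from the table above, while the values are placeholders to be replaced with your own:

camel.component.salesforce.client-id=<connected app consumer key>
camel.component.salesforce.client-secret=<connected app consumer secret>
camel.component.salesforce.user-name=<salesforce username>
camel.component.salesforce.password=<password plus security token>
camel.component.salesforce.authentication-type=USERNAME_PASSWORD
camel.component.salesforce.api-version=53.0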
[
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-salesforce-starter</artifactId> </dependency>",
"<plugin> <groupId>org.apache.camel.maven</groupId> <artifactId>camel-salesforce-maven-plugin</artifactId> <version>USD{camel-community.version}</version> <executions> <execution> <goals> <goal>generate</goal> </goals> <configuration> <clientId>USD{camelSalesforce.clientId}</clientId> <clientSecret>USD{camelSalesforce.clientSecret}</clientSecret> <userName>USD{camelSalesforce.userName}</userName> <password>USD{camelSalesforce.password}</password> <sslContextParameters> <secureSocketProtocol>TLSv1.2</secureSocketProtocol> </sslContextParameters> <includes> <include>Contact</include> </includes> </configuration> </execution> </executions> </plugin>",
"salesforce:operationName:topicName",
"salesforce:topic?options",
"salesforce:operationName?options",
"// in your Camel route set the header before Salesforce endpoint // .setHeader(\"Sforce-Limit-Info\", constant(\"api-usage\")) .to(\"salesforce:getGlobalObjects\") .to(myProcessor); // myProcessor will receive `Sforce-Limit-Info` header on the outbound // message class MyProcessor implements Processor { public void process(Exchange exchange) throws Exception { Message in = exchange.getIn(); String apiLimits = in.getHeader(\"Sforce-Limit-Info\", String.class); } }",
"...to(\"salesforce:upsertSObject?sObjectIdName=Name\")",
"...to(\"salesforce:createBatch\")..",
"from(\"salesforce:CamelTestTopic?notifyForFields=ALL¬ifyForOperations=ALL&sObjectName=Merchandise__c&updateTopic=true&sObjectQuery=SELECT Id, Name FROM Merchandise__c\")",
"from(\"salesforce:CamelTestTopic&sObjectName=Merchandise__c\")",
"class Order_Event__e extends AbstractDTOBase { @JsonProperty(\"OrderNumber\") private String orderNumber; // ... other properties and getters/setters } from(\"timer:tick\") .process(exchange -> { final Message in = exchange.getIn(); String orderNumber = \"ORD\" + exchange.getProperty(Exchange.TIMER_COUNTER); Order_Event__e event = new Order_Event__e(); event.setOrderNumber(orderNumber); in.setBody(event); }) .to(\"salesforce:createSObject\");",
"from(\"timer:tick\") .process(exchange -> { final Message in = exchange.getIn(); String orderNumber = \"ORD\" + exchange.getProperty(Exchange.TIMER_COUNTER); in.setBody(\"{\\\"OrderNumber\\\":\\\"\" + orderNumber + \"\\\"}\"); }) .to(\"salesforce:createSObject?sObjectName=Order_Event__e\");",
"PlatformEvent event = consumer.receiveBody(\"salesforce:event/Order_Event__e\", PlatformEvent.class);",
"from(\"salesforce:data/ChangeEvents?replayId=-1\").log(\"being notified of all change events\") from(\"salesforce:data/AccountChangeEvent?replayId=-1\").log(\"being notified of change events for Account records\") from(\"salesforce:data/Employee__ChangeEvent?replayId=-1\").log(\"being notified of change events for Employee__c custom object\")",
"public class ContentProcessor implements Processor { public void process(Exchange exchange) throws Exception { Message message = exchange.getIn(); ContentVersion cv = new ContentVersion(); ContentWorkspace cw = getWorkspace(exchange); cv.setFirstPublishLocationId(cw.getId()); cv.setTitle(\"test document\"); cv.setPathOnClient(\"test_doc.html\"); byte[] document = message.getBody(byte[].class); ObjectMapper mapper = new ObjectMapper(); String enc = mapper.convertValue(document, String.class); cv.setVersionDataUrl(enc); message.setBody(cv); } protected ContentWorkspace getWorkSpace(Exchange exchange) { // Look up the content workspace somehow, maybe use enrich() to add it to a // header that can be extracted here ---- } }",
"from(\"file:///home/camel/library\") .to(new ContentProcessor()) // convert bytes from the file into a ContentVersion SObject // for the salesforce component .to(\"salesforce:createSObject\");",
"from(\"direct:querySalesforce\") .to(\"salesforce:limits\") .choice() .when(spel(\"#{1.0 * body.dailyApiRequests.remaining / body.dailyApiRequests.max < 0.1}\")) .to(\"salesforce:query?...\") .otherwise() .setBody(constant(\"Used up Salesforce API limits, leaving 10% for critical routes\")) .endChoice()",
"from(\"direct:example1\")// .setHeader(\"approval.ContextId\", simple(\"USD{body['contextId']}\")) .setHeader(\"approval.NextApproverIds\", simple(\"USD{body['nextApproverIds']}\")) .to(\"salesforce:approval?\"// + \"approval.actionType=Submit\"// + \"&approval.comments=this is a test\"// + \"&approval.processDefinitionNameOrId=Test_Account_Process\"// + \"&approval.skipEntryCriteria=true\");",
"final Map<String, String> body = new HashMap<>(); body.put(\"contextId\", accountIds.iterator().next()); body.put(\"nextApproverIds\", userId); final ApprovalResult result = template.requestBody(\"direct:example1\", body, ApprovalResult.class);",
"from(\"direct:fetchRecentItems\") to(\"salesforce:recent\") .split().body() .log(\"USD{body.name} at USD{body.attributes.url}\");",
"Account account = Contact president = Contact marketing = Account anotherAccount = Contact sales = Asset someAsset = // build the tree SObjectTree request = new SObjectTree(); request.addObject(account).addChildren(president, marketing); request.addObject(anotherAccount).addChild(sales).addChild(someAsset); final SObjectTree response = template.requestBody(\"salesforce:composite-tree\", tree, SObjectTree.class); final Map<Boolean, List<SObjectNode>> result = response.allNodes() .collect(Collectors.groupingBy(SObjectNode::hasErrors)); final List<SObjectNode> withErrors = result.get(true); final List<SObjectNode> succeeded = result.get(false); final String firstId = succeeded.get(0).getId();",
"final String acountId = final SObjectBatch batch = new SObjectBatch(\"38.0\"); final Account updates = new Account(); updates.setName(\"NewName\"); batch.addUpdate(\"Account\", accountId, updates); final Account newAccount = new Account(); newAccount.setName(\"Account created from Composite batch API\"); batch.addCreate(newAccount); batch.addGet(\"Account\", accountId, \"Name\", \"BillingPostalCode\"); batch.addDelete(\"Account\", accountId); final SObjectBatchResponse response = template.requestBody(\"salesforce:composite-batch\", batch, SObjectBatchResponse.class); boolean hasErrors = response.hasErrors(); // if any of the requests has resulted in either 4xx or 5xx HTTP status final List<SObjectBatchResult> results = response.getResults(); // results of three operations sent in batch final SObjectBatchResult updateResult = results.get(0); // update result final int updateStatus = updateResult.getStatusCode(); // probably 204 final Object updateResultData = updateResult.getResult(); // probably null final SObjectBatchResult createResult = results.get(1); // create result @SuppressWarnings(\"unchecked\") final Map<String, Object> createData = (Map<String, Object>) createResult.getResult(); final String newAccountId = createData.get(\"id\"); // id of the new account, this is for JSON, for XML it would be createData.get(\"Result\").get(\"id\") final SObjectBatchResult retrieveResult = results.get(2); // retrieve result @SuppressWarnings(\"unchecked\") final Map<String, Object> retrieveData = (Map<String, Object>) retrieveResult.getResult(); final String accountName = retrieveData.get(\"Name\"); // Name of the retrieved account, this is for JSON, for XML it would be createData.get(\"Account\").get(\"Name\") final String accountBillingPostalCode = retrieveData.get(\"BillingPostalCode\"); // Name of the retrieved account, this is for JSON, for XML it would be createData.get(\"Account\").get(\"BillingPostalCode\") final SObjectBatchResult deleteResult = results.get(3); // delete result final int updateStatus = deleteResult.getStatusCode(); // probably 204 final Object updateResultData = deleteResult.getResult(); // probably null",
"SObjectComposite composite = new SObjectComposite(\"38.0\", true); // first insert operation via an external id final Account updateAccount = new TestAccount(); updateAccount.setName(\"Salesforce\"); updateAccount.setBillingStreet(\"Landmark @ 1 Market Street\"); updateAccount.setBillingCity(\"San Francisco\"); updateAccount.setBillingState(\"California\"); updateAccount.setIndustry(Account_IndustryEnum.TECHNOLOGY); composite.addUpdate(\"Account\", \"001xx000003DIpcAAG\", updateAccount, \"UpdatedAccount\"); final Contact newContact = new TestContact(); newContact.setLastName(\"John Doe\"); newContact.setPhone(\"1234567890\"); composite.addCreate(newContact, \"NewContact\"); final AccountContactJunction__c junction = new AccountContactJunction__c(); junction.setAccount__c(\"001xx000003DIpcAAG\"); junction.setContactId__c(\"@{NewContact.id}\"); composite.addCreate(junction, \"JunctionRecord\"); final SObjectCompositeResponse response = template.requestBody(\"salesforce:composite\", composite, SObjectCompositeResponse.class); final List<SObjectCompositeResult> results = response.getCompositeResponse(); final SObjectCompositeResult accountUpdateResult = results.stream().filter(r -> \"UpdatedAccount\".equals(r.getReferenceId())).findFirst().get() final int statusCode = accountUpdateResult.getHttpStatusCode(); // should be 200 final Map<String, ?> accountUpdateBody = accountUpdateResult.getBody(); final SObjectCompositeResult contactCreationResult = results.stream().filter(r -> \"JunctionRecord\".equals(r.getReferenceId())).findFirst().get()",
"from(\"timer:fire?period=2000\").setBody(constant(\"{\\n\" + \" \\\"allOrNone\\\" : true,\\n\" + \" \\\"records\\\" : [ { \\n\" + \" \\\"attributes\\\" : {\\\"type\\\" : \\\"FOO\\\"},\\n\" + \" \\\"Name\\\" : \\\"123456789\\\",\\n\" + \" \\\"FOO\\\" : \\\"XXXX\\\",\\n\" + \" \\\"ACCOUNT\\\" : 2100.0\\n\" + \" \\\"ExternalID\\\" : \\\"EXTERNAL\\\"\\n\" \" }]\\n\" + \"}\") .to(\"salesforce:composite?rawPayload=true\") .log(\"USD{body}\");",
"from(\"direct:queryExample\") .setHeader(\"q\", \"SELECT Id, LastName FROM Contact\") .to(\"salesforce:raw?format=JSON&rawMethod=GET&rawQueryParameters=q&rawPath=/services/data/v51.0/query\") // deserialize JSON results or handle in some other way",
"from(\"direct:createAContact\") .setBody(constant(\"<Contact><LastName>TestLast</LastName></Contact>\")) .to(\"salesforce:raw?format=XML&rawMethod=POST&rawPath=/services/data/v51.0/sobjects/Contact\")",
"<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"yes\"?> <Result> <id>0034x00000RnV6zAAF</id> <success>true</success> </Result>",
"accountSObject.getFieldsToNull().add(\"Site\");",
"String allCustomFieldsQuery = QueryHelper.queryToFetchFilteredFieldsOf(new Account(), SObjectField::isCustom);",
"mvn camel-salesforce:generate -DcamelSalesforce.clientId=<clientid> -DcamelSalesforce.clientSecret=<clientsecret> -DcamelSalesforce.userName=<username> -DcamelSalesforce.password=<password>"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-salesforce-component-starter
|
Chapter 10. Monitoring your Ansible Automation Platform
|
Chapter 10. Monitoring your Ansible Automation Platform After a successful installation of Ansible Automation Platform, it is crucial to prioritize the maintenance of its health and the ability to monitor key metrics. This section focuses on how to monitor the API metrics provided by the newly installed Ansible Automation Platform environment that resides within Red Hat OpenShift. 10.1. What will be used to monitor the API metrics? Prometheus and Grafana. Prometheus is an open source monitoring solution for collecting and aggregating metrics. Partner Prometheus' monitoring capabilities with Grafana, an open source solution for running data analytics and pulling up metrics in customizable dashboards, and you get a real-time visualization of metrics to track the status and health of your Ansible Automation Platform. 10.2. What metrics can I expect to see? The Grafana pre-built dashboard displays: Ansible Automation Platform version Number of controller nodes Number of hosts available in the license Number of hosts used Total users Jobs successful Jobs failed Quantity by type of job execution Graphics showing the number of jobs running and pending jobs Graph showing the growth of the tool in terms of the number of workflows, hosts, inventories, jobs, projects, organizations, etc. This Grafana dashboard can be customized to capture other metrics you may be interested in. However, customizing the Grafana dashboard is out of scope of this reference architecture. 10.3. Installation via an Ansible Playbook Monitoring Ansible Automation Platform via Prometheus with a customized Grafana dashboard can be set up in a matter of minutes. The following provides the steps to do just that, taking advantage of the pre-built Ansible playbook. The following steps are required to run the Ansible Playbook successfully: Creation of a custom credential type within automation controller Creation of a kubeconfig credential within automation controller Creation of a project and job template to run the Ansible Playbook 10.3.1. Create a custom credential type Within your Ansible Automation Platform dashboard, under Administration->Credential Types click the blue Add button. Provide a Name , e.g. Kubeconfig Within the input configuration, input the following YAML: Within the injector configuration, input the following YAML: Click Save . 10.3.2. Create a kubeconfig credential Within your Ansible Automation Platform dashboard, under Resources->Credentials click the blue Add button. Provide a Name , e.g. OpenShift-Kubeconfig Within the Credential Type dropdown, select Kubeconfig . Within the Type Details text box, insert your kubeconfig file for your Red Hat OpenShift cluster. Click Save . 10.3.3. Create a project Within your Ansible Automation Platform dashboard, under Resources->Projects click the blue Add button. Provide a Name , e.g. Monitoring AAP Project Select Default as the Organization. Select Default execution environment as the Execution Environment . Select Git as the Source Control Credential Type . Within the Type Details , Add the Source Control URL ( https://github.com/ansible/aap_ocp_refarch ) Within Options , Select Clean, Delete, Update Revision on Launch . Click Save . 10.3.4. Create a job template & run the Ansible Playbook Within your Ansible Automation Platform dashboard, under Resources->Templates click the blue Add->Add job template Provide a Name , e.g. Monitoring AAP Job Select Run as the Job Type . Select Demo Inventory as the Inventory . Select Monitoring AAP Project as the Project .
Select Default execution environment as the Execution Environment . Select aap-prometheus-grafana/playbook.yml as the Playbook . Select Credentials and switch the category from Machine to Kubeconfig . Select the appropriate kubeconfig for access to the Red Hat OpenShift cluster, e.g. OpenShift-Kubeconfig . Optional Step : Within the Variables , the following variables may be modified (an example is shown below): prometheus_namespace: <your-specified-value> ansible_namespace: <your-specified-value> Click Save . Click Launch to run the Ansible Playbook. Details for logging in to Grafana and Prometheus are shown within the job output.
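For reference, the optional variables can be entered in the Variables field as YAML. The namespace values below are placeholders only; use whatever namespaces you want the playbook to create or reuse:
---
prometheus_namespace: my-prometheus
ansible_namespace: my-aap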
|
[
"fields: - id: kube_config type: string label: kubeconfig secret: true multiline: true",
"env: K8S_AUTH_KUBECONFIG: '{{ tower.filename.kubeconfig }}' file: template.kubeconfig: '{{ kube_config }}'"
] |
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/deploying_ansible_automation_platform_2_on_red_hat_openshift/monitoring_your_ansible_automation_platform
|
Appendix A. Recommended JGroups Values for JBoss Data Grid
|
Appendix A. Recommended JGroups Values for JBoss Data Grid A.1. Supported JGroups Protocols The table contains a list of the JGroups protocols supported in JBoss Data Grid. Table A.1. Supported JGroups Protocols Protocol Details TCP TCP/IP is a replacement transport for UDP in situations where IP multicast cannot be used, such as operations over a WAN where routers may discard IP multicast packets. TCP is a transport protocol used to send unicast and multicast messages. When sending multicast messages, TCP sends multiple unicast messages. When using TCP, each message to all cluster members is sent as multiple unicast messages, or one to each member. As IP multicasting cannot be used to discover initial members, another mechanism must be used to find initial membership. Red Hat JBoss Data Grid's Hot Rod is a custom TCP client/server protocol. UDP UDP is a transport protocol that uses: IP multicasting to send messages to all members of a cluster. UDP datagrams for unicast messages, which are sent to a single member. When the UDP transport is started, it opens a unicast socket and a multicast socket. The unicast socket is used to send and receive unicast messages, while the multicast socket sends and receives multicast messages. The physical address of the channel will be the same as the address and port number of the unicast socket. PING The PING protocol is used for the initial discovery of members. It is used to detect the coordinator, which is the oldest member, by multicasting PING requests to an IP multicast address. Each member responds to the ping with a packet containing the coordinator's and their own address. After a specified number of milliseconds (N) or replies (M), the joiner determines the coordinator from the responses and sends it a JOIN request (handled by GMS). If there is no response, the joiner is considered the first member of the group. PING differs from TCPPING because it uses dynamic discovery, which means that a member does not need to know in advance where the other cluster members are. PING uses the transport's IP multicasting abilities to send a discovery request to the cluster. As a result, PING requires UDP as transport. TCPPING The TCPPING protocol uses a set of known members and pings them for discovery. This protocol has a static configuration. MPING The MPING (Multicast PING) protocol uses IP multicast to discover the initial membership. It can be used with all transports, but is usually used in combination with TCP. S3_PING S3_PING is a discovery protocol that is ideal for use with Amazon's Elastic Compute Cloud (EC2) because EC2 does not allow multicast and therefore MPING is not allowed. Each EC2 instance adds a small file to an S3 data container, known as a bucket. Each instance then reads the files in the bucket to discover the other members of the cluster. JDBC_PING JDBC_PING is a discovery protocol that utilizes a shared database to store information regarding nodes in the cluster. TCPGOSSIP TCPGOSSIP is a discovery protocol that uses one or more configured GossipRouter processes to store information about the nodes in the cluster. MERGE3 The MERGE3 protocol is available in JGroups 3.1 onwards. Unlike MERGE2, in MERGE3, all members periodically send an INFO message with their address (UUID), logical name, physical address and View ID. Periodically, each coordinator reviews the INFO details to ensure that there are no inconsistencies. FD_ALL Used for failure detection, FD_ALL uses a simple heartbeat protocol.
Each member maintains a table of all other members (except itself) and periodically multicasts a heartbeat. For example, when data or a heartbeat from P is received, the timestamp for P is set to the current time. Periodically, expired members are identified using the timestamp values. FD_SOCK FD_SOCK is a failure detection protocol based on a ring of TCP sockets created between cluster members. Each cluster member connects to its neighbor (the last member connects to the first member), which forms a ring. Member B is suspected when its neighbor A detects an abnormal closing of its TCP socket (usually due to node B crashing). However, if member B is leaving gracefully, it informs member A and does not become suspected when it does exit. FD_HOST FD_HOST is a failure detection protocol that detects the crashing or hanging of entire hosts and suspects all cluster members of that host through ICMP ping messages or custom commands. FD_HOST does not detect the crashing or hanging of single members on the local hosts, but only checks whether all the other hosts in the cluster are live and available. It is therefore used in conjunction with other failure detection protocols such as FD_ALL and FD_SOCK. This protocol is typically used when multiple cluster members are running on the same physical box. The FD_HOST protocol is supported on Windows for JBoss Data Grid. The cmd parameter must be set to ping.exe and the ping count must be specified. VERIFY_SUSPECT The VERIFY_SUSPECT protocol verifies whether a suspected member is dead by pinging the member before excluding it. If the member responds, the suspect message is discarded. NAKACK2 The NAKACK2 protocol is a successor to the NAKACK protocol and was introduced in JGroups 3.1. The NAKACK2 protocol is used for multicast messages and uses negative acknowledgements (NAK). Each message is tagged with a sequence number. The receiver tracks the sequence numbers and delivers the messages in order. When a gap in the sequence numbers is detected, the receiver asks the sender to retransmit the missing message. UNICAST3 The UNICAST3 protocol provides reliable delivery (no message sent by a sender is lost because they are sent in a numbered sequence) and uses the FIFO (First In First Out) properties for point-to-point messages between a sender and a receiver. UNICAST3 uses positive acks for retransmission. For example, sender A keeps sending message M until receiver B receives message M and returns an ack to indicate a successful delivery. Sender A keeps resending message M until it receives an ack from B, until B leaves the cluster, or A crashes. STABLE The STABLE protocol is a garbage collector for messages that have been viewed by all members in a cluster. Each member stores all messages because retransmission may be required. A message can only be removed from the retransmission buffers when all members have seen the message. The STABLE protocol periodically gossips its highest and lowest messages seen. The lowest value is used to compute the min (all lowest sequence numbers for all members) and messages with a sequence number below the min value can be discarded. GMS The GMS protocol is the group membership protocol. This protocol handles joins/leaves/crashes (suspicions) and emits new views accordingly. MFC MFC is the Multicast version of the flow control protocol. UFC UFC is the Unicast version of the flow control protocol. FRAG2 The FRAG2 protocol fragments large messages into smaller ones and then sends the smaller messages.
At the receiver side, the smaller fragments are reassembled into larger, complete messages and delivered to the application. FRAG2 is used for both multicast and unicast messages. ENCRYPT JGroups includes the ENCRYPT protocol to provide encryption for cluster traffic. By default, encryption only encrypts the message body; it does not encrypt message headers. To encrypt the entire message, including all headers, as well as destination and source addresses, the property encrypt_entire_message must be set to true. ENCRYPT must also be below any protocols with headers that must be encrypted. The ENCRYPT layer is used to encrypt and decrypt communication in JGroups. The JGroups ENCRYPT protocol can be used in two ways: Configured with a secretKey in a keystore. Configured with algorithms and key sizes. Each message is identified as encrypted with a specific encryption header identifying the encrypt header and an MD5 digest identifying the version of the key being used to encrypt and decrypt messages. SASL The SASL (Simple Authentication and Security Layer) protocol is a framework that provides authentication and data security services in connection-oriented protocols using replaceable mechanisms. Additionally, SASL provides a structured interface between protocols and mechanisms. RELAY2 The RELAY protocol bridges two remote clusters by creating a connection between one node in each site. This allows multicast messages sent out in one site to be relayed to the other and vice versa. JGroups includes the RELAY2 protocol, which is used for communication between sites in Red Hat JBoss Data Grid's Cross-Site Replication. The RELAY2 protocol works similarly to RELAY but with slight differences. Unlike RELAY, the RELAY2 protocol: connects more than two sites. connects sites that operate autonomously and are unaware of each other. offers both unicast and multicast routing between sites.
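To show how several of the protocols in Table A.1 combine into a stack, the following is a minimal, illustrative JGroups TCP configuration. The port numbers, host names, and attribute values are assumptions for demonstration only and are not the recommended JBoss Data Grid values; use the configuration shipped with your JBoss Data Grid distribution as the starting point.
<config xmlns="urn:org:jgroups">
    <!-- TCP transport; bind_port is an example value -->
    <TCP bind_port="7800"/>
    <!-- Static discovery of two example hosts -->
    <TCPPING initial_hosts="nodeA[7800],nodeB[7800]" port_range="0"/>
    <MERGE3/>
    <FD_SOCK/>
    <FD_ALL/>
    <VERIFY_SUSPECT/>
    <!-- Reliable multicast and unicast delivery -->
    <pbcast.NAKACK2 use_mcast_xmit="false"/>
    <UNICAST3/>
    <!-- Garbage collection of delivered messages -->
    <pbcast.STABLE/>
    <!-- Group membership -->
    <pbcast.GMS/>
    <!-- Flow control and fragmentation -->
    <MFC/>
    <UFC/>
    <FRAG2 frag_size="60000"/>
</config>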
| null |
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/appe-Recommended_JGroups_Values_for_JBoss_Data_Grid
|
Chapter 4. Running GFS on a GNBD Server Node
|
Chapter 4. Running GFS on a GNBD Server Node You can run GFS on a GNBD server node, with some restrictions. In addition, running GFS on a GNBD server node reduces performance. The following restrictions apply when running GFS on a GNBD server node. Important When running GFS on a GNBD server node you must follow the restrictions listed; otherwise, the GNBD server node will fail. A GNBD server node must have local access to all storage devices needed to mount a GFS file system. The GNBD server node must not import ( gnbd_import command) other GNBD devices to run the file system. The GNBD server must export all the GNBDs in uncached mode, and it must export the raw devices, not logical volume devices. GFS must be run on top of a logical volume device, not raw devices. Note You may need to increase the timeout period on the exported GNBDs to accommodate reduced performance. The need to increase the timeout period depends on the quality of the hardware.
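As an illustration of the uncached requirement, a GNBD server node typically exports a raw device without the -c (caching) flag, for example as shown below. The device path and export name are placeholders; confirm the exact options against the gnbd_export(8) man page for your release.
gnbd_export -d /dev/sdb1 -e gamma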
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/global_network_block_device/ch-server-node
|
18.3. Graphical User Interface with X11 or VNC
|
18.3. Graphical User Interface with X11 or VNC To run anaconda with the graphical user interface, use a workstation that has either an X Window System (X11) server or VNC client installed. You can use X11 forwarding with an SSH client or X11 directly. If the installer on your workstation fails because the X11 server does not support required X11 extensions you might have to upgrade the X11 server or use VNC. To use VNC, disable X11 forwarding in your SSH client prior to connecting to the Linux installation system on the mainframe or specify the vnc parameter in your parameter file. Using VNC is recommended for slow or long-distance network connections. Refer to Section 28.2, "Enabling Remote Access to the Installation System" . Table 18.1, "Parameters and SSH login types" shows how the parameters and SSH login type controls which anaconda user interface is used. Table 18.1. Parameters and SSH login types Parameter SSH login User interface none SSH without X11 forwarding VNC or text vnc SSH with or without X11 forwarding VNC none SSH with X11 forwarding X11 display= IP/hostname : display SSH without X11 forwarding X11 18.3.1. Installation using X11 forwarding You can connect a workstation to the Linux installation system on the mainframe and display the graphical installation program using SSH with X11 forwarding. You require an SSH client that allows X11 forwarding. To open the connection, first start the X server on the workstation. Then connect to the Linux installation system. You can enable X11 forwarding in your SSH client when you connect. For example, with OpenSSH enter the following in a terminal window on your workstation: Replace linuxvm.example.com with the hostname or IP address of the system you are installing. The -X option (the capital letter X ) enables X11 forwarding.
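If you opt for VNC instead of X11 forwarding, the relevant entries in the parameter file could look like the following sketch; the password shown is a placeholder and vncpassword is optional:
vnc
vncpassword=EXAMPLEPW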
|
[
"ssh -X [email protected]"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/installation_procedure_overview-s390-gui
|
7.12. bfa-firmware
|
7.12. bfa-firmware 7.12.1. RHBA-2013:0315 - bfa-firmware bug fix and enhancement update Updated bfa-firmware packages that fix multiple bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The bfa-firmware package contains the Brocade Fibre Channel Host Bus Adapter (HBA) Firmware to run Brocade Fibre Channel and CNA adapters. This package also supports the Brocade BNA network adapter. Note The bfa-firmware packages have been upgraded to upstream version 3.0.3.1, which provides a number of bug fixes and enhancements over the previous version. (BZ#830015) All users of bfa-firmware are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
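On a registered Red Hat Enterprise Linux 6 system, the updated packages are normally applied with yum; the command below is a generic example:
yum update bfa-firmware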
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/bfa-firmware
|
Chapter 15. System and Subscription Management
|
Chapter 15. System and Subscription Management iostat can now print device names longer than 72 characters Previously, device names longer than 72 characters were being truncated in iostat output because the device name field was too short. The allocated space for device names has been increased, and iostat can now print significantly longer device names in the output. (BZ# 1308862 ) Corrupted data files no longer crash sar Previously, the sar command could crash when loading a corrupted system activity data file due to localtime() function calls not being properly checked in sysstat . This bug has been fixed and corrupted system activity data files no longer crash sar . (BZ# 887231 ) pidstat no longer outputs values above 100% for certain fields Previously, pidstat could potentially run out of preallocated space for PIDs on systems with many short-lived processes. This could cause pidstat to output nonsensical values (values larger than 100%) in the %CPU , %user , and %sys fields. With this update, pidstat now automatically reallocates space for PIDs, and outputs correct values for all fields. (BZ#1224878) curl no longer requires both private and public SSH keys Previously, the curl tool required a full pair of private and public SSH keys for user authentication. If you only provided a private SSH key, which is common when using certain tools such as scp , user authentication failed. An upstream patch has been applied to the curl source code to improve SSH user authentication so that the public key does not need to be specified, and curl can now authenticate using only a private SSH key (see the example at the end of this chapter). (BZ# 1260742 ) NSS no longer reuses TLS sessions for servers with different host names Previously, Network Security Services (NSS) could incorrectly reuse an existing TLS session to connect to a server with a different host name. This caused some HTTPS servers to refuse requests made within that session and to respond with HTTP code 400 ( Bad Request ). A patch which prevents reusing TLS sessions for different servers has been applied to libcurl source code, allowing NSS to successfully communicate with servers which require the HTTP host name to match the TLS session host name. (BZ# 1269660 ) Fixed a memory leak in libcurl DNS cache implementation in libcurl could previously fail to remove cache entries which were no longer used. This resulted in a memory leak in applications using this library while resolving host names. This bug has been fixed, and libcurl-based applications no longer leak memory while resolving host names. (BZ#1302893) Enhancements to abrt reporting workflow The problem reporting workflow in abrt has been enhanced to improve the overall crash reporting experience and customer case creation. The enhancements include: The Provide additional information screen now allows you to select whether the problem happens repeatedly, and also contains an additional input field for providing steps to reproduce the problem. A new reporting workflow Submit anonymous report , which should be used when the reported problem is not critical and no Red Hat support team assistance is required. New tests have been added to the internal logic to ensure that users open cases only for critical problems and software released by Red Hat. Additionally, the client identifier has been updated to abrt_version: 2.0.8.1 .
(BZ#1258474) pmap no longer reports incorrect totals With the introduction of VmFlags in the kernel smaps interface, the pmap tool could no longer reliably process the content due to format differences of the VmFlags entry. As a consequence, pmap reported incorrect totals. The underlying source code has been patched, and pmap now works as expected. (BZ# 1262870 ) Fixes in free output With the introduction of the human readable ("-h") switch in the free tool, the layout generator had to be modified to support the new feature. This, however, affected printing of values longer than the column width. The values were truncated to prevent the layout from breaking when the values became longer than the reserved space in the columns. At the same time, the change caused free to insert an unwanted space character at the end of each line. Due to these two changes, the output could not be used in custom scripts. With this update, values longer than the column width are no longer truncated, no extra spaces are inserted at line ends, and the output of the free tool can now be processed without problems. (BZ# 1246379 ) Fixed a race condition when processing detected problems in abrtd This update fixes a race condition in the abrtd service which was causing a loss of detected problem data, filling system logs with repeated error messages, and causing abrt core dumper processes to hang, which in turn prevented dumped programs from being restarted. (BZ# 1245893 )
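As an illustration of the curl change described in this chapter, an SFTP download can now authenticate with only a private SSH key; the host, user name, and key path below are placeholders:
curl --user admin: --key ~/.ssh/id_rsa sftp://server.example.com/~/report.txt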
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.8_technical_notes/bug_fixes_system_and_subscription_management
|
Chapter 6. Updating a cluster using the CLI
|
Chapter 6. Updating a cluster using the CLI You can update, or upgrade, an OpenShift Container Platform cluster within a minor version by using the OpenShift CLI ( oc ). You can also update a cluster between minor versions by following the same instructions. 6.1. Prerequisites Have access to the cluster as a user with admin privileges. See Using RBAC to define and apply permissions . Have a recent etcd backup in case your update fails and you must restore your cluster to a previous state . Ensure all Operators previously installed through Operator Lifecycle Manager (OLM) are updated to their latest version in their latest channel. Updating the Operators ensures they have a valid upgrade path when the default OperatorHub catalogs switch from the current minor version to the next minor version during a cluster update. See Upgrading installed Operators for more information. Ensure that all machine config pools (MCPs) are running and not paused; a quick way to check this is shown at the end of this chapter. Nodes associated with a paused MCP are skipped during the update process. You can pause the MCPs if you are performing a canary rollout update strategy. If your cluster uses manually maintained credentials, ensure that the Cloud Credential Operator (CCO) is in an upgradeable state. For more information, see Upgrading clusters with manually maintained credentials for AWS , Azure , or GCP . Important When an update is failing to complete, the Cluster Version Operator (CVO) reports the status of any blocking components while attempting to reconcile the update. Rolling your cluster back to a previous version is not supported. If your update is failing to complete, contact Red Hat support. Using the unsupportedConfigOverrides section to modify the configuration of an Operator is unsupported and might block cluster updates. You must remove this setting before you can update your cluster. Additional resources Support policy for unmanaged Operators 6.2. Updating a cluster by using the CLI If updates are available, you can update your cluster by using the OpenShift CLI ( oc ). You can find information about available OpenShift Container Platform advisories and updates in the errata section of the Customer Portal. Prerequisites Install the OpenShift CLI ( oc ) that matches the version for your updated version. Log in to the cluster as a user with cluster-admin privileges. Install the jq package. Procedure Ensure that your cluster is available: USD oc get clusterversion Example output NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.6.9 True False 158m Cluster version is 4.6.9 Review the current update channel information and confirm that your channel is set to stable-4.7 : USD oc get clusterversion -o json|jq ".items[0].spec" Example output { "channel": "stable-4.7", "clusterID": "990f7ab8-109b-4c95-8480-2bd1deec55ff" } Important For production clusters, you must subscribe to a stable-* or fast-* channel. View the available updates and note the version number of the update that you want to apply: USD oc adm upgrade Example output Cluster version is 4.1.0 Updates: VERSION IMAGE 4.1.2 quay.io/openshift-release-dev/ocp-release@sha256:9c5f0df8b192a0d7b46cd5f6a4da2289c155fd5302dec7954f8f06c878160b8b Apply an update: To update to the latest version: USD oc adm upgrade --to-latest=true 1 To update to a specific version: USD oc adm upgrade --to=<version> 1 1 1 <version> is the update version that you obtained from the output of the previous command.
Review the status of the Cluster Version Operator: USD oc get clusterversion -o json|jq ".items[0].spec" Example output { "channel": "stable-4.7", "clusterID": "990f7ab8-109b-4c95-8480-2bd1deec55ff", "desiredUpdate": { "force": false, "image": "quay.io/openshift-release-dev/ocp-release@sha256:9c5f0df8b192a0d7b46cd5f6a4da2289c155fd5302dec7954f8f06c878160b8b", "version": "4.7.0" 1 } } 1 If the version number in the desiredUpdate stanza matches the value that you specified, the update is in progress. Review the cluster version status history to monitor the status of the update. It might take some time for all the objects to finish updating. USD oc get clusterversion -o json|jq ".items[0].status.history" Example output [ { "completionTime": null, "image": "quay.io/openshift-release-dev/ocp-release@sha256:b8fa13e09d869089fc5957c32b02b7d3792a0b6f36693432acc0409615ab23b7", "startedTime": "2021-01-28T20:30:50Z", "state": "Partial", "verified": true, "version": "4.7.0" }, { "completionTime": "2021-01-28T20:30:50Z", "image": "quay.io/openshift-release-dev/ocp-release@sha256:b8fa13e09d869089fc5957c32b02b7d3792a0b6f36693432acc0409615ab23b7", "startedTime": "2021-01-28T17:38:10Z", "state": "Completed", "verified": false, "version": "4.7.0" } ] The history contains a list of the most recent versions applied to the cluster. This value is updated when the CVO applies an update. The list is ordered by date, where the newest update is first in the list. Updates in the history have state Completed if the rollout completed and Partial if the update failed or did not complete. After the update completes, you can confirm that the cluster version has updated to the new version: USD oc get clusterversion Example output NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.7.0 True False 2m Cluster version is 4.7.0 If you are upgrading your cluster to the minor version, like version 4.y to 4.(y+1), it is recommended to confirm your nodes are upgraded before deploying workloads that rely on a new feature: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ip-10-0-168-251.ec2.internal Ready master 82m v1.20.0 ip-10-0-170-223.ec2.internal Ready master 82m v1.20.0 ip-10-0-179-95.ec2.internal Ready worker 70m v1.20.0 ip-10-0-182-134.ec2.internal Ready worker 70m v1.20.0 ip-10-0-211-16.ec2.internal Ready master 82m v1.20.0 ip-10-0-250-100.ec2.internal Ready worker 69m v1.20.0 6.3. Changing the update server by using the CLI Changing the update server is optional. If you have an OpenShift Update Service (OSUS) installed and configured locally, you must set the URL for the server as the upstream to use the local server during updates. The default value for upstream is https://api.openshift.com/api/upgrades_info/v1/graph . Procedure Change the upstream parameter value in the cluster version: USD oc patch clusterversion/version --patch '{"spec":{"upstream":"<update-server-url>"}}' --type=merge The <update-server-url> variable specifies the URL for the update server. Example output clusterversion.config.openshift.io/version patched
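To verify the machine config pool prerequisite from Section 6.1 before you start an update, you can list the pools and check the paused field; this is a generic check and the pool name below is only an example: oc get machineconfigpools and oc get machineconfigpool worker -o jsonpath='{.spec.paused}' should report that no pool you expect to update has paused set to true.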
|
[
"oc get clusterversion",
"NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.6.9 True False 158m Cluster version is 4.6.9",
"oc get clusterversion -o json|jq \".items[0].spec\"",
"{ \"channel\": \"stable-4.7\", \"clusterID\": \"990f7ab8-109b-4c95-8480-2bd1deec55ff\" }",
"oc adm upgrade",
"Cluster version is 4.1.0 Updates: VERSION IMAGE 4.1.2 quay.io/openshift-release-dev/ocp-release@sha256:9c5f0df8b192a0d7b46cd5f6a4da2289c155fd5302dec7954f8f06c878160b8b",
"oc adm upgrade --to-latest=true 1",
"oc adm upgrade --to=<version> 1",
"oc get clusterversion -o json|jq \".items[0].spec\"",
"{ \"channel\": \"stable-4.7\", \"clusterID\": \"990f7ab8-109b-4c95-8480-2bd1deec55ff\", \"desiredUpdate\": { \"force\": false, \"image\": \"quay.io/openshift-release-dev/ocp-release@sha256:9c5f0df8b192a0d7b46cd5f6a4da2289c155fd5302dec7954f8f06c878160b8b\", \"version\": \"4.7.0\" 1 } }",
"oc get clusterversion -o json|jq \".items[0].status.history\"",
"[ { \"completionTime\": null, \"image\": \"quay.io/openshift-release-dev/ocp-release@sha256:b8fa13e09d869089fc5957c32b02b7d3792a0b6f36693432acc0409615ab23b7\", \"startedTime\": \"2021-01-28T20:30:50Z\", \"state\": \"Partial\", \"verified\": true, \"version\": \"4.7.0\" }, { \"completionTime\": \"2021-01-28T20:30:50Z\", \"image\": \"quay.io/openshift-release-dev/ocp-release@sha256:b8fa13e09d869089fc5957c32b02b7d3792a0b6f36693432acc0409615ab23b7\", \"startedTime\": \"2021-01-28T17:38:10Z\", \"state\": \"Completed\", \"verified\": false, \"version\": \"4.7.0\" } ]",
"oc get clusterversion",
"NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.7.0 True False 2m Cluster version is 4.7.0",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ip-10-0-168-251.ec2.internal Ready master 82m v1.20.0 ip-10-0-170-223.ec2.internal Ready master 82m v1.20.0 ip-10-0-179-95.ec2.internal Ready worker 70m v1.20.0 ip-10-0-182-134.ec2.internal Ready worker 70m v1.20.0 ip-10-0-211-16.ec2.internal Ready master 82m v1.20.0 ip-10-0-250-100.ec2.internal Ready worker 69m v1.20.0",
"oc patch clusterversion/version --patch '{\"spec\":{\"upstream\":\"<update-server-url>\"}}' --type=merge",
"clusterversion.config.openshift.io/version patched"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/updating_clusters/updating-cluster-cli
|
Chapter 2. Image [image.openshift.io/v1]
|
Chapter 2. Image [image.openshift.io/v1] Description Image is an immutable representation of a container image and metadata at a point in time. Images are named by taking a hash of their contents (metadata and content) and any change in format, content, or metadata results in a new name. The images resource is primarily for use by cluster administrators and integrations like the cluster image registry - end users instead access images via the imagestreamtags or imagestreamimages resources. While image metadata is stored in the API, any integration that implements the container image registry API must provide its own storage for the raw manifest data, image config, and layer contents. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources dockerImageConfig string DockerImageConfig is a JSON blob that the runtime uses to set up the container. This is a part of manifest schema v2. Will not be set when the image represents a manifest list. dockerImageLayers array DockerImageLayers represents the layers in the image. May not be set if the image does not define that data or if the image represents a manifest list. dockerImageLayers[] object ImageLayer represents a single layer of the image. Some images may have multiple layers. Some may have none. dockerImageManifest string DockerImageManifest is the raw JSON of the manifest dockerImageManifestMediaType string DockerImageManifestMediaType specifies the mediaType of manifest. This is a part of manifest schema v2. dockerImageManifests array DockerImageManifests holds information about sub-manifests when the image represents a manifest list. When this field is present, no DockerImageLayers should be specified. dockerImageManifests[] object ImageManifest represents sub-manifests of a manifest list. The Digest field points to a regular Image object. dockerImageMetadata RawExtension DockerImageMetadata contains metadata about this image dockerImageMetadataVersion string DockerImageMetadataVersion conveys the version of the object, which if empty defaults to "1.0" dockerImageReference string DockerImageReference is the string that can be used to pull this image. dockerImageSignatures array (string) DockerImageSignatures provides the signatures as opaque blobs. This is a part of manifest schema v1. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta signatures array Signatures holds all signatures of the image. signatures[] object ImageSignature holds a signature of an image. It allows to verify image identity and possibly other claims as long as the signature is trusted. Based on this information it is possible to restrict runnable images to those matching cluster-wide policy. Mandatory fields should be parsed by clients doing image verification. The others are parsed from signature's content by the server. They serve just an informative purpose. 
Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). 2.1.1. .dockerImageLayers Description DockerImageLayers represents the layers in the image. May not be set if the image does not define that data or if the image represents a manifest list. Type array 2.1.2. .dockerImageLayers[] Description ImageLayer represents a single layer of the image. Some images may have multiple layers. Some may have none. Type object Required name size mediaType Property Type Description mediaType string MediaType of the referenced object. name string Name of the layer as defined by the underlying store. size integer Size of the layer in bytes as defined by the underlying store. 2.1.3. .dockerImageManifests Description DockerImageManifests holds information about sub-manifests when the image represents a manifest list. When this field is present, no DockerImageLayers should be specified. Type array 2.1.4. .dockerImageManifests[] Description ImageManifest represents sub-manifests of a manifest list. The Digest field points to a regular Image object. Type object Required digest mediaType manifestSize architecture os Property Type Description architecture string Architecture specifies the supported CPU architecture, for example amd64 or ppc64le . digest string Digest is the unique identifier for the manifest. It refers to an Image object. manifestSize integer ManifestSize represents the size of the raw object contents, in bytes. mediaType string MediaType defines the type of the manifest, possible values are application/vnd.oci.image.manifest.v1+json, application/vnd.docker.distribution.manifest.v2+json or application/vnd.docker.distribution.manifest.v1+json. os string OS specifies the operating system, for example linux . variant string Variant is an optional field repreenting a variant of the CPU, for example v6 to specify a particular CPU variant of the ARM CPU. 2.1.5. .signatures Description Signatures holds all signatures of the image. Type array 2.1.6. .signatures[] Description ImageSignature holds a signature of an image. It allows to verify image identity and possibly other claims as long as the signature is trusted. Based on this information it is possible to restrict runnable images to those matching cluster-wide policy. Mandatory fields should be parsed by clients doing image verification. The others are parsed from signature's content by the server. They serve just an informative purpose. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required type content Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources conditions array Conditions represent the latest available observations of a signature's current state. conditions[] object SignatureCondition describes an image signature condition of particular kind at particular probe time. content string Required: An opaque binary string which is an image's signature. created Time If specified, it is the time of signature's creation. imageIdentity string A human readable string representing image's identity. It could be a product name and version, or an image pull spec (e.g. "registry.access.redhat.com/rhel7/rhel:7.2"). 
issuedBy object SignatureIssuer holds information about an issuer of signing certificate or key. issuedTo object SignatureSubject holds information about a person or entity who created the signature. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta signedClaims object (string) Contains claims from the signature. type string Required: Describes a type of stored blob. 2.1.7. .signatures[].conditions Description Conditions represent the latest available observations of a signature's current state. Type array 2.1.8. .signatures[].conditions[] Description SignatureCondition describes an image signature condition of particular kind at particular probe time. Type object Required type status Property Type Description lastProbeTime Time Last time the condition was checked. lastTransitionTime Time Last time the condition transit from one status to another. message string Human readable message indicating details about last transition. reason string (brief) reason for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of signature condition, Complete or Failed. 2.1.9. .signatures[].issuedBy Description SignatureIssuer holds information about an issuer of signing certificate or key. Type object Property Type Description commonName string Common name (e.g. openshift-signing-service). organization string Organization name. 2.1.10. .signatures[].issuedTo Description SignatureSubject holds information about a person or entity who created the signature. Type object Required publicKeyID Property Type Description commonName string Common name (e.g. openshift-signing-service). organization string Organization name. publicKeyID string If present, it is a human readable key id of public key belonging to the subject used to verify image signature. It should contain at least 64 lowest bits of public key's fingerprint (e.g. 0x685ebe62bf278440). 2.2. API endpoints The following API endpoints are available: /apis/image.openshift.io/v1/images DELETE : delete collection of Image GET : list or watch objects of kind Image POST : create an Image /apis/image.openshift.io/v1/watch/images GET : watch individual changes to a list of Image. deprecated: use the 'watch' parameter with a list operation instead. /apis/image.openshift.io/v1/images/{name} DELETE : delete an Image GET : read the specified Image PATCH : partially update the specified Image PUT : replace the specified Image /apis/image.openshift.io/v1/watch/images/{name} GET : watch changes to an object of kind Image. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 2.2.1. /apis/image.openshift.io/v1/images Table 2.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Image Table 2.2. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. 
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. 
If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 2.3. Body parameters Parameter Type Description body DeleteOptions schema Table 2.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind Image Table 2.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. 
limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 2.6. HTTP responses HTTP code Response body 200 - OK ImageList schema 401 - Unauthorized Empty HTTP method POST Description create an Image Table 2.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled.
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.8. Body parameters Parameter Type Description body Image schema Table 2.9. HTTP responses HTTP code Response body 200 - OK Image schema 201 - Created Image schema 202 - Accepted Image schema 401 - Unauthorized Empty 2.2.2. /apis/image.openshift.io/v1/watch/images Table 2.10. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available.
This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of Image. deprecated: use the 'watch' parameter with a list operation instead. Table 2.11. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty 2.2.3. /apis/image.openshift.io/v1/images/{name} Table 2.12. Global path parameters Parameter Type Description name string name of the Image Table 2.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an Image Table 2.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be a non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy.
Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 2.15. Body parameters Parameter Type Description body DeleteOptions schema Table 2.16. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Image Table 2.17. HTTP responses HTTP code Response body 200 - OK Image schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Image Table 2.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means the user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 2.19. Body parameters Parameter Type Description body Patch schema Table 2.20. HTTP responses HTTP code Response body 200 - OK Image schema 201 - Created Image schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Image Table 2.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes.
The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.22. Body parameters Parameter Type Description body Image schema Table 2.23. HTTP responses HTTP code Response body 200 - OK Image schema 201 - Created Image schema 401 - Unauthorized Empty 2.2.4. /apis/image.openshift.io/v1/watch/images/{name} Table 2.24. Global path parameters Parameter Type Description name string name of the Image Table 2.25. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels.
Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind Image. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 2.26. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty
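As a practical illustration of the list, pagination, and watch parameters documented above, the following sketch drives the /apis/image.openshift.io/v1/images endpoint with oc get --raw; the page size of 5, the use of jq, and the page1.json/page2.json file names are assumptions, not part of the API reference:
$ oc get --raw "/apis/image.openshift.io/v1/images?limit=5" > page1.json
$ CONTINUE=$(jq -r '.metadata.continue' page1.json)    # empty when no further pages are available
$ oc get --raw "/apis/image.openshift.io/v1/images?limit=5&continue=${CONTINUE}" > page2.json
$ RV=$(jq -r '.metadata.resourceVersion' page1.json)
$ oc get --raw "/apis/image.openshift.io/v1/images?watch=true&resourceVersion=${RV}"    # stream subsequent changes as watch events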
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/image_apis/image-image-openshift-io-v1
|
Chapter 1. OpenShift Container Platform 4.7 Documentation
|
Chapter 1. OpenShift Container Platform 4.7 Documentation Welcome to the official OpenShift Container Platform 4.7 documentation, where you can find information to help you learn about OpenShift Container Platform and start exploring its features. To navigate the OpenShift Container Platform 4.7 documentation, you can either Use the left navigation bar to browse the documentation or Select the activity that interests you from the contents of this Welcome page You can start with Architecture and Security and compliance . Then, see Release notes . 1.1. Cluster installer activities As someone setting out to install an OpenShift Container Platform 4.7 cluster, this documentation helps you: OpenShift Container Platform installation overview : You can install OpenShift Container Platform on installer-provisioned or user-provisioned infrastructure. The OpenShift Container Platform installation program provides the flexibility to deploy OpenShift Container Platform on a range of different platforms. Install a cluster on AWS : You have the most installation options when you deploy a cluster on Amazon Web Services (AWS). You can deploy clusters with default settings or custom AWS settings . You can also deploy a cluster on AWS infrastructure that you provisioned yourself. You can modify the provided AWS CloudFormation templates to meet your needs. Install a cluster on Azure : You can deploy clusters with default settings , custom Azure settings , or custom networking settings in Microsoft Azure. You can also provision OpenShift Container Platform into an Azure Virtual Network or use Azure Resource Manager Templates to provision your own infrastructure. Install a cluster on GCP : You can deploy clusters with default settings or custom GCP settings on Google Cloud Platform (GCP). You can also perform a GCP installation where you provision your own infrastructure. Install a cluster on VMware vSphere : You can install OpenShift Container Platform on supported versions of vSphere. Install a cluster on bare metal : If none of the available platform and cloud providers meet your needs, you can install OpenShift Container Platform on bare metal. Install an installer-provisioned cluster on bare metal : You can install OpenShift Container Platform on bare metal with an installer-provisioned architecture. Create Red Hat Enterprise Linux CoreOS (RHCOS) machines on bare metal : You can install RHCOS machines using ISO or PXE in a fully live environment and configure them with kernel arguments, Ignition configs, or the coreos-installer command. Install a cluster on Red Hat OpenStack Platform (RHOSP) : You can install a cluster on RHOSP with customizations . Install a cluster on Red Hat Virtualization (RHV) : You can deploy clusters on Red Hat Virtualization (RHV) with a quick install or an install with customizations . Install a cluster in a restricted network : If your cluster that uses user-provisioned infrastructure on AWS , GCP , vSphere , or bare metal does not have full access to the internet, you can mirror the OpenShift Container Platform installation images and install a cluster in a restricted network. Install a cluster in an existing network : If you use an existing Virtual Private Cloud (VPC) in AWS or GCP or an existing VNet on Azure, you can install a cluster. Install a private cluster : If your cluster does not require external internet access, you can install a private cluster on AWS , Azure , or GCP . Internet access is still required to access the cloud APIs and installation media. 
Check installation logs : Access installation logs to evaluate issues that occur during OpenShift Container Platform 4.7 installation. Access OpenShift Container Platform : Use credentials output at the end of the installation process to log in to the OpenShift Container Platform cluster from the command line or web console. Install Red Hat OpenShift Container Storage : You can install Red Hat OpenShift Container Storage as an Operator to provide highly integrated and simplified persistent storage management for containers. 1.2. Developer activities Ultimately, OpenShift Container Platform is a platform for developing and deploying containerized applications. As an application developer, OpenShift Container Platform documentation helps you: Understand OpenShift Container Platform development : Learn the different types of containerized applications, from simple containers to advanced Kubernetes deployments and Operators. Work with projects : Create projects from the web console or CLI to organize and share the software you develop. Work with applications : Use the Developer perspective in the OpenShift Container Platform web console to easily create and deploy applications . Use the Topology view to visually interact with your applications, monitor status, connect and group components, and modify your code base. Use the developer CLI tool (odo) : The odo CLI tool lets developers create single or multi-component applications easily and automates deployment, build, and service route configurations. It abstracts complex Kubernetes and OpenShift Container Platform concepts, allowing developers to focus on developing their applications. Create CI/CD Pipelines : Pipelines are serverless, cloud-native, continuous integration and continuous deployment systems that run in isolated containers. They use standard Tekton custom resources to automate deployments and are designed for decentralized teams that work on microservices-based architecture. Deploy Helm charts : Helm 3 is a package manager that helps developers define, install, and update application packages on Kubernetes. A Helm chart is a packaging format that describes an application that can be deployed using the Helm CLI. Understand Operators : Operators are the preferred method for creating on-cluster applications for OpenShift Container Platform 4.7. Learn about the Operator Framework and how to deploy applications using installed Operators into your projects. Understand image builds : Choose from different build strategies (Docker, S2I, custom, and pipeline) that can include different kinds of source materials (from places like Git repositories, local binary inputs, and external artifacts). Then, follow examples of build types from basic builds to advanced builds. Create container images : A container image is the most basic building block in OpenShift Container Platform (and Kubernetes) applications. Defining image streams lets you gather multiple versions of an image in one place as you continue its development. S2I containers let you insert your source code into a base container that is set up to run code of a particular type, such as Ruby, Node.js, or Python. Create deployments : Use Deployment and DeploymentConfig objects to exert fine-grained management over applications. Use the Workloads page or oc CLI to manage deployments . Learn rolling, recreate, and custom deployment strategies. Create templates : Use existing templates or create your own templates that describe how an application is built or deployed. 
A template can combine images with descriptions, parameters, replicas, exposed ports and other content that defines how an application can be run or built. Develop Operators : Operators are the preferred method for creating on-cluster applications for OpenShift Container Platform 4.7. Learn the workflow for building, testing, and deploying Operators. Then create your own Operators based on Ansible or Helm , or configure built-in Prometheus monitoring using the Operator SDK. REST API reference : Lists OpenShift Container Platform application programming interface endpoints. 1.3. Cluster administrator activities Ongoing tasks on your OpenShift Container Platform 4.7 cluster include various activities for managing machines, providing services to users, and following monitoring and logging features that watch over the cluster. As a cluster administrator, this documentation helps you: Understand OpenShift Container Platform management : Learn about components of the OpenShift Container Platform 4.7 control plane. See how OpenShift Container Platform masters and workers are managed and updated through the Machine API and Operators . 1.3.1. Manage cluster components Manage machines : Manage machines in your cluster on AWS , Azure , or GCP by deploying health checks and applying autoscaling to machines . Manage container registries : Each OpenShift Container Platform cluster includes a built-in container registry for storing its images. You can also configure a separate Red Hat Quay registry to use with OpenShift Container Platform. The Quay.io web site provides a public container registry that stores OpenShift Container Platform containers and Operators. Manage users and groups : Add users and groups that have different levels of permissions to use or modify clusters. Manage authentication : Learn how user, group, and API authentication works in OpenShift Container Platform. OpenShift Container Platform supports multiple identity providers, including HTPasswd , Keystone , LDAP , basic authentication , request header , GitHub , GitLab , Google , and OpenID . Manage ingress , API server , and service certificates : OpenShift Container Platform creates certificates by default for the Ingress Operator, the API server, and for services needed by complex middleware applications that require encryption. At some point, you might need to change, add, or rotate these certificates. Manage networking : Networking in OpenShift Container Platform is managed by the Cluster Network Operator (CNO). The CNO uses iptables rules in kube-proxy to direct traffic between nodes and pods running on those nodes. The Multus Container Network Interface adds the capability to attach multiple network interfaces to a pod. Using network policy features, you can isolate your pods or permit selected traffic. Manage storage : OpenShift Container Platform allows cluster administrators to configure persistent storage using Red Hat OpenShift Container Storage , AWS Elastic Block Store , NFS , iSCSI , Container Storage Interface (CSI) , and more. As needed, you can expand persistent volumes , configure dynamic provisioning , and use CSI to configure , clone , and use snapshots of persistent storage. Manage Operators : Lists of Red Hat, ISV, and community Operators can be reviewed by cluster administrators and installed on their clusters . Once installed, you can run , upgrade , back up or otherwise manage the Operator on your cluster. 1.3.2. 
Change cluster components Use custom resource definitions (CRDs) to modify the cluster : Cluster features that are implemented with Operators can be modified with CRDs. Learn to create a CRD and manage resources from CRDs . Set resource quotas : Choose from CPU, memory and other system resources to set quotas . Prune and reclaim resources : You can reclaim space by pruning unneeded Operators, groups, deployments, builds, images, registries, and cron jobs. Scale and tune clusters : Set cluster limits, tune nodes, scale cluster monitoring, and optimize networking, storage, and routes for your environment. Update a cluster : To upgrade your OpenShift Container Platform to a later version, use the Cluster Version Operator (CVO). If an update is available from the OpenShift Container Platform update service, you apply that cluster update from either the web console or the CLI . Understanding the OpenShift Update Service : Learn about installing and managing a local OpenShift Update Service for recommending OpenShift Container Platform updates in restricted network environments. 1.3.3. Monitor the cluster Work with OpenShift Logging : Learn about OpenShift Logging and configure different OpenShift Logging types, such as Elasticsearch, Fluentd, and Kibana. Monitoring overview : Learn to configure the monitoring stack . Once your monitoring is configured, use the Web UI to access monitoring dashboards . In addition to infrastructure metrics, you can also scrape and view metrics for your own services. Remote health monitoring : OpenShift Container Platform collects anonymized aggregated information about your cluster and reports it to Red Hat via Telemetry and the Insights Operator. This information allows Red Hat to improve OpenShift Container Platform and to react to issues that impact customers more quickly. You can view the data collected by remote health monitoring .
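As a minimal sketch of the CLI side of the cluster update workflow mentioned under "Update a cluster" above, the following commands check for and request an update; the target version 4.7.13 is a placeholder and assumes that version is actually offered by the update service:
$ oc adm upgrade                      # show the current version and the available updates
$ oc adm upgrade --to=4.7.13          # request an update to one of the offered versions
$ oc get clusterversion version       # follow the progress reported by the Cluster Version Operator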
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/about/welcome-index
|
Chapter 2. Differences from upstream OpenJDK 21
|
Chapter 2. Differences from upstream OpenJDK 21 Red Hat build of OpenJDK in Red Hat Enterprise Linux contains a number of structural changes from the upstream distribution of OpenJDK. The Microsoft Windows version of Red Hat build of OpenJDK attempts to follow Red Hat Enterprise Linux updates as closely as possible. The following list details the most notable Red Hat build of OpenJDK 21 changes: FIPS support. Red Hat build of OpenJDK 21 automatically detects whether RHEL is in FIPS mode and automatically configures Red Hat build of OpenJDK 21 to operate in that mode. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. Cryptographic policy support. Red Hat build of OpenJDK 21 obtains the list of enabled cryptographic algorithms and key size constraints from the RHEL system configuration. These configuration components are used by the Transport Layer Security (TLS) encryption protocol, the certificate path validation, and any signed JARs. You can set different security profiles to balance safety and compatibility. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. The src.zip file includes the source for all of the JAR libraries shipped with Red Hat build of OpenJDK. Red Hat build of OpenJDK on RHEL uses system-wide timezone data files as a source for timezone information. Red Hat build of OpenJDK on RHEL uses system-wide CA certificates. Red Hat build of OpenJDK on Microsoft Windows includes the latest available timezone data from RHEL. Red Hat build of OpenJDK on Microsoft Windows uses the latest available CA certificates from RHEL. Additional resources See, Improve system FIPS detection (RHEL Planning Jira) See, Using system-wide cryptographic policies (RHEL documentation)
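The detection described above can be inspected from a RHEL shell, as in the following sketch; the com.redhat.fips property is the commonly documented switch for opting a single JVM out of FIPS alignment, and the application JAR name is a placeholder:
$ fips-mode-setup --check                       # reports whether RHEL itself is running in FIPS mode
$ update-crypto-policies --show                 # shows the system-wide cryptographic policy the JDK aligns with
$ java -Dcom.redhat.fips=false -jar myapp.jar   # assumption: run one JVM without FIPS alignment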
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/release_notes_for_red_hat_build_of_openjdk_21.0.6/rn-openjdk-diff-from-upstream
|
Chapter 134. AutoRestartStatus schema reference
|
Chapter 134. AutoRestartStatus schema reference Used in: KafkaConnectorStatus , KafkaMirrorMaker2Status Property Description count The number of times the connector or task is restarted. integer connectorName The name of the connector being restarted. string lastRestartTimestamp The last time the automatic restart was attempted. The required format is 'yyyy-MM-ddTHH:mm:ssZ' in the UTC time zone. string
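One hedged way to read this status block on a running cluster is shown below; it assumes a KafkaConnector named my-connector in the kafka namespace and that the status property exposing AutoRestartStatus is named autoRestart, as in current Streams releases:
$ oc get kafkaconnector my-connector -n kafka -o jsonpath='{.status.autoRestart}{"\n"}'
$ oc get kafkaconnector my-connector -n kafka -o jsonpath='{.status.autoRestart.count}{"\n"}'   # number of restart attempts so far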
| null |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-AutoRestartStatus-reference
|
10.2.3. Connecting to a Network Automatically
|
10.2.3. Connecting to a Network Automatically For any connection type you add or configure, you can choose whether you want NetworkManager to try to connect to that network automatically when it is available. Procedure 10.1. Configuring NetworkManager to Connect to a Network Automatically When Detected Right-click on the NetworkManager applet icon in the Notification Area and click Edit Connections . The Network Connections window appears. Click the arrow head if necessary to reveal the list of connections. Select the specific connection that you want to configure and click Edit . Check Connect automatically to cause NetworkManager to auto-connect to the connection whenever NetworkManager detects that it is available. Uncheck the check box if you do not want NetworkManager to connect automatically. If the box is unchecked, you will have to select that connection manually in the NetworkManager applet's left-click menu to cause it to connect.
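For scripted setups, the ifcfg-rh plugin maps the Connect automatically check box to the ONBOOT key of the connection's ifcfg file, so a rough equivalent of the procedure above is the following sketch; the interface name eth0 is an assumption, and the applet procedure remains the documented path on this release:
# grep ONBOOT /etc/sysconfig/network-scripts/ifcfg-eth0
ONBOOT=no
# sed -i 's/^ONBOOT=no/ONBOOT=yes/' /etc/sysconfig/network-scripts/ifcfg-eth0   # equivalent to checking Connect automatically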
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-connecting_to_a_network_automatically
|
18.5. LevelDB Cache Store
|
18.5. LevelDB Cache Store LevelDB is a key-value storage engine that provides an ordered mapping from string keys to string values. The LevelDB Cache Store uses two filesystem directories. Each directory is configured for a LevelDB database. One directory stores the non-expired data and the second directory stores the keys pending to be purged permanently. 18.5.1. Configuring LevelDB Cache Store (Remote Client-Server Mode) Procedure 18.1. To configure LevelDB Cache Store: Add the following elements to a cache definition in standalone.xml to configure the database: Note Directories will be automatically created if they do not exist. For details about the elements and parameters used in this sample configuration, see Section 18.3, "Cache Store Configuration Details (Remote Client-Server Mode)" . 18.5.2. LevelDB Cache Store Programmatic Configuration The following is a sample programmatic configuration of LevelDB Cache Store: Procedure 18.2. LevelDB Cache Store programmatic configuration Use the ConfigurationBuilder to create a new configuration object. Add the store using the LevelDBCacheStoreConfigurationBuilder class to build its configuration. Set the LevelDB Cache Store location path. The specified path stores the primary cache store data. The directory is automatically created if it does not exist. Specify the location for expired data using the expiredLocation parameter for the LevelDB Store. The specified path stores expired data before it is purged. The directory is automatically created if it does not exist. Note Programmatic configurations can only be used with Red Hat JBoss Data Grid Library mode. 18.5.3. LevelDB Cache Store Sample XML Configuration (Library Mode) The following is a sample XML configuration of LevelDB Cache Store: For details about the elements and parameters used in this sample configuration, see Section 18.2, "Cache Store Configuration Details (Library Mode)" . 18.5.4. Configure a LevelDB Cache Store Using JBoss Operations Network Use the following procedure to set up a new LevelDB cache store using the JBoss Operations Network. Procedure 18.3. Ensure that Red Hat JBoss Operations Network 3.2 or higher is installed and started. Install the Red Hat JBoss Data Grid Plugin Pack for JBoss Operations Network 3.2.0. Ensure that JBoss Data Grid is installed and started. Import JBoss Data Grid server into the inventory. Configure the JBoss Data Grid connection settings. Create a new LevelDB cache store as follows: Figure 18.1. Create a new LevelDB Cache Store Right-click the default cache. In the menu, mouse over the Create Child option. In the submenu, click LevelDB Store . Name the new LevelDB cache store as follows: Figure 18.2. Name the new LevelDB Cache Store In the Resource Create Wizard that appears, add a name for the new LevelDB Cache Store. Click Next to continue. Configure the LevelDB Cache Store settings as follows: Figure 18.3. Configure the LevelDB Cache Store Settings Use the options in the configuration window to configure a new LevelDB cache store. Click Finish to complete the configuration. Schedule a restart operation as follows: Figure 18.4. Schedule a Restart Operation In the screen's left panel, expand the JBossAS7 Standalone Servers entry, if it is not currently expanded. Click JDG (0.0.0.0:9990) from the expanded menu items. In the screen's right panel, details about the selected server display. Click the Operations tab. In the Operation drop-down box, select the Restart operation.
Select the radio button for the Now entry. Click Schedule to restart the server immediately. Discover the new LevelDB cache store as follows: Figure 18.5. Discover the New LevelDB Cache Store In the screen's left panel, select each of the following items in the specified order to expand them: JBossAS7 Standalone Servers JDG (0.0.0.0:9990) infinispan Cache Containers local Caches default LevelDB Stores Click the name of your new LevelDB Cache Store to view its configuration information in the right panel.
|
[
"<leveldb-store path=\"/path/to/leveldb/data\" passivation=\"false\" purge=\"false\" > <expiration path=\"/path/to/leveldb/expires/data\" /> <implementation type=\"JNI\" /> </leveldb-store>",
"Configuration cacheConfig = new ConfigurationBuilder().persistence() .addStore(LevelDBStoreConfigurationBuilder.class) .location(\"/tmp/leveldb/data\") .expiredLocation(\"/tmp/leveldb/expired\").build();",
"<namedCache name=\"vehicleCache\"> <persistence passivation=\"false\"> <leveldbStore xmlns=\"urn:infinispan:config:store:leveldb:6.0\" location=\"/path/to/leveldb/data\" expiredLocation=\"/path/to/expired/data\" shared=\"false\" preload=\"true\"/> </persistence> </namedCache>"
]
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/sect-leveldb_cache_store
|
Chapter 6. Kourier and Istio ingresses
|
Chapter 6. Kourier and Istio ingresses OpenShift Serverless supports the following two ingress solutions: Kourier Istio using Red Hat OpenShift Service Mesh The default is Kourier. 6.1. Kourier and Istio ingress solutions 6.1.1. Kourier Kourier is the default ingress solution for OpenShift Serverless. It has the following properties: It is based on envoy proxy. It is simple and lightweight. It provides the basic routing functionality that Serverless needs to provide its set of features. It supports basic observability and metrics. It supports basic TLS termination of Knative Service routing. It provides only limited configuration and extension options. 6.1.2. Istio using OpenShift Service Mesh Using Istio as the ingress solution for OpenShift Serverless enables an additional feature set that is based on what Red Hat OpenShift Service Mesh offers: Native mTLS between all connections Serverless components are part of a service mesh Additional observability and metrics Authorization and authentication support Custom rules and configuration, as supported by Red Hat OpenShift Service Mesh However, the additional features come with a higher overhead and resource consumption. For details, see the Red Hat OpenShift Service Mesh documentation. See the "Integrating Service Mesh with OpenShift Serverless" section of Serverless documentation for Istio requirements and installation instructions. 6.1.3. Traffic configuration and routing Regardless of whether you use Kourier or Istio, the traffic for a Knative Service is configured in the knative-serving namespace by the net-kourier-controller or the net-istio-controller respectively. The controller reads the KnativeService and its child custom resources to configure the ingress solution. Both ingress solutions provide an ingress gateway pod that becomes part of the traffic path. Both ingress solutions are based on Envoy. By default, Serverless has two routes for each KnativeService object: A cluster-external route that is forwarded by the OpenShift router, for example myapp-namespace.example.com . A cluster-local route containing the cluster domain, for example myapp.namespace.svc.cluster.local . This domain can and should be used to call Knative services from Knative or other user workloads. The ingress gateway can forward requests either in the serve mode or the proxy mode: In the serve mode, requests go directly to the Queue-Proxy sidecar container of the Knative service. In the proxy mode, requests first go through the Activator component in the knative-serving namespace. The choice of mode depends on the configuration of Knative, the Knative service, and the current traffic. For example, if a Knative Service is scaled to zero, requests are sent to the Activator component, which acts as a buffer until a new Knative service pod is started.
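To see the two routes in practice, the following sketch calls a Knative service over each of them; the service name myapp, the namespace namespace, the external hostname, and the curlimages/curl image are assumptions:
$ oc run curl-test --rm -it --restart=Never --image=curlimages/curl -- curl -s http://myapp.namespace.svc.cluster.local    # cluster-local route, for calls from other workloads in the cluster
$ curl -s https://myapp-namespace.example.com    # cluster-external route, forwarded by the OpenShift router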
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.35/html/serving/kourier-and-istio-ingresses
|
Chapter 3. Red Hat build of OpenJDK features
|
Chapter 3. Red Hat build of OpenJDK features The latest Red Hat build of OpenJDK 17 release might include new features. Additionally, the latest release might enhance, deprecate, or remove features that originated from Red Hat build of OpenJDK 17 releases. Note For all the other changes and security fixes, see OpenJDK 17.0.11 Released . Red Hat build of OpenJDK enhancements Red Hat build of OpenJDK 17 provides enhancements to features originally created in releases of Red Hat build of OpenJDK. XML Security for Java updated to Apache Santuario 3.0.3 In Red Hat build of OpenJDK 17.0.11, the XML signature implementation is based on Apache Santuario 3.0.3. This enhancement introduces the following four SHA-3-based RSA-MGF1 SignatureMethod algorithms: SHA3_224_RSA_MGF1 SHA3_256_RSA_MGF1 SHA3_384_RSA_MGF1 SHA3_512_RSA_MGF1 Because the javax.xml.crypto.dsig.SignatureMethod API cannot be modified in update releases to provide constant values for the new algorithms, use the following equivalent string literal values for these algorithms: http://www.w3.org/2007/05/xmldsig-more#sha3-224-rsa-MGF1 http://www.w3.org/2007/05/xmldsig-more#sha3-256-rsa-MGF1 http://www.w3.org/2007/05/xmldsig-more#sha3-384-rsa-MGF1 http://www.w3.org/2007/05/xmldsig-more#sha3-512-rsa-MGF1 This enhancement also introduces support for the ED25519 and ED448 elliptic curve algorithms, which are both Edwards-curve Digital Signature Algorithm (EdDSA) signature schemes. Note In contrast to the upstream community version of Apache Santuario 3.0.3, the JDK still supports the here() function. However, future support for the here() function is not guaranteed. You should avoid using here() in new XML signatures. You should also update any XML signatures that currently use here() to stop using this function. The here() function is enabled by default. To disable the here() function, ensure that the jdk.xml.dsig.hereFunctionSupported system property is set to false . See JDK-8319124 (JDK Bug System) . Fixed indefinite hanging of jspawnhelper In earlier releases, if the parent JVM process failed before successful completion of the handshake between the JVM and a jspawnhelper process, the jspawnhelper process could remain unresponsive indefinitely. In Red Hat build of OpenJDK 17.0.11, if the parent process fails prematurely, the jspawnhelper process receives an end-of-file (EOF) signal from the communication pipe. This enhancement helps to ensure that the jspawnhelper process shuts down correctly. See JDK-8307990 (JDK Bug System) . SystemTray.isSupported() method returns false on most Linux desktops In Red Hat build of OpenJDK 17.0.11, the java.awt.SystemTray.isSupported() method returns false on systems that do not support the SystemTray API correctly. This enhancement is in accordance with the SystemTray API specification. The SystemTray API is used to interact with the taskbar in the system desktop to provide notifications. SystemTray might also include an icon representing an application. Due to an underlying platform issue, GNOME desktop support for taskbar icons has not worked correctly for several years. This platform issue affects the JDK's ability to provide SystemTray support on GNOME desktops. This issue typically affects systems that use GNOME Shell 44 or earlier. Note Because the lack of correct SystemTray support is a long-standing issue on some systems, this API enhancement to return false on affected systems is likely to have a minimal impact on users. See JDK-8322750 (JDK Bug System) . 
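For example, the system property named above can be passed on the java command line when signing or validating XML documents; the application JAR is a placeholder:
$ java -Djdk.xml.dsig.hereFunctionSupported=false -jar xml-signer.jar    # disable the here() function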
Certainly R1 and E1 root certificates added In Red Hat build of OpenJDK 17.0.11, the cacerts truststore includes two Certainly root certificates: Certificate 1 Name: Certainly Alias name: certainlyrootr1 Distinguished name: CN=Certainly Root R1, O=Certainly, C=US Certificate 2 Name: Certainly Alias name: certainlyroote1 Distinguished name: CN=Certainly Root E1, O=Certainly, C=US See JDK-8321408 (JDK Bug System) .
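One way to confirm that the new roots are present in a given installation is to query the bundled cacerts truststore with keytool, as in the following sketch; the default changeit store password is an assumption about an unmodified truststore:
$ keytool -list -cacerts -storepass changeit | grep -i certainly    # expect the certainlyrootr1 and certainlyroote1 aliases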
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_red_hat_build_of_openjdk_17.0.11/rn_openjdk-17011-features_openjdk
|
Chapter 3. Getting started
|
Chapter 3. Getting started This chapter guides you through the steps to set up your environment and run a simple messaging program. 3.1. Prerequisites To build the example, Maven must be configured to use the Red Hat repository or a local repository . You must install the examples . You must have a message broker listening for connections on localhost . It must have anonymous access enabled. For more information, see Starting the broker . You must have a queue named queue . For more information, see Creating a queue . 3.2. Running Hello World The Hello World example calls createConnection() for each character of the string "Hello World", transferring one at a time. Because AMQ JMS Pool is in use, each call reuses the same underlying JMS Connection object. Procedure Use Maven to build the examples by running the following command in the <source-dir> /pooled-jms-examples directory. $ mvn clean package dependency:copy-dependencies -DincludeScope=runtime -DskipTests The addition of dependency:copy-dependencies results in the dependencies being copied into the target/dependency directory. Use the java command to run the example. On Linux or UNIX: $ java -cp "target/classes:target/dependency/*" org.messaginghub.jms.example.HelloWorld On Windows: > java -cp "target\classes;target\dependency\*" org.messaginghub.jms.example.HelloWorld Running it on Linux results in the following output: $ java -cp "target/classes/:target/dependency/*" org.messaginghub.jms.example.HelloWorld 2018-05-17 11:04:23,393 [main ] - INFO JmsPoolConnectionFactory - Provided ConnectionFactory is JMS 2.0+ capable. 2018-05-17 11:04:23,715 [localhost:5672]] - INFO SaslMechanismFinder - Best match for SASL auth was: SASL-ANONYMOUS 2018-05-17 11:04:23,739 [localhost:5672]] - INFO JmsConnection - Connection ID:104dfd29-d18d-4bf5-aab9-a53660f58633:1 connected to remote Broker: amqp://localhost:5672 Hello World The source code for the example is in the <source-dir> /pooled-jms-examples/src/main/java directory. The JNDI and logging configuration is in the <source-dir> /pooled-jms-examples/src/main/resources directory.
|
[
"mvn clean package dependency:copy-dependencies -DincludeScope=runtime -DskipTests",
"java -cp \"target/classes:target/dependency/*\" org.messaginghub.jms.example.HelloWorld",
"> java -cp \"target\\classes;target\\dependency\\*\" org.messaginghub.jms.example.HelloWorld",
"java -cp \"target/classes/:target/dependency/*\" org.messaginghub.jms.example.HelloWorld 2018-05-17 11:04:23,393 [main ] - INFO JmsPoolConnectionFactory - Provided ConnectionFactory is JMS 2.0+ capable. 2018-05-17 11:04:23,715 [localhost:5672]] - INFO SaslMechanismFinder - Best match for SASL auth was: SASL-ANONYMOUS 2018-05-17 11:04:23,739 [localhost:5672]] - INFO JmsConnection - Connection ID:104dfd29-d18d-4bf5-aab9-a53660f58633:1 connected to remote Broker: amqp://localhost:5672 Hello World"
] |
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_the_amq_jms_pool_library/getting_started
|
Chapter 7. Performing post-upgrade tasks on the RHEL 9 system
|
Chapter 7. Performing post-upgrade tasks on the RHEL 9 system After the in-place upgrade, clean up your RHEL 9 system by removing unneeded packages, disabling incompatible repositories, and updating the rescue kernel and initial RAM disk. 7.1. Performing post-upgrade tasks This procedure lists the major tasks recommended after an in-place upgrade to RHEL 9. Prerequisites The system has been upgraded following the steps described in Performing the upgrade and you have been able to log in to RHEL 9. The status of the in-place upgrade has been verified following the steps described in Verifying the post-upgrade state . Procedure After performing the upgrade, complete the following tasks: Remove any remaining Leapp packages from the exclude list in the /etc/dnf/dnf.conf configuration file, including the snactor package, which is a tool for upgrade extension development. During the in-place upgrade, Leapp packages that were installed with the Leapp utility are automatically added to the exclude list to prevent critical files from being removed or updated. After the in-place upgrade, these Leapp packages must be removed from the exclude list before they can be removed from the system. To manually remove packages from the exclude list, edit the /etc/dnf/dnf.conf configuration file and remove the desired Leapp packages from the exclude list. To remove all packages from the exclude list: Remove remaining RHEL 8 packages, including remaining Leapp packages. Remove the old kernel packages from the RHEL 9 system. For more information on removing kernel packages, see the Red Hat Knowledgebase solution What is the proper method to remove old kernels from a Red Hat Enterprise Linux system? Locate remaining RHEL 8 packages: Remove remaining RHEL 8 packages from your RHEL 9 system. To ensure that RPM dependencies are maintained, use DNF when performing this action. Review the transaction before accepting to ensure no packages are unintentionally removed. For example: Remove remaining Leapp dependency packages: Optional: Remove all remaining upgrade-related data from the system: Important Removing this data might limit Red Hat Support's ability to investigate and troubleshoot post-upgrade problems. Disable DNF repositories whose packages are not RHEL 9-compatible. Repositories managed by RHSM are handled automatically. To disable these repositories: Replace repository_id with the repository ID. Replace the old rescue kernel and initial RAM disk with the current kernel and disk: Remove the existing rescue kernel and initial RAM disk: Reinstall the rescue kernel and related initial RAM disk: If your system is on the IBM Z architecture, update the zipl boot loader: Optional: Check existing configuration files: Review, remediate, and then remove the rpmnew , rpmsave , and leappsave files. Note that rpmsave and leappsave are equivalent and can be handled similarly. For more information, see What are rpmnew & rpmsave files? Remove configuration files for RHEL 8 DNF modules from the /etc/dnf/modules.d/ directory that are no longer valid. Note that these files have no effect on the system when related DNF modules do not exist. Re-evaluate and re-apply your security policies. In particular, change the SELinux mode to enforcing. For details, see Applying security policies . Verification Verify that the previously removed rescue kernel and rescue initial RAM disk files have been created for the current kernel: Verify that the rescue boot entry refers to the existing rescue files.
See the grubby output: Review the grubby output and verify that no RHEL 8 boot entries are configured: Verify that no files related to RHEL 8 are present in the /boot/loader/entries/ directory:
|
[
"dnf config-manager --save --setopt exclude=''",
"rpm -qa | grep -e '\\.el[78]' | grep -vE '^(gpg-pubkey|libmodulemd|katello-ca-consumer)' | sort",
"dnf remove $(rpm -qa | grep \\.el[78] | grep -vE 'gpg-pubkey|libmodulemd|katello-ca-consumer')",
"dnf remove leapp-deps-el9 leapp-repository-deps-el9",
"rm -rf /var/log/leapp /root/tmp_leapp_py3 /var/lib/leapp",
"dnf config-manager --set-disabled <repository_id>",
"rm /boot/vmlinuz-*rescue* /boot/initramfs-*rescue*",
"/usr/lib/kernel/install.d/51-dracut-rescue.install add \"$(uname -r)\" /boot \"/boot/vmlinuz-$(uname -r)\"",
"zipl",
"ls /boot/vmlinuz-*rescue* /boot/initramfs-*rescue* lsinitrd /boot/initramfs-*rescue*.img | grep -qm1 \"$(uname -r)/kernel/\" && echo \"OK\" || echo \"FAIL\"",
"grubby --info $(ls /boot/vmlinuz-*rescue*)",
"grubby --info ALL",
"grep -r \".el8\" \"/boot/loader/entries/\" || echo \"Everything seems ok.\""
]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/upgrading_from_rhel_8_to_rhel_9/performing-post-upgrade-tasks-on-the-rhel-9-system_upgrading-from-rhel-8-to-rhel-9
|
2.10.2. Whitelisting Parameters
|
2.10.2. Whitelisting Parameters The cgsnapshot utility also allows parameter whitelisting. If a parameter is whitelisted, it appears in the output generated by cgsnapshot . If a parameter is neither blacklisted nor whitelisted, a warning appears informing you of this: By default, there is no whitelist configuration file. To specify which file to use as a whitelist, use the -w option. For example: Specifying the -t option tells cgsnapshot to generate a configuration with parameters from the whitelist only.
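Combining the options discussed in this section, the following sketch generates a configuration that contains whitelisted parameters only; the whitelist and output paths reuse the example paths shown below and are assumptions:
~]$ cgsnapshot -t -w ~/test/my_whitelist.conf -f ~/test/cgconfig_test.conf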
|
[
"~]$ cgsnapshot -f ~/test/cgconfig_test.conf WARNING: variable cpu.rt_period_us is neither blacklisted nor whitelisted WARNING: variable cpu.rt_runtime_us is neither blacklisted nor whitelisted",
"~]$ cgsnapshot -w ~/test/my_whitelist.conf"
]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/resource_management_guide/sec-cgsnapshot_whitelist
|
2.8.5. FORWARD and NAT Rules
|
2.8.5. FORWARD and NAT Rules Most ISPs provide only a limited number of publicly routable IP addresses to the organizations they serve. Administrators must, therefore, find alternative ways to share access to Internet services without giving public IP addresses to every node on the LAN. Using private IP addresses is the most common way of allowing all nodes on a LAN to properly access internal and external network services. Edge routers (such as firewalls) can receive incoming transmissions from the Internet and route the packets to the intended LAN node. At the same time, firewalls/gateways can also route outgoing requests from a LAN node to the remote Internet service. This forwarding of network traffic can become dangerous at times, especially with the availability of modern cracking tools that can spoof internal IP addresses and make the remote attacker's machine act as a node on your LAN. To prevent this, iptables provides routing and forwarding policies that can be implemented to prevent abnormal usage of network resources. The FORWARD chain allows an administrator to control where packets can be routed within a LAN. For example, to allow forwarding for the entire LAN (assuming the firewall/gateway is assigned an internal IP address on eth1), use the following rules: This rule gives systems behind the firewall/gateway access to the internal network. The gateway routes packets from one LAN node to its intended destination node, passing all packets through its eth1 device. Note By default, the IPv4 policy in Red Hat Enterprise Linux kernels disables support for IP forwarding. This prevents machines that run Red Hat Enterprise Linux from functioning as dedicated edge routers. To enable IP forwarding, use the following command as the root user: This configuration change is only valid for the current session; it does not persist beyond a reboot or network service restart. To permanently set IP forwarding, edit the /etc/sysctl.conf file as follows: Locate the following line: Edit it to read as follows: As the root user, run the following command to enable the change to the sysctl.conf file: 2.8.5.1. Postrouting and IP Masquerading Accepting forwarded packets via the firewall's internal IP device allows LAN nodes to communicate with each other; however they still cannot communicate externally to the Internet. To allow LAN nodes with private IP addresses to communicate with external public networks, configure the firewall for IP masquerading , which masks requests from LAN nodes with the IP address of the firewall's external device (in this case, eth0): This rule uses the NAT packet matching table ( -t nat ) and specifies the built-in POSTROUTING chain for NAT ( -A POSTROUTING ) on the firewall's external networking device ( -o eth0 ). POSTROUTING allows packets to be altered as they are leaving the firewall's external device. The -j MASQUERADE target is specified to mask the private IP address of a node with the external IP address of the firewall/gateway.
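As an illustrative variation of the rules above (a sketch only; 192.168.1.0/24 stands in for your LAN subnet, and eth0 and eth1 for the external and internal interfaces), forwarding and masquerading can be scoped to a single private subnet instead of an entire interface:
~]# iptables -A FORWARD -i eth1 -s 192.168.1.0/24 -j ACCEPT
~]# iptables -A FORWARD -o eth1 -d 192.168.1.0/24 -m state --state ESTABLISHED,RELATED -j ACCEPT
~]# iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o eth0 -j MASQUERADE
Restricting the rules to a source subnet limits which internal hosts are forwarded and masqueraded, which is often preferable to matching on the interface alone.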
|
[
"~]# iptables -A FORWARD -i eth1 -j ACCEPT ~]# iptables -A FORWARD -o eth1 -j ACCEPT",
"~]# sysctl -w net.ipv4.ip_forward=1 net.ipv4.ip_forward = 1",
"net.ipv4.ip_forward = 0",
"net.ipv4.ip_forward = 1",
"~]# sysctl -p /etc/sysctl.conf net.ipv4.ip_forward = 1 net.ipv4.conf.default.rp_filter = 1 net.ipv4.conf.default.accept_source_route = 0 [output truncated]",
"~]# iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-security_guide-firewalls-forward_and_nat_rules
|
Configuring authentication and authorization in RHEL
|
Configuring authentication and authorization in RHEL Red Hat Enterprise Linux 8 Using SSSD, authselect, and sssctl to configure authentication and authorization Red Hat Customer Content Services
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_authentication_and_authorization_in_rhel/index
|
Chapter 1. Business continuity
|
Chapter 1. Business continuity See the following topics for disaster recovery solutions and for the hub cluster, as well as managed clusters. Backup and restore Backup and restore operator architecture Configuring active passive hub cluster Installing the backup and restore operator Scheduling and restoring backups Replicating persistent volumes with VolSync Replicating persistent volumes with VolSync Converting a replicated image to a usable persistent volume claim Scheduling your synchronization 1.1. Backup and restore The cluster backup and restore operator runs on the hub cluster and provides disaster recovery solutions for Red Hat Advanced Cluster Management for Kubernetes hub cluster failures. When the hub cluster fails, some features, such as policy configuration-based alerting or cluster updates stop working, even if all managed clusters still work. Once the hub cluster is unavailable, you need a recovery plan to decide if recovery is possible, or if the data needs to be recovered on a newly deployed hub cluster. The backup and restore component provides a set of policies for reporting issues with the backup and restore component, such as backup pods or backup schedules not running. You can use the policy templates to create an alert to the hub cluster administrator if the backup solution is not functioning as expected. For more information, see Validating your backup or restore configurations . The cluster backup and restore operator depends on the OADP Operator to install Velero, and to create a connection from the hub cluster to the backup storage location where the data is stored. Velero is the component that runs the backup and restore operations. The cluster backup and restore operator solution provides backup and restore support for all Red Hat Advanced Cluster Management hub cluster resources, including managed clusters, applications, and policies. The cluster backup and restore operator supports backups of any third-party resources that extend the hub cluster installation. With this backup solution, you can define cron-based backup schedules which run at specified time intervals. When the hub cluster fails, a new hub cluster can be deployed and the backed up data is moved to the new hub cluster. Continue reading the following topics to learn more about the backup and restore operator: Backup and restore operator architecture Configuring active passive hub cluster Installing the backup and restore operator Scheduling and restoring backups Restoring a backup Validating your backup or restore configurations Connecting clusters automatically by using a Managed Service Account Backup and restore advanced configuration 1.1.1. Backup and restore operator architecture The operator defines the BackupSchedule.cluster.open-cluster-management.io resource, which is used to set up Red Hat Advanced Cluster Management backup schedules, and the restore.cluster.open-cluster-management.io resource, which is used to process and restore these backups. The operator creates corresponding Velero resources, and defines the options needed to back up remote clusters and any other hub cluster resources that need to be restored. View the following diagram: 1.1.1.1. Resources that are backed up The cluster backup and restore operator solution provides backup and restore support for all hub cluster resources such as managed clusters, applications, and policies. You can use the solution to back up any third-party resources extending the basic hub cluster installation. 
With this backup solution, you can define a cron-based backup schedule, which runs at specified time intervals and continuously backs up the latest version of the hub cluster content. When the hub cluster needs to be replaced or is in a disaster scenario when the hub cluster fails, a new hub cluster can be deployed and backed up data is moved to the new hub cluster. View the following list of the cluster backup and restore process, and how they back up or exclude resources and groups to identify backup data: Exclude all resources in the MultiClusterHub namespace. This is to avoid backing up installation resources that are linked to the current hub cluster identity and should not be backed up. Back up all resources with an API version suffixed by .open-cluster-management.io and .hive.openshift.io . These suffixes indicate that all Red Hat Advanced Cluster Management resources are backed up. Back up all resources from the following API groups: argoproj.io , app.k8s.io , core.observatorium.io , hive.openshift.io . The resources are backed up within the acm-resources-schedule backup, with the exception of the resources from the agent-install.openshift.io API group. Resources from the agent-install.openshift.io API group are backed up within the acm-managed-clusters-schedule backup. The following resources from hive.openshift.io and hiveinternal.openshift.io API groups are also backed up by the acm-managed-clusters-schedule backup: clusterdeployment.hive.openshift.io machinepool.hive.openshift.io clusterpool.hive.openshift.io clusterclaim.hive.openshift.io clusterimageset.hive.openshift.io clustersync.hiveinternal.openshift.io Exclude all resources from the following API groups: internal.open-cluster-management.io operator.open-cluster-management.io work.open-cluster-management.io search.open-cluster-management.io admission.hive.openshift.io proxy.open-cluster-management.io action.open-cluster-management.io view.open-cluster-management.io clusterview.open-cluster-management.io velero.io Exclude all of the following resources that are a part of the included API groups, but are either not needed or are being recreated by owner resources that are also backed up: clustermanagementaddon.addon.open-cluster-management.io backupschedule.cluster.open-cluster-management.io restore.cluster.open-cluster-management.io clusterclaim.cluster.open-cluster-management.io discoveredcluster.discovery.open-cluster-management.io Back up secrets and ConfigMaps with one of the following labels: cluster.open-cluster-management.io/type , hive.openshift.io/secret-type , cluster.open-cluster-management.io/backup . Use the cluster.open-cluster-management.io/backup label for any other resources that you want to be backed up and are not included in the previously mentioned criteria or are part of the excluded API groups. See the following example: apiVersion: my.group/v1alpha1 kind: MyResource metadata: labels: cluster.open-cluster-management.io/backup: "" Note: Secrets used by the hive.openshift.io.ClusterDeployment resource need to be backed up, and are automatically annotated with the cluster.open-cluster-management.io/backup label only when you create the cluster by using the console. If you deploy the Hive cluster by using GitOps instead, you must manually add the cluster.open-cluster-management.io/backup label to the secrets used by the ClusterDeployment resource. Secret and config map resources with the cluster.open-cluster-management.io/backup: cluster-activation label are restored at cluster activation time. 
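One way to add that label to an existing secret is from the command line; the secret name and namespace in the following sketch are placeholders for your own resources:
oc label secret my-clusterdeployment-secret -n my-cluster-namespace cluster.open-cluster-management.io/backup=''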
Exclude specific resources that you do not want backed up. See the following example to exclude Velero resources from the backup process: apiVersion: my.group/v1alpha1 kind: MyResource metadata: labels: velero.io/exclude-from-backup: "true" 1.1.1.2. Backup files created by an Red Hat Advanced Cluster Management schedule You can use an Red Hat Advanced Cluster Management schedule to backup hub resources, which are grouped in separate backup files, based on the resource type or label annotations. The BackupSchedule.cluster.open-cluster-management.io resource creates a set of four schedule.velero.io resources. These schedule.velero.io resources generate the backup files, which are also called resources. To view the list of scheduled backup files, run the following command: oc get schedules -A | grep acm . The scheduled backup files are backup.velero.io . See the following table to view the descriptions of these scheduled backup files: Table 1.1. Scheduled backup table Scheduled backup Description Credentials backup Stores the following: Hive credentials, Red Hat Advanced Cluster Management, user-created credentials, and ConfigMaps . The name of this backup file is acm-credentials-schedule-<timestamp> . Resources backup Contains one backup for the Red Hat Advanced Cluster Management resources, acm-resources-schedule-<timestamp> backup and one for generic resources, acm-resources-generic-schedule-<timestamp> . Any resources annotated with the backup label, cluster.open-cluster-management.io/backup , are stored under the backup, acm-resources-generic-schedule- backup . The exceptions are Secrets or ConfigMap resources which are stored under the backup acm-credentials-schedule-<timestamp> . Managed clusters backup Contains only resources that activate the managed cluster connection to the hub cluster, where the backup is restored. The name of this backup file is acm-managed-clusters-schedule-<timestamp> . 1.1.1.3. Resources restored at managed clusters activation time When you add the cluster.open-cluster-management.io/backup label to a resource, the resource is automatically backed up in the acm-resources-generic-schedule backup. You must set the label value to cluster-activation if any of the resources need to be restored only after the managed clusters are moved to the new hub cluster and when the veleroManagedClustersBackupName:latest is used on the restored resource. This ensures that the resource is not restored unless the managed cluster activation is called. View the following example: apiVersion: my.group/v1alpha1 kind: MyResource metadata: labels: cluster.open-cluster-management.io/backup: cluster-activation Note: For any managed cluster namespace, or any resource in it, you must restore either one at the cluster activation step. Therefore, if you need to add to the backup resource created in the managed cluster namespace, then use the cluster-activation value for the cluster.open-cluster-management.io/backup label. To understand the restore process, see the following information: If you restore the namespace, then the managedcluster-import-controller deletes the namespace. If you restore the managedCluster custom resource, then the cluster-manager-registration-controller creates the namespace. Aside from the activation data resources that are identified by using the cluster.open-cluster-management.io/backup: cluster-activation label and stored by the acm-resources-generic-schedule backup, the cluster backup and restore operator includes a few resources in the activation set by default. 
The following resources are backed up by the acm-managed-clusters-schedule backup: managedcluster.cluster.open-cluster-management.io klusterletaddonconfig.agent.open-cluster-management.io managedclusteraddon.addon.open-cluster-management.io managedclusterset.cluster.open-cluster-management.io managedclusterset.clusterview.open-cluster-management.io managedclustersetbinding.cluster.open-cluster-management.io clusterpool.hive.openshift.io clusterclaim.hive.openshift.io clustercurator.cluster.open-cluster-management.io 1.1.2. Configuring active-passive hub cluster Learn how to configure an active-passive hub cluster configuration, where the initial hub cluster backs up data and one or more passive hub clusters are on stand-by to control the managed clusters when the active cluster becomes unavailable. 1.1.2.1. Active-passive configuration In an active-passive configuration, there is one active hub cluster and multiple passive hub clusters. An active hub cluster is also considered the primary hub cluster, which manages clusters and backs up resources at defined time intervals by using the BackupSchedule.cluster.open-cluster-management.io resource. Both active and passive hub clusters must use the same Red Hat Advanced Cluster Management for Kubernetes version. When you upgrade the hub clusters that the active-passive configuration uses, first upgrade the restore hub to the latest version, then upgrade the primary hub cluster. Note: To backup the primary hub cluster data, you do not need an active-passive configuration. You can simply backup and store the hub cluster data. This way, if there is an issue or failure, you can deploy a new hub cluster and restore your primary hub cluster data on this new hub cluster. To reduce the time to recover the primary hub cluster data, you can use an active-passive configuration; however, this is not necessary. Passive hub clusters continuously retrieve the latest backups and restore the passive data. Passive hubs use the Restore.cluster.open-cluster-management.io resource to restore passive data from the primary hub cluster when new backup data is available. These hub clusters are on standby to become a primary hub when the primary hub cluster fails. Active and passive hub clusters are connected to the same storage location, where the primary hub cluster backs up data for passive hub clusters to access the primary hub cluster backups. For more details on how to set up this automatic restore configuration, see Restoring passive resources while checking for backups . In the following diagram, the active hub cluster manages the local clusters and backs up the hub cluster data at regular intervals: The passive hub cluster restores this data, except for the managed cluster activation data, which moves the managed clusters to the passive hub cluster. The passive hub clusters can restore the passive data continuously. Passive hub clusters can restore passive data as a one-time operation. See Restoring passive resources for more details. 1.1.2.2. Disaster recovery When the primary hub cluster fails, as a hub administrator, you can select a passive hub cluster to take over the managed clusters. In the following Disaster recovery diagram , see how you can use Hub cluster N as the new primary hub cluster: Hub cluster N restores the managed cluster activation data. The managed clusters connect to Hub cluster N . 
You activate a backup on the new primary hub cluster, Hub cluster N , by creating a BackupSchedule.cluster.open-cluster-management.io resource, and storing the backups at the same storage location as the initial primary hub cluster. All other passive hub clusters now restore passive data by using the backup data that is created by the new primary hub cluster. Hub N is now the primary hub cluster, managing clusters and backing up data. Important: The first process in the earlier Disaster recovery diagram is not automated because of the following reasons: You must decide if the primary hub cluster has failed and needs to be replaced, or if there is a network communication error between the hub cluster and the managed clusters. You must decide which passive hub cluster becomes the primary hub cluster. The policy integration with Red Hat Ansible Automation Platform jobs can help you automate this step by making a job run when the backup policy reports backup errors. The second process in the earlier Disaster recovery diagram is manual. If you did not create a backup schedule on the new primary hub cluster, the backup-restore-enabled policy shows a violation by using the backup-schedule-cron-enabled policy template. In this second process, you can do the following actions: Use the backup-schedule-cron-enabled policy template to validate if the new primary hub cluster has backups running as a cron job. Use the policy integration with Ansible and define an Ansible job that can run when the backup-schedule-cron-enabled policy template reports violations. For more details about the backup-restore-enabled policy templates, see Validating your backup or restore configurations . 1.1.2.3. Additional resources See Restoring passive resources while checking for backups . See Restoring passive resources . 1.1.3. Installing the backup and restore operator The cluster backup and restore operator is not installed automatically. Continue reading to learn how to install and enable the operator. 1.1.3.1. Setting up hub clusters for backup and restore operators To use the backup and restore operator, you must set up hub clusters. To set up hub clusters, complete the following sections in order: Creating the storage location secret Enabling the backup operator Installing the OADP operator Creating a DataProtectionApplication resource Enabling the backup and restore component in a disconnected environment 1.1.3.1.1. Creating the storage location secret To create a storage location secret, complete the following steps: Complete the steps for Creating a default Secret for the cloud storage where the backups are saved. Create the secret resource in the OADP Operator namespace, which is located in the backup component namespace. 1.1.3.1.2. Enabling the backup operator Enable the backup operator to use the disaster recovery solutions for Red Hat Advanced Cluster Management for Kubernetes. 1.1.3.1.2.1. Prerequisite From your Red Hat OpenShift Container Platform cluster, install the Red Hat Advanced Cluster Management operator version 2.11. The MultiClusterHub resource is automatically created when you install Red Hat Advanced Cluster Management, and displays the following status: Running . 
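One way to confirm that the MultiClusterHub resource reports the Running status is from the command line; this sketch assumes the default open-cluster-management namespace:
oc get multiclusterhub -n open-cluster-management
The STATUS column in the output is expected to show Running before you continue.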
Enabling backup operators for your hub clusters To enable the backup operator for your active and passive hub clusters, complete the following steps: Enable the cluster backup and restore operator, cluster-backup , by completing the following steps: Update the MultiClusterHub resource by setting the cluster-backup parameter to true . If this hub cluster does not have the AWS Security Token Service (STS) option enabled, the OADP operator is also installed in the same namespace as the backup component. For hub clusters where the STS option is enabled, you must manually install the OADP operator. Ensure that the restore hub cluster uses the same Red Hat Advanced Cluster Management version that the backup hub cluster uses. You cannot restore a backup on a hub cluster with a version earlier than the one used by the backup hub cluster. Manually configure the restore hub cluster by completing the following steps: Install all operators that are installed on the active hub cluster and in the same namespace as the active hub cluster. Verify that the new hub cluster is configured the same way as the backup hub cluster. Use the same namespace name as the backup hub cluster when you install the backup and restore operator and any operators that are configured on the backup hub cluster. Create the DataProtectionApplication resource on the passive hub cluster by completing the following steps: Create the DataProtectionApplication resource on both the active and passive hub cluster. For the restore hub cluster, when you create the DataProtectionApplication resource, use the same storage location as the backup hub. 1.1.3.1.2.2. Running a restore operation on a hub cluster with a newer version If you want to run the restore operation on a hub cluster that uses a newer Red Hat Advanced Cluster Management version than the hub cluster where the backup was created, you can still run a restore operation by completing the following steps: On the restore hub cluster, install the operator at the same version as the hub cluster where the backup was created. Restore the backup data on the restore hub cluster. On the restore hub cluster, use the upgrade operation to upgrade to the version you want to use. 1.1.3.1.3. Installing the OADP operator For hub clusters where you enabled the AWS Security Token Service (STS) option, the OADP operator is not installed when you installed the cluster backup and restore operator on the hub cluster. To pass the STS role_arn to the OADP operator, you must manually install the OADP operator. For hub clusters where you did not enable the STS option, the OADP operator is automatically installed when you install the cluster backup and restore operator on the hub cluster. When you enable the backup option from the MultiClusterHub resource, the backup option remains in the Pending state until you install the OADP operator in the open-cluster-management-backup namespace. If you have not enabled the AWS STS option on your hub cluster, then the OADP operator is automatically installed with the backup chart. Otherwise, you must manually install the OADP operator. When the cluster is in STS mode, the backup chart creates a config map named acm-redhat-oadp-operator-subscription in the open-cluster-management-backup namespace. This config map also contains the OADP channel and environment information. Use this config map to get the OADP version that you need to install and to get any environment options to be set on the OADP subscription. 
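The MultiClusterHub update and the config map lookup described in this section can also be performed from the command line. The following is a sketch that assumes the default MultiClusterHub name multiclusterhub in the open-cluster-management namespace; verify the patch path against your resource before applying it:
# Enable the cluster-backup component on the MultiClusterHub resource
oc patch multiclusterhub multiclusterhub -n open-cluster-management --type=json -p='[{"op": "add", "path": "/spec/overrides/components/-", "value": {"name": "cluster-backup", "enabled": true}}]'
# On an STS-enabled hub cluster, inspect the generated config map for the OADP channel to install
oc get configmap acm-redhat-oadp-operator-subscription -n open-cluster-management-backup -o yaml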
To learn more about installing the OADP operator with the STS option, see Installing the OADP Operator and providing the IAM role . Important: When you install the OADP operator, see the following notes: Custom resource definitions are cluster-scoped, so you cannot have two different versions of OADP or Velero installed on the same cluster. If you have two different versions, one version runs with the wrong custom resource definitions. When you create the MultiClusterHub resource, the Velero custom resource definitions (CRDs) do not automatically install on the hub cluster. When you install the OADP operator, the Velero CRDs are installed on the hub cluster. As a result, the Velero CRD resources are no longer reconciled and updated by the MultiClusterHub resource. The backup component works with the OADP Operator that is installed in the component namespace. If you manually install the OADP operator, the custom resource definition version of the OADP operator and Velero must exactly match. If these versions do not exactly match one another, problems can occur. Velero is installed with the OADP Operator on the Red Hat Advanced Cluster Management for Kubernetes hub cluster. It is used to back up and restore Red Hat Advanced Cluster Management hub cluster resources. For a list of supported storage providers for Velero, see About installing OADP . 1.1.3.1.4. Installing a custom OADP version Setting an annotation on your MultiClusterHub resource allows you to customize the OADP version for your specific needs. Complete the following steps: To override the OADP version installed by the backup and restore operator, use the following annotation on the MultiClusterHub resource: installer.open-cluster-management.io/oadp-subscription-spec: '{"channel": "stable-1.4"}' Important: If you use this option to get a different OADP version than the one installed by default by the backup and restore operator, make sure this version is supported on the Red Hat OpenShift Container Platform version used by the hub cluster. Use this override option with caution because it can result in unsupported configurations. For example, install the OADP 1.5 version by setting your annotation. Your MultiClusterHub resource might resemble the following example: apiVersion: operator.open-cluster-management.io/v1 kind: MultiClusterHub metadata: annotations: installer.open-cluster-management.io/oadp-subscription-spec: '{"channel": "stable-1.5","installPlanApproval": "Automatic","name": "redhat-oadp-operator","source": "redhat-operators","sourceNamespace": "openshift-marketplace"}' name: multiclusterhub spec: {} Enable the cluster-backup option on the MultiClusterHub resource by setting cluster-backup to true. 
If you intend to use the default backup storage location, set the value default: true in the backupStorageLocations section. View the following DataProtectionApplication resource sample: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa-sample spec: configuration: velero: defaultPlugins: - openshift - aws restic: enable: true backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: my-bucket prefix: my-prefix config: region: us-east-1 profile: "default" credential: name: cloud-credentials key: cloud snapshotLocations: - name: default velero: provider: aws config: region: us-west-2 profile: "default" 1.1.3.1.6. Enabling the backup and restore component in a disconnected environment To enable the backup and restore component with Red Hat OpenShift Container Platform in a disconnected environment, complete the following steps: Update the MultiClusterHub resource with the following annotation to override the source from which the OADP operator is installed. Create the annotation before the cluster-backup component is enabled on the MultiClusterHub resource: apiVersion: operator.open-cluster-management.io/v1 kind: MultiClusterHub metadata: annotations: installer.open-cluster-management.io/oadp-subscription-spec: '{"source": "redhat-operator-index"}' The redhat-operator-index is a custom name and represents the name of the CatalogSource resource that you define and use to access Red Hat OpenShift Operators in the disconnected environment. Run the following command to retrieve the catalogsource : oc get catalogsource -A The output might resemble the following: NAMESPACE NAME DISPLAY TYPE PUBLISHER AGE openshift-marketplace acm-custom-registry Advanced Cluster Management grpc Red Hat 42h openshift-marketplace multiclusterengine-catalog MultiCluster Engine grpc Red Hat 42h openshift-marketplace redhat-operator-index grpc 42h 1.1.3.2. Enabling the backup and restore operator The cluster backup and restore operator can be enabled when the MultiClusterHub resource is created for the first time. The cluster-backup parameter is set to true . When the operator is enabled, the operator resources are installed. If the MultiClusterHub resource is already created, you can install or uninstall the cluster backup operator by editing the MultiClusterHub resource. Set cluster-backup to false if you want to uninstall the cluster backup operator. When the backup and restore operator is enabled, your MultiClusterHub resource might resemble the following YAML file: apiVersion: operator.open-cluster-management.io/v1 kind: MultiClusterHub metadata: name: multiclusterhub namespace: open-cluster-management spec: availabilityConfig: High enableClusterBackup: false imagePullSecret: multiclusterhub-operator-pull-secret ingress: sslCiphers: - ECDHE-ECDSA-AES256-GCM-SHA384 - ECDHE-RSA-AES256-GCM-SHA384 - ECDHE-ECDSA-AES128-GCM-SHA256 - ECDHE-RSA-AES128-GCM-SHA256 overrides: components: - enabled: true name: multiclusterhub-repo - enabled: true name: search - enabled: true name: management-ingress - enabled: true name: console - enabled: true name: insights - enabled: true name: grc - enabled: true name: cluster-lifecycle - enabled: true name: volsync - enabled: true name: multicluster-engine - enabled: true name: cluster-backup separateCertificateManagement: false 1.1.3.3. Additional resources See Velero . 
See AWS S3 compatible backup storage providers in the OpenShift Container Platform documentation for a list of supported Velero storage providers. Learn more about the DataProtectionApplication resource. 1.1.4. Scheduling and restoring backups Complete the following steps to schedule and restore backups: Use the backup and restore operator, backupschedule.cluster.open-cluster-management.io , to create a backup schedule and use the restore.cluster.open-cluster-management.io resources to restore a backup. Run the following command to create a backupschedule.cluster.open-cluster-management.io resource: Your cluster_v1beta1_backupschedule.yaml resource might resemble the following file: apiVersion: cluster.open-cluster-management.io/v1beta1 kind: BackupSchedule metadata: name: schedule-acm namespace: open-cluster-management-backup spec: veleroSchedule: 0 */2 * * * 1 veleroTtl: 120h 2 1 Create a backup every 2 hours 2 Optional: Deletes scheduled backups after 120h. If not specified, the maximum Velero default value of 720h is used. View the following descriptions of the backupschedule.cluster.open-cluster-management.io spec properties: veleroSchedule is a required property and defines a cron job for scheduling the backups. veleroTtl is an optional property and defines the expiration time for a scheduled backup resource. If not specified, the maximum default value set by Velero is used, which is 720h . Check the status of your backupschedule.cluster.open-cluster-management.io resource, which displays the definition for the three schedule.velero.io resources. Run the following command: As a reminder, the restore operation is run on a different hub cluster for restore scenarios. To initiate a restore operation, create a restore.cluster.open-cluster-management.io resource on the hub cluster where you want to restore backups. Note: When you restore a backup on a new hub cluster, make sure that you shut down the hub cluster where the backup was created. If it is running, the hub cluster tries to reimport the managed clusters as soon as the managed cluster reconciliation finds that the managed clusters are no longer available. You can use the cluster backup and restore operator, backupschedule.cluster.open-cluster-management.io and restore.cluster.open-cluster-management.io resources, to create a backup or restore resource. See the cluster-backup-operator samples. Run the following command to create a restore.cluster.open-cluster-management.io resource: Your resource might resemble the following file: apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Restore metadata: name: restore-acm namespace: open-cluster-management-backup spec: veleroManagedClustersBackupName: latest veleroCredentialsBackupName: latest veleroResourcesBackupName: latest View the Velero Restore resource by running the following command: View the Red Hat Advanced Cluster Management Restore events by running the following command: For descriptions of the parameters and samples of Restore YAML resources, see the Restoring a backup section. 1.1.4.1. Extending backup data You can backup third-party resources with cluster backup and restore by adding the cluster.open-cluster-management.io/backup label to the resources. The value of the label can be any string, including an empty string. Use a value that can help you identify the component that you are backing up. For example, use the cluster.open-cluster-management.io/backup: idp label if the components are provided by an IDP solution. 
Note: Use the cluster-activation value for the cluster.open-cluster-management.io/backup label if you want the resources to be restored when the managed clusters activation resources are restored. Restoring the managed clusters activation resources result in managed clusters being actively managed by the hub cluster, where the restore was started. 1.1.4.2. Scheduling a cluster backup A backup schedule is activated when you create the backupschedule.cluster.open-cluster-management.io resource. View the following backupschedule.cluster.open-cluster-management.io sample: apiVersion: cluster.open-cluster-management.io/v1beta1 kind: BackupSchedule metadata: name: schedule-acm namespace: open-cluster-management-backup spec: veleroSchedule: 0 */2 * * * veleroTtl: 120h After you create a backupschedule.cluster.open-cluster-management.io resource, run the following command to get the status of the scheduled cluster backups: The backupschedule.cluster.open-cluster-management.io resource creates six schedule.velero.io resources, which are used to generate backups. Run the following command to view the list of the backups that are scheduled: Resources are separately backed up in the groups as seen in the following table: Table 1.2. Resource groups table Resource Description Credentials backup Backup file that stores Hive credentials, Red Hat Advanced Cluster Management, and user-created credentials and ConfigMaps. Resources backup Contains one backup for the Red Hat Advanced Cluster Management resources and one for generic resources. These resources use the cluster.open-cluster-management.io/backup label. ManagedClusters backup Contains only resources that activate the managed cluster connection to the hub cluster, where the backup is restored. Note: The resources backup file contains managed cluster-specific resources, but does not contain the subset of resources that connect managed clusters to the hub cluster. The resources that connect managed clusters are called activation resources and are contained in the managed clusters backup. When you restore backups only for the credentials and resources backup on a new hub cluster, the new hub cluster shows all managed clusters that are created by using the Hive API in a detached state. The managed clusters that are imported on the primary hub cluster by using the import operation appear only when the activation data is restored on the passive hub cluster. The managed clusters are still connected to the original hub cluster that created the backup files. When the activation data is restored, only managed clusters created by using the Hive API are automatically connected with the new hub cluster. All other managed clusters appear in a Pending state. You need to manually reattach them to the new cluster. For descriptions of the various BackupSchedule statuses, see the following table: Table 1.3. BackupSchedule status table BackupSchedule status Description Enabled The BackupSchedule is running and generating backups. FailedValidation An error prevents the BackupSchedule from running. As a result, the BackupSchedule is not generating backups, but instead, it is reconciling and waiting for the error to be fixed. Look at the BackupSchedule status section for the reason why the resource is not valid. After the error is addressed, the BackupSchedule status changes to Enabled , and the resource starts generating backups. BackupCollision The BackupSchedule is not generating backups. 
Look at the BackupSchedule status section for the reason why the resource status is BackupCollision . To start creating backups, delete this resource and create a new one. 1.1.4.3. Understanding backup collisions Backup collisions might occur if the hub cluster changes status from passive to primary hub cluster, or from primary to passive, and different hub clusters back up data at the same storage location. As a result, the latest backups are generated by a hub cluster that is no longer set as the primary hub cluster. This hub cluster still produces backups because the BackupSchedule.cluster.open-cluster-management.io resource is still enabled. See the following list to learn about two scenarios that might cause a backup collision: The primary hub cluster fails unexpectedly, which is caused by the following conditions: Communication from the primary hub cluster to the initial hub cluster fails. The initial hub cluster backup data is restored on a secondary hub cluster, called secondary hub cluster. The administrator creates the BackupSchedule.cluster.open-cluster-management.io resource on the secondary hub cluster, which is now the primary hub cluster and generates backup data to the common storage location. The initial hub cluster unexpectedly starts working again. Since the BackupSchedule.cluster.open-cluster-management.io resource is still enabled on the initial hub cluster, the initial hub cluster resumes writing backups to the same storage location as the secondary hub cluster. Both hub clusters are now writing backup data at the same storage location. Any hub cluster restoring the latest backups from this storage location might use the initial hub cluster data instead of the secondary hub cluster data. The administrator tests a disaster scenario by making the secondary hub cluster a primary hub cluster, which is caused by the following conditions: The initial hub cluster is stopped. The initial hub cluster backup data is restored on the secondary hub cluster. The administrator creates the BackupSchedule.cluster.open-cluster-management.io resource on the secondary hub cluster, which is now the primary hub cluster and generates backup data to the common storage location. After the disaster test is completed, the administrator reverts to the earlier state and makes the initial hub cluster the primary hub cluster again. The initial hub cluster starts while the secondary hub cluster is still active. Since the BackupSchedule.cluster.open-cluster-management.io resource is still enabled on the secondary hub cluster, it writes backups at the same storage location that corrupts the backup data. Any hub cluster restoring the latest backups from this location might use the secondary hub cluster data instead of the initial hub cluster data. In this scenario, stopping the secondary hub cluster first or deleting the BackupSchedule.cluster.open-cluster-management.io resource on the secondary hub cluster before starting the initial hub cluster fixes the backup collision issue. The administrator tests a disaster scenario by making the secondary hub cluster a primary hub cluster, without stopping the initial hub cluster first, causing the following conditions: The initial hub cluster is still running. The initial hub cluster backup data is restored on the secondary hub cluster, including managed clusters backup. Therefore, the secondary hub cluster is now the active cluster. 
Since the BackupSchedule.cluster.open-cluster-management.io resource is still enabled on the initial hub cluster, it writes backups at the same storage location which corrupts the backup data. For example, any hub cluster restoring the latest backups from this location might use the initial hub cluster data instead of the secondary hub cluster data. To avoid data corruption, the initial hub cluster BackupSchedule resource status automatically changes to BackupCollision . In this scenario, to avoid getting into this backup collision state, stop the initial hub cluster first or delete the BackupSchedule.cluster.open-cluster-management.io resource on the initial hub cluster before restoring managed clusters data on the secondary hub cluster. 1.1.4.4. Preventing backup collisions To prevent and report backup collisions, use the BackupCollision state in the BackupSchedule.cluster.open-cluster-management.io resource. The controller checks regularly if the latest backup in the storage location has been generated from the current hub cluster. If not, a different hub cluster has recently written backup data to the storage location, indicating that the hub cluster is colliding with a different hub cluster. In the backup collision scenario, the current hub cluster BackupSchedule.cluster.open-cluster-management.io resource status is set to BackupCollision . To prevent data corruption, this resource deletes the Schedule.velero.io resources. The backup policy reports the BackupCollision . In this same scenario, the administrator verifies which hub cluster writes to the storage location. The administrator does this verification before removing the BackupSchedule.cluster.open-cluster-management.io resource from the invalid hub cluster. Then, the administrator can create a new BackupSchedule.cluster.open-cluster-management.io resource on the valid primary hub cluster, resuming the backup. To check if there is a backup collision, run the following command: If there is a backup collision, the output might resemble the following example: 1.1.4.5. Additional resources See the Restoring a backup section for descriptions of the parameters and samples of Restore YAML resources. For more information, see Backup and restore . 1.1.5. Restoring a backup In a typical restore scenario, the hub cluster where the backups run becomes unavailable, and the backed up data needs to be moved to a new hub cluster. This is done by running the cluster restore operation on the new hub cluster. In this case, the restore operation runs on a different hub cluster than where the backup is created. There are also cases where you want to restore the data on the same hub cluster where the backup was collected so that you can recover the data from a snapshot. In this case, both restore and backup operations run on the same hub cluster. After you create a restore.cluster.open-cluster-management.io resource on the hub cluster, you can run the following command to get the status of the restore operation: You can also verify that the backed up resources that are contained by the backup file are created. Note: The restore.cluster.open-cluster-management.io resource runs once unless you use the syncRestoreWithNewBackups option and set it to true , as mentioned in the Restore passive resources section. If you want to run the same restore operation again after the restore operation is complete, you must create a new restore.cluster.open-cluster-management.io resource with the same spec options. 
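The status check mentioned earlier in this section can be performed with standard oc commands; the following sketch assumes the default open-cluster-management-backup namespace:
# Status of the Red Hat Advanced Cluster Management restore resource
oc get restore.cluster.open-cluster-management.io -n open-cluster-management-backup
# Status of the underlying Velero restore resources it creates
oc get restore.velero.io -n open-cluster-management-backup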
The restore operation is used to restore all 3 backup types that are created by the backup operation. You can choose to install only a certain type of backup, such as only managed clusters, only user credentials, or only hub cluster resources. The restore defines the following 3 required spec properties, where the restore logic is defined for the types of backed up files: veleroManagedClustersBackupName is used to define the restore option for the managed clusters activation resources. veleroCredentialsBackupName is used to define the restore option for the user credentials. veleroResourcesBackupName is used to define the restore option for the hub cluster resources ( Applications , Policy , and other hub cluster resources like managed cluster passive data). The valid options for the previously mentioned properties are following values: latest - This property restores the last available backup file for this type of backup. skip - This property does not attempt to restore this type of backup with the current restore operation. <backup_name> - This property restores the specified backup pointing to it by name. The name of the restore.velero.io resources that are created by the restore.cluster.open-cluster-management.io is generated using the following template rule, <restore.cluster.open-cluster-management.io name>-<velero-backup-resource-name> . View the following descriptions: restore.cluster.open-cluster-management.io name is the name of the current restore.cluster.open-cluster-management.io resource, which initiates the restore. velero-backup-resource-name is the name of the Velero backup file that is used for restoring the data. For example, the restore.cluster.open-cluster-management.io resource named restore-acm creates restore.velero.io restore resources. View the following examples for the format: restore-acm-acm-managed-clusters-schedule-20210902205438 is used for restoring managed cluster activation data backups. In this sample, the backup.velero.io backup name used to restore the resource is acm-managed-clusters-schedule-20210902205438 . restore-acm-acm-credentials-schedule-20210902206789 is used for restoring credential backups. In this sample, the backup.velero.io backup name used to restore the resource is acm-managed-clusters-schedule-20210902206789 . restore-acm-acm-resources-schedule-20210902201234 is used for restoring application, policy, and other hub cluster resources like managed cluster passive data backups. In this sample, the backup.velero.io backup name used to restore the resource is acm-managed-clusters-schedule-20210902201234 . Note: If skip is used for a backup type, restore.velero.io is not created. View the following YAML sample of the cluster Restore resource. In this sample, all three types of backed up files are being restored, using the latest available backed up files: apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Restore metadata: name: restore-acm namespace: open-cluster-management-backup spec: veleroManagedClustersBackupName: latest veleroCredentialsBackupName: latest veleroResourcesBackupName: latest Note: Only managed clusters created by the Hive API are automatically connected with the new hub cluster when the acm-managed-clusters backup from the managed clusters backup is restored on another hub cluster. All other managed clusters remain in the Pending Import state and must be imported back onto the new hub cluster. For more information, see Restoring imported managed clusters . 1.1.5.1. 
Restoring data back to the initial primary hub When you need to restore backup data on a cluster, create a new cluster instead of using the primary hub cluster or a hub cluster where user resources were created before the restore operation. During the hub cluster restore operation, you can configure the hub cluster backup restore to clean up the existing resources, if these resources are not part of the backup data being restored. The restore cleans up resources created by an earlier backup but does not clean up user resources. As a result, resources created by the user on this hub cluster are not cleaned up, so the data on this hub cluster is not reflective of the data available with the restored resource. A disaster recovery test, where you only validate the restore operation on a passive hub, then move back to use the primary hub cluster, is one situation where you can use the primary hub cluster as the passive cluster. In this recovery test scenario, you are only testing the hub backup scenario. You do not use the primary hub cluster to create new resources. Instead, the backup data has temporarily moved from the primary hub cluster to the passive hub cluster. You can make the initial primary hub cluster a primary hub cluster again if you restore all resources to this hub cluster. The operation runs the post restore operation, which moves the managed clusters back to this hub cluster. 1.1.5.2. Preparing the new hub cluster When you need to restore backup data on a hub cluster, create a new hub cluster instead of using the primary hub cluster or a hub cluster where an earlier user created resources before the restore operation. In a disaster recovery scenario, when you restore the backup to the passive hub cluster, you want a hub cluster identical to the hub cluster that failed during the disaster. You do not want any other user resources, such as: applications, policies, and extra managed clusters. When you restore the backup on this hub cluster, if you have managed clusters managed by this hub cluster, then the restore operation might detach them and delete their cluster resources. The restore operation does this to clean up the resources that are not part of the restored backup, making the content in this hub cluster identical with the initial hub cluster. In this case, you might lose user data because the managed cluster workloads get deleted by detach action. Before running the restore operation on a new hub cluster, you need to manually configure the hub cluster and install the same operators as on the initial hub cluster. You must install the Red Hat Advanced Cluster Management operator in the same namespace as the initial hub cluster, create the DataProtectionApplication resource, and then connect to the same storage location where the initial hub cluster previously backed up data. Use the same configuration as on the initial hub cluster for the MultiClusterHub resource created by the Red Hat Advanced Cluster Management operator, including any changes to the MultiClusterEngine resource. For example, if the initial hub cluster has any other operators installed, such as Ansible Automation Platform, Red Hat OpenShift GitOps, cert-manager , you have to install them before running the restore operation. This ensures that the new hub cluster is configured in the same way as the initial hub cluster. 1.1.5.3. Cleaning the hub cluster after restore Velero updates existing resources if they have changed with the currently restored backup. 
Velero does not clean up delta resources, which are resources created by a restore and not part of the currently restored backup. This limits the scenarios you can use when restoring hub cluster data on a new hub cluster. Unless the restore is applied only once, you cannot reliably use the new hub cluster as a passive configuration. The data on the hub cluster does not reflect the data available with the restored resources. To address this limitation, when a Restore.cluster.open-cluster-management.io resource is created, the backup operator runs a post restore operation that cleans up the hub cluster. The operation removes any resources created by a Red Hat Advanced Cluster Management restore that are not part of the currently restored backup. The post restore cleanup uses the cleanupBeforeRestore property to identify the subset of objects to clean up. You can use the following options for the post restore cleanup: None : No clean up necessary, just begin Velero restore. Use None on a brand new hub cluster. CleanupRestored : Clean up all resources created by a Red Hat Advanced Cluster Management restore that are not part of the currently restored backup. CleanupAll : Clean up all resources on the hub cluster that might be part of a Red Hat Advanced Cluster Management backup, even if they were not created as a result of a restore operation. This is to be used when extra content is created on a hub cluster before the restore operation starts. Best Practice: Avoid using the CleanupAll option. Only use it as a last resort with extreme caution. CleanupAll also cleans up resources on the hub cluster created by the user, in addition to resources created by a previously restored backup. Instead, use the CleanupRestored option to prevent updating the hub cluster content when the hub cluster is designated as a passive candidate for a disaster scenario. Use a clean hub cluster as a passive cluster. Notes: Velero sets the status, PartiallyFailed , for a velero restore resource if the restored backup has no resources. This means that a restore.cluster.open-cluster-management.io resource can be in PartiallyFailed status if any of the created restore.velero.io resources do not restore any resources because the corresponding backup is empty. The restore.cluster.open-cluster-management.io resource is run once, unless you use the syncRestoreWithNewBackups:true to keep restoring passive data when new backups are available. For this case, follow the restore passive with sync sample. See Restoring passive resources while checking for backups . After the restore operation is complete and you want to run another restore operation on the same hub cluster, you have to create a new restore.cluster.open-cluster-management.io resource. Although you can create multiple restore.cluster.open-cluster-management.io resources, only one can be active at any moment in time. 1.1.5.4. Restoring passive resources while checking for backups Use the restore-passive-sync sample to restore passive data, while continuing to check if new backups are available and restore them automatically. To automatically restore new backups, you must set the syncRestoreWithNewBackups parameter to true . You must also only restore the latest passive data. You can find the sample example at the end of this section. Set the VeleroResourcesBackupName and VeleroCredentialsBackupName parameters to latest , and the VeleroManagedClustersBackupName parameter to skip . 
Immediately after the VeleroManagedClustersBackupName is set to latest , the managed clusters are activated on the new hub cluster, which is now the primary hub cluster. After the managed clusters are activated and the new hub cluster becomes the primary hub cluster, the restore resource is set to Finished and the syncRestoreWithNewBackups setting is ignored, even if it is set to true . By default, the controller checks for new backups every 30 minutes when the syncRestoreWithNewBackups is set to true . If new backups are found, it restores the backed up resources. You can change the duration of the check by updating the restoreSyncInterval parameter. For example, see the following resource that checks for backups every 10 minutes: apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Restore metadata: name: restore-acm-passive-sync namespace: open-cluster-management-backup spec: syncRestoreWithNewBackups: true # restore again when new backups are available restoreSyncInterval: 10m # check for new backups every 10 minutes cleanupBeforeRestore: CleanupRestored veleroManagedClustersBackupName: skip veleroCredentialsBackupName: latest veleroResourcesBackupName: latest 1.1.5.5. Restoring passive resources Use the restore-acm-passive sample to restore hub cluster resources in a passive configuration. Passive data is backup data such as secrets, ConfigMaps, applications, policies, and all the managed cluster custom resources, which do not activate a connection between managed clusters and hub clusters. The backup resources are restored on the hub cluster by the credentials backup and restore resources. See the following sample: apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Restore metadata: name: restore-acm-passive namespace: open-cluster-management-backup spec: cleanupBeforeRestore: CleanupRestored veleroManagedClustersBackupName: skip veleroCredentialsBackupName: latest veleroResourcesBackupName: latest 1.1.5.6. Restoring activation resources Before you restore the activation data on the passive hub cluster, shut down the hub cluster where the backup was created. If the primary hub cluster is still running, it attempts to reconnect with the managed clusters that are no longer available, based on the reconciliation procedure running on this hub cluster. Use the restore-acm-passive-activate sample when you want the hub cluster to manage the clusters. In this case, it is assumed that the other data has already been restored on the hub cluster that is using the passive resource. apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Restore metadata: name: restore-acm-passive-activate namespace: open-cluster-management-backup spec: cleanupBeforeRestore: CleanupRestored veleroManagedClustersBackupName: latest veleroCredentialsBackupName: skip veleroResourcesBackupName: skip You have some options to restore activation resources, depending on how you restored the passive resources: If you used the restore-acm-passive-sync cluster.open-cluster-management.io resource as documented in the Restoring passive resources while checking for backups section to restore passive data, update the veleroManagedClustersBackupName value to latest on this resource. As a result, the managed cluster resources and the restore-acm-passive-sync resource are restored. If you restored the passive resources as a one-time operation, or did not restore any resources yet, choose to restore all resources as specified in the Restoring all resources section. 1.1.5.7. 
Restoring managed cluster activation data Managed cluster activation data or other activation data resources are stored by the managed clusters backup and by the resource-generic backups, when you use the cluster.open-cluster-management.io/backup: cluster-activation label. When the activation data is restored on a new hub cluster, the managed clusters are actively managed by the hub cluster where the restore is run. See Scheduling and restoring backups to learn how you can use the operator. 1.1.5.8. Restoring all resources Use the restore-acm sample if you want to restore all data at once and make the hub cluster manage the managed clusters in one step. After you create a restore.cluster.open-cluster-management.io resource on the hub cluster, run the following command to get the status of the restore operation: Your sample might resemble the following resource: apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Restore metadata: name: restore-acm namespace: open-cluster-management-backup spec: cleanupBeforeRestore: CleanupRestored veleroManagedClustersBackupName: latest veleroCredentialsBackupName: latest veleroResourcesBackupName: latest From your hub cluster, verify that the backed up resources contained by the backup file are created. 1.1.5.9. Restoring imported managed clusters Only managed clusters connected with the primary hub cluster using the Hive API are automatically connected with the new hub cluster, where the activation data is restored. These clusters have been created on the primary hub cluster by the Hive API, using the Create cluster button in the Clusters tab or through the CLI. Managed clusters connected with the initial hub cluster using the Import cluster button appear as Pending Import when the activation data is restored, and must be imported back on the new hub cluster. The Hive managed clusters can be connected with the new hub cluster because Hive stores the managed cluster kubeconfig in the managed cluster namespace on the hub cluster. This is backed up and restored on the new hub cluster. The import controller then updates the bootstrap kubeconfig on the managed cluster using the restored configuration, which is only available for managed clusters created using the Hive API. It is not available for imported clusters. To reconnect imported clusters on the new hub cluster, manually create the auto-import-secret resource after you start the restore operation. See Importing the cluster with the auto import secret for more details. Create the auto-import-secret resource in the managed cluster namespace for each cluster in the Pending Import state. Use a kubeconfig or token with enough permissions for the import component to start the automatic import on the new hub cluster. You must have access to each managed cluster by using a token to connect with the managed cluster. The token must have a klusterlet role binding or a role with the same permissions. 1.1.5.10. Using other restore samples View the following Restore section to see the YAML examples that restore different types of backed up files.
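Each of the following samples is a restore.cluster.open-cluster-management.io resource that you create on the new hub cluster. As a minimal sketch, assuming you save a sample in a file named restore.yaml, you might apply it and watch its progress with commands similar to the following:

# Apply the selected restore sample
oc apply -f restore.yaml
# Watch the restore resource until the phase reports Finished or Enabled
oc get restore.cluster.open-cluster-management.io -n open-cluster-management-backup -w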
Restore all three types of backed up resources: apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Restore metadata: name: restore-acm namespace: open-cluster-management-backup spec: veleroManagedClustersBackupName: latest veleroCredentialsBackupName: latest veleroResourcesBackupName: latest Restore only managed cluster resources: apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Restore metadata: name: restore-acm namespace: open-cluster-management-backup spec: veleroManagedClustersBackupName: latest veleroCredentialsBackupName: skip veleroResourcesBackupName: skip Restore the resources for managed clusters only, using the acm-managed-clusters-schedule-20210902205438 backup: apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Restore metadata: name: restore-acm namespace: open-cluster-management-backup spec: veleroManagedClustersBackupName: acm-managed-clusters-schedule-20210902205438 veleroCredentialsBackupName: skip veleroResourcesBackupName: skip Notes : The restore.cluster.open-cluster-management.io resource is run once. After the restore operation is completed, you can optionally run another restore operation on the same hub cluster. You must create a new restore.cluster.open-cluster-management.io resource to run a new restore operation. You can create multiple restore.cluster.open-cluster-management.io resources; however, only one can be run at any moment. 1.1.5.11. Using advanced restore options There are advanced restore options that are defined by the velero.io.restore specification. You can use these advanced restore options to filter out resources that you want to restore. For example, you can use the following YAML sample to restore resources only from the vb-managed-cls-2 managed cluster namespace and to exclude the global ManagedCluster resources: apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Restore metadata: name: restore-filter-sample namespace: open-cluster-management-backup spec: cleanupBeforeRestore: None veleroCredentialsBackupName: latest veleroResourcesBackupName: latest veleroManagedClustersBackupName: latest excludedResources: - ManagedCluster If you want more options for the velero.io.restore specification when you create the Red Hat Advanced Cluster Management for Kubernetes restore resource, see the following restore filters: spec: excludedResources excludedNamespaces Note: For any of the Velero restore filters, set the cleanupBeforeRestore field to None to prevent having to clean up hub cluster resources that are part of the restored backup, but are not being restored because of the applied filter. 1.1.5.12.
Viewing restore events Use the following command to get information about restore events: Your list of events might resemble the following sample: Spec: Cleanup Before Restore: CleanupRestored Restore Sync Interval: 4m Sync Restore With New Backups: true Velero Credentials Backup Name: latest Velero Managed Clusters Backup Name: skip Velero Resources Backup Name: latest Status: Last Message: Velero restores have run to completion, restore will continue to sync with new backups Phase: Enabled Velero Credentials Restore Name: example-acm-credentials-schedule-20220406171919 Velero Resources Restore Name: example-acm-resources-schedule-20220406171920 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Prepare to restore: 76m Restore controller Cleaning up resources for backup acm-credentials-hive-schedule-20220406155817 Normal Prepare to restore: 76m Restore controller Cleaning up resources for backup acm-credentials-cluster-schedule-20220406155817 Normal Prepare to restore: 76m Restore controller Cleaning up resources for backup acm-credentials-schedule-20220406155817 Normal Prepare to restore: 76m Restore controller Cleaning up resources for backup acm-resources-generic-schedule-20220406155817 Normal Prepare to restore: 76m Restore controller Cleaning up resources for backup acm-resources-schedule-20220406155817 Normal Velero restore created: 74m Restore controller example-acm-credentials-schedule-20220406155817 Normal Velero restore created: 74m Restore controller example-acm-resources-generic-schedule-20220406155817 Normal Velero restore created: 74m Restore controller example-acm-resources-schedule-20220406155817 Normal Velero restore created: 74m Restore controller example-acm-credentials-cluster-schedule-20220406155817 Normal Velero restore created: 74m Restore controller example-acm-credentials-hive-schedule-20220406155817 Normal Prepare to restore: 64m Restore controller Cleaning up resources for backup acm-resources-schedule-20220406165328 Normal Prepare to restore: 62m Restore controller Cleaning up resources for backup acm-credentials-hive-schedule-20220406165328 Normal Prepare to restore: 62m Restore controller Cleaning up resources for backup acm-credentials-cluster-schedule-20220406165328 Normal Prepare to restore: 62m Restore controller Cleaning up resources for backup acm-credentials-schedule-20220406165328 Normal Prepare to restore: 62m Restore controller Cleaning up resources for backup acm-resources-generic-schedule-20220406165328 Normal Velero restore created: 61m Restore controller example-acm-credentials-cluster-schedule-20220406165328 Normal Velero restore created: 61m Restore controller example-acm-credentials-schedule-20220406165328 Normal Velero restore created: 61m Restore controller example-acm-resources-generic-schedule-20220406165328 Normal Velero restore created: 61m Restore controller example-acm-resources-schedule-20220406165328 Normal Velero restore created: 61m Restore controller example-acm-credentials-hive-schedule-20220406165328 Normal Prepare to restore: 38m Restore controller Cleaning up resources for backup acm-resources-generic-schedule-20220406171920 Normal Prepare to restore: 38m Restore controller Cleaning up resources for backup acm-resources-schedule-20220406171920 Normal Prepare to restore: 36m Restore controller Cleaning up resources for backup acm-credentials-hive-schedule-20220406171919 Normal Prepare to restore: 36m Restore controller Cleaning up resources for backup acm-credentials-cluster-schedule-20220406171919 
Normal Prepare to restore: 36m Restore controller Cleaning up resources for backup acm-credentials-schedule-20220406171919 Normal Velero restore created: 36m Restore controller example-acm-credentials-cluster-schedule-20220406171919 Normal Velero restore created: 36m Restore controller example-acm-credentials-schedule-20220406171919 Normal Velero restore created: 36m Restore controller example-acm-resources-generic-schedule-20220406171920 Normal Velero restore created: 36m Restore controller example-acm-resources-schedule-20220406171920 Normal Velero restore created: 36m Restore controller example-acm-credentials-hive-schedule-20220406171919 1.1.5.13. Additional resources See DataProtectionApplication . See Importing the cluster with the auto import secret . See Scheduling and restoring backups . 1.1.6. Connecting clusters automatically by using a Managed Service Account The backup controller automatically connects imported clusters to the new hub cluster by using the Managed Service Account component. The Managed Service Account creates a token that is backed up for each imported cluster in each managed cluster namespace. The token uses a klusterlet-bootstrap-kubeconfig ClusterRole binding, which allows the token to be used by an automatic import operation. The klusterlet-bootstrap-kubeconfig ClusterRole can only get or update the bootstrap-hub-kubeconfig secret. To learn more about the Managed Service Account component, see What is Managed Service Account? . When the activation data is restored on the new hub cluster, the restore controller runs a post restore operation and looks for all managed clusters in the Pending Import state. If a valid token generated by the Managed Service Account is found, the controller creates an auto-import-secret using the token. As a result, the import component tries to reconnect the managed cluster. If the cluster is accessible, the operation is successful. 1.1.6.1. Enabling automatic import The automatic import feature using the Managed Service Account component is disabled by default. To enable the automatic import feature, complete the following steps: Enable the Managed Service Account component by setting the managedserviceaccount enabled parameter to true in the MultiClusterEngine resource. See the following example: apiVersion: multicluster.openshift.io/v1 kind: MultiClusterEngine metadata: name: multiclusterhub spec: overrides: components: - enabled: true name: managedserviceaccount Enable the automatic import feature for the BackupSchedule.cluster.open-cluster-management.io resource by setting the useManagedServiceAccount parameter to true . See the following example: apiVersion: cluster.open-cluster-management.io/v1beta1 kind: BackupSchedule metadata: name: schedule-acm-msa namespace: open-cluster-management-backup spec: veleroSchedule: veleroTtl: 120h useManagedServiceAccount: true The default token validity duration is set to twice the value of veleroTtl to increase the chance of the token being valid for all backups storing the token for their entire lifecycle. In some cases, you might need to control how long a token is valid by setting a value for the optional managedServiceAccountTTL property. Use managedServiceAccountTTL with caution if you need to update the default token expiration time for the generated tokens. Changing the token expiration time from the default value might result in producing backups with tokens set to expire during the lifecycle of the backup. As a result, the import feature does not work for the managed clusters. 
Important : Do not use managedServiceAccountTTL unless you need to control how long the token is valid. See the following example for using the managedServiceAccountTTL property: apiVersion: cluster.open-cluster-management.io/v1beta1 kind: BackupSchedule metadata: name: schedule-acm-msa namespace: open-cluster-management-backup spec: veleroSchedule: veleroTtl: 120h useManagedServiceAccount: true managedServiceAccountTTL: 300h After you enable the automatic import feature, the backup component starts processing imported managed clusters by creating the following: A ManagedServiceAddon named managed-serviceaccount . A ManagedServiceAccount named auto-import-account . A ManifestWork for each ManagedServiceAccount to set up a klusterlet-bootstrap-kubeconfig RoleBinding for the ManagedServiceAccount token on the managed cluster. The token is only created if the managed cluster is accessible when you create the Managed Service Account, otherwise it is created later once the managed cluster becomes available. 1.1.6.2. Automatic import considerations The following scenarios can prevent the managed cluster from being automatically imported when moving to a new hub cluster: When running a hub backup without a ManagedServiceAccount token, for example when you create the ManagedServiceAccount resource while the managed cluster is not accessible, the backup does not contain a token to auto import the managed cluster. The auto import operation fails if the auto-import-account secret token is valid and is backed up but the restore operation is run when the token available with the backup has already expired. The restore.cluster.open-cluster-management.io resource reports invalid token issues for each managed cluster. Since the auto-import-secret created on restore uses the ManagedServiceAccount token to connect to the managed cluster, the managed cluster must also provide the kube apiserver information. The apiserver must be set on the ManagedCluster resource. See the following example: apiVersion: cluster.open-cluster-management.io/v1 kind: ManagedCluster metadata: name: managed-cluster-name spec: hubAcceptsClient: true leaseDurationSeconds: 60 managedClusterClientConfigs: url: <apiserver> When a cluster is imported on the hub cluster, the apiserver is only set up automatically on OpenShift Container Platform clusters. You must set the apiserver manually on other types of managed clusters, such as EKS clusters, otherwise the automatic import feature ignores the clusters. As a result, the clusters remain in the Pending Import state when you move them to the restore hub cluster. It is possible that a ManagedServiceAccount secret might not be included in a backup if the backup schedule runs before the backup label is set on the ManagedServiceAccount secret. ManagedServiceAccount secrets don't have the cluster open-cluster-management.io/backup label set on creation. For this reason, the backup controller regularly searches for ManagedServiceAccount secrets under the managed cluster's namespaces, and adds the backup label if not found. 1.1.6.3. Disabling automatic import You can disable the automatic import cluster feature by setting the useManagedServiceAccount parameter to false in the BackupSchedule resource. See the following example: apiVersion: cluster.open-cluster-management.io/v1beta1 kind: BackupSchedule metadata: name: schedule-acm-msa namespace: open-cluster-management-backup spec: veleroSchedule: veleroTtl: 120h useManagedServiceAccount: false The default value is false . 
After setting the value to false , the backup operator removes all created resources, including ManagedServiceAddon , ManagedServiceAccount , and ManifestWork . Removing the resources deletes the automatic import token on the hub cluster and managed cluster. 1.1.6.4. Additional resources See What is Managed Service Account? to learn more about the Managed Service Account component. Return to Automatically connecting clusters by using a Managed Service Account . 1.1.7. Validating your backup or restore configurations When you set the cluster-backup option to true on the MultiClusterHub resource, multicluster engine operator installs the cluster backup and restore operator Helm chart that is named the cluster-backup-chart . This chart then installs the backup-restore-enabled and backup-restore-auto-import policies. Use these policies to view information about issues with your backup and restore components. Note: A hub cluster is automatically imported and self-managed as the local-cluster . If you disable self-management by setting disableHubSelfManagement to true on the MultiClusterHub resource, the backup-restore-enabled policy is not placed on the hub cluster and the policy templates do not produce any reports. If a hub cluster is managed by a global hub cluster, or if it is installed on a managed cluster instance, the self-management option might be disabled by setting the disableHubSelfManagement parameter to true . In this instance, you can still enable the backup-restore-enabled policy on the hub cluster. Set the is-hub=true label on the ManagedCluster resource that represents the local cluster. The backup-restore-enabled policy includes a set of templates that check for the following constraints: OADP channel validation When you enable the backup component on the MultiClusterHub , the cluster backup and restore operator Helm chart can automatically install the OADP operator in the open-cluster-management-backup namespace. You can also manually install the OADP operator in the namespace. The OADP channel that you selected for manual installation must match or exceed the version set by the Red Hat Advanced Cluster Management backup and restore operator Helm chart. Since the OADP Operator and Velero Custom Resource Definitions (CRDs) are cluster-scoped , you cannot have multiple versions on the same cluster. You must install the same version in the open-cluster-management-backup namespace and any other namespaces. Use the following templates to check for availability and validate the OADP installation: oadp-operator-exists : Use this template to verify if the OADP operator is installed in the open-cluster-management-backup namespace. oadp-channel-validation : Use this template to ensure the OADP operator version in the open-cluster-management-backup namespace matches or exceeds the version set by the Red Hat Advanced Cluster Management backup and restore operator. custom-oadp-channel-validation : Use this template to check if OADP operators in other namespaces match the version in the open-cluster-management-backup namespace. Pod validation The following templates check the pod status for the backup component and dependencies: acm-backup-pod-running template checks if the backup and restore operator pod is running. oadp-pod-running template checks if the OADP operator pod is running. velero-pod-running template checks if the Velero pod is running.
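In addition to reviewing the policy templates, you can check the same pods directly on the hub cluster. The following command is a minimal sketch that assumes the default open-cluster-management-backup namespace used by the backup component:

# The backup and restore operator, OADP operator, and Velero pods must all report a Running status
oc get pods -n open-cluster-management-backup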
Data Protection Application validation data-protection-application-available template checks if a DataProtectionApplication.oadp.openshift.io resource is created. This OADP resource sets up Velero configurations. Backup storage validation backup-storage-location-available template checks if a BackupStorageLocation.velero.io resource is created and if the status value is Available . This implies that the connection to the backup storage is valid. BackupSchedule collision validation acm-backup-clusters-collision-report template verifies that the status is not BackupCollision , if a BackupSchedule.cluster.open-cluster-management.io exists on the current hub cluster. This verifies that the current hub cluster is not in collision with any other hub cluster when you write backup data to the storage location. For a definition of the BackupCollision , see Preventing backup collisions . BackupSchedule and restore status validation acm-backup-phase-validation template checks that the status is not in a Failed or Empty state, if a BackupSchedule.cluster.open-cluster-management.io exists on the current cluster. This ensures that if this cluster is the primary hub cluster and is generating backups, the BackupSchedule.cluster.open-cluster-management.io status is healthy. The same template checks that the status is not in a Failed or Empty state, if a Restore.cluster.open-cluster-management.io exists on the current cluster. This ensures that if this cluster is the secondary hub cluster and is restoring backups, the Restore.cluster.open-cluster-management.io status is healthy. Backups exist validation acm-managed-clusters-schedule-backups-available template checks if Backup.velero.io resources are available at the location specified by the BackupStorageLocation.velero.io , and if the backups are created by a BackupSchedule.cluster.open-cluster-management.io resource. This validates that the backups have been run at least once, using the backup and restore operator. Backups for completion The acm-backup-in-progress-report template checks if Backup.velero.io resources are stuck in the InProgress state. This validation is added because with a large number of resources, the Velero pod restarts as the backup runs, and the backup stays in progress without proceeding to completion. During a normal backup, the backup resources are in progress at some point while the backup runs, but are not stuck and run to completion. It is normal to see the acm-backup-in-progress-report template report a warning during the time the schedule is running and backups are in progress. Backup schedules generate backups on the primary hub cluster The backup-schedule-cron-enabled policy template notifies the hub cluster administrator that the primary hub cluster is not creating new backups. The policy shows that the BackupSchedule.cluster.open-cluster-management.io resource is disabled on the primary hub cluster or is not able to create scheduled backups. The backup-schedule-cron-enabled policy template is in violation if the backup schedule on the primary hub cluster did not generate new backups when the BackupSchedule cron job definition expects new backups. This might happen if the BackupSchedule does not exist, is paused, or the BackupSchedule cron job property was updated and the time to run the backups increased. The template is in violation until a new set of backups is created.
The template validation works in the following ways: A BackupSchedule.cluster.open-cluster-management.io actively runs and saves new backups at the storage location. The backup-schedule-cron-enabled policy template completes this validation. The template checks that there is a Backup.velero.io with a velero.io/schedule-name: acm-validation-policy-schedule label at the storage location. The acm-validation-policy-schedule backups are set to expire after you set a time for the backups cron schedule. If no cron job is running to create backups, the old acm-validation-policy-schedule backup is deleted because it expired and a new one is not created. As a result, if no acm-validation-policy-schedule backups exists, there are no active cron jobs generating backups. The backup-restore-auto-import policy includes a set of templates that check for the following constraints: Auto import secret validation The auto-import-account-secret template checks whether a ManagedServiceAccount secret is created in the managed cluster namespaces other than the local-cluster . The backup controller regularly scans for imported managed clusters. As soon as a managed cluster is discovered, the backup controller creates the ManagedServiceAccount resource in the managed cluster namespace. This process initiates token creation on the managed cluster. However, if the managed cluster is not accessible at the time of this operation, the ManagedServiceAccount is unable to create the token. For example, if the managed cluster is hibernating, it is unable to create the token. So, if a hub backup is executed during this period, the backup then lacks a token for auto-importing the managed cluster. Auto import backup label validation The auto-import-backup-label template verifies the existence of a ManagedServiceAccount secret in the managed cluster namespaces other than the local-cluster . If the template finds the ManagedServiceAccount secret, then the template enforces the cluster.open-cluster-management.io/backup label on the secret. This label is crucial for including the ManagedServiceAccount secrets in Red Hat Advanced Cluster Management backups. 1.1.7.1. Protecting data using server-side encryption Server-side encryption is data encryption for the application or service that receives the data at the storage location. The backup mechanism itself does not encrypt data while in-transit (as it travels to and from backup storage location), or at rest (while it is stored on disks at backup storage location). Instead it relies on the native mechanisms in the object and snapshot systems. Best practice : Encrypt the data at the destination using the available backup storage server-side encryption. The backup contains resources, such as credentials and configuration files that need to be encrypted when stored outside of the hub cluster. You can use serverSideEncryption and kmsKeyId parameters to enable encryption for the backups stored in Amazon S3. For more details, see the Backup Storage Location YAML . The following sample specifies an AWS KMS key ID when setting up the DataProtectionApplication resource: spec: backupLocations: - velero: config: kmsKeyId: 502b409c-4da1-419f-a16e-eif453b3i49f profile: default region: us-east-1 Refer to Velero supported storage providers to find out about all of the configurable parameters of other storage providers. 1.1.7.2. Additional resources See the Backup Storage Location YAML . See Velero supported storage providers . Return to Validating your backup or restore configurations . 1.1.8. 
Backup and restore advanced configuration You can further configure backup and restore by viewing the following sections: 1.1.8.1. Resource requests and limits customization When Velero is initially installed, the Velero pod is set to the default CPU and memory limits as defined in the following sample: resources: limits: cpu: "1" memory: 256Mi requests: cpu: 500m memory: 128Mi The limits from the sample work well with some scenarios, but might need to be updated when your cluster backs up a large number of resources. For instance, when a backup runs on a hub cluster that manages 2000 clusters, the Velero pod fails due to an out-of-memory (OOM) error. The following configuration allows for the backup to complete for this scenario: limits: cpu: "2" memory: 1Gi requests: cpu: 500m memory: 256Mi To update the limits and requests for the Velero pod resource, you need to update the DataProtectionApplication resource and insert the resourceAllocations template for the Velero pod. View the following sample: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: velero namespace: open-cluster-management-backup spec: ... configuration: ... velero: podConfig: resourceAllocations: limits: cpu: "2" memory: 1Gi requests: cpu: 500m memory: 256Mi 1.1.8.2. Additional resources Refer to the Default Velero cloud provider plugins topic in the Red Hat OpenShift Container Platform documentation to find out more about the DataProtectionApplication parameters. Refer to the CPU and memory requirement for configurations topic in the OpenShift Container Platform documentation for more details about the backup and restore CPU and memory requirements based on cluster usage. 1.2. VolSync persistent volume replication service VolSync is a Kubernetes operator that enables asynchronous replication of persistent volumes within a cluster, or across clusters with storage types that are not otherwise compatible for replication. It uses the Container Storage Interface (CSI) to overcome the compatibility limitation. After deploying the VolSync operator in your environment, you can use it to create and maintain copies of your persistent data. VolSync can only replicate persistent volume claims on Red Hat OpenShift Container Platform clusters that are at version 4.14 or later. Important: VolSync only supports replicating persistent volume claims with the volumeMode of Filesystem . If you do not select the volumeMode , it defaults to Filesystem . Replicating persistent volumes with VolSync Installing VolSync on the managed clusters Configuring an Rsync-TLS replication Configuring an Rsync replication Configuring a restic backup Configuring an Rclone replication Converting a replicated image to a usable persistent volume claim Scheduling your synchronization 1.2.1. Replicating persistent volumes with VolSync You can use four supported methods to replicate persistent volumes with VolSync, depending on the number of synchronization locations that you have: rsync, rsync-tls, restic, or Rclone. 1.2.1.1. Prerequisites Before installing VolSync on your clusters, you must have the following requirements: A configured Red Hat OpenShift Container Platform environment on a supported Red Hat Advanced Cluster Management hub cluster At least two managed clusters for the same Red Hat Advanced Cluster Management hub cluster Network connectivity between the clusters that you are configuring with VolSync.
If the clusters are not on the same network, you can configure the Submariner multicluster networking and service discovery and use the ClusterIP value for ServiceType to network the clusters, or use a load balancer with the LoadBalancer value for ServiceType . The storage driver that you use for your source persistent volume must be CSI-compatible and able to support snapshots. 1.2.1.2. Installing VolSync on the managed clusters To enable VolSync to replicate the persistent volume claim on one cluster to the persistent volume claim of another cluster, you must install VolSync on both the source and the target managed clusters. VolSync does not create its own namespace, so it is in the same namespace as other OpenShift Container Platform all-namespace operators. Any changes that you make to the operator settings for VolSync also affect the other operators in the same namespace, such as if you change to manual approval for channel updates. You can use either of two methods to install VolSync on two clusters in your environment. You can either add a label to each of the managed clusters in the hub cluster, or you can manually create and apply a ManagedClusterAddOn , as described in the following sections: 1.2.1.2.1. Installing VolSync by using labels You can install VolSync on the managed cluster by adding a label. Complete the following steps from the Red Hat Advanced Cluster Management console: Select one of the managed clusters from the Clusters page in the hub cluster console to view its details. In the Labels field, add the following label: The VolSync service pod is installed on the managed cluster. Add the same label to the other managed cluster. Run the following command on each managed cluster to confirm that the VolSync operator is installed: There is an operator listed for VolSync when it is installed. Complete the following steps from the command-line interface: Start a command-line session on the hub cluster. Enter the following command to add the label to the first cluster: Replace managed-cluster-1 with the name of one of your managed clusters. Enter the following command to add the label to the second cluster: Replace managed-cluster-2 with the name of your other managed cluster. A ManagedClusterAddOn resource should be created automatically on your hub cluster in the namespace of each corresponding managed cluster. 1.2.1.2.2. Installing VolSync by using a ManagedClusterAddOn To install VolSync on the managed cluster by adding a ManagedClusterAddOn manually, complete the following steps: On the hub cluster, create a YAML file called volsync-mcao.yaml that contains content that is similar to the following example: apiVersion: addon.open-cluster-management.io/v1alpha1 kind: ManagedClusterAddOn metadata: name: volsync namespace: <managed-cluster-1-namespace> spec: {} Replace managed-cluster-1-namespace with the namespace of one of your managed clusters. This namespace is the same as the name of the managed cluster. Note: The name must be volsync . Apply the file to your configuration by entering a command similar to the following example: Repeat the procedure for the other managed cluster. A ManagedClusterAddOn resource should be created automatically on your hub cluster in the namespace of each corresponding managed cluster. 1.2.1.2.3. Updating the VolSync ManagedClusterAddOn Depending on what version of Red Hat Advanced Cluster Management you use, you might need to update your VolSync version.
To update the VolSync ManagedClusterAddOn resource, complete the following steps: Add the following annotation to your ManagedClusterAddOn resource: annotations: operator-subscription-channel: stable-0.9 Define the operator-subscription-channel from which you want VolSync deployed. Verify that you updated your VolSync version by going to your ManagedClusterAddOn resource and confirming that your chosen operator-subscription-channel is included. 1.2.1.3. Configuring an Rsync-TLS replication You can create a 1:1 asynchronous replication of persistent volumes by using an Rsync-TLS replication. You can use Rsync-TLS-based replication for disaster recovery or sending data to a remote site. When using Rsync-TLS, VolSync synchronizes data by using Rsync across a TLS-protected tunnel provided by stunnel. See the stunnel documentation for more information. The following example shows how to configure replication by using the Rsync-TLS method. For additional information about Rsync-TLS, see Usage in the VolSync documentation. 1.2.1.3.1. Configuring Rsync-TLS replication across managed clusters For Rsync-TLS-based replication, configure custom resources on the source and destination clusters. The custom resources use the address value to connect the source to the destination, and a TLS-protected tunnel provided by stunnel to ensure that the transferred data is secure. See the following information and examples to configure an Rsync-TLS replication from a persistent volume claim on the source cluster in the source-ns namespace to a persistent volume claim on a destination cluster in the destination-ns namespace. Replace the values where necessary: Configure your destination cluster. Run the following command on the destination cluster to create the namespace: Replace destination-ns with the namespace where your replication destination is located. Create a new YAML file called replication_destination.yaml and copy the following content: apiVersion: volsync.backube/v1alpha1 kind: ReplicationDestination metadata: name: <destination> namespace: <destination-ns> spec: rsyncTLS: serviceType: LoadBalancer 1 copyMethod: Snapshot capacity: 2Gi 2 accessModes: [ReadWriteOnce] storageClassName: gp2-csi volumeSnapshotClassName: csi-aws-vsc 1 For this example, the ServiceType value of LoadBalancer is used. The load balancer service is created by the source cluster to enable your source managed cluster to transfer information to a different destination managed cluster. You can use ClusterIP as the service type if your source and destinations are on the same cluster, or if you have Submariner network service configured. Note the address and the name of the secret to refer to when you configure the source cluster. 2 Make sure that the capacity value matches the capacity of the persistent volume claim that is being replicated. Optional: Specify the values of the storageClassName and volumeSnapshotClassName parameters if you are using a storage class and volume snapshot class name that are different than the default values for your environment. Run the following command on the destination cluster to create the replicationdestination resource: Replace destination-ns with the name of the namespace where your destination is located.
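For example, assuming that you saved the file from the previous step as replication_destination.yaml, the resource can be created with a command similar to the following:

# Run on the destination cluster; the namespace is taken from the resource metadata
oc apply -f replication_destination.yaml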
After the replicationdestination resource is created, the following parameters and values are added to the resource: Parameter Value .status.rsyncTLS.address IP address of the destination cluster that is used to enable the source and destination clusters to communicate. .status.rsyncTLS.keySecret Name of the secret containing the TLS key that authenticates the connection with the source cluster. Run the following command to copy the value of .status.rsyncTLS.address to use on the source cluster. Replace destination with the name of your replication destination custom resource. Replace destination-ns with the name of the namespace where your destination is located: The output appears similar to the following, which is for an Amazon Web Services environment: Run the following command to copy the name of the secret: Replace destination with the name of your replication destination custom resource. Replace destination-ns with the name of the namespace where your destination is located. You will have to enter it on the source cluster when you configure the source. The output should be the name of your SSH keys secret file, which might resemble the following name: Copy the key secret from the destination cluster by entering the following command against the destination cluster: Replace destination-ns with the namespace where your replication destination is located. Open the secret file in the vi editor by entering the following command: In the open secret file on the destination cluster, make the following changes: Change the namespace to the namespace of your source cluster. For this example, it is source-ns . Remove the owner references ( .metadata.ownerReferences ). On the source cluster, create the secret file by entering the following command on the source cluster: Identify the source persistent volume claim that you want to replicate. Note: The source persistent volume claim must be on a CSI storage class. Create the ReplicationSource items. Create a new YAML file called replication_source on the source cluster and copy the following content: apiVersion: volsync.backube/v1alpha1 kind: ReplicationSource metadata: name: <source> 1 namespace: <source-ns> 2 spec: sourcePVC: <persistent_volume_claim> 3 trigger: schedule: "*/3 * * * *" #/* rsyncTLS: keySecret: <mykeysecret> 4 address: <my.host.com> 5 copyMethod: Snapshot storageClassName: gp2-csi volumeSnapshotClassName: csi-aws-vsc 1 Replace source with the name for your replication source custom resource. See step 3-vi of this procedure for instructions on how to replace this automatically. 2 Replace source-ns with the namespace of the persistent volume claim where your source is located. See step 3-vi of this procedure for instructions on how to replace this automatically. 3 Replace persistent_volume_claim with the name of your source persistent volume claim. 4 Replace mykeysecret with the name of the secret that you copied from the destination cluster to the source cluster (the value of USDKEYSECRET ). 5 Replace my.host.com with the host address that you copied from the .status.rsyncTLS.address field of the ReplicationDestination when you configured it. You can find examples of sed commands in the step. If your storage driver supports cloning, using Clone as the value for copyMethod might be a more streamlined process for the replication. 
Optional: Specify the values of the storageClassName and volumeSnapshotClassName parameters if you are using a storage class and volume snapshot class name that are different than the default values for your environment. You can now set up the synchronization method of the persistent volume. On the source cluster, modify the replication_source.yaml file by replacing the value of the address and keySecret in the ReplicationSource object with the values that you noted from the destination cluster by entering the following commands: Replace my.host.com with the host address that you copied from the .status.rsyncTLS.address field of the ReplicationDestination when you configured it. Replace keySecret with the keys that you copied from the .status.rsyncTLS.keySecret field of the ReplicationDestination when you configured it. Replace source with the name of the persistent volume claim where your source is located. Note: You must create the file in the same namespace as the persistent volume claim that you want to replicate. Verify that the replication completed by running the following command on the ReplicationSource object: Replace source-ns with the namespace of the persistent volume claim where your source is located. Replace source with the name of your replication source custom resource. If the replication was successful, the output should be similar to the following example: If the Last Sync Time has no time listed, then the replication is not complete. You have a replica of your original persistent volume claim. 1.2.1.4. Configuring an Rsync replication Important: Use Rsync-TLS instead of Rsync for enhanced security. By using Rsync-TLS, you can avoid using elevated user permissions that are not required for replicating persistent volumes. You can create a 1:1 asynchronous replication of persistent volumes by using an Rsync replication. You can use Rsync-based replication for disaster recovery or sending data to a remote site. The following example shows how to configure by using the Rsync method. 1.2.1.4.1. Configuring Rsync replication across managed clusters For Rsync-based replication, configure custom resources on the source and destination clusters. The custom resources use the address value to connect the source to the destination, and the sshKeys to ensure that the transferred data is secure. Note: You must copy the values for address and sshKeys from the destination to the source, so configure the destination before you configure the source. This example provides the steps to configure an Rsync replication from a persistent volume claim on the source cluster in the source-ns namespace to a persistent volume claim on a destination cluster in the destination-ns namespace. You can replace those values with other values, if necessary. Configure your destination cluster. Run the following command on the destination cluster to create the namespace: Replace destination-ns with a name for the namespace that will contain your destination persistent volume claim. Copy the following YAML content to create a new file called replication_destination.yaml : apiVersion: volsync.backube/v1alpha1 kind: ReplicationDestination metadata: name: <destination> namespace: <destination-ns> spec: rsync: serviceType: LoadBalancer copyMethod: Snapshot capacity: 2Gi accessModes: [ReadWriteOnce] storageClassName: gp2-csi volumeSnapshotClassName: csi-aws-vsc Note: The capacity value should match the capacity of the persistent volume claim that is being replicated. 
Replace destination with the name of your replication destination CR. Replace destination-ns with the name of the namespace where your destination is located. For this example, the ServiceType value of LoadBalancer is used. The load balancer service is created by the source cluster to enable your source managed cluster to transfer information to a different destination managed cluster. You can use ClusterIP as the service type if your source and destinations are on the same cluster, or if you have Submariner network service configured. Note the address and the name of the secret to refer to when you configure the source cluster. The storageClassName and volumeSnapshotClassName are optional parameters. Specify the values for your environment, particularly if you are using a storage class and volume snapshot class name that are different than the default values for your environment. Run the following command on the destination cluster to create the replicationdestination resource: Replace destination-ns with the name of the namespace where your destination is located. After the replicationdestination resource is created, following parameters and values are added to the resource: Parameter Value .status.rsync.address IP address of the destination cluster that is used to enable the source and destination clusters to communicate. .status.rsync.sshKeys Name of the SSH key file that enables secure data transfer from the source cluster to the destination cluster. Run the following command to copy the value of .status.rsync.address to use on the source cluster: Replace destination with the name of your replication destination custom resource. Replace destination-ns with the name of the namespace where your destination is located. The output should appear similar to the following output, which is for an Amazon Web Services environment: Run the following command to copy the name of the secret: Replace destination with the name of your replication destination custom resource. Replace destination-ns with the name of the namespace where your destination is located. You will have to enter it on the source cluster when you configure the source. The output should be the name of your SSH keys secret file, which might resemble the following name: Copy the SSH secret from the destination cluster by entering the following command against the destination cluster: Replace destination-ns with the namespace of the persistent volume claim where your destination is located. Open the secret file in the vi editor by entering the following command: In the open secret file on the destination cluster, make the following changes: Change the namespace to the namespace of your source cluster. For this example, it is source-ns . Remove the owner references ( .metadata.ownerReferences ). On the source cluster, create the secret file by entering the following command on the source cluster: Identify the source persistent volume claim that you want to replicate. Note: The source persistent volume claim must be on a CSI storage class. Create the ReplicationSource items. 
Copy the following YAML content to create a new file called replication_source.yaml on the source cluster: apiVersion: volsync.backube/v1alpha1 kind: ReplicationSource metadata: name: <source> namespace: <source-ns> spec: sourcePVC: <persistent_volume_claim> trigger: schedule: "*/3 * * * *" #/* rsync: sshKeys: <mysshkeys> address: <my.host.com> copyMethod: Snapshot storageClassName: gp2-csi volumeSnapshotClassName: csi-aws-vsc Replace source with the name for your replication source custom resource. See step 3-vi of this procedure for instructions on how to replace this automatically. Replace source-ns with the namespace of the persistent volume claim where your source is located. See step 3-vi of this procedure for instructions on how to replace this automatically. Replace persistent_volume_claim with the name of your source persistent volume claim. Replace mysshkeys with the keys that you copied from the .status.rsync.sshKeys field of the ReplicationDestination when you configured it. Replace my.host.com with the host address that you copied from the .status.rsync.address field of the ReplicationDestination when you configured it. If your storage driver supports cloning, using Clone as the value for copyMethod might be a more streamlined process for the replication. StorageClassName and volumeSnapshotClassName are optional parameters. If you are using a storage class and volume snapshot class name that are different than the defaults for your environment, specify those values. You can now set up the synchronization method of the persistent volume. On the source cluster, modify the replication_source.yaml file by replacing the value of the address and sshKeys in the ReplicationSource object with the values that you noted from the destination cluster by entering the following commands: Replace my.host.com with the host address that you copied from the .status.rsync.address field of the ReplicationDestination when you configured it. Replace mysshkeys with the keys that you copied from the .status.rsync.sshKeys field of the ReplicationDestination when you configured it. Replace source with the name of the persistent volume claim where your source is located. Note: You must create the file in the same namespace as the persistent volume claim that you want to replicate. Verify that the replication completed by running the following command on the ReplicationSource object: Replace source-ns with the namespace of the persistent volume claim where your source is located. Replace source with the name of your replication source custom resource. If the replication was successful, the output should be similar to the following example: If the Last Sync Time has no time listed, then the replication is not complete. You have a replica of your original persistent volume claim. 1.2.1.5. Configuring a restic backup A restic-based backup copies a restic-based backup copy of the persistent volume to a location that is specified in your restic-config.yaml secret file. A restic backup does not synchronize data between the clusters, but provides data backup. 
Complete the following steps to configure a restic-based backup: Specify a repository where your backup images are stored by creating a secret that resembles the following YAML content: apiVersion: v1 kind: Secret metadata: name: restic-config type: Opaque stringData: RESTIC_REPOSITORY: <my-restic-repository> RESTIC_PASSWORD: <my-restic-password> AWS_ACCESS_KEY_ID: access AWS_SECRET_ACCESS_KEY: password Replace my-restic-repository with the location of the S3 bucket repository where you want to store your backup files. Replace my-restic-password with the encryption key that is required to access the repository. Replace access and password with the credentials for your provider, if required. If you need to prepare a new repository, see Preparing a new repository for the procedure. If you use that procedure, skip the step that requires you to run the restic init command to initialize the repository. VolSync automatically initializes the repository during the first backup. Important: When backing up multiple persistent volume claims to the same S3 bucket, the path to the bucket must be unique for each persistent volume claim. Each persistent volume claim is backed up with a separate ReplicationSource , and each requires a separate restic-config secret. By sharing the same S3 bucket, each ReplicationSource has write access to the entire S3 bucket. Configure your backup policy by creating a ReplicationSource object that resembles the following YAML content: apiVersion: volsync.backube/v1alpha1 kind: ReplicationSource metadata: name: mydata-backup spec: sourcePVC: <source> trigger: schedule: "*/30 * * * *" #\* restic: pruneIntervalDays: 14 repository: <restic-config> retain: hourly: 6 daily: 5 weekly: 4 monthly: 2 yearly: 1 copyMethod: Clone # The StorageClass to use when creating the PiT copy (same as source PVC if omitted) #storageClassName: my-sc-name # The VSC to use if the copy method is Snapshot (default if omitted) #volumeSnapshotClassName: my-vsc-name Replace source with the persistent volume claim that you are backing up. Replace the value for schedule with how often to run the backup. This example has the schedule for every 30 minutes. See Scheduling your synchronization for more information about setting up your schedule. Replace the value of PruneIntervalDays to the number of days that elapse between instances of repacking the data to save space. The prune operation can generate significant I/O traffic while it is running. Replace restic-config with the name of the secret that you created in step 1. Set the values for retain to your retention policy for the backed up images. Best practice: Use Clone for the value of CopyMethod to ensure that a point-in-time image is saved. Note: Restic movers run without root permissions by default. If you want to run restic movers as root, run the following command to add the elevated permissions annotation to your namespace. Replace <namespace> with the name of your namespace. 1.2.1.5.1. Restoring a restic backup You can restore the copied data from a restic backup into a new persistent volume claim. Best practice: Restore only one backup into a new persistent volume claim. To restore the restic backup, complete the following steps: Create a new persistent volume claim to contain the new data similar to the following example: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: <pvc-name> spec: accessModes: - ReadWriteOnce resources: requests: storage: 3Gi Replace pvc-name with the name of the new persistent volume claim. 
Create a ReplicationDestination custom resource that resembles the following example to specify where to restore the data: apiVersion: volsync.backube/v1alpha1 kind: ReplicationDestination metadata: name: <destination> spec: trigger: manual: restore-once restic: repository: <restic-repo> destinationPVC: <pvc-name> copyMethod: Direct Replace destination with the name of your replication destination CR. Replace restic-repo with the path to your repository where the source is stored. Replace pvc-name with the name of the persistent volume claim where you want to restore the data, which is the claim that you created in the previous step. Use an existing persistent volume claim for this, rather than provisioning a new one. The restore process only needs to be completed once, and this example restores the most recent backup. For more information about restore options, see Restore options in the VolSync documentation. 1.2.1.6. Configuring an Rclone replication An Rclone backup copies a single persistent volume to multiple locations by using Rclone through an intermediate object storage location, like AWS S3. It can be helpful when distributing data to multiple locations. Complete the following steps to configure an Rclone replication: Create a ReplicationSource custom resource that resembles the following example: apiVersion: volsync.backube/v1alpha1 kind: ReplicationSource metadata: name: <source> namespace: <source-ns> spec: sourcePVC: <source-pvc> trigger: schedule: "*/6 * * * *" #\* rclone: rcloneConfigSection: <intermediate-s3-bucket> rcloneDestPath: <destination-bucket> rcloneConfig: <rclone-secret> copyMethod: Snapshot storageClassName: <my-sc-name> volumeSnapshotClassName: <my-vsc> Replace source with the name for your replication source custom resource. Replace source-ns with the namespace of the persistent volume claim where your source is located. Replace source-pvc with the name of the persistent volume claim that you are replicating. Replace the value of schedule with how often to run the replication. This example has the schedule for every 6 minutes. This value must be within quotation marks. See Scheduling your synchronization for more information. Replace intermediate-s3-bucket with the path to the configuration section of the Rclone configuration file. Replace destination-bucket with the path to the object bucket where you want your replicated files copied. Replace rclone-secret with the name of the secret that contains your Rclone configuration information. Set the value for copyMethod as Clone , Direct , or Snapshot . This value specifies whether the point-in-time copy is generated, and if so, what method is used for generating it. Replace my-sc-name with the name of the storage class that you want to use for your point-in-time copy. If not specified, the storage class of the source volume is used. Replace my-vsc with the name of the VolumeSnapshotClass to use, if you specified Snapshot as your copyMethod . This is not required for other types of copyMethod .
Create a ReplicationDestination custom resource that resembles the following example: apiVersion: volsync.backube/v1alpha1 kind: ReplicationDestination metadata: name: database-destination namespace: dest spec: trigger: schedule: "3,9,15,21,27,33,39,45,51,57 * * * *" #/* rclone: rcloneConfigSection: <intermediate-s3-bucket> rcloneDestPath: <destination-bucket> rcloneConfig: <rclone-secret> copyMethod: Snapshot accessModes: [ReadWriteOnce] capacity: 10Gi storageClassName: <my-sc> volumeSnapshotClassName: <my-vsc> Replace the value for schedule with how often to move the replication to the destination. The schedules for the source and destination must be offset to allow the data to finish replicating before it is pulled from the destination. This example has the schedule for every 6 minutes, offset by 3 minutes. This value must be within quotation marks. See Scheduling your synchronization for more information about scheduling. Replace intermediate-s3-bucket with the path to the configuration section of the Rclone configuration file. Replace destination-bucket with the path to the object bucket where you want your replicated files copied. Replace rclone-secret with the name of the secret that contains your Rclone configuration information. Set the value for copyMethod as Clone , Direct , or Snapshot . This value specifies whether the point-in-time copy is generated, and if so, which method is used for generating it. The value for accessModes specifies the access modes for the persistent volume claim. Valid values are ReadWriteOnce or ReadWriteMany . The capacity specifies the size of the destination volume, which must be large enough to contain the incoming data. Replace my-sc with the name of the storage class that you want to use as the destination for your point-in-time copy. If not specified, the system storage class is used. Replace my-vsc with the name of the VolumeSnapshotClass to use, if you specified Snapshot as your copyMethod . This is not required for other types of copyMethod . If not included, the system default VolumeSnapshotClass is used. Note: Rclone movers run without root permissions by default. If you want to run Rclone movers as root, run the following command to add the elevated permissions annotation to your namespace. Replace <namespace> with the name of your namespace. 1.2.1.7. Additional resources See the following topics for more information: See Creating a secret for Rsync-TLS replication to learn how to create your own secret for an Rsync-TLS replication. For additional information about Rsync, see Usage in the VolSync documentation. For additional information about restic options, see Backup options in the VolSync documentation. Go back to Installing VolSync on the managed clusters 1.2.2. Converting a replicated image to a usable persistent volume claim You might need to convert the replicated image to a persistent volume claim to recover data. When you replicate or restore a persistent volume claim from a ReplicationDestination location by using a VolumeSnapshot , a VolumeSnapshot is created. The VolumeSnapshot contains the latestImage from the last successful synchronization. The copy of the image must be converted to a persistent volume claim before it can be used. The VolSync ReplicationDestination volume populator can be used to convert a copy of the image to a usable persistent volume claim. Create a persistent volume claim with a dataSourceRef that references the ReplicationDestination where you want to restore a persistent volume claim.
This persistent volume claim is populated with the contents of the VolumeSnapshot that is specified in the status.latestImage setting of the ReplicationDestination custom resource. The following YAML content shows a sample persistent volume claim that might be used: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: <pvc-name> namespace: <destination-ns> spec: accessModes: - ReadWriteOnce dataSourceRef: kind: ReplicationDestination apiGroup: volsync.backube name: <replicationdestination_to_replace> resources: requests: storage: 2Gi Replace pvc-name with a name for your new persistent volume claim. Replace destination-ns with the namespace where the persistent volume claim and ReplicationDestination are located. Replace replicationdestination_to_replace with the ReplicationDestination name. Best practice: You can update resources.requests.storage to a different value, as long as the value is at least the same size as the initial source persistent volume claim. Validate that your persistent volume claim is bound in your environment by entering the following command: Note: If no latestImage exists, the persistent volume claim remains in a pending state until the ReplicationDestination completes and a snapshot is available. You can create a ReplicationDestination and a persistent volume claim that uses the ReplicationDestination at the same time. The persistent volume claim only starts the volume population process after the ReplicationDestination has completed a replication and a snapshot is available. You can find the snapshot in .status.latestImage . Additionally, if the storage class that is used has a volumeBindingMode value of WaitForFirstConsumer , the volume populator waits until there is a consumer of the persistent volume claim before it is populated. When a consumer requires access, such as a pod that wants to mount the persistent volume claim, the volume is populated. The VolSync volume populator controller uses the latestImage from the ReplicationDestination . The latestImage is updated each time a replication completes after the persistent volume claim was created. 1.2.3. Scheduling your synchronization Select from three options when determining how you start your replications: always running, on a schedule, or manually. Scheduling your replications is the option that is most often selected. The Schedule option runs replications at scheduled times. A schedule is defined by a cronspec , so the schedule can be configured as intervals of time or as specific times. The order of the schedule values is: "minute (0-59) hour (0-23) day-of-month (1-31) month (1-12) day-of-week (0-6)" The replication starts when the scheduled time occurs. Your setting for this replication option might resemble the following content: spec: trigger: schedule: "*/6 * * * *" After you enable one of these methods, synchronization runs according to the method that you configured. See the VolSync documentation for additional information and options. 1.2.4. VolSync advanced configuration You can further configure VolSync when replicating persistent volumes, such as creating your own secret. 1.2.4.1. Creating a secret for Rsync-TLS replication The source and destination must have access to the shared key for the TLS connection. You can find the key location in the keySecret field. If you do not provide a secret name in .spec.rsyncTLS.keySecret , the secret name is automatically generated and added to .status.rsyncTLS.keySecret .
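If you rely on the automatically generated key instead of creating your own secret, you can read the generated secret name from the destination status and copy that secret to the source namespace. The following lookup matches the keySecret query that appears in the command listing later in this document:
KEYSECRET=`oc get replicationdestination <destination> -n <destination-ns> --template={{.status.rsyncTLS.keySecret}}`
echo $KEYSECRET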
To create your own secret, complete the following steps: Use the following format for the secret: <id>:<at_least_32_hex_digits> See the following example: 1:23b7395fafc3e842bd8ac0fe142e6ad1 See the following secret.yaml example, which corresponds to the preceding key: apiVersion: v1 data: # echo -n 1:23b7395fafc3e842bd8ac0fe142e6ad1 | base64 psk.txt: MToyM2I3Mzk1ZmFmYzNlODQyYmQ4YWMwZmUxNDJlNmFkMQ== kind: Secret metadata: name: tls-key-secret type: Opaque
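One way to produce a key in the required <id>:<at_least_32_hex_digits> format is to generate the hexadecimal part with openssl and create the secret from the resulting file. This is an illustrative sketch rather than part of the documented procedure; the openssl and oc commands are assumptions:
echo -n "1:$(openssl rand -hex 32)" > psk.txt
oc create secret generic tls-key-secret --from-file=psk.txt=psk.txt -n <destination-ns>
Create the same secret, with the same key material, in the source namespace so that both sides share the key.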
|
[
"apiVersion: my.group/v1alpha1 kind: MyResource metadata: labels: cluster.open-cluster-management.io/backup: \"\"",
"apiVersion: my.group/v1alpha1 kind: MyResource metadata: labels: velero.io/exclude-from-backup: \"true\"",
"apiVersion: my.group/v1alpha1 kind: MyResource metadata: labels: cluster.open-cluster-management.io/backup: cluster-activation",
"installer.open-cluster-management.io/oadp-subscription-spec: '{\"channel\": \"stable-1.4\"}'",
"apiVersion: operator.open-cluster-management.io/v1 kind: MultiClusterHub metadata: annotations: installer.open-cluster-management.io/oadp-subscription-spec: '{\"channel\": \"stable-1.5\",\"installPlanApproval\": \"Automatic\",\"name\": \"redhat-oadp-operator\",\"source\": \"redhat-operators\",\"sourceNamespace\": \"openshift-marketplace\"}' name: multiclusterhub spec: {}",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa-sample spec: configuration: velero: defaultPlugins: - openshift - aws restic: enable: true backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: my-bucket prefix: my-prefix config: region: us-east-1 profile: \"default\" credential: name: cloud-credentials key: cloud snapshotLocations: - name: default velero: provider: aws config: region: us-west-2 profile: \"default\"",
"apiVersion: operator.open-cluster-management.io/v1 kind: MultiClusterHub metadata: annotations: installer.open-cluster-management.io/oadp-subscription-spec: '{\"source\": \"redhat-operator-index\"}'",
"get catalogsource -A",
"NAMESPACE NAME DISPLAY TYPE PUBLISHER AGE openshift-marketplace acm-custom-registry Advanced Cluster Management grpc Red Hat 42h openshift-marketplace multiclusterengine-catalog MultiCluster Engine grpc Red Hat 42h openshift-marketplace redhat-operator-index grpc 42h",
"apiVersion: operator.open-cluster-management.io/v1 kind: MultiClusterHub metadata: name: multiclusterhub namespace: open-cluster-management spec: availabilityConfig: High enableClusterBackup: false imagePullSecret: multiclusterhub-operator-pull-secret ingress: sslCiphers: - ECDHE-ECDSA-AES256-GCM-SHA384 - ECDHE-RSA-AES256-GCM-SHA384 - ECDHE-ECDSA-AES128-GCM-SHA256 - ECDHE-RSA-AES128-GCM-SHA256 overrides: components: - enabled: true name: multiclusterhub-repo - enabled: true name: search - enabled: true name: management-ingress - enabled: true name: console - enabled: true name: insights - enabled: true name: grc - enabled: true name: cluster-lifecycle - enabled: true name: volsync - enabled: true name: multicluster-engine - enabled: true name: cluster-backup separateCertificateManagement: false",
"create -f cluster_v1beta1_backupschedule.yaml",
"apiVersion: cluster.open-cluster-management.io/v1beta1 kind: BackupSchedule metadata: name: schedule-acm namespace: open-cluster-management-backup spec: veleroSchedule: 0 */2 * * * 1 veleroTtl: 120h 2",
"get BackupSchedule -n open-cluster-management-backup",
"create -f cluster_v1beta1_backupschedule.yaml",
"apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Restore metadata: name: restore-acm namespace: open-cluster-management-backup spec: veleroManagedClustersBackupName: latest veleroCredentialsBackupName: latest veleroResourcesBackupName: latest",
"get restore.velero.io -n open-cluster-management-backup",
"describe restore.cluster.open-cluster-management.io -n open-cluster-management-backup",
"apiVersion: cluster.open-cluster-management.io/v1beta1 kind: BackupSchedule metadata: name: schedule-acm namespace: open-cluster-management-backup spec: veleroSchedule: 0 */2 * * * veleroTtl: 120h",
"get BackupSchedule -n open-cluster-management-backup",
"get schedules -A | grep acm",
"get backupschedule -A",
"NAMESPACE NAME PHASE MESSAGE openshift-adp schedule-hub-1 BackupCollision Backup acm-resources-schedule-20220301234625, from cluster with id [be97a9eb-60b8-4511-805c-298e7c0898b3] is using the same storage location. This is a backup collision with current cluster [1f30bfe5-0588-441c-889e-eaf0ae55f941] backup. Review and resolve the collision then create a new BackupSchedule resource to resume backups from this cluster.",
"get restore -n open-cluster-management-backup",
"apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Restore metadata: name: restore-acm namespace: open-cluster-management-backup spec: veleroManagedClustersBackupName: latest veleroCredentialsBackupName: latest veleroResourcesBackupName: latest",
"apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Restore metadata: name: restore-acm-passive-sync namespace: open-cluster-management-backup spec: syncRestoreWithNewBackups: true # restore again when new backups are available restoreSyncInterval: 10m # check for new backups every 10 minutes cleanupBeforeRestore: CleanupRestored veleroManagedClustersBackupName: skip veleroCredentialsBackupName: latest veleroResourcesBackupName: latest",
"apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Restore metadata: name: restore-acm-passive namespace: open-cluster-management-backup spec: cleanupBeforeRestore: CleanupRestored veleroManagedClustersBackupName: skip veleroCredentialsBackupName: latest veleroResourcesBackupName: latest",
"apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Restore metadata: name: restore-acm-passive-activate namespace: open-cluster-management-backup spec: cleanupBeforeRestore: CleanupRestored veleroManagedClustersBackupName: latest veleroCredentialsBackupName: skip veleroResourcesBackupName: skip",
"get restore -n open-cluster-management-backup",
"apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Restore metadata: name: restore-acm namespace: open-cluster-management-backup spec: cleanupBeforeRestore: CleanupRestored veleroManagedClustersBackupName: latest veleroCredentialsBackupName: latest veleroResourcesBackupName: latest",
"apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Restore metadata: name: restore-acm namespace: open-cluster-management-backup spec: veleroManagedClustersBackupSchedule: latest veleroCredentialsBackupSchedule: latest veleroResourcesBackupSchedule: latest",
"apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Restore metadata: name: restore-acm namespace: open-cluster-management-backup spec: veleroManagedClustersBackupName: latest veleroCredentialsBackupName: skip veleroResourcesBackupName: skip",
"apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Restore metadata: name: restore-acm namespace: open-cluster-management-backup spec: veleroManagedClustersBackupName: acm-managed-clusters-schedule-20210902205438 veleroCredentialsBackupName: skip veleroResourcesBackupName: skip",
"apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Restore metadata: name: restore-filter-sample namespace: open-cluster-management-backup spec: cleanupBeforeRestore: None veleroCredentialsBackupName: latest veleroResourcesBackupName: latest veleroManagedClustersBackupName: latest excludedResources: - ManagedCluster",
"spec: excludedResources excludedNamespaces",
"describe -n open-cluster-management-backup <restore-name>",
"Spec: Cleanup Before Restore: CleanupRestored Restore Sync Interval: 4m Sync Restore With New Backups: true Velero Credentials Backup Name: latest Velero Managed Clusters Backup Name: skip Velero Resources Backup Name: latest Status: Last Message: Velero restores have run to completion, restore will continue to sync with new backups Phase: Enabled Velero Credentials Restore Name: example-acm-credentials-schedule-20220406171919 Velero Resources Restore Name: example-acm-resources-schedule-20220406171920 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Prepare to restore: 76m Restore controller Cleaning up resources for backup acm-credentials-hive-schedule-20220406155817 Normal Prepare to restore: 76m Restore controller Cleaning up resources for backup acm-credentials-cluster-schedule-20220406155817 Normal Prepare to restore: 76m Restore controller Cleaning up resources for backup acm-credentials-schedule-20220406155817 Normal Prepare to restore: 76m Restore controller Cleaning up resources for backup acm-resources-generic-schedule-20220406155817 Normal Prepare to restore: 76m Restore controller Cleaning up resources for backup acm-resources-schedule-20220406155817 Normal Velero restore created: 74m Restore controller example-acm-credentials-schedule-20220406155817 Normal Velero restore created: 74m Restore controller example-acm-resources-generic-schedule-20220406155817 Normal Velero restore created: 74m Restore controller example-acm-resources-schedule-20220406155817 Normal Velero restore created: 74m Restore controller example-acm-credentials-cluster-schedule-20220406155817 Normal Velero restore created: 74m Restore controller example-acm-credentials-hive-schedule-20220406155817 Normal Prepare to restore: 64m Restore controller Cleaning up resources for backup acm-resources-schedule-20220406165328 Normal Prepare to restore: 62m Restore controller Cleaning up resources for backup acm-credentials-hive-schedule-20220406165328 Normal Prepare to restore: 62m Restore controller Cleaning up resources for backup acm-credentials-cluster-schedule-20220406165328 Normal Prepare to restore: 62m Restore controller Cleaning up resources for backup acm-credentials-schedule-20220406165328 Normal Prepare to restore: 62m Restore controller Cleaning up resources for backup acm-resources-generic-schedule-20220406165328 Normal Velero restore created: 61m Restore controller example-acm-credentials-cluster-schedule-20220406165328 Normal Velero restore created: 61m Restore controller example-acm-credentials-schedule-20220406165328 Normal Velero restore created: 61m Restore controller example-acm-resources-generic-schedule-20220406165328 Normal Velero restore created: 61m Restore controller example-acm-resources-schedule-20220406165328 Normal Velero restore created: 61m Restore controller example-acm-credentials-hive-schedule-20220406165328 Normal Prepare to restore: 38m Restore controller Cleaning up resources for backup acm-resources-generic-schedule-20220406171920 Normal Prepare to restore: 38m Restore controller Cleaning up resources for backup acm-resources-schedule-20220406171920 Normal Prepare to restore: 36m Restore controller Cleaning up resources for backup acm-credentials-hive-schedule-20220406171919 Normal Prepare to restore: 36m Restore controller Cleaning up resources for backup acm-credentials-cluster-schedule-20220406171919 Normal Prepare to restore: 36m Restore controller Cleaning up resources for backup acm-credentials-schedule-20220406171919 Normal Velero restore 
created: 36m Restore controller example-acm-credentials-cluster-schedule-20220406171919 Normal Velero restore created: 36m Restore controller example-acm-credentials-schedule-20220406171919 Normal Velero restore created: 36m Restore controller example-acm-resources-generic-schedule-20220406171920 Normal Velero restore created: 36m Restore controller example-acm-resources-schedule-20220406171920 Normal Velero restore created: 36m Restore controller example-acm-credentials-hive-schedule-20220406171919",
"apiVersion: multicluster.openshift.io/v1 kind: MultiClusterEngine metadata: name: multiclusterhub spec: overrides: components: - enabled: true name: managedserviceaccount",
"apiVersion: cluster.open-cluster-management.io/v1beta1 kind: BackupSchedule metadata: name: schedule-acm-msa namespace: open-cluster-management-backup spec: veleroSchedule: veleroTtl: 120h useManagedServiceAccount: true",
"apiVersion: cluster.open-cluster-management.io/v1beta1 kind: BackupSchedule metadata: name: schedule-acm-msa namespace: open-cluster-management-backup spec: veleroSchedule: veleroTtl: 120h useManagedServiceAccount: true managedServiceAccountTTL: 300h",
"apiVersion: cluster.open-cluster-management.io/v1 kind: ManagedCluster metadata: name: managed-cluster-name spec: hubAcceptsClient: true leaseDurationSeconds: 60 managedClusterClientConfigs: url: <apiserver>",
"apiVersion: cluster.open-cluster-management.io/v1beta1 kind: BackupSchedule metadata: name: schedule-acm-msa namespace: open-cluster-management-backup spec: veleroSchedule: veleroTtl: 120h useManagedServiceAccount: false",
"spec: backupLocations: - velero: config: kmsKeyId: 502b409c-4da1-419f-a16e-eif453b3i49f profile: default region: us-east-1",
"resources: limits: cpu: \"1\" memory: 256Mi requests: cpu: 500m memory: 128Mi",
"limits: cpu: \"2\" memory: 1Gi requests: cpu: 500m memory: 256Mi",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: velero namespace: open-cluster-management-backup spec: configuration: velero: podConfig: resourceAllocations: limits: cpu: \"2\" memory: 1Gi requests: cpu: 500m memory: 256Mi",
"addons.open-cluster-management.io/volsync=true",
"get csv -n openshift-operators",
"label managedcluster <managed-cluster-1> \"addons.open-cluster-management.io/volsync\"=\"true\"",
"label managedcluster <managed-cluster-2> \"addons.open-cluster-management.io/volsync\"=\"true\"",
"apiVersion: addon.open-cluster-management.io/v1alpha1 kind: ManagedClusterAddOn metadata: name: volsync namespace: <managed-cluster-1-namespace> spec: {}",
"apply -f volsync-mcao.yaml",
"annotations: operator-subscription-channel: stable-0.9",
"create ns <destination-ns>",
"apiVersion: volsync.backube/v1alpha1 kind: ReplicationDestination metadata: name: <destination> namespace: <destination-ns> spec: rsyncTLS: serviceType: LoadBalancer 1 copyMethod: Snapshot capacity: 2Gi 2 accessModes: [ReadWriteOnce] storageClassName: gp2-csi volumeSnapshotClassName: csi-aws-vsc",
"create -n <destination-ns> -f replication_destination.yaml",
"ADDRESS=`oc get replicationdestination <destination> -n <destination-ns> --template={{.status.rsyncTLS.address}}` echo USDADDRESS",
"a831264645yhrjrjyer6f9e4a02eb2-5592c0b3d94dd376.elb.us-east-1.amazonaws.com",
"KEYSECRET=`oc get replicationdestination <destination> -n <destination-ns> --template={{.status.rsyncTLS.keySecret}}` echo USDKEYSECRET",
"volsync-rsync-tls-destination-name",
"get secret -n <destination-ns> USDKEYSECRET -o yaml > /tmp/secret.yaml",
"vi /tmp/secret.yaml",
"create -f /tmp/secret.yaml",
"apiVersion: volsync.backube/v1alpha1 kind: ReplicationSource metadata: name: <source> 1 namespace: <source-ns> 2 spec: sourcePVC: <persistent_volume_claim> 3 trigger: schedule: \"*/3 * * * *\" #/* rsyncTLS: keySecret: <mykeysecret> 4 address: <my.host.com> 5 copyMethod: Snapshot storageClassName: gp2-csi volumeSnapshotClassName: csi-aws-vsc",
"sed -i \"s/<my.host.com>/USDADDRESS/g\" replication_source.yaml sed -i \"s/<mykeysecret>/USDKEYSECRET/g\" replication_source.yaml create -n <source> -f replication_source.yaml",
"describe ReplicationSource -n <source-ns> <source>",
"Status: Conditions: Last Transition Time: 2021-10-14T20:48:00Z Message: Synchronization in-progress Reason: SyncInProgress Status: True Type: Synchronizing Last Transition Time: 2021-10-14T20:41:41Z Message: Reconcile complete Reason: ReconcileComplete Status: True Type: Reconciled Last Sync Duration: 5m20.764642395s Last Sync Time: 2021-10-14T20:47:01Z Next Sync Time: 2021-10-14T20:48:00Z",
"create ns <destination-ns>",
"apiVersion: volsync.backube/v1alpha1 kind: ReplicationDestination metadata: name: <destination> namespace: <destination-ns> spec: rsync: serviceType: LoadBalancer copyMethod: Snapshot capacity: 2Gi accessModes: [ReadWriteOnce] storageClassName: gp2-csi volumeSnapshotClassName: csi-aws-vsc",
"create -n <destination-ns> -f replication_destination.yaml",
"ADDRESS=`oc get replicationdestination <destination> -n <destination-ns> --template={{.status.rsync.address}}` echo USDADDRESS",
"a831264645yhrjrjyer6f9e4a02eb2-5592c0b3d94dd376.elb.us-east-1.amazonaws.com",
"SSHKEYS=`oc get replicationdestination <destination> -n <destination-ns> --template={{.status.rsync.sshKeys}}` echo USDSSHKEYS",
"volsync-rsync-dst-src-destination-name",
"get secret -n <destination-ns> USDSSHKEYS -o yaml > /tmp/secret.yaml",
"vi /tmp/secret.yaml",
"create -f /tmp/secret.yaml",
"apiVersion: volsync.backube/v1alpha1 kind: ReplicationSource metadata: name: <source> namespace: <source-ns> spec: sourcePVC: <persistent_volume_claim> trigger: schedule: \"*/3 * * * *\" #/* rsync: sshKeys: <mysshkeys> address: <my.host.com> copyMethod: Snapshot storageClassName: gp2-csi volumeSnapshotClassName: csi-aws-vsc",
"sed -i \"s/<my.host.com>/USDADDRESS/g\" replication_source.yaml sed -i \"s/<mysshkeys>/USDSSHKEYS/g\" replication_source.yaml create -n <source> -f replication_source.yaml",
"describe ReplicationSource -n <source-ns> <source>",
"Status: Conditions: Last Transition Time: 2021-10-14T20:48:00Z Message: Synchronization in-progress Reason: SyncInProgress Status: True Type: Synchronizing Last Transition Time: 2021-10-14T20:41:41Z Message: Reconcile complete Reason: ReconcileComplete Status: True Type: Reconciled Last Sync Duration: 5m20.764642395s Last Sync Time: 2021-10-14T20:47:01Z Next Sync Time: 2021-10-14T20:48:00Z",
"apiVersion: v1 kind: Secret metadata: name: restic-config type: Opaque stringData: RESTIC_REPOSITORY: <my-restic-repository> RESTIC_PASSWORD: <my-restic-password> AWS_ACCESS_KEY_ID: access AWS_SECRET_ACCESS_KEY: password",
"apiVersion: volsync.backube/v1alpha1 kind: ReplicationSource metadata: name: mydata-backup spec: sourcePVC: <source> trigger: schedule: \"*/30 * * * *\" #\\* restic: pruneIntervalDays: 14 repository: <restic-config> retain: hourly: 6 daily: 5 weekly: 4 monthly: 2 yearly: 1 copyMethod: Clone # The StorageClass to use when creating the PiT copy (same as source PVC if omitted) #storageClassName: my-sc-name # The VSC to use if the copy method is Snapshot (default if omitted) #volumeSnapshotClassName: my-vsc-name",
"annotate namespace <namespace> volsync.backube/privileged-movers=true",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: <pvc-name> spec: accessModes: - ReadWriteOnce resources: requests: storage: 3Gi",
"apiVersion: volsync.backube/v1alpha1 kind: ReplicationDestination metadata: name: <destination> spec: trigger: manual: restore-once restic: repository: <restic-repo> destinationPVC: <pvc-name> copyMethod: Direct",
"apiVersion: volsync.backube/v1alpha1 kind: ReplicationSource metadata: name: <source> namespace: <source-ns> spec: sourcePVC: <source-pvc> trigger: schedule: \"*/6 * * * *\" #\\* rclone: rcloneConfigSection: <intermediate-s3-bucket> rcloneDestPath: <destination-bucket> rcloneConfig: <rclone-secret> copyMethod: Snapshot storageClassName: <my-sc-name> volumeSnapshotClassName: <my-vsc>",
"apiVersion: volsync.backube/v1alpha1 kind: ReplicationDestination metadata: name: database-destination namespace: dest spec: trigger: schedule: \"3,9,15,21,27,33,39,45,51,57 * * * *\" #/* rclone: rcloneConfigSection: <intermediate-s3-bucket> rcloneDestPath: <destination-bucket> rcloneConfig: <rclone-secret> copyMethod: Snapshot accessModes: [ReadWriteOnce] capacity: 10Gi storageClassName: <my-sc> volumeSnapshotClassName: <my-vsc>",
"annotate namespace <namespace> volsync.backube/privileged-movers=true",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: <pvc-name> namespace: <destination-ns> spec: accessModes: - ReadWriteOnce dataSourceRef: kind: ReplicationDestination apiGroup: volsync.backube name: <replicationdestination_to_replace> resources: requests: storage: 2Gi",
"kubectl get pvc -n <destination-ns>",
"spec: trigger: schedule: \"*/6 * * * *\"",
"apiVersion: v1 data: # echo -n 1:23b7395fafc3e842bd8ac0fe142e6ad1 | base64 psk.txt: MToyM2I3Mzk1ZmFmYzNlODQyYmQ4YWMwZmUxNDJlNmFkMQ== kind: Secret metadata: name: tls-key-secret type: Opaque"
] |
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.11/html/business_continuity/business-cont-overview
|
Providing feedback on Red Hat build of OpenJDK documentation
|
Providing feedback on Red Hat build of OpenJDK documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team.
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/eclipse_temurin_8.0.392_release_notes/providing-direct-documentation-feedback_openjdk
|
Appendix A. Providing feedback on Red Hat documentation
|
Appendix A. Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Use the Create Issue form in Red Hat Jira to provide your feedback. The Jira issue is created in the Red Hat Satellite Jira project, where you can track its progress. Prerequisites Ensure you have registered a Red Hat account . Procedure Click the following link: Create Issue . If Jira displays a login error, log in and proceed after you are redirected to the form. Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create .
| null |
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/release_notes/providing-feedback-on-red-hat-documentation_satellite
|
Chapter 4. Configuring secrets for Alertmanager
|
Chapter 4. Configuring secrets for Alertmanager The OpenShift Container Platform monitoring stack includes Alertmanager, which routes alerts from Prometheus to endpoint receivers. If you need to authenticate with a receiver so that Alertmanager can send alerts to it, you can configure Alertmanager to use a secret that contains authentication credentials for the receiver. For example, you can configure Alertmanager to use a secret to authenticate with an endpoint receiver that requires a certificate issued by a private Certificate Authority (CA). You can also configure Alertmanager to use a secret to authenticate with a receiver that requires a password file for Basic HTTP authentication. In either case, authentication details are contained in the Secret object rather than in the ConfigMap object. 4.1. Adding a secret to the Alertmanager configuration You can add secrets to the Alertmanager configuration for core platform monitoring components by editing the cluster-monitoring-config config map in the openshift-monitoring project. After you add a secret to the config map, the secret is mounted as a volume at /etc/alertmanager/secrets/<secret_name> within the alertmanager container for the Alertmanager pods. Prerequisites If you are configuring core OpenShift Container Platform monitoring components in the openshift-monitoring project : You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config config map. You have created the secret to be configured in Alertmanager in the openshift-monitoring project. If you are configuring components that monitor user-defined projects : You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. You have created the secret to be configured in Alertmanager in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). Procedure Edit the ConfigMap object. To add a secret configuration to Alertmanager for core platform monitoring : Edit the cluster-monitoring-config config map in the openshift-monitoring project: $ oc -n openshift-monitoring edit configmap cluster-monitoring-config Add a secrets: section under data/config.yaml/alertmanagerMain with the following configuration: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | alertmanagerMain: secrets: 1 - <secret_name_1> 2 - <secret_name_2> 1 This section contains the secrets to be mounted into Alertmanager. The secrets must be located within the same namespace as the Alertmanager object. 2 The name of the Secret object that contains authentication credentials for the receiver. If you add multiple secrets, place each one on a new line.
The following sample config map settings configure Alertmanager to use two Secret objects named test-secret-basic-auth and test-secret-api-token : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | alertmanagerMain: secrets: - test-secret-basic-auth - test-secret-api-token To add a secret configuration to Alertmanager for user-defined project monitoring : Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project: $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add a secrets: section under data/config.yaml/alertmanager with the following configuration: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | alertmanager: secrets: 1 - <secret_name_1> 2 - <secret_name_2> 1 This section contains the secrets to be mounted into Alertmanager. The secrets must be located within the same namespace as the Alertmanager object. 2 The name of the Secret object that contains authentication credentials for the receiver. If you add multiple secrets, place each one on a new line. The following sample config map settings configure Alertmanager to use two Secret objects named test-secret and test-api-receiver-token : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | alertmanager: enabled: true secrets: - test-secret - test-api-receiver-token Save the file to apply the changes. The new configuration is applied automatically. 4.2. Attaching additional labels to your time series and alerts You can attach custom labels to all time series and alerts leaving Prometheus by using the external labels feature of Prometheus. Prerequisites If you are configuring core OpenShift Container Platform monitoring components : You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. If you are configuring components that monitor user-defined projects : You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). Procedure Edit the ConfigMap object: To attach custom labels to all time series and alerts leaving the Prometheus instance that monitors core OpenShift Container Platform projects : Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project: $ oc -n openshift-monitoring edit configmap cluster-monitoring-config Define a map of labels you want to add for every metric under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: externalLabels: <key>: <value> 1 1 Substitute <key>: <value> with a map of key-value pairs where <key> is a unique name for the new label and <value> is its value. Warning Do not use prometheus or prometheus_replica as key names, because they are reserved and will be overwritten. Do not use cluster or managed_cluster as key names. Using them can cause issues where you are unable to see data in the developer dashboards.
For example, to add metadata about the region and environment to all time series and alerts, use the following example: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: externalLabels: region: eu environment: prod Save the file to apply the changes. The new configuration is applied automatically. To attach custom labels to all time series and alerts leaving the Prometheus instance that monitors user-defined projects : Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project: $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Define a map of labels you want to add for every metric under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: externalLabels: <key>: <value> 1 1 Substitute <key>: <value> with a map of key-value pairs where <key> is a unique name for the new label and <value> is its value. Warning Do not use prometheus or prometheus_replica as key names, because they are reserved and will be overwritten. Do not use cluster or managed_cluster as key names. Using them can cause issues where you are unable to see data in the developer dashboards. Note In the openshift-user-workload-monitoring project, Prometheus handles metrics and Thanos Ruler handles alerting and recording rules. Setting externalLabels for prometheus in the user-workload-monitoring-config ConfigMap object will only configure external labels for metrics and not for any rules. For example, to add metadata about the region and environment to all time series and alerts related to user-defined projects, use the following example: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: externalLabels: region: eu environment: prod Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. Additional resources See Preparing to configure the monitoring stack for steps to create monitoring config maps. Enabling monitoring for user-defined projects
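The procedures in this chapter assume that the Secret objects named in the secrets: lists already exist in the same namespace as Alertmanager. As an illustrative sketch only, with a placeholder file name that is not defined by this documentation, a basic authentication secret for core platform monitoring could be created as follows:
$ oc -n openshift-monitoring create secret generic test-secret-basic-auth --from-file=password=./password.txt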
|
[
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | alertmanagerMain: secrets: 1 - <secret_name_1> 2 - <secret_name_2>",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | alertmanagerMain: secrets: - test-secret-basic-auth - test-secret-api-token",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | alertmanager: secrets: 1 - <secret_name_1> 2 - <secret_name_2>",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | alertmanager: enabled: true secrets: - test-secret - test-api-receiver-token",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: externalLabels: <key>: <value> 1",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: externalLabels: region: eu environment: prod",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: externalLabels: <key>: <value> 1",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: externalLabels: region: eu environment: prod"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/monitoring/monitoring-configuring-secrets-for-alertmanager_configuring-the-monitoring-stack
|
Chapter 1. Support policy for Eclipse Temurin
|
Chapter 1. Support policy for Eclipse Temurin Red Hat will support select major versions of Eclipse Temurin in its products. For consistency, these are the same versions that Oracle designates as long-term support (LTS) for the Oracle JDK. A major version of Eclipse Temurin will be supported for a minimum of six years from the time that version is first introduced. For more information, see the Eclipse Temurin Life Cycle and Support Policy . Note RHEL 6 reached the end of life in November 2020. Because of this, Eclipse Temurin does not include RHEL 6 as a supported configuration.
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/eclipse_temurin_8.0.402_release_notes/openjdk8-temurin-support-policy
|
Appendix A. Example Configuration: Load Balancing Ceph Object Gateway Servers with HAProxy and Keepalived
|
Appendix A. Example Configuration: Load Balancing Ceph Object Gateway Servers with HAProxy and Keepalived This appendix provides an example showing the configuration of HAProxy and Keepalived with a Ceph cluster. The Ceph Object Gateway allows you to assign many instances of the object gateway to a single zone so that you can scale out as load increases. Since each object gateway instance has its own IP address, you can use HAProxy and keepalived to balance the load across Ceph Object Gateway servers. In this configuration, HAProxy performs the load balancing across Ceph Object Gateway servers while Keepalived is used to manage the Virtual IP addresses of the Ceph Object Gateway servers and to monitor HAProxy. Another use case for HAProxy and keepalived is to terminate HTTPS at the HAProxy server. Red Hat Ceph Storage (RHCS) 1.3.x uses Civetweb, and the implementation in RHCS 1.3.x does not support HTTPS. You can use an HAProxy server to terminate HTTPS at the HAProxy server and use HTTP between the HAProxy server and the Civetweb gateway instances. This example includes this configuration as part of the procedure. A.1. Prerequisites To set up HAProxy with the Ceph Object Gateway, you must have: A running Ceph cluster; At least two Ceph Object Gateway servers within the same zone configured to run on port 80; At least two servers for HAProxy and keepalived. Note This procedure assumes that you have at least two Ceph Object Gateway servers running, and that you get a valid response when running test scripts over port 80.
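One way to confirm that each gateway responds on port 80 before configuring HAProxy is to issue an HTTP request against each Ceph Object Gateway node. The host names below are placeholders for your own gateway servers; an anonymous request to the gateway root typically returns an HTTP 200 status code:
curl -s -o /dev/null -w "%{http_code}\n" http://rgw-node1.example.com:80/
curl -s -o /dev/null -w "%{http_code}\n" http://rgw-node2.example.com:80/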
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/load_balancer_administration/ceph_example
|
Chapter 5. Authenticating Camel K against Kafka
|
Chapter 5. Authenticating Camel K against Kafka You can authenticate Camel K against Apache Kafka. The following example demonstrates how to set up a Kafka Topic and use it in a simple Producer/Consumer pattern Integration. 5.1. Setting up Kafka To set up Kafka, you must: Install the required OpenShift operators Create a Kafka instance Create a Kafka topic Use the Red Hat product mentioned below to set up Kafka: Red Hat Advanced Message Queuing (AMQ) Streams - A self-managed Apache Kafka offering. AMQ Streams is based on open source Strimzi and is included as part of Red Hat Integration . AMQ Streams is a distributed and scalable streaming platform based on Apache Kafka that includes a publish/subscribe messaging broker. Kafka Connect provides a framework to integrate Kafka-based systems with external systems. Using Kafka Connect, you can configure source and sink connectors to stream data from external systems into and out of a Kafka broker. 5.1.1. Setting up Kafka by using AMQ Streams AMQ Streams simplifies the process of running Apache Kafka in an OpenShift cluster. 5.1.1.1. Preparing your OpenShift cluster for AMQ Streams To use Camel K or Kamelets and Red Hat AMQ Streams, you must install the following operators and tools: Red Hat Integration - AMQ Streams operator - Manages the communication between your OpenShift cluster and AMQ Streams for Apache Kafka instances. Red Hat Integration - Camel K operator - Installs and manages Camel K - a lightweight integration framework that runs natively in the cloud on OpenShift. Camel K CLI tool - Allows you to access all Camel K features. Prerequisites You are familiar with Apache Kafka concepts. You can access an OpenShift 4.6 (or later) cluster with the correct access level, the ability to create projects and install operators, and the ability to install the OpenShift CLI and the Camel K CLI on your local system. You installed the OpenShift CLI tool ( oc ) so that you can interact with the OpenShift cluster at the command line. Procedure To set up Kafka by using AMQ Streams: Log in to your OpenShift cluster's web console. Create or open a project in which you plan to create your integration, for example my-camel-k-kafka . Install the Camel K operator and Camel K CLI as described in Installing Camel K . Install the AMQ Streams operator: From any project, select Operators > OperatorHub . In the Filter by Keyword field, type AMQ Streams . Click the Red Hat Integration - AMQ Streams card and then click Install . The Install Operator page opens. Accept the defaults and then click Install . Select Operators > Installed Operators to verify that the Camel K and AMQ Streams operators are installed. Next steps Setting up a Kafka topic with AMQ Streams 5.1.1.2. Setting up a Kafka topic with AMQ Streams A Kafka topic provides a destination for the storage of data in a Kafka instance. You must set up a Kafka topic before you can send data to it. Prerequisites You can access an OpenShift cluster. You installed the Red Hat Integration - Camel K and Red Hat Integration - AMQ Streams operators as described in Preparing your OpenShift cluster . You installed the OpenShift CLI ( oc ) and the Camel K CLI ( kamel ). Procedure To set up a Kafka topic by using AMQ Streams: Log in to your OpenShift cluster's web console. Select Projects and then click the project in which you installed the Red Hat Integration - AMQ Streams operator. For example, click the my-camel-k-kafka project. Select Operators > Installed Operators and then click Red Hat Integration - AMQ Streams .
Create a Kafka cluster: Under Kafka , click Create instance . Type a name for the cluster, for example kafka-test . Accept the other defaults and then click Create . The process to create the Kafka instance might take a few minutes to complete. When the status is ready, continue to the next step. Create a Kafka topic: Select Operators > Installed Operators and then click Red Hat Integration - AMQ Streams . Under Kafka Topic , click Create Kafka Topic . Type a name for the topic, for example test-topic . Accept the other defaults and then click Create . 5.1.2. Creating a secret by using the SASL/Plain authentication method You can create a secret with the credentials that you obtained (Kafka bootstrap URL, service account ID, and service account secret). Procedure Edit the application.properties file and add the Kafka credentials. application.properties file Run the following command to create a secret that contains the sensitive properties in the application.properties file: You use this secret when you run a Camel K integration. See Also The Camel K Kafka Basic Quickstart 5.1.3. Creating a secret by using the SASL/OAUTHBearer authentication method You can create a secret with the credentials that you obtained (Kafka bootstrap URL, service account ID, and service account secret). Procedure Edit the application-oauth.properties file and add the Kafka credentials. application-oauth.properties file Run the following command to create a secret that contains the sensitive properties in the application-oauth.properties file: You use this secret when you run a Camel K integration. See Also The Camel K Kafka Basic Quickstart 5.2. Running a Kafka integration Running a producer integration Create a sample producer integration. This fills the topic with a message every 10 seconds. Sample SaslSSLKafkaProducer.java Then run the producer integration. The producer creates a new message, pushes it into the topic, and logs some information. Running a consumer integration Create a consumer integration. Sample SaslSSLKafkaConsumer.java Open another shell and run the consumer integration using the command: A consumer will start logging the events found in the topic:
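As an optional check while the producer and consumer integrations are running, you can list them with the Camel K CLI. This is a supplementary step rather than part of the documented procedure, and it assumes that the kamel CLI is connected to the project that contains the integrations:
kamel get
Each integration is expected to reach the Running phase before the log messages shown in the command listing appear.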
|
[
"camel.component.kafka.brokers = <YOUR-KAFKA-BOOTSTRAP-URL-HERE> camel.component.kafka.security-protocol = SASL_SSL camel.component.kafka.sasl-mechanism = PLAIN camel.component.kafka.sasl-jaas-config=org.apache.kafka.common.security.plain.PlainLoginModule required username='<YOUR-SERVICE-ACCOUNT-ID-HERE>' password='<YOUR-SERVICE-ACCOUNT-SECRET-HERE>'; consumer.topic=<TOPIC-NAME> producer.topic=<TOPIC-NAME>",
"create secret generic kafka-props --from-file application.properties",
"camel.component.kafka.brokers = <YOUR-KAFKA-BOOTSTRAP-URL-HERE> camel.component.kafka.security-protocol = SASL_SSL camel.component.kafka.sasl-mechanism = OAUTHBEARER camel.component.kafka.sasl-jaas-config = org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.client.id='<YOUR-SERVICE-ACCOUNT-ID-HERE>' oauth.client.secret='<YOUR-SERVICE-ACCOUNT-SECRET-HERE>' oauth.token.endpoint.uri=\"https://identity.api.openshift.com/auth/realms/rhoas/protocol/openid-connect/token\" ; camel.component.kafka.additional-properties[sasl.login.callback.handler.class]=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler consumer.topic=<TOPIC-NAME> producer.topic=<TOPIC-NAME>",
"create secret generic kafka-props --from-file application-oauth.properties",
"// kamel run --secret kafka-props SaslSSLKafkaProducer.java --dev // camel-k: language=java dependency=mvn:org.apache.camel.quarkus:camel-quarkus-kafka dependency=mvn:io.strimzi:kafka-oauth-client:0.7.1.redhat-00003 import org.apache.camel.builder.RouteBuilder; import org.apache.camel.component.kafka.KafkaConstants; public class SaslSSLKafkaProducer extends RouteBuilder { @Override public void configure() throws Exception { log.info(\"About to start route: Timer -> Kafka \"); from(\"timer:foo\") .routeId(\"FromTimer2Kafka\") .setBody() .simple(\"Message #USD{exchangeProperty.CamelTimerCounter}\") .to(\"kafka:{{producer.topic}}\") .log(\"Message correctly sent to the topic!\"); } }",
"kamel run --secret kafka-props SaslSSLKafkaProducer.java --dev",
"[2] 2021-05-06 08:48:11,854 INFO [FromTimer2Kafka] (Camel (camel-1) thread #1 - KafkaProducer[test]) Message correctly sent to the topic! [2] 2021-05-06 08:48:11,854 INFO [FromTimer2Kafka] (Camel (camel-1) thread #3 - KafkaProducer[test]) Message correctly sent to the topic! [2] 2021-05-06 08:48:11,973 INFO [FromTimer2Kafka] (Camel (camel-1) thread #5 - KafkaProducer[test]) Message correctly sent to the topic! [2] 2021-05-06 08:48:12,970 INFO [FromTimer2Kafka] (Camel (camel-1) thread #7 - KafkaProducer[test]) Message correctly sent to the topic! [2] 2021-05-06 08:48:13,970 INFO [FromTimer2Kafka] (Camel (camel-1) thread #9 - KafkaProducer[test]) Message correctly sent to the topic!",
"// kamel run --secret kafka-props SaslSSLKafkaConsumer.java --dev // camel-k: language=java dependency=mvn:org.apache.camel.quarkus:camel-quarkus-kafka dependency=mvn:io.strimzi:kafka-oauth-client:0.7.1.redhat-00003 import org.apache.camel.builder.RouteBuilder; public class SaslSSLKafkaConsumer extends RouteBuilder { @Override public void configure() throws Exception { log.info(\"About to start route: Kafka -> Log \"); from(\"kafka:{{consumer.topic}}\") .routeId(\"FromKafka2Log\") .log(\"USD{body}\"); } }",
"kamel run --secret kafka-props SaslSSLKafkaConsumer.java --dev",
"[1] 2021-05-06 08:51:08,991 INFO [FromKafka2Log] (Camel (camel-1) thread #0 - KafkaConsumer[test]) Message #8 [1] 2021-05-06 08:51:10,065 INFO [FromKafka2Log] (Camel (camel-1) thread #0 - KafkaConsumer[test]) Message #9 [1] 2021-05-06 08:51:10,991 INFO [FromKafka2Log] (Camel (camel-1) thread #0 - KafkaConsumer[test]) Message #10 [1] 2021-05-06 08:51:11,991 INFO [FromKafka2Log] (Camel (camel-1) thread #0 - KafkaConsumer[test]) Message #11"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.9/html/developing_and_managing_integrations_using_camel_k/authenticate-camel-k-against-kafka
|
Chapter 6. Logging performance data with pmlogger
|
Chapter 6. Logging performance data with pmlogger With the PCP tool, you can log performance metric values and replay them later. This allows you to perform a retrospective performance analysis. Using the pmlogger tool, you can: Create archived logs of selected metrics on the system Specify which metrics are recorded on the system and how often 6.1. Modifying the pmlogger configuration file with pmlogconf When the pmlogger service is running, PCP logs a default set of metrics on the host. Use the pmlogconf utility to check the default configuration. If the pmlogger configuration file does not exist, pmlogconf creates it with default metric values. Prerequisites PCP is installed. For more information, see Installing and enabling PCP . Procedure Create or modify the pmlogger configuration file: Follow the pmlogconf prompts to enable or disable groups of related performance metrics and to control the logging interval for each enabled group. Additional resources pmlogconf(1) and pmlogger(1) man pages on your system System services and tools distributed with PCP 6.2. Editing the pmlogger configuration file manually To create a tailored logging configuration with specific metrics and given intervals, edit the pmlogger configuration file manually. The default pmlogger configuration file is /var/lib/pcp/config/pmlogger/config.default . The configuration file specifies which metrics are logged by the primary logging instance. In manual configuration, you can: Record metrics that are not listed in the automatic configuration. Choose custom logging frequencies. Add a PMDA with the application metrics. Prerequisites PCP is installed. For more information, see Installing and enabling PCP . Procedure Open and edit the /var/lib/pcp/config/pmlogger/config.default file to add specific metrics: Additional resources pmlogger(1) man page on your system System services and tools distributed with PCP 6.3. Enabling the pmlogger service The pmlogger service must be started and enabled to log the metric values on the local machine. This procedure describes how to enable the pmlogger service. Prerequisites PCP is installed. For more information, see Installing and enabling PCP . Procedure Start and enable the pmlogger service: Verification Verify that the pmlogger service is enabled: Additional resources pmlogger(1) man page on your system System services and tools distributed with PCP /var/lib/pcp/config/pmlogger/config.default file 6.4. Setting up a client system for metrics collection This procedure describes how to set up a client system so that a central server can collect metrics from clients running PCP. Prerequisites PCP is installed. For more information, see Installing and enabling PCP . Procedure Install the pcp-system-tools package: Configure an IP address for pmcd : Replace 192.168.4.62 with the IP address that the client should listen on. By default, pmcd listens on localhost. Configure the firewall to permanently open the pmcd port in the public zone: Set an SELinux boolean: Enable the pmcd and pmlogger services: Verification Verify that pmcd is correctly listening on the configured IP address: Additional resources pmlogger(1) , firewall-cmd(1) , ss(8) , and setsebool(8) man pages on your system System services and tools distributed with PCP /var/lib/pcp/config/pmlogger/config.default file 6.5. Setting up a central server to collect data This procedure describes how to create a central server to collect metrics from clients running PCP. Prerequisites PCP is installed.
For more information, see Installing and enabling PCP . The client is configured for metrics collection. For more information, see Setting up a client system for metrics collection . Procedure Install the pcp-system-tools package: Create the /etc/pcp/pmlogger/control.d/remote file with the following content: Replace 192.168.4.13 , 192.168.4.14 , 192.168.4.62 , and 192.168.4.69 with the client IP addresses. Enable the pmcd and pmlogger services: Verification Ensure that you can access the latest archive file from each directory: The archive files from the /var/log/pcp/pmlogger/ directory can be used for further analysis and graphing. Additional resources pmlogger(1) man page on your system System services and tools distributed with PCP /var/lib/pcp/config/pmlogger/config.default file 6.6. Systemd units and pmlogger When you deploy the pmlogger service, either as a single host monitoring itself or a pmlogger farm with a single host collecting metrics from several remote hosts, there are several associated systemd service and timer units that are automatically deployed. These services and timers provide routine checks to ensure that your pmlogger instances are running, restart any missing instances, and perform archive management such as file compression. The checking and housekeeping services typically deployed by pmlogger are: pmlogger_daily.service Runs daily, soon after midnight by default, to aggregate, compress, and rotate one or more sets of PCP archives. Also culls archives older than the limit, 2 weeks by default. Triggered by the pmlogger_daily.timer unit, which is required by the pmlogger.service unit. pmlogger_check Performs half-hourly checks that pmlogger instances are running. Restarts any missing instances and performs any required compression tasks. Triggered by the pmlogger_check.timer unit, which is required by the pmlogger.service unit. pmlogger_farm_check Checks the status of all configured pmlogger instances. Restarts any missing instances. Migrates all non-primary instances to the pmlogger_farm service. Triggered by the pmlogger_farm_check.timer , which is required by the pmlogger_farm.service unit that is itself required by the pmlogger.service unit. These services are managed through a series of positive dependencies, meaning that they are all enabled upon activating the primary pmlogger instance. Note that although pmlogger_daily.service is disabled by default, the pmlogger_daily.timer unit, which is active through its dependency on pmlogger.service, still triggers pmlogger_daily.service to run. pmlogger_daily is also integrated with pmlogrewrite for automatically rewriting archives before merging. This helps to ensure metadata consistency amid changing production environments and PMDAs. For example, if pmcd on one monitored host is updated during the logging interval, the semantics for some metrics on the host might be updated, thus making the new archives incompatible with the previously recorded archives from that host. For more information, see the pmlogrewrite(1) man page. Managing systemd services triggered by pmlogger You can create an automated custom archive management system for data collected by your pmlogger instances. This is done using control files. These control files are: For the primary pmlogger instance: /etc/pcp/pmlogger/control /etc/pcp/pmlogger/control.d/local For the remote hosts: /etc/pcp/pmlogger/control.d/remote Replace remote with your desired file name. NOTE The primary pmlogger instance must be running on the same host as the pmcd it connects to.
You do not need to have a primary instance, and you might not need it in your configuration if one central host is collecting data on several pmlogger instances connected to pmcd instances running on remote hosts. The file should contain one line for each host to be logged. The default format of the primary logger instance that is automatically created looks similar to: The fields are: Host The name of the host to be logged P? Stands for "Primary?" This field indicates if the host is the primary logger instance, y , or not, n . There can only be one primary logger across all the files in your configuration and it must be running on the same host as the pmcd it connects to. S? Stands for "Socks?" This field indicates if this logger instance needs to use the SOCKS protocol to connect to pmcd through a firewall, y , or not, n . directory All archives associated with this line are created in this directory. args Arguments passed to pmlogger . The default values for the args field are: -r Report the archive sizes and growth rate. -T24h10m Specifies when to end logging for each day. This is typically the time when pmlogger_daily.service runs. The default value of 24h10m indicates that logging should end 24 hours and 10 minutes after it begins, at the latest. -c config.default Specifies which configuration file to use. This essentially defines what metrics to record. -v 100Mb Specifies the size at which one data volume is filled and another is created. After it switches to the new archive, the previously recorded one will be compressed by either pmlogger_daily or pmlogger_check . Additional resources pmlogger(1) and pmlogrewrite(1) man pages on your system pmlogger_daily(1) , pmlogger_check(1) , and pmlogger.control(5) man pages on your system 6.7. Replaying the PCP log archives with pmrep After recording the metric data, you can replay the PCP log archives. To export the logs to text files and import them into spreadsheets, use PCP utilities such as pcp2csv , pcp2xml , pmrep , or pmlogsummary . Using the pmrep tool, you can: View the log files Parse the selected PCP log archive and export the values into an ASCII table Extract the entire archive log or only select metric values from the log by specifying individual metrics on the command line Prerequisites PCP is installed. For more information, see Installing and enabling PCP . The pmlogger service is enabled. For more information, see Enabling the pmlogger service . Install the pcp-gui package: Procedure Display the data on the metric: The mentioned example displays the data on the disk.dev.write metric collected in an archive at a 5-second interval in comma-separated-value format. Note Replace 20211128 in this example with a filename containing the pmlogger archive you want to display data for. Additional resources pmlogger(1) , pmrep(1) , and pmlogsummary(1) man pages on your system System services and tools distributed with PCP 6.8. Enabling PCP version 3 archives Performance Co-Pilot (PCP) archives store historical values of PCP metrics recorded from a single host and support retrospective performance analysis. PCP archives contain all the important metric data and metadata needed for offline or offsite analysis. These archives can be read by most PCP client tools or dumped raw by the pmdumplog tool. From PCP 6.0, version 3 archives are supported in addition to version 2 archives.
Version 2 archives remain the default and will continue to receive long-term support for backwards compatibility purposes in addition to version 3 archives receiving long-term support from RHEL 9.2 and on. Using PCP version 3 archives offers the following benefits over version 2: Support for instance domain change-deltas Y2038-safe timestamps Nanosecond-precision timestamps Arbitrary timezones support 64-bit file offsets used for individual volumes larger than 2GB Prerequisites PCP is installed. For more information, see Installing and enabling PCP . Procedure Open the /etc/pcp.conf file in a text editor of your choice and set the PCP archive version: Restart the pmlogger service to apply your configuration changes: Create a new PCP archive log using your new configuration. For more information, see Logging performance data with pmlogger . Verification Verify the version of the archive created with your new configuration: Additional resources logarchive(5) and pmlogger(1) man pages on your system Logging performance data with pmlogger
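As a complement to the pmrep example above, the pcp2csv exporter mentioned in the replay section can write archive data straight to a CSV file for spreadsheet import. The following is a sketch that reuses the hypothetical 20211128 archive, interval, and metric from the earlier example; adjust them to suit your own archives:
pcp2csv -a 20211128 -t 5sec disk.dev.write > disk-writes.csv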
|
[
"pmlogconf -r /var/lib/pcp/config/pmlogger/config.default",
"It is safe to make additions from here on # log mandatory on every 5 seconds { xfs.write xfs.write_bytes xfs.read xfs.read_bytes } log mandatory on every 10 seconds { xfs.allocs xfs.block_map xfs.transactions xfs.log } [access] disallow * : all; allow localhost : enquire;",
"systemctl start pmlogger systemctl enable pmlogger",
"pcp Performance Co-Pilot configuration on workstation: platform: Linux workstation 4.18.0-80.el8.x86_64 #1 SMP Wed Mar 13 12:02:46 UTC 2019 x86_64 hardware: 12 cpus, 2 disks, 1 node, 36023MB RAM timezone: CEST-2 services: pmcd pmcd: Version 4.3.0-1, 8 agents, 1 client pmda: root pmcd proc xfs linux mmv kvm jbd2 pmlogger: primary logger: /var/log/pcp/pmlogger/workstation/20190827.15.54",
"dnf install pcp-system-tools",
"echo \"-i 192.168.4.62 \" >>/etc/pcp/pmcd/pmcd.options",
"firewall-cmd --permanent --zone=public --add-port=44321/tcp success firewall-cmd --reload success",
"setsebool -P pcp_bind_all_unreserved_ports on",
"systemctl enable pmcd pmlogger systemctl restart pmcd pmlogger",
"ss -tlp | grep 44321 LISTEN 0 5 127.0.0.1:44321 0.0.0.0:* users:((\"pmcd\",pid=151595,fd=6)) LISTEN 0 5 192.168.4.62:44321 0.0.0.0:* users:((\"pmcd\",pid=151595,fd=0)) LISTEN 0 5 [::1]:44321 [::]:* users:((\"pmcd\",pid=151595,fd=7))",
"dnf install pcp-system-tools",
"DO NOT REMOVE OR EDIT THE FOLLOWING LINE USDversion=1.1 192.168.4.13 n n PCP_ARCHIVE_DIR/rhel7u4a -r -T24h10m -c config.rhel7u4a 192.168.4.14 n n PCP_ARCHIVE_DIR/rhel6u10a -r -T24h10m -c config.rhel6u10a 192.168.4.62 n n PCP_ARCHIVE_DIR/rhel8u1a -r -T24h10m -c config.rhel8u1a 192.168.4.69 n n PCP_ARCHIVE_DIR/rhel9u3a -r -T24h10m -c config.rhel9u3a",
"systemctl enable pmcd pmlogger systemctl restart pmcd pmlogger",
"for i in /var/log/pcp/pmlogger/rhel*/*.0; do pmdumplog -L USDi; done Log Label (Log Format Version 2) Performance metrics from host rhel6u10a.local commencing Mon Nov 25 21:55:04.851 2019 ending Mon Nov 25 22:06:04.874 2019 Archive timezone: JST-9 PID for pmlogger: 24002 Log Label (Log Format Version 2) Performance metrics from host rhel7u4a commencing Tue Nov 26 06:49:24.954 2019 ending Tue Nov 26 07:06:24.979 2019 Archive timezone: CET-1 PID for pmlogger: 10941 [..]",
"=== LOGGER CONTROL SPECIFICATIONS === # #Host P? S? directory args local primary logger LOCALHOSTNAME y n PCP_ARCHIVE_DIR/LOCALHOSTNAME -r -T24h10m -c config.default -v 100Mb",
"dnf install pcp-gui",
"pmrep --start @3:00am --archive 20211128 --interval 5seconds --samples 10 --output csv disk.dev.write Time,\"disk.dev.write-sda\",\"disk.dev.write-sdb\" 2021-11-28 03:00:00,, 2021-11-28 03:00:05,4.000,5.200 2021-11-28 03:00:10,1.600,7.600 2021-11-28 03:00:15,0.800,7.100 2021-11-28 03:00:20,16.600,8.400 2021-11-28 03:00:25,21.400,7.200 2021-11-28 03:00:30,21.200,6.800 2021-11-28 03:00:35,21.000,27.600 2021-11-28 03:00:40,12.400,33.800 2021-11-28 03:00:45,9.800,20.600",
"PCP_ARCHIVE_VERSION=3",
"systemctl restart pmlogger.service",
"pmloglabel -l /var/log/pcp/pmlogger/ 20230208 Log Label (Log Format Version 3) Performance metrics from host host1 commencing Wed Feb 08 00:11:09.396 2023 ending Thu Feb 07 00:13:54.347 2023"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/monitoring_and_managing_system_status_and_performance/logging-performance-data-with-pmlogger_monitoring-and-managing-system-status-and-performance
|
7.222. samba
|
7.222. samba 7.222.1. RHBA-2013:0338 - samba bug fix and enhancement update Updated samba packages that fix multiple bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. Samba is an open-source implementation of the Server Message Block (SMB) and Common Internet File System (CIFS) protocol, which allows PC-compatible machines to share files, printers, and other information. Note The samba packages have been upgraded to upstream version 3.6, which provides a number of bug fixes and enhancements over the version. In particular, support for the SMB2 protocol has been added. SMB2 support can be enabled with the following parameter in the [global] section of the /etc/samba/smb.conf file: Additionally, Samba now has support for AES Kerberos encryption. AES support has been available in Microsoft Windows operating systems since Windows Vista and Windows Server 2008. It is reported to be the new default Kerberos encryption type since Windows 7. Samba now adds AES Kerberos keys to the keytab it controls. This means that other Kerberos based services that use the Samba keytab and run on the same machine can benefit from AES encryption. In order to use AES session keys (and not only use AES encrypted ticket granting tickets), the Samba machine account in Active Directory's LDAP server needs to be manually modified. For more information, refer to the Microsoft Open Specifications Support Team Blog . Also note that several Trivial Database ( TDB ) files have been updated and printing support has been rewritten to use the actual registry implementation. This means that all TDB files are upgraded as soon as you start the new Samba server daemon ( smbd ) version. You cannot downgrade to an older Samba version unless you have backups of the TDB files. (BZ# 649479 ) Warning The updated samba packages also change the way ID mapping is configured. Users are advised to modify their existing Samba configuration files. For more information, refer to the Release Notes for Samba 3.6.0 , the smb.conf man page and the individual IDMAP backend man pages. If you upgrade from Red Hat Enterprise Linux 6.3 to Red Hat Enterprise Linux 6.4 and you have Samba in use, you should make sure that you uninstall the package named samba4 to avoid conflicts during the upgrade. Bug Fixes BZ# 760109 Previously, the pam_winbind utility returned an incorrect PAM error code if the Winbind module was not reachable. Consequently, users were not able to log in even if another PAM Module authenticated the user successfully. With this update, the error PAM_USER_UNKNOWN is always returned in case Winbind fails to authenticate a user. As a result, users successfully authenticated by another PAM module can log in as expected. BZ#838893 Samba 3.6 failed to migrate existing printers from the Trivial Database ( TDB ) to the registry due to a Network Data Representation ( NDR ) alignment problem. Consequently, printers from 3.5 could not be migrated and the Samba server daemon ( smbd ) stopped with an error. The NDR parser has been fixed to correctly parse printing entries from Samba 3.5. As a result, printers are correctly migrated from 3.5 TDB to the 3.6 registry. BZ# 866412 Due to a regression, the release changed the behavior of resolving domain local groups and the Winbind daemon ( winbindd ) could not find them. The original behavior for resolving the domain local groups has been restored. As a result, the ID command resolves domain local groups in its own domain correctly again. 
BZ# 866570 The net utility improperly displayed the realm which it had joined in all lowercase letters. Consequently, a user might misunderstand the domain join and might use the lowercase format of the realm name. This update corrects the case and improves the wording of the message printed about a domain join. As a result, the user is correctly informed as to which DNS domain the system has joined. BZ#875879 If a Domain Controller ( DC ) was rebuilding the System Volume ( Sysvol ) shared directory and turned off netlogon , users were not able to log in until it was finished, even if another working DC was available. Consequently, users could not log in and got strange errors if netlogon was available and then was turned off. With this update, Samba retries twice to open the netlogon connection and if it still does not work the DC is added to the negative connection cache and Samba will failover to the DC. As a result, the user no longer sees any error messages in this scenario and can log in using another DC as expected. Enhancements BZ# 748407 When joining an Active Directory domain and using Samba's support for using Kerberos keytabs, AES Kerberos keys were not added into the generated keytab. Consequently, Samba did not support the new AES encryption type for Kerberos. This update adds support for AES Kerberos keys to Samba and AES Kerberos Keys are now created in the keytab during the Domain join. Users of samba are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
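To illustrate the SMB2 parameter mentioned above, a minimal [global] section might look like the following sketch; the workgroup value is a placeholder and the rest of your existing configuration stays unchanged:
[global]
   workgroup = EXAMPLE
   max protocol = SMB2
After editing /etc/samba/smb.conf, you can run testparm to confirm that the file still parses cleanly before restarting the smb service.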
|
[
"max protocol = SMB2"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/samba
|
Chapter 4. Deploying AMQ Streams
|
Chapter 4. Deploying AMQ Streams Having prepared your environment for a deployment of AMQ Streams , this section shows: How to create the Kafka cluster Optional procedures to deploy other Kafka components according to your requirements: Kafka Connect Kafka MirrorMaker Kafka Bridge The procedures assume an OpenShift cluster is available and running. AMQ Streams is based on AMQ Streams Strimzi 0.20.x. This section describes the procedures to deploy AMQ Streams on OpenShift 3.11 and later. Note To run the commands in this guide, your cluster user must have the rights to manage role-based access control (RBAC) and CRDs. 4.1. Create the Kafka cluster In order to create your Kafka cluster, you deploy the Cluster Operator to manage the Kafka cluster, then deploy the Kafka cluster. When deploying the Kafka cluster using the Kafka resource, you can deploy the Topic Operator and User Operator at the same time. Alternatively, if you are using a non-AMQ Streams Kafka cluster, you can deploy the Topic Operator and User Operator as standalone components. Deploying a Kafka cluster with the Topic Operator and User Operator Perform these deployment steps if you want to use the Topic Operator and User Operator with a Kafka cluster managed by AMQ Streams. Deploy the Cluster Operator Use the Cluster Operator to deploy the: Kafka cluster Topic Operator User Operator Deploying a standalone Topic Operator and User Operator Perform these deployment steps if you want to use the Topic Operator and User Operator with a Kafka cluster that is not managed by AMQ Streams. Deploy the standalone Topic Operator Deploy the standalone User Operator 4.1.1. Deploying the Cluster Operator The Cluster Operator is responsible for deploying and managing Apache Kafka clusters within an OpenShift cluster. The procedures in this section show: How to deploy the Cluster Operator to watch : A single namespace Multiple namespaces All namespaces Alternative deployment options: How to deploy the Cluster Operator deployment from the OperatorHub . 4.1.1.1. Watch options for a Cluster Operator deployment When the Cluster Operator is running, it starts to watch for updates of Kafka resources. You can choose to deploy the Cluster Operator to watch Kafka resources from: A single namespace (the same namespace containing the Cluster Operator) Multiple namespaces All namespaces Note AMQ Streams provides example YAML files to make the deployment process easier. The Cluster Operator watches for changes to the following resources: Kafka for the Kafka cluster. KafkaConnect for the Kafka Connect cluster. KafkaConnectS2I for the Kafka Connect cluster with Source2Image support. KafkaConnector for creating and managing connectors in a Kafka Connect cluster. KafkaMirrorMaker for the Kafka MirrorMaker instance. KafkaBridge for the Kafka Bridge instance When one of these resources is created in the OpenShift cluster, the operator gets the cluster description from the resource and starts creating a new cluster for the resource by creating the necessary OpenShift resources, such as StatefulSets, Services and ConfigMaps. Each time a Kafka resource is updated, the operator performs corresponding updates on the OpenShift resources that make up the cluster for the resource. Resources are either patched or deleted, and then recreated in order to make the cluster for the resource reflect the desired state of the cluster. This operation might cause a rolling update that might lead to service disruption. 
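Once the AMQ Streams CRDs are installed, you can confirm which of these resource types your cluster recognizes before creating any of them. This is a sketch; it assumes the install/cluster-operator resources have already been applied:
oc api-resources --api-group=kafka.strimzi.io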
When a resource is deleted, the operator undeploys the cluster and deletes all related OpenShift resources. 4.1.1.2. Deploying the Cluster Operator to watch a single namespace This procedure shows how to deploy the Cluster Operator to watch AMQ Streams resources in a single namespace in your OpenShift cluster. Prerequisites This procedure requires use of an OpenShift user account which is able to create CustomResourceDefinitions , ClusterRoles and ClusterRoleBindings . Use of Role Base Access Control (RBAC) in the OpenShift cluster usually means that permission to create, edit, and delete these resources is limited to OpenShift cluster administrators, such as system:admin . Procedure Edit the AMQ Streams installation files to use the namespace the Cluster Operator is going to be installed into. For example, in this procedure the Cluster Operator is installed into the namespace my-cluster-operator-namespace . On Linux, use: On MacOS, use: Deploy the Cluster Operator: oc apply -f install/cluster-operator -n my-cluster-operator-namespace Verify that the Cluster Operator was successfully deployed: oc get deployments 4.1.1.3. Deploying the Cluster Operator to watch multiple namespaces This procedure shows how to deploy the Cluster Operator to watch AMQ Streams resources across multiple namespaces in your OpenShift cluster. Prerequisites This procedure requires use of an OpenShift user account which is able to create CustomResourceDefinitions , ClusterRoles and ClusterRoleBindings . Use of Role Base Access Control (RBAC) in the OpenShift cluster usually means that permission to create, edit, and delete these resources is limited to OpenShift cluster administrators, such as system:admin . Procedure Edit the AMQ Streams installation files to use the namespace the Cluster Operator is going to be installed into. For example, in this procedure the Cluster Operator is installed into the namespace my-cluster-operator-namespace . On Linux, use: On MacOS, use: Edit the install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml file to add a list of all the namespaces the Cluster Operator will watch to the STRIMZI_NAMESPACE environment variable. For example, in this procedure the Cluster Operator will watch the namespaces watched-namespace-1 , watched-namespace-2 , watched-namespace-3 . apiVersion: apps/v1 kind: Deployment spec: # ... template: spec: serviceAccountName: strimzi-cluster-operator containers: - name: strimzi-cluster-operator image: registry.redhat.io/amq7/amq-streams-rhel7-operator:1.6.7 imagePullPolicy: IfNotPresent env: - name: STRIMZI_NAMESPACE value: watched-namespace-1,watched-namespace-2,watched-namespace-3 For each namespace listed, install the RoleBindings . In this example, we replace watched-namespace in these commands with the namespaces listed in the step, repeating them for watched-namespace-1 , watched-namespace-2 , watched-namespace-3 : oc apply -f install/cluster-operator/020-RoleBinding-strimzi-cluster-operator.yaml -n watched-namespace oc apply -f install/cluster-operator/031-RoleBinding-strimzi-cluster-operator-entity-operator-delegation.yaml -n watched-namespace oc apply -f install/cluster-operator/032-RoleBinding-strimzi-cluster-operator-topic-operator-delegation.yaml -n watched-namespace Deploy the Cluster Operator: oc apply -f install/cluster-operator -n my-cluster-operator-namespace Verify that the Cluster Operator was successfully deployed: oc get deployments 4.1.1.4. 
Deploying the Cluster Operator to watch all namespaces This procedure shows how to deploy the Cluster Operator to watch AMQ Streams resources across all namespaces in your OpenShift cluster. When running in this mode, the Cluster Operator automatically manages clusters in any new namespaces that are created. Prerequisites This procedure requires use of an OpenShift user account which is able to create CustomResourceDefinitions , ClusterRoles and ClusterRoleBindings . Use of Role Base Access Control (RBAC) in the OpenShift cluster usually means that permission to create, edit, and delete these resources is limited to OpenShift cluster administrators, such as system:admin . Procedure Edit the AMQ Streams installation files to use the namespace the Cluster Operator is going to be installed into. For example, in this procedure the Cluster Operator is installed into the namespace my-cluster-operator-namespace . On Linux, use: On MacOS, use: Edit the install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml file to set the value of the STRIMZI_NAMESPACE environment variable to * . apiVersion: apps/v1 kind: Deployment spec: # ... template: spec: # ... serviceAccountName: strimzi-cluster-operator containers: - name: strimzi-cluster-operator image: registry.redhat.io/amq7/amq-streams-rhel7-operator:1.6.7 imagePullPolicy: IfNotPresent env: - name: STRIMZI_NAMESPACE value: "*" # ... Create ClusterRoleBindings that grant cluster-wide access for all namespaces to the Cluster Operator. oc create clusterrolebinding strimzi-cluster-operator-namespaced --clusterrole=strimzi-cluster-operator-namespaced --serviceaccount my-cluster-operator-namespace :strimzi-cluster-operator oc create clusterrolebinding strimzi-cluster-operator-entity-operator-delegation --clusterrole=strimzi-entity-operator --serviceaccount my-cluster-operator-namespace :strimzi-cluster-operator oc create clusterrolebinding strimzi-cluster-operator-topic-operator-delegation --clusterrole=strimzi-topic-operator --serviceaccount my-cluster-operator-namespace :strimzi-cluster-operator Replace my-cluster-operator-namespace with the namespace you want to install the Cluster Operator into. Deploy the Cluster Operator to your OpenShift cluster. oc apply -f install/cluster-operator -n my-cluster-operator-namespace Verify that the Cluster Operator was successfully deployed: oc get deployments 4.1.1.5. Deploying the Cluster Operator from the OperatorHub You can deploy the Cluster Operator to your OpenShift cluster by installing the AMQ Streams Operator from the OperatorHub. The OperatorHub is available in OpenShift 4 only. Prerequisites The Red Hat Operators OperatorSource is enabled in your OpenShift cluster. If you can see Red Hat Operators in the OperatorHub, the correct OperatorSource is enabled. For more information, see the Operators guide. Installation requires a user with sufficient privileges to install Operators from the OperatorHub. Procedure In the OpenShift 4 web console, click Operators > OperatorHub . Search or browse for the AMQ Streams Operator, in the Streaming & Messaging category. Click the AMQ Streams tile and then, in the sidebar on the right, click Install . On the Create Operator Subscription screen, choose from the following installation and update options: Installation Mode : Choose to install the AMQ Streams Operator to all (projects) namespaces in the cluster (the default option) or a specific (project) namespace. It is good practice to use namespaces to separate functions. 
We recommend that you dedicate a specific namespace to the Kafka cluster and other AMQ Streams components. Approval Strategy : By default, the AMQ Streams Operator is automatically upgraded to the latest AMQ Streams version by the Operator Lifecycle Manager (OLM). Optionally, select Manual if you want to manually approve future upgrades. For more information, see the Operators guide in the OpenShift documentation. Click Subscribe ; the AMQ Streams Operator is installed to your OpenShift cluster. The AMQ Streams Operator deploys the Cluster Operator, CRDs, and role-based access control (RBAC) resources to the selected namespace, or to all namespaces. On the Installed Operators screen, check the progress of the installation. The AMQ Streams Operator is ready to use when its status changes to InstallSucceeded . , you can deploy the other components of AMQ Streams, starting with a Kafka cluster, using the YAML example files. Additional resources Section 1.4, "AMQ Streams installation methods" Section 4.1.2.1, "Deploying the Kafka cluster" 4.1.2. Deploying Kafka Apache Kafka is an open-source distributed publish-subscribe messaging system for fault-tolerant real-time data feeds. The procedures in this section show: How to use the Cluster Operator to deploy: An ephemeral or persistent Kafka cluster The Topic Operator and User Operator by configuring the Kafka custom resource: Topic Operator User Operator Alternative standalone deployment procedures for the Topic Operator and User Operator: Deploy the standalone Topic Operator Deploy the standalone User Operator When installing Kafka, AMQ Streams also installs a ZooKeeper cluster and adds the necessary configuration to connect Kafka with ZooKeeper. 4.1.2.1. Deploying the Kafka cluster This procedure shows how to deploy a Kafka cluster to your OpenShift using the Cluster Operator. The deployment uses a YAML file to provide the specification to create a Kafka resource. AMQ Streams provides example YAMLs files for deployment in examples/kafka/ : kafka-persistent.yaml Deploys a persistent cluster with three ZooKeeper and three Kafka nodes. kafka-jbod.yaml Deploys a persistent cluster with three ZooKeeper and three Kafka nodes (each using multiple persistent volumes). kafka-persistent-single.yaml Deploys a persistent cluster with a single ZooKeeper node and a single Kafka node. kafka-ephemeral.yaml Deploys an ephemeral cluster with three ZooKeeper and three Kafka nodes. kafka-ephemeral-single.yaml Deploys an ephemeral cluster with three ZooKeeper nodes and a single Kafka node. In this procedure, we use the examples for an ephemeral and persistent Kafka cluster deployment: Ephemeral cluster In general, an ephemeral (or temporary) Kafka cluster is suitable for development and testing purposes, not for production. This deployment uses emptyDir volumes for storing broker information (for ZooKeeper) and topics or partitions (for Kafka). Using an emptyDir volume means that its content is strictly related to the pod life cycle and is deleted when the pod goes down. Persistent cluster A persistent Kafka cluster uses PersistentVolumes to store ZooKeeper and Kafka data. The PersistentVolume is acquired using a PersistentVolumeClaim to make it independent of the actual type of the PersistentVolume . For example, it can use Amazon EBS volumes in Amazon AWS deployments without any changes in the YAML files. The PersistentVolumeClaim can use a StorageClass to trigger automatic volume provisioning. The example clusters are named my-cluster by default. 
The cluster name is defined by the name of the resource and cannot be changed after the cluster has been deployed. To change the cluster name before you deploy the cluster, edit the Kafka.metadata.name property of the Kafka resource in the relevant YAML file. apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster # ... For more information about configuring the Kafka resource, see Kafka cluster configuration in the Using AMQ Streams on OpenShift guide. Prerequisites The Cluster Operator must be deployed. Procedure Create and deploy an ephemeral or persistent cluster. For development or testing, you might prefer to use an ephemeral cluster. You can use a persistent cluster in any situation. To create and deploy an ephemeral cluster: oc apply -f examples/kafka/kafka-ephemeral.yaml To create and deploy a persistent cluster: oc apply -f examples/kafka/kafka-persistent.yaml Verify that the Kafka cluster was successfully deployed: oc get deployments 4.1.2.2. Deploying the Topic Operator using the Cluster Operator This procedure describes how to deploy the Topic Operator using the Cluster Operator. You configure the entityOperator property of the Kafka resource to include the topicOperator . If you want to use the Topic Operator with a Kafka cluster that is not managed by AMQ Streams, you must deploy the Topic Operator as a standalone component . For more information about configuring the entityOperator and topicOperator properties, see Entity Operator in the Using AMQ Streams on OpenShift guide. Prerequisites The Cluster Operator must be deployed. Procedure Edit the entityOperator properties of the Kafka resource to include topicOperator : apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: #... entityOperator: topicOperator: {} userOperator: {} Configure the Topic Operator spec using the properties described in EntityTopicOperatorSpec schema reference . Use an empty object ( {} ) if you want all properties to use their default values. Create or update the resource: Use oc apply : oc apply -f <your-file> 4.1.2.3. Deploying the User Operator using the Cluster Operator This procedure describes how to deploy the User Operator using the Cluster Operator. You configure the entityOperator property of the Kafka resource to include the userOperator . If you want to use the User Operator with a Kafka cluster that is not managed by AMQ Streams, you must deploy the User Operator as a standalone component . For more information about configuring the entityOperator and userOperator properties, see Entity Operator in the Using AMQ Streams on OpenShift guide. Prerequisites The Cluster Operator must be deployed. Procedure Edit the entityOperator properties of the Kafka resource to include userOperator : apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: #... entityOperator: topicOperator: {} userOperator: {} Configure the User Operator spec using the properties described in EntityUserOperatorSpec schema reference in the Using AMQ Streams on OpenShift guide. Use an empty object ( {} ) if you want all properties to use their default values. Create or update the resource: oc apply -f <your-file> 4.1.3. Alternative standalone deployment options for AMQ Streams Operators When deploying a Kafka cluster using the Cluster Operator, you can also deploy the Topic Operator and User Operator. Alternatively, you can perform a standalone deployment. 
A standalone deployment means the Topic Operator and User Operator can operate with a Kafka cluster that is not managed by AMQ Streams. 4.1.3.1. Deploying the standalone Topic Operator This procedure shows how to deploy the Topic Operator as a standalone component. A standalone deployment requires configuration of environment variables, and is more complicated than deploying the Topic Operator using the Cluster Operator . However, a standalone deployment is more flexible as the Topic Operator can operate with any Kafka cluster, not necessarily one deployed by the Cluster Operator. Prerequisites You need an existing Kafka cluster for the Topic Operator to connect to. Procedure Edit the Deployment.spec.template.spec.containers[0].env properties in the install/topic-operator/05-Deployment-strimzi-topic-operator.yaml file by setting: STRIMZI_KAFKA_BOOTSTRAP_SERVERS to list the bootstrap brokers in your Kafka cluster, given as a comma-separated list of hostname : port pairs. STRIMZI_ZOOKEEPER_CONNECT to list the ZooKeeper nodes, given as a comma-separated list of hostname : port pairs. This should be the same ZooKeeper cluster that your Kafka cluster is using. STRIMZI_NAMESPACE to the OpenShift namespace in which you want the operator to watch for KafkaTopic resources. STRIMZI_RESOURCE_LABELS to the label selector used to identify the KafkaTopic resources managed by the operator. STRIMZI_FULL_RECONCILIATION_INTERVAL_MS to specify the interval between periodic reconciliations, in milliseconds. STRIMZI_TOPIC_METADATA_MAX_ATTEMPTS to specify the number of attempts at getting topic metadata from Kafka. The time between each attempt is defined as an exponential back-off. Consider increasing this value when topic creation could take more time due to the number of partitions or replicas. Default 6 . STRIMZI_ZOOKEEPER_SESSION_TIMEOUT_MS to the ZooKeeper session timeout, in milliseconds. For example, 10000 . Default 20000 (20 seconds). STRIMZI_TOPICS_PATH to the Zookeeper node path where the Topic Operator stores its metadata. Default /strimzi/topics . STRIMZI_TLS_ENABLED to enable TLS support for encrypting the communication with Kafka brokers. Default true . STRIMZI_TRUSTSTORE_LOCATION to the path to the truststore containing certificates for enabling TLS based communication. Mandatory only if TLS is enabled through STRIMZI_TLS_ENABLED . STRIMZI_TRUSTSTORE_PASSWORD to the password for accessing the truststore defined by STRIMZI_TRUSTSTORE_LOCATION . Mandatory only if TLS is enabled through STRIMZI_TLS_ENABLED . STRIMZI_KEYSTORE_LOCATION to the path to the keystore containing private keys for enabling TLS based communication. Mandatory only if TLS is enabled through STRIMZI_TLS_ENABLED . STRIMZI_KEYSTORE_PASSWORD to the password for accessing the keystore defined by STRIMZI_KEYSTORE_LOCATION . Mandatory only if TLS is enabled through STRIMZI_TLS_ENABLED . STRIMZI_LOG_LEVEL to the level for printing logging messages. The value can be set to: ERROR , WARNING , INFO , DEBUG , and TRACE . Default INFO . STRIMZI_JAVA_OPTS (optional) to the Java options used for the JVM running the Topic Operator. An example is -Xmx=512M -Xms=256M . STRIMZI_JAVA_SYSTEM_PROPERTIES (optional) to list the -D options which are set to the Topic Operator. An example is -Djavax.net.debug=verbose -DpropertyName=value . 
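As an illustration of the variables described above, a trimmed env section for the standalone Topic Operator Deployment might look like the following sketch; the bootstrap, ZooKeeper, and namespace values are placeholders, and only a subset of the variables is shown:
env:
  - name: STRIMZI_KAFKA_BOOTSTRAP_SERVERS
    value: my-kafka-bootstrap:9092
  - name: STRIMZI_ZOOKEEPER_CONNECT
    value: my-zookeeper:2181
  - name: STRIMZI_NAMESPACE
    value: my-topic-namespace
  - name: STRIMZI_FULL_RECONCILIATION_INTERVAL_MS
    value: "120000"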
Deploy the Topic Operator: oc apply -f install/topic-operator Verify that the Topic Operator has been deployed successfully: oc describe deployment strimzi-topic-operator The Topic Operator is deployed when the Replicas: entry shows 1 available . Note You may experience a delay with the deployment if you have a slow connection to the OpenShift cluster and the images have not been downloaded before. 4.1.3.2. Deploying the standalone User Operator This procedure shows how to deploy the User Operator as a standalone component. A standalone deployment requires configuration of environment variables, and is more complicated than deploying the User Operator using the Cluster Operator . However, a standalone deployment is more flexible as the User Operator can operate with any Kafka cluster, not necessarily one deployed by the Cluster Operator. Prerequisites You need an existing Kafka cluster for the User Operator to connect to. Procedure Edit the following Deployment.spec.template.spec.containers[0].env properties in the install/user-operator/05-Deployment-strimzi-user-operator.yaml file by setting: STRIMZI_KAFKA_BOOTSTRAP_SERVERS to list the Kafka brokers, given as a comma-separated list of hostname : port pairs. STRIMZI_ZOOKEEPER_CONNECT to list the ZooKeeper nodes, given as a comma-separated list of hostname : port pairs. This must be the same ZooKeeper cluster that your Kafka cluster is using. Connecting to ZooKeeper nodes with TLS encryption is not supported. STRIMZI_NAMESPACE to the OpenShift namespace in which you want the operator to watch for KafkaUser resources. STRIMZI_LABELS to the label selector used to identify the KafkaUser resources managed by the operator. STRIMZI_FULL_RECONCILIATION_INTERVAL_MS to specify the interval between periodic reconciliations, in milliseconds. STRIMZI_ZOOKEEPER_SESSION_TIMEOUT_MS to the ZooKeeper session timeout, in milliseconds. For example, 10000 . Default 20000 (20 seconds). STRIMZI_CA_CERT_NAME to point to an OpenShift Secret that contains the public key of the Certificate Authority for signing new user certificates for TLS client authentication. The Secret must contain the public key of the Certificate Authority under the key ca.crt . STRIMZI_CA_KEY_NAME to point to an OpenShift Secret that contains the private key of the Certificate Authority for signing new user certificates for TLS client authentication. The Secret must contain the private key of the Certificate Authority under the key ca.key . STRIMZI_CLUSTER_CA_CERT_SECRET_NAME to point to an OpenShift Secret containing the public key of the Certificate Authority used for signing Kafka brokers certificates for enabling TLS-based communication. The Secret must contain the public key of the Certificate Authority under the key ca.crt . This environment variable is optional and should be set only if the communication with the Kafka cluster is TLS based. STRIMZI_EO_KEY_SECRET_NAME to point to an OpenShift Secret containing the private key and related certificate for TLS client authentication against the Kafka cluster. The Secret must contain the keystore with the private key and certificate under the key entity-operator.p12 , and the related password under the key entity-operator.password . This environment variable is optional and should be set only if TLS client authentication is needed when the communication with the Kafka cluster is TLS based. STRIMZI_CA_VALIDITY the validity period for the Certificate Authority. Default is 365 days. 
STRIMZI_CA_RENEWAL the renewal period for the Certificate Authority. STRIMZI_LOG_LEVEL to the level for printing logging messages. The value can be set to: ERROR , WARNING , INFO , DEBUG , and TRACE . Default INFO . STRIMZI_GC_LOG_ENABLED to enable garbage collection (GC) logging. Default true . Default is 30 days to initiate certificate renewal before the old certificates expire. STRIMZI_JAVA_OPTS (optional) to the Java options used for the JVM running User Operator. An example is -Xmx=512M -Xms=256M . STRIMZI_JAVA_SYSTEM_PROPERTIES (optional) to list the -D options which are set to the User Operator. An example is -Djavax.net.debug=verbose -DpropertyName=value . Deploy the User Operator: oc apply -f install/user-operator Verify that the User Operator has been deployed successfully: oc describe deployment strimzi-user-operator The User Operator is deployed when the Replicas: entry shows 1 available . Note You may experience a delay with the deployment if you have a slow connection to the OpenShift cluster and the images have not been downloaded before. 4.2. Deploy Kafka Connect Kafka Connect is a tool for streaming data between Apache Kafka and external systems. In AMQ Streams, Kafka Connect is deployed in distributed mode. Kafka Connect can also work in standalone mode, but this is not supported by AMQ Streams. Using the concept of connectors , Kafka Connect provides a framework for moving large amounts of data into and out of your Kafka cluster while maintaining scalability and reliability. Kafka Connect is typically used to integrate Kafka with external databases and storage and messaging systems. The procedures in this section show how to: Deploy a Kafka Connect cluster using a KafkaConnect resource Create a Kafka Connect image containing the connectors you need to make your connection Create and manage connectors using a KafkaConnector resource or the Kafka Connect REST API Deploy a KafkaConnector resource to Kafka Connect Note The term connector is used interchangeably to mean a connector instance running within a Kafka Connect cluster, or a connector class. In this guide, the term connector is used when the meaning is clear from the context. 4.2.1. Deploying Kafka Connect to your OpenShift cluster This procedure shows how to deploy a Kafka Connect cluster to your OpenShift cluster using the Cluster Operator. A Kafka Connect cluster is implemented as a Deployment with a configurable number of nodes (also called workers ) that distribute the workload of connectors as tasks so that the message flow is highly scalable and reliable. The deployment uses a YAML file to provide the specification to create a KafkaConnect resource. In this procedure, we use the example file provided with AMQ Streams: examples/connect/kafka-connect.yaml For information about configuring the KafkaConnect resource (or the KafkaConnectS2I resource with Source-to-Image (S2I) support), see Kafka Connect cluster configuration in the Using AMQ Streams on OpenShift guide. Prerequisites The Cluster Operator must be deployed. Running Kafka cluster. Procedure Deploy Kafka Connect to your OpenShift cluster. For a Kafka cluster with 3 or more brokers, use the examples/connect/kafka-connect.yaml file. For a Kafka cluster with less than 3 brokers, use the examples/connect/kafka-connect-single-node-kafka.yaml file. oc apply -f examples/connect/kafka-connect.yaml Verify that Kafka Connect was successfully deployed: oc get deployments 4.2.2. 
Extending Kafka Connect with connector plug-ins The AMQ Streams container images for Kafka Connect include two built-in file connectors for moving file-based data into and out of your Kafka cluster. Table 4.1. File connectors File Connector Description FileStreamSourceConnector Transfers data to your Kafka cluster from a file (the source). FileStreamSinkConnector Transfers data from your Kafka cluster to a file (the sink). The Cluster Operator can also use images that you have created to deploy a Kafka Connect cluster to your OpenShift cluster. The procedures in this section show how to add your own connector classes to connector images by: Creating a container image from the Kafka Connect base image (manually or using continuous integration) Creating a container image using OpenShift builds and Source-to-Image (S2I) (available only on OpenShift) Important You create the configuration for connectors directly using the Kafka Connect REST API or KafkaConnector custom resources . 4.2.2.1. Creating a Docker image from the Kafka Connect base image This procedure shows how to create a custom image and add it to the /opt/kafka/plugins directory. You can use the Kafka container image on Red Hat Ecosystem Catalog as a base image for creating your own custom image with additional connector plug-ins. At startup, the AMQ Streams version of Kafka Connect loads any third-party connector plug-ins contained in the /opt/kafka/plugins directory. Prerequisites The Cluster Operator must be deployed. Procedure Create a new Dockerfile using registry.redhat.io/amq7/amq-streams-kafka-26-rhel7:1.6.7 as the base image: Example plug-in file Build the container image. Push your custom image to your container registry. Point to the new container image. You can either: Edit the KafkaConnect.spec.image property of the KafkaConnect custom resource. If set, this property overrides the STRIMZI_KAFKA_CONNECT_IMAGES variable in the Cluster Operator. apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaConnect metadata: name: my-connect-cluster spec: 1 #... image: my-new-container-image 2 config: 3 #... 1 The specification for the Kafka Connect cluster . 2 The docker image for the pods. 3 Configuration of the Kafka Connect workers (not connectors). or In the install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml file, edit the STRIMZI_KAFKA_CONNECT_IMAGES variable to point to the new container image, and then reinstall the Cluster Operator. Additional resources See the Using AMQ Streams on OpenShift guide for more information on: Container image configuration and the KafkaConnect.spec.image property Cluster Operator configuration and the STRIMZI_KAFKA_CONNECT_IMAGES variable 4.2.2.2. Creating a container image using OpenShift builds and Source-to-Image This procedure shows how to use OpenShift builds and the Source-to-Image (S2I) framework to create a new container image. An OpenShift build takes a builder image with S2I support, together with source code and binaries provided by the user, and uses them to build a new container image. Once built, container images are stored in OpenShift's local container image repository and are available for use in deployments. A Kafka Connect builder image with S2I support is provided on the Red Hat Ecosystem Catalog as part of the registry.redhat.io/amq7/amq-streams-kafka-26-rhel7:1.6.7 image. This S2I image takes your binaries (with plug-ins and connectors) and stores them in the /tmp/kafka-plugins/s2i directory. 
It creates a new Kafka Connect image from this directory, which can then be used with the Kafka Connect deployment. When started using the enhanced image, Kafka Connect loads any third-party plug-ins from the /tmp/kafka-plugins/s2i directory. Procedure On the command line, use the oc apply command to create and deploy a Kafka Connect S2I cluster: oc apply -f examples/connect/kafka-connect-s2i.yaml Create a directory with Kafka Connect plug-ins: Use the oc start-build command to start a new build of the image using the prepared directory: oc start-build my-connect-cluster-connect --from-dir ./ my-plugins / Note The name of the build is the same as the name of the deployed Kafka Connect cluster. When the build has finished, the new image is used automatically by the Kafka Connect deployment. 4.2.3. Creating and managing connectors When you have created a container image for your connector plug-in, you need to create a connector instance in your Kafka Connect cluster. You can then configure, monitor, and manage a running connector instance. A connector is an instance of a particular connector class that knows how to communicate with the relevant external system in terms of messages. Connectors are available for many external systems, or you can create your own. You can create source and sink types of connector. Source connector A source connector is a runtime entity that fetches data from an external system and feeds it to Kafka as messages. Sink connector A sink connector is a runtime entity that fetches messages from Kafka topics and feeds them to an external system. AMQ Streams provides two APIs for creating and managing connectors: KafkaConnector resources (referred to as KafkaConnectors ) Kafka Connect REST API Using the APIs, you can: Check the status of a connector instance Reconfigure a running connector Increase or decrease the number of tasks for a connector instance Restart failed tasks (not supported by KafkaConnector resource) Pause a connector instance Resume a previously paused connector instance Delete a connector instance 4.2.3.1. KafkaConnector resources KafkaConnectors allow you to create and manage connector instances for Kafka Connect in an OpenShift-native way, so an HTTP client such as cURL is not required. Like other Kafka resources, you declare a connector's desired state in a KafkaConnector YAML file that is deployed to your OpenShift cluster to create the connector instance. You manage a running connector instance by updating its corresponding KafkaConnector , and then applying the updates. You remove a connector by deleting its corresponding KafkaConnector . To ensure compatibility with earlier versions of AMQ Streams, KafkaConnectors are disabled by default. To enable them for a Kafka Connect cluster, you must use annotations on the KafkaConnect resource. For instructions, see Configuring Kafka Connect in the Using AMQ Streams on OpenShift guide. When KafkaConnectors are enabled, the Cluster Operator begins to watch for them. It updates the configurations of running connector instances to match the configurations defined in their KafkaConnectors . AMQ Streams includes an example KafkaConnector , named examples/connect/source-connector.yaml . You can use this example to create and manage a FileStreamSourceConnector . 4.2.3.2. Availability of the Kafka Connect REST API The Kafka Connect REST API is available on port 8083 as the <connect-cluster-name>-connect-api service. 
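For example, from a pod inside the same OpenShift cluster you could list the connector instances through this service with a plain HTTP request. This is a sketch; my-connect-cluster is the example cluster name used elsewhere in this chapter:
curl http://my-connect-cluster-connect-api:8083/connectors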
If KafkaConnectors are enabled, manual changes made directly using the Kafka Connect REST API are reverted by the Cluster Operator. The operations supported by the REST API are described in the Apache Kafka documentation . 4.2.4. Deploying a KafkaConnector resource to Kafka Connect This procedure describes how to deploy the example KafkaConnector to a Kafka Connect cluster. The example YAML will create a FileStreamSourceConnector to send each line of the license file to Kafka as a message in a topic named my-topic . Prerequisites A Kafka Connect deployment in which KafkaConnectors are enabled A running Cluster Operator Procedure Edit the examples/connect/source-connector.yaml file: apiVersion: kafka.strimzi.io/v1alpha1 kind: KafkaConnector metadata: name: my-source-connector 1 labels: strimzi.io/cluster: my-connect-cluster 2 spec: class: org.apache.kafka.connect.file.FileStreamSourceConnector 3 tasksMax: 2 4 config: 5 file: "/opt/kafka/LICENSE" topic: my-topic # ... 1 Enter a name for the KafkaConnector resource. This will be used as the name of the connector within Kafka Connect. You can choose any name that is valid for an OpenShift resource. 2 Enter the name of the Kafka Connect cluster in which to create the connector. 3 The name or alias of the connector class. This should be present in the image being used by the Kafka Connect cluster. 4 The maximum number of tasks that the connector can create. 5 Configuration settings for the connector. Available configuration options depend on the connector class. Create the KafkaConnector in your OpenShift cluster: oc apply -f examples/connect/source-connector.yaml Check that the resource was created: oc get kctr --selector strimzi.io/cluster=my-connect-cluster -o name 4.3. Deploy Kafka MirrorMaker The Cluster Operator deploys one or more Kafka MirrorMaker replicas to replicate data between Kafka clusters. This process is called mirroring to avoid confusion with the Kafka partitions replication concept. MirrorMaker consumes messages from the source cluster and republishes those messages to the target cluster. 4.3.1. Deploying Kafka MirrorMaker to your OpenShift cluster This procedure shows how to deploy a Kafka MirrorMaker cluster to your OpenShift cluster using the Cluster Operator. The deployment uses a YAML file to provide the specification to create a KafkaMirrorMaker or KafkaMirrorMaker2 resource depending on the version of MirrorMaker deployed. In this procedure, we use the example files provided with AMQ Streams: examples/mirror-maker/kafka-mirror-maker.yaml examples/mirror-maker/kafka-mirror-maker-2.yaml For information about configuring KafkaMirrorMaker or KafkaMirrorMaker2 resources, see Kafka MirrorMaker cluster configuration in the Using AMQ Streams on OpenShift guide. Prerequisites The Cluster Operator must be deployed. Procedure Deploy Kafka MirrorMaker to your OpenShift cluster: For MirrorMaker: oc apply -f examples/mirror-maker/kafka-mirror-maker.yaml For MirrorMaker 2.0: oc apply -f examples/mirror-maker/kafka-mirror-maker-2.yaml Verify that MirrorMaker was successfully deployed: oc get deployments 4.4. Deploy Kafka Bridge The Cluster Operator deploys one or more Kafka bridge replicas to send data between Kafka clusters and clients via HTTP API. 4.4.1. Deploying Kafka Bridge to your OpenShift cluster This procedure shows how to deploy a Kafka Bridge cluster to your OpenShift cluster using the Cluster Operator. The deployment uses a YAML file to provide the specification to create a KafkaBridge resource. 
In this procedure, we use the example file provided with AMQ Streams: examples/bridge/kafka-bridge.yaml For information about configuring the KafkaBridge resource, see Kafka Bridge cluster configuration in the Using AMQ Streams on OpenShift guide. Prerequisites The Cluster Operator must be deployed. Procedure Deploy Kafka Bridge to your OpenShift cluster: oc apply -f examples/bridge/kafka-bridge.yaml Verify that Kafka Bridge was successfully deployed: oc get deployments
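After the deployment reports as ready, you can check that the bridge HTTP API responds from inside the cluster. The service name and port below are assumptions based on the example resource name my-bridge and the default bridge HTTP port; adjust them to match your KafkaBridge configuration:
curl http://my-bridge-bridge-service:8080/topics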
|
[
"sed -i 's/namespace: .*/namespace: my-cluster-operator-namespace /' install/cluster-operator/*RoleBinding*.yaml",
"sed -i '' 's/namespace: .*/namespace: my-cluster-operator-namespace /' install/cluster-operator/*RoleBinding*.yaml",
"apply -f install/cluster-operator -n my-cluster-operator-namespace",
"get deployments",
"sed -i 's/namespace: .*/namespace: my-cluster-operator-namespace /' install/cluster-operator/*RoleBinding*.yaml",
"sed -i '' 's/namespace: .*/namespace: my-cluster-operator-namespace /' install/cluster-operator/*RoleBinding*.yaml",
"apiVersion: apps/v1 kind: Deployment spec: # template: spec: serviceAccountName: strimzi-cluster-operator containers: - name: strimzi-cluster-operator image: registry.redhat.io/amq7/amq-streams-rhel7-operator:1.6.7 imagePullPolicy: IfNotPresent env: - name: STRIMZI_NAMESPACE value: watched-namespace-1,watched-namespace-2,watched-namespace-3",
"apply -f install/cluster-operator/020-RoleBinding-strimzi-cluster-operator.yaml -n watched-namespace apply -f install/cluster-operator/031-RoleBinding-strimzi-cluster-operator-entity-operator-delegation.yaml -n watched-namespace apply -f install/cluster-operator/032-RoleBinding-strimzi-cluster-operator-topic-operator-delegation.yaml -n watched-namespace",
"apply -f install/cluster-operator -n my-cluster-operator-namespace",
"get deployments",
"sed -i 's/namespace: .*/namespace: my-cluster-operator-namespace /' install/cluster-operator/*RoleBinding*.yaml",
"sed -i '' 's/namespace: .*/namespace: my-cluster-operator-namespace /' install/cluster-operator/*RoleBinding*.yaml",
"apiVersion: apps/v1 kind: Deployment spec: # template: spec: # serviceAccountName: strimzi-cluster-operator containers: - name: strimzi-cluster-operator image: registry.redhat.io/amq7/amq-streams-rhel7-operator:1.6.7 imagePullPolicy: IfNotPresent env: - name: STRIMZI_NAMESPACE value: \"*\" #",
"create clusterrolebinding strimzi-cluster-operator-namespaced --clusterrole=strimzi-cluster-operator-namespaced --serviceaccount my-cluster-operator-namespace :strimzi-cluster-operator create clusterrolebinding strimzi-cluster-operator-entity-operator-delegation --clusterrole=strimzi-entity-operator --serviceaccount my-cluster-operator-namespace :strimzi-cluster-operator create clusterrolebinding strimzi-cluster-operator-topic-operator-delegation --clusterrole=strimzi-topic-operator --serviceaccount my-cluster-operator-namespace :strimzi-cluster-operator",
"apply -f install/cluster-operator -n my-cluster-operator-namespace",
"get deployments",
"apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster",
"apply -f examples/kafka/kafka-ephemeral.yaml",
"apply -f examples/kafka/kafka-persistent.yaml",
"get deployments",
"apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: # entityOperator: topicOperator: {} userOperator: {}",
"apply -f <your-file>",
"apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: # entityOperator: topicOperator: {} userOperator: {}",
"apply -f <your-file>",
"apply -f install/topic-operator",
"describe deployment strimzi-topic-operator",
"apply -f install/user-operator",
"describe deployment strimzi-user-operator",
"apply -f examples/connect/kafka-connect.yaml",
"get deployments",
"FROM registry.redhat.io/amq7/amq-streams-kafka-26-rhel7:1.6.7 USER root:root COPY ./ my-plugins / /opt/kafka/plugins/ USER 1001",
"tree ./ my-plugins / ./ my-plugins / βββ debezium-connector-mongodb β βββ bson-3.4.2.jar β βββ CHANGELOG.md β βββ CONTRIBUTE.md β βββ COPYRIGHT.txt β βββ debezium-connector-mongodb-0.7.1.jar β βββ debezium-core-0.7.1.jar β βββ LICENSE.txt β βββ mongodb-driver-3.4.2.jar β βββ mongodb-driver-core-3.4.2.jar β βββ README.md βββ debezium-connector-mysql β βββ CHANGELOG.md β βββ CONTRIBUTE.md β βββ COPYRIGHT.txt β βββ debezium-connector-mysql-0.7.1.jar β βββ debezium-core-0.7.1.jar β βββ LICENSE.txt β βββ mysql-binlog-connector-java-0.13.0.jar β βββ mysql-connector-java-5.1.40.jar β βββ README.md β βββ wkb-1.0.2.jar βββ debezium-connector-postgres βββ CHANGELOG.md βββ CONTRIBUTE.md βββ COPYRIGHT.txt βββ debezium-connector-postgres-0.7.1.jar βββ debezium-core-0.7.1.jar βββ LICENSE.txt βββ postgresql-42.0.0.jar βββ protobuf-java-2.6.1.jar βββ README.md",
"apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaConnect metadata: name: my-connect-cluster spec: 1 # image: my-new-container-image 2 config: 3 #",
"apply -f examples/connect/kafka-connect-s2i.yaml",
"tree ./ my-plugins / ./ my-plugins / βββ debezium-connector-mongodb β βββ bson-3.4.2.jar β βββ CHANGELOG.md β βββ CONTRIBUTE.md β βββ COPYRIGHT.txt β βββ debezium-connector-mongodb-0.7.1.jar β βββ debezium-core-0.7.1.jar β βββ LICENSE.txt β βββ mongodb-driver-3.4.2.jar β βββ mongodb-driver-core-3.4.2.jar β βββ README.md βββ debezium-connector-mysql β βββ CHANGELOG.md β βββ CONTRIBUTE.md β βββ COPYRIGHT.txt β βββ debezium-connector-mysql-0.7.1.jar β βββ debezium-core-0.7.1.jar β βββ LICENSE.txt β βββ mysql-binlog-connector-java-0.13.0.jar β βββ mysql-connector-java-5.1.40.jar β βββ README.md β βββ wkb-1.0.2.jar βββ debezium-connector-postgres βββ CHANGELOG.md βββ CONTRIBUTE.md βββ COPYRIGHT.txt βββ debezium-connector-postgres-0.7.1.jar βββ debezium-core-0.7.1.jar βββ LICENSE.txt βββ postgresql-42.0.0.jar βββ protobuf-java-2.6.1.jar βββ README.md",
"start-build my-connect-cluster-connect --from-dir ./ my-plugins /",
"apiVersion: kafka.strimzi.io/v1alpha1 kind: KafkaConnector metadata: name: my-source-connector 1 labels: strimzi.io/cluster: my-connect-cluster 2 spec: class: org.apache.kafka.connect.file.FileStreamSourceConnector 3 tasksMax: 2 4 config: 5 file: \"/opt/kafka/LICENSE\" topic: my-topic #",
"apply -f examples/connect/source-connector.yaml",
"get kctr --selector strimzi.io/cluster=my-connect-cluster -o name",
"apply -f examples/mirror-maker/kafka-mirror-maker.yaml",
"apply -f examples/mirror-maker/kafka-mirror-maker-2.yaml",
"get deployments",
"apply -f examples/bridge/kafka-bridge.yaml",
"get deployments"
] |
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/deploying_and_upgrading_amq_streams_on_openshift/deploy-tasks_str
|
Chapter 8. Using a service account as an OAuth client
|
Chapter 8. Using a service account as an OAuth client 8.1. Service accounts as OAuth clients You can use a service account as a constrained form of OAuth client. Service accounts can request only a subset of scopes that allow access to some basic user information and role-based power inside of the service account's own namespace: user:info user:check-access role:<any_role>:<service_account_namespace> role:<any_role>:<service_account_namespace>:! When using a service account as an OAuth client: client_id is system:serviceaccount:<service_account_namespace>:<service_account_name> . client_secret can be any of the API tokens for that service account. For example: USD oc sa get-token <service_account_name> To get WWW-Authenticate challenges, set an serviceaccounts.openshift.io/oauth-want-challenges annotation on the service account to true . redirect_uri must match an annotation on the service account. 8.1.1. Redirect URIs for service accounts as OAuth clients Annotation keys must have the prefix serviceaccounts.openshift.io/oauth-redirecturi. or serviceaccounts.openshift.io/oauth-redirectreference. such as: In its simplest form, the annotation can be used to directly specify valid redirect URIs. For example: The first and second postfixes in the above example are used to separate the two valid redirect URIs. In more complex configurations, static redirect URIs may not be enough. For example, perhaps you want all Ingresses for a route to be considered valid. This is where dynamic redirect URIs via the serviceaccounts.openshift.io/oauth-redirectreference. prefix come into play. For example: Since the value for this annotation contains serialized JSON data, it is easier to see in an expanded format: { "kind": "OAuthRedirectReference", "apiVersion": "v1", "reference": { "kind": "Route", "name": "jenkins" } } Now you can see that an OAuthRedirectReference allows us to reference the route named jenkins . Thus, all Ingresses for that route will now be considered valid. The full specification for an OAuthRedirectReference is: { "kind": "OAuthRedirectReference", "apiVersion": "v1", "reference": { "kind": ..., 1 "name": ..., 2 "group": ... 3 } } 1 kind refers to the type of the object being referenced. Currently, only route is supported. 2 name refers to the name of the object. The object must be in the same namespace as the service account. 3 group refers to the group of the object. Leave this blank, as the group for a route is the empty string. Both annotation prefixes can be combined to override the data provided by the reference object. For example: The first postfix is used to tie the annotations together. Assuming that the jenkins route had an Ingress of https://example.com , now https://example.com/custompath is considered valid, but https://example.com is not. The format for partially supplying override data is as follows: Type Syntax Scheme "https://" Hostname "//website.com" Port "//:8000" Path "examplepath" Note Specifying a hostname override will replace the hostname data from the referenced object, which is not likely to be desired behavior. Any combination of the above syntax can be combined using the following format: <scheme:>//<hostname><:port>/<path> The same object can be referenced more than once for more flexibility: Assuming that the route named jenkins has an Ingress of https://example.com , then both https://example.com:8000 and https://example.com/custompath are considered valid. Static and dynamic annotations can be used at the same time to achieve the desired behavior:
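Putting these pieces together, you can prepare a service account to act as an OAuth client entirely with oc commands. The following is a sketch; the service account name oauth-client and the redirect URI are placeholders, while the annotation keys are the ones described above:
oc create sa oauth-client
oc annotate sa oauth-client serviceaccounts.openshift.io/oauth-want-challenges=true
oc annotate sa oauth-client serviceaccounts.openshift.io/oauth-redirecturi.first=https://example.com
oc sa get-token oauth-client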
|
[
"oc sa get-token <service_account_name>",
"serviceaccounts.openshift.io/oauth-redirecturi.<name>",
"\"serviceaccounts.openshift.io/oauth-redirecturi.first\": \"https://example.com\" \"serviceaccounts.openshift.io/oauth-redirecturi.second\": \"https://other.com\"",
"\"serviceaccounts.openshift.io/oauth-redirectreference.first\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\"",
"{ \"kind\": \"OAuthRedirectReference\", \"apiVersion\": \"v1\", \"reference\": { \"kind\": \"Route\", \"name\": \"jenkins\" } }",
"{ \"kind\": \"OAuthRedirectReference\", \"apiVersion\": \"v1\", \"reference\": { \"kind\": ..., 1 \"name\": ..., 2 \"group\": ... 3 } }",
"\"serviceaccounts.openshift.io/oauth-redirecturi.first\": \"custompath\" \"serviceaccounts.openshift.io/oauth-redirectreference.first\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\"",
"\"serviceaccounts.openshift.io/oauth-redirecturi.first\": \"custompath\" \"serviceaccounts.openshift.io/oauth-redirectreference.first\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\" \"serviceaccounts.openshift.io/oauth-redirecturi.second\": \"//:8000\" \"serviceaccounts.openshift.io/oauth-redirectreference.second\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\"",
"\"serviceaccounts.openshift.io/oauth-redirectreference.first\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\" \"serviceaccounts.openshift.io/oauth-redirecturi.second\": \"https://other.com\""
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/authentication_and_authorization/using-service-accounts-as-oauth-client
|
5.5. Configuring IPv6 Settings
|
5.5. Configuring IPv6 Settings To configure IPv6 settings, follow the procedure described in Section 5.4, "Configuring IPv4 Settings" and click the IPv6 menu entry. Method Ignore - Choose this option if you want to ignore IPv6 settings for this connection. Automatic - Choose this option to use SLAAC to create an automatic, stateless configuration based on the hardware address and router advertisements (RA). Automatic, addresses only - Choose this option if the network you are connecting to uses router advertisements (RA) to create an automatic, stateless configuration, but you want to assign DNS servers manually. Automatic, DHCP only - Choose this option to not use RA, but request information from DHCPv6 directly to create a stateful configuration. Manual - Choose this option if you want to assign IP addresses manually. Link-Local Only - Choose this option if the network you are connecting to does not have a DHCP server and you do not want to assign IP addresses manually. Random addresses will be assigned as per RFC 4862 with prefix FE80::0 . Addresses DNS servers - Enter a comma-separated list of DNS servers. Search domains - Enter a comma-separated list of DNS search domains. If you need to configure static routes, click the Routes button. For more details on the configuration options, see Section 4.3, "Configuring Static Routes with GUI" .
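If you prefer the command line to the GUI procedure above, roughly equivalent IPv6 settings can be applied with nmcli; the connection name enp0s25 and the addresses shown are placeholders, so substitute your own values.

$ nmcli connection modify enp0s25 ipv6.method manual \
    ipv6.addresses 2001:db8:1::10/64 ipv6.gateway 2001:db8:1::1 \
    ipv6.dns 2001:db8:1::53 ipv6.dns-search example.com
$ nmcli connection up enp0s25

The ipv6.method values auto, dhcp, link-local, and ignore correspond to the Automatic, Automatic DHCP only, Link-Local Only, and Ignore methods described above.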
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/sec-configuring_ipv6_settings
|
3.7. Data Journaling
|
3.7. Data Journaling Ordinarily, GFS2 writes only metadata to its journal. File contents are subsequently written to disk by the kernel's periodic sync that flushes file system buffers. An fsync() call on a file causes the file's data to be written to disk immediately. The call returns when the disk reports that all data is safely written. Data journaling can result in a reduced fsync() time for very small files because the file data is written to the journal in addition to the metadata. This advantage diminishes rapidly as the file size increases. Writing to medium and larger files will be much slower with data journaling turned on. Applications that rely on fsync() to sync file data may see improved performance by using data journaling. Data journaling can be enabled automatically for any GFS2 files created in a flagged directory (and all its subdirectories). Existing files with zero length can also have data journaling turned on or off. Enabling data journaling on a directory sets the directory to "inherit jdata", which indicates that all files and directories subsequently created in that directory are journaled. You can enable and disable data journaling on a file with the chattr command. The following commands enable data journaling on the /mnt/gfs2/gfs2_dir/newfile file and then check whether the flag has been set properly. The following commands disable data journaling on the /mnt/gfs2/gfs2_dir/newfile file and then check whether the flag has been set properly. You can also use the chattr command to set the j flag on a directory. When you set this flag for a directory, all files and directories subsequently created in that directory are journaled. The following set of commands sets the j flag on the gfs2_dir directory, then checks whether the flag has been set properly. After this, the commands create a new file called newfile in the /mnt/gfs2/gfs2_dir directory and then check whether the j flag has been set for the file. Because the j flag is set for the directory, newfile should also have journaling enabled.
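As an informal way to observe the fsync() effect described above, the following sketch times synchronous small-file writes in a plain directory and in a directory with inherited data journaling. The directory names are hypothetical, the mount point is taken from the examples above, and actual results depend heavily on the storage hardware and journal configuration.

mkdir /mnt/gfs2/plain_dir /mnt/gfs2/jdata_dir
chattr +j /mnt/gfs2/jdata_dir
# time 500 one-block writes, each flushed to disk (conv=fsync)
time for i in $(seq 1 500); do
    dd if=/dev/zero of=/mnt/gfs2/plain_dir/f$i bs=4k count=1 conv=fsync 2>/dev/null
done
time for i in $(seq 1 500); do
    dd if=/dev/zero of=/mnt/gfs2/jdata_dir/f$i bs=4k count=1 conv=fsync 2>/dev/null
done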
|
[
"chattr +j /mnt/gfs2/gfs2_dir/newfile lsattr /mnt/gfs2/gfs2_dir ---------j--- /mnt/gfs2/gfs2_dir/newfile",
"chattr -j /mnt/gfs2/gfs2_dir/newfile lsattr /mnt/gfs2/gfs2_dir ------------- /mnt/gfs2/gfs2_dir/newfile",
"chattr +j /mnt/gfs2/gfs2_dir lsattr /mnt/gfs2 ---------j--- /mnt/gfs2/gfs2_dir touch /mnt/gfs2/gfs2_dir/newfile lsattr /mnt/gfs2/gfs2_dir ---------j--- /mnt/gfs2/gfs2_dir/newfile"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/global_file_system_2/s1-manage-data-journal
|
Chapter 4. Creating images
|
Chapter 4. Creating images Learn how to create your own container images, based on pre-built images that are ready to help you. The process includes learning best practices for writing images, defining metadata for images, testing images, and using a custom builder workflow to create images to use with Red Hat OpenShift Service on AWS. 4.1. Learning container best practices When creating container images to run on Red Hat OpenShift Service on AWS, there are a number of best practices to consider as an image author to ensure a good experience for consumers of those images. Because images are intended to be immutable and used as-is, the following guidelines help ensure that your images are highly consumable and easy to use on Red Hat OpenShift Service on AWS. 4.1.1. General container image guidelines The following guidelines apply when creating a container image in general, and are independent of whether the images are used on Red Hat OpenShift Service on AWS. Reuse images Wherever possible, base your image on an appropriate upstream image using the FROM statement. This ensures your image can easily pick up security fixes from an upstream image when it is updated, rather than you having to update your dependencies directly. In addition, use tags in the FROM instruction, for example, rhel:rhel7 , to make it clear to users exactly which version of an image your image is based on. Using a tag other than latest ensures your image is not subjected to breaking changes that might go into the latest version of an upstream image. Maintain compatibility within tags When tagging your own images, try to maintain backwards compatibility within a tag. For example, if you provide an image named image and it currently includes version 1.0 , you might provide a tag of image:v1 . When you update the image, as long as it continues to be compatible with the original image, you can continue to tag the new image image:v1 , and downstream consumers of this tag are able to get updates without being broken. If you later release an incompatible update, then switch to a new tag, for example image:v2 . This allows downstream consumers to move up to the new version at will, but not be inadvertently broken by the new incompatible image. Any downstream consumer using image:latest takes on the risk of any incompatible changes being introduced. Avoid multiple processes Do not start multiple services, such as a database and SSHD , inside one container. This is not necessary because containers are lightweight and can be easily linked together for orchestrating multiple processes. Red Hat OpenShift Service on AWS allows you to easily colocate and co-manage related images by grouping them into a single pod. This colocation ensures the containers share a network namespace and storage for communication. Updates are also less disruptive as each image can be updated less frequently and independently. Signal handling flows are also clearer with a single process as you do not have to manage routing signals to spawned processes. Use exec in wrapper scripts Many images use wrapper scripts to do some setup before starting a process for the software being run. If your image uses such a script, make sure that the script uses exec so that the script's process is replaced by your software. If you do not use exec , then signals sent by your container runtime go to your wrapper script instead of your software's process. This is not what you want. Suppose you have a wrapper script that starts a process for some server.
You start your container, for example, using podman run -i , which runs the wrapper script, which in turn starts your process. Now suppose you want to close your container with CTRL+C . If your wrapper script used exec to start the server process, podman sends SIGINT to the server process, and everything works as you expect. If you did not use exec in your wrapper script, podman sends SIGINT to the process for the wrapper script and your process keeps running like nothing happened. Also note that your process runs as PID 1 when running in a container. This means that if your main process terminates, the entire container is stopped, canceling any child processes you launched from your PID 1 process. Clean temporary files Remove all temporary files you create during the build process. This also includes any files added with the ADD command. For example, run the yum clean command after performing yum install operations. You can prevent the yum cache from ending up in an image layer by creating your RUN statement as follows: RUN yum -y install mypackage && yum -y install myotherpackage && yum clean all -y Note that if you instead write: RUN yum -y install mypackage RUN yum -y install myotherpackage && yum clean all -y Then the first yum invocation leaves extra files in that layer, and these files cannot be removed when the yum clean operation is run later. The extra files are not visible in the final image, but they are present in the underlying layers. The current container build process does not allow a command run in a later layer to shrink the space used by the image when something was removed in an earlier layer. However, this may change in the future. This means that if you perform an rm command in a later layer, although the files are hidden, it does not reduce the overall size of the image to be downloaded. Therefore, as with the yum clean example, it is best to remove files in the same command that created them, where possible, so they do not end up written to a layer. In addition, performing multiple commands in a single RUN statement reduces the number of layers in your image, which improves download and extraction time. Place instructions in the proper order The container builder reads the Dockerfile and runs the instructions from top to bottom. Every instruction that is successfully executed creates a layer which can be reused the next time this or another image is built. It is very important to place instructions that rarely change at the top of your Dockerfile . Doing so ensures the builds of the same image are very fast because the cache is not invalidated by upper layer changes. For example, if you are working on a Dockerfile that contains an ADD command to install a file you are iterating on, and a RUN command to yum install a package, it is best to put the ADD command last: FROM foo RUN yum -y install mypackage && yum clean all -y ADD myfile /test/myfile This way each time you edit myfile and rerun podman build or docker build , the system reuses the cached layer for the yum command and only generates the new layer for the ADD operation. If instead you wrote the Dockerfile as: FROM foo ADD myfile /test/myfile RUN yum -y install mypackage && yum clean all -y Then each time you changed myfile and reran podman build or docker build , the ADD operation would invalidate the RUN layer cache, so the yum operation must be rerun as well. Mark important ports The EXPOSE instruction makes a port in the container available to the host system and other containers.
While it is possible to specify that a port should be exposed with a podman run invocation, using the EXPOSE instruction in a Dockerfile makes it easier for both humans and software to use your image by explicitly declaring the ports your software needs to run: Exposed ports show up under podman ps associated with containers created from your image. Exposed ports are present in the metadata for your image returned by podman inspect . Exposed ports are linked when you link one container to another. Set environment variables It is good practice to set environment variables with the ENV instruction. One example is to set the version of your project. This makes it easy for people to find the version without looking at the Dockerfile . Another example is advertising a path on the system that could be used by another process, such as JAVA_HOME . Avoid default passwords Avoid setting default passwords. Many people extend the image and forget to remove or change the default password. This can lead to security issues if a user in production is assigned a well-known password. Passwords are configurable using an environment variable instead. If you do choose to set a default password, ensure that an appropriate warning message is displayed when the container is started. The message should inform the user of the value of the default password and explain how to change it, such as what environment variable to set. Avoid sshd It is best to avoid running sshd in your image. You can use the podman exec or docker exec command to access containers that are running on the local host. Alternatively, you can use the oc exec command or the oc rsh command to access containers that are running on the Red Hat OpenShift Service on AWS cluster. Installing and running sshd in your image opens up additional vectors for attack and requirements for security patching. Use volumes for persistent data Images use a volume for persistent data. This way Red Hat OpenShift Service on AWS mounts the network storage to the node running the container, and if the container moves to a new node the storage is reattached to that node. By using the volume for all persistent storage needs, the content is preserved even if the container is restarted or moved. If your image writes data to arbitrary locations within the container, that content could not be preserved. All data that needs to be preserved even after the container is destroyed must be written to a volume. Container engines support a readonly flag for containers, which can be used to strictly enforce good practices about not writing data to ephemeral storage in a container. Designing your image around that capability now makes it easier to take advantage of it later. Explicitly defining volumes in your Dockerfile makes it easy for consumers of the image to understand what volumes they must define when running your image. See the Kubernetes documentation for more information on how volumes are used in Red Hat OpenShift Service on AWS. Note Even with persistent volumes, each instance of your image has its own volume, and the filesystem is not shared between instances. This means the volume cannot be used to share state in a cluster. 4.1.2. Red Hat OpenShift Service on AWS-specific guidelines The following are guidelines that apply when creating container images specifically for use on Red Hat OpenShift Service on AWS. 4.1.2.1. 
Enable images for source-to-image (S2I) For images that are intended to run application code provided by a third party, such as a Ruby image designed to run Ruby code provided by a developer, you can enable your image to work with the Source-to-Image (S2I) build tool. S2I is a framework that makes it easy to write images that take application source code as an input and produce a new image that runs the assembled application as output. 4.1.2.2. Support arbitrary user IDs By default, Red Hat OpenShift Service on AWS runs containers using an arbitrarily assigned user ID. This provides additional security against processes escaping the container due to a container engine vulnerability and thereby achieving escalated permissions on the host node. For an image to support running as an arbitrary user, directories and files that are written to by processes in the image must be owned by the root group and be read/writable by that group. Files to be executed must also have group execute permissions. Adding the following to your Dockerfile sets the directory and file permissions to allow users in the root group to access them in the built image: RUN chgrp -R 0 /some/directory && \ chmod -R g=u /some/directory Because the container user is always a member of the root group, the container user can read and write these files. Warning Care must be taken when altering the directories and file permissions of the sensitive areas of a container. If applied to sensitive areas, such as the /etc/passwd file, such changes can allow the modification of these files by unintended users, potentially exposing the container or host. CRI-O supports the insertion of arbitrary user IDs into a container's /etc/passwd file. As such, changing permissions is never required. Additionally, the /etc/passwd file should not exist in any container image. If it does, the CRI-O container runtime will fail to inject a random UID into the /etc/passwd file. In such cases, the container might face challenges in resolving the active UID. Failing to meet this requirement could impact the functionality of certain containerized applications. In addition, the processes running in the container must not listen on privileged ports, that is, ports below 1024, since they are not running as a privileged user. 4.1.2.3. Use services for inter-image communication For cases where your image needs to communicate with a service provided by another image, such as a web front end image that needs to access a database image to store and retrieve data, your image consumes a Red Hat OpenShift Service on AWS service. Services provide a static endpoint for access which does not change as containers are stopped, started, or moved. In addition, services provide load balancing for requests. 4.1.2.4. Provide common libraries For images that are intended to run application code provided by a third party, ensure that your image contains commonly used libraries for your platform. In particular, provide database drivers for common databases used with your platform. For example, provide JDBC drivers for MySQL and PostgreSQL if you are creating a Java framework image. Doing so prevents the need for common dependencies to be downloaded during application assembly time, speeding up application image builds. It also simplifies the work required by application developers to ensure all of their dependencies are met. 4.1.2.5. Use environment variables for configuration Users of your image are able to configure it without having to create a downstream image based on your image.
This means that the runtime configuration is handled using environment variables. For a simple configuration, the running process can consume the environment variables directly. For a more complicated configuration or for runtimes which do not support this, configure the runtime by defining a template configuration file that is processed during startup. During this processing, values supplied using environment variables can be substituted into the configuration file or used to make decisions about what options to set in the configuration file. It is also possible and recommended to pass secrets such as certificates and keys into the container using environment variables. This ensures that the secret values do not end up committed in an image and leaked into a container image registry. Providing environment variables allows consumers of your image to customize behavior, such as database settings, passwords, and performance tuning, without having to introduce a new layer on top of your image. Instead, they can simply define environment variable values when defining a pod and change those settings without rebuilding the image. For extremely complex scenarios, configuration can also be supplied using volumes that would be mounted into the container at runtime. However, if you elect to do it this way you must ensure that your image provides clear error messages on startup when the necessary volume or configuration is not present. This topic is related to the Using Services for Inter-image Communication topic in that configuration like datasources are defined in terms of environment variables that provide the service endpoint information. This allows an application to dynamically consume a datasource service that is defined in the Red Hat OpenShift Service on AWS environment without modifying the application image. In addition, tuning is done by inspecting the cgroups settings for the container. This allows the image to tune itself to the available memory, CPU, and other resources. For example, Java-based images tune their heap based on the cgroup maximum memory parameter to ensure they do not exceed the limits and get an out-of-memory error. 4.1.2.6. Set image metadata Defining image metadata helps Red Hat OpenShift Service on AWS better consume your container images, allowing Red Hat OpenShift Service on AWS to create a better experience for developers using your image. For example, you can add metadata to provide helpful descriptions of your image, or offer suggestions on other images that are needed. 4.1.2.7. Clustering You must fully understand what it means to run multiple instances of your image. In the simplest case, the load balancing function of a service handles routing traffic to all instances of your image. However, many frameworks must share information to perform leader election or failover state; for example, in session replication. Consider how your instances accomplish this communication when running in Red Hat OpenShift Service on AWS. Although pods can communicate directly with each other, their IP addresses change anytime the pod starts, stops, or is moved. Therefore, it is important for your clustering scheme to be dynamic. 4.1.2.8. Logging It is best to send all logging to standard out. Red Hat OpenShift Service on AWS collects standard out from containers and sends it to the centralized logging service where it can be viewed. If you must separate log content, prefix the output with an appropriate keyword, which makes it possible to filter the messages. 
If your image logs to a file, users must use manual operations to enter the running container and retrieve or view the log file. 4.1.2.9. Liveness and readiness probes Document example liveness and readiness probes that can be used with your image. These probes allow users to deploy your image with confidence that traffic is not routed to the container until it is prepared to handle it, and that the container is restarted if the process gets into an unhealthy state. 4.1.2.10. Templates Consider providing an example template with your image. A template gives users an easy way to quickly get your image deployed with a working configuration. Your template must include the liveness and readiness probes you documented with the image, for completeness. 4.2. Including metadata in images Defining image metadata helps Red Hat OpenShift Service on AWS better consume your container images, allowing Red Hat OpenShift Service on AWS to create a better experience for developers using your image. For example, you can add metadata to provide helpful descriptions of your image, or offer suggestions on other images that may also be needed. This topic only defines the metadata needed by the current set of use cases. Additional metadata or use cases may be added in the future. 4.2.1. Defining image metadata You can use the LABEL instruction in a Dockerfile to define image metadata. Labels are similar to environment variables in that they are key-value pairs attached to an image or a container. Labels are different from environment variables in that they are not visible to the running application and they can also be used for fast look-up of images and containers. See the Docker documentation for more information on the LABEL instruction. The label names are typically namespaced. The namespace is chosen to reflect the project that is going to pick up the labels and use them. For Red Hat OpenShift Service on AWS the namespace is set to io.openshift and for Kubernetes the namespace is io.k8s . See the Docker custom metadata documentation for details about the format. Table 4.1. Supported Metadata Variable Description io.openshift.tags This label contains a list of tags represented as a list of comma-separated string values. The tags are the way to categorize the container images into broad areas of functionality. Tags help UI and generation tools to suggest relevant container images during the application creation process. io.openshift.wants Specifies a list of tags that the generation tools and the UI use to provide relevant suggestions if you do not have the container images with specified tags already. For example, if the container image wants mysql and redis and you do not have a container image with the redis tag, then the UI can suggest that you add this image to your deployment. io.k8s.description This label can be used to give the container image consumers more detailed information about the service or functionality this image provides. The UI can then use this description together with the container image name to provide more human-friendly information to end users. io.openshift.non-scalable An image can use this variable to suggest that it does not support scaling. The UI then communicates this to consumers of that image. Being not-scalable means that the value of replicas should initially not be set higher than 1 . io.openshift.min-memory and io.openshift.min-cpu These labels suggest how many resources the container image needs to work properly.
The UI can warn the user that deploying this container image may exceed their user quota. The values must be compatible with the Kubernetes quantity format. 4.3. Creating images from source code with source-to-image Source-to-image (S2I) is a framework that makes it easy to write images that take application source code as an input and produce a new image that runs the assembled application as output. The main advantage of using S2I for building reproducible container images is the ease of use for developers. As a builder image author, you must understand two basic concepts in order for your images to provide the best S2I performance: the build process and S2I scripts. 4.3.1. Understanding the source-to-image build process The build process consists of the following three fundamental elements, which are combined into a final container image: Sources Source-to-image (S2I) scripts Builder image S2I generates a Dockerfile with the builder image as the first FROM instruction. The Dockerfile generated by S2I is then passed to Buildah. 4.3.2. How to write source-to-image scripts You can write source-to-image (S2I) scripts in any programming language, as long as the scripts are executable inside the builder image. S2I supports multiple options for providing the assemble / run / save-artifacts scripts. All of these locations are checked on each build in the following order: A script specified in the build configuration. A script found in the application source .s2i/bin directory. A script found at the default image URL with the io.openshift.s2i.scripts-url label. Both the io.openshift.s2i.scripts-url label specified in the image and the script specified in a build configuration can take one of the following forms: image:///path_to_scripts_dir : absolute path inside the image to a directory where the S2I scripts are located. file:///path_to_scripts_dir : relative or absolute path to a directory on the host where the S2I scripts are located. http(s)://path_to_scripts_dir : URL to a directory where the S2I scripts are located. Table 4.2. S2I scripts Script Description assemble The assemble script builds the application artifacts from a source and places them into appropriate directories inside the image. This script is required. The workflow for this script is: Optional: Restore build artifacts. If you want to support incremental builds, make sure to define save-artifacts as well. Place the application source in the desired location. Build the application artifacts. Install the artifacts into locations appropriate for them to run. run The run script executes your application. This script is required. save-artifacts The save-artifacts script gathers all dependencies that can speed up the build processes that follow. This script is optional. For example: For Ruby, gems installed by Bundler. For Java, .m2 contents. These dependencies are gathered into a tar file and streamed to the standard output. usage The usage script allows you to inform the user how to properly use your image. This script is optional. test/run The test/run script allows you to create a process to check if the image is working correctly. This script is optional. The proposed flow of that process is: Build the image. Run the image to verify the usage script. Run s2i build to verify the assemble script. Optional: Run s2i build again to verify that the save-artifacts and assemble scripts correctly save and restore artifacts. Run the image to verify the test application is working.
Note The suggested location to put the test application built by your test/run script is the test/test-app directory in your image repository. Example S2I scripts The following example S2I scripts are written in Bash. Each example assumes its tar contents are unpacked into the /tmp/s2i directory. assemble script: #!/bin/bash # restore build artifacts if [ "$(ls /tmp/s2i/artifacts/ 2>/dev/null)" ]; then mv /tmp/s2i/artifacts/* $HOME/. fi # move the application source mv /tmp/s2i/src $HOME/src # build application artifacts pushd ${HOME} make all # install the artifacts make install popd run script: #!/bin/bash # run the application /opt/application/run.sh save-artifacts script: #!/bin/bash pushd ${HOME} if [ -d deps ]; then # all deps contents to tar stream tar cf - deps fi popd usage script: #!/bin/bash # inform the user how to use the image cat <<EOF This is a S2I sample builder image, to use it, install https://github.com/openshift/source-to-image EOF Additional resources S2I Image Creation Tutorial 4.4. About testing source-to-image images As a Source-to-Image (S2I) builder image author, you can test your S2I image locally and use the Red Hat OpenShift Service on AWS build system for automated testing and continuous integration. S2I requires the assemble and run scripts to be present to successfully run the S2I build. Providing the save-artifacts script reuses the build artifacts, and providing the usage script ensures that usage information is printed to the console when someone runs the container image outside of the S2I. The goal of testing an S2I image is to make sure that all of these described commands work properly, even if the base container image has changed or the tooling used by the commands was updated. 4.4.1. Understanding testing requirements The standard location for the test script is test/run . This script is invoked by the Red Hat OpenShift Service on AWS S2I image builder and it could be a simple Bash script or a static Go binary. The test/run script performs the S2I build, so you must have the S2I binary available in your $PATH . If required, follow the installation instructions in the S2I README . S2I combines the application source code and builder image, so to test it you need a sample application source to verify that the source successfully transforms into a runnable container image. The sample application should be simple, but it should exercise the crucial steps of the assemble and run scripts. 4.4.2. Generating scripts and tools The S2I tooling comes with powerful generation tools to speed up the process of creating a new S2I image. The s2i create command produces all the necessary S2I scripts and testing tools along with the Makefile : $ s2i create <image_name> <destination_directory> The generated test/run script must be adjusted to be useful, but it provides a good starting point to begin developing. Note The test/run script produced by the s2i create command requires that the sample application sources are inside the test/test-app directory. 4.4.3. Testing locally The easiest way to run the S2I image tests locally is to use the generated Makefile . If you did not use the s2i create command, you can copy the following Makefile template and replace the IMAGE_NAME parameter with your image name. Sample Makefile 4.4.4. Basic testing workflow The test script assumes you have already built the image you want to test. If required, first build the S2I image.
Run one of the following commands: If you use Podman, run the following command: $ podman build -t <builder_image_name> If you use Docker, run the following command: $ docker build -t <builder_image_name> The following steps describe the default workflow to test S2I image builders: Verify the usage script is working: If you use Podman, run the following command: $ podman run <builder_image_name> . If you use Docker, run the following command: $ docker run <builder_image_name> . Build the image: $ s2i build file:///path-to-sample-app _<BUILDER_IMAGE_NAME>_ _<OUTPUT_APPLICATION_IMAGE_NAME>_ Optional: If you support save-artifacts , run step 2 once again to verify that saving and restoring artifacts works properly. Run the container: If you use Podman, run the following command: $ podman run <output_application_image_name> If you use Docker, run the following command: $ docker run <output_application_image_name> Verify the container is running and the application is responding. Running these steps is generally enough to tell if the builder image is working as expected. 4.4.5. Using Red Hat OpenShift Service on AWS for building the image Once you have a Dockerfile and the other artifacts that make up your new S2I builder image, you can put them in a git repository and use Red Hat OpenShift Service on AWS to build and push the image. Define a Docker build that points to your repository. If your Red Hat OpenShift Service on AWS instance is hosted on a public IP address, the build can be triggered each time you push into your S2I builder image GitHub repository. You can also use the ImageChangeTrigger to trigger a rebuild of your applications that are based on the S2I builder image you updated.
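Putting the steps above together, a local test loop might look like the following sketch; the image names, the test application path, and port 8080 are assumptions chosen for illustration rather than values mandated by S2I.

$ podman build -t myorg/builder-candidate .
$ s2i build ./test/test-app myorg/builder-candidate myorg/sample-app
$ podman run --rm -p 8080:8080 myorg/sample-app
$ curl -s http://localhost:8080/

If the usage output, the build, and the final curl check all behave as expected, the builder image is very likely working correctly.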
|
[
"RUN yum -y install mypackage && yum -y install myotherpackage && yum clean all -y",
"RUN yum -y install mypackage RUN yum -y install myotherpackage && yum clean all -y",
"FROM foo RUN yum -y install mypackage && yum clean all -y ADD myfile /test/myfile",
"FROM foo ADD myfile /test/myfile RUN yum -y install mypackage && yum clean all -y",
"RUN chgrp -R 0 /some/directory && chmod -R g=u /some/directory",
"LABEL io.openshift.tags mongodb,mongodb24,nosql",
"LABEL io.openshift.wants mongodb,redis",
"LABEL io.k8s.description The MySQL 5.5 Server with master-slave replication support",
"LABEL io.openshift.non-scalable true",
"LABEL io.openshift.min-memory 16Gi LABEL io.openshift.min-cpu 4",
"#!/bin/bash restore build artifacts if [ \"USD(ls /tmp/s2i/artifacts/ 2>/dev/null)\" ]; then mv /tmp/s2i/artifacts/* USDHOME/. fi move the application source mv /tmp/s2i/src USDHOME/src build application artifacts pushd USD{HOME} make all install the artifacts make install popd",
"#!/bin/bash run the application /opt/application/run.sh",
"#!/bin/bash pushd USD{HOME} if [ -d deps ]; then # all deps contents to tar stream tar cf - deps fi popd",
"#!/bin/bash inform the user how to use the image cat <<EOF This is a S2I sample builder image, to use it, install https://github.com/openshift/source-to-image EOF",
"s2i create <image_name> <destination_directory>",
"IMAGE_NAME = openshift/ruby-20-centos7 CONTAINER_ENGINE := USD(shell command -v podman 2> /dev/null | echo docker) build: USD{CONTAINER_ENGINE} build -t USD(IMAGE_NAME) . .PHONY: test test: USD{CONTAINER_ENGINE} build -t USD(IMAGE_NAME)-candidate . IMAGE_NAME=USD(IMAGE_NAME)-candidate test/run",
"podman build -t <builder_image_name>",
"docker build -t <builder_image_name>",
"podman run <builder_image_name> .",
"docker run <builder_image_name> .",
"s2i build file:///path-to-sample-app _<BUILDER_IMAGE_NAME>_ _<OUTPUT_APPLICATION_IMAGE_NAME>_",
"podman run <output_application_image_name>",
"docker run <output_application_image_name>"
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/images/creating-images
|
Chapter 3. Installation and Booting
|
Chapter 3. Installation and Booting 3.1. Installer The Red Hat Enterprise Linux installer, Anaconda , has been enhanced in order to improve the installation process for Red Hat Enterprise Linux 7.1. Interface The graphical installer interface now contains one additional screen which enables configuring the Kdump kernel crash dumping mechanism during the installation. Previously, this was configured after the installation using the firstboot utility, which was not accessible without a graphical interface. Now, you can configure Kdump as part of the installation process on systems without a graphical environment. The new screen is accessible from the main installer menu ( Installation Summary ). Figure 3.1. The new Kdump screen The manual partitioning screen has been redesigned to improve user experience. Some of the controls have been moved to different locations on the screen. Figure 3.2. The redesigned Manual Partitioning screen You can now configure a network bridge in the Network & Hostname screen of the installer. To do so, click the + button at the bottom of the interface list, select Bridge from the menu, and configure the bridge in the Editing bridge connection dialog window which appears afterwards. This dialog is provided by NetworkManager and is fully documented in the Red Hat Enterprise Linux 7.1 Networking Guide . Several new Kickstart options have also been added for bridge configuration. See below for details. The installer no longer uses multiple consoles to display logs. Instead, all logs are in tmux panes in virtual console 1 ( tty1 ). To access logs during the installation, press Ctrl + Alt + F1 to switch to tmux , and then use Ctrl + b X to switch between different windows (replace X with the number of a particular window as displayed at the bottom of the screen). To switch back to the graphical interface, press Ctrl + Alt + F6 . The command-line interface for Anaconda now includes full help. To view it, use the anaconda -h command on a system with the anaconda package installed. The command-line interface allows you to run the installer on an installed system, which is useful for disk image installations. Kickstart Commands and Options The logvol command has a new option, --profile= . This option enables the user to specify the configuration profile name to use with thin logical volumes. If used, the name will also be included in the metadata for the logical volume. By default, the available profiles are default and thin-performance and are defined in the /etc/lvm/profile directory. See the lvm(8) man page for additional information. The behavior of the --size= and --percent= options of the logvol command has changed. Previously, the --percent= option was used together with --grow and --size= to specify how much a logical volume should expand after all statically-sized volumes have been created. Since Red Hat Enterprise Linux 7.1, --size= and --percent= can not be used on the same logvol command. The --autoscreenshot option of the autostep Kickstart command has been fixed, and now correctly saves a screenshot of each screen into the /tmp/anaconda-screenshots directory upon exiting the screen. After the installation completes, these screenshots are moved into /root/anaconda-screenshots . The liveimg command now supports installation from tar files as well as disk images. The tar archive must contain the installation media root file system, and the file name must end with .tar , .tbz , .tgz , .txz , .tar.bz2 , .tar.gz , or .tar.xz . 
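A hedged Kickstart fragment combining the two enhancements just described might look as follows; the URL, disk, volume names, and sizes are placeholders, and the thin-volume options follow the standard RHEL 7 logvol syntax, so verify them against the Installation Guide for your minor release.

liveimg --url=http://example.com/images/rootfs.tar.xz
part /boot --size=512 --fstype=xfs
part pv.01 --size=20480 --ondisk=sda
volgroup vg00 pv.01
logvol none --vgname=vg00 --name=pool00 --thinpool --size=18432
logvol / --vgname=vg00 --name=root --thin --poolname=pool00 --fstype=xfs --size=10240 --profile=thin-performance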
Several new options have been added to the network command for configuring network bridges: When the --bridgeslaves= option is used, the network bridge with the device name specified using the --device= option will be created and devices defined in the --bridgeslaves= option will be added to the bridge. For example: The --bridgeopts= option accepts an optional comma-separated list of parameters for the bridged interface. Available values are stp , priority , forward-delay , hello-time , max-age , and ageing-time . For information about these parameters, see the nm-settings(5) man page. The autopart command has a new option, --fstype . This option allows you to change the default file system type ( xfs ) when using automatic partitioning in a Kickstart file. Several new features have been added to Kickstart for better container support. These features include: The new --install option for the repo command saves the provided repository configuration on the installed system in the /etc/yum.repos.d/ directory. Without using this option, a repository configured in a Kickstart file will only be available during the installation process, not on the installed system. The --disabled option for the bootloader command prevents the boot loader from being installed. The new --nocore option for the %packages section of a Kickstart file prevents the system from installing the @core package group. This enables installing extremely minimal systems for use with containers. Note The described options are useful only when combined with containers. Using these options in a general-purpose installation could result in an unusable system. Entropy Gathering for LUKS Encryption If you choose to encrypt one or more partitions or logical volumes during the installation (either during an interactive installation or in a Kickstart file), Anaconda will attempt to gather 256 bits of entropy (random data) to ensure the encryption is secure. The installation will continue after 256 bits of entropy are gathered or after 10 minutes. The attempt to gather entropy happens at the beginning of the actual installation phase when encrypted partitions or volumes are being created. A dialog window will open in the graphical interface, showing progress and remaining time. The entropy gathering process cannot be skipped or disabled. However, there are several ways to speed the process up: If you can access the system during the installation, you can supply additional entropy by pressing random keys on the keyboard and moving the mouse. If the system being installed is a virtual machine, you can attach a virtio-rng device (a virtual random number generator) as described in the Red Hat Enterprise Linux 7.1 Virtualization Deployment and Administration Guide . Figure 3.3. Gathering Entropy for Encryption Built-in Help in the Graphical Installer Each screen in the installer's graphical interface and in the Initial Setup utility now has a Help button in the top right corner. Clicking this button opens the section of the Installation Guide relevant to the current screen using the Yelp help browser. Figure 3.4. Anaconda built-in help
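Returning to the container-related Kickstart options described above, a minimal hedged example might combine them as follows; the repository name and URL are placeholders, and the package selection is only illustrative.

repo --name=custom-apps --baseurl=http://example.com/repo --install
bootloader --disabled
%packages --nocore
bash
%end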
|
[
"network --device=bridge0 --bridgeslaves=em1"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.1_release_notes/chap-red_hat_enterprise_linux-7.1_release_notes-installation_and_booting
|
Appendix A. Reference: Settings in Administration Portal and VM Portal Windows
|
Appendix A. Reference: Settings in Administration Portal and VM Portal Windows A.1. Explanation of Settings in the New Virtual Machine and Edit Virtual Machine Windows A.1.1. Virtual Machine General Settings Explained The following table details the options available on the General tab of the New Virtual Machine and Edit Virtual Machine windows. Table A.1. Virtual Machine: General Settings Field Name Description Power cycle required? Cluster The name of the host cluster to which the virtual machine is attached. Virtual machines are hosted on any physical machine in that cluster in accordance with policy rules. Yes. Cross-cluster migration is for emergency use only. Moving clusters requires the virtual machine to be down. Template The template on which the virtual machine is based. This field is set to Blank by default, which allows you to create a virtual machine on which an operating system has not yet been installed. Templates are displayed as Name | Sub-version name (Sub-version number) . Each new version is displayed with a number in brackets that indicates the relative order of the version, with a higher number indicating a more recent version. The version name is displayed as base version if it is the root template of the template version chain. When the virtual machine is stateless, there is an option to select the latest version of the template. This option means that anytime a new version of this template is created, the virtual machine is automatically recreated on restart based on the latest template. Not applicable. This setting is for provisioning new virtual machines only. Operating System The operating system. Valid values include a range of Red Hat Enterprise Linux and Windows variants. Yes. Potentially changes the virtual hardware. Instance Type The instance type on which the virtual machine's hardware configuration can be based. This field is set to Custom by default, which means the virtual machine is not connected to an instance type. The other options available from this drop-down menu are Large , Medium , Small , Tiny , XLarge , and any custom instance types that the Administrator has created. Other settings that have a chain link icon next to them are pre-filled by the selected instance type. If one of these values is changed, the virtual machine will be detached from the instance type and the chain icon will appear broken. However, if the changed setting is restored to its original value, the virtual machine will be reattached to the instance type and the links in the chain icon will rejoin. NOTE: Support for instance types is now deprecated, and will be removed in a future release. Yes. Optimized for The type of system for which the virtual machine is to be optimized. There are three options: Server , Desktop , and High Performance ; by default, the field is set to Server . Virtual machines optimized to act as servers have no sound card, use a cloned disk image, and are not stateless. Virtual machines optimized to act as desktop machines do have a sound card, use an image (thin allocation), and are stateless. Virtual machines optimized for high performance have a number of configuration changes. See Configuring High Performance Virtual Machines Templates and Pools . Yes. Name The name of the virtual machine. The name must be unique within the data center, must not contain any spaces, and must contain at least one character from A-Z or 0-9. The maximum length of a virtual machine name is 255 characters.
The name can be reused in different data centers in the environment. Yes. VM ID The virtual machine ID. The virtual machine's creator can set a custom ID for that virtual machine. The custom ID must contain only numbers, in the format, 00000000-0000-0000-0000-00000000 . If no ID is specified during creation a UUID will be automatically assigned. For both custom and automatically-generated IDs, changes are not possible after virtual machine creation. Yes. Description A meaningful description of the new virtual machine. No. Comment A field for adding plain text human-readable comments regarding the virtual machine. No. Affinity Labels Add or remove a selected Affinity Label . No. Stateless Select this check box to run the virtual machine in stateless mode. This mode is used primarily for desktop virtual machines. Running a stateless desktop or server creates a new COW layer on the virtual machine hard disk image where new and changed data is stored. Shutting down the stateless virtual machine deletes the new COW layer which includes all data and configuration changes, and returns the virtual machine to its original state. Stateless virtual machines are useful when creating machines that need to be used for a short time, or by temporary staff. Not applicable. Start in Pause Mode Select this check box to always start the virtual machine in pause mode. This option is suitable for virtual machines which require a long time to establish a SPICE connection; for example, virtual machines in remote locations. Not applicable. Delete Protection Select this check box to make it impossible to delete the virtual machine. It is only possible to delete the virtual machine if this check box is not selected. No. Sealed Select this check box to seal the created virtual machine. This option eliminates machine-specific settings from virtual machines that are provisioned from the template. For more information about the sealing process, see Sealing a Windows Virtual Machine for Deployment as a Template No. Instance Images Click Attach to attach a floating disk to the virtual machine, or click Create to add a new virtual disk. Use the plus and minus buttons to add or remove additional virtual disks. Click Edit to change the configuration of a virtual disk that has already been attached or created. No. Instantiate VM network interfaces by picking a vNIC profile. Add a network interface to the virtual machine by selecting a vNIC profile from the nic1 drop-down list. Use the plus and minus buttons to add or remove additional network interfaces. No. A.1.2. Virtual Machine System Settings Explained CPU Considerations For non-CPU-intensive workloads , you can run virtual machines with a total number of processor cores greater than the number of cores in the host (the number of processor cores for a single virtual machine must not exceed the number of cores in the host). The following benefits can be achieved: You can run a greater number of virtual machines, which reduces hardware requirements. You can configure virtual machines with CPU topologies that are otherwise not possible, such as when the number of virtual cores is between the number of host cores and the number of host threads. For best performance, and especially for CPU-intensive workloads , you should use the same topology in the virtual machine as in the host, so the host and the virtual machine expect the same cache usage. 
When the host has hyperthreading enabled, QEMU treats the host's hyperthreads as cores, so the virtual machine is not aware that it is running on a single core with multiple threads. This behavior might impact the performance of a virtual machine, because a virtual core that actually corresponds to a hyperthread in the host core might share a single cache with another hyperthread in the same host core, while the virtual machine treats it as a separate core. The following table details the options available on the System tab of the New Virtual Machine and Edit Virtual Machine windows. Table A.2. Virtual Machine: System Settings Field Name Description Power cycle required? Memory Size The amount of memory assigned to the virtual machine. When allocating memory, consider the processing and storage needs of the applications that are intended to run on the virtual machine. If OS supports hotplugging, no. Otherwise, yes. Maximum Memory The maximum amount of memory that can be assigned to the virtual machine. Maximum guest memory is also constrained by the selected guest architecture and the cluster compatibility level. If OS supports hotplugging, no. Otherwise, yes. Total Virtual CPUs The processing power allocated to the virtual machine as CPU Cores. For high performance, do not assign more cores to a virtual machine than are present on the physical host. If OS supports hotplugging, no. Otherwise, yes. Virtual Sockets The number of CPU sockets for the virtual machine. Do not assign more sockets to a virtual machine than are present on the physical host. If OS supports hotplugging, no. Otherwise, yes. Cores per Virtual Socket The number of cores assigned to each virtual socket. If OS supports hotplugging, no. Otherwise, yes. Threads per Core The number of threads assigned to each core. Increasing the value enables simultaneous multi-threading (SMT). IBM POWER8 supports up to 8 threads per core. For x86 and x86_64 (Intel and AMD) CPU types, the recommended value is 1, unless you want to replicate the exact host topology, which you can do using CPU pinning. For more information, see Pinning CPU . If OS supports hotplugging, no. Otherwise, yes. Chipset/Firmware Type Specifies the chipset and firmware type. Defaults to the cluster's default chipset and firmware type. Options are: I440FX Chipset with BIOS Legacy BIOS Q35 Chipset with BIOS BIOS without UEFI (Default for clusters with compatibility version 4.4) Q35 Chipset with UEFI BIOS with UEFI (Default for clusters with compatibility version 4.7) Q35 Chipset with UEFI SecureBoot UEFI with SecureBoot, which authenticates the digital signatures of the boot loader For more information, see UEFI and the Q35 chipset in the Administration Guide . Yes. Custom Emulated Machine This option allows you to specify the machine type. If changed, the virtual machine will only run on hosts that support this machine type. Defaults to the cluster's default machine type. Yes. Custom CPU Type This option allows you to specify a CPU type. If changed, the virtual machine will only run on hosts that support this CPU type. Defaults to the cluster's default CPU type. Yes. Hardware Clock Time Offset This option sets the time zone offset of the guest hardware clock. For Windows, this should correspond to the time zone set in the guest. Most default Linux installations expect the hardware clock to be GMT+00:00. Yes. 
Custom Compatibility Version The compatibility version determines which features are supported by the cluster, as well as the values of some properties and the emulated machine type. By default, the virtual machine is configured to run in the same compatibility mode as the cluster, as the default is inherited from the cluster. In some situations the default compatibility mode needs to be changed. An example of this is if the cluster has been updated to a later compatibility version but the virtual machines have not been restarted. These virtual machines can be set to use a custom compatibility mode that is older than that of the cluster. See Changing the Cluster Compatibility Version in the Administration Guide for more information. Yes. Serial Number Policy Override the system-level and cluster-level policies for assigning serial numbers to virtual machines. Apply a policy that is unique to this virtual machine: System Default : Use the system-wide defaults, which are configured in the Manager database using the engine configuration tool and the DefaultSerialNumberPolicy and DefaultCustomSerialNumber key names. The default value for DefaultSerialNumberPolicy is to use the Host ID. See Scheduling Policies in the Administration Guide for more information. Host ID : Set this virtual machine's serial number to the UUID of the host. Vm ID : Set this virtual machine's serial number to the UUID of this virtual machine. Custom serial number : Set this virtual machine's serial number to the value you specify in the following Custom Serial Number parameter. Yes. Custom Serial Number Specify the custom serial number to apply to this virtual machine. Yes. A.1.3. Virtual Machine Initial Run Settings Explained The following table details the options available on the Initial Run tab of the New Virtual Machine and Edit Virtual Machine windows. The settings in this table are only visible if the Use Cloud-Init/Sysprep check box is selected, and certain options are only visible when either a Linux-based or Windows-based option has been selected in the Operating System list in the General tab, as outlined below. Note This table does not include information on whether a power cycle is required because the settings apply to the virtual machine's initial run; the virtual machine is not running when you configure these settings. Table A.3. Virtual Machine: Initial Run Settings Field Name Operating System Description Use Cloud-Init/Sysprep Linux, Windows This check box toggles whether Cloud-Init or Sysprep will be used to initialize the virtual machine. VM Hostname Linux, Windows The host name of the virtual machine. Domain Windows The Active Directory domain to which the virtual machine belongs. Organization Name Windows The name of the organization to which the virtual machine belongs. This option corresponds to the text field for setting the organization name displayed when a machine running Windows is started for the first time. Active Directory OU Windows The organizational unit in the Active Directory domain to which the virtual machine belongs. Configure Time Zone Linux, Windows The time zone for the virtual machine. Select this check box and select a time zone from the Time Zone list. Admin Password Windows The administrative user password for the virtual machine. Click the disclosure arrow to display the settings for this option. Use already configured password : This check box is automatically selected after you specify an initial administrative user password.
You must clear this check box to enable the Admin Password and Verify Admin Password fields and specify a new password. Admin Password : The administrative user password for the virtual machine. Enter the password in this text field and the Verify Admin Password text field to verify the password. Authentication Linux The authentication details for the virtual machine. Click the disclosure arrow to display the settings for this option. Use already configured password : This check box is automatically selected after you specify an initial root password. You must clear this check box to enable the Password and Verify Password fields and specify a new password. Password : The root password for the virtual machine. Enter the password in this text field and the Verify Password text field to verify the password. SSH Authorized Keys : SSH keys to be added to the authorized keys file of the virtual machine. You can specify multiple SSH keys by entering each SSH key on a new line. Regenerate SSH Keys : Regenerates SSH keys for the virtual machine. Custom Locale Windows Custom locale options for the virtual machine. Locales must be in a format such as en-US . Click the disclosure arrow to display the settings for this option. Input Locale : The locale for user input. UI Language : The language used for user interface elements such as buttons and menus. System Locale : The locale for the overall system. User Locale : The locale for users. Networks Linux Network-related settings for the virtual machine. Click the disclosure arrow to display the settings for this option. DNS Servers : The DNS servers to be used by the virtual machine. DNS Search Domains : The DNS search domains to be used by the virtual machine. Network : Configures network interfaces for the virtual machine. Select this check box and click + or - to add or remove network interfaces to or from the virtual machine. When you click + , a set of fields appears in which you can specify whether to use DHCP, configure an IP address, netmask, and gateway, and specify whether the network interface starts on boot. Custom Script Linux Custom scripts that will be run on the virtual machine when it starts. The scripts entered in this field are custom YAML sections that are added to those produced by the Manager, and allow you to automate tasks such as creating users and files, configuring yum repositories, and running commands. For more information on the format of scripts that can be entered in this field, see the Custom Script documentation. Sysprep Windows A custom Sysprep definition. The definition must be in the format of a complete unattended installation answer file. You can copy the default answer files from the /usr/share/ovirt-engine/conf/sysprep/ directory on the machine on which the Red Hat Virtualization Manager is installed and alter the fields as required. See Templates for more information. Ignition 2.3.0 Red Hat Enterprise Linux CoreOS When Red Hat Enterprise Linux CoreOS is selected as the Operating System, this check box toggles whether Ignition will be used to initialize the virtual machine. A.1.4. Virtual Machine Console Settings Explained The following table details the options available on the Console tab of the New Virtual Machine and Edit Virtual Machine windows. Table A.4. Virtual Machine: Console Settings Field Name Description Power cycle required? Graphical Console Section A group of settings. Yes. Headless Mode Select this check box if you do not require a graphical console for the virtual machine. 
When selected, all other fields in the Graphical Console section are disabled. In the VM Portal, the Console icon in the virtual machine's details view is also disabled. Important See Configuring Headless Machines for more details and prerequisites for using headless mode. Yes. Video Type Defines the graphics device. QXL is the default and supports both graphic protocols. VGA supports only the VNC protocol. Yes. Graphics protocol Defines which display protocol to use. SPICE is the default protocol. VNC is an alternative option. To allow both protocols select SPICE + VNC . Yes. VNC Keyboard Layout Defines the keyboard layout for the virtual machine. This option is only available when using the VNC protocol. Yes. USB enabled Defines SPICE USB redirection. This check box is not selected by default. This option is only available for virtual machines using the SPICE protocol: Disabled (check box is cleared) - USB controller devices are added according to the devices.usb.controller value in the osinfo-defaults.properties configuration file. The default for all x86 and x86_64 operating systems is piix3-uhci . For ppc64 systems, the default is nec-xhci . Enabled (check box is selected) - Enables native KVM/SPICE USB redirection for Linux and Windows virtual machines. Virtual machines do not require any in-guest agents or drivers for native USB. Yes. Console Disconnect Action Defines what happens when the console is disconnected. This is only relevant with SPICE and VNC console connections. This setting can be changed while the virtual machine is running but will not take effect until a new console connection is established. Select either: No action - No action is taken. Lock screen - This is the default option. For all Linux machines and for Windows desktops this locks the currently active user session. For Windows servers, this locks the desktop and the currently active user. Logout user - For all Linux machines and Windows desktops, this logs out the currently active user session. For Windows servers, the desktop and the currently active user are logged out. Shutdown virtual machine - Initiates a graceful virtual machine shutdown. Reboot virtual machine - Initiates a graceful virtual machine reboot. No. Monitors The number of monitors for the virtual machine. This option is only available for virtual desktops using the SPICE display protocol. You can choose 1 , 2 or 4 . Note that multiple monitors are not supported for Windows systems with WDDMDoD drivers. Yes. Smartcard Enabled Smart cards are an external hardware security feature, most commonly seen in credit cards, but also used by many businesses as authentication tokens. Smart cards can be used to protect Red Hat Virtualization virtual machines. Select or clear the check box to activate and deactivate Smart card authentication for individual virtual machines. Yes. Single Sign On method Enabling Single Sign On allows users to sign into the guest operating system when connecting to a virtual machine from the VM Portal using the Guest Agent. Disable Single Sign On - Select this option if you do not want the Guest Agent to attempt to sign into the virtual machine. Use Guest Agent - Enables Single Sign On to allow the Guest Agent to sign you into the virtual machine. If you select Use Guest Agent, no. Otherwise, yes. Disable strict user checking Click the Advanced Parameters arrow and select the check box to use this option. With this option selected, the virtual machine does not need to be rebooted when a different user connects to it. 
By default, strict checking is enabled so that only one user can connect to the console of a virtual machine. No other user is able to open a console to the same virtual machine until it has been rebooted. The exception is that a SuperUser can connect at any time and replace a existing connection. When a SuperUser has connected, no normal user can connect again until the virtual machine is rebooted. Disable strict checking with caution, because you can expose the user's session to the new user. No. Soundcard Enabled A sound card device is not necessary for all virtual machine use cases. If it is for yours, enable a sound card here. Yes. Enable SPICE file transfer Defines whether a user is able to drag and drop files from an external host into the virtual machine's SPICE console. This option is only available for virtual machines using the SPICE protocol. This check box is selected by default. No. Enable SPICE clipboard copy and paste Defines whether a user is able to copy and paste content from an external host into the virtual machine's SPICE console. This option is only available for virtual machines using the SPICE protocol. This check box is selected by default. No. Serial Console Section A group of settings. Enable VirtIO serial console The VirtIO serial console is emulated through VirtIO channels, using SSH and key pairs, and allows you to access a virtual machine's serial console directly from a client machine's command line, instead of opening a console from the Administration Portal or the VM Portal. The serial console requires direct access to the Manager, since the Manager acts as a proxy for the connection, provides information about virtual machine placement, and stores the authentication keys. Select the check box to enable the VirtIO console on the virtual machine. Requires a firewall rule. See Opening a Serial Console to a Virtual Machine . Yes. A.1.5. Virtual Machine Host Settings Explained The following table details the options available on the Host tab of the New Virtual Machine and Edit Virtual Machine windows. Table A.5. Virtual Machine: Host Settings Field Name Sub-element Description Power cycle required? Start Running On Defines the preferred host on which the virtual machine is to run. Select either: Any Host in Cluster - The virtual machine can start and run on any available host in the cluster. Specific Host(s) - The virtual machine will start running on a particular host in the cluster. However, the Manager or an administrator can migrate the virtual machine to a different host in the cluster depending on the migration and high-availability settings of the virtual machine. Select the specific host or group of hosts from the list of available hosts. No. The virtual machine can migrate to that host while running. CPU options Pass-Through Host CPU When selected, allows virtual machines to use the host's CPU flags. When selected, Migration Options is set to Allow manual migration only . Yes Migrate only to hosts with the same TSC frequency When selected, this virtual machine can only be migrated to a host with the same TSC frequency. This option is only valid for High Performance virtual machines. Yes Migration Options Migration mode Defines options to run and migrate the virtual machine. If the options here are not used, the virtual machine will run or migrate according to its cluster's policy. 
Allow manual and automatic migration - The virtual machine can be automatically migrated from one host to another in accordance with the status of the environment, or manually by an administrator. Allow manual migration only - The virtual machine can only be migrated from one host to another manually by an administrator. Do not allow migration - The virtual machine cannot be migrated, either automatically or manually. No Migration policy Defines the migration convergence policy. If the check box is left unselected, the host determines the policy. Cluster default (Minimal downtime) - Overrides in vdsm.conf are still applied. The guest agent hook mechanism is disabled. Minimal downtime - Allows the virtual machine to migrate in typical situations. Virtual machines should not experience any significant downtime. The migration will be aborted if virtual machine migration does not converge after a long time (dependent on QEMU iterations, with a maximum of 500 milliseconds). The guest agent hook mechanism is enabled. Post-copy migration - When used, post-copy migration pauses the migrating virtual machine vCPUs on the source host, transfers only a minimum of memory pages, activates the virtual machine vCPUs on the destination host, and transfers the remaining memory pages while the virtual machine is running on the destination. The post-copy policy first tries pre-copy to verify whether convergence can occur. The migration switches to post-copy if the virtual machine migration does not converge after a long time. This significantly reduces the downtime of the migrated virtual machine, and also guarantees that the migration finishes regardless of how rapidly the memory pages of the source virtual machine change. It is optimal for migrating virtual machines in heavy continuous use, which would not be possible to migrate with standard pre-copy migration. The disadvantage of this policy is that in the post-copy phase, the virtual machine may slow down significantly as the missing parts of memory are transferred between the hosts. Warning If the network connection breaks prior to the completion of the post-copy process, the Manager pauses and then kills the running virtual machine. Do not use post-copy migration if the virtual machine availability is critical or if the migration network is unstable. Suspend workload if needed - Allows the virtual machine to migrate in most situations, including when the virtual machine is running a heavy workload. Because of this, virtual machines may experience a more significant downtime than with some other settings. The migration may still be aborted for extreme workloads. The guest agent hook mechanism is enabled. No Enable migration encryption Allows the virtual machine to be encrypted during migration. Cluster default Encrypt Don't encrypt No Parallel Migrations Allows you to specify whether and how many parallel migration connections to use. Cluster default : Parallel migration connections are determined by the cluster default. Disabled : The virtual machine is migrated using a single, non-parallel connection. Auto : The number of parallel connections is automatically determined. This settings might automatically disable parallel connections. Auto Parallel : The number of parallel connections is automatically determined. Custom : Allows you to specify the preferred number of parallel connections, the actual number may be lower. Number of VM Migration Connections This setting is only available when Custom is selected. 
The preferred number of custom parallel migrations, between 2 and 255. Configure NUMA NUMA Node Count The number of virtual NUMA nodes available in a host that can be assigned to the virtual machine. No NUMA Pinning Opens the NUMA Topology window. This window shows the host's total CPUs, memory, and NUMA nodes, and the virtual machine's virtual NUMA nodes. You can manually pin virtual NUMA nodes to host NUMA nodes by clicking and dragging each vNUMA from the box on the right to a NUMA node on the left. You can also set Tune Mode for memory allocation: Strict - Memory allocation will fail if the memory cannot be allocated on the target node. Preferred - Memory is allocated from a single preferred node. If sufficient memory is not available, memory can be allocated from other nodes. Interleave - Memory is allocated across nodes in a round-robin algorithm. If you define NUMA pinning, Migration Options is set to Allow manual migration only . Yes A.1.6. Virtual Machine High Availability Settings Explained The following table details the options available on the High Availability tab of the New Virtual Machine and Edit Virtual Machine windows. Table A.6. Virtual Machine: High Availability Settings Field Name Description Power cycle required? Highly Available Select this check box if the virtual machine is to be highly available. For example, in cases of host maintenance, all virtual machines are automatically live migrated to another host. If the host crashes and is in a non-responsive state, only virtual machines with high availability are restarted on another host. If the host is manually shut down by the system administrator, the virtual machine is not automatically live migrated to another host. Note that this option is unavailable for virtual machines defined as Server or Desktop if the Migration Options setting in the Hosts tab is set to Do not allow migration . For a virtual machine to be highly available, it must be possible for the Manager to migrate the virtual machine to other available hosts as necessary. However, for virtual machines defined as High Performance , you can define high availability regardless of the Migration Options setting. Yes. Target Storage Domain for VM Lease Select the storage domain to hold a virtual machine lease, or select No VM Lease to disable the functionality. When a storage domain is selected, it will hold a virtual machine lease on a special volume that allows the virtual machine to be started on another host if the original host loses power or becomes unresponsive. This functionality is only available on storage domain V4 or later. Note If you define a lease, the only Resume Behavior available is KILL. Yes. Resume Behavior Defines the desired behavior of a virtual machine that is paused due to storage I/O errors, once a connection with the storage is reestablished. You can define the desired resume behavior even if the virtual machine is not highly available. The following options are available: AUTO_RESUME - The virtual machine is automatically resumed, without requiring user intervention. This is recommended for virtual machines that are not highly available and that do not require user intervention after being in the paused state. LEAVE_PAUSED - The virtual machine remains in pause mode until it is manually resumed or restarted. KILL - The virtual machine is automatically resumed if the I/O error is remedied within 80 seconds. However, if more than 80 seconds pass, the virtual machine is ungracefully shut down. 
This is recommended for highly available virtual machines, to allow the Manager to restart them on another host that is not experiencing the storage I/O error. KILL is the only option available when using virtual machine leases. No. Priority for Run/Migration queue Sets the priority level for the virtual machine to be migrated or restarted on another host. No. Watchdog Allows users to attach a watchdog card to a virtual machine. A watchdog is a timer that is used to automatically detect and recover from failures. Once set, a watchdog timer continually counts down to zero while the system is in operation, and is periodically restarted by the system to prevent it from reaching zero. If the timer reaches zero, it signifies that the system has been unable to reset the timer and is therefore experiencing a failure. Corrective actions are then taken to address the failure. This functionality is especially useful for servers that demand high availability. Watchdog Model : The model of watchdog card to assign to the virtual machine. At current, the only supported model is i6300esb . Watchdog Action : The action to take if the watchdog timer reaches zero. The following actions are available: none - No action is taken. However, the watchdog event is recorded in the audit log. reset - The virtual machine is reset and the Manager is notified of the reset action. poweroff - The virtual machine is immediately shut down. dump - A dump is performed and the virtual machine is paused. The guest's memory is dumped by libvirt, therefore, neither 'kdump' nor 'pvpanic' is required. The dump file is created in the directory that is configured by auto_dump_path in the /etc/libvirt/qemu.conf file on the host. pause - The virtual machine is paused, and can be resumed by users. Yes. A.1.7. Virtual Machine Resource Allocation Settings Explained The following table details the options available on the Resource Allocation tab of the New Virtual Machine and Edit Virtual Machine windows. Table A.7. Virtual Machine: Resource Allocation Settings Field Name Sub-element Description Power cycle required? CPU Allocation CPU Profile The CPU profile assigned to the virtual machine. CPU profiles define the maximum amount of processing capability a virtual machine can access on the host on which it runs, expressed as a percent of the total processing capability available to that host. CPU profiles are defined for a cluster, based on quality of service entries created for data centers. No. CPU Shares Allows users to set the level of CPU resources a virtual machine can demand relative to other virtual machines. Low - 512 Medium - 1024 High - 2048 Custom - A custom level of CPU shares defined by the user. No. CPU Pinning Policy None - Runs without any CPU pinning. Manual - Runs a manually specified virtual CPU on a specific physical CPU and a specific host. Available only when the virtual machine is pinned to a Host. Resize and Pin NUMA - Resizes the virtual CPU and NUMA topology of the virtual machine according to the Host, and pins them to the Host resources. Dedicated - Exclusively pins virtual CPUs to host physical CPUs. Available for cluster compatibility level 4.7 or later. If the virtual machine has NUMA enabled, all nodes must be unpinned. Isolate Threads - Exclusively pins virtual CPUs to host physical CPUs. Each virtual CPU gets a physical core. Available for cluster compatibility level 4.7 or later. If the virtual machine has NUMA enabled, all nodes must be unpinned. No. 
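For the Watchdog dump action described above, the target directory comes from the host's libvirt configuration. A quick way to check it, assuming shell access to the host (an empty or commented-out value means libvirt's built-in default location is used):

grep -E '^[#[:space:]]*auto_dump_path' /etc/libvirt/qemu.conf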
CPU Pinning topology Enables the virtual machine's virtual CPU (vCPU) to run on a specific physical CPU (pCPU) in a specific host. The syntax of CPU pinning is v#p[_v#p] , for example: 0#0 - Pins vCPU 0 to pCPU 0. 0#0_1#3 - Pins vCPU 0 to pCPU 0, and pins vCPU 1 to pCPU 3. 1#1-4,^2 - Pins vCPU 1 to one of the pCPUs in the range of 1 to 4, excluding pCPU 2. The CPU Pinning Topology is populated automatically when Resize and Pin NUMA pinning is selected in CPU Pinning Policy . In order to pin a virtual machine to a host, you must also select the following on the Host tab: Start Running On: Specific Pass-Through Host CPU If CPU pinning is set and you change Start Running On: Specific a CPU pinning topology will be lost window appears when you click OK . When defined, Migration Options in the Hosts tab is set to Allow manual migration only . Yes. Memory Allocation Physical Memory Guaranteed The amount of physical memory guaranteed for this virtual machine. Should be any number between 0 and the defined memory for this virtual machine. If lowered, yes. Otherwise, no. Memory Balloon Device Enabled Enables the memory balloon device for this virtual machine. Enable this setting to allow memory overcommitment in a cluster. Enable this setting for applications that allocate large amounts of memory suddenly but set the guaranteed memory to the same value as the defined memory.Use ballooning for applications and loads that slowly consume memory, occasionally release memory, or stay dormant for long periods of time, such as virtual desktops. See Optimization Settings Explained in the Administration Guide for more information. Yes. Trusted Platform Module TPM Device Enabled Enables the addition of an emulated Trusted Platform Module (TPM) device. Select this check box to add an emulated Trusted Platform Module device to a virtual machine. TPM devices can only be used on x86_64 machines with UEFI firmware and PowerPC machines with pSeries firmware installed. See Adding Trusted Platform Module devices for more information. Yes. IO Threads IO Threads Enabled Enables IO threads. Select this check box to improve the speed of disks that have a VirtIO interface by pinning them to a thread separate from the virtual machine's other functions. Improved disk performance increases a virtual machine's overall performance. Disks with VirtIO interfaces are pinned to an IO thread using a round-robin algorithm. Yes. Queues Multi Queues Enabled Enables multiple queues. This check box is selected by default. It creates up to four queues per vNIC, depending on how many vCPUs are available. It is possible to define a different number of queues per vNIC by creating a custom property as follows: engine-config -s "CustomDeviceProperties={type=interface;prop={ other-nic-properties ;queues=[1-9][0-9]*}}" where other-nic-properties is a semicolon-separated list of pre-existing NIC custom properties. Yes. VirtIO-SCSI Enabled Allows users to enable or disable the use of VirtIO-SCSI on the virtual machines. Not applicable. VirtIO-SCSI Multi Queues Enabled The VirtIO-SCSI Multi Queues Enabled option is only available when VirtIO-SCSI Enabled is selected. Select this check box to enable multiple queues in the VirtIO-SCSI driver. This setting can improve I/O throughput when multiple threads within the virtual machine access the virtual disks. It creates up to four queues per VirtIO-SCSI controller, depending on how many disks are connected to the controller and how many vCPUs are available. Not applicable. 
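The CPU Pinning topology row above uses a compact string syntax that chains vCPU#pCPU pairs with underscores. The string below is a made-up example for a 4-vCPU virtual machine, split into one pair per line only for readability:

PINNING="0#0_1#1_2#2-5,^3_3#6"   # vCPU 0 -> pCPU 0, vCPU 1 -> pCPU 1, vCPU 2 -> any of pCPUs 2-5 except 3, vCPU 3 -> pCPU 6
echo "$PINNING" | tr '_' '\n'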
Storage Allocation The Storage Allocation option is only available when the virtual machine is created from a template. Not applicable. Thin Provides optimized usage of storage capacity. Disk space is allocated only as it is required. When selected, the format of the disks will be marked as QCOW2 and you will not be able to change it. Not applicable. Clone Optimized for the speed of guest read and write operations. All disk space requested in the template is allocated at the time of the clone operation. Possible disk formats are QCOW2 or Raw . Not applicable. Disk Allocation The Disk Allocation option is only available when you are creating a virtual machine from a template. Not applicable. Alias An alias for the virtual disk. By default, the alias is set to the same value as that of the template. Not applicable. Virtual Size The total amount of disk space that the virtual machine based on the template can use. This value cannot be edited, and is provided for reference only. Not applicable. Format The format of the virtual disk. The available options are QCOW2 and Raw . When Storage Allocation is Thin , the disk format is QCOW2 . When Storage Allocation is Clone , select QCOW2 or Raw . Not applicable. Target The storage domain on which the virtual disk is stored. By default, the storage domain is set to the same value as that of the template. Not applicable. Disk Profile The disk profile to assign to the virtual disk. Disk profiles are created based on storage profiles defined in the data centers. For more information, see Creating a Disk Profile . Not applicable. A.1.8. Virtual Machine Boot Options Settings Explained The following table details the options available on the Boot Options tab of the New Virtual Machine and Edit Virtual Machine windows Table A.8. Virtual Machine: Boot Options Settings Field Name Description Power cycle required? First Device After installing a new virtual machine, the new virtual machine must go into Boot mode before powering up. Select the first device that the virtual machine must try to boot: Hard Disk CD-ROM Network (PXE) Yes. Second Device Select the second device for the virtual machine to use to boot if the first device is not available. The first device selected in the option does not appear in the options. Yes. Attach CD If you have selected CD-ROM as a boot device, select this check box and select a CD-ROM image from the drop-down menu. The images must be available in the ISO domain. Yes. Enable menu to select boot device Enables a menu to select the boot device. After the virtual machine starts and connects to the console, but before the virtual machine starts booting, a menu displays that allows you to select the boot device. This option should be enabled before the initial boot to allow you to select the required installation media. Yes. A.1.9. Virtual Machine Random Generator Settings Explained The following table details the options available on the Random Generator tab of the New Virtual Machine and Edit Virtual Machine windows. Table A.9. Virtual Machine: Random Generator Settings Field Name Description Power cycle required? Random Generator enabled Selecting this check box enables a paravirtualized Random Number Generator PCI device (virtio-rng). This device allows entropy to be passed from the host to the virtual machine in order to generate a more sophisticated random number. Note that this check box can only be selected if the RNG device exists on the host and is enabled in the host's cluster. Yes. 
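The Thin and Clone allocation choices above ultimately determine whether a disk image is QCOW2 or raw. On a host with access to the image file you can confirm the format with qemu-img; the path here is illustrative:

qemu-img info /path/to/disk/image
# the "file format:" line reports either qcow2 or raw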
Period duration (ms) Specifies the duration of the RNG's "full cycle" or "full period" in milliseconds. If omitted, the libvirt default of 1000 milliseconds (1 second) is used. If this field is filled, Bytes per period must be filled also. Yes. Bytes per period Specifies how many bytes are permitted to be consumed per period. Yes. Device source: The source of the random number generator. This is automatically selected depending on the source supported by the host's cluster. /dev/urandom source - The Linux-provided random number generator. /dev/hwrng source - An external hardware generator. Yes. A.1.10. Virtual Machine Custom Properties Settings Explained The following table details the options available on the Custom Properties tab of the New Virtual Machine and Edit Virtual Machine windows. Table A.10. Virtual Machine Custom Properties Settings Field Name Description Recommendations and Limitations Power cycle required? sndbuf Enter the size of the buffer for sending the virtual machine's outgoing data over the socket. Default value is 0. - Yes hugepages Enter the huge page size in KB. Set the huge page size to the largest size supported by the pinned host. The recommended size for x86_64 is 1 GB. The virtual machine's huge page size must be the same size as the pinned host's huge page size. The virtual machine's memory size must fit into the selected size of the pinned host's free huge pages. The NUMA node size must be a multiple of the huge page's selected size. Yes vhost Disables vhost-net, which is the kernel-based virtio network driver on virtual network interface cards attached to the virtual machine. To disable vhost, the format for this property is LogicalNetworkName : false . This will explicitly start the virtual machine without the vhost-net setting on the virtual NIC attached to LogicalNetworkName . vhost-net provides better performance than virtio-net, and if it is present, it is enabled on all virtual machine NICs by default. Disabling this property makes it easier to isolate and diagnose performance issues, or to debug vhost-net errors; for example, if migration fails for virtual machines on which vhost does not exist. Yes sap_agent Enables SAP monitoring on the virtual machine. Set to true or false . - Yes viodiskcache Caching mode for the virtio disk. writethrough writes data to the cache and the disk in parallel, writeback does not copy modifications from the cache to the disk, and none disables caching. In order to ensure data integrity in the event of a fault in storage, in the network, or in a host during migration, do not migrate virtual machines with viodiskcache enabled, unless virtual machine clustering or application-level clustering is also enabled. Yes scsi_hostdev Optionally, if you add a SCSI host device to a virtual machine, you can specify the optimal SCSI host device driver. For details, see Adding Host Devices to a Virtual Machine . scsi_generic : (Default) Enables the guest operating system to access OS-supported SCSI host devices attached to the host. Use this driver for SCSI media changers that require raw access, such as tape or CD changers. scsi_block : Similar to scsi_generic but better speed and reliability. Use for SCSI disk devices. If trim or discard for the underlying device is desired, and it's a hard disk, use this driver. scsi_hd : Provides performance with lowered overhead. Supports large numbers of devices. Uses the standard SCSI device naming scheme. Can be used with aio-native. Use this driver for high-performance SSDs. 
virtio_blk_pci : Provides the highest performance without the SCSI overhead. Supports identifying devices by their serial numbers. If you are not sure, try scsi_hd . Yes Warning Increasing the value of the sndbuf custom property results in increased occurrences of communication failure between hosts and unresponsive virtual machines. A.1.11. Virtual Machine Icon Settings Explained You can add custom icons to virtual machines and templates. Custom icons can help to differentiate virtual machines in the VM Portal. The following table details the options available on the Icon tab of the New Virtual Machine and Edit Virtual Machine windows. Note This table does not include information on whether a power cycle is required because these settings apply to the virtual machine's appearance in the Administration portal , not to its configuration. Table A.11. Virtual Machine: Icon Settings Button Name Description Upload Click this button to select a custom image to use as the virtual machine's icon. The following limitations apply: Supported formats: jpg, png, gif Maximum size: 24 KB Maximum dimensions: 150px width, 120px height Power cycle required? Use default A.1.12. Virtual Machine Foreman/Satellite Settings Explained The following table details the options available on the Foreman/Satellite tab of the New Virtual Machine and Edit Virtual Machine windows Table A.12. Virtual Machine:Foreman/Satellite Settings Field Name Description Power cycle required? Provider If the virtual machine is running Red Hat Enterprise Linux and the system is configured to work with a Satellite server, select the name of the Satellite from the list. This enables you to use Satellite's content management feature to display the relevant Errata for this virtual machine. See Configuring Satellite Errata for more details. Yes.
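To check a candidate icon against the size and dimension limits listed above before uploading it, something like the following works on a machine with ImageMagick installed; the file name is an assumption:

ICON=my-vm-icon.png
stat -c '%s bytes' "$ICON"                    # must not exceed 24 KB
identify -format '%w x %h pixels\n' "$ICON"   # must fit within 150 x 120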
| null |
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/virtual_machine_management_guide/appe-Reference_Settings_in_Administration_Portal_and_User_Portal_Windows
|
Chapter 4. Installing a cluster on vSphere using the Assisted Installer
|
Chapter 4. Installing a cluster on vSphere using the Assisted Installer You can install OpenShift Container Platform on on-premise hardware or on-premise VMs by using the Assisted Installer. Installing OpenShift Container Platform by using the Assisted Installer supports the x86_64 , AArch64 , ppc64le , and s390x CPU architectures. The Assisted Installer is a user-friendly installation solution offered on the Red Hat Hybrid Cloud Console. The Assisted Installer supports various deployment platforms, with a focus on the following infrastructures: Bare metal Nutanix vSphere 4.1. Additional resources Installing OpenShift Container Platform with the Assisted Installer
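If you are preparing on-premise hosts or VMs, you can quickly confirm that each machine reports one of the supported CPU architectures listed above:

uname -m   # expected values: x86_64, aarch64, ppc64le, or s390x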
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_vsphere/installing-vsphere-assisted-installer
|
5.6. anaconda
|
5.6. anaconda 5.6.1. RHBA-2012:0782 - anaconda bug fix and enhancement update Updated anaconda packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The anaconda package contains portions of the Anaconda installation program that can be run by the user for reconfiguration and advanced installation options. Bug Fixes BZ# 690058 Prior to this update, the noprobe argument in a kickstart file was not passed to the last known codepath. Consequently, the noprobe request was not properly honored by Anaconda . This update improves the code so that the argument is passed to the last known codepath. As a result, device drivers are loaded according to the device command in the kickstart file. BZ# 691794 Previously, an improper device file that provided access to an array as a whole was used to initialize the boot loader in a Device Mapper Multipath ( DM-Multipath ) environment. Consequently, the system was not bootable. Anaconda has been modified to enumerate all drives in an array and initialize the boot loader on each of them. As a result, the system now boots as expected. BZ# 723404 When performing a minimal installation from media without the use of a network, network devices did not have a working default network configuration. Consequently, bringing a network device up after reboot using the ifup command failed. This update sets the value of BOOTPROTO to dhcp in default network device configuration files. As a result, network devices can be activated successfully using the ifup command after reboot in the scenario described. BZ# 727136 When Anaconda places a PowerPC Reference Platform ( PReP ) boot partition on a different drive to the root partition, the system cannot boot. This update forces the PReP boot partition to be on the same drive as the root partition. As a result, the system boots as expected. BZ# 734128 Due to a regression, when installing on systems with pre-existing mirrored Logical Volumes ( LV ), the installer failed to properly detect the Logical Volume Management configuration containing mirrored logical volumes. Consequently, a mirrored logical volume created before installation was not shown and could not be used in kickstart . The code to handle mirrored logical volumes has been updated to make use of the udev information that changed due to a bug fix. As a result, mirrored logical volumes are correctly detected by the installer. BZ# 736457 On IBM System z architectures, z/VM guests with only one CPU allocated failed to read the Conversational Monitor System (CMS) configuration file used by the installation environment. Consequently, users of z/VM guests with a single CPU had to either pass all installation environment configuration values on the kernel boot line or supply the information at the interactive prompts as the installation environment booted up. This update improves the code to detect the number of guests after mounting the /proc file. As a result, guests with one CPU can bring the boot device online so the CMS configuration file can be read and automated installations proceed as expected. BZ# 738577 The repo commands in kickstart generated by Anaconda contained base installation repository information but they should contain only additional repositories added either by the repo kickstart command or in the graphical user interface (GUI). Consequently, in media installations, the repo command generated for installation caused a failure when the kickstart file was used. 
With this update, Anaconda now generates repo commands only for additional repositories. As a result, kickstart will not fail for media installations. BZ# 740870 Manual installation on to BIOS RAID devices of level 0 or level 1 produced an Intel Media Storage Manager ( IMSM ) metadata read error in the installer. Consequently, users were not able to install to such devices. With this update, Anaconda properly detects BIOS RAID level 0 and level 1 IMSM metadata. As a result, users are able to install to these devices. BZ# 746495 The LiveCD environment was missing a legacy symlink to the devkit-disks utility. Consequently, the call that modified automounter behavior was never properly executed. The code has been updated to call the proper non-legacy binary. As a result, USB devices used during installation are no longer automounted. BZ# 747219 The console tty1 was put under control of Anaconda , but was not returned when Anaconda exited. Consequently, init did not have permission to modify tty1's settings to enable Ctrl + C functionality when Anaconda exited, which resulted in Ctrl + C not working when the installer prompted the user to press the Ctrl + C or Ctrl + Alt + Delete key combination after Anaconda terminated unexpectedly. A code returning tty1 control back to init was added to Anaconda. As a result, Ctrl + C now works as expected if the user is prompted to press it when Anaconda crashes. BZ# 750126 The Bash version used in the buildinstall script had a bug that influenced parsing of the =~ operator. This operator is used to check for the architecture when including files. Consequently, some binaries which provide the grub command were present on x86_64 versions of the installer, but were missing from i686 media. The Bash code has been modified to prevent this bug. As a result, the binaries are now also present on i686 media and users can now use the grub command from installation media as expected. BZ# 750417 Due to bad ordering in the unmounting sequence, the dynamic linker failed to link libraries, which caused the mdadm utility not to work and exit with the status code of 127 . This update fixes the ordering in the unmounting sequence and as a result, the dynamic linker and mdadm now work correctly. BZ# 750710 There was no check to see if the file descriptors passed as stdout and stderr were distinct. Consequently, if the stdout and stderr descriptors were the same, using them both for writing resulted in overwriting and the log file not containing all of the lines expected. With this update, if the stdout and stderr descriptors are the same then only one of them is used for both stdin and stderr. As a result, the log file contains all lines from both stdout and stderr. BZ# 753108 When installing on a system with more than one disk with a PowerPC Reference Platform ( PReP ) partition present, the PReP partitions that should be left untouched were updated. This update corrects the problem so that PReP partitions other than the one used during installation are left untouched. As a result, old PReP partitions do not get updated. BZ# 754031 The kernel command line /proc/cmdline ends with \n but the installer only checked for \0 . Consequently, the devel argument was not detected when it was the last argument on the command line and the installation failed. This update improves the code to also check for \n . As a result, the devel argument is correctly parsed and installation proceeds as expected. 
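The BZ# 750126 entry above concerns Bash's =~ regular-expression operator. A minimal, illustrative architecture check of the same general shape (not the buildinstall script's actual code):

arch=$(uname -m)
re='^(i[3-6]86|x86_64)$'
if [[ "$arch" =~ $re ]]; then
    echo "including boot loader binaries for $arch"
fi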
BZ# 756608 Network installations on IBM System z check the nameserver address provided using the ping command. Environments restricting ICMP ECHO packets will cause this test to fail, halting the installation and asking the user whether or not the provided nameserver address is valid. Consequently, automated installations using kickstart will stop if this test fails. With this update, in the event that the ping test fails, the nslookup command is used to validate the provided nameserver address. If the nslookup test succeeds then kickstart will continue with the installation. As a result, automated network installations on IBM System z in non-interactive mode will complete as expected in the scenario described. BZ# 760250 When configuring a system with multiple active network interfaces and the ksdevice = link command was present, the link specification was not used consistently for device activation and device configuration. Consequently, other network devices having link status were sometimes misconfigured using the settings targeted to the device activated by the installer. With this update, the code has been improved and now refers to the same device with link specification both in case of device activation and device configuration. As a result, when multiple devices with link status are present during installation, ksdevice = link specification of the device to be activated and used by the installer does not cause misconfiguration of another device having link status. BZ# 766902 When configuring the network using the Anaconda GUI hostname screen, the keyboard shortcut for the Configure Network button was missing. This update adds the C keyboard shortcut. Network configuration can now be invoked using the Alt + C keyboard shortcut. BZ# 767727 The Ext2FS class in Anaconda has a maximum file size attribute correctly set to 8 TB , but Ext3FS and Ext4FS inherited this value without overriding it. Consequently, when attempting to create an ext3 or ext4 file system of a size greater than 8Tb the installer would not allow it. With this update, the installer's upper bound for new ext3 and ext4 filesystem size has been adjusted from 8Tb to 16TB . As a result, the installer now allows creation of ext3 and ext4 filesystems up to 16TB . BZ# 769145 The Anaconda dhcptimeout boot option was not working. NetworkManager used a DHCP transaction timeout of 45 seconds without the possibility of configuring a different value. Consequently, in certain cases NetworkManager failed to obtain a network address. NetworkManager has been extended to read the timeout parameter from a DHCP configuration file and use that instead of the default value. Anaconda has been updated to write out the dhcptimeout value to the interface configuration file used for installation. As a result, the boot option dhcptimeout works and NetworkManager now waits to obtain an address for the duration of the DHCP transaction period as specified in the DHCP client configuration file. BZ# 783245 Prior to this update, USB3 modules were not in the Anaconda install image. Consequently, USB3 devices were not detected by Anaconda during installation. This update adds the USB3 modules to the install image and USB3 devices are now detected during installation. 
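The ping-then-nslookup fallback described in BZ# 756608 above can be reproduced outside the installer; the nameserver address below is a placeholder from the TEST-NET range:

NS=192.0.2.53
if ping -c 1 -W 2 "$NS" >/dev/null 2>&1; then
    echo "nameserver reachable via ICMP"
elif nslookup redhat.com "$NS" >/dev/null 2>&1; then
    echo "ICMP blocked, but the nameserver answers DNS queries"
else
    echo "nameserver validation failed"
fi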
BZ# 783841 When the kickstart clearpart command or the installer's automatic partitioning options to clear old data from the system's disks were used with complex storage devices such as logical volumes and software RAID, LVM tools caused the installation process to become unresponsive due to a deadlock. Consequently, the installer failed when trying to remove old metadata from complex storage devices. This update changes the LVM commands in the udev rules packaged with the installer to use a less restrictive method of locking and the installer was changed to explicitly remove partitions from a disk instead of simply creating a new partition table on top of the old contents when it initializes a disk. As a result, LVM no longer hangs in the scenario described. BZ# 785400 The /usr/lib/anaconda/textw/netconfig_text.py file tried to import a module from the wrong location. Consequently, Anaconda failed to start and the following error message was generated: The code has been corrected and the error no longer occurs in the scenario described. BZ# 788537 Prior to this update, kickstart repository entries did not use the global proxy setting. Consequently, on networks restricted to use a proxy installation would terminate unexpectedly when attempting to connect to additional repository entries in a kickstart file if no proxy had been explicitly specified. This update changes the code to use the global proxy if an additional repository has no proxy set for it. As a result, the global proxy setting will be used and installation will proceed as expected in the scenario described. BZ# 800388 The kickstart pre and post installation scripts had no information about the proxy being used by Anaconda . As a consequence, programs such as wget and curl would not work properly in a pre-installation and post-installation script on networks restricted to using a proxy. This update sets the PROXY , PROXY_USER , PROXY_PASSWORD environmental variables. As a result, pre and post installation scripts now have access to the proxy setting used by Anaconda. BZ# 802397 Using the --onbiosdisk = NUMBER option for the kickstart part command sometimes caused installation failures as Anaconda was not able to find the disk that matches the specified BIOS disk number. Users wishing to use BIOS disk numbering to control kickstart installations were not able to successfully install Red Hat Enterprise Linux. This update adjusts the comparison in Anaconda that matches the BIOS disk number to determine the Linux device name. As a result, users wishing to use BIOS disk numbering to control kickstart installations will now be able to successfully install Red Hat Enterprise Linux. BZ# 805910 Due to a regression, when running the system in Rescue mode with no or only uninitialized disks, the Anaconda storage subsystem did not check for the presence of a GUI before presenting the user with a list of options. Consequently, when the user selected continue the installer terminated unexpectedly with a traceback. This update adds a check for presence of the GUI and falls back to a TUI if there is none. As a result, the user is informed about the lack of usable disks in the scenario described. BZ# 823810 When using Anaconda with Qlogic qla4xxx devices in firmware boot mode and with iSCSI targets set up in BIOS (either enabled or disabled), the devices were exposed as iSCSI devices. But in this mode the devices cannot be handled with the iscsiadm and libiscsi tools used by the installer. 
Consequently, installation failed with a traceback during examination of storage devices by the installer. This update changes the installer to not try to manage iSCSI devices set up with qla4xxx firmware with iscsiadm or libiscsi. As a result, installation in an environment with iSCSI targets set up by qla4xxx devices in firmware mode finishes successfully. Note The firmware boot mode is turned on and off by the qla4xxx.ql4xdisablesysfsboot boot option. With this update, it is enabled by default. Enhancements BZ# 500273 There was no support for binding of iSCSI connections to network interfaces, which is required for installations using multiple iSCSI connections to a target on a single subnet for Device Mapper Multipath (DM-Multipath) connectivity. Consequently, DM-Multipath connectivity could not be used on a single subnet as all devices used the default network interface. With this update, the Bind targets to network interfaces option has been added to the " Advanced Storage Options " dialog box. When turned on, targets discovered specifically for all active network interfaces are available for selection and login. For kickstart installations a new iscsi --iface option can be used to specify network interface to which a target should be bound. Once interface binding is used, all iSCSI connections have to be bound, that is to say the --iface option has to be specified for all iscsi commands in kickstart. Network devices required for iSCSI connections can be activated either using kickstart network command with the --activate option or in the graphical user interface (GUI) using the Configure Network button from the " Advanced Storage Options " dialog ( " Connect Automatically " has to be checked when configuring the device so that the device is also activated in the installer). As a result, it is now possible to configure and use DM-Multipath connectivity for iSCSI devices using different network interfaces on a single subnet during installation. BZ# 625697 The curl command line tool was not in the install image file. Consequently, curl could not be used in the %pre section of kickstart. This update adds curl to the install image and curl can be used in the %pre section of kickstart. BZ# 660686 Support for installation using IP over InfiniBand ( IPoIB ) interfaces has been added. As a result, it is possible to install systems connected directly to an InfiniBand network using IPoIB network interfaces. BZ# 663647 Two new options were added to the kickstart volgroup command to specify initially unused space in megabytes or as a percentage of the total volume group size. These options are only valid for volume groups being created during installation. As a result, users can effectively reserve space in a new volume group for snapshots while still using the --grow option for logical volumes within the same volume group. BZ# 671230 The GPT disk label is now used for disks of size 2.2 TB and larger. As a result, Anaconda now allows installation to disks of size 2.2 TB and larger, but the installed system will not always boot properly on non- EFI systems. Disks of size 2.2 TB and larger may be used during the installation process, but only as data disks; they should not be used as bootable disks. BZ# 705328 When an interface configuration file is created by a configuration application such as Anaconda , NetworkManager generates the Universally Unique IDentifier ( UUID ) by hashing the existing configuration file name. 
Consequently, the same UUID was generated on multiple installed systems for a given network device name. With this update, a random UUID is generated by Anaconda for NetworkManager so that it does not have to generate the connection UUID by hashing the configuration file name. As a result, each network connection of all installed systems has a different UUID. BZ# 735791 When IPv6 support is set to be disabled by the installer using the noipv6 boot option, or the network --noipv6 kickstart command, or by using the " Configure TCP/IP " screen of the loader Text User Interface ( TUI ), and no network device is configured for IPv6 during installation, the IPv6 kernel modules on the installed system will now be disabled. BZ# 735857 The ability to configure a VLAN discovery option for Fibre Channel over Ethernet ( FCoE ) devices added during installation using Anaconda 's graphical user interface was required. All FCoE devices created in the Anaconda installer were configured to perform VLAN discovery using the fcoemon daemon by setting the AUTO_VLAN value of its configuration file to yes . A new " Use auto vlan " checkbox was added to the " Advanced Storage Options " dialog, which is invoked by the Add Advanced Target button in the " Advanced Storage Devices " screen. As a result, when adding an FCoE device in Anaconda, it is now possible to configure the VLAN discovery option of the device using the " Use auto vlan " checkbox in the " Advanced Storage Options " dialog. The value of the AUTO_VLAN option of the FCoE device configuration file /etc/fcoe/cfg-device is set accordingly. BZ# 737097 The lsscsi and sg3_utils utilities were not present in the install image. Consequently, maintenance of Data Integrity Field ( DIF ) disks was not possible. This update adds lsscsi and sg3_utils to the install image, and utilities to maintain DIF disks can now be used during the installation. BZ# 743784 Anaconda creates FCoE configuration files under the /etc/fcoe/ directory using biosdevname , which is the new style interface naming scheme, for all the available Ethernet interfaces for FCoE BFS. However, it did not add the ifname kernel command line argument for an FCoE interface that stays offline after discovering FCoE targets during installation. Because of this, during a subsequent reboot the system tried to find the old style eth X interface name in /etc/fcoe/ , which does not match the file created by Anaconda using biosdevname. Therefore, due to the missing FCoE config file, the FCoE interface is never created on this interface. Consequently, during FCoE BFS installation, when an Ethernet interface went offline after discovering the targets, FCoE links did not come up after reboot. This update adds dracut ip parameters for all FCoE interfaces, including those that went offline during installation. As a result, FCoE interfaces disconnected during installation will be activated after reboot. BZ# 744129 Installations with the swap --recommended command in kickstart created a swap file of size 2 GB plus the installed RAM size regardless of the amount of RAM installed. Consequently, machines with a large amount of RAM had huge swap files, prolonging the time before the oom_kill syscall was invoked even in malfunctioning cases. In this update, swap size calculations for swap --recommended were changed to meet the values recommended in the documentation https://access.redhat.com/site/solutions/15244 and the --hibernation option was added for the swap kickstart command and as the default in GUI/TUI installations. 
As a result, machines with a lot of RAM have a reasonable swap size now if swap --recommended is used. However, hibernation might not work with this configuration. If users want to use hibernation they should use swap --hibernation . BZ# 755147 If there are multiple Ethernet interfaces configured for FCoE boot, by default, only the primary interface is turned on and the other interfaces are not configured. This update sets the value ONBOOT = yes in the ifcfg configuration file during installation for all network interfaces used by FCoE. As a result, all network devices used for installation to FCoE storage devices are activated automatically after reboot. BZ# 770486 This update adds the Netcat ( nc ) networking utility to the install environment. Users can now use the nc program in Rescue mode. BZ# 773545 The virt-what shell script has been added to the install image. Users can now use the virt-what tool in kickstart. BZ# 784327 Firmware files were loaded only from RPM files in USDprefix/lib/firmware paths on a Driver Update Disk ( DUD ). This update adds the USDprefix/lib/firmware/updates directory to the path to be searched for firmware. RPM files containing firmware updates can now have firmware files in %prefix/lib/firmware/updates . Users of anaconda should upgrade to these updated packages, which resolve these issues and add these enhancements.
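The swap sizing change in BZ# 744129 above is driven by two kickstart options. A minimal partitioning fragment showing both (the rest of the kickstart file is assumed):

# smaller swap, based on the installed RAM, per the updated formula:
part swap --recommended
# or, if suspend-to-disk must work, size swap for hibernation instead:
# part swap --hibernation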
|
[
"No module named textw.netconfig_text"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/anaconda
|
13.3. Performing a Driver Update During Installation
|
13.3. Performing a Driver Update During Installation You can perform a driver update during installation in the following ways: let the installer automatically find a driver update disk. let the installer prompt you for a driver update. use a boot option to specify a driver update disk. 13.3.1. Let the Installer Find a Driver Update Disk Automatically Attach a block device with the filesystem label OEMDRV before starting the installation process. The installer will automatically examine the device and load any driver updates that it detects and will not prompt you during the process. Refer to Section 13.2.1.1, "Preparing to use an image file on local storage" to prepare a storage device for the installer to find.
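A minimal way to prepare such a device on a Linux system, assuming /dev/sdX1 stands in for your USB partition (verify the real device name with lsblk or blkid before writing to it):

mkfs.ext2 -L OEMDRV /dev/sdX1   # or relabel an existing ext filesystem with: e2label /dev/sdX1 OEMDRV
mount /dev/sdX1 /mnt
# copy the vendor-supplied driver update contents onto the device here
umount /mnt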
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/sect-performing_a_driver_update_during_installation-ppc
|
Making open source more inclusive
|
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/getting_started_with_eclipse_temurin/making-open-source-more-inclusive
|
Chapter 20. Using the system health dashboard
|
Chapter 20. Using the system health dashboard The Red Hat Advanced Cluster Security for Kubernetes (RHACS) system health dashboard provides a single interface for viewing health-related information about the RHACS components. 20.1. Analyzing and managing the system health information You can view all the health-related information associated with the Red Hat Advanced Cluster Security for Kubernetes (RHACS) components on the System Health page. Procedure In the RHACS portal, click Platform Configuration System Health . Optional: To download the administration usage data as a CSV file, perform the following steps: Click Show administration usage . Note The Administration usage page displays the administration usage data you have collected, including the number of secured Kubernetes nodes and CPU units. The current usage reflects the last metrics received from Sensors, with a delay of about 5 minutes. Maximum usage is aggregated hourly and only includes clusters that are still connected. The date range is inclusive and matches your time zone. This data is not sent to Red Hat or displayed as Prometheus metrics. Enter the start and end date in YYYY-MM-DD format. If you do not specify the start and end date, the first and last day of the month is selected automatically. To download the administration usage data, click Download CSV . Optional: To download the platform data, perform the following steps: Click Generate diagnostic bundle . Optional: Choose the appropriate method to filter the platform data that you want to include in the compressed file: To filter the platform data based on specific clusters, select one or more clusters from the Filter by clusters drop-down list. If you do not select a cluster, all clusters are selected automatically. To filter the platform data according to the start time, enter a time in coordinated universal time (UTC) format, yyyy-mm-ddThh:mmZ . If you do not specify a start time, the diagnostic bundle includes the platform data for the last 20 minutes. To download the platform data, click Download diagnostic bundle . Optional: To view the Cluster status , Sensor upgrade , or Credential expiration information associated with one or more clusters, perform the following steps: Click View clusters . To display further details on a cluster, click on the cluster name. You can find all the status information associated with the selected cluster in the Cluster summary section. Click . Optional: To download the required YAML file to update your Helm values, click Download Helm values . Click Finish . Optional: To update the certificate associated with the RHACS components such as Central, StackRox Scanner, or Scanner V4, perform the following steps: Scroll down to locate the component for which you want to update the certificate. Click Download YAML . Optional: To reissue the internal certificates, perform the following steps: Click Reissuing internal certificates . Follow the instructions in the Red Hat documentation to apply the YAML and complete the reissue. 20.2. System health dashboard page overview The System Health dashboard page organizes information into the following groups: Cluster status Indicates the overall status of the cluster, and the status of essential Red Hat Advanced Cluster Security for Kubernetes (RHACS) components such as Sensor, Collector and Admission Controller. Sensor upgrade Indicates the status of any pending or in-progress upgrades for Sensor. Credential expiration Indicates the expiration date of the critical credentials. 
StackRox Scanner Vulnerability Definitions Indicates the status of the vulnerability definitions database that StackRox Scanner uses. Scanner V4 Vulnerabilities Indicates the status of the vulnerability definitions database that Scanner V4 uses. Image Integrations Indicates the status of integrated image registries and scanners. Notifier Integrations Indicates the status of any notifiers that you have integrated. Backup Integrations Indicates the status of any backup providers that you have integrated. Declarative configuration Indicates the status and consistency of the declarative configurations. Central certificate Indicates the expiration date of the Central certificate. StackRox Scanner certificate Indicates the expiration date of the StackRox Scanner certificate. Scanner V4 certificate Indicates the expiration date of the Scanner V4 certificate. 20.2.1. Image Integrations section The Image Integrations section organizes the information into the following groups: Name Indicates the name of the integrated image. Label Indicates a descriptive label associated with the integration for additional context. Error message Indicates any issues or errors that the integration encounters. Date Indicates the timestamp of the last health check or status update for the integration. 20.3. Administration usage page overview When you click Show administration usage in the System Health page, you can view the product usage data for the number of secured Kubernetes nodes and CPU units for secured clusters based on metrics collected from Sensors. You can use this information to estimate Red Hat Advanced Cluster Security for Kubernetes (RHACS) consumption data for reporting. For more information on how CPU units are defined in Kubernetes, see CPU resource units (Kubernetes documentation). Note OpenShift Container Platform provides its own usage reports. You can use this information with self-managed Kubernetes systems. RHACS provides the following usage data in the web portal and API: Currently secured CPU units Indicates the number of Kubernetes CPU units that your RHACS secured clusters uses, as of the latest metrics collection. Node count Indicates the number of Kubernetes nodes that RHACS secures, as of the latest metrics collection. Maximum secured CPU units Indicates the maximum number of CPU units that your RHACS secured clusters use, measured hourly and aggregated for the time period that you specified. Node count Indicates the maximum number of Kubernetes nodes that RHACS secures, measured hourly and aggregated for the time period that you specified. CPU units observation date Indicates the date on which the maximum secured CPU units data was collected. Node count observation date Indicates the date on which the maximum secured node count data was collected.
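If you prefer working from a terminal, recent versions of the roxctl CLI can generate the same diagnostic bundle that the System Health page offers. The endpoint and token below are placeholders, and you should confirm the exact subcommand against the roxctl version you have installed; this is a sketch, not a verbatim procedure from this guide.
# Point roxctl at Central (address and token are placeholders) and download a diagnostic bundle
export ROX_ENDPOINT=central.example.com:443
export ROX_API_TOKEN=<api_token>
roxctl central debug download-diagnostics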
| null |
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.7/html/operating/use-system-health-dashboard
|
Chapter 2. Cluster Observability Operator overview
|
Chapter 2. Cluster Observability Operator overview The Cluster Observability Operator (COO) is an optional component of the OpenShift Container Platform designed for creating and managing highly customizable monitoring stacks. It enables cluster administrators to automate configuration and management of monitoring needs extensively, offering a more tailored and detailed view of each namespace compared to the default OpenShift Container Platform monitoring system. The COO deploys the following monitoring components: Prometheus - A highly available Prometheus instance capable of sending metrics to an external endpoint by using remote write. Thanos Querier (optional) - Enables querying of Prometheus instances from a central location. Alertmanager (optional) - Provides alert configuration capabilities for different services. UI plugins (optional) - Enhances the observability capabilities with plugins for monitoring, logging, distributed tracing and troubleshooting. Korrel8r (optional) - Provides observability signal correlation, powered by the open source Korrel8r project. 2.1. COO compared to default monitoring stack The COO components function independently of the default in-cluster monitoring stack, which is deployed and managed by the Cluster Monitoring Operator (CMO). Monitoring stacks deployed by the two Operators do not conflict. You can use a COO monitoring stack in addition to the default platform monitoring components deployed by the CMO. The key differences between COO and the default in-cluster monitoring stack are shown in the following table: Feature COO Default monitoring stack Scope and integration Offers comprehensive monitoring and analytics for enterprise-level needs, covering cluster and workload performance. However, it lacks direct integration with OpenShift Container Platform and typically requires an external Grafana instance for dashboards. Limited to core components within the cluster, for example, API server and etcd, and to OpenShift-specific namespaces. There is deep integration into OpenShift Container Platform including console dashboards and alert management in the console. Configuration and customization Broader configuration options including data retention periods, storage methods, and collected data types. The COO can delegate ownership of single configurable fields in custom resources to users by using Server-Side Apply (SSA), which enhances customization. Built-in configurations with limited customization options. Data retention and storage Long-term data retention, supporting historical analysis and capacity planning Shorter data retention times, focusing on short-term monitoring and real-time detection. 2.2. Key advantages of using COO Deploying COO helps you address monitoring requirements that are hard to achieve using the default monitoring stack. 2.2.1. Extensibility You can add more metrics to a COO-deployed monitoring stack, which is not possible with core platform monitoring without losing support. You can receive cluster-specific metrics from core platform monitoring through federation. COO supports advanced monitoring scenarios like trend forecasting and anomaly detection. 2.2.2. Multi-tenancy support You can create monitoring stacks per user namespace. You can deploy multiple stacks per namespace or a single stack for multiple namespaces. COO enables independent configuration of alerts and receivers for different teams. 2.2.3. Scalability Supports multiple monitoring stacks on a single cluster. 
Enables monitoring of large clusters through manual sharding. Addresses cases where metrics exceed the capabilities of a single Prometheus instance. 2.2.4. Flexibility Decoupled from OpenShift Container Platform release cycles. Faster release iterations and rapid response to changing requirements. Independent management of alerting rules. 2.3. Target users for COO COO is ideal for users who need high customizability, scalability, and long-term data retention, especially in complex, multi-tenant enterprise environments. 2.3.1. Enterprise-level users and administrators Enterprise users require in-depth monitoring capabilities for OpenShift Container Platform clusters, including advanced performance analysis, long-term data retention, trend forecasting, and historical analysis. These features help enterprises better understand resource usage, prevent performance issues, and optimize resource allocation. 2.3.2. Operations teams in multi-tenant environments With multi-tenancy support, COO allows different teams to configure monitoring views for their projects and applications, making it suitable for teams with flexible monitoring needs. 2.3.3. Development and operations teams COO provides fine-grained monitoring and customizable observability views for in-depth troubleshooting, anomaly detection, and performance tuning during development and operations. 2.4. Using Server-Side Apply to customize Prometheus resources Server-Side Apply is a feature that enables collaborative management of Kubernetes resources. The control plane tracks how different users and controllers manage fields within a Kubernetes object. It introduces the concept of field managers and tracks ownership of fields. This centralized control provides conflict detection and resolution, and reduces the risk of unintended overwrites. Compared to Client-Side Apply, it is more declarative, and tracks field management instead of last applied state. Server-Side Apply Declarative configuration management by updating a resource's state without needing to delete and recreate it. Field management Users can specify which fields of a resource they want to update, without affecting the other fields. Managed fields Kubernetes stores metadata about who manages each field of an object in the managedFields field within metadata. Conflicts If multiple managers try to modify the same field, a conflict occurs. The applier can choose to overwrite, relinquish control, or share management. Merge strategy Server-Side Apply merges fields based on the actor who manages them. Procedure Add a MonitoringStack resource using the following configuration: Example MonitoringStack object apiVersion: monitoring.rhobs/v1alpha1 kind: MonitoringStack metadata: labels: coo: example name: sample-monitoring-stack namespace: coo-demo spec: logLevel: debug retention: 1d resourceSelector: matchLabels: app: demo A Prometheus resource named sample-monitoring-stack is generated in the coo-demo namespace. 
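Before inspecting field managers, you can optionally confirm that the Operator has created the resource. This is a generic query and assumes the namespace and resource name used in the example above:
# Confirm that the generated Prometheus resource exists
oc -n coo-demo get prometheus.monitoring.rhobs sample-monitoring-stack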
Retrieve the managed fields of the generated Prometheus resource by running the following command: USD oc -n coo-demo get Prometheus.monitoring.rhobs -oyaml --show-managed-fields Example output managedFields: - apiVersion: monitoring.rhobs/v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:labels: f:app.kubernetes.io/managed-by: {} f:app.kubernetes.io/name: {} f:app.kubernetes.io/part-of: {} f:ownerReferences: k:{"uid":"81da0d9a-61aa-4df3-affc-71015bcbde5a"}: {} f:spec: f:additionalScrapeConfigs: {} f:affinity: f:podAntiAffinity: f:requiredDuringSchedulingIgnoredDuringExecution: {} f:alerting: f:alertmanagers: {} f:arbitraryFSAccessThroughSMs: {} f:logLevel: {} f:podMetadata: f:labels: f:app.kubernetes.io/component: {} f:app.kubernetes.io/part-of: {} f:podMonitorSelector: {} f:replicas: {} f:resources: f:limits: f:cpu: {} f:memory: {} f:requests: f:cpu: {} f:memory: {} f:retention: {} f:ruleSelector: {} f:rules: f:alert: {} f:securityContext: f:fsGroup: {} f:runAsNonRoot: {} f:runAsUser: {} f:serviceAccountName: {} f:serviceMonitorSelector: {} f:thanos: f:baseImage: {} f:resources: {} f:version: {} f:tsdb: {} manager: observability-operator operation: Apply - apiVersion: monitoring.rhobs/v1 fieldsType: FieldsV1 fieldsV1: f:status: .: {} f:availableReplicas: {} f:conditions: .: {} k:{"type":"Available"}: .: {} f:lastTransitionTime: {} f:observedGeneration: {} f:status: {} f:type: {} k:{"type":"Reconciled"}: .: {} f:lastTransitionTime: {} f:observedGeneration: {} f:status: {} f:type: {} f:paused: {} f:replicas: {} f:shardStatuses: .: {} k:{"shardID":"0"}: .: {} f:availableReplicas: {} f:replicas: {} f:shardID: {} f:unavailableReplicas: {} f:updatedReplicas: {} f:unavailableReplicas: {} f:updatedReplicas: {} manager: PrometheusOperator operation: Update subresource: status Check the metadata.managedFields values, and observe that some fields in metadata and spec are managed by the MonitoringStack resource. Modify a field that is not controlled by the MonitoringStack resource: Change spec.enforcedSampleLimit , which is a field not set by the MonitoringStack resource. Create the file prom-spec-edited.yaml : prom-spec-edited.yaml apiVersion: monitoring.rhobs/v1 kind: Prometheus metadata: name: sample-monitoring-stack namespace: coo-demo spec: enforcedSampleLimit: 1000 Apply the YAML by running the following command: USD oc apply -f ./prom-spec-edited.yaml --server-side Note You must use the --server-side flag. 
Get the changed Prometheus object and note that there is one more section in managedFields which has spec.enforcedSampleLimit : $ oc get prometheus -n coo-demo Example output managedFields: 1 - apiVersion: monitoring.rhobs/v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:labels: f:app.kubernetes.io/managed-by: {} f:app.kubernetes.io/name: {} f:app.kubernetes.io/part-of: {} f:spec: f:enforcedSampleLimit: {} 2 manager: kubectl operation: Apply 1 managedFields 2 spec.enforcedSampleLimit Modify a field that is managed by the MonitoringStack resource: Change spec.logLevel , which is a field managed by the MonitoringStack resource, using the following YAML configuration: # changing the logLevel from debug to info apiVersion: monitoring.rhobs/v1 kind: Prometheus metadata: name: sample-monitoring-stack namespace: coo-demo spec: logLevel: info 1 1 spec.logLevel has been added Apply the YAML by running the following command: $ oc apply -f ./prom-spec-edited.yaml --server-side Example output error: Apply failed with 1 conflict: conflict with "observability-operator": .spec.logLevel Please review the fields above--they currently have other managers. Here are the ways you can resolve this warning: * If you intend to manage all of these fields, please re-run the apply command with the `--force-conflicts` flag. * If you do not intend to manage all of the fields, please edit your manifest to remove references to the fields that should keep their current managers. * You may co-own fields by updating your manifest to match the existing value; in this case, you'll become the manager if the other manager(s) stop managing the field (remove it from their configuration). See https://kubernetes.io/docs/reference/using-api/server-side-apply/#conflicts Notice that the field spec.logLevel cannot be changed using Server-Side Apply, because it is already managed by observability-operator . Use the --force-conflicts flag to force the change. $ oc apply -f ./prom-spec-edited.yaml --server-side --force-conflicts Example output prometheus.monitoring.rhobs/sample-monitoring-stack serverside-applied With the --force-conflicts flag, the field can be forced to change, but since the same field is also managed by the MonitoringStack resource, the Observability Operator detects the change and reverts it back to the value set by the MonitoringStack resource. Note Some Prometheus fields generated by the MonitoringStack resource are influenced by the fields in the MonitoringStack spec stanza, for example, logLevel . These can be changed by changing the MonitoringStack spec . To change the logLevel in the Prometheus object, apply the following YAML to change the MonitoringStack resource: apiVersion: monitoring.rhobs/v1alpha1 kind: MonitoringStack metadata: name: sample-monitoring-stack labels: coo: example spec: logLevel: info To confirm that the change has taken place, query for the log level by running the following command: $ oc -n coo-demo get Prometheus.monitoring.rhobs -o=jsonpath='{.items[0].spec.logLevel}' Example output info Note If a new version of an Operator generates a field that was previously generated and controlled by an actor, the value set by the actor will be overridden. For example, suppose you are managing a field enforcedSampleLimit which is not generated by the MonitoringStack resource. If the Observability Operator is upgraded and the new version of the Operator generates a value for enforcedSampleLimit , this will override the value you have previously set.
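If you script Server-Side Apply changes such as the ones above, it can help to apply them under an explicit field manager name, so that conflicts are attributed to your tooling rather than to the generic kubectl manager. The manager name below is an arbitrary example:
# Apply with an explicit field manager name (my-team-tooling is an arbitrary label)
oc apply -f ./prom-spec-edited.yaml --server-side --field-manager=my-team-tooling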
The Prometheus object generated by the MonitoringStack resource may contain some fields which are not explicitly set by the monitoring stack. These fields appear because they have default values. Additional resources Kubernetes documentation for Server-Side Apply (SSA)
|
[
"apiVersion: monitoring.rhobs/v1alpha1 kind: MonitoringStack metadata: labels: coo: example name: sample-monitoring-stack namespace: coo-demo spec: logLevel: debug retention: 1d resourceSelector: matchLabels: app: demo",
"oc -n coo-demo get Prometheus.monitoring.rhobs -oyaml --show-managed-fields",
"managedFields: - apiVersion: monitoring.rhobs/v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:labels: f:app.kubernetes.io/managed-by: {} f:app.kubernetes.io/name: {} f:app.kubernetes.io/part-of: {} f:ownerReferences: k:{\"uid\":\"81da0d9a-61aa-4df3-affc-71015bcbde5a\"}: {} f:spec: f:additionalScrapeConfigs: {} f:affinity: f:podAntiAffinity: f:requiredDuringSchedulingIgnoredDuringExecution: {} f:alerting: f:alertmanagers: {} f:arbitraryFSAccessThroughSMs: {} f:logLevel: {} f:podMetadata: f:labels: f:app.kubernetes.io/component: {} f:app.kubernetes.io/part-of: {} f:podMonitorSelector: {} f:replicas: {} f:resources: f:limits: f:cpu: {} f:memory: {} f:requests: f:cpu: {} f:memory: {} f:retention: {} f:ruleSelector: {} f:rules: f:alert: {} f:securityContext: f:fsGroup: {} f:runAsNonRoot: {} f:runAsUser: {} f:serviceAccountName: {} f:serviceMonitorSelector: {} f:thanos: f:baseImage: {} f:resources: {} f:version: {} f:tsdb: {} manager: observability-operator operation: Apply - apiVersion: monitoring.rhobs/v1 fieldsType: FieldsV1 fieldsV1: f:status: .: {} f:availableReplicas: {} f:conditions: .: {} k:{\"type\":\"Available\"}: .: {} f:lastTransitionTime: {} f:observedGeneration: {} f:status: {} f:type: {} k:{\"type\":\"Reconciled\"}: .: {} f:lastTransitionTime: {} f:observedGeneration: {} f:status: {} f:type: {} f:paused: {} f:replicas: {} f:shardStatuses: .: {} k:{\"shardID\":\"0\"}: .: {} f:availableReplicas: {} f:replicas: {} f:shardID: {} f:unavailableReplicas: {} f:updatedReplicas: {} f:unavailableReplicas: {} f:updatedReplicas: {} manager: PrometheusOperator operation: Update subresource: status",
"apiVersion: monitoring.rhobs/v1 kind: Prometheus metadata: name: sample-monitoring-stack namespace: coo-demo spec: enforcedSampleLimit: 1000",
"oc apply -f ./prom-spec-edited.yaml --server-side",
"oc get prometheus -n coo-demo",
"managedFields: 1 - apiVersion: monitoring.rhobs/v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:labels: f:app.kubernetes.io/managed-by: {} f:app.kubernetes.io/name: {} f:app.kubernetes.io/part-of: {} f:spec: f:enforcedSampleLimit: {} 2 manager: kubectl operation: Apply",
"changing the logLevel from debug to info apiVersion: monitoring.rhobs/v1 kind: Prometheus metadata: name: sample-monitoring-stack namespace: coo-demo spec: logLevel: info 1",
"oc apply -f ./prom-spec-edited.yaml --server-side",
"error: Apply failed with 1 conflict: conflict with \"observability-operator\": .spec.logLevel Please review the fields above--they currently have other managers. Here are the ways you can resolve this warning: * If you intend to manage all of these fields, please re-run the apply command with the `--force-conflicts` flag. * If you do not intend to manage all of the fields, please edit your manifest to remove references to the fields that should keep their current managers. * You may co-own fields by updating your manifest to match the existing value; in this case, you'll become the manager if the other manager(s) stop managing the field (remove it from their configuration). See https://kubernetes.io/docs/reference/using-api/server-side-apply/#conflicts",
"oc apply -f ./prom-spec-edited.yaml --server-side --force-conflicts",
"prometheus.monitoring.rhobs/sample-monitoring-stack serverside-applied",
"apiVersion: monitoring.rhobs/v1alpha1 kind: MonitoringStack metadata: name: sample-monitoring-stack labels: coo: example spec: logLevel: info",
"oc -n coo-demo get Prometheus.monitoring.rhobs -o=jsonpath='{.items[0].spec.logLevel}'",
"info"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/cluster_observability_operator/cluster-observability-operator-overview
|
Chapter 8. About the installer inventory file
|
Chapter 8. About the installer inventory file Red Hat Ansible Automation Platform works against a list of managed nodes or hosts in your infrastructure that are logically organized, using an inventory file. You can use the Red Hat Ansible Automation Platform installer inventory file to specify your installation scenario and describe host deployments to Ansible. By using an inventory file, Ansible can manage a large number of hosts with a single command. Inventories also help you use Ansible more efficiently by reducing the number of command line options you have to specify. The inventory file can be in one of many formats, depending on the inventory plugins that you have. The most common formats are INI and YAML . Inventory files listed in this document are shown in INI format. The location of the inventory file depends on the installer you used. The following table shows possible locations: Installer Location RPM /opt/ansible-automation-platform/installer RPM bundle tar /ansible-automation-platform-setup-bundle-<latest-version> RPM non-bundle tar /ansible-automation-platform-setup-<latest-version> Container bundle tar /ansible-automation-platform-containerized-setup-bundle-<latest-version> Container non-bundle tar /ansible-automation-platform-containerized-setup-<latest-version> You can verify the hosts in your inventory using the command: ansible all -i <path-to-inventory-file. --list-hosts Example inventory file The first part of the inventory file specifies the hosts or groups that Ansible can work with. For more information on registry_username and registry_password , see Setting registry_username and registry_password . Platform gateway is the service that handles authentication and authorization for the Ansible Automation Platform. It provides a single entry into the Ansible Automation Platform and serves the platform user interface so you can authenticate and access all of the Ansible Automation Platform services from a single location. 8.1. Guidelines for hosts and groups Databases When using an external database, ensure the [database] sections of your inventory file are properly set up. To improve performance, do not colocate the database and the automation controller on the same server. Important When using an external database with Ansible Automation Platform, you must create and maintain that database. Ensure that you clear your external database when uninstalling the Ansible Automation Platform. Automation hub If there is an [automationhub] group, you must include the variables automationhub_pg_host and automationhub_pg_port . Add Ansible automation hub information in the [automationhub] group. Do not install Ansible automation hub and automation controller on the same node. Provide a reachable IP address or fully qualified domain name (FQDN) for the [automationhub] and [automationcontroller] hosts to ensure that users can synchronize and install content from Ansible automation hub and automation controller from a different node. The FQDN must not contain the _ symbol, as it will not be processed correctly in Skopeo. You may use the - symbol, as long as it is not at the start or the end of the host name. Do not use localhost . Private automation hub Do not install private automation hub and automation controller on the same node. You can use the same PostgreSQL (database) instance, but they must use a different (database) name. 
If you install private automation hub from an internal address, and have a certificate which only encompasses the external address, it can result in an installation you cannot use as a container registry without certificate issues. Important You must separate the installation of automation controller and Ansible automation hub because the [database] group does not distinguish between the two if both are installed at the same time. If you use one value in [database] and both automation controller and Ansible automation hub define it, they would use the same database. Automation controller Automation controller does not configure replication or failover for the database that it uses. Automation controller works with any replication that you have. Event-Driven Ansible controller Event-Driven Ansible controller must be installed on a separate server and cannot be installed on the same host as automation hub and automation controller. Platform gateway The platform gateway is the service that handles authentication and authorization for Ansible Automation Platform. It provides a single entry into the platform and serves the platform's user interface. Clustered installations When upgrading an existing cluster, you can also reconfigure your cluster to omit existing instances or instance groups. Omitting the instance or the instance group from the inventory file is not enough to remove them from the cluster. In addition to omitting instances or instance groups from the inventory file, you must also deprovision instances or instance groups before starting the upgrade. For more information, see Deprovisioning nodes or groups . Otherwise, omitted instances or instance groups continue to communicate with the cluster, which can cause issues with automation controller services during the upgrade. If you are creating a clustered installation setup, you must replace [localhost] with the hostname or IP address of all instances. Installers for automation controller and automation hub do not accept [localhost] All nodes and instances must be able to reach any others by using this hostname or address. You cannot use the localhost ansible_connection=local on one of the nodes. Use the same format for the host names of all the nodes. Therefore, this does not work: [automationhub] localhost ansible_connection=local hostA hostB.example.com 172.27.0.4 Instead, use these formats: [automationhub] hostA hostB hostC or [automationhub] hostA.example.com hostB.example.com hostC.example.com 8.2. Deprovisioning nodes or groups You can deprovision nodes and instance groups using the Ansible Automation Platform installer. Running the installer will remove all configuration files and logs attached to the nodes in the group. Note You can deprovision any hosts in your inventory except for the first host specified in the [automationcontroller] group. To deprovision nodes, append node_state=deprovision to the node or group within the inventory file. For example: To remove a single node from a deployment: [automationcontroller] host1.example.com host2.example.com host4.example.com node_state=deprovision or To remove an entire instance group from a deployment: [instance_group_restrictedzone] host4.example.com host5.example.com [instance_group_restrictedzone:vars] node_state=deprovision 8.3. Inventory variables The second part of the example inventory file, following [all:vars] , is a list of variables used by the installer. Using all means the variables apply to all hosts. 
To apply variables to a particular host, use [hostname:vars] . For example, [automationhub:vars] . 8.4. Rules for declaring variables in inventory files The values of string variables are declared in quotes. For example: pg_database='awx' pg_username='awx' pg_password='<password>' When declared in a :vars section, INI values are interpreted as strings. For example, var=FALSE creates a string equal to FALSE . Unlike host lines, :vars sections accept only a single entry per line, so everything after the = must be the value for the entry. Host lines accept multiple key=value parameters per line. Therefore they need a way to indicate that a space is part of a value rather than a separator. Values that contain whitespace can be quoted (single or double). For more information, see Python shlex parsing rules . If a variable value set in an INI inventory must be a certain type (for example, a string or a boolean value), always specify the type with a filter in your task. Do not rely on types set in INI inventories when consuming variables. Note Consider using YAML format for inventory sources to avoid confusion on the actual type of a variable. The YAML inventory plugin processes variable values consistently and correctly. If a parameter value in the Ansible inventory file contains special characters, such as #, { or }, you must double-escape the value (that is enclose the value in both single and double quotation marks). For example, to use mypasswordwith#hashsigns as a value for the variable pg_password , declare it as pg_password='"mypasswordwith#hashsigns"' in the Ansible host inventory file. Disclaimer : Links contained in this information to external website(s) are provided for convenience only. Red Hat has not reviewed the links and is not responsible for the content or its availability. The inclusion of any link to an external website does not imply endorsement by Red Hat of the website or their entities, products or services. You agree that Red Hat is not responsible or liable for any loss or expenses that may result due to your use of (or reliance on) the external site or content. 8.5. Securing secrets in the inventory file You can encrypt sensitive or secret variables with Ansible Vault. However, encrypting the variable names and the variable values makes it hard to find the source of the values. To circumvent this, you can encrypt the variables individually by using ansible-vault encrypt_string , or encrypt a file containing the variables. Procedure Create a file labeled credentials.yml to store the encrypted credentials. USD cat credentials.yml admin_password: my_long_admin_pw pg_password: my_long_pg_pw registry_password: my_long_registry_pw Encrypt the credentials.yml file using ansible-vault . USD ansible-vault encrypt credentials.yml New Vault password: Confirm New Vault password: Encryption successful Important Store your encrypted vault password in a safe place. Verify that the credentials.yml file is encrypted. 
USD cat credentials.yml USDANSIBLE_VAULT;1.1; AES256363836396535623865343163333339613833363064653364656138313534353135303764646165393765393063303065323466663330646232363065316666310a373062303133376339633831303033343135343839626136323037616366326239326530623438396136396536356433656162333133653636616639313864300a353239373433313339613465326339313035633565353464356538653631633464343835346432376638623533613666326136343332313163343639393964613265616433363430633534303935646264633034383966336232303365383763 Run setup.sh for installation of Ansible Automation Platform 2.5 and pass both credentials.yml and the --ask-vault-pass option . USD ANSIBLE_BECOME_METHOD='sudo' ANSIBLE_BECOME=True ANSIBLE_HOST_KEY_CHECKING=False ./setup.sh -e @credentials.yml -- --ask-vault-pass 8.6. Additional inventory file variables You can further configure your Red Hat Ansible Automation Platform installation by including additional variables in the inventory file. These configurations add optional features for managing your Red Hat Ansible Automation Platform. Add these variables by editing the inventory file using a text editor. A table of predefined values for inventory file variables can be found in Inventory file variables in the Red Hat Ansible Automation Platform Installation Guide .
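As noted in the section on securing secrets, you can also encrypt a single value in place with ansible-vault encrypt_string instead of encrypting a whole file; the password below is a placeholder. Paste the generated !vault block into a YAML variables file such as credentials.yml.
# Encrypt one variable value; the output is a !vault-tagged block
ansible-vault encrypt_string 'my_long_admin_pw' --name 'admin_password'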
|
[
"ansible all -i <path-to-inventory-file. --list-hosts",
"[automationcontroller] controller.example.com [automationhub] automationhub.example.com [automationedacontroller] automationedacontroller.example.com [automationgateway] gateway.example.com [database] data.example.com [all:vars] admin_password='<password>' pg_host='' pg_port='' pg_database='awx' pg_username='awx' pg_password='<password>' registry_url='registry.redhat.io' registry_username='<registry username>' registry_password='<registry password>' automationgateway_admin_password=\"\" automationgateway_pg_host=\"\" automationgateway_pg_database=\"\" automationgateway_pg_username=\"\" automationgateway_pg_password=\"\"",
"[automationhub] localhost ansible_connection=local hostA hostB.example.com 172.27.0.4",
"[automationhub] hostA hostB hostC",
"[automationhub] hostA.example.com hostB.example.com hostC.example.com",
"[automationcontroller] host1.example.com host2.example.com host4.example.com node_state=deprovision",
"[instance_group_restrictedzone] host4.example.com host5.example.com [instance_group_restrictedzone:vars] node_state=deprovision",
"pg_database='awx' pg_username='awx' pg_password='<password>'",
"cat credentials.yml admin_password: my_long_admin_pw pg_password: my_long_pg_pw registry_password: my_long_registry_pw",
"ansible-vault encrypt credentials.yml New Vault password: Confirm New Vault password: Encryption successful",
"cat credentials.yml USDANSIBLE_VAULT;1.1; AES256363836396535623865343163333339613833363064653364656138313534353135303764646165393765393063303065323466663330646232363065316666310a373062303133376339633831303033343135343839626136323037616366326239326530623438396136396536356433656162333133653636616639313864300a353239373433313339613465326339313035633565353464356538653631633464343835346432376638623533613666326136343332313163343639393964613265616433363430633534303935646264633034383966336232303365383763",
"ANSIBLE_BECOME_METHOD='sudo' ANSIBLE_BECOME=True ANSIBLE_HOST_KEY_CHECKING=False ./setup.sh -e @credentials.yml -- --ask-vault-pass"
] |
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/planning_your_installation/about_the_installer_inventory_file
|
Chapter 25. Managing the default gateway setting
|
Chapter 25. Managing the default gateway setting The default gateway is a router that forwards network packets when no other route matches the destination of a packet. In a local network, the default gateway is typically the host that is one hop closer to the internet. 25.1. Setting the default gateway on an existing connection by using nmcli In most situations, administrators set the default gateway when they create a connection. However, you can also set or update the default gateway setting on a previously created connection by using the nmcli utility. Prerequisites At least one static IP address must be configured on the connection on which the default gateway will be set. If the user is logged in on a physical console, user permissions are sufficient. Otherwise, the user must have root permissions. Procedure Set the IP addresses of the default gateway: To set the IPv4 default gateway, enter: To set the IPv6 default gateway, enter: Restart the network connection for changes to take effect: Warning All connections currently using this network connection are temporarily interrupted during the restart. Verification Verify that the route is active: To display the IPv4 default gateway, enter: To display the IPv6 default gateway, enter: 25.2. Setting the default gateway on an existing connection by using the nmcli interactive mode In most situations, administrators set the default gateway when they create a connection. However, you can also set or update the default gateway setting on a previously created connection by using the interactive mode of the nmcli utility. Prerequisites At least one static IP address must be configured on the connection on which the default gateway will be set. If the user is logged in on a physical console, user permissions are sufficient. Otherwise, the user must have root permissions. Procedure Open the nmcli interactive mode for the required connection: Set the default gateway To set the IPv4 default gateway, enter: To set the IPv6 default gateway, enter: Optional: Verify that the default gateway was set correctly: Save the configuration: Restart the network connection for changes to take effect: Warning All connections currently using this network connection are temporarily interrupted during the restart. Leave the nmcli interactive mode: Verification Verify that the route is active: To display the IPv4 default gateway, enter: To display the IPv6 default gateway, enter: 25.3. Setting the default gateway on an existing connection by using nm-connection-editor In most situations, administrators set the default gateway when they create a connection. However, you can also set or update the default gateway setting on a previously created connection using the nm-connection-editor application. Prerequisites At least one static IP address must be configured on the connection on which the default gateway will be set. Procedure Open a terminal, and enter nm-connection-editor : Select the connection to modify, and click the gear wheel icon to edit the existing connection. Set the IPv4 default gateway. For example, to set the IPv4 address of the default gateway on the connection to 192.0.2.1 : Open the IPv4 Settings tab. Enter the address in the gateway field to the IP range the gateway's address is within: Set the IPv6 default gateway. For example, to set the IPv6 address of the default gateway on the connection to 2001:db8:1::1 : Open the IPv6 tab. Enter the address in the gateway field to the IP range the gateway's address is within: Click OK . Click Save . 
Restart the network connection for changes to take effect. For example, to restart the example connection using the command line: Warning All connections currently using this network connection are temporarily interrupted during the restart. Verification Verify that the route is active. To display the IPv4 default gateway: To display the IPv6 default gateway: Additional resources Configuring an Ethernet connection by using nm-connection-editor 25.4. Setting the default gateway on an existing connection by using control-center In most situations, administrators set the default gateway when they create a connection. However, you can also set or update the default gateway setting on a previously created connection using the control-center application. Prerequisites At least one static IP address must be configured on the connection on which the default gateway will be set. The network configuration of the connection is open in the control-center application. Procedure Set the IPv4 default gateway. For example, to set the IPv4 address of the default gateway on the connection to 192.0.2.1 : Open the IPv4 tab. Enter the address in the gateway field next to the IP range the gateway's address is within: Set the IPv6 default gateway. For example, to set the IPv6 address of the default gateway on the connection to 2001:db8:1::1 : Open the IPv6 tab. Enter the address in the gateway field next to the IP range the gateway's address is within: Click Apply . Back in the Network window, disable and re-enable the connection by switching the button for the connection to Off and back to On for changes to take effect. Warning All connections currently using this network connection are temporarily interrupted during the restart. Verification Verify that the route is active. To display the IPv4 default gateway: To display the IPv6 default gateway: Additional resources Configuring an Ethernet connection by using control-center 25.5. Setting the default gateway on an existing connection by using nmstatectl In most situations, administrators set the default gateway when they create a connection. However, you can also set or update the default gateway setting on a previously created connection by using the nmstatectl utility. Use the nmstatectl utility to set the default gateway through the Nmstate API. The Nmstate API ensures that, after setting the configuration, the result matches the configuration file. If anything fails, nmstatectl automatically rolls back the changes to avoid leaving the system in an incorrect state. Prerequisites At least one static IP address must be configured on the connection on which the default gateway will be set. The enp1s0 interface is configured, and the IP address of the default gateway is within the subnet of the IP configuration of this interface. The nmstate package is installed. Procedure Create a YAML file, for example ~/set-default-gateway.yml , with the following content: --- routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.0.2.1 next-hop-interface: enp1s0 These settings define 192.0.2.1 as the default gateway, and the default gateway is reachable through the enp1s0 interface. Apply the settings to the system: Additional resources nmstatectl(8) man page on your system /usr/share/doc/nmstate/examples/ directory 25.6.
Setting the default gateway on an existing connection by using the network RHEL system role A host forwards a network packet to its default gateway if the packet's destination can neither be reached through the directly-connected networks nor through any of the routes configured on the host. To configure the default gateway of a host, set it in the NetworkManager connection profile of the interface that is connected to the same network as the default gateway. By using Ansible and the network RHEL system role, you can automate this process and remotely configure connection profiles on the hosts defined in a playbook. In most situations, administrators set the default gateway when they create a connection. However, you can also set or update the default gateway setting on a previously-created connection. Warning You cannot use the network RHEL system role to update only specific values in an existing connection profile. The role ensures that a connection profile exactly matches the settings in a playbook. If a connection profile with the same name already exists, the role applies the settings from the playbook and resets all other settings in the profile to their defaults. To prevent resetting values, always specify the whole configuration of the network connection profile in the playbook, including the settings that you do not want to change. Prerequisites You have prepared the control node and the managed nodes You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Ethernet connection profile with static IP address settings ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: enp1s0 type: ethernet autoconnect: yes ip: address: - 198.51.100.20/24 - 2001:db8:1::1/64 gateway4: 198.51.100.254 gateway6: 2001:db8:1::fffe dns: - 198.51.100.200 - 2001:db8:1::ffbb dns_search: - example.com state: up For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.network/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification Query the Ansible facts of the managed node and verify the active network settings: Additional resources /usr/share/ansible/roles/rhel-system-roles.network/README.md file /usr/share/doc/rhel-system-roles/network/ directory 25.7. How NetworkManager manages multiple default gateways In certain situations, for example for fallback reasons, you set multiple default gateways on a host. However, to avoid asynchronous routing issues, each default gateway of the same protocol requires a separate metric value. Note that RHEL only uses the connection to the default gateway that has the lowest metric set. You can set the metric for both the IPv4 and IPv6 gateway of a connection using the following command: Important Do not set the same metric value for the same protocol in multiple connection profiles to avoid routing issues. If you set a default gateway without a metric value, NetworkManager automatically sets the metric value based on the interface type. 
For that, NetworkManager assigns the default value of this network type to the first connection that is activated, and sets an incremented value to each other connection of the same type in the order they are activated. For example, if two Ethernet connections with a default gateway exist, NetworkManager sets a metric of 100 on the route to the default gateway of the connection that you activate first. For the second connection, NetworkManager sets 101 . The following is an overview of frequently-used network types and their default metrics: Connection type Default metric value VPN 50 Ethernet 100 MACsec 125 InfiniBand 150 Bond 300 Team 350 VLAN 400 Bridge 425 TUN 450 Wi-Fi 600 IP tunnel 675 Additional resources Configuring policy-based routing to define alternative routes 25.8. Configuring NetworkManager to avoid using a specific profile to provide a default gateway You can configure that NetworkManager never uses a specific profile to provide the default gateway. Follow this procedure for connection profiles that are not connected to the default gateway. Prerequisites The NetworkManager connection profile for the connection that is not connected to the default gateway exists. Procedure If the connection uses a dynamic IP configuration, configure that NetworkManager does not use this connection as the default connection for the corresponding IP type, meaning that NetworkManager will never assign the default route to it: Note that setting ipv4.never-default and ipv6.never-default to yes , automatically removes the default gateway's IP address for the corresponding protocol from the connection profile. Activate the connection: Verification Use the ip -4 route and ip -6 route commands to verify that RHEL does not use the network interface for the default route for the IPv4 and IPv6 protocol. 25.9. Fixing unexpected routing behavior due to multiple default gateways There are only a few scenarios, such as when using Multipath TCP, in which you require multiple default gateways on a host. In most cases, you configure only a single default gateway to avoid unexpected routing behavior or asynchronous routing issues. Note To route traffic to different internet providers, use policy-based routing instead of multiple default gateways. Prerequisites The host uses NetworkManager to manage network connections, which is the default. The host has multiple network interfaces. The host has multiple default gateways configured. Procedure Display the routing table: For IPv4, enter: For IPv6, enter: Entries starting with default indicate a default route. Note the interface names of these entries displayed to dev . Use the following commands to display the NetworkManager connections that use the interfaces you identified in the step: In these examples, the profiles named Corporate-LAN and Internet-Provider have the default gateways set. Because, in a local network, the default gateway is typically the host that is one hop closer to the internet, the rest of this procedure assumes that the default gateways in the Corporate-LAN are incorrect. Configure that NetworkManager does not use the Corporate-LAN connection as the default route for IPv4 and IPv6 connections: Note that setting ipv4.never-default and ipv6.never-default to yes , automatically removes the default gateway's IP address for the corresponding protocol from the connection profile. 
Activate the Corporate-LAN connection: Verification Display the IPv4 and IPv6 routing tables and verify that only one default gateway is available for each protocol: For IPv4, enter: For IPv6, enter: Additional resources Configuring policy-based routing to define alternative routes
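Pulling these pieces together, a typical sequence for adding a second uplink that should only act as a fallback is sketched below. The profile name Backup-Uplink and the addresses are assumptions; adjust them to your environment.
# Configure the backup profile with a gateway and a deliberately higher (worse) metric
nmcli connection modify Backup-Uplink ipv4.gateway "198.51.100.1" ipv4.route-metric 300
nmcli connection up Backup-Uplink
# Verify which default route wins
ip -4 route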
|
[
"nmcli connection modify <connection_name> ipv4.gateway \" <IPv4_gateway_address> \"",
"nmcli connection modify <connection_name> ipv6.gateway \" <IPv6_gateway_address> \"",
"nmcli connection up <connection_name>",
"ip -4 route default via 192.0.2.1 dev example proto static metric 100",
"ip -6 route default via 2001:db8:1::1 dev example proto static metric 100 pref medium",
"nmcli connection edit <connection_name>",
"nmcli> set ipv4.gateway \" <IPv4_gateway_address> \"",
"nmcli> set ipv6.gateway \" <IPv6_gateway_address> \"",
"nmcli> print ipv4.gateway: <IPv4_gateway_address> ipv6.gateway: <IPv6_gateway_address>",
"nmcli> save persistent",
"nmcli> activate <connection_name>",
"nmcli> quit",
"ip -4 route default via 192.0.2.1 dev example proto static metric 100",
"ip -6 route default via 2001:db8:1::1 dev example proto static metric 100 pref medium",
"nm-connection-editor",
"nmcli connection up example",
"ip -4 route default via 192.0.2.1 dev example proto static metric 100",
"ip -6 route default via 2001:db8:1::1 dev example proto static metric 100 pref medium",
"ip -4 route default via 192.0.2.1 dev example proto static metric 100",
"ip -6 route default via 2001:db8:1::1 dev example proto static metric 100 pref medium",
"--- routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.0.2.1 next-hop-interface: enp1s0",
"nmstatectl apply ~/set-default-gateway.yml",
"--- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Ethernet connection profile with static IP address settings ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: enp1s0 type: ethernet autoconnect: yes ip: address: - 198.51.100.20/24 - 2001:db8:1::1/64 gateway4: 198.51.100.254 gateway6: 2001:db8:1::fffe dns: - 198.51.100.200 - 2001:db8:1::ffbb dns_search: - example.com state: up",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible managed-node-01.example.com -m ansible.builtin.setup \"ansible_default_ipv4\": { \"gateway\": \"198.51.100.254\", \"interface\": \"enp1s0\", }, \"ansible_default_ipv6\": { \"gateway\": \"2001:db8:1::fffe\", \"interface\": \"enp1s0\", }",
"nmcli connection modify <connection_name> ipv4.route-metric <value> ipv6.route-metric <value>",
"nmcli connection modify <connection_name> ipv4.never-default yes ipv6.never-default yes",
"nmcli connection up <connection_name>",
"ip -4 route default via 192.0.2.1 dev enp1s0 proto static metric 101 default via 198.51.100.1 dev enp7s0 proto static metric 102",
"ip -6 route default via 2001:db8:1::1 dev enp1s0 proto static metric 101 pref medium default via 2001:db8:2::1 dev enp7s0 proto static metric 102 pref medium",
"nmcli -f GENERAL.CONNECTION,IP4.GATEWAY,IP6.GATEWAY device show enp1s0 GENERAL.CONNECTION: Corporate-LAN IP4.GATEWAY: 192.0.2.1 IP6.GATEWAY: 2001:db8:1::1 nmcli -f GENERAL.CONNECTION,IP4.GATEWAY,IP6.GATEWAY device show enp7s0 GENERAL.CONNECTION: Internet-Provider IP4.GATEWAY: 198.51.100.1 IP6.GATEWAY: 2001:db8:2::1",
"nmcli connection modify Corporate-LAN ipv4.never-default yes ipv6.never-default yes",
"nmcli connection up Corporate-LAN",
"ip -4 route default via 198.51.100.1 dev enp7s0 proto static metric 102",
"ip -6 route default via 2001:db8:2::1 dev enp7s0 proto static metric 102 pref medium"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_networking/managing-the-default-gateway-setting_configuring-and-managing-networking
|
Administration Guide
|
Administration Guide Red Hat Certificate System 10 Covers management tasks such as adding users; requesting, renewing, and revoking certificates; publishing CRLs; and managing smart cards. Florian Delehaye Red Hat Customer Content Services [email protected] Marc Muehlfeld Red Hat Customer Content Services Petr Bokoc Red Hat Customer Content Services Filip Hanzelka Red Hat Customer Content Services Tomas Capek Red Hat Customer Content Services Ella Deon Ballard Red Hat Customer Content Services
| null |
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/index
|
1.3. Concurrent Versions System (CVS)
|
1.3. Concurrent Versions System (CVS) Concurrent Versions System , commonly abbreviated as CVS , is a centralized version control system with a client-server architecture. It is a successor to the older Revision Control System (RCS), and allows multiple developers to cooperate on the same project while keeping track of every change made to the files that are under revision control. 1.3.1. Installing and Configuring CVS Installing the cvs Package In Red Hat Enterprise Linux 6, CVS is provided by the cvs package. To install the cvs package and all its dependencies on your system, type the following at a shell prompt as root : yum install cvs This installs a command line CVS client, a CVS server, and other related tools to the system. Setting Up the Default Editor When using CVS on the command line, certain commands such as cvs import or cvs commit require the user to write a short log message. To determine which text editor to start, the cvs client application first reads the contents of the environment variable $CVSEDITOR , then reads the more general environment variable $EDITOR , and if none of these is set, it starts vi . To persistently change the value of the $CVSEDITOR environment variable, run the following command: echo " export CVSEDITOR= command " >> ~/.bashrc This adds the export CVSEDITOR= command line to your ~/.bashrc file. Replace command with a command that runs the editor of your choice (for example, emacs ). Note that for this change to take effect in the current shell session, you must execute the commands in ~/.bashrc by typing the following at a shell prompt: . ~/.bashrc Example 1.14. Setting up the default text editor To configure the CVS client to use Emacs as a text editor, type:
|
[
"~]USD echo \"export CVSEDITOR=emacs\" >> ~/.bashrc ~]USD . ~/.bashrc"
] |
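As a supplement to the shell example above, the following Java sketch simply mirrors the documented editor lookup order ($CVSEDITOR, then $EDITOR, then vi). The class name and printed message are hypothetical; this is an illustration of the resolution logic only and is not part of the cvs package.

public class CvsEditorLookup {
    // Mirrors the documented order: $CVSEDITOR first, then $EDITOR, then vi.
    static String resolveEditor() {
        String editor = System.getenv("CVSEDITOR");
        if (editor == null || editor.isEmpty()) {
            editor = System.getenv("EDITOR");
        }
        return (editor == null || editor.isEmpty()) ? "vi" : editor;
    }

    public static void main(String[] args) {
        System.out.println("cvs would start: " + resolveEditor());
    }
}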
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/developer_guide/collaborating.cvs
|
7.211. Red Hat Enterprise Linux Release Notes
|
7.211. Red Hat Enterprise Linux Release Notes 7.211.1. RHEA-2013:0439 - Red Hat Enterprise Linux 6.4 Release Notes Updated packages containing the Release Notes for Red Hat Enterprise Linux 6.4 are now available. Red Hat Enterprise Linux minor releases are an aggregation of individual enhancement, security and bug fix errata. The Red Hat Enterprise Linux 6.4 Release Notes documents the major changes made to the Red Hat Enterprise Linux 6 operating system and its accompanying applications for this minor release. Detailed notes on all changes in this minor release are available in the Technical Notes. Refer to the Online Release Notes for the most up-to-date version of the Red Hat Enterprise Linux 6.4 Release Notes: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html-single/6.4_Release_Notes/index.html Note Starting with the 6.4 release of the online Release Notes, the "Device Drivers" chapter has been moved to the online Technical Notes: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/6.4_Technical_Notes/ch-device_drivers.html
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/release-notes
|
Chapter 3. Differences from upstream OpenJDK 11
|
Chapter 3. Differences from upstream OpenJDK 11 Red Hat build of OpenJDK in Red Hat Enterprise Linux (RHEL) contains a number of structural changes from the upstream distribution of OpenJDK. The Microsoft Windows version of Red Hat build of OpenJDK attempts to follow RHEL updates as closely as possible. The following list details the most notable Red Hat build of OpenJDK 11 changes: FIPS support. Red Hat build of OpenJDK 11 automatically detects whether RHEL is in FIPS mode and automatically configures Red Hat build of OpenJDK 11 to operate in that mode. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. Cryptographic policy support. Red Hat build of OpenJDK 11 obtains the list of enabled cryptographic algorithms and key size constraints from RHEL. These configuration components are used by the Transport Layer Security (TLS) encryption protocol, the certificate path validation, and any signed JARs. You can set different security profiles to balance safety and compatibility. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. Red Hat build of OpenJDK on RHEL dynamically links against native libraries such as zlib for archive format support and libjpeg-turbo , libpng , and giflib for image support. RHEL also dynamically links against Harfbuzz and Freetype for font rendering and management. The src.zip file includes the source for all the JAR libraries shipped with Red Hat build of OpenJDK. Red Hat build of OpenJDK on RHEL uses system-wide timezone data files as a source for timezone information. Red Hat build of OpenJDK on RHEL uses system-wide CA certificates. Red Hat build of OpenJDK on Microsoft Windows includes the latest available timezone data from RHEL. Red Hat build of OpenJDK on Microsoft Windows uses the latest available CA certificate from RHEL. Additional resources For more information about detecting if a system is in FIPS mode, see the Improve system FIPS detection example on the Red Hat RHEL Planning Jira. For more information about cryptographic policies, see Using system-wide cryptographic policies .
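To see the effect of the cryptographic policy and FIPS integration described above from inside the JVM, a small probe like the following can be useful. It uses only standard JDK APIs (SSLContext and Security); the class name is an assumption, and the output simply reflects whatever policy the host RHEL system currently enforces, so the same program prints different values on an upstream build.

import java.security.Provider;
import java.security.Security;
import javax.net.ssl.SSLContext;

public class CryptoPolicyProbe {
    public static void main(String[] args) throws Exception {
        // TLS protocol versions enabled for the default SSLContext; on RHEL
        // these follow the active system-wide cryptographic policy.
        SSLContext context = SSLContext.getDefault();
        for (String protocol : context.getDefaultSSLParameters().getProtocols()) {
            System.out.println("Enabled TLS protocol: " + protocol);
        }
        // Installed security providers; the list changes when the JDK has
        // configured itself for FIPS mode.
        for (Provider provider : Security.getProviders()) {
            System.out.println("Provider: " + provider.getName());
        }
    }
}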
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.25/rn-openjdk-diff-from-upstream
|
Chapter 129. Splunk
|
Chapter 129. Splunk Since Camel 2.13 Both producer and consumer are supported The Splunk component provides access to Splunk using the Splunk provided client api, and it enables you to publish and search for events in Splunk. 129.1. Dependencies When using splunk with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-splunk-starter</artifactId> </dependency> 129.2. URI format 129.3. Configuring Options Camel components are configured on two separate levels: component level endpoint level 129.3.1. Configuring Component Options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 129.3.2. Configuring Endpoint Options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders , which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse. The following two sections lists all the options, firstly for the component followed by the endpoint. 129.4. Component Options The Splunk component supports 6 options, which are listed below. Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean splunkConfigurationFactory (advanced) To use the SplunkConfigurationFactory. SplunkConfigurationFactory healthCheckConsumerEnabled (health) Used for enabling or disabling all consumer based health checks from this component. true boolean healthCheckProducerEnabled (health) Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. You can turn on producer checks globally by setting camel.health.producersEnabled=true. true boolean 129.5. Endpoint Options The Splunk endpoint is configured using URI syntax: with the following path and query parameters: 129.5.1. Path Parameters (1 parameters) Name Description Default Type name (common) Required Name has no purpose. String 129.5.2. Query Parameters (45 parameters) Name Description Default Type app (common) Splunk app. String connectionTimeout (common) Timeout in MS when connecting to Splunk server. 5000 int host (common) Splunk host. localhost String owner (common) Splunk owner. String port (common) Splunk port. 8089 int scheme (common) Splunk scheme. https String count (consumer) A number that indicates the maximum number of entities to return. int earliestTime (consumer) Earliest time of the search time window. String initEarliestTime (consumer) Initial start offset of the first search. String latestTime (consumer) Latest time of the search time window. String savedSearch (consumer) The name of the query saved in Splunk to run. String search (consumer) The Splunk query to run. String sendEmptyMessageWhenIdle (consumer) If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. false boolean streaming (consumer) Sets streaming mode. Streaming mode sends exchanges as they are received, rather than in a batch. false boolean bridgeErrorHandler (consumer (advanced)) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut ExchangePattern pollStrategy (consumer (advanced)) A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel. PollingConsumerPollStrategy eventHost (producer) Override the default Splunk event host field. String index (producer) Splunk index to write to. 
String raw (producer) Should the payload be inserted raw. false boolean source (producer) Splunk source argument. String sourceType (producer) Splunk sourcetype argument. String tcpReceiverLocalPort (producer) Splunk tcp receiver port defined locally on splunk server. (For example if splunk port 9997 is mapped to 12345, tcpReceiverLocalPort has to be 9997). Integer tcpReceiverPort (producer) Splunk tcp receiver port. int lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean backoffErrorThreshold (scheduler) The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in. int backoffIdleThreshold (scheduler) The number of subsequent idle polls that should happen before the backoffMultipler should kick-in. int backoffMultiplier (scheduler) To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured. int delay (scheduler) Milliseconds before the poll. 500 long greedy (scheduler) If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the run polled 1 or more messages. false boolean initialDelay (scheduler) Milliseconds before the first poll starts. 1000 long repeatCount (scheduler) Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever. 0 long runLoggingLevel (scheduler) The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that. Enum values: TRACE DEBUG INFO WARN ERROR OFF TRACE LoggingLevel scheduledExecutorService (scheduler) Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool. ScheduledExecutorService scheduler (scheduler) To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler. none Object schedulerProperties (scheduler) To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler. Map startScheduler (scheduler) Whether the scheduler should be auto started. true boolean timeUnit (scheduler) Time unit for initialDelay and delay options. Enum values: NANOSECONDS MICROSECONDS MILLISECONDS SECONDS MINUTES HOURS DAYS MILLISECONDS TimeUnit useFixedDelay (scheduler) Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details. true boolean password (security) Password for Splunk. String sslProtocol (security) Set the ssl protocol to use. Enum values: TLSv1.2 TLSv1.1 TLSv1 SSLv3 TLSv1.2 SSLSecurityProtocol token (security) User's token for Splunk. 
This takes precedence over password when both are set. String username (security) Username for Splunk. String useSunHttpsHandler (security) Use sun.net.www.protocol.https.Handler Https handler to establish the Splunk Connection. Can be useful when running in application servers to avoid app. server https handling. false boolean 129.6. Producer Endpoints Endpoint Description stream Streams data to a named index or the default if not specified. When using stream mode be aware of that Splunk has some internal buffer (about 1MB or so) before events gets to the index. If you need realtime, better use submit or tcp mode. submit submit mode. Uses Splunk rest api to publish events to a named index or the default if not specified. tcp tcp mode. Streams data to a tcp port, and requires a open receiver port in Splunk. When publishing events the message body should contain a SplunkEvent. See comment under message body. Example from("direct:start").convertBodyTo(SplunkEvent.class) .to("splunk://submit?username=user&password=123&index=myindex&sourceType=someSourceType&source=mySource")... In this example a converter is required to convert to a SplunkEvent class. 129.7. Consumer Endpoints: Endpoint Description normal Performs normal search and requires a search query in the search option. realtime Performs realtime search on live data and requires a search query in the search option. savedsearch Performs search based on a search query saved in splunk and requires the name of the query in the savedSearch option. Example from("splunk://normal?delay=5000&username=user&password=123&initEarliestTime=-10s&search=search index=myindex sourcetype=someSourcetype") .to("direct:search-result"); The camel-splunk component creates a route exchange per search result with a SplunkEvent in the body. 129.8. Message body Splunk operates on data in key/value pairs. The SplunkEvent class is a placeholder for such data, and should be in the message body for the producer. Likewise it will be returned in the body per search result for the consumer. You can send raw data to Splunk by setting the raw option on the producer endpoint. This is useful for, for example, json/xml and other payloads where Splunk has build in support. 129.9. 
Use Cases Search Twitter for tweets with music and publish events to Splunk: from("twitter://search?type=polling&keywords=music&delay=10&consumerKey=abc&consumerSecret=def&accessToken=hij&accessTokenSecret=xxx") .convertBodyTo(SplunkEvent.class) .to("splunk://submit?username=foo&password=bar&index=camel-tweets&sourceType=twitter&source=music-tweets"); To convert a Tweet to a SplunkEvent you could use a converter like: @Converter public class Tweet2SplunkEvent { @Converter public static SplunkEvent convertTweet(Status status) { SplunkEvent data = new SplunkEvent("twitter-message", null); //data.addPair("source", status.getSource()); data.addPair("from_user", status.getUser().getScreenName()); data.addPair("in_reply_to", status.getInReplyToScreenName()); data.addPair(SplunkEvent.COMMON_START_TIME, status.getCreatedAt()); data.addPair(SplunkEvent.COMMON_EVENT_ID, status.getId()); data.addPair("text", status.getText()); data.addPair("retweet_count", status.getRetweetCount()); if (status.getPlace() != null) { data.addPair("place_country", status.getPlace().getCountry()); data.addPair("place_name", status.getPlace().getName()); data.addPair("place_street", status.getPlace().getStreetAddress()); } if (status.getGeoLocation() != null) { data.addPair("geo_latitude", status.getGeoLocation().getLatitude()); data.addPair("geo_longitude", status.getGeoLocation().getLongitude()); } return data; } } Search Splunk for tweets: from("splunk://normal?username=foo&password=bar&initEarliestTime=-2m&search=search index=camel-tweets sourcetype=twitter") .log("USD{body}"); 129.10. Other comments Splunk comes with a variety of options for leveraging machine generated data with prebuilt apps for analyzing and displaying this. For example the jmx application could be used to publish jmx attributes, such as, route and jvm metrics to Splunk, and displaying this on a dashboard. 129.11. Spring Boot Auto-Configuration The component supports 7 options, which are listed below. Name Description Default Type camel.component.splunk.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.splunk.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.splunk.enabled Whether to enable auto configuration of the splunk component. This is enabled by default. Boolean camel.component.splunk.health-check-consumer-enabled Used for enabling or disabling all consumer based health checks from this component. true Boolean camel.component.splunk.health-check-producer-enabled Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. You can turn on producer checks globally by setting camel.health.producersEnabled=true. 
true Boolean camel.component.splunk.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.splunk.splunk-configuration-factory To use the SplunkConfigurationFactory. The option is a org.apache.camel.component.splunk.SplunkConfigurationFactory type. SplunkConfigurationFactory
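The normal consumer endpoint is illustrated in the examples above; for completeness, the following sketch shows a savedsearch consumer route assembled from the documented savedSearch and delay options. The credentials, the saved-search name, and the log endpoint are placeholders, so treat this as an assumption-based example rather than a verified configuration.

import org.apache.camel.builder.RouteBuilder;

public class SplunkSavedSearchRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Poll the query saved in Splunk under the name "failed-logins"
        // every 10 seconds; each result arrives as a SplunkEvent message body.
        from("splunk://savedsearch?username=admin&password=changeme"
                + "&savedSearch=failed-logins&delay=10000")
            .log("Splunk event: ${body}");
    }
}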
|
[
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-splunk-starter</artifactId> </dependency>",
"splunk://[endpoint]?[options]",
"splunk:name",
"from(\"direct:start\").convertBodyTo(SplunkEvent.class) .to(\"splunk://submit?username=user&password=123&index=myindex&sourceType=someSourceType&source=mySource\")",
"from(\"splunk://normal?delay=5000&username=user&password=123&initEarliestTime=-10s&search=search index=myindex sourcetype=someSourcetype\") .to(\"direct:search-result\");",
"from(\"twitter://search?type=polling&keywords=music&delay=10&consumerKey=abc&consumerSecret=def&accessToken=hij&accessTokenSecret=xxx\") .convertBodyTo(SplunkEvent.class) .to(\"splunk://submit?username=foo&password=bar&index=camel-tweets&sourceType=twitter&source=music-tweets\");",
"@Converter public class Tweet2SplunkEvent { @Converter public static SplunkEvent convertTweet(Status status) { SplunkEvent data = new SplunkEvent(\"twitter-message\", null); //data.addPair(\"source\", status.getSource()); data.addPair(\"from_user\", status.getUser().getScreenName()); data.addPair(\"in_reply_to\", status.getInReplyToScreenName()); data.addPair(SplunkEvent.COMMON_START_TIME, status.getCreatedAt()); data.addPair(SplunkEvent.COMMON_EVENT_ID, status.getId()); data.addPair(\"text\", status.getText()); data.addPair(\"retweet_count\", status.getRetweetCount()); if (status.getPlace() != null) { data.addPair(\"place_country\", status.getPlace().getCountry()); data.addPair(\"place_name\", status.getPlace().getName()); data.addPair(\"place_street\", status.getPlace().getStreetAddress()); } if (status.getGeoLocation() != null) { data.addPair(\"geo_latitude\", status.getGeoLocation().getLatitude()); data.addPair(\"geo_longitude\", status.getGeoLocation().getLongitude()); } return data; } }",
"from(\"splunk://normal?username=foo&password=bar&initEarliestTime=-2m&search=search index=camel-tweets sourcetype=twitter\") .log(\"USD{body}\");"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-splunk-component-starter
|
Network APIs
|
Network APIs OpenShift Container Platform 4.18 Reference guide for network APIs Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/network_apis/index
|
Chapter 5. Using the API
|
Chapter 5. Using the API For more information, see the Rhea API reference , Rhea example suite and Rhea examples . 5.1. Handling messaging events Red Hat build of Rhea is an asynchronous event-driven API. To define how the application handles events, the user registers event-handling functions on the container object. These functions are then called as network activity or timers trigger new events. Example: Handling messaging events var rhea = require("rhea"); var container = rhea.create_container(); container.on("sendable", function (event) { console.log("A message can be sent"); }); container.on("message", function (event) { console.log("A message is received"); }); These are only a few common-case events. The full set is documented in the Rhea API reference . 5.2. Accessing event-related objects The event argument has attributes for accessing the object the event is regarding. For example, the connection_open event sets the event connection attribute. In addition to the primary object for the event, all objects that form the context for the event are set as well. Attributes with no relevance to a particular event are null. Example: Accessing event-related objects event. container event. connection event. session event. sender event. receiver event. delivery event. message 5.3. Creating a container The container is the top-level API object. It is the entry point for creating connections, and it is responsible for running the main event loop. It is often constructed with a global event handler. Example: Creating a container var rhea = require("rhea"); var container = rhea.create_container(); 5.4. Setting the container identity Each container instance has a unique identity called the container ID. When Red Hat build of Rhea makes a network connection, it sends the container ID to the remote peer. To set the container ID, pass the id option to the create_container method. Example: Setting the container identity var container = rhea.create_container( {id: "job-processor-3"} ); If the user does not set the ID, the library will generate a UUID when the container is constructed.
|
[
"var rhea = require(\"rhea\"); var container = rhea.create_container(); container.on(\"sendable\", function (event) { console.log(\"A message can be sent\"); }); container.on(\"message\", function (event) { console.log(\"A message is received\"); });",
"event. container event. connection event. session event. sender event. receiver event. delivery event. message",
"var rhea = require(\"rhea\"); var container = rhea.create_container();",
"var container = rhea.create_container( {id: \"job-processor-3\"} );"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_rhea/3.0/html/using_rhea/using_the_api
|
28.4. Configuring Persistent Memory for use in Device DAX mode
|
28.4. Configuring Persistent Memory for use in Device DAX mode Device DAX ( devdax ) provides a means for applications to directly access storage, without the involvement of a file system. The benefit of device DAX is that it provides a guaranteed fault granularity, which can be configured using the --align option of the ndctl utility: The command shown in the example ensures that the operating system faults in 2MiB pages at a time. For the Intel 64 and AMD64 architectures, the following fault granularities are supported: 4KiB 2MiB 1GiB Device DAX nodes ( /dev/dax N.M ) support only the following system calls: open() close() mmap() fallocate() read() and write() variants are not supported because the use case is tied to persistent memory programming.
|
[
"ndctl create-namespace --force --reconfig= namespace0.0 --mode=devdax --align= 2M"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/configuring-persistent-memory-for-use-in-device-dax-mode
|
Chapter 10. How to use dedicated worker nodes for Red Hat OpenShift Data Foundation
|
Chapter 10. How to use dedicated worker nodes for Red Hat OpenShift Data Foundation Any Red Hat OpenShift Container Platform subscription requires an OpenShift Data Foundation subscription. However, you can save on the OpenShift Container Platform subscription costs if you are using infrastructure nodes to schedule OpenShift Data Foundation resources. It is important to maintain consistency across environments with or without Machine API support. Because of this, it is highly recommended in all cases to have a special category of nodes labeled as either worker or infra or have both roles. See the Section 10.3, "Manual creation of infrastructure nodes" section for more information. 10.1. Anatomy of an Infrastructure node Infrastructure nodes for use with OpenShift Data Foundation have a few attributes. The infra node-role label is required to ensure the node does not consume RHOCP entitlements. The infra node-role label is responsible for ensuring only OpenShift Data Foundation entitlements are necessary for the nodes running OpenShift Data Foundation. Labeled with node-role.kubernetes.io/infra Adding an OpenShift Data Foundation taint with a NoSchedule effect is also required so that the infra node will only schedule OpenShift Data Foundation resources. Tainted with node.ocs.openshift.io/storage="true" The label identifies the RHOCP node as an infra node so that RHOCP subscription cost is not applied. The taint prevents non OpenShift Data Foundation resources to be scheduled on the tainted nodes. Note Adding storage taint on nodes might require toleration handling for the other daemonset pods such as openshift-dns daemonset . For information about how to manage the tolerations, see Knowledgebase article: Openshift-dns daemonsets doesn't include toleration to run on nodes with taints . Example of the taint and labels required on infrastructure node that will be used to run OpenShift Data Foundation services: 10.2. Machine sets for creating Infrastructure nodes If the Machine API is supported in the environment, then labels should be added to the templates for the Machine Sets that will be provisioning the infrastructure nodes. Avoid the anti-pattern of adding labels manually to nodes created by the machine API. Doing so is analogous to adding labels to pods created by a deployment. In both cases, when the pod/node fails, the replacement pod/node will not have the appropriate labels. Note In EC2 environments, you will need three machine sets, each configured to provision infrastructure nodes in a distinct availability zone (such as us-east-2a, us-east-2b, us-east-2c). Currently, OpenShift Data Foundation does not support deploying in more than three availability zones. The following Machine Set template example creates nodes with the appropriate taint and labels required for infrastructure nodes. This will be used to run OpenShift Data Foundation services. Important If you add a taint to the infrastructure nodes, you also need to add tolerations to the taint for other workloads, for example, the fluentd pods. For more information, see the Red Hat Knowledgebase solution Infrastructure Nodes in OpenShift 4 . 10.3. Manual creation of infrastructure nodes Only when the Machine API is not supported in the environment should labels be directly applied to nodes. Manual creation requires that at least 3 RHOCP worker nodes are available to schedule OpenShift Data Foundation services, and that these nodes have sufficient CPU and memory resources. 
To avoid the RHOCP subscription cost, the following is required: Adding a NoSchedule OpenShift Data Foundation taint is also required so that the infra node will only schedule OpenShift Data Foundation resources and repel any other non-OpenShift Data Foundation workloads. Warning Do not remove the node-role node-role.kubernetes.io/worker="" The removal of the node-role.kubernetes.io/worker="" can cause issues unless changes are made both to the OpenShift scheduler and to MachineConfig resources. If already removed, it should be added again to each infra node. Adding node-role node-role.kubernetes.io/infra="" and OpenShift Data Foundation taint is sufficient to conform to entitlement exemption requirements. 10.4. Taint a node from the user interface This section explains the procedure to taint nodes after the OpenShift Data Foundation deployment. Procedure In the OpenShift Web Console, click Compute Nodes , and then select the node which has to be tainted. In the Details page click on Edit taints . Enter the values in the Key <nodes.openshift.ocs.io/storage>, Value <true> and in the Effect <Noschedule> field. Click Save. Verification steps Follow the steps to verify that the node has tainted successfully: Navigate to Compute Nodes . Select the node to verify its status, and then click on the YAML tab. In the specs section check the values of the following parameters: Additional resources For more information, refer to Creating the OpenShift Data Foundation cluster on VMware vSphere .
|
[
"spec: taints: - effect: NoSchedule key: node.ocs.openshift.io/storage value: \"true\" metadata: creationTimestamp: null labels: node-role.kubernetes.io/worker: \"\" node-role.kubernetes.io/infra: \"\" cluster.ocs.openshift.io/openshift-storage: \"\"",
"template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: kb-s25vf machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: kb-s25vf-infra-us-west-2a spec: taints: - effect: NoSchedule key: node.ocs.openshift.io/storage value: \"true\" metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: \"\" cluster.ocs.openshift.io/openshift-storage: \"\"",
"label node <node> node-role.kubernetes.io/infra=\"\" label node <node> cluster.ocs.openshift.io/openshift-storage=\"\"",
"adm taint node <node> node.ocs.openshift.io/storage=\"true\":NoSchedule",
"Taints: Key: node.openshift.ocs.io/storage Value: true Effect: Noschedule"
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/managing_and_allocating_storage_resources/how-to-use-dedicated-worker-nodes-for-openshift-data-foundation_rhodf
|
Chapter 43. Hardware Enablement
|
Chapter 43. Hardware Enablement LSI Syncro CS HA-DAS adapters Red Hat Enterprise Linux 7.1 included code in the megaraid_sas driver to enable LSI Syncro CS high-availability direct-attached storage (HA-DAS) adapters. While the megaraid_sas driver is fully supported for previously enabled adapters, the use of this driver for Syncro CS is available as a Technology Preview. Support for this adapter is provided directly by LSI, your system integrator, or system vendor. Users deploying Syncro CS on Red Hat Enterprise Linux 7.2 and later are encouraged to provide feedback to Red Hat and LSI. (BZ#1062759) tss2 enables TPM 2.0 for IBM Power LE The tss2 package adds IBM implementation of a Trusted Computing Group Software Stack (TSS) 2.0 as a Technology Preview for the IBM Power LE architecture. This package enables users to interact with TPM 2.0 devices. (BZ#1384452) ibmvnic Device Driver Starting with Red Hat Enterprise Linux 7.3, the ibmvnic Device Driver has been available as a Technology Preview for IBM POWER architectures. vNIC (Virtual Network Interface Controller) is a PowerVM virtual networking technology that delivers enterprise capabilities and simplifies network management. It is a high-performance, efficient technology that when combined with SR-IOV NIC provides bandwidth control Quality of Service (QoS) capabilities at the virtual NIC level. vNIC significantly reduces virtualization overhead, resulting in lower latencies and fewer server resources, including CPU and memory, required for network virtualization. (BZ# 1391561 , BZ#947163)
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.5_release_notes/technology_previews_hardware_enablement
|
Making open source more inclusive
|
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
| null |
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/getting_started_guide/making-open-source-more-inclusive
|
8.2.3.2. Incremental Backups
|
8.2.3.2. Incremental Backups Unlike full backups, incremental backups first look to see whether a file's modification time is more recent than its last backup time. If it is not, the file has not been modified since the last backup and can be skipped this time. On the other hand, if the modification date is more recent than the last backup date, the file has been modified and should be backed up. Incremental backups are used in conjunction with a regularly-occurring full backup (for example, a weekly full backup, with daily incrementals). The primary advantage gained by using incremental backups is that the incremental backups run more quickly than full backups. The primary disadvantage to incremental backups is that restoring any given file may mean going through one or more incremental backups until the file is found. When restoring a complete file system, it is necessary to restore the last full backup and every subsequent incremental backup. In an attempt to alleviate the need to go through every incremental backup, a slightly different approach was implemented. This is known as the differential backup .
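The selection rule described above — include a file only when its modification time is newer than the time of the last backup — can be sketched in Java as follows. The directory path and timestamp are hypothetical; a real backup tool would also copy the selected files and record the new backup time.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.FileTime;
import java.time.Instant;
import java.util.stream.Stream;

public class IncrementalSelection {
    public static void main(String[] args) throws IOException {
        Path root = Paths.get("/home");
        // Timestamp of the last backup run; anything modified after it is included.
        FileTime lastBackup = FileTime.from(Instant.parse("2024-01-01T00:00:00Z"));
        try (Stream<Path> files = Files.walk(root)) {
            files.filter(Files::isRegularFile)
                 .filter(path -> {
                     try {
                         return Files.getLastModifiedTime(path).compareTo(lastBackup) > 0;
                     } catch (IOException e) {
                         return false; // unreadable entries are skipped
                     }
                 })
                 .forEach(path -> System.out.println("include in incremental backup: " + path));
        }
    }
}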
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s3-disaster-backups-types-incremental
|
Chapter 50. Virtualization
|
Chapter 50. Virtualization USB 3.0 support for KVM guests USB 3.0 host adapter (xHCI) emulation for KVM guests remains a Technology Preview in Red Hat Enterprise Linux 7. (BZ#1103193) Select Intel network adapters now support SR-IOV as a guest on Hyper-V In this update for Red Hat Enterprise Linux guest virtual machines running on Hyper-V, a new PCI passthrough driver adds the ability to use the single-root I/O virtualization (SR-IOV) feature for Intel network adapters supported by the ixgbevf driver. This ability is enabled when the following conditions are met: SR-IOV support is enabled for the network interface controller (NIC) SR-IOV support is enabled for the virtual NIC SR-IOV support is enabled for the virtual switch The virtual function (VF) from the NIC is attached to the virtual machine. The feature is currently supported with Microsoft Windows Server 2016. (BZ#1348508) No-IOMMU mode for VFIO drivers As a Technology Preview, this update adds No-IOMMU mode for virtual function I/O (VFIO) drivers. The No-IOMMU mode provides the user with full user-space I/O (UIO) access to a direct memory access (DMA)-capable device without a I/O memory management unit (IOMMU). Note that in addition to not being supported, using this mode is not secure due to the lack of I/O management provided by IOMMU. (BZ# 1299662 ) virt-v2v can now use vmx configuration files to convert VMware guests As a Technology Preview, the virt-v2v utility now includes the vmx input mode, which enables the user to convert a guest virtual machine from a VMware vmx configuration file. Note that to do this, you also need access to the corresponding VMware storage, for example by mounting the storage using NFS. It is also possible to access the storage using SSH, by adding the -it ssh parameter. (BZ# 1441197 , BZ# 1523767 ) virt-v2v can convert Debian and Ubuntu guests As a technology preview, the virt-v2v utility can now convert Debian and Ubuntu guest virtual machines. Note that the following problems currently occur when performing this conversion: virt-v2v cannot change the default kernel in the GRUB2 configuration, and the kernel configured in the guest is not changed during the conversion, even if a more optimal version of the kernel is available on the guest. After converting a Debian or Ubuntu VMware guest to KVM, the name of the guest's network interface may change, and thus requires manual configuration. (BZ# 1387213 ) Virtio devices can now use vIOMMU As a Technology Preview, this update enables virtio devices to use virtual Input/Output Memory Management Unit (vIOMMU). This guarantees the security of Direct Memory Access (DMA) by allowing the device to DMA only to permitted addresses. However, note that only guest virtual machines using Red Hat Enterprise Linux 7.4 or later are able to use this feature. (BZ# 1283251 , BZ#1464891) virt-v2v converts VMWare guests faster and more reliably As a Technology Preview, the virt-v2v utility can now use the VMWare Virtual Disk Development Kit (VDDK) to import a VMWare guest virtual machine to a KVM guest. This enables virt-v2v to connect directly to the VMWare ESXi hypervisor, which improves the speed and reliability of the conversion. Note that this conversion import method requires the external nbdkit utility and its VDDK plug-in. (BZ#1477912) Open Virtual Machine Firmware The Open Virtual Machine Firmware (OVMF) is available as a Technology Preview in Red Hat Enterprise Linux 7. OVMF is a UEFI secure boot environment for AMD64 and Intel 64 guests. 
However, OVMF is not bootable with virtualization components available in RHEL 7. Note that OVMF is fully supported in RHEL 8. (BZ#653382)
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.5_release_notes/technology_previews_virtualization
|
Global File System 2
|
Global File System 2 Red Hat Enterprise Linux 6 Red Hat Global File System 2 Steven Levine Red Hat Customer Content Services [email protected]
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/global_file_system_2/index
|
Chapter 3. Viewing CVEs in the Red Hat Insights for OpenShift vulnerability dashboard to find your vulnerable OpenShift clusters and images
|
Chapter 3. Viewing CVEs in the Red Hat Insights for OpenShift vulnerability dashboard to find your vulnerable OpenShift clusters and images In the CVE list view, you can assess the CVEs impacting your clusters so that you can take the appropriate actions to protect your organization. You can also focus on a single CVE by clicking on CVE ID to see details that help you understand exactly which clusters are exposed by that CVE. In the default view of the CVE list view, you are able to get details such as the following: CVE ID : Shows details about the CVE, CVE ID number Publish date : Shows the date the CVE was published Severity : Shows the severity rating (Critical, Important, Moderate, Low, or Unknown) of the CVE CVSS base score : Shows the Common Vulnerability Scoring System (CVSS) base score (0-10) Exposed clusters : Shows the number of clusters currently affected by a CVE Exposed images : Shows the number of images deployed in your clusters that are currently affected by a CVE From this view, you can filter and sort using these details to help you focus on the most critical CVEs affecting your clusters and images. Important The default view of the CVE list page shows a default filter selection of ( Exposed clusters with 1 or More clusters ). The default filters show you CVEs that affect one or more clusters in your organization. To find all CVEs, including those that do not affect any clusters reported by Red Hat Insights for OpenShift, you can click the X beside the filters to deselect the filter. Filters are persistent throughout your session in the Red Hat Hybrid Console. Additional resources Severity Ratings Official CVSS documentation 3.1. Navigating to the CVE list view You can use Red Hat Insights for OpenShift vulnerability dashboard to view CVEs affecting your organization to help you find information that helps you achieve your desired security posture. To start looking at CVEs related to your organization, you need to navigate to the CVEs listed in the Vulnerability dashboard. Prerequisites Your Red Hat account and your cluster are registered to the same organization. You have logged into the Red Hat Hybrid Cloud Console. Your cluster has been registered to Red Hat OpenShift Cluster Manager. For more information about registering clusters, see Registering OpenShift Container Platform clusters to OpenShift Cluster Manager . Your cluster has been active (Telemetry and the Insights Operator has sent data about your cluster) in the past 30 days. For more information about Telemetry and the Insights Operator, see About remote health monitoring . Procedure Navigate to the OpenShift > Vulnerability Dashboard > CVEs to view the CVEs that affect your clusters and images. If you see a message that reads, "No CVEs found," Red Hat Insights has not found any CVEs that affect your Insights for OpenShift infrastructure. 3.2. Refining the CVE list view results to protect your organization To make the most use of Red Hat Insights for OpenShift vulnerability dashboard, you can refine the CVE list view results by: Searching for a specific CVE Finding information about a specific CVE Filtering results in the CVE list view Filtering CVEs by severity Sorting results in the CVE list view 3.2.1. Searching for a specific CVE You can use the filters in the CVE list view to search for details about a CVE. Prerequisites Your Red Hat account and your cluster are registered to the same organization. You have logged into the Red Hat Hybrid Cloud Console. 
Your cluster has been registered to Red Hat OpenShift Cluster Manager. For more information about registering clusters, see Registering OpenShift Container Platform clusters to OpenShift Cluster Manager . Your cluster has been active (Telemetry and the Insights Operator has sent data about your cluster) in the past 30 days. For more information about Telemetry and the Insights Operator, see About remote health monitoring . Procedure Navigate to OpenShift > Vulnerability Dashboard > CVEs . Select CVE in the filter drop-down list. Enter the CVE ID you want information about (for this example, CVE-2022-2526 ) into the search field. Wait for the search results to populate (or press Return or Enter). You will see a list of CVE IDs that match what you entered in the search box. For this example, two CVEs (2022-2526 and 2022-25265) match because both contain numbers from the search. The search results also show you information about the Publish date , Severity , CVSS base score , Exposed clusters , and Exposed images related to the CVE. Click the CVE ID to get more details about the CVE. You will see details about the CVE. A list of clusters affected by the CVE show by default in the Exposed cluster tab. (Optional) Click the Exposed images tab to see information about images affected by the CVE. 3.2.2. Finding information about a specific CVE In the Insights for OpenShift vulnerability dashboard, you can click a CVE to open the CVE details view. In the details view, you can see a description of the CVE and other information, such as: CVE ID Publish date Description of the CVE Severity level CVSS base score You can also see a list of exposed clusters because the Exposed clusters tab is selected by default. The Exposed clusters tab shows how many exposed clusters are affected by the CVE. In this view you can get the following information: Name of the cluster Status of the cluster Type of cluster Version of OpenShift used Provider of the cluster, such as AWS or other Last Seen Shows the last time (in the form of minutes, hours, or days) since information was last uploaded from the cluster to the Insights for OpenShift vulnerability dashboard service. You can also get information about exposed images related to the CVE. When you click the Exposed images tab, you can get the following information: Name of the image Registry name that the image is registered to Version of the registry Clusters shows how many images that are vulnerable or exposed by the CVE in the cluster. Clicking on a number in this column will show you: Prerequisites Your Red Hat account and your cluster are registered to the same organization. You have logged into the Red Hat Hybrid Cloud Console. Your cluster has been registered to Red Hat OpenShift Cluster Manager. For more information about registering clusters, see Registering OpenShift Container Platform clusters to OpenShift Cluster Manager . Your cluster has been active (Telemetry and the Insights Operator has sent data about your cluster) in the past 30 days. For more information about Telemetry and the Insights Operator, see About remote health monitoring . Procedure Navigate to OpenShift > Vulnerability Dashboard > CVEs . Review the resulting list and find a CVE ID of interest (for example, 2022-2526). Click the CVE ID to find additional details about the CVE. Note If your clusters are not affected by the CVE, you will see the message: "No matching clusters found." 3.2.3. 
Filtering results in the CVE list view In the CVE list view, you can apply filters to the list of CVEs so that you can focus on specific information, such as the severity level of a CVE, or clusters in a specific version of Insights for OpenShift. After selecting an individual CVE, you can apply primary and secondary filters to the resulting list of clusters. The filters and options that you can apply are: CVE ID : Filter by ID or description. Publish date : Select from All, Last 7 days, Last 30 days, Last 90 days, Last year, or More than 1 year ago. Severity : Select one or more values: Critical, Important, Moderate, Low, or Unknown. CVSS base score : Enter a range from 0-10. Exposed clusters : Select to only show CVEs with clusters currently affected, or with no clusters affected. Prerequisites Your Red Hat account and your cluster are registered to the same organization. You have logged into the Red Hat Hybrid Cloud Console. Your cluster has been registered to Red Hat OpenShift Cluster Manager. For more information about registering clusters, see Registering OpenShift Container Platform clusters to OpenShift Cluster Manager . Your cluster has been active (Telemetry and the Insights Operator has sent data about your cluster) in the past 30 days. For more information about Telemetry and the Insights Operator, see About remote health monitoring . Procedure Navigate to OpenShift > Vulnerability Dashboard > CVEs . The default view of the CVE list view shows a default filter selection of 1 or More clusters . Select a primary filter (for example, Publish date ) from the drop-down list of filters. Select a secondary filter from the drop-down list of filters. For this example, select Last 30 days from the Filter by publish date drop-down arrow. Note Your currently selected filters are visible to help you keep track of what you are filtering. Confirm your filter selection, and then review the resulting information. This example shows CVEs from the last 30 days. If the last action results in a message indicating "No matching CVEs found," this means your clusters are not affected by any CVEs. Important Filters remain active until you deselect them or leave the Red Hat Insights for OpenShift session. Reset or deselect unneeded filters to avoid unintended results. To deactivate filters. click the X to each filter (or any default filters) you selected, as needed. 3.2.4. Filtering CVEs by severity In Red Hat Insights for OpenShift, you can filter the CVE list to show the most critical CVEs. Prerequisites Your Red Hat account and your cluster are registered to the same organization. You have logged into the Red Hat Hybrid Cloud Console. Your cluster has been registered to Red Hat OpenShift Cluster Manager. For more information about registering clusters, see Registering OpenShift Container Platform clusters to OpenShift Cluster Manager . Your cluster has been active (Telemetry and the Insights Operator has sent data about your cluster) in the past 30 days. For more information about Telemetry and the Insights Operator, see About remote health monitoring . Procedure Navigate to OpenShift > Vulnerability Dashboard > CVEs . Click the drop-down filter list. Select Severity . Click the drop-down arrow in the Filter by Severity field. Select a filter (for example, Critical ) to display all CVEs with that severity rating. Click the CVE ID to get additional information. 3.2.5. Sorting results in the CVE list view You can also sort the list of CVEs in this view to assess the most relevant information first. 
For example, to see all of the most critical CVEs affecting your clusters and images, sort by the severity or CVSS base score. You can sort the CVE page results by these columns: CVE ID Publish date Severity CVSS base score Exposed clusters Exposed images Prerequisites Your Red Hat account and your cluster are registered to the same organization. You have logged into the Red Hat Hybrid Cloud Console. Your cluster has been registered to Red Hat OpenShift Cluster Manager. For more information about registering clusters, see Registering OpenShift Container Platform clusters to OpenShift Cluster Manager . Your cluster has been active (Telemetry and the Insights Operator has sent data about your cluster) in the past 30 days. For more information about Telemetry and the Insights Operator, see About remote health monitoring . Procedure Navigate to OpenShift > Vulnerability Dashboard > CVEs . Click the sort arrow to the column heading of the column you want to sort.
| null |
https://docs.redhat.com/en/documentation/red_hat_insights_for_openshift/1-latest/html/assessing_security_vulnerabilities_in_your_openshift_cluster_using_red_hat_insights/assembly_vuln-using-the-cve-list-view
|
Chapter 8. Bug fixes
|
Chapter 8. Bug fixes This part describes bugs fixed in Red Hat Enterprise Linux 9.4 that have a significant impact on users. 8.1. Installer and image creation Anaconda displays WWID identifiers for multipath storage devices on the Installation Destination screen Previously, Anaconda did not display any details, for example, device number, WWPN, or LUN for the multipath storage devices. As a consequence, it was difficult to select the correct installation destination from the Installation Destination > Add a disk screen. With this update, Anaconda now displays WWID identifiers for multipath storage devices. As a result, you can now easily identify and select the required installation destination on the advanced storage device screen. Jira:RHEL-11384 [1] Installer now accepts additional time zone definitions in Kickstart files Anaconda switched to a different, more restrictive method of validating time zone selections. This caused some time zone definitions, such as Japan, to be no longer valid despite being accepted in versions. Legacy Kickstart files with these definitions had to be updated. Otherwise, they would default to the Americas/New_York time zone. The list of valid time zones was previously taken from pytz.common_timezones in the pytz Python library. This update changes the validation settings for the timezone Kickstart command to use pytz.all_timezones , which is a superset of the common_timezones list, and allows significantly more time zones to be specified. This change ensures that old Kickstart files made for Red Hat Enterprise Linux 6 still specify valid time zones. Note: This change only applies to the timezone Kickstart command. The time zone selection in the graphical and text-based interactive interfaces remains unchanged. Existing Kickstart files for Red Hat Enterprise Linux 9 that had valid time zone selections do not require any updates. Jira:RHEL-13150 [1] The installer now correctly creates bond device with multiple ports and a BOOTIF option Previously, the installation program created incorrect connection profiles when the installation was booted with a bond network device with multiple ports along with the BOOTIF boot option. Consequently, the device used by the BOOTIF option was not added to the bond device though it was configured as one of its ports. With this update, the installation program now correctly creates profiles in initramfs when the BOOTIF boot option is used. As a result, all the specified ports are now added to the bond device on the installed system. Jira:RHEL-4766 Anaconda replaces the misleading error message when failing to boot an installation image Previously, when the installation program failed to boot the installation image, for example due to missing source of stage2 specified in inst.stage2 or inst.repo , Anaconda displayed the following misleading error message: With this update, Anaconda issues a proper warning message to minimize the confusion. Jira:RHEL-5638 The new version of xfsprogs no longer shrinks the size of /boot Previously, the xfsprogs package with the 5.19 version in the RHEL 9.3 caused the size of /boot to shrink. As a consequence, it caused a difference in the available space on the /boot partition, if compared to the RHEL 9.2 version. This fix increases the /boot partition to 600 MiB for all images, instead of 500 MiB, and the /boot partition is no longer affected by space issues. Jira:RHEL-7999 8.2. 
Security Libreswan accepts IPv6 SAN extensions Previously, IPsec connection failed when setting up certificate-based authentication with a certificate that contained a subjectAltName (SAN) extension with an IPv6 address. With this update, the pluto daemon has been modified to accept IPv6 SAN as well as IPv4. As a result, IPsec connection is now correctly established with IPv6 address embedded in the certificate as an ID. Jira:RHEL-12278 Rules for managing virtual routing with ip vrf are added to the SELinux policy You can use the ip vrf command to manage virtual routing of other network services. Previously, selinux-policy did not contain rules to support this usage. With this update, SELinux policy rules allow explicit transitions from the ip domain to the httpd , sshd , and named domains. These transitions apply when the ip command uses the setexeccon library call. Jira:RHEL-14246 [1] SELinux policy denies SSH login for unconfined users when unconfined_login is set to off Previously, the SELinux policy was missing a rule to deny unconfined users to log in via SSH when the unconfined_login boolean was set to off . As a consequence, with unconfined_login set to off , users still could log in with SSHD to an unconfined domain. This update adds a rule to the SELinux policy, and as a result, users cannot log in via sshd as unconfined when unconfined_login is off . Jira:RHEL-1551 SELinux policy allows rsyslogd to execute confined commands Previously, the SELinux policy was missing a rule to allow the rsyslogd daemon to execute SELinux-confined commands, such as systemctl . As a consequence, commands executed as an argument of the omprog directive failed. This update adds rules to the SELinux policy so that executables in the /usr/libexec/rsyslog directory that are run as an argument of omprog are in the syslogd_unconfined_script_t unconfined domain. As a result, commands executed as an argument of omprog finish successfully. Jira:RHEL-11174 kmod runs in the SELinux MLS policy Previously, the SELinux did not assign a private type for the /var/run/tmpfiles.d/static-nodes.conf file. As a consequence, the kmod utility may fail to work in the SELinux multi-level security (MLS) policy. This update adds the kmod_var_run_t label for /var/run/tmpfiles.d/static-nodes.conf to the SELinux policy, and as a result, kmod runs successfully in the SELinux MLS policy. Jira:RHEL-1553 selinux-autorelabel runs in SELinux MLS policy Previously, the SELinux policy did not assign a private type for the /usr/libexec/selinux/selinux-autorelabel utility. As a consequence, selinux-autorelabel.service might fail to work in the SELinux multi-level security (MLS) policy. This update adds the semanage_exec_t label to /usr/libexec/selinux/selinux-autorelabel , and as a result, selinux-autorelabel.service runs successfully in the SELinux MLS policy. Jira:RHEL-14289 /bin = /usr/bin file context equivalency rule added to SELinux policy Previously, the SELinux policy did not contain the /bin = /usr/bin file context equivalency rule. As a consequence, the restorecond daemon did not work correctly. This update adds the missing rule to the policy, and as a consequence, restorecond works correctly in SELinux enforcing mode. IMPORTANT This change overrides any local policy modules which use file context specification for a pattern in /bin . Jira:RHEL-5032 SELinux policy contains rules for additional services and applications This version of the selinux-policy package contains additional rules. 
Most notably, users in the sysadm_r role can execute the following commands: sudo traceroute (RHEL-14077) sudo tcpdump (RHEL-15432) Jira:RHEL-15432 SELinux policy adds permissions for QAT firmware Previously, when updating the Intel QuickAssist Technology (QAT) with the Intel VT-d kernel option enabled, missing SELinux permissions caused denials. This update adds additional permissions for the qat service. As a result, QAT can be updated correctly. Jira:RHEL-19051 [1] Rsyslog can execute privileged commands through omprog Previously, the omprog module of Rsyslog could not execute certain external programs, especially programs that contain privileged commands. As a consequence, the use of scripts that involve privileged commands through omprog was restricted. With this update, the SELinux policy was adjusted. Place your scripts into the /usr/libexec/rsyslog directory to ensure compatibility with the adjusted SELinux policy. As a result, Rsyslog now can execute scripts, including those with privileged commands, through the omprog module. Jira:RHEL-5196 The semanage fcontext command no longer reorders local modifications The semanage fcontext -l -C command lists local file context modifications stored in the file_contexts.local file. The restorecon utility processes the entries in the file_contexts.local from the most recent entry to the oldest. Previously, semanage fcontext -l -C listed the entries in an incorrect order. This mismatch between processing order and listing order caused problems when managing SELinux rules. With this update, semanage fcontext -l -C displays the rules in the correct and expected order, from the oldest to the newest. Jira:RHEL-25263 [1] CardOS 5.3 cards with offsets no longer cause problems in OpenSC Previously, file caching did not work correctly for some CardOS 5.3 cards that stored certificates on different offsets of a single PKCS #15 file. This occurred because file caching ignored the offset part of the file, which caused repetitive overriding of the cache and reading invalid data from file cache. The problem was identified and fixed upstream, and after this update, CardOS 5.3 cards work correctly with the file cache. Jira:RHEL-4079 [1] 8.3. Subscription management subscription-manager no longer retains nonessential text in the terminal Starting with RHEL 9.1, subscription-manager displays progress information while processing any operation. Previously, for some languages, typically non-Latin, progress messages did not clean up after the operation finished. With this update, all the messages are cleaned up properly when the operation finishes. If you have disabled the progress messages before, you can re-enable them by entering the following command: Bugzilla:2136694 [1] 8.4. Software management The librhsm library now returns the correct /etc/rhsm-host prefix if librhsm is run in a container The librhsm library rewrites path prefixes to CA certificates from the /etc/rhsm to /etc/rhsm-host path if librhsm is run in a container. Previously, librhsm returned the wrong /etc/rhsm-host-host prefix because of a string manipulation mistake. With this update, the issue has been fixed, and the librhsm library now returns the correct /etc/rhsm-host prefix. Jira:RHEL-14224 systemd now correctly manages the /run/user/0 directory created by librepo Previously, if the librepo functions were called from an Insights client before logging in root, the /run/user/0 directory could be created with a wrong SELinux context type. 
This prevented systemd from cleaning the directory after you logged out from root. With this update, the librepo package now sets a default creation type according to default file system labeling rules defined in a SELinux policy. As a result, systemd now correctly manages the /run/user/0 directory created by librepo . Jira:RHEL-11240 systemd now correctly manages the /run/user/0 directory created by libdnf Previously, if the libdnf functions were called from an Insights client before logging in root, the /run/user/0 directory could be created with a wrong SELinux context type. This prevented systemd from cleaning the directory after you logged out from root. With this update, the libdnf package now sets a default creation type according to default file system labeling rules defined in a SELinux policy. As a result, systemd now correctly manages the /run/user/0 directory created by libdnf . Jira:RHEL-11238 The dnf needs-restarting --reboothint command now recommends a reboot to update the CPU microcode To fully update the CPU microcode, you must reboot a system. Previously, when you installed the microcode_ctl package, which contains the updated CPU microcode, the dnf needs-restarting --reboothint command did not recommend the reboot. With this update, the issue has been fixed, and dnf needs-restarting --reboothint now recommends a reboot to update the CPU microcode. Jira:RHEL-4600 8.5. Shells and command-line tools The top -u command now displays at least one process when you sort the processes by memory Previously, when you executed the top command with the -u <user> parameter, where the user was different from the one running the command, all processes disappeared when the M key was pressed to sort the processes by memory. With this update, the top command displays at least one process when you sort the processes by memory. Note To preserve the position of the cursor, not all processes are displayed. You can scroll up through the results to display the remaining processes. Jira:RHEL-16278 ReaR now determines the presence of a BIOS boot loader when both BIOS and UEFI boot loaders are installed Previously, in a hybrid boot loader setup ( UEFI and BIOS ), when UEFI was used to boot, Relax-and-Recover (ReaR) restored only the UEFI boot loader and not the BIOS boot loader. This would result in a system that had a GUID Partition Table ( GPT ), a BIOS Boot Partition, but not a BIOS boot loader. In this situation, ReaR failed to create the rescue image, the attempt to produce a backup or a rescue image by using the rear mkbackup or rear mkrescue command would fail with the following error message: With this update, ReaR determines the presence of both UEFI and BIOS boot loaders, restores them, and does not fail when it does not encounter the BIOS boot loader on the system with the BIOS Boot Partition in GPT . As a result, systems with the hybrid UEFI and BIOS boot loader setup can be backed up and recovered multiple times. Jira:RHEL-16864 [1] ReaR no longer uses the logbsize , sunit and swidth mount options during recovery Previously, when restoring an XFS file system with the parameters different from the original ones by using the MKFS_XFS_OPTIONS configuration setting, Relax-and-Recover (ReaR) mounted this file system with mount options applicable for the original file system, but not for the restored file system. 
As a consequence, the disk layout recreation would fail with the following error message when ReaR ran the mount command : The kernel log displayed either of the following messages: With this update, ReaR avoids using the logbsize , sunit and swidth mount options when mounting recreated XFS file systems. As a result, when you use the MKFS_XFS_OPTIONS configuration setting, the disk layout recreation succeeds. Jira:RHEL-10478 [1] ReaR recovery no longer fails on systems with a small thin pool metadata size Previously, ReaR did not save the size of the pool metadata volume when saving a layout of an LVM volume group with a thin pool. During recovery, ReaR recreated the pool with the default size even if the system used a non-default pool metadata size. As a consequence, when the original pool metadata size was smaller than the default size and no free space was available in the volume group, the layout recreation during system recovery failed with a message in the log similar to these examples: or With this update, the recovered system has a metadata volume with the same size as the original system. As a result, the recovery of a system with a small thin pool metadata size and no extra free space in the volume group finishes successfully. Jira:RHEL-6984 ReaR now preserves logs from the bprestore command of NetBackup in the rescue system and the recovered system Previously, when using the NetBackup integration ( BACKUP=NBU ), ReaR added the log from the bprestore command during recovery to a directory that was deleted on exit. Additionally, ReaR did not save further logs produced by the command under the /usr/openv/netbackup/logs/bprestore/ directory on the recovered system. As a consequence, if the bprestore command failed during recovery, the logs were deleted unless the rear recover command was run with the -d or -D option. Moreover, even if the recovery finished successfully, the logs under /usr/openv/netbackup/logs/bprestore/ directory were lost after a reboot and could not be examined. With this update, ReaR keeps the log from the bprestore command in the /var/lib/rear/restore directory in the rescue system where it persists after the rear recover command has finished until the rescue system is rebooted. If the system is recovered, all logs from /usr/openv/netbackup/logs/bprestore/ are copied to the /var/log/rear/recover/restore directory together with the log from /var/lib/rear/restore in case further examination is required. Jira:RHEL-17393 ReaR no longer fails during recovery if the TMPDIR variable is set in the configuration file Previously, the ReaR default configuration file /usr/share/rear/conf/default.conf contained the following instructions: The instructions mentioned above did not work correctly because the TMPDIR variable had the same value in the rescue environment, which was not correct if the directory specified in the TMPDIR variable did not exist in the rescue image. As a consequence, when the rescue image was booted, setting and exporting TMPDIR in the /etc/rear/local.conf file led to the following error : or the following error and cancel later, when running rear recover : With this update, ReaR clears the TMPDIR variable in the rescue environment. ReaR also detects when the variable has been set in /etc/rear/local.conf , and prints a warning if the variable is set. The comment in /usr/share/rear/conf/default.conf has been changed to instruct to set and export TMPDIR in the environment before executing rear instead of setting it in /etc/rear/local.conf . 
If the command export TMPDIR=... is used in /etc/rear/local.conf , ReaR now prints the following warning: As a result, the recovery is successful in the described configuration. Setting TMPDIR in a configuration file such as /etc/rear/local.conf is now deprecated and the functionality will be removed in a future release. It is recommended to remove such settings from /etc/rear/local.conf , and to set and export TMPDIR in the environment before calling ReaR instead. Jira:RHEL-24847 8.6. Networking wwan_hwsim is now in the kernel-modules-internal package The wwan_hwsim kernel module provides a framework for simulating and testing various networking scenarios that use wireless wide area network (WWAN) devices. Previously, wwan_hwsim was a part of the kernel-modules-extra package. However, with this release, it is moved to the kernel-modules-internal package, which contains other similarly-oriented utilities. Note that the WWAN feature for PCI modem is still a Technology Preview. Jira:RHEL-24618 [1] The xdp-loader features command now works as expected The xdp-loader utility was compiled against the version of libbpf . As a consequence, xdp-loader features failed with an error: The utility is now compiled against the correct libbpf version. As a result, the command now works as expected. Jira:RHEL-3382 Mellanox ConnectX-5 adapter works in the DMFS mode Previously, while using the Ethernet switch device driver model ( switchdev ) mode, the mlx5 driver failed if configured in the device managed flow steering ( DMFS ) mode on the ConnectX-5 adapter. Consequently, the following error message appeared: As a result, when you update the firmware version of the ConnectX-5 adapter to 16.35.3006 or later, the error message will not appear. Jira:RHEL-9897 [1] 8.7. Kernel crash was rebased to version 8.0.4 The crash utility was upgraded to version 8.0.4, which provides multiple bug fixes. Notable repairs include: Fixed the segmentation fault when the non-panicking CPUs failed to stop during the kernel panic. The critical error incorrectly did not cause the kernel panic when the panic_on_oops kernel parameter was disabled. The crash utility did not properly resolve the hashed freelist pointers for the kernels compiled with the CONFIG_SLAB_FREELIST_HARDENED=y configuration option. A change in the kernel module memory layout terminology. The change replaced module_layout with module_memory to better indicate memory-related aspects of the crash utility. Without this change, crash cannot start a session with an error message such as this: Jira:RHEL-9009 tuna launches GUI when needed Previously, if you ran the tuna utility without any subcommand, it would launch the GUI. This behavior was desirable if you had a display. In the opposite case, tuna on a machine without a display would not exit gracefully. With this update, tuna detects whether you have a display, and the GUI is launched or not launched accordingly. Jira:RHEL-8859 [1] Intel TPM chips are now detected correctly Previously, a side effect in a bug fix to AMD Trusted Platform Module (TPM) chips also affected Intel TPM chips. As a consequence, RHEL failed to detect certain Intel TPM chips. With this update, the AMD TPM bug fix has been revised. As a result, RHEL now detects the Intel TPM chips correctly. Jira:RHEL-18985 [1] RHEL previously failed to recognize NVMe disks when VMD was enabled When you reset or reattached a driver, the Volume Management Device (VMD) domain previously did not soft-reset. 
Consequently, the hardware could not properly detect and enumerate its devices. With this update, the operating system with VMD enabled now correctly recognizes NVMe disks, especially when resetting a server or working with a virtual machine. Bugzilla:2128610 [1] 8.8. File systems and storage multipathd now successfully removes devices that have outstanding queued I/O Previously, the multipathd command did not disable the queue_if_no_path parameter before removing a device. This was possible only if there was an outstanding queued I/O to the multipath device itself, and not to the partition devices. Consequently, multipathd would hang, and could no longer maintain the multipath devices. With this update, multipathd now disables queuing before executing a remove command such as multipath -F , multipath -f <device> , multipathd remove maps , or multipathd remove map <device> . As a result, multipathd now successfully removes devices that have outstanding queued I/O. Jira:RHEL-4998 [1] The no_read_workqueue , no_write_workqueue , and try_verify_in_tasklet options of the dm-crypt and dm-verity devices are temporarily disabled Previously, the dm-crypt devices created by using either the no_read_workqueue or no_write_workqueue option and dm-verity devices created by using the try_verify_in_tasklet option caused memory corruption. Consequently, random kernel memory was corrupted, which caused various system problems. With this update, these options are temporarily disabled. Note that this fix can cause dm-verity and dm-crypt to perform slower on some workloads. Jira:RHEL-23572 [1] Multipathd now checks if a device is incorrectly queuing I/O Previously, a multipath device restarted queuing I/O, even though it was configured to fail, under the following conditions: The multipath device was configured with the queue_if_no_path parameter set to several retries. A path device was removed from the multipath device that had no working paths and was no longer queuing I/O. With this update, the issue has been fixed. As a result, multipath devices no longer restart queuing I/O if the queuing is disabled and a path is removed while there are no usable paths. Jira:RHEL-17234 [1] Removing duplicate entry from nvmf_log_connect_error Previously, due to a duplicate commit merge error, a log message was repeated in the nvmf_log_connect_error kernel function. Consequently, when the kernel was unable to connect to a fabric-attached Non-volatile Memory Express (NVMe) device, the Connect command failed message appeared twice. With this update, the duplicate log message is now removed from the kernel, resulting in only a single log message available for each error. Jira:RHEL-21545 [1] The kernel no longer crashes when namespaces are added and removed Previously, when NVMe namespaces were rapidly added and removed, a namespace disappeared between successive commands used to probe the namespace. In a specific case, a storage array did not return an invalid namespace error but instead returned a buffer filled with zero. Consequently, the kernel crashed due to a divide-by-zero error. With this update, the kernel now validates the data in responses to the Identify Namespace command issued to the storage. As a result, the kernel no longer crashes. Jira:RHEL-14751 [1] The newly allocated sections of the data device are now properly aligned Previously, when a Stratis pool was expanded, it was possible to allocate the new regions of the pool.
But the newly allocated regions were not correctly aligned with the previously allocated regions. Consequently, it could cause a performance degradation along with a nonzero entry in the Stratis thin pool's alignment_offset file in sysfs . With this update, when the pool expands, the newly allocated region of the data device is properly aligned with the previously allocated region. As a result, there is no degradation in performance and no nonzero entry in the Stratis thin pool's alignment_offset file in sysfs . Jira:RHEL-16736 System boots correctly when adding a NVMe-FC device as a mount point in /etc/fstab Previously, due to a known issue in the nvme-cli nvmf-autoconnect systemd services, systems failed to boot while adding the Non-volatile Memory Express over Fibre Channel (NVMe-FC) devices as a mount point in the /etc/fstab file. Consequently, the system entered into an emergency mode. With this update, a system boots without any issue when mounting an NVMe-FC device. Jira:RHEL-8171 [1] LUNs are now visible during the operating system installation Previously, the system was not using the authentication information from firmware sources, specifically in cases involving iSCSI hardware offload with CHAP (Challenge-Handshake Authentication Protocol) authentication stored in the iSCSI iBFT (Boot Firmware Table). As a consequence, the iSCSI login failed during installation. With the fix in the udisks2-2.9.4-9.el9 firmware authentication, this issue is now resolved and LUNs are visible during the installation and initial boot. Bugzilla:2213769 [1] 8.9. High availability and clusters Configuring the tls and keep_active_partition_tie_breaker quorum device options without specifying --force Previously, when configuring a quorum device, a user could not configure the tls and keep_active_partition_tie_breaker options for a quorum device model net without specifying the --force option. With this update, configuring these options no longer requires you to specify --force . Jira:RHEL-7746 Issues with moving and banning clone and bundle resources now corrected This bug fix addresses two limitations of moving bundled and clone resources: When a user tried to move a bundled resource out of its bundle or ban it from running in its bundle, pcs created a constraint but the constraint had no effect. This caused the move to fail with an error message. With this fix, pcs disallows moving and banning bundled resources from their bundles and prints an error message noting that bundled resources cannot be moved out of their bundles. When a user tried to move a bundle or clone resource, pcs exited with an error message noting that bundle or clone resources cannot be moved. This fix relaxes validation of move commands. It is now possible to move clone and bundle resources. When moving clone resources, you must specify a destination node if more than one instance of a clone is running. Only one-replica bundles can be moved. Jira:RHEL-7744 Output of pcs status command no longer shows warning for expired constraints Previously, when moving a cluster resource created a temporary location constraint, the pcs status command displayed a warning even after the constraint expired. With this fix, the pcs status command filters out expired constraints and they no longer generate a warning message in the command output. 
Jira:RHEL-7669 Disabling the auto_tie_breaker quorum option is no longer allowed when SBD fencing requires it Previously, pcs allowed a user to disable the auto_tie_breaker quorum option even when a cluster configuration required this option for SBD fencing to work correctly. With this fix, pcs generates an error message when a user attempts to disable auto_tie_breaker on a system where SBD fencing requires that the auto_tie_breaker option be enabled. Jira:RHEL-7730 8.10. Dynamic programming languages, web and database servers httpd works correctly if a DAV repository location is configured by using a regular expression match Previously, if a Distributed Authoring and Versioning (DAV) repository was configured in the Apache HTTP Server by using a regular expression match (such as LocationMatch ), the mod_dav httpd module was unable to determine the root of the repository from the path name. As a consequence, httpd did not handle requests from third-party providers (for example, Subversion's mod_dav_svn module). With this update, you can specify the repository root path by using the new DavBasePath directive in the httpd.conf file. For example: As a result, httpd handles requests correctly if a DAV repository location is configured by using a regular expression match. Jira:RHEL-6600 8.11. Compilers and development tools ldconfig no longer crashes after an interrupted system upgrade Previously, the ldconfig utility terminated unexpectedly with a segmentation fault when processing incomplete shared objects left in the /usr/lib64 directory after an interrupted system upgrade. With this update, ldconfig ignores temporary files written during system upgrades. As a result, ldconfig no longer crashes after an interrupted system upgrade. Jira:RHEL-14383 glibc now uses the number of configured processors for malloc arena tuning Previously, glibc used the per-thread CPU affinity mask for tuning the maximum arena count for malloc . As a consequence, restricting the thread affinity mask to a small subset of CPUs in the system could lead to performance degradation. glibc has been changed to use the configured number of CPUs for determining the maximum arena count. As a result, applications use a larger number of arenas, even when running with a restricted per-thread CPU affinity mask, and the performance degradation no longer occurs. Jira:RHEL-17157 [1] Improved glibc compatibility with applications using dlclose on shared objects involved in a dependency cycle Previously, when unloading a shared object in a dependency cycle using the dlclose function in glibc , that object's ELF destructor might not have been called before all other objects were unloaded. As a consequence of this late ELF destructor execution, applications experienced crashes and other errors due to the initial shared object's dependencies already being deinitialized. With this update, glibc has been fixed to first call the ELF destructor of the immediate object being unloaded before any other ELF destructors are executed. As a result, compatibility with applications using dlclose on shared objects involved in a dependency cycle is improved and crashes no longer occur. Jira:RHEL-2491 [1] make no longer tries to run directories Previously, make did not check if an executable it was trying to run was actually an executable. Consequently, if the path included a directory with the same name as the executable, make tried to run the directory instead. With this update, make now does additional checks when searching for an executable.
As a result, make no longer tries to run directories. Jira:RHEL-22829 Improved glibc wide-character write performance Previously, the wide stdio stream implementation in glibc did not treat the default buffer size as large enough for wide-character write operations and used a 16-byte fallback buffer instead, negatively impacting performance. With this update, buffer management is fixed and the entire write buffer is used. As a result, glibc wide-character write performance is improved. Jira:RHEL-19862 [1] The glibc getaddrinfo function now correctly reads nscd cache information Previously, a bug in the glibc getaddrinfo function would cause it to occasionally return empty elements in the address information list. With this update, the getaddrinfo function has been fixed to read and translate nscd cache data correctly and, as a result, returns correct address information. Jira:RHEL-16643 Improved glibc compatibility with applications using dlclose on shared objects involved in a dependency cycle Previously, when unloading a shared object in a dependency cycle using the dlclose function in glibc , that object's ELF destructor might not have been called before all other objects were unloaded. As a consequence of this late ELF destructor execution, applications experienced crashes and other errors due to the initial shared object's dependencies already being deinitialized. With this update, glibc has been fixed to first call the ELF destructor of the immediate object being unloaded before any other ELF destructors are executed. As a result, compatibility with applications using dlclose on shared objects involved in a dependency cycle is improved and crashes no longer occur. Jira:RHEL-12362 nscd no longer fails to start due to inconsistent cache expiry information Previously, the glibc Name Service Switch Caching Daemon ( nscd ) could fail to start due to inconsistent cache expiry information in the persistent cache file. With this update, nscd now marks cache entries with inconsistent timing information for deletion and skips them. As a result, nscd no longer fails to start due to inconsistent cache expiry information. Jira:RHEL-3397 Consistently fast glibc thread-local storage performance Previously, the glibc dynamic linker did not adjust certain thread-local storage (TLS) metadata after shared objects with TLS were loaded by using the dlopen() function, which consequently caused slow TLS access. With this update, the dynamic linker now updates TLS metadata for TLS changes caused by dlopen() calls. As a result, TLS access is consistently fast. Jira:RHEL-2123 8.12. Identity Management Allocated memory now released when an operation is completed Previously, memory allocated by the KCM for each operation was not being released until the connection was closed. As a result, for client applications that opened a connection and ran many operations on the same connection, it led to a noticeable memory increase because the allocated memory was not released until the connection closed. With this update, the memory allocated for an operation is now released as soon as the operation is completed.
Jira:SSSD-7015 IdM clients correctly retrieve information for trusted AD users when their names contain mixed case characters Previously, if you attempted a user lookup or authentication of a user, and that trusted Active Directory (AD) user contained mixed case characters in their names and they were configured with overrides in IdM, an error was returned preventing users from accessing IdM resources. With this update, a case-sensitive comparison is replaced with a case-insensitive comparison that ignores the case of a character. As a result, IdM clients can now lookup users of an AD trusted domain, even if their usernames contain mixed case characters and they are configured with overrides in IdM. Jira:SSSD-6096 SSSD correctly returns an error if no grace logins remain while changing a password Previously, if a user's LDAP password had expired, SSSD tried to change the password even after the initial bind of the user failed as there were no more grace logins left. However, the error returned to the user did not indicate the reason for the failure. With this update, the request to change the password is aborted if the bind fails and SSSD returns an error message indicating there are no more grace logins and the password must be changed by another means. Jira:SSSD-6184 Removing systems from a domain using the realm leave command Previously, if multiple names were set for the ad_server option in the sssd.conf file, running the realm leave command resulted in parsing errors and the system was not removed from the domain. With this update, the ad_server option is properly evaluated and the correct domain controller name is used and the system is correctly removed from the domain. Jira:SSSD-6081 KCM logs to the correct sssd.kcm.log file Previously, logrotate correctly rotated the Kerberos Credential Manager (KCM) log files but KCM incorrectly wrote the logs to the old log file, sssd_kcm.log.1 . If KCM was restarted, it used the correct log file. With this update, after logrotate is invoked, log files are rotated and KCM correctly logs to the sssd_kcm.log file. Jira:SSSD-6652 The realm leave --remove command no longer asks for credentials Previously, the realm utility did not correctly check if a valid Kerberos ticket was available when running the realm leave operation. As a result, users were asked to enter a password even though a valid Kerberos ticket was available. With this update, realm now correctly verifies if there is a valid Kerberos ticket and no longer requests the user to enter a password when running the realm leave --remove command. Jira:SSSD-6425 KDC now runs extra checks when general constrained delegation requests is processed Previously, the forwardable flag in Kerberos tickets issued by KDCs running on Red Hat Enterprise Linux 8 was vulnerable, allowing unauthorized modification without detection. This vulnerability could lead to impersonation attacks, even from or by users without specific privileges. With this update, KDC runs extra checks when it processes general constrained delegation requests, ensuring detection and rejection of unauthorized flag modifications, thus removing the vulnerability. Jira:RHEL-9984 [1] Check on the forwardable flag is disabled in cases where SIDs are generated for the domain Previously, the update providing a fix for CVE-2020-17049 relied on the Kerberos PAC to run certain checks on the ticket forwardable flag when the KDC processes a general constrained delegation request. 
However, the PAC is generated only on domains where the SIDs generation task was executed in the past. While this task is automatically performed for all IdM domains created on Red Hat Enterprise Linux (RHEL) 8.5 and newer, domains initialized on older versions require manual execution of this task. In case the SIDs generation task was never executed manually for IdM domains initialized on RHEL 8.4 and older, the PAC will be missing on Kerberos tickets, resulting in rejection of all general constrained delegation requests. This includes IdM's HTTP API, which relies on general constrained delegation. With this update, the check of the forwardable flag is disabled in cases where SIDs were not generated for the domain. Services relying on general constrained delegation, including IdM HTTP API, continue working. However, Red Hat recommends running the SIDs generation task on the domain as soon as possible, especially if the domain has custom general constrained delegation rules configured. Until this is done, the domain remains vulnerable to CVE-2020-17049. Jira:RHEL-22313 IdM Vault encryption and decryption no longer fail in FIPS mode Previously, IdM Vault used OpenSSL RSA-PKCS1v15 as the default padding wrapping algorithm. However, none of the FIPS certified modules in RHEL supported PKCS#1 v1.5 as a FIPS approved algorithm, causing IdM Vault to fail in FIPS mode. With this update, IdM Vault supports the RSA-OAEP padding wrapping algorithm as a fallback. As a result, IdM Vault encryption and decryption now work correctly in FIPS mode. Jira:RHEL-12143 [1] Directory Server no longer fails after abandoning the paged result search Previously, a race condition caused heap corruption and a Directory Server failure when abandoning a paged result search. With this update, the race condition was fixed, and the Directory Server failure no longer occurs. Jira:RHEL-16830 [1] If the nsslapd-numlisteners attribute value is more than 2 , Directory Server no longer fails Previously, if the nsslapd-numlisteners attribute value was higher than 2 , Directory Server sometimes closed the listening file descriptor instead of the accepted file descriptor. As a consequence, a segmentation fault occurred in Directory Server. With this update, Directory Server closes the correct descriptor and continues listening on ports correctly. Jira:RHEL-17175 The autobind operation no longer impacts operations performed on other connections Previously, when the autobind operation was in progress, Directory Server stopped listening to new operations on any connection. With this update, the autobind operation does not impact operations performed on other connections. Jira:RHEL-5111 The IdM client installer no longer specifies the TLS CA configuration in the ldap.conf file Previously, the IdM client installer specified the TLS CA configuration in the ldap.conf file. With this update, OpenLDAP uses the default truststore and the IdM client installer does not set up the TLS CA configuration in the ldap.conf file. Bugzilla:2094673 Integration between shadow-utils and sss_cache for local user caching is disabled In RHEL 9, the SSSD implicit files provider domain, which retrieves user information from local files such as /etc/shadow and group information from /etc/group , was disabled by default. However, the integration in shadow-utils was not fully disabled, which resulted in calls to sss_cache when adding or deleting local users. The unnecessary cache updates caused performance issues for some users.
With this update, the shadow-utils integration with sss_cache is fully disabled, and the performance issues caused by unnecessary cache updates no longer occur. Jira:RHEL-56786 8.13. The web console VNC console now works at most resolutions Previously, when using the Virtual Network Computing (VNC) console under certain display resolutions, a mouse offset problem was present or only a part of the interface was visible. Consequently, using the VNC console was not possible. With this update, the problem has been fixed and the VNC console works correctly at most resolutions, with the exception of ultra high resolutions, such as 3840 x 2160 px. Note that a small offset between the recorded and displayed positions of the cursor might still be present. However, this does not significantly impact the usability of the VNC console. Bugzilla:2030836 8.14. Red Hat Enterprise Linux system roles Cluster start no longer times out when the SBD delay-start value is high Previously, when a user configured SBD fencing in a cluster by using the ha_cluster system role and set the delay-start option to a value close to or higher than 90 seconds, the cluster start timed out. This is because the default systemd start timeout is 90 seconds, which the system reached before the SBD start delay value. With this fix, the ha_cluster system role overrides the sbd.service start timeout in systemd so that it is higher than the value of delay-start . This allows the system to start successfully even with high values of the delay-start option. Jira:RHEL-18026 [1] network role validates routing rules with 0.0.0.0/0 or ::/0 Previously, when the from: or to: settings were set to the 0.0.0.0/0 or ::/0 addresses in the routing rule, the network RHEL system role failed to configure the routing rule and rejected the settings as invalid. With this update, the network role allows 0.0.0.0/0 and ::/0 for from: and to: in routing rule validation. As a result, the role successfully configures the routing rules without raising the validation errors. Jira:RHEL-1683 Running read-scale clusters and installing mssql-server-ha no longer requires certain variables Previously, if you used the mssql RHEL system role to configure a read-scale cluster without certain variables ( mssql_ha_virtual_ip , mssql_ha_login , mssql_ha_login_password , and mssql_ha_cluster_run_role ), the role failed with an error message "Variable not defined". However, these variables are not necessary to run a read-scale cluster. The role also tried to install the mssql-server-ha , which is not required for a read-scale cluster. With this fix, the requirement for these variables was removed. As a result, running a read-scale cluster proceeds successfully without the error message. Jira:RHEL-3540 The Kdump system role works correctly when the kexec_crash_size file is busy The /sys/kernel/kexec_crash_size file provides the size of the memory region allocated for crash kernel memory. Previously, the Kdump system role failed when the /sys/kernel/kexec_crash_size file was busy. With this update, the system role retries reading the file when it is available. As a result, the system role no longer fails when the file is busy. Jira:RHEL-3353 selinux role no longer uses the item loop variable Previously, the selinux RHEL system role used the item loop variable. This might have resulted in the following warning message when you called the selinux role from another role: With this release, the selinux role uses __selinux_item as a loop variable. 
As a result, the warning that the item variable is already in use is no longer displayed even if you call the selinux role from another role. Jira:RHEL-19040 The ha_cluster system role now correctly configures a firewall on a qnetd host Previously, when a user configured a qnetd host and set the ha_cluster_manage_firewall variable to true by using the ha_cluster system role, the role did not enable high-availability services in the firewall. With this fix, the ha_cluster system role now correctly configures a firewall on a qnetd host. Jira:RHEL-17875 The postgresql RHEL system role now installs the correct version of PostgreSQL Previously, if you tried to run the postgresql RHEL system role with the postgresql_version: "15" variable defined on a RHEL managed node, PostgreSQL version 13 was installed instead of version 15. This bug has been fixed, and the postgresql role installs the version set in the variable. Jira:RHEL-5274 keylime_server role correctly reports registrar service status Previously, when the keylime_server role playbook provided incorrect information, the role incorrectly reported the start as successful. With this update, the role now correctly reports a failure when incorrect information is provided, and the timeout when waiting for opened ports has been reduced from approximately 300 seconds to approximately 30 seconds. Jira:RHEL-15909 The podman RHEL system role now sets and cancels linger properly for rootless containers Previously, the podman RHEL system role did not set and cancel linger properly for rootless containers. Consequently, deploying secrets or containers for rootless users produced errors in some cases, and failed to cancel linger when removing resources in some cases. With this update, the podman RHEL system role ensures that linger is enabled for rootless users before doing any secret or container resource management, and ensures that linger is canceled for rootless users when there are no more secrets or container resources to be managed. As a result, the role correctly manages lingering for rootless users. Jira:RHEL-22228 nbde_server role now works with socket overrides Previously, the nbde_server RHEL system role assumed that the only file in the tangd socket override directory was the override.conf file for a custom port. Consequently, the role deleted the directory if there was no port customization without checking other files, and the system re-created the directory in subsequent runs. With this release, the role has been fixed to prevent changing attributes of the port override file and deleting the directory if there are other files. As a result, the role correctly works if tangd socket override files are managed also outside of the role. Jira:RHEL-25508 A volume quadlet service name no longer fails Previously, starting the volume service name produced an error similar to the following one: "Could not find the requested service NAME.volume: host" With this update, the volume quadlet service name is changed to basename-volume.service . As a result, the volume service starts with no errors. For more information, see Volume unit man page. Jira:RHEL-21401 Ansible now preserves JSON strings for use in secrets Previously, Ansible converted JSON strings to the corresponding JSON object if the value was used in a loop and strings similar to data: "{{ value }}" As a consequence, you cannot pass JSON strings as secrets and have the value preserved. This update casts the data value to a string when passing to the podman_secret module. 
As a result, JSON strings are preserved as-is for use in secrets. Jira:RHEL-22309 The rhc system role no longer fails on the registered systems when rhc_auth contains activation keys Previously, a failure occurred when you executed playbook files on the registered systems with the activation key specified in the rhc_auth parameter. This issue has been resolved. It is now possible to execute playbook files on the already registered systems, even when activation keys are provided in the rhc_auth parameter. Bugzilla:2186218 8.15. Virtualization RT VMs with a FIFO scheduler now boots correctly Previously, after setting a real-time (RT) virtual machine (VM) to use the fifo setting for the vCPU scheduler, the VM became unresponsive when you attempted to boot it. Instead, the VM displayed the Guest has not initialized the display (yet) error. With this update, the error has been fixed, and setting fifo for the vCPU scheduler works as expected in the described circumstances. Jira:RHEL-2815 [1] A dump failure no longer blocks IBM Z VMs with Secure Execution from running Previously, when a dump of an IBM Z virtual machine (VM) with Secure Execution failed, the VM remained in a paused state and was blocked from running. For example, dumping a VM by using the virsh dump command fails if there is not enough space on the disk. The underlying code has been fixed and Secure Execution VMs resume operation successfully after a dump failure. Jira:RHEL-16695 [1] The installation program shows the expected system disk to install RHEL on VM Previously, when installing RHEL on a VM using virtio-scsi devices, it was possible that these devices did not appear in the installation program because of a device-mapper-multipath bug. Consequently, during installation, if some devices had a serial set and some did not, the multipath command was claiming all the devices that had a serial. Due to this, the installation program was unable to find the expected system disk to install RHEL in the VM. With this update, multipath correctly sets the devices with no serial as having no World Wide Identifier (WWID) and ignores them. On installation, multipath only claims devices that multipathd uses to bind a multipath device, and the installation program shows the expected system disk to install RHEL in the VM. Bugzilla:1926147 [1] Using a large number of queues no longer causes VMs to fail Previously, virtual machines (VMs) might have failed when the virtual Trusted Platform Module (vTPM) device was enabled and the multi-queue virtio-net feature was configured to use more than 250 queues. This problem was caused by a limitation in the vTPM device. With this update, the problem has been fixed and VMs with more than 250 queues and with vTPM enabled now work reliably. Jira:RHEL-13335 [1] Windows guests boot more reliably after a v2v conversion on hosts with AMD EPYC CPUs After using the virt-v2v utility to convert a virtual machine (VM) that uses Windows 11 or a Windows Server 2022 as the guest OS, the VM previously failed to boot. This occurred on hosts that use AMD EPYC series CPUs. Now, the underlying code has been fixed and VMs boot as expected in the described circumstances. Bugzilla:2168082 [1] nodedev-dumpxml lists attributes correctly for certain mediated devices Before this update, the nodedev-dumpxml utility did not list attributes correctly for mediated devices that were created using the nodedev-create command. This has been fixed, and nodedev-dumpxml now displays the attributes of the affected mediated devices properly. 
Bugzilla:2143158 virtiofs devices could not be attached after restarting virtqemud or libvirtd Previously, restarting the virtqemud or libvirtd services prevented virtiofs storage devices from being attached to virtual machines (VMs) on your host. This bug has been fixed, and you can now attach virtiofs devices in the described scenario as expected. Bugzilla:2078693 Hot plugging a Watchdog card to a virtual machine no longer fails Previously, if no PCI slots were available, adding a Watchdog card to a running virtual machine (VM) failed with the following error: With this update, the problem has been fixed and adding a Watchdog card to a running VM now works as expected. Bugzilla:2173584 blob resources now work correctly for virtio-gpu on IBM Z Previously, the virtio-gpu device was incompatible with blob memory resources on IBM Z systems. As a consequence, if you configured a virtual machine (VM) with virtio-gpu on an IBM Z host to use blob resources, the VM did not have any graphical output. With this update, virtio devices have an optional blob attribute. Setting blob to on enables the use of blob resources in the device. This prevents the described problem in virtio-gpu devices, and can also accelerate the display path by reducing or eliminating copying of pixel data between the guest and host. Note that blob resource support requires QEMU version 6.1 or later. Jira:RHEL-7135 Reinstalling virtio-win drivers no longer causes DNS configuration to reset on the guest In virtual machines (VMs) that use a Windows guest operating system, reinstalling or upgrading virtio-win drivers for the network interface controller (NIC) previously caused DNS settings in the guest to reset. As a consequence, your Windows guest in some cases lost network connectivity. With this update, the described problem has been fixed. As a result, if you reinstall or upgrade from the latest version of virtio-win , the problem no longer occurs. Note, however, that upgrading from a prior version of virtio-win will not fix the problem, and DNS resets might still occur in your Windows guests. Jira:RHEL-1860 [1] The repair function of virtio-win-guest-tool for the virtio-win drivers now works correctly Previously, when using the Repair button of virtio-win-guest-tool for a virtio-win driver, such as the Virtio Balloon Driver, the button had no effect. As a consequence, the driver could not be reinstalled after being removed on the guest. This problem has been fixed and the virtio-win-guest-tool repair function now works correctly in the described circumstances. Jira:RHEL-1517 [1]
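As a small illustration of the dnf needs-restarting fix described in section 8.4 above, you can check the reboot recommendation directly from a shell. The exit-status handling and messages below are illustrative assumptions about typical usage, not output defined by dnf:
# needs-restarting exits non-zero when a reboot is recommended, for example after installing microcode_ctl
dnf needs-restarting --reboothint && echo "no reboot needed" || echo "reboot recommended"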
|
[
"/run/anaconda/initrd_errors.txt: No such file or directory",
"subscription-manager config --rhsm.progress_messages=1",
"ERROR: Cannot autodetect what is used as boot loader, see default.conf about 'BOOTLOADER'.",
"wrong fs type, bad option, bad superblock on and missing codepage or helper program, or other error.",
"logbuf size must be greater than or equal to log stripe size",
"alignment check failed: sunit/swidth vs. agsize",
"Insufficient free space: 230210 extents needed, but only 230026 available",
"Volume group \"vg\" has insufficient free space (16219 extents): 16226 required.",
"To have a specific working area directory prefix for Relax-and-Recover specify in /etc/rear/local.conf something like # export TMPDIR=\"/prefix/for/rear/working/directory\" # where /prefix/for/rear/working/directory must already exist. This is useful for example when there is not sufficient free space in /tmp or USDTMPDIR for the ISO image or even the backup archive.",
"mktemp: failed to create file via template '/prefix/for/rear/working/directory/tmp.XXXXXXXXXX': No such file or directory cp: missing destination file operand after '/etc/rear/mappings/mac' Try 'cp --help' for more information. No network interface mapping is specified in /etc/rear/mappings/mac",
"ERROR: Could not create build area",
"Warning: Setting TMPDIR in a configuration file is deprecated. To specify a working area directory prefix, export TMPDIR before executing 'rear'",
"Cannot display features, because xdp-loader was compiled against an old version of libbpf without support for querying features.",
"mlx5_core 0000:5e:00.0: mlx5_cmd_out_err:780:(pid 980895): DELETE_FLOW_TABLE_ENTRY(0x938) op_mod(0x0) failed, status bad resource(0x5), syndrome (0xabe70a), err(-22)",
"crash: invalid structure member offset: module_core_size FILE: kernel.c LINE: 3787 FUNCTION: module_init()",
"<LocationMatch \"^/repos/\"> DAV svn DavBasePath /repos SVNParentPath /var/www/svn </LocationMatch>",
"[WARNING]: TASK: fedora.linux_system_roles.selinux : Restore SELinux labels on filesystem tree: The loop variable 'item' is already in use. You should set the `loop_var` value in the `loop_control` option for the task to something else to avoid variable collisions and unexpected behavior.",
"Failed to configure watchdog ERROR Error attempting device hotplug: internal error: No more available PCI slots"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/9.4_release_notes/bug-fixes
|
Chapter 2. Enable the RHEL for SAP Applications or RHEL for SAP Solutions subscriptions
|
Chapter 2. Enable the RHEL for SAP Applications or RHEL for SAP Solutions subscriptions For running SAP NetWeaver application servers, the "RHEL for SAP Applications" subscription can be used if the RHEL systems don't need to be locked to a specific RHEL 8 minor release. For running SAP HANA or SAP NetWeaver or S/4HANA application servers that should be tied to the same RHEL 8 minor release as SAP HANA, one of the following subscriptions is required to access Update Services for SAP Solutions (E4S) : for the x86_64 platform: Red Hat Enterprise Linux for SAP Solutions for the PowerPC Little Endian ( ppc64le ) platform: Red Hat Enterprise Linux for SAP Solutions for Power, LE 2.1. Detach existing subscriptions (already registered systems only) Perform the following steps if the SAP system was previously registered using another RHEL subscription. Find the serial number of the subscription that the system is currently subscribed to: # subscription-manager list --consumed | \ awk '/Subscription Name:/|| /Serial:/|| /Pool ID:/|| /Service Type:/{print} /Service Level:/{printf ("%s\n\n", $0)}' Remove the subscription from the system, using the following command. Replace the string <SERIAL> with the serial number shown in the output of the previous command. # subscription-manager remove --serial=<SERIAL> 2.2. Attach the RHEL for SAP Applications or RHEL for SAP Solutions subscription To attach the RHEL for SAP Applications or RHEL for SAP Solutions subscriptions, perform the following steps: Find the pool ID of the subscription: # subscription-manager list --available --matches='RHEL for SAP*' | \ awk '/Subscription Name:/|| /Pool ID:/|| /Service Type:/{print} /Service Level:/{printf ("%s\n\n", $0)}' Attach the subscription to the system, using the following command. Replace the string <POOL_ID> with the actual pool ID (or one of the pool IDs) shown in the output of the command. # subscription-manager attach --pool=<POOL_ID>
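As an optional illustration, the detach and attach steps above can be combined into a short shell sketch. This is a rough sketch under stated assumptions: it expects exactly one consumed subscription and exactly one matching pool, and the variable names are illustrative, not part of the official procedure:
# Run as root; review the detected values before removing or attaching anything
OLD_SERIAL=$(subscription-manager list --consumed | awk '/Serial:/ {print $2; exit}')
[ -n "$OLD_SERIAL" ] && subscription-manager remove --serial="$OLD_SERIAL"
NEW_POOL=$(subscription-manager list --available --matches='RHEL for SAP*' | awk '/Pool ID:/ {print $3; exit}')
[ -n "$NEW_POOL" ] && subscription-manager attach --pool="$NEW_POOL"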
|
[
"subscription-manager list --consumed | awk '/Subscription Name:/|| /Serial:/|| /Pool ID:/|| /Service Type:/{print} /Service Level:/{printf (\"%s\\n\\n\", USD0)}'",
"subscription-manager remove --serial=<SERIAL>",
"subscription-manager list --available --matches='RHEL for SAP*' | awk '/Subscription Name:/|| /Pool ID:/|| /Service Type:/{print} /Service Level:/{printf (\"%s\\n\\n\", USD0)}'",
"subscription-manager attach --pool=<POOL_ID>"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/8/html/rhel_for_sap_subscriptions_and_repositories/asmb_enable_rhel-for-sap-subscriptions-and-repositories-8
|
E.2.10. /proc/interrupts
|
E.2.10. /proc/interrupts This file records the number of interrupts per IRQ on the x86 architecture. A standard /proc/interrupts looks similar to the following: For a multi-processor machine, this file may look slightly different: The first column refers to the IRQ number. Each CPU in the system has its own column and its own number of interrupts per IRQ. The next column reports the type of interrupt, and the last column contains the name of the device that is located at that IRQ. Each of the types of interrupts seen in this file, which are architecture-specific, means something different. For x86 machines, the following values are common: XT-PIC - These are the old AT computer interrupts. IO-APIC-edge - The voltage signal on this interrupt transitions from low to high, creating an edge , where the interrupt occurs and is only signaled once. This kind of interrupt, as well as the IO-APIC-level interrupt, are only seen on systems with processors from the 586 family and higher. IO-APIC-level - Generates interrupts when its voltage signal is high until the signal is low again.
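As a hedged example of reading this file, the commands below show two common ways to inspect it; the awk sketch sums the per-CPU counts on each IRQ line and omits the device-name column for simplicity:
# Watch the counters change in real time
watch -n 1 cat /proc/interrupts
# Sum the per-CPU interrupt counts for each IRQ (minimal awk sketch)
awk '/^ *[0-9]+:/ {t=0; for (i=2; i<=NF && $i ~ /^[0-9]+$/; i++) t+=$i; print $1, t}' /proc/interrupts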
|
[
"CPU0 0: 80448940 XT-PIC timer 1: 174412 XT-PIC keyboard 2: 0 XT-PIC cascade 8: 1 XT-PIC rtc 10: 410964 XT-PIC eth0 12: 60330 XT-PIC PS/2 Mouse 14: 1314121 XT-PIC ide0 15: 5195422 XT-PIC ide1 NMI: 0 ERR: 0",
"CPU0 CPU1 0: 1366814704 0 XT-PIC timer 1: 128 340 IO-APIC-edge keyboard 2: 0 0 XT-PIC cascade 8: 0 1 IO-APIC-edge rtc 12: 5323 5793 IO-APIC-edge PS/2 Mouse 13: 1 0 XT-PIC fpu 16: 11184294 15940594 IO-APIC-level Intel EtherExpress Pro 10/100 Ethernet 20: 8450043 11120093 IO-APIC-level megaraid 30: 10432 10722 IO-APIC-level aic7xxx 31: 23 22 IO-APIC-level aic7xxx NMI: 0 ERR: 0"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-proc-interrupts
|
Chapter 97. KafkaTopicSpec schema reference
|
Chapter 97. KafkaTopicSpec schema reference Used in: KafkaTopic
Property descriptions:
partitions (integer) - The number of partitions the topic should have. This cannot be decreased after topic creation. It can be increased after topic creation, but it is important to understand the consequences that has, especially for topics with semantic partitioning. When absent this will default to the broker configuration for num.partitions .
replicas (integer) - The number of replicas the topic should have. When absent this will default to the broker configuration for default.replication.factor .
config (map) - The topic configuration.
topicName (string) - The name of the topic. When absent this will default to the metadata.name of the topic. It is recommended to not set this unless the topic name is not a valid OpenShift resource name.
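For orientation, the following is a minimal sketch of a KafkaTopic resource that sets these properties. The namespace, topic name, cluster label value, and config keys are assumptions for illustration, not values mandated by this reference:
oc apply -n kafka -f - <<'EOF'
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 3
  replicas: 3
  config:
    retention.ms: 604800000
    segment.bytes: 1073741824
EOF
Because topicName is omitted here, the topic name defaults to metadata.name, which is the recommended usage when the name is a valid OpenShift resource name.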
| null |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-KafkaTopicSpec-reference
|
4. Branding and Chroming the Graphical User Interface
|
4. Branding and Chroming the Graphical User Interface The following sections describe changing the appearance of the graphical user interface (GUI) of the Anaconda installer. There are several elements in the graphical user interface of Anaconda which can be changed to customize the look of the installer. To customize the installer's appearance, you must create a custom product.img file containing a custom installclass (to change the product name displayed in the installer) and your own branding material. The product.img file is not an installation image; it is used to supplement the full installation ISO image by loading your customizations and using them to overwrite files included on the boot image by default. See Section 2, "Working with ISO Images" for information about extracting boot images provided by Red Hat, creating a product.img file and adding this file to the ISO images. 4.1. Customizing Graphical Elements Graphical elements of the installer which can be changed are stored in the /usr/share/anaconda/pixmaps/ directory in the installer runtime file system. This directory contains the following files: Additionally, the /usr/share/anaconda/ directory contains a CSS stylesheet named anaconda-gtk.css , which determines the file names and parameters of the main UI elements - the logo and the backgrounds for the side bar and top bar. The file has the following contents: /* vendor-specific colors/images */ @define-color redhat #021519; /* logo and sidebar classes for RHEL */ .logo-sidebar { background-image: url('/usr/share/anaconda/pixmaps/sidebar-bg.png'); background-color: @redhat; background-repeat: no-repeat; } .logo { background-image: url('/usr/share/anaconda/pixmaps/sidebar-logo.png'); background-position: 50% 20px; background-repeat: no-repeat; background-color: transparent; } AnacondaSpokeWindow #nav-box { background-color: @redhat; background-image: url('/usr/share/anaconda/pixmaps/topbar-bg.png'); background-repeat: no-repeat; color: white; } AnacondaSpokeWindow #layout-indicator { color: black; } The most imporant part of the CSS file is the way it handles scaling based on resolution. The PNG image backgrounds do not scale, they are always displayed in their true dimensions. Instead, the backgrounds have a transparent background, and the style sheet defines a matching background color on the @define-color line. Therefore, the background images "fade" into the background color , which means that the backgrounds work on all resolutions without a need for image scaling. You could also change the background-repeat parameters to tile the background, or, if you are confident that every system you will be installing on will have the same display resolution, you can use background images which fill the entire bar. The rnotes/ directory contains a set of banners. During the installation, banner graphics cycle along the bottom of the screen, approximately once per minute. Any of the files listed above can be customized. Once you do so, follow the instructions in Section 2.2, "Creating a product.img File" to create your own product.img with custom graphics, and then Section 2.3, "Creating Custom Boot Images" to create a new bootable ISO image with your changes included. 4.2. Customizing the Product Name Apart from graphical elements described in the section, you can also customize the product name displayed during the installation. This product name is shown in the top right corner in all screens. To change the product name, you must create a custom installation class . 
Create a new file named custom.py with content similar to the example below: Example 1. Creating a Custom Installclass from pyanaconda.installclass import BaseInstallClass from pyanaconda.product import productName from pyanaconda import network from pyanaconda import nm class CustomBaseInstallClass(BaseInstallClass): name = "My Distribution" sortPriority = 30000 if not productName.startswith("My Distribution"): hidden = True defaultFS = "xfs" bootloaderTimeoutDefault = 5 bootloaderExtraArgs = [] ignoredPackages = ["ntfsprogs"] installUpdates = False _l10n_domain = "comps" efi_dir = "redhat" help_placeholder = "RHEL7Placeholder.html" help_placeholder_with_links = "RHEL7PlaceholderWithLinks.html" def configure(self, anaconda): BaseInstallClass.configure(self, anaconda) BaseInstallClass.setDefaultPartitioning(self, anaconda.storage) def setNetworkOnbootDefault(self, ksdata): if ksdata.method.method not in ("url", "nfs"): return if network.has_some_wired_autoconnect_device(): return dev = network.default_route_device() if not dev: return if nm.nm_device_type_is_wifi(dev): return network.update_onboot_value(dev, "yes", ksdata) def __init__(self): BaseInstallClass.__init__(self) The file above determines the installer defaults (such as the default file system, etc.), but the part relevant to this procedure is the following block: class CustomBaseInstallClass (BaseInstallClass): name = " My Distribution " sortPriority = 30000 if not productName.startswith("My Distribution"): hidden = True Change My Distribution to the name which you want to display in the installer. Also make sure that the sortPriority attribute is set to more than 20000; this ensures that the new installation class is loaded first. Warning Do not change any other attributes or class names in the file - otherwise you may cause the installer to behave unpredictably. After you create the custom installclass, follow the steps in Section 2.2, "Creating a product.img File" to create a new product.img file containing your customizations, and then Section 2.3, "Creating Custom Boot Images" to create a new bootable ISO file with your changes included.
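As a rough illustration of the packaging step that Section 2.2, "Creating a product.img File" describes in full, the customizations are laid out in a directory tree that mirrors the installer runtime file system and then packed into a gzip-compressed cpio archive. The commands below are a sketch based on the conventions mentioned in this chapter, not a substitute for the referenced procedure; in particular, the installclasses/ location for custom.py is an assumption and may differ in your Anaconda version.

# Lay out the overlay tree (paths are illustrative)
mkdir -p product/usr/share/anaconda/pixmaps
mkdir -p product/usr/share/anaconda/installclasses   # assumed location for custom.py
cp sidebar-logo.png topbar-bg.png product/usr/share/anaconda/pixmaps/
cp custom.py product/usr/share/anaconda/installclasses/

# Pack the tree into product.img (gzip-compressed cpio archive)
cd product
find . | cpio -c -o | gzip -9cv > ../product.img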
|
[
"pixmaps ββ anaconda-selected-icon.svg ββ dialog-warning-symbolic.svg ββ right-arrow-icon.png ββ rnotes β ββ en β ββ RHEL_7_InstallerBanner_Andreas_750x120_11649367_1213jw.png β ββ RHEL_7_InstallerBanner_Blog_750x120_11649367_1213jw.png β ββ RHEL_7_InstallerBanner_CPAccess_CommandLine_750x120_11649367_1213jw.png β ββ RHEL_7_InstallerBanner_CPAccess_Desktop_750x120_11649367_1213jw.png β ββ RHEL_7_InstallerBanner_CPAccess_Help_750x120_11649367_1213jw.png β ββ RHEL_7_InstallerBanner_Middleware_750x120_11649367_1213jw.png β ββ RHEL_7_InstallerBanner_OPSEN_750x120_11649367_1213cd.png β ββ RHEL_7_InstallerBanner_RHDev_Program_750x120_11649367_1213cd.png β ββ RHEL_7_InstallerBanner_RHELStandardize_750x120_11649367_1213jw.png β ββ RHEL_7_InstallerBanner_Satellite_750x120_11649367_1213cd.png ββ sidebar-bg.png ββ sidebar-logo.png ββ topbar-bg.png",
"/* vendor-specific colors/images */ @define-color redhat #021519; /* logo and sidebar classes for RHEL */ .logo-sidebar { background-image: url('/usr/share/anaconda/pixmaps/sidebar-bg.png'); background-color: @redhat; background-repeat: no-repeat; } .logo { background-image: url('/usr/share/anaconda/pixmaps/sidebar-logo.png'); background-position: 50% 20px; background-repeat: no-repeat; background-color: transparent; } AnacondaSpokeWindow #nav-box { background-color: @redhat; background-image: url('/usr/share/anaconda/pixmaps/topbar-bg.png'); background-repeat: no-repeat; color: white; } AnacondaSpokeWindow #layout-indicator { color: black; }",
"from pyanaconda.installclass import BaseInstallClass from pyanaconda.product import productName from pyanaconda import network from pyanaconda import nm class CustomBaseInstallClass(BaseInstallClass): name = \"My Distribution\" sortPriority = 30000 if not productName.startswith(\"My Distribution\"): hidden = True defaultFS = \"xfs\" bootloaderTimeoutDefault = 5 bootloaderExtraArgs = [] ignoredPackages = [\"ntfsprogs\"] installUpdates = False _l10n_domain = \"comps\" efi_dir = \"redhat\" help_placeholder = \"RHEL7Placeholder.html\" help_placeholder_with_links = \"RHEL7PlaceholderWithLinks.html\" def configure(self, anaconda): BaseInstallClass.configure(self, anaconda) BaseInstallClass.setDefaultPartitioning(self, anaconda.storage) def setNetworkOnbootDefault(self, ksdata): if ksdata.method.method not in (\"url\", \"nfs\"): return if network.has_some_wired_autoconnect_device(): return dev = network.default_route_device() if not dev: return if nm.nm_device_type_is_wifi(dev): return network.update_onboot_value(dev, \"yes\", ksdata) def __init__(self): BaseInstallClass.__init__(self)",
"class CustomBaseInstallClass (BaseInstallClass): name = \" My Distribution \" sortPriority = 30000 if not productName.startswith(\"My Distribution\"): hidden = True"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/anaconda_customization_guide/sect-anaconda-visuals
|
Chapter 3. Managing Apicurio Registry content using the web console
|
Chapter 3. Managing Apicurio Registry content using the web console You can manage schema and API artifacts stored in Apicurio Registry by using the Apicurio Registry web console. This includes uploading and browsing Apicurio Registry content, configuring optional rules for content, and generating client SDK code: Section 3.1, "Viewing artifacts using the Apicurio Registry web console" Section 3.2, "Adding artifacts using the Apicurio Registry web console" Section 3.3, "Configuring content rules using the Apicurio Registry web console" Section 3.4, "Generating client SDKs for OpenAPI artifacts using the Apicurio Registry web console" Section 3.5, "Changing an artifact owner using the Apicurio Registry web console" Section 3.6, "Configuring Apicurio Registry instance settings using the web console" Section 3.7, "Exporting and importing data using the Apicurio Registry web console" 3.1. Viewing artifacts using the Apicurio Registry web console You can use the Apicurio Registry web console to browse the schema and API artifacts stored in Apicurio Registry. This section shows a simple example of viewing Apicurio Registry artifacts, groups, versions, and artifact rules. Prerequisites Apicurio Registry is installed and running in your environment. You are logged in to the Apicurio Registry web console: http://MY_REGISTRY_URL/ui Artifacts have been added to Apicurio Registry using the web console, command line, Maven plug-in, or a Java client application. Procedure On the Artifacts tab, browse the list of artifacts stored in Apicurio Registry, or enter a search string to find an artifact. You can select from the list to search by specific criteria such as name, group, labels, or global ID. Figure 3.1. Artifacts in Apicurio Registry web console Click an artifact to view the following details: Overview : Displays artifact version metadata such as artifact name, artifact ID, global ID, content ID, labels, properties, and so on. Also displays rules for validity and compatibility that you can configure for artifact content. Documentation (OpenAPI and AsyncAPI only): Displays automatically-generated REST API documentation. Content : Displays a read-only view of the full artifact content. For JSON content, you can click JSON or YAML to display your preferred format. References : Displays a read-only view of all artifacts referenced by this artifact. You can also click View artifacts that reference this artifact . If additional versions of this artifact have been added, you can select them from the Version list in the page header. To save the artifact contents to a local file, for example, my-openapi.json or my-protobuf-schema.proto , click Download at the end of the page. Additional resources Section 3.2, "Adding artifacts using the Apicurio Registry web console" Section 3.3, "Configuring content rules using the Apicurio Registry web console" Chapter 10, Apicurio Registry content rule reference 3.2. Adding artifacts using the Apicurio Registry web console You can use the Apicurio Registry web console to upload schema and API artifacts to Apicurio Registry. This section shows simple examples of uploading Apicurio Registry artifacts and adding new artifact versions. Prerequisites Apicurio Registry is installed and running in your environment.
You are logged in to the Apicurio Registry web console: http://MY_REGISTRY_URL/ui Procedure On the Artifacts tab, click Upload artifact , and specify the following details: Group & ID : Use the default empty settings to automatically generate an artifact ID and add the artifact to the default artifact group. Alternatively, you can enter an optional artifact group name or ID. Type : Use the default Auto-Detect setting to automatically detect the artifact type, or select the artifact type from the list, for example, Avro Schema or OpenAPI . You must manually select the Kafka Connect Schema artifact type, which cannot be automatically detected. Artifact : Specify the artifact location using either of the following options: From file : Click Browse , and select a file, or drag and drop a file. For example, my-openapi.json or my-schema.proto . Alternatively, you can enter the file contents in the text box. From URL : Enter a valid and accessible URL, and click Fetch . For example: https://petstore3.swagger.io/api/v3/openapi.json . Click Upload and view the artifact details: Overview : Displays artifact version metadata such as artifact name, artifact ID, global ID, content ID, labels, properties, and so on. Also displays rules for validity and compatibility that you can configure for artifact content. Documentation (OpenAPI and AsyncAPI only): Displays automatically-generated REST API documentation. Content : Displays a read-only view of the full artifact content. For JSON content, you can click JSON or YAML to display your preferred format. References : Displays a read-only view of all artifacts referenced by this artifact. You can also click View artifacts that reference this artifact . You can add artifact references using the Apicurio Registry Maven plug-in or REST API only. The following example shows an example OpenAPI artifact: Figure 3.2. Artifact details in Apicurio Registry web console On the Overview tab, click the Edit pencil icon to edit artifact metadata such as name or description. You can also enter an optional comma-separated list of labels for searching, or add key-value pairs of arbitrary properties associated with the artifact. To add properties, perform the following steps: Click Add property . Enter the key name and the value. Repeat the first two steps to add multiple properties. Click Save . To save the artifact contents to a local file, for example, my-protobuf-schema.proto or my-openapi.json , click Download at the end of the page. To add a new artifact version, click Upload new version in the page header, and drag and drop or click Browse to upload the file, for example, my-avro-schema.json or my-openapi.json . To delete an artifact, click Delete in the page header. Warning Deleting an artifact deletes the artifact and all of its versions, and cannot be undone. Additional resources Section 3.1, "Viewing artifacts using the Apicurio Registry web console" Section 3.3, "Configuring content rules using the Apicurio Registry web console" Chapter 10, Apicurio Registry content rule reference 3.3. Configuring content rules using the Apicurio Registry web console You can use the Apicurio Registry web console to configure optional rules to prevent invalid or incompatible content from being added to Apicurio Registry. All configured artifact-specific rules or global rules must pass before a new artifact version can be uploaded to Apicurio Registry. Configured artifact-specific rules override any configured global rules. 
This section shows a simple example of configuring global and artifact-specific rules. Prerequisites Apicurio Registry is installed and running in your environment. You are logged in to the Apicurio Registry web console: http://MY_REGISTRY_URL/ui Artifacts have been added to Apicurio Registry using the web console, command line, Maven plug-in, or a Java client application. When role-based authorization is enabled, you have administrator access for global rules and artifact-specific rules, or developer access for artifact-specific rules only. Procedure On the Artifacts tab, browse the list of artifacts in Apicurio Registry, or enter a search string to find an artifact. You can select from the list to search by specific criteria such as artifact name, group, labels, or global ID. Click an artifact to view its version details and content rules. In Artifact-specific rules , click Enable to configure a validity, compatibility, or integrity rule for artifact content, and select the appropriate rule configuration from the list. For example, for Validity rule , select Full . Figure 3.3. Artifact content rules in Apicurio Registry web console To access global rules, click the Global rules tab. Click Enable to configure global validity, compatibility, or integrity rules for all artifact content, and select the appropriate rule configuration from the list. To disable an artifact rule or global rule, click the trash icon next to the rule. Additional resources Section 3.2, "Adding artifacts using the Apicurio Registry web console" Chapter 10, Apicurio Registry content rule reference 3.4. Generating client SDKs for OpenAPI artifacts using the Apicurio Registry web console You can use the Apicurio Registry web console to configure, generate, and download client software development kits (SDKs) for OpenAPI artifacts. You can then use the generated client SDKs to build your client applications for specific platforms based on the OpenAPI. Apicurio Registry generates client SDKs for the following programming languages: C# Go Java PHP Python Ruby Swift TypeScript Note Client SDK generation for OpenAPI artifacts runs in your browser only, and cannot be automated by using an API. You must regenerate the client SDK each time a new artifact version is added in Apicurio Registry. Prerequisites Apicurio Registry is installed and running in your environment. You are logged in to the Apicurio Registry web console: http://MY_REGISTRY_URL/ui An OpenAPI artifact has been added to Apicurio Registry using the web console, command line, Maven plug-in, or a Java client application. Procedure On the Artifacts tab, browse the list of artifacts stored in Apicurio Registry, or enter a search string to find a specific OpenAPI artifact. You can select from the list to search by criteria such as name, group, labels, or global ID. Click the OpenAPI artifact in the list to view its details. In the Version metadata section, click Generate client SDK , and configure the following settings in the dialog: Language : Select the programming language in which to generate the client SDK, for example, Java . Generated client class name : Enter the class name for the client SDK, for example, MyJavaClientSDK. Generated client package name : Enter the package name for the client SDK, for example, io.my.example.sdk . Click Show advanced settings to configure optional comma-separated lists of path patterns to include or exclude: Include path patterns : Enter specific paths to include when generating the client SDK, for example, **/.*, **/my-path/* .
If this field is empty, all paths are included. Exclude path patterns : Enter specific paths to exclude when generating the client SDK, for example, **/my-other-path/* . If this field is empty, no paths are excluded. Figure 3.4. Generate a Java client SDK in Apicurio Registry web console When you have configured the settings in the dialog, click Generate and download . Enter a file name for the client SDK in the dialog, for example, my-client-java.zip , and click Save to download. Additional resources Apicurio Registry uses Kiota from Microsoft to generate the client SDKs. For more information, see the Kiota project in GitHub . For more details and examples of using the generated SDKs to build client applications, see the Kiota documentation . 3.5. Changing an artifact owner using the Apicurio Registry web console As an administrator or as an owner of a schema or API artifact, you can use the Apicurio Registry web console to change the artifact owner to another user account. For example, this feature is useful if the Artifact owner-only authorization option is set for the Apicurio Registry instance on the Settings tab so that only owners or administrators can modify artifacts. You might need to change the owner if the owner user leaves the organization or the owner account is deleted. Note The Artifact owner-only authorization setting and the artifact Owner field are displayed only if authentication was enabled when the Apicurio Registry instance was deployed. For more details, see Installing and deploying Red Hat build of Apicurio Registry on OpenShift . Prerequisites The Apicurio Registry instance is deployed and the artifact is created. You are logged in to the Apicurio Registry web console as the artifact's current owner or as an administrator: http://MY_REGISTRY_URL/ui Procedure On the Artifacts tab, browse the list of artifacts stored in Apicurio Registry, or enter a search string to find the artifact. You can select from the list to search by criteria such as name, group, labels, or global ID. Click the artifact that you want to reassign. In the Version metadata section, click the pencil icon next to the Owner field. In the New owner field, select or enter an account name. Click Change owner . Additional resources Installing and deploying Red Hat build of Apicurio Registry on OpenShift 3.6. Configuring Apicurio Registry instance settings using the web console As an administrator, you can use the Apicurio Registry web console to configure dynamic settings for Apicurio Registry instances at runtime. You can manage configuration options for features such as authentication, authorization, and API compatibility. Note Authentication and authorization settings are only displayed in the web console if authentication was already enabled when the Apicurio Registry instance was deployed. For more details, see Installing and deploying Red Hat build of Apicurio Registry on OpenShift . Prerequisites The Apicurio Registry instance is already deployed. You are logged in to the Apicurio Registry web console with administrator access: http://MY_REGISTRY_URL/ui Procedure In the Apicurio Registry web console, click the Settings tab. Select the settings that you want to configure for this Apicurio Registry instance: Table 3.1. Authentication settings Setting Description HTTP basic authentication Displayed only when authentication is already enabled. When selected, Apicurio Registry users can authenticate using HTTP basic authentication, in addition to OAuth. Not selected by default. Table 3.2.
Authorization settings Setting Description Anonymous read access Displayed only when authentication is already enabled. When selected, Apicurio Registry grants read-only access to requests from anonymous users without any credentials. This setting is useful if you want to use this instance to publish schemas or APIs externally. Not selected by default. Artifact owner-only authorization Displayed only when authentication is already enabled. When selected, only the user who created an artifact can modify that artifact. Not selected by default. Artifact group owner-only authorization Displayed only when authentication is already enabled and Artifact owner-only authorization is selected. When selected, only the user who created an artifact group has write access to that artifact group, for example, to add or remove artifacts in that group. Not selected by default. Authenticated read access Displayed only when authentication is already enabled. When selected, Apicurio Registry grants at least read-only access to requests from any authenticated user regardless of their user role. Not selected by default. Table 3.3. Compatibility settings Setting Description Legacy ID mode (compatibility API) When selected, the Confluent Schema Registry compatibility API uses globalId instead of contentId as an artifact identifier. This setting is useful when migrating from legacy Apicurio Registry instances based on the v1 Core Registry API. Not selected by default. Table 3.4. Web console settings Setting Description Download link expiry The number of seconds that a generated link to a .zip download file is active before expiring for security reasons, for example, when exporting artifact data from the instance. Defaults to 30 seconds. UI read-only mode When selected, the Apicurio Registry web console is set to read-only, preventing create, read, update, or delete operations. Changes made using the Core Registry API are not affected by this setting. Not selected by default. Table 3.5. Additional properties Setting Description Delete artifact version When selected, users are permitted to delete artifact versions in this instance by using the Core Registry API. Not selected by default. Additional resources Installing and deploying Red Hat build of Apicurio Registry on OpenShift 3.7. Exporting and importing data using the Apicurio Registry web console As an administrator, you can use the Apicurio Registry web console to export data from one Apicurio Registry instance, and import this data into another Apicurio Registry instance. You can use this feature to easily migrate data between different instances. The following example shows how to export and import existing data in a .zip file from one Apicurio Registry instance to another instance. All of the artifact data contained in the Apicurio Registry instance is exported in the .zip file. Note You can import only Apicurio Registry data that has been exported from another Apicurio Registry instance. Prerequisites Apicurio Registry instances have been created as follows: The source instance that you are exporting from contains at least one schema or API artifact The target instance that you are importing into is empty to preserve unique IDs You are logged into the Apicurio Registry web console with administrator access: http://MY_REGISTRY_URL/ui Procedure In the web console for the source Apicurio Registry instance, view the Artifacts tab.
Click the options icon (three vertical dots) next to Upload artifact , and select Download all artifacts (.zip file) to export the data for this Apicurio Registry instance to a .zip download file. In the web console for the target Apicurio Registry instance, view the Artifacts tab. Click the options icon next to Upload artifact , and select Upload multiple artifacts . Drag and drop or browse to the .zip download file that you exported earlier. Click Upload and wait for the data to be imported.
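The same migration can also be scripted outside the web console. The following is a minimal sketch using curl against the Core Registry API v2 admin endpoints; the /admin/export and /admin/import paths are assumptions based on Apicurio Registry 2.x and may differ in your deployment, and the basic-auth credentials assume that HTTP basic authentication is enabled in the instance settings described above. Check the REST API reference for your version before relying on this.

# Export all data from the source instance to a .zip file (endpoint path assumed)
curl -u myuser:mypassword -o all-artifacts.zip http://SOURCE_REGISTRY_URL/apis/registry/v2/admin/export

# Import the .zip file into the (empty) target instance (endpoint path assumed)
curl -u myuser:mypassword -X POST -H "Content-Type: application/zip" --data-binary @all-artifacts.zip http://TARGET_REGISTRY_URL/apis/registry/v2/admin/import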
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_apicurio_registry/2.6/html/apicurio_registry_user_guide/managing-registry-artifacts-ui_registry
|
Chapter 1. Guide Overview
|
Chapter 1. Guide Overview The purpose of this guide is to walk through the steps that need to be completed prior to booting up the Red Hat Single Sign-On server for the first time. If you just want to test drive Red Hat Single Sign-On, it pretty much runs out of the box with its own embedded and local-only database. For actual deployments that are going to be run in production you'll need to decide how you want to manage server configuration at runtime (standalone or domain mode), configure a shared database for Red Hat Single Sign-On storage, set up encryption and HTTPS, and finally set up Red Hat Single Sign-On to run in a cluster. This guide walks through each and every aspect of any pre-boot decisions and setup you must do prior to deploying the server. One thing to particularly note is that Red Hat Single Sign-On is derived from the JBoss EAP Application Server. Many aspects of configuring Red Hat Single Sign-On revolve around JBoss EAP configuration elements. Often this guide will direct you to documentation outside of the manual if you want to dive into more detail. 1.1. Recommended Additional External Documentation Red Hat Single Sign-On is built on top of the JBoss EAP application server and its sub-projects like Infinispan (for caching) and Hibernate (for persistence). This guide only covers basics for infrastructure-level configuration. It is highly recommended that you peruse the documentation for JBoss EAP and its sub projects. Here is the link to the documentation: JBoss EAP Configuration Guide
| null |
https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.4/html/server_installation_and_configuration_guide/guide_overview
|
Chapter 3. Data Grid Server endpoints
|
Chapter 3. Data Grid Server endpoints Data Grid Server endpoints provide client access to the cache manager over Hot Rod and REST protocols. 3.1. Data Grid Server endpoints 3.1.1. Hot Rod Hot Rod is a binary TCP client-server protocol designed to provide faster data access and improved performance in comparison to text-based protocols. Data Grid provides Hot Rod client libraries in Java, C++, C#, Node.js and other programming languages. Topology state transfer Data Grid uses topology caches to provide clients with cluster views. Topology caches contain entries that map internal JGroups transport addresses to exposed Hot Rod endpoints. When clients send requests, Data Grid servers compare the topology ID in request headers with the topology ID from the cache. Data Grid servers send new topology views if clients have older topology IDs. Cluster topology views allow Hot Rod clients to immediately detect when nodes join and leave, which enables dynamic load balancing and failover. In distributed cache modes, the consistent hashing algorithm also makes it possible to route Hot Rod client requests directly to primary owners. 3.1.2. REST Data Grid exposes a RESTful interface that allows HTTP clients to access data, monitor and maintain clusters, and perform administrative operations. You can use standard HTTP load balancers to provide clients with load balancing and failover capabilities. However, HTTP load balancers maintain static cluster views and require manual updates when cluster topology changes occur. 3.1.3. Memcached Data Grid provides an implementation of the Memcached text protocol for remote client access. Important The current implementation of the Memcached protocol is deprecated and planned for removal in a future release. However, it will be replaced with a new Memcached connector that offers the following improvements: support for both the TEXT and BINARY protocols authentication against the server security realms encryption (TLS) performance improvements protocol auto-detection which allows the connector to be enabled on the default server port by default The Data Grid Memcached endpoint supports clustering with replicated and distributed cache modes. There are some Memcached client implementations, such as the Cache::Memcached Perl client, that can offer load balancing and failover detection capabilities with static lists of Data Grid server addresses that require manual updates when cluster topology changes occur. 3.1.4. Comparison of endpoint protocols Hot Rod HTTP / REST Topology-aware Y N Hash-aware Y N Encryption Y Y Authentication Y Y Conditional ops Y Y Bulk ops Y N Transactions Y N Listeners Y N Query Y Y Execution Y N Cross-site failover Y N 3.1.5. Hot Rod client compatibility with Data Grid Server Data Grid Server allows you to connect Hot Rod clients with different versions. For instance, during a migration or upgrade of your Data Grid cluster, the Hot Rod client version might be lower than the Data Grid Server version. Tip Data Grid recommends using the latest Hot Rod client version to benefit from the most recent capabilities and security enhancements. Data Grid 8 and later Hot Rod protocol version 3.x automatically negotiates the highest version possible for clients with Data Grid Server. Data Grid 7.3 and earlier Clients that use a Hot Rod protocol version that is higher than the Data Grid Server version must set the infinispan.client.hotrod.protocol_version property.
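For those older clients, the version can be pinned in the client configuration. The following is a minimal sketch of a properties-based Hot Rod client configuration; only the infinispan.client.hotrod.protocol_version property name comes from this section, while the hotrod-client.properties file name, the server address, and the version value are illustrative assumptions:

# hotrod-client.properties (file name and server address are illustrative)
infinispan.client.hotrod.server_list = 127.0.0.1:11222
# Pin the protocol version when the client library is newer than the server
infinispan.client.hotrod.protocol_version = 2.8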
Additional resources Hot Rod protocol reference Connecting Hot Rod clients to servers with different versions (Red Hat Knowledgebase) 3.2. Configuring Data Grid Server endpoints Control how Hot Rod and REST endpoints bind to sockets and use security realm configuration. You can also configure multiple endpoints and disable administrative capabilities. Note Each unique endpoint configuration must include both a Hot Rod connector and a REST connector. Data Grid Server implicitly includes the hotrod-connector and rest-connector elements, or fields, in an endpoint configuration. You should only add these elements to a custom configuration to specify authentication mechanisms for endpoints. Prerequisites Add socket bindings and security realms to your Data Grid Server configuration. Procedure Open your Data Grid Server configuration for editing. Wrap multiple endpoint configurations with the endpoints element. Specify the socket binding that the endpoint uses with the socket-binding attribute. Specify the security realm that the endpoint uses with the security-realm attribute. Disable administrator access with the admin="false" attribute, if required. With this configuration, users cannot access Data Grid Console or the Command Line Interface (CLI) from the endpoint. Save the changes to your configuration. Multiple endpoint configuration The following Data Grid Server configuration creates endpoints on separate socket bindings with dedicated security realms: XML <server xmlns="urn:infinispan:server:14.0"> <endpoints> <endpoint socket-binding="public" security-realm="application-realm" admin="false"> </endpoint> <endpoint socket-binding="private" security-realm="management-realm"> </endpoint> </endpoints> </server> JSON { "server": { "endpoints": [{ "socket-binding": "private", "security-realm": "private-realm" }, { "socket-binding": "public", "security-realm": "default", "admin": "false" }] } } YAML server: endpoints: - socketBinding: public securityRealm: application-realm admin: false - socketBinding: private securityRealm: management-realm Additional resources Network interfaces and socket bindings 3.3. Endpoint connectors Connectors configure Hot Rod and REST endpoints to use socket bindings and security realms. Default endpoint configuration <endpoints socket-binding="default" security-realm="default"/> Configuration element or attribute Description endpoints Wraps endpoint connector configuration. endpoint Declares a Data Grid Server endpoint that configures Hot Rod and REST connectors to use a socket binding and security realm. hotrod-connector Includes the Hot Rod endpoint in the endpoint configuration. rest-connector Includes the REST endpoint in the endpoint configuration. memcached-connector Configures the Memcached endpoint and is disabled by default. Additional resources Data Grid schema reference 3.4. Endpoint IP address filtering rules Data Grid Server endpoints can use filtering rules that control whether clients can connect based on their IP addresses. Data Grid Server applies filtering rules in order until it finds a match for the client IP address. A CIDR block is a compact representation of an IP address and its associated network mask. CIDR notation specifies an IP address, a slash ('/') character, and a decimal number. The decimal number is the count of leading 1 bits in the network mask. The number can also be thought of as the width, in bits, of the network prefix. The IP address in CIDR notation is always represented according to the standards for IPv4 or IPv6.
The address can denote a specific interface address, including a host identifier, such as 10.0.0.1/8 , or it can be the beginning address of an entire network interface range using a host identifier of 0, as in 10.0.0.0/8 or 10/8 . For example: 192.168.100.14/24 represents the IPv4 address 192.168.100.14 and its associated network prefix 192.168.100.0 , or equivalently, its subnet mask 255.255.255.0 , which has 24 leading 1-bits. the IPv4 block 192.168.100.0/22 represents the 1024 IPv4 addresses from 192.168.100.0 to 192.168.103.255 . the IPv6 block 2001:db8::/48 represents the block of IPv6 addresses from 2001:db8:0:0:0:0:0:0 to 2001:db8:0:ffff:ffff:ffff:ffff:ffff . ::1/128 represents the IPv6 loopback address. Its prefix length is 128, which is the number of bits in the address. IP address filter configuration In the following configuration, Data Grid Server accepts connections only from addresses in the 192.168.0.0/16 and 10.0.0.0/8 CIDR blocks. Data Grid Server rejects all other connections. XML <server xmlns="urn:infinispan:server:14.0"> <endpoints> <endpoint socket-binding="default" security-realm="default"> <ip-filter> <accept from="192.168.0.0/16"/> <accept from="10.0.0.0/8"/> <reject from="/0"/> </ip-filter> </endpoint> </endpoints> </server> JSON { "server": { "endpoints": { "endpoint": { "socket-binding": "default", "security-realm": "default", "ip-filter": { "accept-from": ["192.168.0.0/16", "10.0.0.0/8"], "reject-from": "/0" } } } } } YAML server: endpoints: endpoint: socketBinding: "default" securityRealm: "default" ipFilter: acceptFrom: ["192.168.0.0/16","10.0.0.0/8"] rejectFrom: "/0" 3.5. Inspecting and modifying rules for filtering IP addresses Configure IP address filtering rules on Data Grid Server endpoints to accept or reject connections based on client address. Prerequisites Install Data Grid Command Line Interface (CLI). Procedure Create a CLI connection to Data Grid Server. Inspect and modify the IP filter rules with the server connector ipfilter command as required. List all IP filtering rules active on a connector across the cluster: Set IP filtering rules across the cluster. Note This command replaces any existing rules. Remove all IP filtering rules on a connector across the cluster.
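As a rough sketch of the first step, the CLI shipped with Data Grid Server can be started from the server installation directory and pointed at a running server; the address below is illustrative and the exact connect syntax may vary between versions:

# Start the CLI from the Data Grid Server installation directory (syntax may vary by version)
bin/cli.sh
# At the CLI prompt, connect to the server and enter credentials when prompted,
# then run the ipfilter commands shown in the command listing that follows
[disconnected]> connect 127.0.0.1:11222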
|
[
"<server xmlns=\"urn:infinispan:server:14.0\"> <endpoints> <endpoint socket-binding=\"public\" security-realm=\"application-realm\" admin=\"false\"> </endpoint> <endpoint socket-binding=\"private\" security-realm=\"management-realm\"> </endpoint> </endpoints> </server>",
"{ \"server\": { \"endpoints\": [{ \"socket-binding\": \"private\", \"security-realm\": \"private-realm\" }, { \"socket-binding\": \"public\", \"security-realm\": \"default\", \"admin\": \"false\" }] } }",
"server: endpoints: - socketBinding: public securityRealm: application-realm admin: false - socketBinding: private securityRealm: management-realm",
"<endpoints socket-binding=\"default\" security-realm=\"default\"/>",
"<server xmlns=\"urn:infinispan:server:14.0\"> <endpoints> <endpoint socket-binding=\"default\" security-realm=\"default\"> <ip-filter> <accept from=\"192.168.0.0/16\"/> <accept from=\"10.0.0.0/8\"/> <reject from=\"/0\"/> </ip-filter> </endpoint> </endpoints> </server>",
"{ \"server\": { \"endpoints\": { \"endpoint\": { \"socket-binding\": \"default\", \"security-realm\": \"default\", \"ip-filter\": { \"accept-from\": [\"192.168.0.0/16\", \"10.0.0.0/8\"], \"reject-from\": \"/0\" } } } } }",
"server: endpoints: endpoint: socketBinding: \"default\" securityRealm: \"default\" ipFilter: acceptFrom: [\"192.168.0.0/16\",\"10.0.0.0/8\"] rejectFrom: \"/0\"",
"server connector ipfilter ls endpoint-default",
"server connector ipfilter set endpoint-default --rules=ACCEPT/192.168.0.0/16,REJECT/10.0.0.0/8`",
"server connector ipfilter clear endpoint-default"
] |
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/data_grid_server_guide/server-endpoints
|
5.9.8.2. After Red Hat Enterprise Linux Has Been Installed
|
5.9.8.2. After Red Hat Enterprise Linux Has Been Installed Creating a RAID array after Red Hat Enterprise Linux has been installed is a bit more complex. As with the addition of any type of disk storage, the necessary hardware must first be installed and properly configured. Partitioning is a bit different for RAID than it is for single disk drives. Instead of selecting a partition type of "Linux" (type 83) or "Linux swap" (type 82), all partitions that are to be part of a RAID array must be set to "Linux raid auto" (type fd). Next, it is necessary to actually create the RAID array. This is done with the mdadm program (refer to man mdadm for more information). The RAID array /dev/md0 is now ready to be formatted and mounted. The process at this point is no different than for formatting and mounting a single disk drive.
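As a brief illustration of that final step, the following commands format and mount the new array; the ext3 file system and the /mnt/raid mount point are illustrative choices, not requirements:

# Format the array and mount it (file system and mount point are illustrative)
mkfs.ext3 /dev/md0
mkdir -p /mnt/raid
mount /dev/md0 /mnt/raid
# Add a matching entry to /etc/fstab if the array should be mounted at boot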
|
[
"mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hd[bc]1 mdadm --detail --scan > /dev/mdadm.conf"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s3-storage-raid-credel-after
|
11.15. Namespaces for Extension Metadata
|
11.15. Namespaces for Extension Metadata When defining the extension metadata in the case of Custom Translators, the properties on tables/views/procedures/columns can define a namespace for the properties such that they will not collide with properties specific to JBoss Data Virtualization. The property should be prefixed with the alias of the namespace. Prefixes starting with teiid_ are reserved for use by JBoss Data Virtualization. See "option namespace" in Section A.7, "Productions" . Example 11.8. Example of Namespace Table 11.1. Built-in Namespace Prefixes Prefix URI Description teiid_rel http://www.teiid.org/ext/relational/2012 Relational extensions. Uses include function and native query metadata teiid_sf http://www.teiid.org/translator/salesforce/2012 Salesforce extensions.
|
[
"SET NAMESPACE 'http://custom.uri' AS foo CREATE VIEW MyView (...) OPTIONS (\"foo:mycustom-prop\" 'anyvalue')"
] |
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/namespaces_for_extension_metadata
|
Chapter 106. KafkaConnectStatus schema reference
|
Chapter 106. KafkaConnectStatus schema reference Used in: KafkaConnect Property Property type Description conditions Condition array List of status conditions. observedGeneration integer The generation of the CRD that was last reconciled by the operator. url string The URL of the REST API endpoint for managing and monitoring Kafka Connect connectors. connectorPlugins ConnectorPlugin array The list of connector plugins available in this Kafka Connect deployment. replicas integer The current number of pods being used to provide this resource. labelSelector string Label selector for pods providing this resource.
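To make the table more concrete, the following is a minimal sketch of how such a status might appear on a KafkaConnect resource; the resource name, namespace, timestamps, and all field values are illustrative assumptions rather than output from a real deployment:

status:
  conditions:
    - lastTransitionTime: "2024-01-01T12:00:00Z"
      status: "True"
      type: Ready
  observedGeneration: 3
  url: http://my-connect-connect-api.my-namespace.svc:8083
  connectorPlugins:
    - class: org.apache.kafka.connect.file.FileStreamSourceConnector
      type: source
      version: 3.9.0
  replicas: 3
  labelSelector: strimzi.io/cluster=my-connect,strimzi.io/name=my-connect-connect,strimzi.io/kind=KafkaConnect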
| null |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-KafkaConnectStatus-reference
|